Asgar Jamneshan, Or Shalom, and myself have just uploaded to the arXiv our preprints “A Host–Kra ${\bf F}_2^\omega$-system of order 5 that is not Abramov of order 5, and non-measurability of the inverse theorem for the $U^6({\bf F}_2^n)$ norm” and “The structure of totally disconnected Host–Kra–Ziegler factors, and the inverse theorem for the $U^k$ Gowers uniformity norms on finite abelian groups of bounded torsion“. These two papers are both concerned with advancing the inverse theory for the Gowers norms and Gowers–Host–Kra seminorms; the first paper provides a counterexample in this theory (in particular disproving a conjecture of Bergelson, Ziegler, and myself), and the second paper gives new positive results in the case when the underlying group is of bounded torsion, or the ergodic system is totally disconnected. I discuss the two papers more below the fold.
Tamar Ziegler and I have just uploaded to the arXiv our paper “Infinite partial sumsets in the primes“. This is a short paper inspired by a recent result of Kra, Moreira, Richter, and Robertson (discussed for instance in this Quanta article from last December) showing that for any set $A$ of natural numbers of positive upper density, there exists a sequence $b_1 < b_2 < b_3 < \dots$ of natural numbers and a shift $t$ such that $b_i + b_j + t \in A$ for all $i < j$ (this answers a question of Erdős). In view of the “transference principle“, it is then plausible to ask whether the same result holds if $A$ is replaced by the primes. We can show the following results:
Theorem 1
- (i) If the Hardy–Littlewood prime tuples conjecture (or the weaker conjecture of Dickson) is true, then there exists an increasing sequence $a_1 < a_2 < a_3 < \dots$ of primes such that $a_i + a_j + 1$ is prime for all $i < j$.
- (ii) Unconditionally, there exist increasing sequences $a_1 < a_2 < a_3 < \dots$ and $b_1 < b_2 < b_3 < \dots$ of natural numbers such that $a_i + b_j$ is prime for all $i < j$.
- (iii) These conclusions fail if “prime” is replaced by “positive (relative) density subset of the primes” (even if the density is equal to 1).
We remark that it was shown by Balog that there (unconditionally) exist arbitrarily long but finite sequences of primes $p_1 < p_2 < \dots < p_k$ such that $\frac{p_i + p_j}{2}$ is prime for all $i < j$. (This result can also be recovered from the later results of Ben Green, myself, and Tamar Ziegler.) Also, it had previously been shown by Granville that on the Hardy–Littlewood prime tuples conjecture, there existed increasing sequences $a_1 < a_2 < a_3 < \dots$ and $b_1 < b_2 < b_3 < \dots$ of natural numbers such that $a_i + b_j$ is prime for all $i, j$.
The conclusion of (i) is stronger than that of (ii) (which is of course consistent with the former being conditional and the latter unconditional). The conclusion (ii) also implies the well-known theorem of Maynard that for any given $k$, there exist infinitely many $k$-tuples of primes of bounded diameter, and indeed our proof of (ii) uses the same “Maynard sieve” that powers the proof of that theorem (though we use a formulation of that sieve closer to that in this blog post of mine). Indeed, the failure of (iii) basically arises from the failure of Maynard’s theorem for dense subsets of primes, simply by removing those clusters of primes that are unusually closely spaced.
Our proof of (i) was initially inspired by the topological dynamics methods used by Kra, Moreira, Richter, and Robertson, but we managed to condense it to a purely elementary argument (taking up only half a page) that makes no reference to topological dynamics and builds up the sequence recursively by repeated application of the prime tuples conjecture.
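To get a concrete feel for this sort of recursion, here is a minimal computational sketch (not from the paper): a brute-force backtracking search for a finite increasing sequence of primes, any two of which sum with $1$ to another prime. The target length and search limit below are arbitrary illustrative choices.

```python
from sympy import isprime, primerange

def find_chain(chain, length, limit):
    # Depth-first search: extend `chain` by a larger prime c such that
    # a + c + 1 is prime for every prime a already in the chain.
    if len(chain) == length:
        return chain
    start = (chain[-1] + 1) if chain else 2
    for c in primerange(start, limit):
        if all(isprime(a + c + 1) for a in chain):
            result = find_chain(chain + [c], length, limit)
            if result is not None:
                return result
    return None  # no chain of the requested length below the limit

# One chain this search can produce is [5, 11, 17, 41, 251]:
# 5+11+1=17, 5+17+1=23, 5+41+1=47, 5+251+1=257, 11+17+1=29,
# 11+41+1=53, 11+251+1=263, 17+41+1=59, 17+251+1=269, 41+251+1=293.
print(find_chain([], 5, 300))
```

Of course, such a search only produces finite chains; the content of Theorem 1(i) is that (conditionally on the prime tuples conjecture) the process can be continued indefinitely.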
The proof of (ii) takes up the majority of the paper. It is easiest to phrase the argument in terms of “prime-producing tuples” – tuples $(h_1,\dots,h_k)$ for which there are infinitely many $n$ with $n+h_1,\dots,n+h_k$ all prime. Maynard’s theorem is equivalent to the existence of arbitrarily long prime-producing tuples; our theorem is equivalent to the stronger assertion that there exists an infinite sequence $h_1 < h_2 < \dots$ such that every initial segment $(h_1,\dots,h_k)$ is prime-producing. The main new tool for achieving this is the following cute measure-theoretic lemma of Bergelson:
Lemma 2 (Bergelson intersectivity lemma) Let $E_1, E_2, E_3, \dots$ be subsets of a probability space $(X,\mu)$ of measure uniformly bounded away from zero, thus $\inf_n \mu(E_n) > 0$. Then there exists a subsequence $E_{n_1}, E_{n_2}, E_{n_3}, \dots$ such that $\mu(E_{n_1} \cap \dots \cap E_{n_k}) > 0$ for all $k$.
This lemma has a short proof, though not an entirely obvious one. Firstly, by deleting a null set from $X$, one can assume that all finite intersections $E_{n_1} \cap \dots \cap E_{n_k}$ are either of positive measure or empty. Secondly, a routine application of Fatou’s lemma shows that the maximal function $\limsup_{N \to \infty} \frac{1}{N} \sum_{n=1}^N 1_{E_n}$ has a positive integral, hence must be positive at some point $x_0$. Thus there is a subsequence $E_{n_1}, E_{n_2}, \dots$ whose finite intersections all contain $x_0$, and thus have positive measure as desired by the previous reduction.
It turns out that one cannot quite combine the standard Maynard sieve with the intersectivity lemma because the events that show up (which roughly correspond to the event that $n + h_i$ is prime for some random number $n$ (with a well-chosen probability distribution) and some shift $h_i$) have their probability going to zero, rather than being uniformly bounded from below. To get around this, we borrow an idea from a paper of Banks, Freiberg, and Maynard, and group the shifts into various clusters $h_{i_1}, \dots, h_{i_J}$, chosen in such a way that the probability that at least one of $n + h_{i_1}, \dots, n + h_{i_J}$ is prime is bounded uniformly from below. One then applies the Bergelson intersectivity lemma to those events and uses many applications of the pigeonhole principle to conclude.
Over the last few years, I have served on a committee of the National Academy of Sciences to produce some posters and other related media to showcase twenty-first century mathematics and its applications in the real world, suitable for display in classrooms or math departments. Our posters (together with some associated commentary, webinars on related topics, and even a whimsical “comic“) are now available for download here.
This post is an unofficial sequel to one of my first blog posts from 2007, which was entitled “Quantum mechanics and Tomb Raider“.
One of the oldest and most famous allegories is Plato’s allegory of the cave. This allegory centers around a group of people chained to a wall in a cave, who cannot see themselves or each other, but only the two-dimensional shadows of themselves cast on the wall in front of them by some light source they cannot directly see. Because of this, they identify reality with this two-dimensional representation, and have significant conceptual difficulties in trying to view themselves (or the world as a whole) as three-dimensional, until they are freed from the cave and able to venture into the sunlight.
There is a similar conceptual difficulty when trying to understand Einstein’s theory of special relativity (and more so for general relativity, but let us focus on special relativity for now). We are very much accustomed to thinking of reality as a three-dimensional space endowed with a Euclidean geometry that we traverse through in time, but in order to have the clearest view of the universe of special relativity it is better to think of reality instead as a four-dimensional spacetime that is endowed instead with a Minkowski geometry, which mathematically is similar to a (four-dimensional) Euclidean space but with a crucial change of sign in the underlying metric. Indeed, whereas the infinitesimal distance $ds$ between two nearby points in Euclidean space ${\bf R}^3$ is given by the three-dimensional Pythagorean theorem $$ds^2 = dx^2 + dy^2 + dz^2$$ (in standard Cartesian coordinates), the analogous spacetime interval $ds$ between two nearby events in Minkowski spacetime is given by $$ds^2 = dx^2 + dy^2 + dz^2 - c^2 dt^2,$$ where $c$ is the speed of light; it is the minus sign in front of the time coordinate that separates Minkowski geometry from Euclidean geometry.
That said, the analogy between Minkowski space and four-dimensional Euclidean space is strong enough that it serves as a useful conceptual aid when first learning special relativity; for instance the excellent introductory text “Spacetime physics” by Taylor and Wheeler very much adopts this view. On the other hand, this analogy doesn’t directly address the conceptual problem mentioned earlier of viewing reality as a four-dimensional spacetime in the first place, rather than as a three-dimensional space that objects move around in as time progresses. Of course, part of the issue is that we aren’t good at directly visualizing four dimensions in the first place. This latter problem can at least be easily addressed by removing one or two spatial dimensions from this framework – and indeed many relativity texts start with the simplified setting of only having one spatial dimension, so that spacetime becomes two-dimensional and can be depicted with relative ease by spacetime diagrams – but still there is conceptual resistance to the idea of treating time as another spatial dimension, since we clearly cannot “move around” in time as freely as we can in space, nor do we seem able to easily “rotate” between the spatial and temporal axes, the way that we can between the three coordinate axes of Euclidean space.
With this in mind, I thought it might be worth attempting a Plato-type allegory to reconcile the spatial and spacetime views of reality, in a way that can be used to describe (analogues of) some of the less intuitive features of relativity, such as time dilation, length contraction, and the relativity of simultaneity. I have (somewhat whimsically) decided to place this allegory in a Tolkienesque fantasy world (similarly to how my previous allegory to describe quantum mechanics was phrased in a world based on the computer game “Tomb Raider”). This is something of an experiment, and (like any other analogy) the allegory will not be able to perfectly capture every aspect of the phenomenon it is trying to represent, so any feedback to improve the allegory would be appreciated.
If $\lambda > 0$, a Poisson random variable ${\bf Poisson}(\lambda)$ with mean $\lambda$ is a random variable taking values in the natural numbers with probability distribution $${\bf P}({\bf Poisson}(\lambda) = k) = e^{-\lambda} \frac{\lambda^k}{k!}.$$ One is often interested in bounding the upper tail probabilities ${\bf P}({\bf Poisson}(\lambda) \geq \lambda(1+u))$ for $u \geq 0$, and the lower tail probabilities ${\bf P}({\bf Poisson}(\lambda) \leq \lambda(1+u))$ for $-1 \leq u \leq 0$. A standard tool for this is Bennett’s inequality:
Proposition 1 (Bennett’s inequality) One has $${\bf P}({\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq \exp(-\lambda h(u))$$ for $u \geq 0$ and $${\bf P}({\bf Poisson}(\lambda) \leq \lambda(1+u)) \leq \exp(-\lambda h(u))$$ for $-1 \leq u \leq 0$, where $$h(u) := (1+u) \log(1+u) - u.$$
From the Taylor expansion $h(u) = \frac{u^2}{2} + O(u^3)$ for $u = O(1)$ we conclude Gaussian type tail bounds in the regime $u = o(1)$ (and in particular when $u = O(1/\sqrt{\lambda})$), in the spirit of the Chernoff, Bernstein, and Hoeffding inequalities; but in the regime where $u$ is large and positive one obtains a slight gain over these other classical bounds (of $\exp(-\lambda u \log u)$ type, rather than $\exp(-C \lambda u)$).
Proof: We use the exponential moment method. For any $t \geq 0$, we have from Markov’s inequality that $${\bf P}({\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq e^{-t\lambda(1+u)} {\bf E} e^{t {\bf Poisson}(\lambda)} = \exp( \lambda(e^t - 1) - t\lambda(1+u) ).$$ Optimizing in $t$ by choosing $t = \log(1+u)$ gives the upper tail bound; the lower tail bound is proven similarly, now using negative values of $t$. $\Box$
Remark 2 Bennett’s inequality also applies for (suitably normalized) sums of bounded independent random variables. In some cases there are direct comparison inequalities available to relate those variables to the Poisson case. For instance, suppose $S = X_1 + \dots + X_n$ is the sum of independent Boolean variables $X_1, \dots, X_n$ of total mean $\sum_{i=1}^n {\bf E} X_i = \lambda$ and with ${\bf P}(X_i = 1) \leq \varepsilon$ for some $0 < \varepsilon < 1$. Then for any natural number $k$, we have $${\bf P}(S \geq k) \leq {\bf P}\left( {\bf Poisson}\left( \frac{\lambda}{1-\varepsilon} \right) \geq k \right).$$ As such, for $\varepsilon$ small, one can efficiently control the tail probabilities of $S$ in terms of the tail probability of a Poisson random variable of mean close to $\lambda$; this is of course very closely related to the well known fact that the Poisson distribution emerges as the limit of sums of many independent boolean variables, each of which is non-zero with small probability. See this paper of Bentkus and this paper of Pinelis for some further useful (and less obvious) comparison inequalities of this type.
In this note I wanted to record the observation that one can improve the Bennett bound by a small polynomial factor once one leaves the Gaussian regime $u = O(1/\sqrt{\lambda})$, in particular gaining a factor of $\frac{1}{\sqrt{\lambda u}}$ when $u \gg 1$. This observation is not difficult and is implicitly in the literature (one can extract it for instance from the much more general results of this paper of Talagrand, and the basic idea already appears in this paper of Glynn), but I was not able to find a clean version of this statement in the literature, so I am placing it here on my blog. (But if a reader knows of a reference that basically contains the bound below, I would be happy to know of it.)
Proposition 3 (Improved Bennett’s inequality) One has $${\bf P}({\bf Poisson}(\lambda) \geq \lambda(1+u)) \ll \frac{\exp(-\lambda h(u))}{1 + (\lambda \min(u, u^2))^{1/2}}$$ for $u > 0$ and $${\bf P}({\bf Poisson}(\lambda) \leq \lambda(1+u)) \ll \frac{\exp(-\lambda h(u))}{1 + (\lambda u^2 (1+u))^{1/2}}$$ for $-1 < u < 0$.
Proof: We begin with the first inequality. We may assume that $u \geq 1/\sqrt{\lambda}$, since otherwise the claim follows from the usual Bennett inequality. We expand out the left-hand side as $$\sum_{k \geq \lambda(1+u)} e^{-\lambda} \frac{\lambda^k}{k!}$$ and compare this sum against its largest term; the terms decay geometrically at a controllable rate, and elementary estimates such as the Stirling approximation then yield the stated gain over the Bennett bound.
Now we turn to the second inequality. As before we may assume that $u \leq -1/\sqrt{\lambda}$. We first dispose of a degenerate case in which $\lambda(1+u) < 1$. Here the left-hand side is just $${\bf P}({\bf Poisson}(\lambda) = 0) = e^{-\lambda},$$ which can be compared directly against the right-hand side.
It remains to consider the regime where $\lambda(1+u) \geq 1$ and $u \leq -1/\sqrt{\lambda}$. The left-hand side expands as $$\sum_{0 \leq k \leq \lambda(1+u)} e^{-\lambda} \frac{\lambda^k}{k!},$$ which can again be compared against its largest term.
The same analysis can be reversed to show that the bounds given above are basically sharp up to constants, at least when $\lambda$ (and $|u|$) are large.
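To see the improvement numerically, the following quick script (a sketch; note that the improved bound carries an unspecified absolute constant, which is simply set to $1$ here) compares the exact Poisson upper tail with the classical and improved Bennett bounds:

```python
import math
from scipy.stats import poisson

def h(u):
    return (1 + u) * math.log1p(u) - u

lam = 100.0
for u in [0.1, 0.5, 1.0, 5.0, 20.0]:
    k = math.ceil(lam * (1 + u))
    exact = poisson.sf(k - 1, lam)       # P(Poisson(lam) >= lam(1+u))
    bennett = math.exp(-lam * h(u))      # classical Bennett bound
    improved = bennett / (1 + math.sqrt(lam * min(u, u * u)))
    print(f"u={u:5.1f}  exact={exact:.2e}  Bennett={bennett:.2e}  improved={improved:.2e}")
```

In the Gaussian regime the two bounds are comparable, while for large $u$ the improved bound tracks the exact tail more closely.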
Rachel Greenfeld and I have just uploaded to the arXiv our paper “A counterexample to the periodic tiling conjecture“. This is the full version of the result I announced on this blog a few months ago, in which we disprove the periodic tiling conjecture of Grünbaum–Shephard and Lagarias–Wang. The paper took a little longer than expected to finish, due to a technical issue that we did not realize at the time of the announcement, and which required a workaround.
In more detail: the original strategy, as described in the announcement, was to build a “tiling language” that was capable of encoding a certain “$p$-adic Sudoku puzzle”, and then show that the latter type of puzzle had only non-periodic solutions if $p$ was a sufficiently large prime. As it turns out, the second half of this strategy worked out, but there was an issue in the first part: our tiling language was able (using $2$-group-valued functions) to encode arbitrary boolean relationships between boolean functions, and was also able (using ${\bf Z}/p{\bf Z}$-valued functions) to encode “clock” functions such as $n \mapsto n \hbox{ mod } p$ that were part of our “$p$-adic Sudoku puzzle”, but we were not able to make these two types of functions “talk” to each other in the way that was needed to encode the “$p$-adic Sudoku puzzle” (the basic problem being that if $H$ is a finite abelian $2$-group then every subgroup of $H \times {\bf Z}/p{\bf Z}$ splits as the direct sum of a subgroup of $H$ and a subgroup of ${\bf Z}/p{\bf Z}$, so there are no non-trivial subgroups that couple the $H$ direction to the ${\bf Z}/p{\bf Z}$ direction). As a consequence, we had to replace our “$p$-adic Sudoku puzzle” by a “$q$-adic Sudoku puzzle”, which basically amounts to replacing the prime $p$ by a sufficiently large power $q$ of $2$. This solved the encoding issue, but the analysis of the $q$-adic Sudoku puzzles was a little bit more complicated than the $p$-adic case, for the following reason. The following is a nice exercise in analysis:
Theorem 1 (Linearity in three directions implies full linearity) Let $F: {\bf R}^2 \to {\bf R}$ be a smooth function which is affine-linear on every horizontal line, diagonal (line of slope $1$), and anti-diagonal (line of slope $-1$). In other words, for any $y \in {\bf R}$, the functions $x \mapsto F(x,y)$, $x \mapsto F(x,y+x)$, and $x \mapsto F(x,y-x)$ are each affine functions on ${\bf R}$. Then $F$ is an affine function on ${\bf R}^2$.
Indeed, the property of being affine in three directions shows that the quadratic form associated to the Hessian at any given point vanishes at $(1,0)$, $(1,1)$, and $(1,-1)$, and thus must vanish everywhere. In fact the smoothness hypothesis is not necessary; we leave this as an exercise to the interested reader. The same statement turns out to be true if one replaces ${\bf R}$ with the cyclic group ${\bf Z}/q{\bf Z}$ as long as $q$ is odd; this is the key to showing that our $p$-adic Sudoku puzzles have an (approximate) two-dimensional affine structure, which on further analysis can then be used to show that each solution is in fact non-periodic. However, it turns out that the corresponding claim for cyclic groups ${\bf Z}/q{\bf Z}$ can fail when $q$ is a sufficiently large power of $2$! In fact the general form of functions $F: ({\bf Z}/q{\bf Z})^2 \to {\bf Z}/q{\bf Z}$ that are affine on every horizontal line, diagonal, and anti-diagonal takes the form $$F(x,y) = Ax + By + C + D \left( \frac{q}{4} xy + \frac{q}{2} \binom{y}{2} \right)$$ for coefficients $A, B, C \in {\bf Z}/q{\bf Z}$ and $D \in {\bf Z}/4{\bf Z}$ (assuming $4 | q$); the terms attached to $D$ supply genuinely non-affine solutions, which is what complicates the analysis of the $q$-adic Sudoku puzzles.
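One can verify this sort of example by brute force; the following short script (a sketch, taking $q = 16$ and isolating the non-affine $D$ term) checks that the second differences vanish along the three required families of lines but not along vertical lines:

```python
q = 16  # a power of two divisible by 4

def F(x, y):
    # the non-affine term: (q/4) * x * y + (q/2) * binom(y, 2), mod q
    return ((q // 4) * x * y + (q // 2) * (y * (y - 1) // 2)) % q

def affine_along(dx, dy):
    # F is affine along direction (dx, dy) iff all second differences vanish
    return all((F(x + 2 * dx, y + 2 * dy) - 2 * F(x + dx, y + dy) + F(x, y)) % q == 0
               for x in range(q) for y in range(q))

print(affine_along(1, 0))   # True: horizontal lines
print(affine_along(1, 1))   # True: diagonals
print(affine_along(1, -1))  # True: anti-diagonals
print(affine_along(0, 1))   # False: vertical lines, so F is not affine
```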
During the writing process we also discovered that the encoding part of the proof becomes more modular and conceptual once one introduces two new definitions, that of an “expressible property” and a “weakly expressible property”. These concepts are somewhat analogous to the distinction between universally quantified sentences and existentially-then-universally quantified sentences in the arithmetic hierarchy, or to algebraic sets and semi-algebraic sets in real algebraic geometry. Roughly speaking, an expressible property is a property of a tuple of functions $f_1, \dots, f_k$ from an abelian group $G$ to finite abelian groups $H_1, \dots, H_k$, such that the property can be expressed in terms of one or more tiling equations on the graph $$\{ (x, f_1(x), \dots, f_k(x)) : x \in G \} \subset G \times H_1 \times \dots \times H_k$$ of that tuple; a weakly expressible property is one that becomes expressible after the introduction of suitable auxiliary functions.
This is a spinoff from the previous post. In that post, we remarked that whenever one receives a new piece of information $E$, the prior odds $\frac{{\bf P}(H_1)}{{\bf P}(H_0)}$ between an alternative hypothesis $H_1$ and a null hypothesis $H_0$ are updated to posterior odds $\frac{{\bf P}(H_1|E)}{{\bf P}(H_0|E)}$, which can be computed via Bayes’ theorem by the formula $$\frac{{\bf P}(H_1|E)}{{\bf P}(H_0|E)} = \frac{{\bf P}(E|H_1)}{{\bf P}(E|H_0)} \cdot \frac{{\bf P}(H_1)}{{\bf P}(H_0)}.$$ This update can be organized in the form of a worksheet.
A PDF version of the worksheet and instructions can be found here. One can fill in this worksheet in the following order:
- In Box 1, one enters in the precise statement of the null hypothesis $H_0$.
- In Box 2, one enters in the precise statement of the alternative hypothesis $H_1$. (This step is very important! As discussed in the previous post, Bayesian calculations can become extremely inaccurate if the alternative hypothesis is vague.)
- In Box 3, one enters in the prior probability ${\bf P}(H_0)$ (or the best estimate thereof) of the null hypothesis $H_0$.
- In Box 4, one enters in the prior probability ${\bf P}(H_1)$ (or the best estimate thereof) of the alternative hypothesis $H_1$. If only two hypotheses are being considered, we of course have ${\bf P}(H_1) = 1 - {\bf P}(H_0)$.
- In Box 5, one enters in the ratio $\frac{{\bf P}(H_1)}{{\bf P}(H_0)}$ between Box 4 and Box 3.
- In Box 6, one enters in the precise new information $E$ that one has acquired since the prior state. (As discussed in the previous post, it is important that all relevant information $E$ – both supporting and invalidating the alternative hypothesis – is reported accurately. If one cannot be certain that key information has not been withheld from you, then Bayesian calculations become highly unreliable.)
- In Box 7, one enters in the likelihood ${\bf P}(E|H_0)$ (or the best estimate thereof) of the new information $E$ under the null hypothesis $H_0$.
- In Box 8, one enters in the likelihood ${\bf P}(E|H_1)$ (or the best estimate thereof) of the new information $E$ under the alternative hypothesis $H_1$. (This can be difficult to compute, particularly if $H_1$ is not specified precisely.)
- In Box 9, one enters in the ratio $\frac{{\bf P}(E|H_1)}{{\bf P}(E|H_0)}$ between Box 8 and Box 7.
- In Box 10, one enters in the product of Box 5 and Box 9; this is the posterior odds $\frac{{\bf P}(H_1|E)}{{\bf P}(H_0|E)}$.
- (Assuming there are no other hypotheses than $H_0$ and $H_1$) In Box 11, enter in $1$ divided by $1$ plus Box 10; this is the posterior probability ${\bf P}(H_0|E)$.
- (Assuming there are no other hypotheses than $H_0$ and $H_1$) In Box 12, enter in Box 10 divided by $1$ plus Box 10; this is the posterior probability ${\bf P}(H_1|E)$. (Alternatively, one can enter in $1$ minus Box 11.)
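The arithmetic in Boxes 5 and 9–12 can also be captured in a few lines of code; here is a minimal sketch (the function and variable names are illustrative and not part of the worksheet itself):

```python
def bayes_worksheet(prior_null, prior_alt, likelihood_null, likelihood_alt):
    """Boxes 5 and 9-12 of the worksheet, for two exhaustive hypotheses."""
    prior_odds = prior_alt / prior_null                    # Box 5
    likelihood_ratio = likelihood_alt / likelihood_null    # Box 9
    posterior_odds = prior_odds * likelihood_ratio         # Box 10
    posterior_null = 1 / (1 + posterior_odds)              # Box 11
    posterior_alt = posterior_odds / (1 + posterior_odds)  # Box 12
    return posterior_null, posterior_alt

# The COVID-19 testing example worked out below:
print(bayes_worksheet(0.99, 0.01, 0.05, 0.80))  # approximately (0.86, 0.14)
```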
To illustrate this procedure, let us consider a standard Bayesian update problem (with illustrative numbers). Suppose that at a given point in time, $1\%$ of the population is infected with COVID-19. In response to this, a company mandates COVID-19 testing of its workforce, using a cheap COVID-19 test. This test has a $20\%$ chance of a false negative (testing negative when one has COVID) and a $5\%$ chance of a false positive (testing positive when one does not have COVID). An employee $X$ takes the mandatory test, which turns out to be positive. What is the probability that $X$ actually has COVID?
We can fill out the entries in the worksheet one at a time:
- Box 1: The null hypothesis $H_0$ is that $X$ does not have COVID.
- Box 2: The alternative hypothesis $H_1$ is that $X$ does have COVID.
- Box 3: In the absence of any better information, the prior probability ${\bf P}(H_0)$ of the null hypothesis is $0.99$, or $99\%$.
- Box 4: Similarly, the prior probability ${\bf P}(H_1)$ of the alternative hypothesis is $0.01$, or $1\%$.
- Box 5: The prior odds $\frac{{\bf P}(H_1)}{{\bf P}(H_0)}$ are $\frac{1}{99} \approx 0.01$.
- Box 6: The new information $E$ is that $X$ has tested positive for COVID.
- Box 7: The likelihood ${\bf P}(E|H_0)$ of $E$ under the null hypothesis is $0.05$, or $5\%$ (the false positive rate).
- Box 8: The likelihood ${\bf P}(E|H_1)$ of $E$ under the alternative is $0.8$, or $80\%$ (one minus the false negative rate).
- Box 9: The likelihood ratio $\frac{{\bf P}(E|H_1)}{{\bf P}(E|H_0)}$ is $\frac{0.8}{0.05} = 16$.
- Box 10: The product of Box 5 and Box 9 is approximately $0.16$.
- Box 11: The posterior probability ${\bf P}(H_0|E)$ is approximately $86\%$.
- Box 12: The posterior probability ${\bf P}(H_1|E)$ is approximately $14\%$.
Perhaps surprisingly, despite the positive COVID test, the employee only has about a $14\%$ chance of actually having COVID! This is due to the relatively large false positive rate of this cheap test, and is an illustration of the base rate fallacy in statistics.
We remark that if we switch the roles of the null hypothesis and alternative hypothesis, then some of the odds in the worksheet change, but the ultimate conclusions remain unchanged. So the question of which hypothesis to designate as the null hypothesis and which one to designate as the alternative hypothesis is largely a matter of convention.
Now let us take a superficially similar situation in which a mother observes her daughter exhibiting COVID-like symptoms, to the point where she estimates the probability of her daughter having COVID at $50\%$. She then administers the same cheap COVID-19 test as before, which returns positive. What is the posterior probability of her daughter having COVID?

One can fill out the worksheet much as before, but now with the prior probability of the alternative hypothesis raised from $1\%$ to $50\%$ (and the prior probability of the null hypothesis dropping from $99\%$ to $50\%$). Since the prior odds are now even, the posterior odds equal the likelihood ratio $16$, and one now gets that the probability that the daughter has COVID has increased all the way to approximately $94\%$. Thus we see that prior probabilities can make a significant impact on the posterior probabilities.
Now we use the worksheet to analyze an infamous probability puzzle, the Monty Hall problem. Let us use the formulation given in that Wikipedia page:
Problem 1 Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?
For this problem, the precise formulation of the null hypothesis and the alternative hypothesis becomes rather important. Suppose we take the following two hypotheses:
- Null hypothesis $H_0$: The car is behind door number 1, and no matter what door you pick, the host will randomly reveal another door that contains a goat.
- Alternative hypothesis $H_1$: The car is behind door number 2 or 3, and no matter what door you pick, the host will randomly reveal another door that contains a goat.
Here, taking the new information $E$ to be the event that the host opens door number 3 to reveal a goat, we have ${\bf P}(H_0) = 1/3$, ${\bf P}(H_1) = 2/3$, and ${\bf P}(E|H_0) = {\bf P}(E|H_1) = 1/2$, so the posterior odds remain $2:1$ in favor of the alternative hypothesis; the car is behind door number 2 with probability $2/3$, and it is to your advantage to switch.
However, consider the following different set of hypotheses:
- Null hypothesis $H'_0$: The car is behind door number 1, and if you pick the door with the car, the host will reveal another door to entice you to switch. Otherwise, the host will not reveal a door.
- Alternative hypothesis $H'_1$: The car is behind door number 2 or 3, and if you pick the door with the car, the host will reveal another door to entice you to switch. Otherwise, the host will not reveal a door.
Here we still have ${\bf P}(H'_0) = 1/3$ and ${\bf P}(H'_1) = 2/3$, but while ${\bf P}(E|H'_0)$ remains equal to $1/2$, ${\bf P}(E|H'_1)$ has dropped to zero (since if the car is not behind door 1, the host will not reveal a door). So now ${\bf P}(H'_0|E)$ has increased all the way to $100\%$, and it is not advantageous to switch! This dramatically illustrates the importance of specifying the hypotheses precisely.
Finally, we consider another famous probability puzzle, the Sleeping Beauty problem. Again we quote the problem as formulated on the Wikipedia page:
Problem 2 Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Sleeping Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake:
- If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
- If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.
- In either case, she will be awakened on Wednesday without interview and the experiment ends.
Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Sleeping Beauty is asked: “What is your credence now for the proposition that the coin landed heads?”
Here the situation can be confusing because there are key portions of this experiment in which the observer is unconscious, but nevertheless Bayesian probability continues to operate regardless of whether the observer is conscious. To make this issue more precise, let us assume that the awakenings mentioned in the problem always occur at 8am, so in particular at 7am, Sleeping Beauty will always be unconscious.
Here, the null and alternative hypotheses are easy to state precisely:
- Null hypothesis $H_0$: The coin landed tails.
- Alternative hypothesis $H_1$: The coin landed heads.
The subtle thing here is to work out what the correct prior state is (in most other applications of Bayesian probability, this state is obvious from the problem). It turns out that the most reasonable choice of prior state is “unconscious at 7am, on either Monday or Tuesday, with an equal chance of each”. (Note that whatever the outcome of the coin flip is, Sleeping Beauty will be unconscious at 7am Monday and unconscious again at 7am Tuesday, so it makes sense to give each of these two states an equal probability.) The new information is then
- New information $E$: One hour after the prior state, Sleeping Beauty is awakened.
With this formulation, we see that ${\bf P}(H_0) = {\bf P}(H_1) = \frac{1}{2}$, ${\bf P}(E|H_0) = 1$, and ${\bf P}(E|H_1) = \frac{1}{2}$, so on working through the worksheet one eventually arrives at ${\bf P}(H_1|E) = \frac{1}{3}$, so that Sleeping Beauty should only assign a probability of $\frac{1}{3}$ to the event that the coin landed as heads.
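One way to check this answer is by direct simulation (a sketch; the simulation design itself encodes the choice of prior state discussed above, namely that an interview is sampled uniformly among the awakenings):

```python
import random

def sleeping_beauty(trials=100_000):
    heads_interviews = 0
    total_interviews = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        interviews = 1 if heads else 2   # Monday only, versus Monday and Tuesday
        total_interviews += interviews
        if heads:
            heads_interviews += interviews
    return heads_interviews / total_interviews

print(sleeping_beauty())  # fluctuates around 1/3
```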
There are arguments advanced in the literature to adopt the position that ${\bf P}(H_1|E)$ should instead be equal to $\frac{1}{2}$, but I do not see a way to interpret them in this Bayesian framework without a substantial alteration to either the notion of the prior state, or by not presenting the new information $E$ properly.
If one has multiple pieces of information $E_1, E_2$ that one wishes to use to update one’s priors, one can do so by filling out one copy of the worksheet for each new piece of information, or by using a multi-row version of the worksheet using such identities as $$\frac{{\bf P}(H_1|E_1 \wedge E_2)}{{\bf P}(H_0|E_1 \wedge E_2)} = \frac{{\bf P}(H_1)}{{\bf P}(H_0)} \times \frac{{\bf P}(E_1|H_1)}{{\bf P}(E_1|H_0)} \times \frac{{\bf P}(E_2|H_1 \wedge E_1)}{{\bf P}(E_2|H_0 \wedge E_1)}.$$
An unusual lottery result made the news recently: on October 1, 2022, the PCSO Grand Lotto in the Philippines, which draws six numbers from $1$ to $55$ at random, managed to draw the numbers $9, 18, 27, 36, 45, 54$ (though the balls were actually drawn in a different order). In other words, they drew exactly six multiples of nine from $1$ to $55$. In addition, a total of $433$ tickets were bought with this winning combination, whose owners then had to split the $236$ million peso jackpot (about $4$ million USD) among themselves. This raised enough suspicion that there were calls for an inquiry into the Philippine lottery system, including from the minority leader of the Senate.
Whenever an event like this happens, journalists often contact mathematicians to ask the question: “What are the odds of this happening?”, and in fact I myself received one such inquiry this time around. This is a number that is not too difficult to compute – in this case, the probability of the lottery producing the six numbers $9, 18, 27, 36, 45, 54$ in some order turns out to be $1$ in $\binom{55}{6} = 28{,}989{,}675$ – and such a number is often dutifully provided to such journalists, who in turn report it as some sort of quantitative demonstration of how remarkable the event was.
But on the previous draw of the same lottery, on September 28, 2022, an unremarkable sequence of six numbers was drawn (again in a different order than the balls were selected), and no tickets ended up claiming the jackpot. The probability of the lottery producing those six numbers is also $1$ in $\binom{55}{6} = 28{,}989{,}675$ – just as likely or as unlikely as the October 1 numbers $9, 18, 27, 36, 45, 54$. Indeed, the whole point of drawing the numbers randomly is to make each of the $28{,}989{,}675$ possible outcomes (whether they be “unusual” or “unremarkable”) equally likely. So why is it that the October 1 lottery attracted so much attention, but the September 28 lottery did not?
Part of the explanation surely lies in the unusually large number ($433$) of lottery winners on October 1, but I will set that aspect of the story aside until the end of this post. The more general points that I want to make with these sorts of situations are:
- The question “what are the odds of an event $E$ happening” is often easy to answer mathematically, but it is not the correct question to ask.
- The question “what is the probability that an alternative hypothesis is the truth” is (one of) the correct questions to ask, but is very difficult to answer (it involves both mathematical and non-mathematical considerations).
- The answer to the first question is one of the quantities needed to calculate the answer to the second, but it is far from the only such quantity. Most of the other quantities involved cannot be calculated exactly.
- However, by making some educated guesses, one can still sometimes get a very rough gauge of which events are “more surprising” than others, in that they would lead to relatively higher answers to the second question.
To explain these points it is convenient to adopt the framework of Bayesian probability. In this framework, one imagines that there are competing hypotheses to explain the world, and that one assigns a probability to each such hypothesis representing one’s belief in the truth of that hypothesis. For simplicity, let us assume that there are just two competing hypotheses to be entertained: the null hypothesis $H_0$, and an alternative hypothesis $H_1$. For instance, in our lottery example, the two hypotheses might be:
- Null hypothesis $H_0$: The lottery is run in a completely fair and random fashion.
- Alternative hypothesis $H_1$: The lottery is rigged by some corrupt officials for their personal gain.
At any given point in time, a person would have a probability ${\bf P}(H_0)$ assigned to the null hypothesis, and a probability ${\bf P}(H_1)$ assigned to the alternative hypothesis; in this simplified model where there are only two hypotheses under consideration, these probabilities must add to one, but of course if there were additional hypotheses beyond these two then this would no longer be the case.
Bayesian probability does not provide a rule for calculating the initial (or prior) probabilities ${\bf P}(H_0)$, ${\bf P}(H_1)$ that one starts with; these may depend on the subjective experiences and biases of the person considering the hypothesis. For instance, one person might have quite a bit of prior faith in the lottery system, and assign the probabilities ${\bf P}(H_0) = 0.99$ and ${\bf P}(H_1) = 0.01$. Another person might have quite a bit of prior cynicism, and perhaps assign ${\bf P}(H_0) = 0.5$ and ${\bf P}(H_1) = 0.5$. One cannot use purely mathematical arguments to determine which of these two people is “correct” (or whether they are both “wrong”); it depends on subjective factors.
What Bayesian probability does do, however, is provide a rule to update these probabilities ${\bf P}(H_0)$, ${\bf P}(H_1)$ in view of new information $E$ to provide posterior probabilities ${\bf P}(H_0|E)$, ${\bf P}(H_1|E)$. In our example, the new information $E$ would be the fact that the October 1 lottery numbers were $9, 18, 27, 36, 45, 54$ (in some order). The update is given by the famous Bayes theorem $${\bf P}(H_1|E) = \frac{{\bf P}(E|H_1) {\bf P}(H_1)}{{\bf P}(E)};$$ dividing this by the corresponding formula for ${\bf P}(H_0|E)$, we see that the posterior odds $\frac{{\bf P}(H_1|E)}{{\bf P}(H_0|E)}$ of the alternative hypothesis are determined by three quantities:
- The prior odds $\frac{{\bf P}(H_1)}{{\bf P}(H_0)}$ of the alternative hypothesis;
- The probability ${\bf P}(E|H_0)$ that the event $E$ occurs under the null hypothesis $H_0$; and
- The probability ${\bf P}(E|H_1)$ that the event $E$ occurs under the alternative hypothesis $H_1$.
As previously discussed, the prior odds $\frac{{\bf P}(H_1)}{{\bf P}(H_0)}$ of the alternative hypothesis are subjective and vary from person to person; in the example earlier, the person with substantial faith in the lottery may only give prior odds of $\frac{0.01}{0.99} \approx 0.01$ (99 to 1 against) of the alternative hypothesis, whereas the cynic might give odds of $\frac{0.5}{0.5} = 1$ (even odds). The probability ${\bf P}(E|H_0)$ is the quantity that can often be calculated by straightforward mathematics; as discussed before, in this specific example we have $${\bf P}(E|H_0) = \frac{1}{\binom{55}{6}} = \frac{1}{28{,}989{,}675} \approx 3.4 \times 10^{-8}.$$ The remaining quantity ${\bf P}(E|H_1)$ is the most difficult of the three to estimate, as it depends sensitively on how precisely the alternative hypothesis is formulated.
For instance, suppose we replace the alternative hypothesis $H_1$ by the following very specific (and somewhat bizarre) hypothesis:
- Alternative hypothesis $H'_1$: The lottery is rigged by a cult that worships the multiples of $9$, and views October 1 as their holiest day. On this day, they will manipulate the lottery to only select those balls that are multiples of $9$.
Under this alternative hypothesis $H'_1$, we have ${\bf P}(E|H'_1) = 1$. So, when $E$ happens, the odds of this alternative hypothesis $H'_1$ will increase by the dramatic factor of $\binom{55}{6} \approx 2.9 \times 10^7$. So, for instance, someone who already was entertaining odds of $10^{-6}$ of this hypothesis $H'_1$ would now have these odds multiply dramatically to about $29$, so that the probability of $H'_1$ would have jumped from a mere $0.0001\%$ to a staggering $97\%$. This is about as strong a shift in belief as one could imagine. However, this hypothesis $H'_1$ is so specific and bizarre that one’s prior odds of this hypothesis would be nowhere near as large as $10^{-6}$ (unless substantial prior evidence of this cult and its hold on the lottery system existed, of course). A more realistic prior odds for $H'_1$ would be something like $10^{-20}$ – which is so miniscule that even multiplying it by a factor such as $2.9 \times 10^7$ barely moves the needle.
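The arithmetic of this update is easy to check directly (a sketch, using the illustrative prior odds from the discussion above):

```python
from math import comb

N = comb(55, 6)            # 28,989,675 equally likely draws
p_E_null = 1 / N           # P(E|H_0): fair lottery
p_E_cult = 1.0             # P(E|H'_1): the cult hypothesis forces this draw

prior_odds = 1e-6          # illustrative prior odds of the cult hypothesis
posterior_odds = prior_odds * (p_E_cult / p_E_null)     # about 29
posterior_prob = posterior_odds / (1 + posterior_odds)  # about 0.97
print(N, posterior_odds, posterior_prob)
```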
Remark 1 The contrast between alternative hypothesis $H_1$ and alternative hypothesis $H'_1$ illustrates a common demagogical rhetorical technique when an advocate is trying to convince an audience of an alternative hypothesis, namely to use suggestive language (“I’m just asking questions here”) rather than precise statements in order to leave the alternative hypothesis deliberately vague. In particular, the advocate may take advantage of the freedom to use a broad formulation of the hypothesis (such as $H_1$) in order to maximize the audience’s prior odds of the hypothesis, simultaneously with a very specific formulation of the hypothesis (such as $H'_1$) in order to maximize the probability of the actual event $E$ occurring under this hypothesis. (A related technique is to be deliberately vague about the hypothesized competency of some suspicious actor, so that this actor could be portrayed as being extraordinarily competent when convenient to do so, while simultaneously being portrayed as extraordinarily incompetent when that instead is the more useful hypothesis.) This can lead to wildly inaccurate Bayesian updates of this vague alternative hypothesis, and so precise formulation of such hypotheses is important if one is to approach a topic from anything remotely resembling a scientific approach. [EDIT: as pointed out to me by a reader, this technique is a Bayesian analogue of the motte and bailey fallacy.]
At the opposite extreme, consider instead the following hypothesis:
- Alternative hypothesis $H''_1$: The lottery is rigged by some corrupt officials, who on October 1 decide to randomly determine the winning numbers in advance, share these numbers with their collaborators, and then manipulate the lottery to choose those numbers that they selected.
If these corrupt officials are indeed choosing their predetermined winning numbers randomly, then the probability ${\bf P}(E|H''_1)$ would in fact be just the same probability $\frac{1}{\binom{55}{6}}$ as ${\bf P}(E|H_0)$, and in this case the seemingly unusual event $E$ would in fact have no effect on the odds of the alternative hypothesis, because it was just as unlikely for the alternative hypothesis to generate this multiples-of-nine pattern as for the null hypothesis to. In fact, one would imagine that these corrupt officials would avoid “suspicious” numbers, such as the multiples of $9$, and only choose numbers that look random, in which case ${\bf P}(E|H''_1)$ would in fact be less than ${\bf P}(E|H_0)$ and so the event $E$ would actually lower the odds of the alternative hypothesis in this case. (In fact, one can sometimes use this tendency of fraudsters to not generate truly random data as a statistical tool to detect such fraud; violations of Benford’s law for instance can be used in this fashion, though only in situations where the null hypothesis is expected to obey Benford’s law, as discussed in this previous blog post.)
Now let us consider a third alternative hypothesis:
- Alternative hypothesis $H'''_1$: On October 1, the lottery machine developed a fault and now only selects numbers that exhibit unusual patterns.
Setting aside the question of precisely what faulty mechanism could induce this sort of effect, it is not clear at all how to compute ${\bf P}(E|H'''_1)$ in this case. Using the principle of indifference as a crude rule of thumb, one might expect $${\bf P}(E|H'''_1) \approx \frac{1}{N},$$ where $N$ is the number of sets of six numbers that would be deemed similarly “unusual”; whatever the value of $N$ is, it is surely far smaller than $\binom{55}{6}$, so this hypothesis would receive a substantial Bayesian boost from the event $E$. On the other hand, this hypothesis would struggle to explain other observed events, such as the same lottery machine returning unremarkable numbers on other draws.
Remark 2 This example demonstrates another demagogical rhetorical technique that one sometimes sees (particularly in political or other emotionally charged contexts), which is to cherry-pick the information presented to their audience by informing them of events $E$ which have a relatively high probability of occurring under their alternative hypothesis, but withholding information about other relevant events $E'$ that have a relatively low probability of occurring under their alternative hypothesis. When confronted with such new information $E'$, a common defense of a demagogue is to modify the alternative hypothesis $H_1$ to a more specific hypothesis $H'_1$ that can “explain” this information $E'$ (“Oh, clearly we heard about $E'$ because the conspiracy in fact extends to the additional organizations that reported $E'$“), taking advantage of the vagueness discussed in Remark 1.
Let us consider a superficially similar hypothesis:
- Alternative hypothesis $H''''_1$: On October 1, a divine being decided to send a sign to humanity by placing an unusual pattern in a lottery.
Here we (literally) stay agnostic on the prior odds of this hypothesis, and do not address the theological question of why a divine being should choose to use the medium of a lottery to send their signs. At first glance, the probability ${\bf P}(E|H''''_1)$ here should be similar to the probability ${\bf P}(E|H'''_1)$, and so perhaps one could use this event $E$ to improve the odds of the existence of a divine being by a factor of a thousand or so. But note carefully that the hypothesis $H''''_1$ did not specify which lottery the divine being chose to use. The PCSO Grand Lotto is just one of a dozen lotteries run by the Philippine Charity Sweepstakes Office (PCSO), and of course there are over a hundred other countries and thousands of states within these countries, each of which often run their own lotteries. Taking into account these thousands or tens of thousands of additional lotteries to choose from, the probability ${\bf P}(E|H''''_1)$ now drops by several orders of magnitude, and is now basically comparable to the probability ${\bf P}(E|H_0)$ coming from the null hypothesis. As such one does not expect the event $E$ to have a significant impact on the odds of the hypothesis $H''''_1$, despite the small-looking nature $3.4 \times 10^{-8}$ of the probability ${\bf P}(E|H_0)$.
In summary, we have failed to locate any alternative hypothesis $H_1$ which
- Has some non-negligible prior odds of being true (and in particular is not excessively specific, as with hypothesis $H'_1$);
- Has a significantly higher probability of producing the specific event $E$ than the null hypothesis; AND
- Does not struggle to also produce other events $E'$ that have since been observed.
We now return to the fact that for this specific October 1 lottery, there were $433$ tickets that managed to select the winning numbers. Let us call this event $F$. In view of this additional information, we should now consider the ratio of the probabilities ${\bf P}(E \wedge F|H_0)$ and ${\bf P}(E \wedge F|H_1)$, rather than the ratio of the probabilities ${\bf P}(E|H_0)$ and ${\bf P}(E|H_1)$. If we augment the null hypothesis to
- Null hypothesis $H'_0$: The lottery is run in a completely fair and random fashion, and the purchasers of lottery tickets also select their numbers in a completely random fashion.
then ${\bf P}(E \wedge F|H'_0)$ is indeed of the “insanely improbable” category mentioned previously. I was not able to get official numbers on how many tickets are purchased per lottery, but let us say for sake of argument that it is 1 million (the conclusion will not be extremely sensitive to this choice). Then the expected number of tickets that would have the winning numbers would be $$\frac{1{,}000{,}000}{28{,}989{,}675} \approx 0.03,$$ so that the probability of as many as $433$ tickets sharing the winning combination would be astronomically small. If however we modify the null hypothesis to
- Null hypothesis $H''_0$: The lottery is run in a completely fair and random fashion, but a significant fraction of the purchasers of lottery tickets only select “unusual” numbers.
then it can now become quite plausible that a highly unusual set of numbers such as $9, 18, 27, 36, 45, 54$ could be selected by as many as $433$ purchasers of tickets; for instance, if $10\%$ of the 1 million ticket holders chose to select their numbers according to some sort of pattern, then only $0.43\%$ of those holders would have to pick $9, 18, 27, 36, 45, 54$ in order for the event $F$ to hold (given $E$), and this is not extremely implausible. Given that this reasonable version of the null hypothesis already gives a plausible explanation for $F$, there does not seem to be a pressing need to locate an alternate hypothesis $H_1$ that gives some other explanation (cf. Occam’s razor). [UPDATE: Indeed, given the actual layout of the tickets of this lottery, the numbers $9, 18, 27, 36, 45, 54$ form a diagonal, and so all that is needed in order for the modified null hypothesis $H''_0$ to explain the event $F$ is to postulate that a significant fraction of ticket purchasers decided to lay out their numbers in a simple geometric pattern, such as a row or diagonal.]
Remark 3 In view of the above discussion, one can propose a systematic way to evaluate (in as objective a fashion as possible) rhetorical claims in which an advocate is presenting evidence to support some alternative hypothesis:
- State the null hypothesis $H_0$ and the alternative hypothesis $H_1$ as precisely as possible. In particular, avoid conflating an extremely broad hypothesis (such as the hypothesis $H_1$ in our running example) with an extremely specific one (such as $H'_1$ in our example).
- With the hypotheses precisely stated, give an honest estimate to the prior odds of this formulation of the alternative hypothesis.
- Consider if all the relevant information $E$ (or at least a representative sample thereof) has been presented to you before proceeding further. If not, consider gathering more information $E$ from further sources.
- Estimate how likely the information $E$ was to have occurred under the null hypothesis.
- Estimate how likely the information $E$ was to have occurred under the alternative hypothesis (using exactly the same wording of this hypothesis as you did in previous steps).
- If the second estimate is significantly larger than the first, then you have cause to update your prior odds of this hypothesis (though if those prior odds were already vanishingly unlikely, this may not move the needle significantly). If not, the argument is unconvincing and no significant adjustment to the odds (except perhaps in a downwards direction) needs to be made.
Rachel Greenfeld and I have just uploaded to the arXiv our announcement “A counterexample to the periodic tiling conjecture“. This is an announcement of a longer paper that we are currently in the process of writing up (and hope to release in a few weeks), in which we disprove the periodic tiling conjecture of Grünbaum-Shephard and Lagarias-Wang. This conjecture can be formulated in both discrete and continuous settings:
Conjecture 1 (Discrete periodic tiling conjecture) Suppose that $F \subset {\bf Z}^d$ is a finite set that tiles ${\bf Z}^d$ by translations (i.e., ${\bf Z}^d$ can be partitioned into translates of $F$). Then $F$ also tiles ${\bf Z}^d$ by translations periodically (i.e., the set of translations can be taken to be a periodic subset of ${\bf Z}^d$).
Conjecture 2 (Continuous periodic tiling conjecture) Suppose that $\Omega \subset {\bf R}^d$ is a bounded measurable set of positive measure that tiles ${\bf R}^d$ by translations up to null sets. Then $\Omega$ also tiles ${\bf R}^d$ by translations periodically up to null sets.
The discrete periodic tiling conjecture can be easily established for $d=1$ by the pigeonhole principle (as first observed by Newman), and was proven for $d=2$ by Bhattacharya (with a new proof given by Greenfeld and myself). The continuous periodic tiling conjecture was established for $d=1$ by Lagarias and Wang. By an old observation of Hao Wang, one of the consequences of the (discrete) periodic tiling conjecture is that the problem of determining whether a given finite set $F \subset {\bf Z}^d$ tiles ${\bf Z}^d$ by translations is (algorithmically and logically) decidable.
On the other hand, once one allows tilings by more than one tile, it is well known that aperiodic tile sets exist, even in dimension two – finite collections of discrete or continuous tiles that can tile the given domain by translations, but not periodically. Perhaps the most famous examples of such aperiodic tilings are the Penrose tilings, but there are many other constructions; for instance, there is a construction of Ammann, Grünbaum, and Shephard of eight tiles in ${\bf Z}^2$ which tile aperiodically. Recently, Rachel and I constructed a pair of tiles (in a product ${\bf Z}^2 \times G_0$ of the two-dimensional lattice with a finite abelian group $G_0$) that tiled a periodic subset of that group aperiodically (in fact we could even make the tiling question logically undecidable in ZFC).
Our main result is then
Theorem 3 Both the discrete and continuous periodic tiling conjectures fail for sufficiently large $d$. Also, there is a finite abelian group $G_0$ such that the analogue of the discrete periodic tiling conjecture for ${\bf Z}^2 \times G_0$ is false.
This suggests that the techniques used to prove the discrete periodic conjecture in ${\bf Z}^2$ are already close to the limit of their applicability, as they cannot handle even virtually two-dimensional discrete abelian groups such as ${\bf Z}^2 \times G_0$. The main difficulty is in constructing the counterexample in the ${\bf Z}^2 \times G_0$ setting.
The approach starts by adapting some of the methods of a previous paper of Rachel and myself. The first step is to make the problem easier to solve by disproving a “multiple periodic tiling conjecture” instead of the traditional periodic tiling conjecture. As stated, Theorem 3 asserts the existence of a “tiling equation” $A \oplus F = E$ (where one should think of $F$ and $E$ as given, and the tiling set $A$ is unknown), which admits solutions, all of which are non-periodic. It turns out that it is enough to instead assert the existence of a system $$A \oplus F^{(m)} = E^{(m)}, \quad m = 1, \dots, M$$ of tiling equations with a common unknown tiling set $A$, which admits solutions, all of which are non-periodic.

It is convenient to replace sets by functions, so that this tiling language can be translated to a more familiar language, namely the language of (certain types of) functional equations. The key point here is that the tiling equation $A \oplus (\{0\} \times H) = G \times H$, for an unknown set $A \subset G \times H$, is precisely the assertion that $A$ is the graph $\{ (x, f(x)) : x \in G \}$ of some arbitrary function $f: G \to H$, so that tiling equations can be used to encode functional equations on such $f$.
The non-periodic behaviour that we ended up trying to capture was that of a certain “$p$-adically structured function” $f_p: {\bf Z} \to {\bf Z}/p{\bf Z}$ associated to some fixed and sufficiently large prime $p$ (in fact any sufficiently large prime would suffice for our arguments), defined by the formula $$f_p(n) := \frac{n}{p^{\nu_p(n)}} \hbox{ mod } p$$ for non-zero integers $n$, where $p^{\nu_p(n)}$ is the largest power of $p$ dividing $n$ (thus $f_p(n)$ is the final non-zero digit of the base $p$ expansion of $n$), with (say) the convention $f_p(0) := 0$.
It turns out that we cannot describe this one-dimensional non-periodic function directly via tiling equations. However, we can describe two-dimensional non-periodic functions such as $(n,m) \mapsto f_p(An+Bm)$ for some coefficients $A, B$ via a suitable system of tiling equations. A typical such function exhibits a self-similar pattern along rows, columns, and diagonals, as depicted in the announcement.
A feature of this function is that when one restricts to a row or diagonal of such a function, the resulting one-dimensional function exhibits “$p$-adic structure” in the sense that it behaves like a rescaled version of $f_p$; see the announcement for a precise version of this statement. It turns out that the converse is essentially true: after excluding some degenerate solutions in which the function is constant along one or more of the columns, all two-dimensional functions which exhibit $p$-adic structure along (non-vertical) lines must behave like one of the functions $(n,m) \mapsto f_p(An+Bm)$ mentioned earlier, and in particular are non-periodic. The proof of this result is strongly reminiscent of the type of reasoning needed to solve a Sudoku puzzle, and so we have adopted some Sudoku-like terminology in our arguments to provide intuition and visuals. One key step is to perform a shear transformation to the puzzle so that many of the rows become constant, and then to perform a “Tetris” move of eliminating the constant rows to arrive at a secondary Sudoku puzzle, which one then analyzes in turn (both moves are illustrated with examples in the announcement). It is the iteration of this procedure that ultimately generates the non-periodic $p$-adic structure.
Let $M_{n \times m}({\bf Z})$ denote the space of $n \times m$ matrices with integer entries, and let $GL_n({\bf Z})$ be the group of invertible $n \times n$ matrices with integer entries. The Smith normal form takes an arbitrary matrix $A \in M_{n \times m}({\bf Z})$ and factorises it as $A = UDV$, where $U \in GL_n({\bf Z})$, $V \in GL_m({\bf Z})$, and $D$ is a rectangular diagonal matrix, by which we mean that the principal $\min(n,m) \times \min(n,m)$ minor is diagonal, with all other entries zero. Furthermore the diagonal entries of $D$ are $d_1, d_2, \dots, d_r, 0, \dots, 0$ for some $0 \leq r \leq \min(n,m)$ (which is also the rank of $A$), with the numbers $d_1, \dots, d_r$ (known as the invariant factors) being positive integers with $d_1 | d_2 | \dots | d_r$. The invariant factors are uniquely determined; but there can be some freedom to modify the invertible matrices $U, V$. The Smith normal form can be computed easily; for instance, in SAGE, it can be computed by calling the smith_form() function from the matrix class. The Smith normal form is also available for other principal ideal domains than the integers, but we will only be focused on the integer case here. For the purposes of this post, we will view the Smith normal form as a primitive operation on matrices that can be invoked as a “black box”.
In this post I would like to record how to use the Smith normal form to computationally manipulate two closely related classes of objects:
- Subgroups $\Gamma \leq {\bf Z}^n$ of a standard lattice ${\bf Z}^n$ (or lattice subgroups for short);
- Closed subgroups $H \leq ({\bf R}/{\bf Z})^n$ of a standard torus $({\bf R}/{\bf Z})^n$ (or closed torus subgroups for short).
The above two classes of objects are isomorphic to each other by Pontryagin duality: if $\Gamma \leq {\bf Z}^n$ is a lattice subgroup, then the orthogonal complement $$\Gamma^\perp := \{ y \in ({\bf R}/{\bf Z})^n : x \cdot y = 0 \hbox{ for all } x \in \Gamma \}$$ is a closed torus subgroup, and conversely the orthogonal complement $$H^\perp := \{ x \in {\bf Z}^n : x \cdot y = 0 \hbox{ for all } y \in H \}$$ of a closed torus subgroup $H$ is a lattice subgroup.

Example 1 The orthogonal complement of the lattice subgroup $2{\bf Z} \times \{0\} \leq {\bf Z}^2$ is the closed torus subgroup $\{0, \frac{1}{2}\} \times ({\bf R}/{\bf Z}) \leq ({\bf R}/{\bf Z})^2$, and conversely.
Let us focus first on lattice subgroups $\Gamma \leq {\bf Z}^n$. As all such subgroups are finitely generated abelian groups, one way to describe a lattice subgroup is to specify a set $v_1, \dots, v_k$ of generators of $\Gamma$. Equivalently, we have $$\Gamma = A {\bf Z}^k,$$ where $A \in M_{n \times k}({\bf Z})$ is the matrix whose columns are $v_1, \dots, v_k$.
Example 2 Let $\Gamma \leq {\bf Z}^3$ be the lattice subgroup generated by $(1,2,3)$, $(2,4,6)$, $(1,2,5)$, thus $\Gamma = A {\bf Z}^3$ with $A := \begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 3 & 6 & 5 \end{pmatrix}$. A Smith normal form for $A$ is given by $$A = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 0 & 1 \\ 3 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix},$$ so $\Gamma$ is a rank two lattice with a basis of $(1,2,3)$ and $(0,0,2)$ (and the invariant factors are $1$ and $2$). The trimmed representation is $$\Gamma = \begin{pmatrix} 1 & 0 \\ 2 & 0 \\ 3 & 2 \end{pmatrix} {\bf Z}^2.$$ There are other Smith normal forms for $A$, giving slightly different representations here, but the rank and invariant factors will always be the same.
By the above discussion we can represent a lattice subgroup $\Gamma = A {\bf Z}^k$ by a matrix $A \in M_{n \times k}({\bf Z})$ for some $k$; this representation is not unique, but we will address this issue shortly. For now, we focus on the question of how to use such data representations of subgroups to perform basic operations on lattice subgroups. There are some operations that are very easy to perform using this data representation:
- (Applying a linear transformation) If $T \in M_{n' \times n}({\bf Z})$, so that $T$ is also a linear transformation from ${\bf Z}^n$ to ${\bf Z}^{n'}$, then $T$ maps lattice subgroups to lattice subgroups, and clearly maps the lattice subgroup $A {\bf Z}^k$ to $(TA) {\bf Z}^k$ for any $A \in M_{n \times k}({\bf Z})$.
- (Sum) Given two lattice subgroups $A_1 {\bf Z}^{k_1}, A_2 {\bf Z}^{k_2} \leq {\bf Z}^n$ for some $A_1 \in M_{n \times k_1}({\bf Z})$, $A_2 \in M_{n \times k_2}({\bf Z})$, the sum $A_1 {\bf Z}^{k_1} + A_2 {\bf Z}^{k_2}$ is equal to the lattice subgroup $(A_1\ A_2) {\bf Z}^{k_1+k_2}$, where $(A_1\ A_2) \in M_{n \times (k_1+k_2)}({\bf Z})$ is the matrix formed by concatenating the columns of $A_1$ with the columns of $A_2$.
- (Direct sum) Given two lattice subgroups $A_1 {\bf Z}^{k_1} \leq {\bf Z}^{n_1}$, $A_2 {\bf Z}^{k_2} \leq {\bf Z}^{n_2}$, the direct sum $A_1 {\bf Z}^{k_1} \oplus A_2 {\bf Z}^{k_2} \leq {\bf Z}^{n_1+n_2}$ is equal to the lattice subgroup $(A_1 \oplus A_2) {\bf Z}^{k_1+k_2}$, where $A_1 \oplus A_2 \in M_{(n_1+n_2) \times (k_1+k_2)}({\bf Z})$ is the block matrix formed by taking the direct sum of $A_1$ and $A_2$.
One can also use Smith normal form to detect when one lattice subgroup $B {\bf Z}^{l}$ is a subgroup of another lattice subgroup $A {\bf Z}^{k}$. Using the Smith normal form factorization $A = UDV$, with invariant factors $d_1 | d_2 | \dots | d_r$, the relation $B {\bf Z}^{l} \leq A {\bf Z}^{k}$ is equivalent after some manipulation to the requirement that, for each $1 \leq i \leq r$, every entry of the $i$-th row of $U^{-1} B$ is divisible by $d_i$, and that all rows of $U^{-1} B$ below the $r$-th row vanish identically.
Example 3 To test whether the lattice subgroup $\Gamma'$ generated by $(1,2,1)$ and $(2,4,5)$ is contained in the lattice subgroup $\Gamma = A {\bf Z}^3$ from Example 2, we write $\Gamma'$ as $B {\bf Z}^2$ with $B := \begin{pmatrix} 1 & 2 \\ 2 & 4 \\ 1 & 5 \end{pmatrix}$, and observe that $$U^{-1} B = \begin{pmatrix} 1 & 2 \\ -2 & -1 \\ 0 & 0 \end{pmatrix}.$$ The first row is of course divisible by $1$, and the last row vanishes as required, but the second row is not divisible by $2$, so $\Gamma'$ is not contained in $\Gamma$ (but $2 \Gamma'$ is); a similar computation also shows that $3 \Gamma$ is conversely contained in $\Gamma'$.
One can now test whether two lattice subgroups $A_1 {\bf Z}^{k_1}, A_2 {\bf Z}^{k_2}$ are equal by testing whether $A_1 {\bf Z}^{k_1} \leq A_2 {\bf Z}^{k_2}$ and $A_2 {\bf Z}^{k_2} \leq A_1 {\bf Z}^{k_1}$ simultaneously hold (there may be more efficient ways to do this, but this is already computationally manageable in many applications). This in principle addresses the issue of non-uniqueness of representation of a subgroup $\Gamma$ in the form $A {\bf Z}^k$.
Next, we consider the question of representing the intersection $\Gamma_1 \cap \Gamma_2$ of two subgroups $\Gamma_1 = A_1 {\bf Z}^{k_1}, \Gamma_2 = A_2 {\bf Z}^{k_2}$ in the form $\Gamma_1 \cap \Gamma_2 = A {\bf Z}^k$ for some $k$ and $A \in M_{n \times k}({\bf Z})$. We can write $$\Gamma_1 \cap \Gamma_2 = \{ A_1 x : x \in {\bf Z}^{k_1};\ A_1 x = A_2 y \hbox{ for some } y \in {\bf Z}^{k_2} \},$$ so the task reduces to computing the kernel of the concatenated matrix $(A_1\ {-A_2}) \in M_{n \times (k_1+k_2)}({\bf Z})$. This kernel can be read off from a Smith normal form $(A_1\ {-A_2}) = UDV$: since the kernel of $D$ is spanned by the last $k_1+k_2-r$ standard basis vectors, the kernel of $(A_1\ {-A_2})$ is generated by the last $k_1+k_2-r$ columns of $V^{-1}$.
Example 4 With the lattice $\Gamma$ from Example 2, we shall compute the intersection of $\Gamma$ with the subgroup ${\bf Z}^2 \times \{0\}$, which one can also write as $A_2 {\bf Z}^2$ with $A_2 := \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}$. Using the trimmed representation $\Gamma = A_1 {\bf Z}^2$ with $A_1 := \begin{pmatrix} 1 & 0 \\ 2 & 0 \\ 3 & 2 \end{pmatrix}$, we obtain a Smith normal form of the concatenated matrix $$(A_1\ {-A_2}) = \begin{pmatrix} 1 & 0 & -1 & 0 \\ 2 & 0 & 0 & -1 \\ 3 & 2 & 0 & 0 \end{pmatrix},$$ from which one computes that its kernel is the rank one lattice generated by $(2, -3, 2, 4)$. Applying $A_1$ to the first two coordinates $(2, -3)$ of this generator, we conclude that the intersection of $\Gamma$ with ${\bf Z}^2 \times \{0\}$ is the rank one subgroup generated by $$A_1 \begin{pmatrix} 2 \\ -3 \end{pmatrix} = (2, 4, 0).$$
A similar calculation allows one to represent the pullback $T^{-1}(A {\bf Z}^k) \leq {\bf Z}^m$ of a subgroup $A {\bf Z}^k \leq {\bf Z}^n$ via a linear transformation $T \in M_{n \times m}({\bf Z})$, since $$T^{-1}(A {\bf Z}^k) = \{ x \in {\bf Z}^m : Tx = Ay \hbox{ for some } y \in {\bf Z}^k \}$$ is nothing more than the projection to the first $m$ coordinates of the kernel of the concatenated matrix $(T\ {-A})$.
Among other things, this allows one to describe lattices given by systems of linear equations and congruences in the $A {\bf Z}^k$ format. Indeed, the set of lattice vectors $x \in {\bf Z}^n$ that solve the system of congruences $$v_i \cdot x \equiv 0 \pmod{q_i}, \quad i = 1, \dots, l$$ and linear equations $$w_j \cdot x = 0, \quad j = 1, \dots, l'$$ is the pullback of the lattice subgroup $q_1 {\bf Z} \times \dots \times q_l {\bf Z} \times \{0\}^{l'}$ under the linear transformation $x \mapsto (v_1 \cdot x, \dots, v_l \cdot x, w_1 \cdot x, \dots, w_{l'} \cdot x)$. In the other direction, the containment criterion coming from the Smith normal form (specialized to a single column vector) converts any representation $A {\bf Z}^k$ into such a system of congruences and identities:
Example 5 With the lattice subgroup $\Gamma$ from Example 2, we have $$U^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ -3 & 0 & 1 \\ -2 & 1 & 0 \end{pmatrix},$$ and so $\Gamma$ consists of those triples $(x_1, x_2, x_3)$ which obey the (redundant) congruence $$x_1 \equiv 0 \pmod 1,$$ the congruence $$-3 x_1 + x_3 \equiv 0 \pmod 2,$$ and the identity $$-2 x_1 + x_2 = 0.$$
Conversely, one can use the above pullback procedure to convert this system of congruences and identities back into a form $A {\bf Z}^k$ (though depending on which Smith normal form one chooses, the end result may be a different representation of the same lattice subgroup $\Gamma$).
Now we apply Pontryagin duality. We claim the identity $$(A {\bf Z}^k)^\perp = \{ y \in ({\bf R}/{\bf Z})^n : A^T y = 0 \}$$ for any $A \in M_{n \times k}({\bf Z})$ (where $A^T y$ is computed in $({\bf R}/{\bf Z})^k$); that is to say, the orthogonal complement of $A {\bf Z}^k$ is cut out by the torus equations given by the rows of $A^T$. In particular, from a Smith normal form $A = UDV$ one can compute the orthogonal complement explicitly as $$(A {\bf Z}^k)^\perp = (U^T)^{-1} \left( \left(\tfrac{1}{d_1} {\bf Z}/{\bf Z}\right) \times \dots \times \left(\tfrac{1}{d_r} {\bf Z}/{\bf Z}\right) \times ({\bf R}/{\bf Z})^{n-r} \right).$$
Example 6 The orthogonal complement of the lattice subgroup $\Gamma$ from Example 2 is the closed torus subgroup $$\Gamma^\perp = \{ (y_1,y_2,y_3) \in ({\bf R}/{\bf Z})^3 : y_1 + 2y_2 + 3y_3 = 2y_1 + 4y_2 + 6y_3 = y_1 + 2y_2 + 5y_3 = 0 \};$$ using the trimmed representation of $\Gamma$, one can simplify this a little to $$\Gamma^\perp = \{ (y_1,y_2,y_3) \in ({\bf R}/{\bf Z})^3 : y_1 + 2y_2 + 3y_3 = 2y_3 = 0 \},$$ and one can also write this as the image of the group $({\bf R}/{\bf Z}) \times ({\bf Z}/2{\bf Z})$ under the torus isomorphism (onto its image) $(t, c) \mapsto (-2t - \frac{3c}{2}, t, \frac{c}{2})$. In other words, one can write $$\Gamma^\perp = \left\{ \left( -2t - \frac{3c}{2}, t, \frac{c}{2} \right) : t \in {\bf R}/{\bf Z}, c \in {\bf Z}/2{\bf Z} \right\},$$ so that $\Gamma^\perp$ is isomorphic to $({\bf R}/{\bf Z}) \times ({\bf Z}/2{\bf Z})$.
We can now dualize all of the previous computable operations on subgroups of ${\bf Z}^n$ to produce computable operations on closed subgroups of $({\bf R}/{\bf Z})^n$. For instance:
- To form the intersection or sum of two closed torus subgroups $H_1, H_2 \leq ({\bf R}/{\bf Z})^n$, use the identities $$H_1 \cap H_2 = (H_1^\perp + H_2^\perp)^\perp$$ and $$H_1 + H_2 = (H_1^\perp \cap H_2^\perp)^\perp$$ and then calculate the sum or intersection of the lattice subgroups $H_1^\perp, H_2^\perp$ by the previous methods. Similarly, the operation of direct sum of two closed torus subgroups dualises to the operation of direct sum of two lattice subgroups.
- To determine whether one closed torus subgroup $H_1$ is contained in (or equal to) another closed torus subgroup $H_2$, simply use the preceding methods to check whether the lattice subgroup $H_2^\perp$ is contained in (or equal to) the lattice subgroup $H_1^\perp$.
- To compute the pull back $T^{-1}(H) \leq ({\bf R}/{\bf Z})^m$ of a closed torus subgroup $H \leq ({\bf R}/{\bf Z})^n$ via a linear transformation $T \in M_{n \times m}({\bf Z})$ (which induces a map from $({\bf R}/{\bf Z})^m$ to $({\bf R}/{\bf Z})^n$), use the identity $$T^{-1}(H) = (T^T(H^\perp))^\perp.$$ Similarly, to compute the image $T(H) \leq ({\bf R}/{\bf Z})^n$ of a closed torus subgroup $H \leq ({\bf R}/{\bf Z})^m$, use the identity $$T(H) = ((T^T)^{-1}(H^\perp))^\perp.$$
Example 7 Suppose one wants to compute the sum of the closed torus subgroup $\Gamma^\perp$ from Example 6 with the closed torus subgroup $({\bf Z}^2 \times \{0\})^\perp = \{0\} \times \{0\} \times ({\bf R}/{\bf Z})$. This latter group is the orthogonal complement of the lattice subgroup ${\bf Z}^2 \times \{0\}$ considered in Example 4. Thus we have $$\Gamma^\perp + ({\bf Z}^2 \times \{0\})^\perp = (\Gamma \cap ({\bf Z}^2 \times \{0\}))^\perp,$$ and by Example 4 the latter intersection is the rank one subgroup generated by $(2,4,0)$; we thus have $$\Gamma^\perp + ({\bf Z}^2 \times \{0\})^\perp = \{ (y_1,y_2,y_3) \in ({\bf R}/{\bf Z})^3 : 2y_1 + 4y_2 = 0 \}.$$