Tamar Ziegler and I have just uploaded to the arXiv our paper “Infinite partial sumsets in the primes“. This is a short paper inspired by a recent result of Kra, Moreira, Richter, and Robertson (discussed for instance in this Quanta article from last December) showing that for any set {A} of natural numbers of positive upper density, there exists a sequence {b_1 < b_2 < b_3 < \dots} of natural numbers and a shift {t} such that {b_i + b_j + t \in A} for all {i<j} (this answers a question of Erdős). In view of the “transference principle“, it is then plausible to ask whether the same result holds if {A} is replaced by the primes. We can show the following results:

Theorem 1
  • (i) If the Hardy-Littlewood prime tuples conjecture (or the weaker conjecture of Dickson) is true, then there exists an increasing sequence {b_1 < b_2 < b_3 < \dots} of primes such that {b_i + b_j + 1} is prime for all {i < j}.
  • (ii) Unconditionally, there exist increasing sequences {a_1 < a_2 < \dots} and {b_1 < b_2 < \dots} of natural numbers such that {a_i + b_j} is prime for all {i<j}.
  • (iii) These conclusions fail if “prime” is replaced by “positive (relative) density subset of the primes” (even if the density is equal to 1).

We remark that it was shown by Balog that there (unconditionally) exist arbitrarily long but finite sequences {b_1 < \dots < b_k} of primes such that {b_i + b_j + 1} is prime for all {i < j \leq k}. (This result can also be recovered from the later results of Ben Green, myself, and Tamar Ziegler.) Also, it had previously been shown by Granville that on the Hardy-Littlewood prime tuples conjecture, there existed increasing sequences {a_1 < a_2 < \dots} and {b_1 < b_2 < \dots} of natural numbers such that {a_i+b_j} is prime for all {i,j}.

The conclusion of (i) is stronger than that of (ii) (which is of course consistent with the former being conditional and the latter unconditional). The conclusion (ii) also implies the well-known theorem of Maynard that for any given {k}, there exist infinitely many {k}-tuples of primes of bounded diameter, and indeed our proof of (ii) uses the same “Maynard sieve” that powers the proof of that theorem (though we use a formulation of that sieve closer to that in this blog post of mine). Indeed, the failure of (iii) basically arises from the failure of Maynard’s theorem for dense subsets of primes, simply by removing those clusters of primes that are unusually closely spaced.

Our proof of (i) was initially inspired by the topological dynamics methods used by Kra, Moreira, Richter, and Robertson, but we managed to condense it to a purely elementary argument (taking up only half a page) that makes no reference to topological dynamics and builds up the sequence {b_1 < b_2 < \dots} recursively by repeated application of the prime tuples conjecture.

The proof of (ii) takes up the majority of the paper. It is easiest to phrase the argument in terms of “prime-producing tuples” – tuples {(h_1,\dots,h_k)} for which there are infinitely many {n} with {n+h_1,\dots,n+h_k} all prime. Maynard’s theorem is equivalent to the existence of arbitrarily long prime-producing tuples; our theorem is equivalent to the stronger assertion that there exists an infinite sequence {h_1 < h_2 < \dots} such that every initial segment {(h_1,\dots,h_k)} is prime-producing. The main new tool for achieving this is the following cute measure-theoretic lemma of Bergelson:

Lemma 2 (Bergelson intersectivity lemma) Let {E_1,E_2,\dots} be subsets of a probability space {(X,\mu)} of measure uniformly bounded away from zero, thus {\inf_i \mu(E_i) > 0}. Then there exists a subsequence {E_{i_1}, E_{i_2}, \dots} such that

\displaystyle  \mu(E_{i_1} \cap \dots \cap E_{i_k} ) > 0

for all {k}.

This lemma has a short proof, though not an entirely obvious one. Firstly, by deleting a null set from {X}, one can assume that all finite intersections {E_{i_1} \cap \dots \cap E_{i_k}} are either positive measure or empty. Secondly, a routine application of Fatou’s lemma shows that the maximal function {\limsup_N \frac{1}{N} \sum_{i=1}^N 1_{E_i}} has a positive integral, hence must be positive at some point {x_0}. In particular {x_0} lies in infinitely many of the {E_i}, so there is a subsequence {E_{i_1}, E_{i_2}, \dots} whose finite intersections all contain {x_0}, and thus have positive measure as desired by the previous reduction.

It turns out that one cannot quite combine the standard Maynard sieve with the intersectivity lemma because the events {E_i} that show up (which roughly correspond to the event that {n + h_i} is prime for some random number {n} (with a well-chosen probability distribution) and some shift {h_i}) have their probability going to zero, rather than being uniformly bounded from below. To get around this, we borrow an idea from a paper of Banks, Freiberg, and Maynard, and group the shifts {h_i} into various clusters {h_{i,1},\dots,h_{i,J_i}}, chosen in such a way that the probability that at least one of {n+h_{i,1},\dots,n+h_{i,J_i}} is prime is bounded uniformly from below. One then applies the Bergelson intersectivity lemma to those events and uses many applications of the pigeonhole principle to conclude.
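As a small numerical illustration of the notion of a prime-producing tuple (this is not from the paper, just a toy check), the following Python snippet lists the first few {n} for which the tuple {(0,2,6)} produces three simultaneous primes; of course, whether this particular tuple is actually prime-producing (i.e., whether there are infinitely many such {n}) is conjectural, being of comparable strength to the twin prime conjecture.

```python
from sympy import isprime

def produces_primes(n, shifts):
    """Check whether n + h is prime for every shift h in the tuple."""
    return all(isprime(n + h) for h in shifts)

h_tuple = (0, 2, 6)  # an admissible tuple; its prime-producing status is conjectural
hits = [n for n in range(2, 10_000) if produces_primes(n, h_tuple)]
print(hits[:10])  # 5, 11, 17, 41, ... (each such n gives primes n, n+2, n+6)
```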

Over the last few years, I have served on a committee of the National Academy of Sciences to produce some posters and other related media to showcase twenty-first century mathematics and its applications in the real world, suitable for display in classrooms or math departments. Our posters (together with some associated commentary, webinars on related topics, and even a whimsical “comic“) are now available for download here.

This post is an unofficial sequel to one of my first blog posts from 2007, which was entitled “Quantum mechanics and Tomb Raider“.

One of the oldest and most famous allegories is Plato’s allegory of the cave. This allegory centers around a group of people chained to a wall in a cave, who cannot see themselves or each other, but only the two-dimensional shadows of themselves cast on the wall in front of them by some light source they cannot directly see. Because of this, they identify reality with this two-dimensional representation, and have significant conceptual difficulties in trying to view themselves (or the world as a whole) as three-dimensional, until they are freed from the cave and able to venture into the sunlight.

There is a similar conceptual difficulty when trying to understand Einstein’s theory of special relativity (and more so for general relativity, but let us focus on special relativity for now). We are very much accustomed to thinking of reality as a three-dimensional space endowed with a Euclidean geometry that we traverse through in time, but in order to have the clearest view of the universe of special relativity it is better to think of reality instead as a four-dimensional spacetime endowed with a Minkowski geometry, which mathematically is similar to a (four-dimensional) Euclidean space but with a crucial change of sign in the underlying metric. Indeed, whereas the distance {ds} between two points in Euclidean space {{\bf R}^3} is given by the three-dimensional Pythagorean theorem

\displaystyle  ds^2 = dx^2 + dy^2 + dz^2

under some standard Cartesian coordinate system {(x,y,z)} of that space, and the distance {ds} in a four-dimensional Euclidean space {{\bf R}^4} would be similarly given by

\displaystyle  ds^2 = dx^2 + dy^2 + dz^2 + du^2

under a standard four-dimensional Cartesian coordinate system {(x,y,z,u)}, the spacetime interval {ds} in Minkowski space is given by

\displaystyle  ds^2 = dx^2 + dy^2 + dz^2 - c^2 dt^2

(though in many texts the opposite sign convention {ds^2 = -dx^2 -dy^2 - dz^2 + c^2dt^2} is preferred) in spacetime coordinates {(x,y,z,t)}, where {c} is the speed of light. The geometry of Minkowski space is then quite similar algebraically to the geometry of Euclidean space (with the sign change replacing the traditional trigonometric functions {\sin, \cos, \tan}, etc. by their hyperbolic counterparts {\sinh, \cosh, \tanh}, and with various factors involving “{c}” inserted in the formulae), but also has some qualitative differences to Euclidean space, most notably a causality structure connected to light cones that has no obvious counterpart in Euclidean space.
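As a side remark (my own numerical aside, not from any particular text), the analogy can be made concrete: the Minkowski interval is preserved by hyperbolic rotations (Lorentz boosts) in the {(x,ct)} plane in exactly the way that Euclidean distance is preserved by ordinary rotations. In the sketch below the rapidity and the displacement are arbitrary choices.

```python
import math

c = 1.0              # speed of light (arbitrary units)
phi = 0.7            # rapidity of the boost (arbitrary)
dx, dt = 3.0, 2.0    # an arbitrary displacement (suppressing the y and z directions)

# A Lorentz boost acts as a hyperbolic rotation in the (x, ct) plane.
dx_new = math.cosh(phi) * dx - math.sinh(phi) * (c * dt)
cdt_new = -math.sinh(phi) * dx + math.cosh(phi) * (c * dt)

print(dx**2 - (c * dt)**2)       # spacetime interval before the boost
print(dx_new**2 - cdt_new**2)    # the same value (up to rounding) after the boost
```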

That said, the analogy between Minkowski space and four-dimensional Euclidean space is strong enough that it serves as a useful conceptual aid when first learning special relativity; for instance the excellent introductory text “Spacetime physics” by Taylor and Wheeler very much adopts this view. On the other hand, this analogy doesn’t directly address the conceptual problem mentioned earlier of viewing reality as a four-dimensional spacetime in the first place, rather than as a three-dimensional space that objects move around in as time progresses. Of course, part of the issue is that we aren’t good at directly visualizing four dimensions in the first place. This latter problem can at least be easily addressed by removing one or two spatial dimensions from this framework – and indeed many relativity texts start with the simplified setting of only having one spatial dimension, so that spacetime becomes two-dimensional and can be depicted with relative ease by spacetime diagrams – but still there is conceptual resistance to the idea of treating time as another spatial dimension, since we clearly cannot “move around” in time as freely as we can in space, nor do we seem able to easily “rotate” between the spatial and temporal axes, the way that we can between the three coordinate axes of Euclidean space.

With this in mind, I thought it might be worth attempting a Plato-type allegory to reconcile the spatial and spacetime views of reality, in a way that can be used to describe (analogues of) some of the less intuitive features of relativity, such as time dilation, length contraction, and the relativity of simultaneity. I have (somewhat whimsically) decided to place this allegory in a Tolkienesque fantasy world (similarly to how my previous allegory to describe quantum mechanics was phrased in a world based on the computer game “Tomb Raider”). This is something of an experiment, and (like any other analogy) the allegory will not be able to perfectly capture every aspect of the phenomenon it is trying to represent, so any feedback to improve the allegory would be appreciated.


If {\lambda>0}, a Poisson random variable {{\bf Poisson}(\lambda)} with mean {\lambda} is a random variable taking values in the natural numbers with probability distribution

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) = k) = e^{-\lambda} \frac{\lambda^k}{k!}.

One is often interested in bounding upper tail probabilities

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u))

for {u \geq 0}, or lower tail probabilities

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u))

for {-1 < u \leq 0}. A standard tool for this is Bennett’s inequality:

Proposition 1 (Bennett’s inequality) One has

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq \exp(-\lambda h(u))

for {u \geq 0} and

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u)) \leq \exp(-\lambda h(u))

for {-1 < u \leq 0}, where

\displaystyle  h(u) := (1+u) \log(1+u) - u.

From the Taylor expansion {h(u) = \frac{u^2}{2} + O(u^3)} for {u=O(1)} we conclude Gaussian type tail bounds in the regime {u = o(1)} (and in particular when {u = O(1/\sqrt{\lambda})}), in the spirit of the Chernoff, Bernstein, and Hoeffding inequalities; but in the regime where {u} is large and positive one obtains a slight gain over these other classical bounds (of {\exp(- \lambda u \log u)} type, rather than {\exp(-\lambda u)}).

Proof: We use the exponential moment method. For any {t \geq 0}, we have from Markov’s inequality that

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq e^{-t \lambda(1+u)} {\bf E} \exp( t {\bf Poisson}(\lambda) ).

A standard computation shows that the moment generating function of the Poisson distribution is given by

\displaystyle  {\bf E} \exp( t {\bf Poisson}(\lambda) ) = \exp( (e^t - 1) \lambda )

and hence

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq \exp( (e^t - 1)\lambda - t \lambda(1+u) ).

For {u \geq 0}, the right-hand side is optimized by setting {t = \log(1+u)} (this is where the derivative {\lambda e^t - \lambda(1+u)} of the exponent vanishes), in which case the right-hand side simplifies to {\exp(-\lambda h(u))}. This proves the first inequality; the second inequality is proven similarly (but now {u} and {t} are non-positive rather than non-negative). \Box
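As a quick numerical sanity check of Proposition 1 (not part of the original argument; it assumes the scipy library is available for the exact Poisson tail probabilities):

```python
import math
from scipy.stats import poisson

def h(u):
    return (1 + u) * math.log(1 + u) - u

lam = 40.0  # an arbitrary mean
for u in (0.1, 0.5, 1.0, -0.1, -0.5):  # arbitrary test values
    if u >= 0:
        exact = poisson.sf(math.ceil(lam * (1 + u)) - 1, lam)   # P(Poisson(lam) >= lam(1+u))
    else:
        exact = poisson.cdf(math.floor(lam * (1 + u)), lam)     # P(Poisson(lam) <= lam(1+u))
    print(u, exact, math.exp(-lam * h(u)), exact <= math.exp(-lam * h(u)))
```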

Remark 2 Bennett’s inequality also applies for (suitably normalized) sums of bounded independent random variables. In some cases there are direct comparison inequalities available to relate those variables to the Poisson case. For instance, suppose {S = X_1 + \dots + X_n} is the sum of independent Boolean variables {X_1,\dots,X_n \in \{0,1\}} of total mean {\sum_{j=1}^n {\bf E} X_j = \lambda} and with {\sup_i {\bf P}(X_i = 1) \leq \varepsilon} for some {0 < \varepsilon < 1}. Then for any natural number {k}, we have

\displaystyle  {\bf P}(S=k) = \sum_{1 \leq i_1 < \dots < i_k \leq n} {\bf P}(X_{i_1}=1) \dots {\bf P}(X_{i_k}=1) \prod_{i \neq i_1,\dots,i_k} {\bf P}(X_i=0)

\displaystyle  \leq \frac{1}{k!} (\sum_{i=1}^n \frac{{\bf P}(X_i=1)}{{\bf P}(X_i=0)})^k \times \prod_{i=1}^n {\bf P}(X_i=0)

\displaystyle  \leq \frac{1}{k!} (\frac{\lambda}{1-\varepsilon})^k \prod_{i=1}^n \exp( - {\bf P}(X_i = 1))

\displaystyle  \leq e^{-\lambda} \frac{\lambda^k}{(1-\varepsilon)^k k!}

\displaystyle  \leq e^{\frac{\varepsilon}{1-\varepsilon} \lambda} {\bf P}( \mathbf{Poisson}(\frac{\lambda}{1-\varepsilon}) = k).

As such, for {\varepsilon} small, one can efficiently control the tail probabilities of {S} in terms of the tail probability of a Poisson random variable of mean close to {\lambda}; this is of course very closely related to the well known fact that the Poisson distribution emerges as the limit of sums of many independent boolean variables, each of which is non-zero with small probability. See this paper of Bentkus and this paper of Pinelis for some further useful (and less obvious) comparison inequalities of this type.
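The comparison inequality above can also be checked numerically on small examples; the following sketch (with arbitrarily chosen success probabilities, and not taken from the cited papers) computes both sides exactly for each {k}:

```python
import math
from itertools import combinations

p = [0.03, 0.05, 0.02, 0.04, 0.01, 0.05]   # arbitrary small success probabilities
lam, eps = sum(p), max(p)

def prob_sum_equals(k):
    """Exact P(X_1 + ... + X_n = k) for independent Bernoulli(p_i) variables."""
    return sum(
        math.prod(p[i] if i in idx else 1 - p[i] for i in range(len(p)))
        for idx in combinations(range(len(p)), k)
    )

def poisson_pmf(mu, k):
    return math.exp(-mu + k * math.log(mu) - math.lgamma(k + 1))

for k in range(len(p) + 1):
    lhs = prob_sum_equals(k)
    rhs = math.exp(eps * lam / (1 - eps)) * poisson_pmf(lam / (1 - eps), k)
    print(k, lhs, rhs, lhs <= rhs)
```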

In this note I wanted to record the observation that one can improve the Bennett bound by a small polynomial factor once one leaves the Gaussian regime {u = O(1/\sqrt{\lambda})}, in particular gaining a factor of {1/\sqrt{\lambda}} when {u \sim 1}. This observation is not difficult and is implicit in the literature (one can extract it for instance from the much more general results of this paper of Talagrand, and the basic idea already appears in this paper of Glynn), but I was not able to find a clean version of this statement in the literature, so I am placing it here on my blog. (But if a reader knows of a reference that basically contains the bound below, I would be happy to know of it.)

Proposition 3 (Improved Bennett’s inequality) One has

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda \min(u, u^2)}}

for {u \geq 0} and

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u)) \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda u^2 (1+u)}}

for {-1 < u \leq 0}.

Proof: We begin with the first inequality. We may assume that {u \geq 1/\sqrt{\lambda}}, since otherwise the claim follows from the usual Bennett inequality. We expand out the left-hand side as

\displaystyle  e^{-\lambda} \sum_{k \geq \lambda(1+u)} \frac{\lambda^k}{k!}.

Observe for {k \geq \lambda(1+u)} that

\displaystyle  \frac{\lambda^{k+1}}{(k+1)!} \leq \frac{1}{1+u} \frac{\lambda^{k}}{k!} .

Thus the sum is dominated by the first term times a geometric series {\sum_{j=0}^\infty \frac{1}{(1+u)^j} = 1 + \frac{1}{u}}. We can thus bound the left-hand side by

\displaystyle  \ll e^{-\lambda} (1 + \frac{1}{u}) \sup_{k \geq \lambda(1+u)} \frac{\lambda^k}{k!}.

By the Stirling approximation, this is

\displaystyle  \ll e^{-\lambda} (1 + \frac{1}{u}) \sup_{k \geq \lambda(1+u)} \frac{1}{\sqrt{k}} \frac{(e\lambda)^k}{k^k}.

The expression inside the supremum is decreasing in {k} for {k > \lambda}, thus we can bound it by

\displaystyle  \ll e^{-\lambda} (1 + \frac{1}{u}) \frac{1}{\sqrt{\lambda(1+u)}} \frac{(e\lambda)^{\lambda(1+u)}}{(\lambda(1+u))^{\lambda(1+u)}},

which simplifies to

\displaystyle  \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda \min(u, u^2)}}

after a routine calculation.

Now we turn to the second inequality. As before we may assume that {u \leq -1/\sqrt{\lambda}}. We first dispose of a degenerate case in which {\lambda(1+u) < 1}. Here the left-hand side is just

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) = 0 ) = e^{-\lambda}

and the right-hand side is comparable to

\displaystyle  e^{-\lambda} \exp( - \lambda (1+u) \log (1+u) + \lambda(1+u) ) / \sqrt{1 + \lambda u^2 (1+u)}.

Since {-\lambda(1+u) \log(1+u)} is non-negative and {0 \leq \lambda u^2(1+u) \leq \lambda(1+u) < 1}, we see that the right-hand side is {\gg e^{-\lambda}}, and the estimate holds in this case.

It remains to consider the regime where {u \leq -1/\sqrt{\lambda}} and {\lambda(1+u) \geq 1}. The left-hand side expands as

\displaystyle  e^{-\lambda} \sum_{k \leq \lambda(1+u)} \frac{\lambda^k}{k!}.

The sum is dominated by the first term times a geometric series {\sum_{j=-\infty}^0 \frac{1}{(1+u)^j} = \frac{1}{|u|}}. The maximal {k} is comparable to {\lambda(1+u)}, so we can bound the left-hand side by

\displaystyle  \ll e^{-\lambda} \frac{1}{|u|} \sup_{\lambda(1+u) \ll k \leq \lambda(1+u)} \frac{\lambda^k}{k!}.

Using the Stirling approximation as before we can bound this by

\displaystyle  \ll e^{-\lambda} \frac{1}{|u|} \frac{1}{\sqrt{\lambda(1+u)}} \frac{(e\lambda)^{\lambda(1+u)}}{(\lambda(1+u))^{\lambda(1+u)}},

which simplifies to

\displaystyle  \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda u^2 (1+u)}}

after a routine calculation. \Box

The same analysis can be reversed to show that the bounds given above are basically sharp up to constants, at least when {\lambda} (and {\lambda(1+u)}) are large.
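To see the polynomial gain concretely, here is a small comparison (again assuming scipy, and with the implied constant in Proposition 3 crudely set to {1}) of the two bounds against the exact upper tail for a few arbitrary values of {u} outside the Gaussian regime:

```python
import math
from scipy.stats import poisson

def h(u):
    return (1 + u) * math.log(1 + u) - u

lam = 100.0  # arbitrary
for u in (0.5, 1.0, 2.0):
    exact = poisson.sf(math.ceil(lam * (1 + u)) - 1, lam)      # exact P(Poisson(lam) >= lam(1+u))
    bennett = math.exp(-lam * h(u))                            # classical Bennett bound
    improved = bennett / math.sqrt(1 + lam * min(u, u * u))    # improved bound, constant set to 1
    print(u, bennett / exact, improved / exact)  # the improved bound overshoots by much less
```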

[The following information was provided to me by Geordie Williamson, who is Director of the Sydney Mathematics Research Institute – T.]

We are currently advertising two positions in math and AI:

Both positions are for three years and are based at the Sydney Mathematical Research Institute. The positions are research only, but teaching at the University of Sydney is possible if desired. The successful candidate will have considerable time and flexibility to pursue their own research program.

We are after either:

  1. excellent mathematicians with some interest in programming and modern AI;
  2. excellent computer scientists with some interest and background in mathematics, as well as an interest in using AI to attack tough problems in mathematics.

Rachel Greenfeld and I have just uploaded to the arXiv our paper “A counterexample to the periodic tiling conjecture“. This is the full version of the result I announced on this blog a few months ago, in which we disprove the periodic tiling conjecture of Grünbaum-Shephard and Lagarias-Wang. The paper took a little longer than expected to finish, due to a technical issue that we did not realize at the time of the announcement, and which required a workaround.

In more detail: the original strategy, as described in the announcement, was to build a “tiling language” that was capable of encoding a certain “{p}-adic Sudoku puzzle”, and then show that the latter type of puzzle had only non-periodic solutions if {p} was a sufficiently large prime. As it turns out, the second half of this strategy worked out, but there was an issue in the first part: our tiling language was able (using {2}-group-valued functions) to encode arbitrary boolean relationships between boolean functions, and was also able (using {{\bf Z}/p{\bf Z}}-valued functions) to encode “clock” functions such as {n \mapsto n \hbox{ mod } p} that were part of our {p}-adic Sudoku puzzle, but we were not able to make these two types of functions “talk” to each other in the way that was needed to encode the {p}-adic Sudoku puzzle (the basic problem being that if {H} is a finite abelian {2}-group then there are no non-trivial subgroups of {H \times {\bf Z}/p{\bf Z}} that are not contained in {H} or trivial in the {{\bf Z}/p{\bf Z}} direction). As a consequence, we had to replace our “{p}-adic Sudoku puzzle” by a “{2}-adic Sudoku puzzle” which basically amounts to replacing the prime {p} by a sufficiently large power of {2} (we believe {2^{10}} will suffice). This solved the encoding issue, but the analysis of the {2}-adic Sudoku puzzles was a little bit more complicated than the {p}-adic case, for the following reason. The following is a nice exercise in analysis:

Theorem 1 (Linearity in three directions implies full linearity) Let {F: {\bf R}^2 \rightarrow {\bf R}} be a smooth function which is affine-linear on every horizontal line, diagonal (line of slope {1}), and anti-diagonal (line of slope {-1}). In other words, for any {c \in {\bf R}}, the functions {x \mapsto F(x,c)}, {x \mapsto F(x,c+x)}, and {x \mapsto F(x,c-x)} are each affine functions on {{\bf R}}. Then {F} is an affine function on {{\bf R}^2}.

Indeed, the property of being affine in three directions shows that the quadratic form associated to the Hessian {\nabla^2 F(x,y)} at any given point vanishes at {(1,0)}, {(1,1)}, and {(1,-1)}, and thus must vanish everywhere. In fact the smoothness hypothesis is not necessary; we leave this as an exercise to the interested reader. The same statement turns out to be true if one replaces {{\bf R}} with the cyclic group {{\bf Z}/p{\bf Z}} as long as {p} is odd; this is the key to showing that our {p}-adic Sudoku puzzles have an (approximate) two-dimensional affine structure, which on further analysis can then be used to show that it is in fact non-periodic. However, it turns out that the corresponding claim for cyclic groups {{\bf Z}/q{\bf Z}} can fail when {q} is a sufficiently large power of {2}! In fact the general form of functions {F: ({\bf Z}/q{\bf Z})^2 \rightarrow {\bf Z}/q{\bf Z}} that are affine on every horizontal line, diagonal, and anti-diagonal takes the form

\displaystyle  F(x,y) = Ax + By + C + D \frac{q}{4} y(x-y)

for some integer coefficients {A,B,C,D}. This additional “pseudo-affine” term {D \frac{q}{4} y(x-y)} causes some additional technical complications but ultimately turns out to be manageable.
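One can verify the pseudo-affine example numerically; the following quick sketch (my own check, with {q=16} and coefficients chosen arbitrarily) confirms that {F(x,y) = \frac{q}{4} y(x-y) \hbox{ mod } q} is affine along every horizontal line, diagonal, and anti-diagonal, while failing to be affine on {({\bf Z}/q{\bf Z})^2} as a whole.

```python
q = 16  # a power of two (arbitrary choice; any sufficiently large power of 2 works)

def F(x, y):
    """The pseudo-affine term D*(q/4)*y*(x-y) mod q, taking A = B = C = 0 and D = 1."""
    return (q // 4) * y * (x - y) % q

def affine_along(slope):
    """Check that x -> F(x, c + slope*x) has vanishing second differences mod q for every c."""
    return all(
        (F(x + 2, c + slope * (x + 2)) - 2 * F(x + 1, c + slope * (x + 1)) + F(x, c + slope * x)) % q == 0
        for c in range(q) for x in range(q)
    )

# Affine on every horizontal line (slope 0), diagonal (slope 1), and anti-diagonal (slope -1):
print(affine_along(0), affine_along(1), affine_along(-1))  # True True True

# ...but not affine on (Z/qZ)^2 as a whole: the mixed second difference is nonzero.
print((F(1, 1) - F(1, 0) - F(0, 1) + F(0, 0)) % q)  # prints q/4 = 4, not 0
```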

During the writing process we also discovered that the encoding part of the proof becomes more modular and conceptual once one introduces two new definitions, that of an “expressible property” and a “weakly expressible property”. These concepts are somewhat analogous to those of {\Pi^0_0} sentences and {\Sigma^0_1} sentences in the arithmetic hierarchy, or to algebraic sets and semi-algebraic sets in real algebraic geometry. Roughly speaking, an expressible property is a property of a tuple of functions {f_w: G \rightarrow H_w}, {w \in {\mathcal W}} from an abelian group {G} to finite abelian groups {H_w}, such that the property can be expressed in terms of one or more tiling equations on the graph

\displaystyle  A := \{ (x, (f_w(x))_{w \in {\mathcal W}}): x \in G \} \subset G \times \prod_{w \in {\mathcal W}} H_w.

For instance, the property that two functions {f,g: {\bf Z} \rightarrow H} differ by a constant can be expressed in terms of the tiling equation

\displaystyle  A \oplus (\{0\} \times H^2) = {\bf Z} \times H^2

(the vertical line test), as well as

\displaystyle  A \oplus ((\{0\} \times \Delta) \cup (\{1\} \times (H^2 \backslash \Delta))) = {\bf Z} \times H^2,

where {\Delta = \{ (h,h): h \in H \}} is the diagonal subgroup of {H^2}. A weakly expressible property {P} is an existential quantification of some expressible property {P^*}, so that a tuple of functions {(f_w)_{w \in W}} obeys the property {P} if and only if there exists an extension of this tuple by some additional functions that obey the property {P^*}. It turns out that weakly expressible properties are closed under a number of useful operations, and allow us to easily construct quite complicated weakly expressible properties out of a “library” of simple weakly expressible properties, much as a complex computer program can be constructed out of simple library routines. In particular we will be able to “program” our Sudoku puzzle as a weakly expressible property.

It’s been a while since I’ve actively participated in social media outside of this blog – I was active in Google Buzz/Google+ for a while, until that service closed – but I’ve decided to try out Mathstodon, one of the servers of the open source social media software platform Mastodon. As I understand it, Mastodon functions in many ways similarly to the significantly more well-known platform Twitter, but is decentralized into a federation of servers that share content with each other but can have their own moderation rules and add-ons. For instance, the Mathstodon server has the additional feature of supporting LaTeX in its posts. Another consequence of this decentralization is that if one for some reason ends up disagreeing with the administration of the server one is in, one has the option of transferring one’s account to a different server while staying on the same platform.

I just created an account at Mathstodon and it currently has very little content, but I hope to add some soon (though I will probably not be as prolific as some other mathematicians already on that site, such as John Baez or Nalini Joshi).

In 2010, the UCLA mathematics department launched a scholarship opportunity for entering freshman students with exceptional background and promise in mathematics. This program was unfortunately suspended for a while due to technical reasons, but we are once again able to offer one scholarship each year.  The UCLA Math Undergraduate Merit Scholarship provides for full tuition, and a room and board allowance for 4 years, contingent on continued high academic performance. In addition, scholarship recipients follow an individualized accelerated program of study, as determined after consultation with UCLA faculty.   The program of study leads to a Masters degree in Mathematics in four years.

More information and an application form for the scholarship can be found on the web at:

https://ww3.math.ucla.edu/ucla-math-undergraduate-merit-scholarship/

To be considered for Fall 2023, candidates must apply for the scholarship and also for admission to UCLA on or before November 30, 2022.

Just a short post to advertise the workshop “Machine assisted proofs” that will be held on Feb 13-17 next year, here at the Institute for Pure and Applied Mathematics (IPAM); I am one of the organizers of this event together with Erika Abraham, Jeremy Avigad, Kevin Buzzard, Jordan Ellenberg, Tim Gowers, and Marijn Heule. The purpose of this event is to bring together experts in the various types of formal computer-assisted methods used to verify, discover, or otherwise assist with mathematical proofs, as well as pure mathematicians who are interested in learning about the current and future state of the art with such tools; this seems to be an opportune time to bring these communities together, given some recent high-profile applications of formal methods in pure mathematics (e.g., the liquid tensor experiment). The workshop will consist of a number of lectures from both communities, as well as a panel to discuss future directions. The workshop is open to general participants (both in person and remotely), although there is a registration process and a moderate registration fee to cover costs and to restrict the capacity to genuine applicants.

This is a spinoff from the previous post. In that post, we remarked that whenever one receives a new piece of information {E}, the prior odds {\mathop{\bf P}( H_1 ) / \mathop{\bf P}( H_0 )} between an alternative hypothesis {H_1} and a null hypothesis {H_0} are updated to the posterior odds {\mathop{\bf P}( H_1|E ) / \mathop{\bf P}( H_0|E )}, which can be computed via Bayes’ theorem by the formula

\displaystyle  \frac{\mathop{\bf P}( H_1|E )}{\mathop{\bf P}(H_0|E)} = \frac{\mathop{\bf P}(H_1)}{\mathop{\bf P}(H_0)} \times \frac{\mathop{\bf P}(E|H_1)}{\mathop{\bf P}(E|H_0)}

where {\mathop{\bf P}(E|H_1)} is the likelihood of this information {E} under the alternative hypothesis {H_1}, and {\mathop{\bf P}(E|H_0)} is the likelihood of this information {E} under the null hypothesis {H_0}. If there are no other hypotheses under consideration, then the two posterior probabilities {\mathop{\bf P}( H_1|E )}, {\mathop{\bf P}( H_0|E )} must add up to one, and so can be recovered from the posterior odds {o := \frac{\mathop{\bf P}( H_1|E )}{\mathop{\bf P}(H_0|E)}} by the formulae

\displaystyle  \mathop{\bf P}(H_1|E) = \frac{o}{1+o}; \quad \mathop{\bf P}(H_0|E) = \frac{1}{1+o}.
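In code form the update is a one-liner; the following small helper (a convenience sketch of mine, not part of the worksheet itself) multiplies the prior odds by the likelihood ratio and converts the posterior odds back to probabilities:

```python
def bayes_update(prior_h1, likelihood_e_given_h1, likelihood_e_given_h0):
    """Return (P(H0|E), P(H1|E)), assuming H0 and H1 are the only hypotheses."""
    prior_odds = prior_h1 / (1 - prior_h1)
    posterior_odds = prior_odds * (likelihood_e_given_h1 / likelihood_e_given_h0)
    return 1 / (1 + posterior_odds), posterior_odds / (1 + posterior_odds)

# Example with arbitrary numbers: prior P(H1) = 0.1 and a likelihood ratio of 3.
print(bayes_update(0.1, 0.6, 0.2))  # approximately (0.75, 0.25)
```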

This gives a straightforward way to update one’s prior probabilities, and I thought I would present it in the form of a worksheet for ease of calculation:

A PDF version of the worksheet and instructions can be found here. One can fill in this worksheet in the following order:

  1. In Box 1, one enters in the precise statement of the null hypothesis {H_0}.
  2. In Box 2, one enters in the precise statement of the alternative hypothesis {H_1}. (This step is very important! As discussed in the previous post, Bayesian calculations can become extremely inaccurate if the alternative hypothesis is vague.)
  3. In Box 3, one enters in the prior probability {\mathop{\bf P}(H_0)} (or the best estimate thereof) of the null hypothesis {H_0}.
  4. In Box 4, one enters in the prior probability {\mathop{\bf P}(H_1)} (or the best estimate thereof) of the alternative hypothesis {H_1}. If only two hypotheses are being considered, we of course have {\mathop{\bf P}(H_1) = 1 - \mathop{\bf P}(H_0)}.
  5. In Box 5, one enters in the ratio {\mathop{\bf P}(H_1)/\mathop{\bf P}(H_0)} between Box 4 and Box 3.
  6. In Box 6, one enters in the precise new information {E} that one has acquired since the prior state. (As discussed in the previous post, it is important that all relevant information {E} – both supporting and invalidating the alternative hypothesis – is reported accurately. If one cannot be certain that key information has not been withheld, then Bayesian calculations become highly unreliable.)
  7. In Box 7, one enters in the likelihood {\mathop{\bf P}(E|H_0)} (or the best estimate thereof) of the new information {E} under the null hypothesis {H_0}.
  8. In Box 8, one enters in the likelihood {\mathop{\bf P}(E|H_1)} (or the best estimate thereof) of the new information {E} under the alternative hypothesis {H_1}. (This can be difficult to compute, particularly if {H_1} is not specified precisely.)
  9. In Box 9, one enters in the ratio {\mathop{\bf P}(E|H_1)/\mathop{\bf P}(E|H_0)} between Box 8 and Box 7.
  10. In Box 10, one enters in the product of Box 5 and Box 9.
  11. (Assuming there are no other hypotheses than {H_0} and {H_1}) In Box 11, enter in {1} divided by {1} plus Box 10.
  12. (Assuming there are no other hypotheses than {H_0} and {H_1}) In Box 12, enter in Box 10 divided by {1} plus Box 10. (Alternatively, one can enter in {1} minus Box 11.)

To illustrate this procedure, let us consider a standard Bayesian update problem. Suppose that at a given point in time, {2\%} of the population is infected with COVID-19. In response to this, a company mandates COVID-19 testing of its workforce, using a cheap COVID-19 test. This test has a {20\%} chance of a false negative (testing negative when one has COVID) and a {5\%} chance of a false positive (testing positive when one does not have COVID). An employee {X} takes the mandatory test, which turns out to be positive. What is the probability that {X} actually has COVID?

We can fill out the entries in the worksheet one at a time:

  • Box 1: The null hypothesis {H_0} is that {X} does not have COVID.
  • Box 2: The alternative hypothesis {H_1} is that {X} does have COVID.
  • Box 3: In the absence of any better information, the prior probability {\mathop{\bf P}(H_0)} of the null hypothesis is {98\%}, or {0.98}.
  • Box 4: Similarly, the prior probability {\mathop{\bf P}(H_1)} of the alternative hypothesis is {2\%}, or {0.02}.
  • Box 5: The prior odds {\mathop{\bf P}(H_1)/\mathop{\bf P}(H_0)} are {0.02/0.98 \approx 0.02}.
  • Box 6: The new information {E} is that {X} has tested positive for COVID.
  • Box 7: The likelihood {\mathop{\bf P}(E|H_0)} of {E} under the null hypothesis is {5\%}, or {0.05} (the false positive rate).
  • Box 8: The likelihood {\mathop{\bf P}(E|H_1)} of {E} under the alternative is {80\%}, or {0.8} (one minus the false negative rate).
  • Box 9: The likelihood ratio {\mathop{\bf P}(E|H_1)/\mathop{\bf P}(E|H_0)} is {0.8 / 0.05 = 16}.
  • Box 10: The product of Box 5 and Box 9 is approximately {0.32}.
  • Box 11: The posterior probability {\mathop{\bf P}(H_0|E)} is approximately {1/(1+0.32) \approx 75\%}.
  • Box 12: The posterior probability {\mathop{\bf P}(H_1|E)} is approximately {0.32/(1+0.32) \approx 25\%}.

The filled worksheet looks like this:

Perhaps surprisingly, despite the positive COVID test, the employee {X} only has a {25\%} chance of actually having COVID! This is due to the relatively large false positive rate of this cheap test, and is an illustration of the base rate fallacy in statistics.
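One can verify this arithmetic directly; the short computation below just redoes Boxes 5, 9, 10, and 12 from the worksheet in exact form:

```python
prior_odds = 0.02 / 0.98            # Box 5
likelihood_ratio = 0.8 / 0.05       # Box 9 (equal to 16)
posterior_odds = prior_odds * likelihood_ratio    # Box 10 (about 0.33)
print(posterior_odds / (1 + posterior_odds))      # Box 12: about 0.246, i.e. roughly 25%
```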

We remark that if we switch the roles of the null hypothesis and alternative hypothesis, then some of the odds in the worksheet change, but the ultimate conclusions remain unchanged:

So the question of which hypothesis to designate as the null hypothesis and which one to designate as the alternative hypothesis is largely a matter of convention.

Now let us take a superficially similar situation in which a mother observes her daughter exhibiting COVID-like symptoms, to the point where she estimates the probability of her daughter having COVID at {50\%}. She then administers the same cheap COVID-19 test as before, which returns positive. What is the posterior probability of her daughter having COVID?

One can fill out the worksheet much as before, but now with the prior probability of the alternative hypothesis raised from {2\%} to {50\%} (and the prior probability of the null hypothesis dropping from {98\%} to {50\%}). One now gets that the probability that the daughter has COVID has increased all the way to {94\%}:

Thus we see that prior probabilities can make a significant impact on the posterior probabilities.

Now we use the worksheet to analyze an infamous probability puzzle, the Monty Hall problem. Let us use the formulation given in that Wikipedia page:

Problem 1 Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

For this problem, the precise formulation of the null hypothesis and the alternative hypothesis become rather important. Suppose we take the following two hypotheses:

  • Null hypothesis {H_0}: The car is behind door number 1, and no matter what door you pick, the host will randomly reveal another door that contains a goat.
  • Alternative hypothesis {H_1}: The car is behind door number 2 or 3, and no matter what door you pick, the host will randomly reveal another door that contains a goat.
Assuming the prizes are distributed randomly, we have {\mathop{\bf P}(H_0)=1/3} and {\mathop{\bf P}(H_1)=2/3}. The new information {E} is that, after door 1 is selected, door 3 is revealed and shown to be a goat. After some thought, we conclude that {\mathop{\bf P}(E|H_0)} is equal to {1/2} (the host has a fifty-fifty chance of revealing door 3 instead of door 2) but that {\mathop{\bf P}(E|H_1)} is also equal to {1/2} (if the car is behind door 2, the host must reveal door 3, whereas if the car is behind door 3, the host cannot reveal door 3). Filling in the worksheet, we see that the new information does not in fact alter the odds, and the probability that the car is not behind door 1 remains at 2/3, so it is advantageous to switch.
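For readers who prefer simulation to worksheets, here is a quick Monte Carlo check of this first formulation (a sketch of mine, with the host choosing uniformly at random among the unpicked goat doors); conditioning on door 3 being revealed, the car turns out to be behind door 2 about two thirds of the time.

```python
import random

trials, door3_revealed, car_behind_2 = 200_000, 0, 0
for _ in range(trials):
    car = random.randint(1, 3)            # prizes distributed randomly
    # We always pick door 1; the host reveals a random goat door among doors 2 and 3.
    revealed = random.choice([d for d in (2, 3) if d != car])
    if revealed == 3:
        door3_revealed += 1
        car_behind_2 += (car == 2)

print(car_behind_2 / door3_revealed)  # about 2/3, so switching is advantageous
```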

However, consider the following different set of hypotheses:

  • Null hypothesis {H'_0}: The car is behind door number 1, and if you pick the door with the car, the host will reveal another door to entice you to switch. Otherwise, the host will not reveal a door.
  • Alternative hypothesis {H'_1}: The car is behind door number 2 or 3, and if you pick the door with the car, the host will reveal another door to entice you to switch. Otherwise, the host will not reveal a door.

Here we still have {\mathop{\bf P}(H'_0)=1/3} and {\mathop{\bf P}(H'_1)=2/3}, but while {\mathop{\bf P}(E|H'_0)} remains equal to {1/2}, {\mathop{\bf P}(E|H'_1)} has dropped to zero (since if the car is not behind door 1, the host will not reveal a door). So now {\mathop{\bf P}(H'_0|E)} has increased all the way to {1}, and it is not advantageous to switch! This dramatically illustrates the importance of specifying the hypotheses precisely. The worksheet is now filled out as follows:

Finally, we consider another famous probability puzzle, the Sleeping Beauty problem. Again we quote the problem as formulated on the Wikipedia page:

Problem 2 Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Sleeping Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake:
  • If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
  • If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.
  • In either case, she will be awakened on Wednesday without interview and the experiment ends.
Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Sleeping Beauty is asked: “What is your credence now for the proposition that the coin landed heads?”

Here the situation can be confusing because there are key portions of this experiment in which the observer is unconscious, but nevertheless Bayesian probability continues to operate regardless of whether the observer is conscious. To make this issue more precise, let us assume that the awakenings mentioned in the problem always occur at 8am, so in particular at 7am, Sleeping Beauty will always be unconscious.

Here, the null and alternative hypotheses are easy to state precisely:

  • Null hypothesis {H_0}: The coin landed tails.
  • Alternative hypothesis {H_1}: The coin landed heads.

The subtle thing here is to work out what the correct prior state is (in most other applications of Bayesian probability, this state is obvious from the problem). It turns out that the most reasonable choice of prior state is “unconscious at 7am, on either Monday or Tuesday, with an equal chance of each”. (Note that whatever the outcome of the coin flip is, Sleeping Beauty will be unconscious at 7am Monday and unconscious again at 7am Tuesday, so it makes sense to give each of these two states an equal probability.) The new information is then

  • New information {E}: One hour after the prior state, Sleeping Beauty is awakened.

With this formulation, we see that {\mathop{\bf P}(H_0)=\mathop{\bf P}(H_1)=1/2}, {\mathop{\bf P}(E|H_0)=1}, and {\mathop{\bf P}(E|H_1)=1/2}, so on working through the worksheet one eventually arrives at {\mathop{\bf P}(H_1|E)=1/3}, so that Sleeping Beauty should only assign a probability of {1/3} to the event that the coin landed heads.
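The {1/3} answer can also be realized concretely by a simulation that tabulates, over many runs of the experiment, the fraction of interviews that take place after a heads flip (a small sketch of mine, not part of the original discussion):

```python
import random

heads_interviews, total_interviews = 0, 0
for _ in range(100_000):
    heads = random.random() < 0.5        # the fair coin toss
    interviews = 1 if heads else 2       # heads: Monday only; tails: Monday and Tuesday
    total_interviews += interviews
    heads_interviews += interviews if heads else 0

print(heads_interviews / total_interviews)  # about 1/3
```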

There are arguments advanced in the literature to adopt the position that {\mathop{\bf P}(H_1|E)} should instead be equal to {1/2}, but I do not see a way to interpret them in this Bayesian framework without a substantial alteration to either the notion of the prior state, or by not presenting the new information {E} properly.

If one has multiple pieces of information {E_1, E_2, \dots} that one wishes to use to update one’s priors, one can do so by filling out one copy of the worksheet for each new piece of information, or by using a multi-row version of the worksheet using such identities as

\displaystyle  \frac{\mathop{\bf P}( H_1|E_1,E_2 )}{\mathop{\bf P}(H_0|E_1,E_2)} = \frac{\mathop{\bf P}(H_1)}{\mathop{\bf P}(H_0)} \times \frac{\mathop{\bf P}(E_1|H_1)}{\mathop{\bf P}(E_1|H_0)} \times \frac{\mathop{\bf P}(E_2|H_1,E_1)}{\mathop{\bf P}(E_2|H_0,E_1)}.

We leave the details of these variants of the Bayesian update problem to the interested reader. The only thing I will note though is that if a key piece of information {E} is withheld from the person filling out the worksheet, for instance if that person relies exclusively on a news source that only reports information that supports the alternative hypothesis {H_1} and omits information that debunks it, then the outcome of the worksheet is likely to be highly inaccurate, and one should only perform a Bayesian analysis when one has a high confidence that all relevant information (both favorable and unfavorable to the alternative hypothesis) is being reported to the user.
