Kevin Ford, Ben Green, Sergei Konyagin, and I have just posted to the arXiv our preprint “Large gaps between consecutive prime numbers“. This paper concerns the “opposite” problem to that considered by the recently concluded Polymath8 project, which focused on very small values of the prime gap {p_{n+1}-p_n}. Here, we wish to consider the largest prime gap {G(X) = p_{n+1}-p_n} that one can find in the interval {[X] = \{1,\dots,X\}} as {X} goes to infinity.

Finding lower bounds on {G(X)} is more or less equivalent to locating long strings of consecutive composite numbers that are not too large compared to the length of the string. A classic (and quite well known) construction here starts with the observation that for any natural number {n}, the consecutive numbers {n!+2, n!+3,\dots,n!+n} are all composite, because each {n!+i}, {i=2,\dots,n} is divisible by some prime {p \leq n}, while being strictly larger than that prime {p}. From this and Stirling’s formula, it is not difficult to obtain the bound

\displaystyle G(X) \gg \frac{\log X}{\log\log X}. \ \ \ \ \ (1)
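
This construction is easy to play with numerically. Here is a short Python sketch (an illustration of my own, not part of the paper) that builds the run {n!+2,\dots,n!+n} and compares its length with {\log X/\log\log X}:

```python
import math

def factorial_run(n):
    """The n-1 consecutive composites n!+2, ..., n!+n."""
    f = math.factorial(n)
    return [f + i for i in range(2, n + 1)]

n = 10
run = factorial_run(n)
# each n!+i is divisible by i (for 2 <= i <= n), hence composite
assert all((math.factorial(n) + i) % i == 0 for i in range(2, n + 1))

# the run sits just below X = n! + n, so it exhibits a gap of length about n
X = math.factorial(n) + n
print(len(run), math.log(X) / math.log(math.log(X)))  # 9 vs roughly 5.6
```

Since {\log(n!) \sim n \log n} by Stirling's formula, the run length {n-1} is comparable to {\log X / \log\log X} at {X \sim n!}, which is where (1) comes from.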

 

A more efficient bound comes from the prime number theorem: there are only {(1+o(1)) \frac{X}{\log X}} primes up to {X}, so just from the pigeonhole principle one can locate a string of consecutive composite numbers up to {X} of length at least {(1-o(1)) \log X}, thus

\displaystyle G(X) \gtrsim \log X \ \ \ \ \ (2)

 

where we use {X \gtrsim Y} or {Y \lesssim X} as shorthand for {X \geq (1-o(1)) Y} or {Y \leq (1+o(1)) X}.
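
For small {X} one can of course compute {G(X)} exactly; the following quick sieve (again just an illustration) shows the maximal gap comfortably exceeding {\log X} already at modest heights:

```python
from math import log

def max_prime_gap(X):
    """Largest gap p_{n+1} - p_n between consecutive primes up to X, by a simple sieve."""
    is_prime = bytearray([1]) * (X + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(X ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, X + 1, p)))
    primes = [i for i in range(2, X + 1) if is_prime[i]]
    return max(q - p for p, q in zip(primes, primes[1:]))

for X in (10 ** 4, 10 ** 6):
    print(X, max_prime_gap(X), round(log(X), 1))
```

For instance, the gap of {36} between the consecutive primes {9551} and {9587} already beats {\log 10^4 \approx 9.2} by a factor of about four.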

What about upper bounds? The Cramér random model predicts that the primes up to {X} are distributed like a random subset of {\{1,\dots,X\}} of density {1/\log X}. Using this model, Cramér arrived at the conjecture

\displaystyle G(X) \ll \log^2 X.

In fact, if one makes the extremely optimistic assumption that the random model perfectly describes the behaviour of the primes, one would arrive at the even more precise prediction

\displaystyle G(X) \sim \log^2 X.

However, it is no longer widely believed that this optimistic version of the conjecture is true, due to some additional irregularities in the primes coming from the basic fact that large primes cannot be divisible by very small primes. Using the Maier matrix method to capture some of this irregularity, Granville was led to the conjecture that

\displaystyle G(X) \gtrsim 2e^{-\gamma} \log^2 X

(note that {2e^{-\gamma} = 1.1229\dots} is slightly larger than {1}). For comparison, the known upper bounds on {G(X)} are quite weak; unconditionally one has {G(X) \ll X^{0.525}} by the work of Baker, Harman, and Pintz, and even on the Riemann hypothesis one only gets down to {G(X) \ll X^{1/2} \log X}, as shown by Cramér (a slight improvement is also possible if one additionally assumes the pair correlation conjecture; see this article of Heath-Brown and the references therein).

This conjecture remains out of reach of current methods. In 1931, Westzynthius managed to improve the bound (2) slightly to

\displaystyle G(X) \gg \frac{\log\log\log X}{\log\log\log\log X} \log X ,

which Erdös in 1935 improved to

\displaystyle G(X) \gg \frac{\log\log X}{(\log\log\log X)^2} \log X

and Rankin in 1938 improved slightly further to

\displaystyle G(X) \gtrsim c \frac{\log\log X (\log\log\log\log X)}{(\log\log\log X)^2} \log X \ \ \ \ \ (3)

 

with {c=1/3}. Remarkably, this rather strange bound then proved extremely difficult to improve upon; until recently, the only improvements were to the constant {c}, which was raised to {c=\frac{1}{2} e^\gamma} in 1963 by Schönhage, to {c= e^\gamma} in 1963 by Rankin, to {c = 1.31256 e^\gamma} by Maier and Pomerance, and finally to {c = 2e^\gamma} in 1997 by Pintz.

Erdös listed the problem of making {c} arbitrarily large as one of his favourite open problems, even offering (“somewhat rashly”, in his words) a cash prize for the solution. Our main result answers this question in the affirmative:

Theorem 1 The bound (3) holds for arbitrarily large {c>0}.

In principle, we thus have a bound of the form

\displaystyle G(X) \geq f(X) \frac{\log\log X (\log\log\log\log X)}{(\log\log\log X)^2} \log X

for some {f(X)} that grows to infinity. Unfortunately, due to various sources of ineffectivity in our methods, we cannot provide any explicit rate of growth on {f(X)} at all.

We decided to announce this result the old-fashioned way, as part of a research lecture; more precisely, Ben Green announced the result in his ICM lecture this Tuesday. (The ICM staff have very efficiently put up video of his talk (and most of the other plenary and prize talks) online; Ben’s talk is here, with the announcement beginning at about 0:48. Note a slight typo in his slides, in that the exponent of {\log\log\log X} in the denominator is {3} instead of {2}.) Ben’s lecture slides may be found here.

By coincidence, an independent proof of this theorem has also been obtained very recently by James Maynard.

I discuss our proof method below the fold.


[This guest post is authored by Matilde Lalin, an Associate Professor in the Département de mathématiques et de statistique at the Université de Montréal.  I have lightly edited the text, mostly by adding some HTML formatting. -T.]

Mathematicians (and likely other academics!) with small children face some unique challenges when traveling to conferences and workshops. The goal of this post is to reflect on these, and to start a constructive discussion about what institutions and event organizers could do to improve the experiences of such participants.

The first necessary step is to recognize that different families have different needs. While it is hard to completely address everybody’s needs, there are some general measures that have a good chance of helping most of the people traveling with young children. In this post, I will mostly focus on nursing mothers with infants (\leq 24 months old) because that is my personal experience. Many of the suggestions will apply to other cases such as non-nursing babies, children of single parents, children of couples of mathematicians who are interested in attending the same conference, etc.

The mother of a nursing infant that wishes to attend a conference has three options:

  1. Bring the infant and a relative/friend to help care for the infant. The main challenge in this case is to fund the trip expenses of the relative. This involves travel costs, lodging, and food. The family may need a hotel room with some special amenities such as a crib, fridge, microwave, etc. Location is also important, with easy access to facilities such as a grocery store, pharmacy, etc. The mother will need to take regular breaks from the conference in order to nurse the baby (this could be as often as every three hours or so). Depending on personal preferences, she may need to nurse privately. It is thus convenient to make a private room available, located as close to the conference venue as possible. The relative may need a place near the conference to stay with the baby, such as a playground or a room with toys, particularly if the hotel room is far away.
  2. Bring the infant and hire someone local (a nanny) to help care for the infant. The main challenges in this case are twofold: finding the caregiver and paying for such services. Finding a caregiver in a place where one does not live is hard, as it is difficult to conduct interviews or get references. There are agencies that can do this for a (quite expensive) fee: they will find a professional caregiver with background checks, CPR certification, many references, etc. It may be worth it, though, as professional caregivers tend to provide high-quality services, and peace of mind is priceless for the mother mathematician attending a conference. As in the previous case, the mother may have particular needs regarding the hotel room, location, and all the other facilities mentioned for Option 1.
  3. Travel without the infant and pump milk regularly. This can be very challenging for the mother, the baby, and the person that stays behind taking care of the baby, but the costs of this arrangement are much lower than in Option 1 or 2 (I am ignoring the possibility that the family needs to hire help at home, which is necessary in some cases). A nursing mother away from her baby has no option but to pump her milk, to prevent pain and serious health complications. This mother may have to pump milk very often. Pumping is less efficient than nursing, so she will be gone for longer in each break, or will take more breaks, compared to a mother who travels with her baby. For pumping, one needs a room which should ideally be private, have a sink, and be located as close to the conference venue as possible. It is often impossible for these three conditions to be met at the same time, so different mothers give priority to different features. Some people pump milk in washrooms, to have easy access to water. Other people might prefer to pump in a more comfortable setting, such as an office, and go to the washroom to wash the breast pump accessories afterwards. If the mother expects that the baby will drink breastmilk while she is away, then she will also have to pump milk in advance of her trip. This requires some careful planning.
     
    Many pumping mothers try to store the pumped milk and bring it back home. In this case the mother needs a hotel room with a fridge which (ideally, but hard to find) has a freezer. In a perfect world there would also be a fridge in the place where she pumps/where the conference is held.

It is important to keep in mind that each option has its own set of challenges (even when expenses and facilities are all covered) and that different families may be restricted in their choice of options for a variety of reasons. It is therefore important that all these three options be facilitated.

As for the effect these choices have on the conference experience for the mother, Option 1 means that she has to balance her time between the conference and spending time with her relative/friend. This pressure disappears when we consider Option 2, so this option may lead to more participation in the conference activities. In Option 3, the mother is in principle free to participate in all the conference activities, but the frequent breaks may limit the type of activity. A mother may choose different options depending on the nature of the conference.

I want to stress, for the three options, that having to make choices about what to miss in the conference is very hard. While talks are important, so are the opportunities to meet people and discuss mathematics that happen during breaks and social events. It is very difficult to balance all of this. This is particularly difficult for the pumping mother in Option 3: because she travels without her baby, she is not perceived to be in a special situation or in need of accommodation. However, this mother is probably choosing between going to the last lecture in the morning or having lunch alone, because if she goes to pump right after the last lecture, by the time she is back, everybody has left for lunch.

Here is the Hall of Fame for those organizations that are already supporting nursing mothers’ travels in mathematics:

  • The Natural Sciences and Engineering Research Council of Canada (NSERC) (search for “child care”) allows the costs of child care under Option 2 to be reimbursed out of the mother’s grants. They will also reimburse the travel expenses of a relative under Option 1, up to the amount it would cost to hire a local caregiver.
  • The ENFANT/ELEFANT conference (co-organized by Lillian Pierce and Damaris Schindler) provided a good model to follow regarding accommodation for parents with children during conferences. This included funding for covering the travel costs of accompanying caretakers (the funding was provided by the Deutsche Forschungsgemeinschaft), as well as lactation rooms and play rooms near the conference venue (the facilities were provided by the Hausdorff Center for Mathematics).
     
    Additional information (where to go with kids, etc) was provided on site by the organizers and was made available to all participants all the time, by means of a display board that was left standing during the whole week of the conference.
  • The American Institute of Mathematics (AIM) reimburses up to 500 dollars on childcare for visitors and they have some online resources that assist in finding childcare and nannies.

[UPDATED] Added a few more things to the Hall of Fame

In closing, here is a (possibly incomplete) list of resources that institutes, funding agencies, and conferences could consider providing for nursing mother mathematicians:

    1. Funding (for costs associated with child care, whether professional or by an accompanying relative).
    2. List of childcare resources (nannies, nanny agencies, drop-in childcare centre, etc).
    3. Nursing rooms and playrooms near the conference venue. Nearby fridge.
    4. Breaks of at least 20 minutes every 2-3 hours.
    5. Information about transportation with infants. More specifically: taxi and/or shuttle companies that provide infant car seats, and information regarding the law on infant seats in taxis and other public transportation.
    6. Accessibility for strollers.
    7. [UPDATED] A nearby playground location. (comment from Peter).

I also find it important that these resources be listed publicly on the institute/conference website. This serves a double purpose: first, it helps those in need of the resources to access them easily, and second, it contributes to making these accommodations normal, setting a good model for and inspiring the organizers of future events.

Finally, I am pretty sure that the options and solutions I described do not cover all cases. I would like to finish this note by inviting readers to make suggestions, share experiences, and/or pose questions about this topic.

In addition to the Fields medallists mentioned in the previous post, the IMU also awarded the Nevanlinna prize to Subhash Khot, the Gauss prize to Stan Osher (my colleague here at UCLA!), and the Chern medal to Phillip Griffiths. Like I did in 2010, I’ll try to briefly discuss one result of each of the prize winners, though the fields of mathematics here are even further from my expertise than those discussed in the previous post (and all the caveats from that post apply here also).

Subhash Khot is best known for his Unique Games Conjecture, a problem in complexity theory that is perhaps second in importance only to the {P \neq NP} problem for the purposes of demarcating the mysterious line between “easy” and “hard” problems (if one follows standard practice and uses “polynomial time” as the definition of “easy”). The {P \neq NP} problem can be viewed as an assertion that it is difficult to find exact solutions to certain standard theoretical computer science problems (such as {k}-SAT); thanks to the NP-completeness phenomenon, it turns out that the precise problem posed here is not of critical importance, and {k}-SAT may be substituted with one of the many other problems known to be NP-complete. The unique games conjecture is similarly an assertion about the difficulty of finding even approximate solutions to certain standard problems, in particular “unique games” problems in which one needs to colour the vertices of a graph in such a way that the colour of one vertex of an edge is determined uniquely (via a specified matching) by the colour of the other vertex. This is an easy problem to solve if one insists on exact solutions (in which 100% of the edges have a colouring compatible with the specified matching), but becomes extremely difficult if one permits approximate solutions, with no exact solution available. In analogy with the NP-completeness phenomenon, the threshold for approximate satisfiability of many other problems (such as the MAX-CUT problem) is closely connected with the truth of the unique games conjecture; remarkably, the truth of the unique games conjecture would imply asymptotically sharp thresholds for many of these problems. This has implications for many theoretical computer science constructions which rely on hardness of approximation, such as probabilistically checkable proofs. For a more detailed survey of the unique games conjecture and its implications, see this Bulletin article of Trevisan.

My colleague Stan Osher has worked in many areas of applied mathematics, ranging from image processing to modeling fluids for major animation studios such as Pixar or Dreamworks, but today I would like to talk about one of his contributions that is close to an area of my own expertise, namely compressed sensing. One of the basic reconstruction problems in compressed sensing is the basis pursuit problem of finding the vector {x \in {\bf R}^n} in an affine space {\{ x \in {\bf R}^n: Ax = b \}} (where {b \in {\bf R}^m} and {A \in {\bf R}^{m\times n}} are given, and {m} is typically somewhat smaller than {n}) which minimises the {\ell^1}-norm {\|x\|_{\ell^1} := \sum_{i=1}^n |x_i|} of the vector {x}. This is a convex optimisation problem, and thus solvable in principle (it is a polynomial time problem, and thus “easy” in the above theoretical computer science sense). However, once {n} and {m} get moderately large (e.g. of the order of {10^6}), standard linear optimisation routines begin to become computationally expensive; also, it is difficult for off-the-shelf methods to exploit any additional structure (e.g. sparsity) in the measurement matrix {A}. Much of the problem comes from the fact that the functional {x \mapsto \|x\|_1} is only barely convex. One way to speed up the optimisation problem is to relax it by replacing the constraint {Ax=b} with a convex penalty term {\frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2}, thus one is now trying to minimise the unconstrained functional

\displaystyle \|x\|_1 + \frac{1}{2\epsilon} \|Ax-b\|_{\ell^2}^2.

This functional is more convex, and is over a computationally simpler domain {{\bf R}^n} than the affine space {\{x \in {\bf R}^n: Ax=b\}}, so is easier (though still not entirely trivial) to optimise over. However, the minimiser {x^\epsilon} to this problem need not match the minimiser {x^0} to the original problem, particularly if the (sub-)gradient {\partial \|x\|_1} of the original functional {\|x\|_1} is large at {x^0}, and if {\epsilon} is not set to be small. (And setting {\epsilon} too small will cause other difficulties with numerically solving the optimisation problem, due to the need to divide by very small denominators.) However, if one modifies the objective function by an additional linear term

\displaystyle \|x\|_1 - \langle p, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2

then some simple convexity considerations reveal that the minimiser to this new problem will match the minimiser {x^0} to the original problem, provided that {p} is (or more precisely, lies in) the (sub-)gradient {\partial \|x\|_1} of {\|x\|_1} at {x^0} – even if {\epsilon} is not small. But, one does not know in advance what the correct value of {p} should be, because one does not know what the minimiser {x^0} is.
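
The shrinkage effect of the penalty term is easy to see in a toy example (an illustration of my own, not from Osher's work): take {A = (2\ 1)} and {b = 2}, so that the basis pursuit minimiser is {x^0 = (1,0)}; with {\epsilon = 1}, proximal-gradient (ISTA) steps on the penalised functional converge instead to {(3/4, 0)}:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, eps, iters=2000):
    """Minimise ||x||_1 + (1/(2*eps)) * ||Ax - b||^2 by proximal gradient."""
    tau = eps / np.linalg.norm(A, 2) ** 2  # step = 1 / Lipschitz constant of the quadratic part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - tau * (A.T @ (A @ x - b)) / eps, tau)
    return x

A = np.array([[2.0, 1.0]])
b = np.array([2.0])
x_eps = ista(A, b, eps=1.0)
print(x_eps)  # approximately (0.75, 0): shrunk away from the true minimiser (1, 0)
```

One can check the value {3/4} by hand from the optimality condition {1 + \frac{1}{\epsilon}\cdot 2(2x_1 - 2) = 0}, illustrating that {x^\epsilon \neq x^0} when {\epsilon} is not small.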

With Yin, Goldfarb and Darbon, Osher introduced a Bregman iteration method in which one solves for {x} and {p} simultaneously; given the current iterates {x^k, p^k}, one first updates {x^k} to the minimiser {x^{k+1} \in {\bf R}^n} of the convex functional

\displaystyle \|x\|_1 - \langle p^k, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2 \ \ \ \ \ (1)

and then updates {p^{k+1}} to the natural value of the subgradient {\partial \|x\|_1} at {x^{k+1}}, namely

\displaystyle p^{k+1} := p^k - \nabla \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2|_{x=x^{k+1}} = p^k - \frac{1}{\epsilon} A^T (Ax^{k+1} - b)

(note upon taking the first variation of (1) that {p^{k+1}} is indeed in the subgradient). This procedure converges remarkably quickly (both in theory and in practice) to the true minimiser {x^0} even for non-small values of {\epsilon}, and also has some ability to be parallelised, and has led to actual performance improvements of an order of magnitude or more in certain compressed sensing problems (such as reconstructing an MRI image).
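
Here is a minimal sketch of the iteration on the same kind of toy problem (again just an illustration of my own: the inner minimisation of (1) is done with plain proximal-gradient steps, rather than the more sophisticated solvers used in practice). With {A = (2\ 1)}, {b = 2} and {\epsilon = 1}, the iterates reach the true minimiser {(1,0)} after only a couple of outer steps:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bregman(A, b, eps=1.0, outer=10, inner=2000):
    """Bregman iteration for basis pursuit: min ||x||_1 subject to Ax = b."""
    tau = eps / np.linalg.norm(A, 2) ** 2  # safe proximal-gradient step size
    x = np.zeros(A.shape[1])
    p = np.zeros(A.shape[1])
    for _ in range(outer):
        # inner loop: minimise ||x||_1 - <p, x> + (1/(2*eps)) * ||Ax - b||^2
        for _ in range(inner):
            grad = -p + (A.T @ (A @ x - b)) / eps
            x = soft(x - tau * grad, tau)
        # update p so that it stays in the subdifferential of ||.||_1 at x
        p = p - (A.T @ (A @ x - b)) / eps
    return x

A = np.array([[2.0, 1.0]])
b = np.array([2.0])
x = bregman(A, b)
print(x)  # approximately (1, 0), the l^1 minimiser, even though eps is not small
```

Note that the residual {Ax - b} is driven to zero by the accumulating {p} updates, rather than by sending {\epsilon \rightarrow 0}.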

Phillip Griffiths has made many contributions to complex, algebraic and differential geometry, and I am not qualified to describe most of these; my primary exposure to his work is through his text on algebraic geometry with Harris, but, excellent though that text is, it is not really representative of his research. But I thought I would mention one cute result of his related to the famous Nash embedding theorem. Suppose that one has a smooth {n}-dimensional Riemannian manifold that one wants to embed locally into a Euclidean space {{\bf R}^m}. The Nash embedding theorem guarantees that one can do this if {m} is large enough depending on {n}, but what is the minimal value of {m} one can get away with? Many years ago, my colleague Robert Greene showed that {m = \frac{n(n+1)}{2} + n} sufficed (a simplified proof was subsequently given by Gunther). However, this is not believed to be sharp; if one replaces “smooth” with “real analytic” then a standard Cauchy-Kovalevski argument shows that {m = \frac{n(n+1)}{2}} is possible, and no better. So this suggests that {m = \frac{n(n+1)}{2}} is the threshold for the smooth problem also, but this remains open in general. The case {n=1} is trivial, and the {n=2} case is not too difficult (if the curvature is non-zero) as the codimension {m-n} is one in this case, and the problem reduces to that of solving a Monge-Ampere equation. With Bryant and Yang, Griffiths settled the {n=3} case, under a non-degeneracy condition on the Einstein tensor. This is quite a serious paper – over 100 pages combining differential geometry, PDE methods (e.g. Nash-Moser iteration), and even some harmonic analysis (e.g. they rely at one key juncture on an extension theorem of my advisor, Elias Stein). The main difficulty is that the relevant PDE degenerates along a certain characteristic submanifold of the cotangent bundle, which then requires an extremely delicate analysis to handle.

The 2014 Fields medallists have just been announced as (in alphabetical order of surname) Artur Avila, Manjul Bhargava, Martin Hairer, and Maryam Mirzakhani (see also these nice video profiles for the winners, which is a new initiative of the IMU and the Simons foundation). This time four years ago, I wrote a blog post discussing one result from each of the 2010 medallists; I thought I would try to repeat the exercise here, although the work of the medallists this time around is a little bit further away from my own direct area of expertise than last time, and so my discussion will unfortunately be a bit superficial (and possibly not completely accurate) in places. As before, I am picking these results based on my own idiosyncratic tastes, and they should not be viewed as necessarily being the “best” work of these medallists. (See also the press releases for Avila, Bhargava, Hairer, and Mirzakhani.)

Artur Avila works in dynamical systems and in the study of Schrödinger operators. The work of Avila that I am most familiar with is his solution with Svetlana Jitomirskaya of the ten martini problem of Kac, so named because (according to Barry Simon) Kac offered ten martinis for its solution. The problem involves perhaps the simplest example of a Schrödinger operator with non-trivial spectral properties, namely the almost Mathieu operator {H^{\lambda,\alpha}_\omega: \ell^2({\bf Z}) \rightarrow \ell^2({\bf Z})} defined for parameters {\alpha,\omega \in {\bf R}/{\bf Z}} and {\lambda>0} by a discrete one-dimensional Schrödinger operator with cosine potential:

\displaystyle (H^{\lambda,\alpha}_\omega u)_n := u_{n+1} + u_{n-1} + 2\lambda (\cos 2\pi(\omega+n\alpha)) u_n.

This is a bounded self-adjoint operator and thus has a spectrum {\sigma( H^{\lambda,\alpha}_\omega )} that is a compact subset of the real line; it arises in a number of physical contexts, most notably in the theory of the integer quantum Hall effect, though I will not discuss these applications here. Remarkably, the structure of this spectrum depends crucially on the Diophantine properties of the frequency {\alpha}. For instance, if {\alpha = p/q} is a rational number, then the operator is periodic with period {q}, and then basic (discrete) Floquet theory tells us that the spectrum is simply the union of {q} (possibly touching) intervals. But for irrational {\alpha} (in which case the spectrum is independent of the phase {\omega}), the situation is much more fractal in nature, for instance in the critical case {\lambda=1} the spectrum (as a function of {\alpha}) gives rise to the Hofstadter butterfly. The “ten martini problem” asserts that for every irrational {\alpha} and every choice of coupling constant {\lambda > 0}, the spectrum is homeomorphic to a Cantor set. Prior to the work of Avila and Jitomirskaya, there were a number of partial results on this problem, notably the result of Puig establishing Cantor spectrum for a full measure set of parameters {(\lambda,\alpha)}, as well as results requiring a perturbative hypothesis, such as {\lambda} being very small or very large. The result was also already known for {\alpha} being either very close to rational (i.e. a Liouville number) or very far from rational (a Diophantine number), although the analyses for these two cases failed to meet in the middle, leaving some cases untreated.
The argument uses a wide variety of existing techniques, both perturbative and non-perturbative, to attack this problem, as well as an amusing argument by contradiction: they assume (in certain regimes) that the spectrum fails to be a Cantor set, and use this hypothesis to obtain additional Lipschitz control on the spectrum (as a function of the frequency {\alpha}), which they can then use (after much effort) to improve existing arguments and conclude that the spectrum was in fact Cantor after all!
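
As a quick numerical aside (an illustration of my own, not from the paper): truncating {H^{\lambda,\alpha}_\omega} to a large finite tridiagonal matrix and diagonalising gives a rough picture of the spectrum; for rational {\alpha = p/q} the eigenvalues cluster into about {q} bands, while for irrational {\alpha} the gaps proliferate. (Dirichlet truncation only approximates the spectrum, and boundary effects can place a few spurious eigenvalues inside the gaps.)

```python
import numpy as np

def almost_mathieu_eigs(alpha, lam=1.0, omega=0.0, N=400):
    """Eigenvalues of the N x N Dirichlet truncation of the almost Mathieu operator."""
    n = np.arange(N)
    H = np.diag(2.0 * lam * np.cos(2.0 * np.pi * (omega + n * alpha)))
    H += np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    return np.linalg.eigvalsh(H)  # sorted in ascending order

golden = (np.sqrt(5.0) - 1.0) / 2.0  # a Diophantine irrational frequency
eigs = almost_mathieu_eigs(golden)
# the operator norm is at most 2 + 2*lam, so all eigenvalues lie in [-4, 4] here
print(eigs.min(), eigs.max(), np.diff(eigs).max())
```

Plotting these eigenvalues as {\alpha} varies over rationals of small denominator reproduces a coarse version of the Hofstadter butterfly.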

Manjul Bhargava produces amazingly beautiful mathematics, though most of it is outside of my own area of expertise. One part of his work that touches on an area of my own interest (namely, random matrix theory) is his ongoing work with many co-authors on modeling (both conjecturally and rigorously) the statistics of various key number-theoretic features of elliptic curves (such as their rank, their Selmer group, or their Tate-Shafarevich groups). For instance, with Kane, Lenstra, Poonen, and Rains, Manjul has proposed a very general random matrix model that predicts all of these statistics (for instance, predicting that the {p}-component of the Tate-Shafarevich group is distributed like the cokernel of a certain random {p}-adic matrix, very much in the spirit of the Cohen-Lenstra heuristics discussed in this previous post). But what is even more impressive is that Manjul and his coauthors have been able to verify several non-trivial fragments of this model (e.g. showing that certain moments have the predicted asymptotics), giving for the first time non-trivial upper and lower bounds for various statistics, for instance obtaining lower bounds on how often an elliptic curve has rank {0} or rank {1}, leading most recently (in combination with existing work of Gross-Zagier and of Kolyvagin, among others) to his amazing result with Skinner and Zhang that at least {66\%} of all elliptic curves over {{\bf Q}} (ordered by height) obey the Birch and Swinnerton-Dyer conjecture. Previously it was not even known that a positive proportion of curves obeyed the conjecture. 
This is still a fair ways from resolving the conjecture fully (in particular, the situation with the presumably small number of curves of rank {2} and higher is still very poorly understood, and the theory of Gross-Zagier and Kolyvagin that this work relies on, which was initially only available for {{\bf Q}}, has only been extended to totally real number fields thus far, by the work of Zhang), but it certainly does provide hope that the conjecture could be within reach in a statistical sense at least.

Martin Hairer works at the interface between probability and partial differential equations, and in particular in the theory of stochastic differential equations (SDEs). The result of his that is closest to my own interests is his remarkable demonstration with Jonathan Mattingly of a unique invariant measure for the two-dimensional stochastically forced Navier-Stokes equation

\displaystyle \partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p + \xi

\displaystyle \nabla \cdot u = 0

on the two-torus {({\bf R}/{\bf Z})^2}, where {\xi} is a Gaussian field that forces a fixed set of frequencies. It is expected that for any reasonable choice of initial data, the solution to this equation should asymptotically be distributed according to Kolmogorov’s power law, as discussed in this previous post. This is still far from established rigorously (although there are some results in this direction for dyadic models, see e.g. this paper of Cheskidov, Shvydkoy, and Friedlander). However, Hairer and Mattingly were able to show that there is a unique probability distribution to which almost every choice of initial data will converge asymptotically; by the ergodic theorem, this is equivalent to demonstrating the existence and uniqueness of an invariant measure for the flow. Existence can be established using standard methods, but uniqueness is much more difficult. One of the standard routes to uniqueness is to establish a “strong Feller property” that enforces some continuity on the transition operators; among other things, this would mean that two ergodic probability measures with intersecting supports would in fact have a non-trivial common component, contradicting the ergodic theorem (which forces different ergodic measures to be mutually singular). Since all ergodic measures for Navier-Stokes can be seen to contain the origin in their support, this would give uniqueness. Unfortunately, the strong Feller property is unlikely to hold in the infinite-dimensional phase space for Navier-Stokes; but Hairer and Mattingly develop a clean abstract substitute for this property, which they call the asymptotic strong Feller property, which is again a regularity property on the transition operator; this in turn is then demonstrated by a careful application of Malliavin calculus.

Maryam Mirzakhani has mostly focused on the geometry and dynamics of Teichmüller-type moduli spaces, such as the moduli space of Riemann surfaces with a fixed genus and a fixed number of cusps (or with a fixed number of boundaries that are geodesics of a prescribed length). These spaces have an incredibly rich structure, ranging from geometric structure (such as the Kähler geometry given by the Weil-Petersson metric), to dynamical structure (through the action of the mapping class group on this and related spaces), to algebraic structure (viewing these spaces as algebraic varieties), and are thus connected to many other objects of interest in geometry and dynamics. For instance, by developing a new recursive formula for the Weil-Petersson volume of this space, Mirzakhani was able to asymptotically count the number of simple prime geodesics of length up to some threshold {L} in a hyperbolic surface (or more precisely, she obtained asymptotics for the number of such geodesics in a given orbit of the mapping class group); the answer turns out to be polynomial in {L}, in contrast to the much larger class of non-simple prime geodesics, whose asymptotics are exponential in {L} (the “prime number theorem for geodesics”, developed in a classic series of works by Delsarte, Huber, Selberg, and Margulis); she also used this formula to establish a new proof of a conjecture of Witten on intersection numbers that was first proven by Kontsevich. More recently, in two lengthy papers with Eskin and with Eskin-Mohammadi, Mirzakhani established rigidity theorems for the action of {SL_2({\bf R})} on such moduli spaces that are close analogues of Ratner’s celebrated rigidity theorems for unipotently generated groups (discussed in this previous blog post).
Ratner’s theorems are already notoriously difficult to prove, and rely very much on the polynomial stability properties of unipotent flows; in this even more complicated setting, the unipotent flows are no longer tractable, and Mirzakhani instead uses a recent “exponential drift” method of Benoist and Quint as a substitute. Ratner’s theorems are incredibly useful for all sorts of problems connected to homogeneous dynamics, and the analogous theorems established by Mirzakhani, Eskin, and Mohammadi have a similarly broad range of applications, for instance in counting periodic billiard trajectories in rational polygons.

I’ve just uploaded to the arXiv the D.H.J. Polymath paper “Variants of the Selberg sieve, and bounded intervals containing many primes“, which is the second paper to be produced from the Polymath8 project (the first one being discussed here). We’ll refer to this latter paper here as the Polymath8b paper, and the former as the Polymath8a paper. As with Polymath8a, the Polymath8b paper is concerned with the smallest asymptotic prime gap

\displaystyle  H_1 := \liminf_{n \rightarrow \infty}(p_{n+1}-p_n),

where {p_n} denotes the {n^{th}} prime, as well as the more general quantities

\displaystyle  H_m := \liminf_{n \rightarrow \infty}(p_{n+m}-p_n).

In the breakthrough paper of Goldston, Pintz, and Yildirim, the bound {H_1 \leq 16} was obtained under the strong hypothesis of the Elliott-Halberstam conjecture. An unconditional bound on {H_1}, however, remained elusive until the celebrated work of Zhang last year, who showed that

\displaystyle  H_1 \leq 70{,}000{,}000.

The Polymath8a paper then improved this to {H_1 \leq 4{,}680}. After that, Maynard introduced a new multidimensional Selberg sieve argument that gave the substantial improvement

\displaystyle  H_1 \leq 600

unconditionally, and {H_1 \leq 12} on the Elliott-Halberstam conjecture; furthermore, bounds on {H_m} for higher {m} were obtained for the first time, and specifically that {H_m \ll m^3 e^{4m}} for all {m \geq 1}, with the improvements {H_2 \leq 600} and {H_m \ll m^3 e^{2m}} on the Elliott-Halberstam conjecture. (I had independently discovered the multidimensional sieve idea, although I did not obtain Maynard’s specific numerical results, and my asymptotic bounds were a bit weaker.)

In Polymath8b, we obtain some further improvements. Unconditionally, we have {H_1 \leq 246} and {H_m \ll m e^{(4 - \frac{28}{157}) m}}, together with some explicit bounds on {H_2,H_3,H_4,H_5}; on the Elliott-Halberstam conjecture we have {H_m \ll m e^{2m}} and some numerical improvements to the {H_2,H_3,H_4,H_5} bounds; and assuming the generalised Elliott-Halberstam conjecture we have the bound {H_1 \leq 6}, which is best possible from sieve-theoretic methods thanks to the parity problem obstruction.

There were a variety of methods used to establish these results. Maynard’s paper obtained a criterion for bounding {H_m} which reduced to finding a good solution to a certain multidimensional variational problem. When the dimension parameter {k} was relatively small (e.g. {k \leq 100}), we were able to obtain good numerical solutions either by continuing the method of Maynard (using a basis of symmetric polynomials) or by using a Krylov iteration scheme. For large {k}, we refined the asymptotics and obtained near-optimal solutions of the variational problem. For the {H_1} bounds, we extended the reach of the multidimensional Selberg sieve (particularly under the assumption of the generalised Elliott-Halberstam conjecture) by allowing the function {F} in the multidimensional variational problem to extend to a larger region of space than was previously admissible, albeit with some tricky new constraints on {F} (and penalties in the variational problem). This required some unusual sieve-theoretic manipulations, notably an “epsilon trick”, ultimately relying on the elementary inequality {(a+b)^2 \geq a^2 + 2ab}, that allowed one to get non-trivial lower bounds for sums such as {\sum_n (a(n)+b(n))^2} even if the sum {\sum_n b(n)^2} had no non-trivial estimates available; and a way to estimate divisor sums such as {\sum_{n\leq x} \sum_{d|n} \lambda_d} even if {d} was permitted to be comparable to or even exceed {x}, by using the fundamental theorem of arithmetic to factorise {n} (after restricting to the case when {n} is almost prime). I hope that these sieve-theoretic tricks will be useful in future work in the subject.
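In isolation, the epsilon trick mentioned above is just the following one-line computation (a restatement of the inequality quoted in the paragraph, not an excerpt from the paper):

```latex
% Expanding the square and discarding the non-negative term \sum_n b(n)^2
% gives a lower bound in which that sum never needs to be estimated:
\sum_n (a(n)+b(n))^2 = \sum_n a(n)^2 + 2\sum_n a(n) b(n) + \sum_n b(n)^2
  \geq \sum_n a(n)^2 + 2\sum_n a(n) b(n).
```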

With this paper, the Polymath8 project is almost complete; there is still a little bit of scope to push our methods further and get some modest improvement for instance to the {H_1 \leq 246} bound, but this would require a substantial amount of effort, and it is probably best to instead wait for some new breakthrough in the subject to come along. One final task we are performing is to write up a retrospective article on both the 8a and 8b experiences, an incomplete writeup of which can be found here. If anyone wishes to contribute some commentary on these projects (whether you were an active contributor, an occasional contributor, or a silent “lurker” in the online discussion), please feel free to do so in the comments to this post.

In the traditional foundations of probability theory, one selects a probability space {(\Omega, {\mathcal B}, {\mathbf P})}, and makes a distinction between deterministic mathematical objects, which do not depend on the sampled state {\omega \in \Omega}, and stochastic (or random) mathematical objects, which do depend (but in a measurable fashion) on the sampled state {\omega \in \Omega}. For instance, a deterministic real number would just be an element {x \in {\bf R}}, whereas a stochastic real number (or real random variable) would be a measurable function {x: \Omega \rightarrow {\bf R}}, where in this post {{\bf R}} will always be endowed with the Borel {\sigma}-algebra. (For readers familiar with nonstandard analysis, the adjectives “deterministic” and “stochastic” will be used here in a manner analogous to the uses of the adjectives “standard” and “nonstandard” in nonstandard analysis. The analogy is particularly close when comparing with the “cheap nonstandard analysis” discussed in this previous blog post. We will also use “relative to {\Omega}” as a synonym for “stochastic”.)

Actually, for our purposes we will adopt the philosophy of identifying stochastic objects that agree almost surely, so if one was to be completely precise, we should define a stochastic real number to be an equivalence class {[x]} of measurable functions {x: \Omega \rightarrow {\bf R}}, up to almost sure equivalence. However, we shall often abuse notation and write {[x]} simply as {x}.

More generally, given any measurable space {X = (X, {\mathcal X})}, we can talk either about deterministic elements {x \in X}, or about stochastic elements of {X}, that is to say equivalence classes {[x]} of measurable maps {x: \Omega \rightarrow X} up to almost sure equivalence. We will use {\Gamma(X|\Omega)} to denote the set of all stochastic elements of {X}. (For readers familiar with sheaves, it may helpful for the purposes of this post to think of {\Gamma(X|\Omega)} as the space of measurable global sections of the trivial {X}-bundle over {\Omega}.) Of course every deterministic element {x} of {X} can also be viewed as a stochastic element {x|\Omega \in \Gamma(X|\Omega)} given by (the equivalence class of) the constant function {\omega \mapsto x}, thus giving an embedding of {X} into {\Gamma(X|\Omega)}. We do not attempt here to give an interpretation of {\Gamma(X|\Omega)} for sets {X} that are not equipped with a {\sigma}-algebra {{\mathcal X}}.

Remark 1 In my previous post on the foundations of probability theory, I emphasised the freedom to extend the sample space {(\Omega, {\mathcal B}, {\mathbf P})} to a larger sample space whenever one wished to inject additional sources of randomness. This is of course an important freedom to possess (and in the current formalism, is the analogue of the important operation of base change in algebraic geometry), but in this post we will focus on a single fixed sample space {(\Omega, {\mathcal B}, {\mathbf P})}, and not consider extensions of this space, so that one only has to consider two types of mathematical objects (deterministic and stochastic), as opposed to having many more such types, one for each potential choice of sample space (with the deterministic objects corresponding to the case when the sample space collapses to a point).

Any (measurable) {k}-ary operation on deterministic mathematical objects then extends to their stochastic counterparts by applying the operation pointwise. For instance, the addition operation {+: {\bf R} \times {\bf R} \rightarrow {\bf R}} on deterministic real numbers extends to an addition operation {+: \Gamma({\bf R}|\Omega) \times \Gamma({\bf R}|\Omega) \rightarrow \Gamma({\bf R}|\Omega)}, by defining the class {[x]+[y]} for {x,y: \Omega \rightarrow {\bf R}} to be the equivalence class of the function {\omega \mapsto x(\omega) + y(\omega)}; this operation is easily seen to be well-defined. More generally, any measurable {k}-ary deterministic operation {O: X_1 \times \dots \times X_k \rightarrow Y} between measurable spaces {X_1,\dots,X_k,Y} extends to a stochastic operation {O: \Gamma(X_1|\Omega) \times \dots \times \Gamma(X_k|\Omega) \rightarrow \Gamma(Y|\Omega)} in the obvious manner.
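As a toy illustration of this pointwise lifting, one can model a finite sample space and its stochastic elements in a few lines of Python; everything here (the dictionary representation, the names `Omega` and `lift`) is an illustrative sketch and not part of the formalism of the post:

```python
import operator

# Toy finite model: the base space Omega is a dict of states and their
# (unnormalised) measures, and a stochastic element is a dict mapping each
# state to a value. All names are illustrative, not from the post.
Omega = {"w1": 5, "w2": 3, "w3": 2}

def lift(op):
    """Extend a deterministic k-ary operation to stochastic elements by
    applying it pointwise in the sampled state omega."""
    def lifted(*xs):
        return {w: op(*(x[w] for x in xs)) for w in Omega}
    return lifted

# The addition operation on deterministic reals lifts to stochastic reals:
add = lift(operator.add)
x = {"w1": 1.0, "w2": 2.0, "w3": 3.0}
y = {"w1": 10.0, "w2": 20.0, "w3": 30.0}
print(add(x, y))  # {'w1': 11.0, 'w2': 22.0, 'w3': 33.0}
```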

There is a similar story for {k}-ary relations {R: X_1 \times \dots \times X_k \rightarrow \{\hbox{true},\hbox{false}\}}, although here one has to make a distinction between a deterministic reading of the relation and a stochastic one. Namely, if we are given stochastic objects {x_i \in \Gamma(X_i|\Omega)} for {i=1,\dots,k}, the relation {R(x_1,\dots,x_k)} does not necessarily take values in the deterministic Boolean algebra {\{ \hbox{true}, \hbox{false}\}}, but only in the stochastic Boolean algebra {\Gamma(\{ \hbox{true}, \hbox{false}\}|\Omega)} – thus {R(x_1,\dots,x_k)} may be true with some positive probability and also false with some positive probability (with the event that {R(x_1,\dots,x_k)} is stochastically true being determined up to null events). Of course, the deterministic Boolean algebra embeds in the stochastic one, so we can talk about a relation {R(x_1,\dots,x_k)} being deterministically true or deterministically false, which (due to our identification of stochastic objects that agree almost surely) means that {R(x_1(\omega),\dots,x_k(\omega))} is almost surely true or almost surely false respectively. For instance given two stochastic objects {x,y}, one can view their equality relation {x=y} as having a stochastic truth value. This is distinct from the way the equality symbol {=} is used in mathematical logic, which we will now call “equality in the deterministic sense” to reduce confusion. Thus, {x=y} in the deterministic sense if and only if the stochastic truth value of {x=y} is equal to {\hbox{true}}, that is to say that {x(\omega)=y(\omega)} for almost all {\omega}.
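Continuing the toy finite model (illustrative code, not part of the post's formalism), a lifted relation returns a stochastic Boolean, and one can compute the measure of the event on which it holds; the measure here is deliberately left unnormalised for convenience:

```python
# Toy finite model: a lifted relation returns a stochastic Boolean, i.e. a
# truth value at each state, and the measure of the event where it holds
# need not be 0 or the full mass. Names are illustrative.
Omega = {"w1": 5, "w2": 3, "w3": 2}  # unnormalised finite measure, total 10

def stochastic_eq(x, y):
    return {w: x[w] == y[w] for w in Omega}

def probability(event):
    """Measure of the set of states at which the stochastic Boolean is true."""
    return sum(Omega[w] for w in Omega if event[w])

x = {"w1": 1, "w2": 2, "w3": 3}
y = {"w1": 1, "w2": 2, "w3": 7}
truth = stochastic_eq(x, y)  # true at w1 and w2, false at w3
print(probability(truth))  # 8 (out of 10): neither deterministically true nor false
```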

Any universal identity for deterministic operations (or universal implication between identities) extends to their stochastic counterparts: for instance, addition is commutative, associative, and cancellative on the space of deterministic reals {{\bf R}}, and is therefore commutative, associative, and cancellative on stochastic reals {\Gamma({\bf R}|\Omega)} as well. However, one has to be more careful when working with mathematical laws that are not expressible as universal identities, or implications between identities. For instance, {{\bf R}} is an integral domain: if {x_1,x_2 \in {\bf R}} are deterministic reals such that {x_1 x_2=0}, then one must have {x_1=0} or {x_2=0}. However, if {x_1, x_2 \in \Gamma({\bf R}|\Omega)} are stochastic reals such that {x_1 x_2 = 0} (in the deterministic sense), then it is no longer necessarily the case that {x_1=0} (in the deterministic sense) or that {x_2=0} (in the deterministic sense); however, it is still true that “{x_1=0} or {x_2=0}” is true in the deterministic sense if one interprets the boolean operator “or” stochastically, thus “{x_1(\omega)=0} or {x_2(\omega)=0}” is true for almost all {\omega}. Another way to properly obtain a stochastic interpretation of the integral domain property of {{\bf R}} is to rewrite it as

\displaystyle  x_1,x_2 \in {\bf R}, x_1 x_2 = 0 \implies x_i=0 \hbox{ for some } i \in \{1,2\}

and then make all sets stochastic to obtain the true statement

\displaystyle  x_1,x_2 \in \Gamma({\bf R}|\Omega), x_1 x_2 = 0 \implies x_i=0 \hbox{ for some } i \in \Gamma(\{1,2\}|\Omega),

thus we have to allow the index {i} for which vanishing {x_i=0} occurs to also be stochastic, rather than deterministic. (A technical note: when one proves this statement, one has to select {i} in a measurable fashion; for instance, one can choose {i(\omega)} to equal {1} when {x_1(\omega)=0}, and {2} otherwise (so that in the “tie-breaking” case when {x_1(\omega)} and {x_2(\omega)} both vanish, one always selects {i(\omega)} to equal {1}).)
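The measurable tie-breaking selection described in the technical note can be carried out explicitly in the toy finite model (illustrative sketch; none of the names come from the post):

```python
# Toy sketch of the measurable tie-breaking selection: given stochastic reals
# x1, x2 with x1 * x2 = 0 at every state, select the stochastic index i with
# i(omega) = 1 when x1(omega) = 0, and i(omega) = 2 otherwise.
Omega = ["w1", "w2", "w3"]
x1 = {"w1": 0.0, "w2": 5.0, "w3": 0.0}
x2 = {"w1": 3.0, "w2": 0.0, "w3": 0.0}
assert all(x1[w] * x2[w] == 0 for w in Omega)  # x1 x2 = 0 deterministically

i = {w: 1 if x1[w] == 0 else 2 for w in Omega}  # stochastic element of {1, 2}
print(i)  # {'w1': 1, 'w2': 2, 'w3': 1} -- the tie at w3 is broken towards i = 1
# x_i = 0 now holds at every state:
assert all((x1 if i[w] == 1 else x2)[w] == 0 for w in Omega)
```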

Similarly, the law of the excluded middle fails when interpreted deterministically, but remains true when interpreted stochastically: if {S} is a stochastic statement, then it is not necessarily the case that {S} is either deterministically true or deterministically false; however the sentence “{S} or not-{S}” is still deterministically true if the boolean operator “or” is interpreted stochastically rather than deterministically.

To avoid having to keep pointing out which operations are interpreted stochastically and which ones are interpreted deterministically, we will use the following convention: if we assert that a mathematical sentence {S} involving stochastic objects is true, then (unless otherwise specified) we mean that {S} is deterministically true, assuming that all relations used inside {S} are interpreted stochastically. For instance, if {x,y} are stochastic reals, when we assert that “Exactly one of {x < y}, {x=y}, or {x>y} is true”, then by default it is understood that the relations {<}, {=}, {>} and the boolean operator “exactly one of” are interpreted stochastically, and the assertion is that the sentence is deterministically true.

In the above discussion, the stochastic objects {x} being considered were elements of a deterministic space {X}, such as the reals {{\bf R}}. However, it can often be convenient to generalise this situation by allowing the ambient space {X} to also be stochastic. For instance, one might wish to consider a stochastic vector {v(\omega)} inside a stochastic vector space {V(\omega)}, or a stochastic edge {e} of a stochastic graph {G(\omega)}. In order to formally describe this situation within the classical framework of measure theory, one needs to place all the ambient spaces {X(\omega)} inside a measurable space. This can certainly be done in many contexts (e.g. when considering random graphs on a deterministic set of vertices, or if one is willing to work up to equivalence and place the ambient spaces inside a suitable moduli space), but is not completely natural in other contexts. For instance, if one wishes to consider stochastic vector spaces of potentially unbounded dimension (in particular, potentially larger than any given cardinal that one might specify in advance), then the class of all possible vector spaces is so large that it becomes a proper class rather than a set (even if one works up to equivalence), making it problematic to give this class the structure of a measurable space; furthermore, even once one does so, one needs to take additional care to pin down what it would mean for a random vector {\omega \mapsto v_\omega} lying in a random vector space {\omega \mapsto V_\omega} to depend “measurably” on {\omega}.

Of course, in any reasonable application one can avoid the set theoretic issues at least by various ad hoc means, for instance by restricting the dimension of all spaces involved to some fixed cardinal such as {2^{\aleph_0}}. However, the measure-theoretic issues can require some additional effort to resolve properly.

In this post I would like to describe a different way to formalise stochastic spaces, and stochastic elements of these spaces, by viewing the spaces as measure-theoretic analogue of a sheaf, but being over the probability space {\Omega} rather than over a topological space; stochastic objects are then sections of such sheaves. Actually, for minor technical reasons it is convenient to work in the slightly more general setting in which the base space {\Omega} is a finite measure space {(\Omega, {\mathcal B}, \mu)} rather than a probability space, thus {\mu(\Omega)} can take any value in {[0,+\infty)} rather than being normalised to equal {1}. This will allow us to easily localise to subevents {\Omega'} of {\Omega} without the need for normalisation, even when {\Omega'} is a null event (though we caution that the map {x \mapsto x|\Omega'} from deterministic objects {x} ceases to be injective in this latter case). We will however still continue to use probabilistic terminology, despite the lack of normalisation; thus for instance, sets {E} in {{\mathcal B}} will be referred to as events, the measure {\mu(E)} of such a set will be referred to as the probability (which is now permitted to exceed {1} in some cases), and an event whose complement is a null event shall be said to hold almost surely. It is in fact likely that almost all of the theory below extends to base spaces which are {\sigma}-finite rather than finite (for instance, by damping the measure to become finite, without introducing any further null events), although we will not pursue this further generalisation here.

The approach taken in this post is “topos-theoretic” in nature (although we will not use the language of topoi explicitly here), and is well suited to a “pointless” or “point-free” approach to probability theory, in which the role of the stochastic state {\omega \in \Omega} is suppressed as much as possible; instead, one strives to always adopt a “relative point of view”, with all objects under consideration being viewed as stochastic objects relative to the underlying base space {\Omega}. In this perspective, the stochastic version of a set is as follows.

Definition 1 (Stochastic set) Unless otherwise specified, we assume that we are given a fixed finite measure space {\Omega = (\Omega, {\mathcal B}, \mu)} (which we refer to as the base space). A stochastic set (relative to {\Omega}) is a tuple {X|\Omega = (\Gamma(X|E)_{E \in {\mathcal B}}, ((|E))_{E \subset F, E,F \in {\mathcal B}})} consisting of the following objects:

  • A set {\Gamma(X|E)} assigned to each event {E \in {\mathcal B}}; and
  • A restriction map {x \mapsto x|E} from {\Gamma(X|F)} to {\Gamma(X|E)} for each pair {E \subset F} of nested events {E,F \in {\mathcal B}}. (Strictly speaking, one should indicate the dependence on {F} in the notation for the restriction map, e.g. using {x \mapsto x|(E \leftarrow F)} instead of {x \mapsto x|E}, but we will abuse notation by omitting the {F} dependence.)

We refer to elements of {\Gamma(X|E)} as local stochastic elements of the stochastic set {X|\Omega}, localised to the event {E}, and elements of {\Gamma(X|\Omega)} as global stochastic elements (or simply elements) of the stochastic set. (In the language of sheaves, one would use “sections” instead of “elements” here, but I prefer to use the latter terminology here, for compatibility with conventional probabilistic notation, where for instance measurable maps from {\Omega} to {{\bf R}} are referred to as real random variables, rather than sections of the reals.)

Furthermore, we impose the following axioms:

  • (Category) The map {x \mapsto x|E} from {\Gamma(X|E)} to {\Gamma(X|E)} is the identity map, and if {E \subset F \subset G} are events in {{\mathcal B}}, then {((x|F)|E) = (x|E)} for all {x \in \Gamma(X|G)}.
  • (Null events trivial) If {E \in {\mathcal B}} is a null event, then the set {\Gamma(X|E)} is a singleton set. (In particular, {\Gamma(X|\emptyset)} is always a singleton set; this is analogous to the convention that {x^0=1} for any number {x}.)
  • (Countable gluing) Suppose that for each natural number {n}, one has an event {E_n \in {\mathcal B}} and an element {x_n \in \Gamma(X|E_n)} such that {x_n|(E_n \cap E_m) = x_m|(E_n \cap E_m)} for all {n,m}. Then there exists a unique {x\in \Gamma(X|\bigcup_{n=1}^\infty E_n)} such that {x_n = x|E_n} for all {n}.

If {\Omega'} is an event in {\Omega}, we define the localisation {X|\Omega'} of the stochastic set {X|\Omega} to {\Omega'} to be the stochastic set

\displaystyle X|\Omega' := (\Gamma(X|E)_{E \in {\mathcal B}; E \subset \Omega'}, ((|E))_{E \subset F \subset \Omega', E,F \in {\mathcal B}})

relative to {\Omega'}. (Note that there is no need to renormalise the measure on {\Omega'}, as we are not demanding that our base space have total measure {1}.)

The following fact is useful for actually verifying that a given object indeed has the structure of a stochastic set:

Exercise 1 Show that to verify the countable gluing axiom of a stochastic set, it suffices to do so under the additional hypothesis that the events {E_n} are disjoint. (Note that this is quite different from the situation with sheaves over a topological space, in which the analogous gluing axiom is often trivial in the disjoint case but has non-trivial content in the overlapping case. This is ultimately because a {\sigma}-algebra is closed under all Boolean operations, whereas a topology is only closed under union and intersection.)
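In the discrete model of Example 1 below, gluing over disjoint events is just concatenation of the local data; the following toy Python sketch (illustrative names only) shows the disjoint case of the gluing axiom:

```python
# Toy sketch of the disjoint case of the countable gluing axiom: local
# elements on pairwise disjoint events glue to a unique element on the union.
def glue(pieces):
    """pieces: list of (event, local_element) pairs, with the events pairwise
    disjoint sets of states and each local element a dict on its event."""
    glued = {}
    for event, x in pieces:
        for w in event:
            glued[w] = x[w]
    return glued

E1, E2 = {"w1"}, {"w2", "w3"}
x1 = {"w1": "a"}
x2 = {"w2": "b", "w3": "c"}
x = glue([(E1, x1), (E2, x2)])
print(sorted(x.items()))  # [('w1', 'a'), ('w2', 'b'), ('w3', 'c')]
# restricting the glued element back to each event recovers the pieces:
assert {w: x[w] for w in E1} == x1 and {w: x[w] for w in E2} == x2
```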

Let us illustrate the concept of a stochastic set with some examples.

Example 1 (Discrete case) A simple case arises when {\Omega} is a discrete space which is at most countable. We assign a set {X_\omega} to each {\omega \in \Omega}, with {X_\omega} a singleton if {\mu(\{\omega\})=0}. One then sets {\Gamma(X|E) := \prod_{\omega \in E} X_\omega}, with the obvious restriction maps, giving rise to a stochastic set {X|\Omega}. (Thus, a local element {x} of {\Gamma(X|E)} can be viewed as a map {\omega \mapsto x(\omega)} on {E} that takes values in {X_\omega} for each {\omega \in E}.) Conversely, it is not difficult to see that any stochastic set over an at most countable discrete probability space {\Omega} is of this form up to isomorphism. In this case, one can think of {X|\Omega} as a bundle of sets {X_\omega} over each point {\omega} (of positive probability) in the base space {\Omega}. One can extend this bundle interpretation of stochastic sets to reasonably nice sample spaces {\Omega} (such as standard Borel spaces) and similarly reasonable {X}; however, I would like to avoid this interpretation in the formalism below in order to be able to easily work in settings in which {\Omega} and {X} are very “large” (e.g. not separable in any reasonable sense). Note that we permit some of the {X_\omega} to be empty, thus it can be possible for {\Gamma(X|\Omega)} to be empty whilst {\Gamma(X|E)} is non-empty for some strict subevents {E} of {\Omega}. (This is analogous to how it is possible for a sheaf to have local sections but no global sections.) As such, the space {\Gamma(X|\Omega)} of global elements does not completely determine the stochastic set {X|\Omega}; one sometimes needs to localise to an event {E} in order to see the full structure of such a set. Thus it is important to distinguish between a stochastic set {X|\Omega} and its space {\Gamma(X|\Omega)} of global elements.
(As such, it is a slight abuse of the axiom of extensionality to refer to global elements of {X|\Omega} simply as “elements”, but hopefully this should not cause too much confusion.)
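The identification {\Gamma(X|E) = \prod_{\omega \in E} X_\omega} in this discrete example, and the possibility of local elements without global ones, can be sketched in toy code (the fibres below are made up purely for illustration):

```python
from itertools import product

# Toy sketch of the discrete case: Gamma(X|E) is the product of the fibres
# X_w over the states w in E. The empty fibre at w3 kills all global elements
# while local elements over smaller events survive.
fibres = {"w1": ["a", "b"], "w2": ["c"], "w3": []}

def sections(E):
    """All local elements over the event E: one choice from each fibre."""
    states = sorted(E)
    return [dict(zip(states, choice))
            for choice in product(*(fibres[w] for w in states))]

print(len(sections({"w1", "w2"})))        # 2: local elements over {w1, w2}
print(len(sections({"w1", "w2", "w3"})))  # 0: no global elements
```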

Example 2 (Measurable spaces as stochastic sets) Returning now to a general base space {\Omega}, any (deterministic) measurable space {X} gives rise to a stochastic set {X|\Omega}, with {\Gamma(X|E)} being defined as in the previous discussion as the measurable functions from {E} to {X} modulo almost everywhere equivalence (in particular, {\Gamma(X|E)} a singleton set when {E} is null), with the usual restriction maps. The constraint of measurability on the maps {x: E \rightarrow X}, together with the quotienting by almost sure equivalence, means that {\Gamma(X|E)} is now more complicated than a plain Cartesian product {\prod_{\omega \in E} X_\omega} of fibres, but this still serves as a useful first approximation to what {\Gamma(X|E)} is for the purposes of developing intuition. Indeed, the measurability constraint is so weak (as compared for instance to topological or smooth constraints in other contexts, such as sheaves of continuous or smooth sections of bundles) that the intuition of essentially independent fibres is quite an accurate one, at least if one avoids consideration of an uncountable number of objects simultaneously.

Example 3 (Extended Hilbert modules) This example is the one that motivated this post for me. Suppose that one has an extension {(\tilde \Omega, \tilde {\mathcal B}, \tilde \mu)} of the base space {(\Omega, {\mathcal B},\mu)}, thus we have a measurable factor map {\pi: \tilde \Omega \rightarrow \Omega} such that the pushforward of the measure {\tilde \mu} by {\pi} is equal to {\mu}. Then we have a conditional expectation operator {\pi_*: L^2(\tilde \Omega,\tilde {\mathcal B},\tilde \mu) \rightarrow L^2(\Omega,{\mathcal B},\mu)}, defined as the adjoint of the pullback map {\pi^*: L^2(\Omega,{\mathcal B},\mu) \rightarrow L^2(\tilde \Omega,\tilde {\mathcal B},\tilde \mu)}. As is well known, the conditional expectation operator also extends to a contraction {\pi_*: L^1(\tilde \Omega,\tilde {\mathcal B},\tilde \mu) \rightarrow L^1(\Omega,{\mathcal B}, \mu)}; by monotone convergence we may also extend {\pi_*} to a map from measurable functions from {\tilde \Omega} to the extended non-negative reals {[0,+\infty]}, to measurable functions from {\Omega} to {[0,+\infty]}. We then define the “extended Hilbert module” {L^2(\tilde \Omega|\Omega)} to be the space of functions {f \in L^2(\tilde \Omega,\tilde {\mathcal B},\tilde \mu)} with {\pi_*(|f|^2)} finite almost everywhere. This is an extended version of the Hilbert module {L^\infty_{\Omega} L^2(\tilde \Omega|\Omega)}, which is defined similarly except that {\pi_*(|f|^2)} is required to lie in {L^\infty(\Omega,{\mathcal B},\mu)}; this is a Hilbert module over {L^\infty(\Omega, {\mathcal B}, \mu)} which is of particular importance in the Furstenberg-Zimmer structure theory of measure-preserving systems. We can then define the stochastic set {L^2_\pi(\tilde \Omega)|\Omega} by setting

\displaystyle  \Gamma(L^2_\pi(\tilde \Omega)|E) := L^2( \pi^{-1}(E) | E )

with the obvious restriction maps. In the case that {\Omega,\tilde \Omega} are standard Borel spaces, one can disintegrate {\tilde \mu} as an integral {\tilde \mu = \int_\Omega \nu_\omega\ d\mu(\omega)} of probability measures {\nu_\omega} (supported in the fibre {\pi^{-1}(\{\omega\})}), in which case this stochastic set can be viewed as having fibres {L^2( \tilde \Omega, \tilde {\mathcal B}, \nu_\omega )} (though if {\Omega} is not discrete, there are still some measurability conditions in {\omega} on the local and global elements that need to be imposed). However, I am interested in the case when {\Omega,\tilde \Omega} are not standard Borel spaces (in fact, I will take them to be algebraic probability spaces, as defined in this previous post), in which case disintegrations are not available. However, it appears that the stochastic analysis developed in this blog post can serve as a substitute for the tool of disintegration in this context.

We make the remark that if {X|\Omega} is a stochastic set and {E, F} are events that are equivalent up to null events, then one can identify {\Gamma(X|E)} with {\Gamma(X|F)} (through their common restriction to {\Gamma(X|(E \cap F))}, with the restriction maps now being bijections). As such, the notion of a stochastic set does not require the full structure of a concrete probability space {(\Omega, {\mathcal B}, {\mathbf P})}; one could also have defined the notion using only the abstract {\sigma}-algebra consisting of {{\mathcal B}} modulo null events as the base space, or equivalently one could define stochastic sets over the algebraic probability spaces defined in this previous post. However, we will stick with the classical formalism of concrete probability spaces here so as to keep the notation reasonably familiar.

As a corollary of the above observation, we see that if the base space {\Omega} has total measure {0}, then all stochastic sets are trivial (they are just points).

Exercise 2 If {X|\Omega} is a stochastic set, show that there exists an event {\Omega'} with the property that for any event {E}, {\Gamma(X|E)} is non-empty if and only if {E} is contained in {\Omega'} modulo null events. (In particular, {\Omega'} is unique up to null events.) Hint: consider the numbers {\mu( E )} for {E} ranging over all events with {\Gamma(X|E)} non-empty, and form a maximising sequence for these numbers. Then use all three axioms of a stochastic set.

One can now start to take many of the fundamental objects, operations, and results in set theory (and, hence, in most other categories of mathematics) and establish analogues relative to a finite measure space. Implicitly, what we will be doing in the next few paragraphs is endowing the category of stochastic sets with the structure of an elementary topos. However, to keep things reasonably concrete, we will not explicitly emphasise the topos-theoretic formalism here, although it is certainly lurking in the background.

Firstly, we define a stochastic function {f: X|\Omega \rightarrow Y|\Omega} between two stochastic sets {X|\Omega, Y|\Omega} to be a collection of maps {f: \Gamma(X|E) \rightarrow \Gamma(Y|E)} for each {E \in {\mathcal B}} which form a natural transformation in the sense that {f(x|E) = f(x)|E} for all {x \in \Gamma(X|F)} and nested events {E \subset F}. In the case when {\Omega} is discrete and at most countable (and after deleting all null points), a stochastic function is nothing more than a collection of functions {f_\omega: X_\omega \rightarrow Y_\omega} for each {\omega \in \Omega}, with the function {f: \Gamma(X|E) \rightarrow \Gamma(Y|E)} then being a direct sum of the factor functions {f_\omega}:

\displaystyle  f( (x_\omega)_{\omega \in E} ) = ( f_\omega(x_\omega) )_{\omega \in E}.

Thus (in the discrete, at most countable setting, at least) stochastic functions do not mix together information from different states {\omega} in a sample space; the value of {f(x)} at {\omega} depends only on the value of {x} at {\omega}. The situation is a bit more subtle for continuous probability spaces, due to the identification of stochastic objects that agree almost surely; nevertheless, it is still good intuition to think of stochastic functions as essentially being “pointwise” or “local” in nature.
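In the discrete setting, the direct-sum formula above is just pointwise application of the factor functions; a toy sketch (illustrative names only):

```python
# Toy sketch: a stochastic function in the discrete model is a family f_w of
# maps between fibres, and f(x) at a state w depends only on the value of x
# at that same state w.
f = {"w1": lambda t: t + 1, "w2": lambda t: 2 * t, "w3": lambda t: -t}

def apply_stochastic(f, x):
    """Apply the family of factor functions pointwise, as in the direct sum."""
    return {w: f[w](x[w]) for w in x}

x = {"w1": 10, "w2": 10, "w3": 10}
print(apply_stochastic(f, x))  # {'w1': 11, 'w2': 20, 'w3': -10}
```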

One can now form the stochastic set {\hbox{Hom}(X \rightarrow Y)|\Omega} of functions from {X|\Omega} to {Y|\Omega}, by setting {\Gamma(\hbox{Hom}(X \rightarrow Y)|E)} for any event {E} to be the set of local stochastic functions {f: X|E \rightarrow Y|E} of the localisations of {X|\Omega, Y|\Omega} to {E}; this is a stochastic set if we use the obvious restriction maps. In the case when {\Omega} is discrete and at most countable, the fibre {\hbox{Hom}(X \rightarrow Y)_\omega} at a point {\omega} of positive measure is simply the set {Y_\omega^{X_\omega}} of functions from {X_\omega} to {Y_\omega}.

In a similar spirit, we say that one stochastic set {Y|\Omega} is a (stochastic) subset of another {X|\Omega}, and write {Y|\Omega \subset X|\Omega}, if we have a stochastic inclusion map, thus {\Gamma(Y|E) \subset \Gamma(X|E)} for all events {E}, with the restriction maps being compatible. We can then define the power set {2^X|\Omega} of a stochastic set {X|\Omega} by setting {\Gamma(2^X|E)} for any event {E} to be the set of all stochastic subsets {Y|E} of {X|E} relative to {E}; it is easy to see that {2^X|\Omega} is a stochastic set with the obvious restriction maps (one can also identify {2^X|\Omega} with {\hbox{Hom}(X \rightarrow \{\hbox{true},\hbox{false}\})|\Omega} in the obvious fashion). Again, when {\Omega} is discrete and at most countable, the fibre of {2^X|\Omega} at a point {\omega} of positive measure is simply the deterministic power set {2^{X_\omega}}.
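In the toy discrete model, the fibre of the power set at each state of positive measure is the ordinary power set of the fibre, so stochastic subsets over a discrete base are counted by a product of fibrewise counts (illustrative sketch, with made-up fibres):

```python
from itertools import combinations

# Toy count of the discrete case: the fibre of 2^X at a state w of positive
# measure is the ordinary power set 2^{X_w}.
def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

fibres = {"w1": {"a", "b"}, "w2": {"c"}}
counts = {w: len(powerset(fibres[w])) for w in fibres}
print(counts)  # {'w1': 4, 'w2': 2}: 2^|X_w| subsets in each fibre

total = 1
for w in counts:
    total *= counts[w]
print(total)  # 8 stochastic subsets over the whole base
```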

Note that if {f: X|\Omega \rightarrow Y|\Omega} is a stochastic function and {Y'|\Omega} is a stochastic subset of {Y|\Omega}, then the inverse image {f^{-1}(Y')|\Omega}, defined by setting {\Gamma(f^{-1}(Y')|E)} for any event {E} to be the set of those {x \in \Gamma(X|E)} with {f(x) \in \Gamma(Y'|E)}, is a stochastic subset of {X|\Omega}. In particular, given a {k}-ary relation {R: X_1 \times \dots \times X_k|\Omega \rightarrow \{\hbox{true}, \hbox{false}\}|\Omega}, the inverse image {R^{-1}( \{ \hbox{true} \}|\Omega )} is a stochastic subset of {X_1 \times \dots \times X_k|\Omega}, which by abuse of notation we denote as

\displaystyle  \{ (x_1,\dots,x_k) \in X_1 \times \dots \times X_k: R(x_1,\dots,x_k) \hbox{ is true} \}|\Omega.

In a similar spirit, if {X'|\Omega} is a stochastic subset of {X|\Omega} and {f: X|\Omega \rightarrow Y|\Omega} is a stochastic function, we can define the image {f(X')|\Omega} by setting {\Gamma(f(X')|E)} to be the set of those {f(x)} with {x \in \Gamma(X'|E)}; one easily verifies that this is a stochastic subset of {Y|\Omega}.

Remark 2 One should caution that in the definition of the subset relation {Y|\Omega \subset X|\Omega}, it is important that {\Gamma(Y|E) \subset \Gamma(X|E)} for all events {E}, not just the global event {\Omega}; in particular, just because a stochastic set {X|\Omega} has no global sections, does not mean that it is contained in the stochastic empty set {\emptyset|\Omega}.

Now we discuss Boolean operations on stochastic subsets of a given stochastic set {X|\Omega}. Given two stochastic subsets {X_1|\Omega, X_2|\Omega} of {X|\Omega}, the stochastic intersection {(X_1 \cap X_2)|\Omega} is defined by setting {\Gamma((X_1 \cap X_2)|E)} to be the set of {x \in \Gamma(X|E)} that lie in both {\Gamma(X_1|E)} and {\Gamma(X_2|E)}:

\displaystyle  \Gamma((X_1 \cap X_2)|E) := \Gamma(X_1|E) \cap \Gamma(X_2|E).

This is easily verified to again be a stochastic subset of {X|\Omega}. More generally one may define stochastic countable intersections {(\bigcap_{n=1}^\infty X_n)|\Omega} for any sequence {X_n|\Omega} of stochastic subsets of {X|\Omega}. One could extend this definition to uncountable families if one wished, but I would advise against it, because some of the usual laws of Boolean algebra (e.g. the de Morgan laws) may break down in this setting.

Stochastic unions are a bit more subtle. The set {\Gamma((X_1 \cup X_2)|E)} should not be defined to simply be the union of {\Gamma(X_1|E)} and {\Gamma(X_2|E)}, as this would not respect the gluing axiom. Instead, we define {\Gamma((X_1 \cup X_2)|E)} to be the set of all {x \in \Gamma(X|E)} such that one can cover {E} by measurable subevents {E_1,E_2} such that {x|E_i \in \Gamma(X_i|E_i)} for {i=1,2}; then {(X_1 \cup X_2)|\Omega} may be verified to be a stochastic subset of {X|\Omega}. Thus for instance {\{0,1\}|\Omega} is the stochastic union of {\{0\}|\Omega} and {\{1\}|\Omega}. Similarly for countable unions {(\bigcup_{n=1}^\infty X_n)|\Omega} of stochastic subsets {X_n|\Omega} of {X|\Omega}, although uncountable unions are extremely problematic (they are disliked by both the measure theory and the countable gluing axiom) and will not be defined here. Finally, the stochastic difference set {\Gamma((X_1 \backslash X_2)|E)} is defined as the set of all {x|E} in {\Gamma(X_1|E)} such that {x|F \not \in \Gamma(X_2|F)} for any subevent {F} of {E} of positive probability. One may verify that in the case when {\Omega} is discrete and at most countable, these Boolean operations correspond to the classical Boolean operations applied separately to each fibre {X_{i,\omega}} of the relevant sets {X_i}. We also leave as an exercise to the reader to verify the usual laws of Boolean arithmetic, e.g. the de Morgan laws, provided that one works with at most countable unions and intersections.

One can also consider a stochastic finite union {(\bigcup_{n=1}^N X_n)|\Omega} in which the number {N} of sets in the union is itself stochastic. More precisely, let {X|\Omega} be a stochastic set, let {N \in {\bf N}|\Omega} be a stochastic natural number, and let {n \mapsto X_n|\Omega} be a stochastic function from the stochastic set {\{ n \in {\bf N}: n \leq N\}|\Omega} (defined by setting {\Gamma(\{n \in {\bf N}: n\leq N\}|E) := \{ n \in {\bf N}|E: n \leq N|E\}}) to the stochastic power set {2^X|\Omega}. Here we are considering {0} to be a natural number, to allow for unions that are possibly empty, with {{\bf N}_+ := {\bf N} \backslash \{0\}} used for the positive natural numbers. We also write {(X_n)_{n=1}^N|\Omega} for the stochastic function {n \mapsto X_n|\Omega}. Then we can define the stochastic union {\bigcup_{n=1}^N X_n|\Omega} by setting {\Gamma(\bigcup_{n=1}^N X_n|E)} for an event {E} to be the set of local elements {x \in \Gamma(X|E)} with the property that there exists a covering of {E} by measurable subevents {E_{n_0}} for {n_0 \in {\bf N}_+}, such that one has {n_0 \leq N|E_{n_0}} and {x|E_{n_0} \in \Gamma(X_{n_0}|E_{n_0})}. One can verify that {\bigcup_{n=1}^N X_n|\Omega} is a stochastic set (with the obvious restriction maps). Again, in the model case when {\Omega} is discrete and at most countable, the fibre {(\bigcup_{n=1}^N X_n)_\omega} is what one would expect it to be, namely {\bigcup_{n=1}^{N(\omega)} (X_n)_\omega}.

The Cartesian product {(X \times Y)|\Omega} of two stochastic sets may be defined by setting {\Gamma((X \times Y)|E) := \Gamma(X|E) \times \Gamma(Y|E)} for all events {E}, with the obvious restriction maps; this is easily seen to be another stochastic set. This lets one define the concept of a {k}-ary operation {f: (X_1 \times \dots \times X_k)|\Omega \rightarrow Y|\Omega} from {k} stochastic sets {X_1,\dots,X_k} to another stochastic set {Y}, or a {k}-ary relation {R: (X_1 \times \dots \times X_k)|\Omega \rightarrow \{\hbox{true}, \hbox{false}\}|\Omega}. In particular, given {x_i \in X_i|\Omega} for {i=1,\dots,k}, the relation {R(x_1,\dots,x_k)} may be deterministically true, deterministically false, or have some other stochastic truth value.

Remark 3 In the degenerate case when {\Omega} is null, stochastic logic becomes a bit weird: all stochastic statements are deterministically true, as are their stochastic negations, since every event in {\Omega} (even the empty set) now holds with full probability. Among other pathologies, the empty set now has a global element over {\Omega} (this is analogous to the notorious convention {0^0=1}), and any two deterministic objects {x,y} become equal over {\Omega}: {x|\Omega=y|\Omega}.

The following simple observation is crucial to subsequent discussion. If {(x_n)_{n \in {\bf N}_+}} is a sequence taking values in the global elements {\Gamma(X|\Omega)} of a stochastic space {X|\Omega}, then we may also define global elements {x_n \in \Gamma(X|\Omega)} for stochastic indices {n \in {\bf N}_+|\Omega} as well, by appealing to the countable gluing axiom to glue together {x_{n_0}} restricted to the set {\{ \omega \in \Omega: n(\omega) = n_0\}} for each deterministic natural number {n_0} to form {x_n}. With this definition, the map {n \mapsto x_n} is a stochastic function from {{\bf N}_+|\Omega} to {X|\Omega}; indeed, this creates a one-to-one correspondence between external sequences (maps {n \mapsto x_n} from {{\bf N}_+} to {\Gamma(X|\Omega)}) and stochastic sequences (stochastic functions {n \mapsto x_n} from {{\bf N}_+|\Omega} to {X|\Omega}). Similarly with {{\bf N}_+} replaced by any other at most countable set. This observation will be important in allowing many deterministic arguments involving sequences to be carried over to the stochastic setting.
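In the discrete model case, this gluing is simply fibrewise composition: the glued element takes the value {x_{n(\omega)}(\omega)} at each state {\omega}. As a toy illustration (my own, with a hypothetical three-point sample space; not part of the formal development), one can sketch this in a few lines of code:

```python
# Toy model of gluing along a stochastic index over a finite sample space.
# A global element of X|Omega is modelled as a dict omega -> value, and a
# stochastic positive natural number as a dict omega -> N_+.

Omega = ["heads", "tails", "edge"]

# an external sequence of global elements x_1, x_2, ...
# (here, arbitrarily, x_k takes the value k * len(omega) at omega)
def x(k):
    return {omega: k * len(omega) for omega in Omega}

# a stochastic positive natural number: a map Omega -> N_+
n = {"heads": 1, "tails": 3, "edge": 2}

# the glued global element x_n, defined fibrewise: x_n(omega) = x_{n(omega)}(omega)
x_n = {omega: x(n[omega])[omega] for omega in Omega}

print(x_n)  # {'heads': 5, 'tails': 15, 'edge': 8}
```

The same fibrewise recipe is what the countable gluing axiom produces in general, with the caveat that in the continuous setting one works modulo almost sure equivalence.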

We now specialise from the extremely broad discipline of set theory to the more focused discipline of real analysis. There are two fundamental axioms that underlie real analysis (and in particular distinguish it from real algebra). The first is the Archimedean property, which we phrase in the “no infinitesimal” formulation as follows:

Proposition 2 (Archimedean property) Let {x \in {\bf R}} be such that {x \leq 1/n} for all positive natural numbers {n}. Then {x \leq 0}.

The other is the least upper bound axiom:

Proposition 3 (Least upper bound axiom) Let {S} be a non-empty subset of {{\bf R}} which has an upper bound {M \in {\bf R}}, thus {x \leq M} for all {x \in S}. Then there exists a unique real number {\sup S \in {\bf R}} with the following properties:

  • {x \leq \sup S} for all {x \in S}.
  • For any real {L < \sup S}, there exists {x \in S} such that {L < x \leq \sup S}.
  • {\sup S \leq M}.

Furthermore, {\sup S} does not depend on the choice of {M}.

The Archimedean property extends easily to the stochastic setting:

Proposition 4 (Stochastic Archimedean property) Let {x \in \Gamma({\bf R}|\Omega)} be such that {x \leq 1/n} for all deterministic natural numbers {n}. Then {x \leq 0}.

Remark 4 Here, incidentally, is one place in which this stochastic formalism deviates from the nonstandard analysis formalism, as the latter certainly permits the existence of infinitesimal elements. On the other hand, we caution that stochastic real numbers are permitted to be unbounded, so that the alternative formulation of the Archimedean property (that every real number is bounded by some natural number) is not valid in the stochastic setting.

The proof is easy and is left to the reader. The least upper bound axiom also extends nicely to the stochastic setting, but the proof requires more work (in particular, our argument uses the monotone convergence theorem):

Theorem 5 (Stochastic least upper bound axiom) Let {S|\Omega} be a stochastic subset of {{\bf R}|\Omega} which has a global upper bound {M \in {\bf R}|\Omega}, thus {x \leq M} for all {x \in \Gamma(S|\Omega)}, and is globally non-empty in the sense that there is at least one global element {x \in \Gamma(S|\Omega)}. Then there exists a unique stochastic real number {\sup S \in \Gamma({\bf R}|\Omega)} with the following properties:

  • {x \leq \sup S} for all {x \in \Gamma(S|\Omega)}.
  • For any stochastic real {L < \sup S}, there exists {x \in \Gamma(S|\Omega)} such that {L < x \leq \sup S}.
  • {\sup S \leq M}.

Furthermore, {\sup S} does not depend on the choice of {M}.

For future reference, we note that the same result holds with {{\bf R}} replaced by {{\bf N} \cup \{+\infty\}} throughout, since the latter may be embedded in the former, for instance by mapping {n} to {1 - \frac{1}{n+1}} and {+\infty} to {1}. In applications, the above theorem serves as a reasonable substitute for the countable axiom of choice, which does not appear to hold in unrestricted generality relative to a measure space; in particular, it can be used to generate various extremising sequences for stochastic functionals on various stochastic function spaces.

Proof: Uniqueness is clear (using the Archimedean property), as well as the independence on {M}, so we turn to existence. By using an order-preserving map from {{\bf R}} to {(-1,1)} (e.g. {x \mapsto \frac{2}{\pi} \hbox{arctan}(x)}) we may assume that {S|\Omega} is a subset of {(-1,1)|\Omega}, and that {M < 1}.

We observe that {\Gamma(S|\Omega)} is a lattice: if {x, y \in \Gamma(S|\Omega)}, then {\max(x,y)} and {\min(x,y)} also lie in {\Gamma(S|\Omega)}. Indeed, {\max(x,y)} may be formed by appealing to the countable gluing axiom to glue {y} (restricted to the set {\{ \omega \in \Omega: x(\omega) < y(\omega) \}}) with {x} (restricted to the set {\{ \omega \in \Omega: x(\omega) \geq y(\omega) \}}), and similarly for {\min(x,y)}. (Here we use the fact that relations such as {<} are Borel measurable on {{\bf R}}.)

Let {A \in {\bf R}} denote the deterministic quantity

\displaystyle  A := \sup \{ \int_\Omega x(\omega)\ d\mu(\omega): x \in \Gamma(S|\Omega) \}

then (by Proposition 3!) {A} is well-defined; here we use the hypothesis that {\mu(\Omega)} is finite. Thus we may find a sequence {(x_n)_{n \in {\bf N}}} of elements {x_n} of {\Gamma(S|\Omega)} such that

\displaystyle  \int_\Omega x_n(\omega)\ d\mu(\omega) \rightarrow A \hbox{ as } n \rightarrow \infty. \ \ \ \ \ (1)

Using the lattice property, we may assume that the {x_n} are non-decreasing: {x_n \leq x_m} whenever {n \leq m}. If we then define {\sup S(\omega) := \sup_n x_n(\omega)} (after choosing measurable representatives of each equivalence class {x_n}), then {\sup S} is a stochastic real with {\sup S \leq M}.

If {x \in \Gamma(S|\Omega)}, then {\max(x,x_n) \in \Gamma(S|\Omega)}, and so

\displaystyle  \int_\Omega \max(x,x_n)\ d\mu(\omega) \leq A.

From this and (1) we conclude that

\displaystyle  \int_\Omega \max(x-x_n,0)\ d\mu(\omega) \rightarrow 0 \hbox{ as } n \rightarrow \infty.

From monotone convergence, we conclude that

\displaystyle  \int_\Omega \max(x-\sup S,0)\ d\mu(\omega) = 0

and so {x \leq \sup S}, as required.

Now let {L < \sup S} be a stochastic real. After choosing measurable representatives of each relevant equivalence class, we see that for almost every {\omega \in \Omega}, we can find a natural number {n(\omega)} with {x_{n(\omega)} > L}. If we choose {n(\omega)} to be the first such positive natural number when it exists, and (say) {1} otherwise, then {n} is a stochastic positive natural number and {L < x_n}. The claim follows. \Box
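In the discrete model case the content of Theorem 5 is transparent: {\sup S} is just the fibrewise supremum, and the three listed properties can be checked fibre by fibre. Here is a small sanity check (a toy construction of my own, with finite fibres, so that every fibrewise choice gives a global element):

```python
# Toy discrete model: Omega finite, and Gamma(S|Omega) taken to be all global
# sections drawn fibrewise from finite fibres S_omega.  Theorem 5 then says
# that sup S is the fibrewise supremum.
import itertools

Omega = [0, 1, 2]
fibres = {0: [0.1, 0.5], 1: [0.2, 0.9], 2: [0.3]}

# all global elements of S|Omega: one choice from each fibre
sections = [dict(zip(Omega, choice))
            for choice in itertools.product(*(fibres[w] for w in Omega))]

# the fibrewise supremum
sup_S = {w: max(fibres[w]) for w in Omega}

# property 1: x <= sup S for every global element x (checked fibrewise)
assert all(x[w] <= sup_S[w] for x in sections for w in Omega)

# in this finite toy model the supremum is attained by gluing maximisers,
# so it is itself a global element of S|Omega
assert sup_S in sections

print(sup_S)  # {0: 0.5, 1: 0.9, 2: 0.3}
```

The interest of the theorem, of course, is in the continuous setting, where the fibrewise picture is unavailable and one argues via the monotone convergence theorem as above.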

Remark 5 One can abstract away the role of the measure {\mu} here, leaving only the ideal of null sets. The property that the measure is finite is then replaced by the more general property that given any non-empty family of measurable sets, there is an at most countable union of sets in that family that is an upper bound modulo null sets for all elements in that family.

Using Proposition 4 and Theorem 5, one can then revisit many of the other foundational results of deterministic real analysis, and develop stochastic analogues; we give some examples of this below the fold (focusing on the Heine-Borel theorem and a case of the spectral theorem). As an application of this formalism, we revisit some of the Furstenberg-Zimmer structural theory of measure-preserving systems, particularly that of relatively compact and relatively weakly mixing systems, and interpret them in this framework, basically as stochastic versions of compact and weakly mixing systems (though with the caveat that the shift map is allowed to act non-trivially on the underlying probability space). As this formalism is “point-free”, in that it avoids explicit use of fibres and disintegrations, it will be well suited for generalising this structure theory to settings in which the underlying probability spaces are not standard Borel, and the underlying groups are uncountable; I hope to discuss such generalisations in future blog posts.

Remark 6 Roughly speaking, stochastic real analysis can be viewed as a restricted subset of classical real analysis in which all operations have to be “measurable” with respect to the base space. In particular, indiscriminate application of the axiom of choice is not permitted, and one should largely restrict oneself to performing countable unions and intersections rather than arbitrary unions or intersections. Presumably one can formalise this intuition with a suitable “countable transfer principle”, but I was not able to formulate a clean and general principle of this sort, instead verifying various assertions about stochastic objects by hand rather than by direct transfer from the deterministic setting. However, it would be desirable to have such a principle, since otherwise one is faced with the tedious task of redoing all the foundations of real analysis (or whatever other base theory of mathematics one is going to be working in) in the stochastic setting by carefully repeating all the arguments.

More generally, topos theory is a good formalism for capturing precisely the informal idea of performing mathematics with certain operations, such as the axiom of choice, the law of the excluded middle, or arbitrary unions and intersections, being somehow “prohibited” or otherwise “restricted”.


Two of the most famous open problems in additive prime number theory are the twin prime conjecture and the binary Goldbach conjecture. They have quite similar forms:

  • Twin prime conjecture The equation {p_1 - p_2 = 2} has infinitely many solutions with {p_1,p_2} prime.
  • Binary Goldbach conjecture The equation {p_1 + p_2 = N} has at least one solution with {p_1,p_2} prime for any given even {N \geq 4}.

In view of this similarity, it is not surprising that partial progress on these two conjectures has tracked fairly closely; the twin prime conjecture is generally considered slightly easier than the binary Goldbach conjecture, but broadly speaking any progress made on one of the conjectures has also led to a comparable amount of progress on the other. (For instance, Chen’s theorem has a version for the twin prime conjecture, and a version for the binary Goldbach conjecture.) Also, the notorious parity obstruction is present in both problems, preventing a solution to either conjecture by almost all known methods (see this previous blog post for more discussion).

In this post, I would like to note a divergence from this general principle, with regards to bounded error versions of these two conjectures:

  • Twin prime with bounded error The inequalities {0 < p_1 - p_2 < H} have infinitely many solutions with {p_1,p_2} prime for some absolute constant {H}.
  • Binary Goldbach with bounded error The inequalities {N \leq p_1+p_2 \leq N+H} have at least one solution with {p_1,p_2} prime for any sufficiently large {N} and some absolute constant {H}.

The first of these statements is now a well-known theorem of Zhang, and the Polymath8b project hosted on this blog has managed to lower {H} to {H=246} unconditionally, and to {H=6} assuming the generalised Elliott-Halberstam conjecture. However, the second statement remains open; the best result that the Polymath8b project could manage in this direction is that (assuming GEH) at least one of the binary Goldbach conjecture with bounded error, or the twin prime conjecture with no error, had to be true.

All the known proofs of Zhang’s theorem proceed through sieve-theoretic means. Basically, they take as input equidistribution results that control the size of discrepancies such as

\displaystyle  \Delta(f; a\ (q)) := \sum_{x \leq n \leq 2x; n=a\ (q)} f(n) - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x} f(n) \ \ \ \ \ (1)

for various congruence classes {a\ (q)} and various arithmetic functions {f}, e.g. {f(n) = \Lambda(n+h_i)} (or more generally {f(n) = \alpha * \beta(n+h_i)} for various {\alpha,\beta}). After taking some carefully chosen linear combinations of these discrepancies, and using the trivial positivity lower bound

\displaystyle  a_n \geq 0 \hbox{ for all } n \implies \sum_n a_n \geq 0 \ \ \ \ \ (2)

one eventually obtains (for suitable {H}) a non-trivial lower bound of the form

\displaystyle  \sum_{x \leq n \leq 2x} \nu(n) 1_A(n) > 0

where {\nu} is some weight function, and {A} is the set of {n} such that there are at least two primes in the interval {[n,n+H]}. This implies at least one solution to the inequalities {0 < p_1 - p_2 < H} with {p_1,p_2 \sim x}, and Zhang’s theorem follows.

In a similar vein, one could hope to use bounds on discrepancies such as (1) (for {x} comparable to {N}), together with the trivial lower bound (2), to obtain (for sufficiently large {N}, and suitable {H}) a non-trivial lower bound of the form

\displaystyle  \sum_{n \leq N} \nu(n) 1_B(n) > 0 \ \ \ \ \ (3)

for some weight function {\nu}, where {B} is the set of {n} such that there is at least one prime in each of the intervals {[n,n+H]} and {[N-n-H,N-n]}. This would imply the binary Goldbach conjecture with bounded error.

However, the parity obstruction blocks such a strategy from working (for much the same reason that it blocks any bound of the form {H \leq 4} in Zhang’s theorem, as discussed in the Polymath8b paper.) The reason is as follows. The sieve-theoretic arguments are linear with respect to the {n} summation, and as such, any such sieve-theoretic argument would automatically also work in a weighted setting in which the {n} summation is weighted by some non-negative weight {\omega(n) \geq 0}. More precisely, if one could control the weighted discrepancies

\displaystyle  \Delta(f\omega; a\ (q)) = \sum_{x \leq n \leq 2x; n=a\ (q)} f(n) \omega(n) - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x} f(n) \omega(n)

to essentially the same accuracy as the unweighted discrepancies (1), then thanks to the trivial weighted version

\displaystyle  a_n \geq 0 \hbox{ for all } n \implies \sum_n a_n \omega(n) \geq 0

of (2), any sieve-theoretic argument that was capable of proving (3) would also be capable of proving the weighted estimate

\displaystyle  \sum_{n \leq N} \nu(n) 1_B(n) \omega(n) > 0. \ \ \ \ \ (4)

However, (4) may be defeated by a suitable choice of weight {\omega}, namely

\displaystyle  \omega(n) := \prod_{i=1}^H (1 + \lambda(n) \lambda(n+i)) \times \prod_{j=0}^H (1 - \lambda(n) \lambda(N-n-j))

where {n \mapsto \lambda(n)} is the Liouville function, which counts the parity of the number of prime factors of a given number {n}. Since {\lambda(n)^2 = 1}, one can expand out {\omega(n)} as the sum of {1} and a finite number of other terms, each of which consists of the product of two or more translates (or reflections) of {\lambda}. But from the Möbius randomness principle (or its analogue for the Liouville function), such products of {\lambda} are widely expected to be essentially orthogonal to any arithmetic function {f(n)} that is arising from a single multiplicative function such as {\Lambda}, even on very short arithmetic progressions. As such, replacing {1} by {\omega(n)} in (1) should have a negligible effect on the discrepancy. On the other hand, in order for {\omega(n)} to be non-zero, {\lambda(n+i)} has to have the same sign as {\lambda(n)} for all {1 \leq i \leq H}, and {\lambda(N-n-j)} has to have the opposite sign to {\lambda(n)} for all {0 \leq j \leq H}. Since every prime {p} has {\lambda(p) = -1}, this means that {n+i} and {N-n-j} cannot simultaneously be prime for any {0 \leq i,j \leq H}, and so {1_B(n) \omega(n)} vanishes identically, contradicting (4). This indirectly rules out any modification of the Goldston-Pintz-Yildirim/Zhang method for establishing the binary Goldbach conjecture with bounded error.
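One can check this annihilation property numerically. The following short computation (an illustration of my own, with the hypothetical small parameters {H=4} and {N=1000}) verifies that {1_B(n) \omega(n) = 0} for every {n} in range:

```python
# Verify numerically that the Liouville weight omega(n) vanishes on the set B,
# so that the weighted sum (4) is identically zero.  Toy parameters only.

def liouville(n):
    # (-1)^(number of prime factors of n, counted with multiplicity)
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

H, N = 4, 1000

def omega(n):
    w = 1
    for i in range(1, H + 1):
        w *= 1 + liouville(n) * liouville(n + i)
    for j in range(0, H + 1):
        w *= 1 - liouville(n) * liouville(N - n - j)
    return w

def in_B(n):
    # at least one prime in each of [n, n+H] and [N-n-H, N-n]
    return (any(is_prime(n + i) for i in range(H + 1))
            and any(is_prime(N - n - j) for j in range(H + 1)))

assert all(omega(n) == 0 for n in range(2, N - H - 1) if in_B(n))
print("1_B(n) * omega(n) vanishes identically for H=4, N=1000")
```

Of course, the sieve-theoretic point is not this (elementary) vanishing, but that the weight {\omega} is expected to be invisible to the discrepancy estimates (1).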

The above argument is not watertight, and one could envisage some ways around this problem. One of them is that the Möbius randomness principle could simply be false, in which case the parity obstruction vanishes. A good example of this is the result of Heath-Brown that shows that if there are infinitely many Siegel zeroes (which is a strong violation of the Möbius randomness principle), then the twin prime conjecture holds. Another way around the obstruction is to start controlling the discrepancy (1) for functions {f} that are combinations of more than one multiplicative function, e.g. {f(n) = \Lambda(n) \Lambda(n+2)}. However, controlling such functions looks to be at least as difficult as the twin prime conjecture (which is morally equivalent to obtaining non-trivial lower bounds for {\sum_{x \leq n \leq 2x} \Lambda(n) \Lambda(n+2)}). A third option is not to use a sieve-theoretic argument, but to try a different method (e.g. the circle method). However, most other known methods also exhibit linearity in the “{n}” variable and I would suspect they would be vulnerable to a similar obstruction. (In any case, the circle method specifically has some other difficulties in tackling binary problems, as discussed in this previous post.)

Due to some requests, I’m uploading to my blog the slides for my recent talk in Segovia (for the birthday conference of Michael Cowling) on “Hilbert’s fifth problem and approximate groups“.  The slides cover essentially the same range of topics in this series of lecture notes, or in this text of mine, though of course in considerably less detail, given that the slides are meant to be presented in an hour.

Let {\bar{{\bf Q}}} be the algebraic closure of {{\bf Q}}, that is to say the field of algebraic numbers. We fix an embedding of {\bar{{\bf Q}}} into {{\bf C}}, giving rise to a complex absolute value {z \mapsto |z|} for algebraic numbers {z \in \bar{{\bf Q}}}.

Let {\alpha \in \bar{{\bf Q}}} be of degree {D > 1}, so that {\alpha} is irrational. A classical theorem of Liouville gives the quantitative bound

\displaystyle  |\alpha - \frac{p}{q}| \geq c \frac{1}{|q|^D} \ \ \ \ \ (1)

on how well the algebraic number {\alpha} can be approximated by rational numbers {p/q}, where {c>0} depends on {\alpha,D} but not on {p,q}. Indeed, if one lets {\alpha = \alpha_1, \alpha_2, \dots, \alpha_D} be the Galois conjugates of {\alpha}, then the quantity {\prod_{i=1}^D |q \alpha_i - p|} is a non-zero natural number divided by a constant, and so we have the trivial lower bound

\displaystyle  \prod_{i=1}^D |q \alpha_i - p| \geq c

from which the bound (1) easily follows. A well known corollary of the bound (1) is that Liouville numbers are automatically transcendental.
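As a concrete sanity check of (1), take {\alpha = \sqrt{2}} and {D = 2}: since {p^2 - 2q^2} is a non-zero integer for any rational {p/q}, one obtains {|\sqrt{2} - p/q| \geq \frac{1}{4q^2}} (say). The following brute-force computation (my own illustration, not part of the argument) confirms this for all small denominators:

```python
# Check the effective Liouville bound |sqrt(2) - p/q| >= c/q^2 with c = 0.25.
# It suffices to test the closest numerator p for each q, since any other p
# only increases the gap.
from math import sqrt

alpha = sqrt(2)
c = 0.25  # a safe choice of the constant c in (1) for alpha = sqrt(2)

ok = True
for q in range(1, 2000):
    p = round(alpha * q)  # best numerator for this denominator
    ok = ok and abs(alpha - p / q) >= c / q ** 2

assert ok
print("Liouville bound |sqrt(2) - p/q| >= 0.25/q^2 holds for all q < 2000")
```

This effectivity is exactly what is lost in the Thue-Siegel-Roth improvement (2) discussed next.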

The famous theorem of Thue, Siegel and Roth improves the bound (1) to

\displaystyle  |\alpha - \frac{p}{q}| \geq c \frac{1}{|q|^{2+\epsilon}} \ \ \ \ \ (2)

for any {\epsilon>0} and rationals {\frac{p}{q}}, where {c>0} depends on {\alpha,\epsilon} but not on {p,q}. Apart from the {\epsilon} in the exponent and the implied constant, this bound is optimal, as can be seen from Dirichlet’s theorem. This theorem is a good example of the ineffectivity phenomenon that affects a large portion of modern number theory: the implied constant in the {\gg} notation is known to be finite, but there is no explicit bound for it in terms of the coefficients of the polynomial defining {\alpha} (in contrast to (1), for which an effective bound may be easily established). This is ultimately due to the reliance on the “dueling conspiracy” (or “repulsion phenomenon”) strategy. We do not as yet have a good way to rule out one counterexample to (2), in which {\frac{p}{q}} is far closer to {\alpha} than {\frac{1}{|q|^{2+\epsilon}}}; however we can rule out two such counterexamples, by playing them off of each other.

A powerful strengthening of the Thue-Siegel-Roth theorem is given by the subspace theorem, first proven by Schmidt and then generalised further by several authors. To motivate the theorem, first observe that the Thue-Siegel-Roth theorem may be rephrased as a bound of the form

\displaystyle  | \alpha p - \beta q | \times | \alpha' p - \beta' q | \geq c (1 + |p| + |q|)^{-\epsilon} \ \ \ \ \ (3)

for any algebraic numbers {\alpha,\beta,\alpha',\beta'} with {(\alpha,\beta)} and {(\alpha',\beta')} linearly independent (over the algebraic numbers), and any {(p,q) \in {\bf Z}^2} and {\epsilon>0}, except when {\alpha,\beta} or {\alpha',\beta'} are rationally dependent (i.e. one is a rational multiple of the other), in which case one has to remove some lines (i.e. subspaces in {{\bf Q}^2}) of rational slope from the space {{\bf Z}^2} of pairs {(p,q)} to which the bound (3) does not apply (namely, those lines for which the left-hand side vanishes). Here {c>0} can depend on {\alpha,\beta,\alpha',\beta',\epsilon} but not on {p,q}. More generally, we have

Theorem 1 (Schmidt subspace theorem) Let {d} be a natural number. Let {L_1,\dots,L_d: \bar{{\bf Q}}^d \rightarrow \bar{{\bf Q}}} be linearly independent linear forms. Then for any {\epsilon>0}, one has the bound

\displaystyle  \prod_{i=1}^d |L_i(x)| \geq c (1 + \|x\| )^{-\epsilon}

for all {x \in {\bf Z}^d}, outside of a finite number of proper subspaces of {{\bf Q}^d}, where

\displaystyle  \| (x_1,\dots,x_d) \| := \max( |x_1|, \dots, |x_d| )

and {c>0} depends on {\epsilon, d} and the coefficients of the linear forms {L_1,\dots,L_d}, but is independent of {x}.

Being a generalisation of the Thue-Siegel-Roth theorem, it is unsurprising that the known proofs of the subspace theorem are also ineffective with regards to the constant {c}. (However, the number of exceptional subspaces may be bounded effectively; cf. the situation with the Skolem-Mahler-Lech theorem, discussed in this previous blog post.) Once again, the lower bound here is basically sharp except for the {\epsilon} factor and the implied constant: given any {\delta_1,\dots,\delta_d > 0} with {\delta_1 \dots \delta_d = 1}, a simple volume packing argument (the same one used to prove the Dirichlet approximation theorem) shows that for any sufficiently large {N \geq 1}, one can find integers {x_1,\dots,x_d \in [-N,N]}, not all zero, such that

\displaystyle  |L_i(x)| \ll \delta_i

for all {i=1,\dots,d}. Thus one can get {\prod_{i=1}^d |L_i(x)|} comparable to {1} in many different ways.

There are important generalisations of the subspace theorem to other number fields than the rationals (and to other valuations than the Archimedean valuation {z \mapsto |z|}); we will develop one such generalisation below.

The subspace theorem is one of many finiteness theorems in Diophantine geometry; in this case, it is the number of exceptional subspaces which is finite. It turns out that finiteness theorems are very compatible with the language of nonstandard analysis. (See this previous blog post for a review of the basics of nonstandard analysis, and in particular for the nonstandard interpretation of asymptotic notation such as {\ll} and {o()}.) The reason for this is that a standard set {X} is finite if and only if it contains no strictly nonstandard elements (that is to say, elements of {{}^* X \backslash X}). This makes for a clean formulation of finiteness theorems in the nonstandard setting. For instance, the standard form of Bezout’s theorem asserts that if {P(x,y), Q(x,y)} are coprime polynomials over some field, then the curves {\{ (x,y): P(x,y) = 0\}} and {\{ (x,y): Q(x,y)=0\}} intersect in only finitely many points. The nonstandard version of this is then

Theorem 2 (Bezout’s theorem, nonstandard form) Let {P(x,y), Q(x,y)} be standard coprime polynomials. Then there are no strictly nonstandard solutions to {P(x,y)=Q(x,y)=0}.

Now we reformulate Theorem 1 in nonstandard language. We need a definition:

Definition 3 (General position) Let {K \subset L} be nested fields. A point {x = (x_1,\dots,x_d)} in {L^d} is said to be in {K}-general position if it is not contained in any hyperplane of {L^d} definable over {K}, or equivalently if one has

\displaystyle  a_1 x_1 + \dots + a_d x_d = 0 \iff a_1=\dots = a_d = 0

for any {a_1,\dots,a_d \in K}.

Theorem 4 (Schmidt subspace theorem, nonstandard version) Let {d} be a standard natural number. Let {L_1,\dots,L_d: \bar{{\bf Q}}^d \rightarrow \bar{{\bf Q}}} be linearly independent standard linear forms. Let {x \in {}^* {\bf Z}^d} be a tuple of nonstandard integers which is in {{\bf Q}}-general position (in particular, this forces {x} to be strictly nonstandard). Then one has

\displaystyle  \prod_{i=1}^d |L_i(x)| \gg \|x\|^{-o(1)},

where we extend {L_i} from {\bar{{\bf Q}}} to {{}^* \bar{{\bf Q}}} (and also similarly extend {\| \|} from {{\bf Z}^d} to {{}^* {\bf Z}^d}) in the usual fashion.

Observe that (as is usual when translating to nonstandard analysis) some of the epsilons and quantifiers that are present in the standard version become hidden in the nonstandard framework, being moved inside concepts such as “strictly nonstandard” or “general position”. We remark that as {x} is in {{\bf Q}}-general position, it is also in {\bar{{\bf Q}}}-general position (as an easy Galois-theoretic argument shows), and the requirement that the {L_1,\dots,L_d} are linearly independent is thus equivalent to {L_1(x),\dots,L_d(x)} being {\bar{{\bf Q}}}-linearly independent.

Exercise 1 Verify that Theorem 1 and Theorem 4 are equivalent. (Hint: there are only countably many proper subspaces of {{\bf Q}^d}.)

We will not prove the subspace theorem here, but instead focus on a particular application of the subspace theorem, namely to counting integer points on curves. In this paper of Corvaja and Zannier, the subspace theorem was used to give a new proof of the following basic result of Siegel:

Theorem 5 (Siegel’s theorem on integer points) Let {P \in {\bf Q}[x,y]} be an irreducible polynomial of two variables, such that the affine plane curve {C := \{ (x,y): P(x,y)=0\}} either has genus at least one, or has at least three points on the line at infinity, or both. Then {C} has only finitely many integer points {(x,y) \in {\bf Z}^2}.

This is a finiteness theorem, and as such may be easily converted to a nonstandard form:

Theorem 6 (Siegel’s theorem, nonstandard form) Let {P \in {\bf Q}[x,y]} be a standard irreducible polynomial of two variables, such that the affine plane curve {C := \{ (x,y): P(x,y)=0\}} either has genus at least one, or has at least three points on the line at infinity, or both. Then {C} does not contain any strictly nonstandard integer points {(x_*,y_*) \in {}^* {\bf Z}^2 \backslash {\bf Z}^2}.

Note that Siegel’s theorem can fail for genus zero curves that only meet the line at infinity at just one or two points; the key examples here are the graphs {\{ (x,y): y - f(x) = 0\}} for a polynomial {f \in {\bf Z}[x]}, and the Pell equation curves {\{ (x,y): x^2 - dy^2 = 1 \}}. Siegel’s theorem can be compared with the more difficult theorem of Faltings, which establishes finiteness of rational points (not just integer points), but now needs the stricter requirement that the curve {C} has genus at least two (to avoid the additional counterexample of elliptic curves of positive rank, which have infinitely many rational points).

The standard proofs of Siegel’s theorem rely on a combination of the Thue-Siegel-Roth theorem and a number of results on abelian varieties (notably the Mordell-Weil theorem). The Corvaja-Zannier argument rebalances the difficulty of the argument by replacing the Thue-Siegel-Roth theorem by the more powerful subspace theorem (in fact, they need one of the stronger versions of this theorem alluded to earlier), while greatly reducing the reliance on results on abelian varieties. Indeed, for curves with three or more points at infinity, no theory from abelian varieties is needed at all, while for the remaining cases, one mainly needs the existence of the Abel-Jacobi embedding, together with a relatively elementary theorem of Chevalley-Weil which is used in the proof of the Mordell-Weil theorem, but is significantly easier to prove.

The Corvaja-Zannier argument (together with several further applications of the subspace theorem) is presented nicely in this Bourbaki exposé of Bilu. To establish the theorem in full generality requires a certain amount of algebraic number theory machinery, such as the theory of valuations on number fields, or of relative discriminants between such number fields. However, the basic ideas can be presented without much of this machinery by focusing on simple special cases of Siegel’s theorem. For instance, we can handle irreducible cubics that meet the line at infinity at exactly three points {[1,\alpha_1,0], [1,\alpha_2,0], [1,\alpha_3,0]}:

Theorem 7 (Siegel’s theorem with three points at infinity) Siegel’s theorem holds when the irreducible polynomial {P(x,y)} takes the form

\displaystyle  P(x,y) = (y - \alpha_1 x) (y - \alpha_2 x) (y - \alpha_3 x) + Q(x,y)

for some quadratic polynomial {Q \in {\bf Q}[x,y]} and some distinct algebraic numbers {\alpha_1,\alpha_2,\alpha_3}.

Proof: We use the nonstandard formalism. Suppose for sake of contradiction that we can find a strictly nonstandard integer point {(x_*,y_*) \in {}^* {\bf Z}^2 \backslash {\bf Z}^2} on a curve {C := \{ (x,y): P(x,y)=0\}} of the indicated form. As this point is infinitesimally close to the line at infinity, {y_*/x_*} must be infinitesimally close to one of {\alpha_1,\alpha_2,\alpha_3}; without loss of generality we may assume that {y_*/x_*} is infinitesimally close to {\alpha_1}.

We now use a version of the polynomial method, to find some polynomials of controlled degree that vanish to high order on the “arm” of the cubic curve {C} that asymptotes to {[1,\alpha_1,0]}. More precisely, let {D \geq 3} be a large integer (actually {D=3} will already suffice here), and consider the {\bar{{\bf Q}}}-vector space {V} of polynomials {R(x,y) \in \bar{{\bf Q}}[x,y]} of degree at most {D}, and of degree at most {2} in the {y} variable; this space has dimension {3D}. Also, as one traverses the arm {y/x \rightarrow \alpha_1} of {C}, any polynomial {R} in {V} grows at a rate of at most {D}, that is to say {R} has a pole of order at most {D} at the point at infinity {[1,\alpha_1,0]}. By performing Laurent expansions around this point (which is a non-singular point of {C}, as the {\alpha_i} are assumed to be distinct), we may thus find a basis {R_1, \dots, R_{3D}} of {V}, with the property that {R_j} has a pole of order at most {D+1-j} at {[1,\alpha_1,0]} for each {j=1,\dots,3D}.
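As a quick sanity check (not spelled out in the text), the dimension count for {V} comes from enumerating the monomials {x^a y^b} with {a+b \leq D} and {b \leq 2}:

```latex
\dim V \;=\; \#\{(a,b) \in \mathbb{Z}_{\geq 0}^2 : a+b \leq D,\ b \leq 2\}
\;=\; \underbrace{(D+1)}_{b=0} + \underbrace{D}_{b=1} + \underbrace{(D-1)}_{b=2} \;=\; 3D.
```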

From the control of the pole at {[1,\alpha_1,0]}, we have

\displaystyle  |R_j(x_*,y_*)| \ll (|x_*|+|y_*|)^{D+1-j}

for all {j=1,\dots,3D}. The exponents here become negative for {j > D+1}, and on multiplying them all together we see that

\displaystyle  \prod_{j=1}^{3D} |R_j(x_*,y_*)| \ll (|x_*|+|y_*|)^{3D(D+1) - \frac{3D(3D+1)}{2}}.
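For the record, the exponent here simplifies as follows (this computation is implicit in the text):

```latex
\sum_{j=1}^{3D} (D+1-j) \;=\; 3D(D+1) - \frac{3D(3D+1)}{2} \;=\; \frac{3D(1-D)}{2},
\qquad\text{which equals } -9 \text{ when } D=3.
```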

This exponent is negative for {D} large enough (or just take {D=3}). If we expand

\displaystyle  R_j(x_*,y_*) = \sum_{a+b \leq D; b \leq 2} \alpha_{j,a,b} x_*^a y_*^b

for some algebraic numbers {\alpha_{j,a,b}}, then we thus have

\displaystyle  \prod_{j=1}^{3D} |\sum_{a+b \leq D; b \leq 2} \alpha_{j,a,b} x_*^a y_*^b| \ll (|x_*|+|y_*|)^{-\epsilon}

for some standard {\epsilon>0}. Note that the {3D}-dimensional vectors {(\alpha_{j,a,b})_{a+b \leq D; b \leq 2}} are linearly independent in {{\bf C}^{3D}}, because the {R_j} are linearly independent in {V}. Applying the Schmidt subspace theorem in the contrapositive, we conclude that the {3D}-tuple {( x_*^a y_*^b )_{a+b \leq D; b \leq 2} \in {}^* {\bf Z}^{3D}} is not in {{\bf Q}}-general position. That is to say, one has a non-trivial constraint of the form

\displaystyle  \sum_{a+b \leq D; b \leq 2} c_{a,b} x_*^a y_*^b = 0 \ \ \ \ \ (4)

for some standard rational coefficients {c_{a,b}}, not all zero. But, as {P} is irreducible and cubic in {y}, it has no common factor with the standard polynomial {\sum_{a+b \leq D; b \leq 2} c_{a,b} x^a y^b}, so by Bezout’s theorem (Theorem 2) the constraint (4) only has standard solutions, contradicting the strictly nonstandard nature of {(x_*,y_*)}. \Box

Exercise 2 Rewrite the above argument so that it makes no reference to nonstandard analysis. (In this case, the rewriting is quite straightforward; however, there will be a subsequent argument in which the standard version is significantly messier than the nonstandard counterpart, which is the reason why I am working with the nonstandard formalism in this blog post.)

A similar argument works for higher degree curves that meet the line at infinity in three or more points, though if the curve has singularities at infinity then it becomes convenient to rely on the Riemann-Roch theorem to control the dimension of the analogue of the space {V}. Note that when there are only two or fewer points at infinity, though, one cannot get the negative exponent of {-\epsilon} needed to usefully apply the subspace theorem. To deal with this case we require some additional tricks. For simplicity we focus on the case of Mordell curves, although it will be convenient to work with more general number fields {{\bf Q} \subset K \subset \bar{{\bf Q}}} than the rationals:

Theorem 8 (Siegel’s theorem for Mordell curves) Let {k} be a non-zero integer. Then there are only finitely many integer solutions {(x,y) \in {\bf Z}^2} to {y^2 - x^3 = k}. More generally, for any number field {K}, and any nonzero {k \in K}, there are only finitely many algebraic integer solutions {(x,y) \in {\mathcal O}_K^2} to {y^2-x^3=k}, where {{\mathcal O}_K} is the ring of algebraic integers in {K}.
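As an illustrative aside (not part of the argument), one can at least search a finite box for the integer points whose finiteness Theorem 8 asserts; the function name and search strategy below are my own, and the theorem itself gives no effective bound on how large a box would suffice:

```python
from math import isqrt

def mordell_integer_points(k, bound):
    """Search the box |x| <= bound for integer points (x, y) on the
    Mordell curve y^2 - x^3 = k.  Purely illustrative: Siegel's theorem
    guarantees finitely many such points but, being ineffective, gives
    no bound telling us when the search may safely stop."""
    points = []
    for x in range(-bound, bound + 1):
        t = x ** 3 + k            # we need y^2 = x^3 + k
        if t < 0:
            continue
        y = isqrt(t)
        if y * y == t:
            points.append((x, y))
            if y != 0:
                points.append((x, -y))
    return sorted(points)
```

For instance, the classical facts that {y^2 = x^3 - 2} has only the integer points {(3, \pm 5)} (Fermat) and {y^2 = x^3 + 1} only {(-1,0), (0,\pm 1), (2, \pm 3)} (Euler) are consistent with what this search finds in a modest box.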

Again, we will establish the nonstandard version. We need some additional notation:

Definition 9

  • We define an almost rational integer to be a nonstandard {x \in {}^* {\bf Q}} such that {Mx \in {}^* {\bf Z}} for some standard positive integer {M}, and write {{\bf Q} {}^* {\bf Z}} for the {{\bf Q}}-algebra of almost rational integers.
  • If {K} is a standard number field, we define an almost {K}-integer to be a nonstandard {x \in {}^* K} such that {Mx \in {}^* {\mathcal O}_K} for some standard positive integer {M}, and write {K {}^* {\mathcal O}_K} for the {K}-algebra of almost {K}-integers.
  • We define an almost algebraic integer to be a nonstandard {x \in {}^* \bar{{\bf Q}}} such that {Mx} is a nonstandard algebraic integer for some standard positive integer {M}, and write {\bar{{\bf Q}} {}^* {\bf Z}} for the {\bar{{\bf Q}}}-algebra of almost algebraic integers.

Theorem 10 (Siegel for Mordell, nonstandard version) Let {k} be a non-zero standard algebraic number. Then the curve {\{ (x,y): y^2 - x^3 = k \}} does not contain any strictly nonstandard almost algebraic integer point.

    Another way of phrasing this theorem is that if {x,y} are strictly nonstandard almost algebraic integers, then {y^2-x^3} is either strictly nonstandard or zero.

    Exercise 3 Verify that Theorem 8 and Theorem 10 are equivalent.

    Due to all the ineffectivity, our proof does not supply any bound on the solutions {x,y} in terms of {k}, even if one removes all references to nonstandard analysis. It is a conjecture of Hall (a special case of the notorious ABC conjecture) that one has the bound {|x| \ll_\epsilon |k|^{2+\epsilon}} for all {\epsilon>0} (or equivalently {|y| \ll_\epsilon |k|^{3+\epsilon}}), but even the weaker conjecture that {x,y} are of polynomial size in {k} is open. (The best known bounds are of exponential nature, and are proven using a version of Baker’s method: see for instance this text of Sprindzuk.)

    A direct repetition of the arguments used to prove Theorem 7 will not work here, because the Mordell curve {\{ (x,y): y^2 - x^3 = k \}} only hits the line at infinity at one point, {[0,1,0]}. To get around this we will exploit the fact that the Mordell curve is an elliptic curve and thus has a group law on it. We will then divide all the integer points on this curve by two; as elliptic curves have four 2-torsion points, this will end up placing us in a situation like Theorem 7, with four points at infinity. However, there is an obstruction: it is not obvious that dividing an integer point on the Mordell curve by two will produce another integer point. However, this is essentially true (after enlarging the ring of integers slightly) thanks to a general principle of Chevalley and Weil, which can be worked out explicitly in the case of division by two on Mordell curves by relatively elementary means (relying mostly on unique factorisation of ideals of algebraic integers). We give the details below the fold.
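To make the group law concrete, here is a hedged sketch (in Python, with names of my own choosing) of the doubling map on a Mordell curve, using the standard tangent-line formulas of the elliptic group law; "division by two" is the inverse of this operation:

```python
from fractions import Fraction

def double_point(P, k):
    """Double a point P = (x, y) on the curve y^2 = x^3 + k via the
    tangent-line (chord-tangent) formulas, in exact rational arithmetic.
    Assumes y != 0, i.e. P is not a 2-torsion point."""
    x, y = Fraction(P[0]), Fraction(P[1])
    assert y * y == x ** 3 + k    # sanity check: P lies on the curve
    lam = 3 * x * x / (2 * y)     # slope of the tangent to the curve at P
    x2 = lam * lam - 2 * x
    y2 = lam * (x - x2) - y
    return (x2, y2)

# Doubling the integer point (3, 5) on y^2 = x^3 - 2 lands at a rational,
# non-integral point; dividing an integer point by two inverts this, which
# is why the Chevalley-Weil step is needed to recover near-integrality.
Q = double_point((3, 5), -2)
```

One can check directly that the doubled point still satisfies {y^2 - x^3 = -2}, despite no longer having integer coordinates.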


    As laid out in the foundational work of Kolmogorov, a classical probability space (or probability space for short) is a triplet {(X, {\mathcal X}, \mu)}, where {X} is a set, {{\mathcal X}} is a {\sigma}-algebra of subsets of {X}, and {\mu: {\mathcal X} \rightarrow [0,1]} is a countably additive probability measure on {{\mathcal X}}. Given such a space, one can form a number of interesting function spaces, including

    • the (real) Hilbert space {L^2(X, {\mathcal X}, \mu)} of square-integrable functions {f: X \rightarrow {\bf R}}, modulo {\mu}-almost everywhere equivalence, and with the positive definite inner product {\langle f, g\rangle_{L^2(X, {\mathcal X}, \mu)} := \int_X f g\ d\mu}; and
    • the unital commutative Banach algebra {L^\infty(X, {\mathcal X}, \mu)} of essentially bounded functions {f: X \rightarrow {\bf R}}, modulo {\mu}-almost everywhere equivalence, with {\|f\|_{L^\infty(X, {\mathcal X}, \mu)}} defined as the essential supremum of {|f|}.

    There is also a trace {\tau = \tau_\mu: L^\infty(X, {\mathcal X}, \mu) \rightarrow {\bf C}} on {L^\infty} defined by integration: {\tau(f) := \int_X f\ d\mu}.

    One can form the category {\mathbf{Prb}} of classical probability spaces, by defining a morphism {\phi: (X, {\mathcal X}, \mu) \rightarrow (Y, {\mathcal Y}, \nu)} between probability spaces to be a function {\phi: X \rightarrow Y} which is measurable (thus {\phi^{-1}(E) \in {\mathcal X}} for all {E \in {\mathcal Y}}) and measure-preserving (thus {\mu(\phi^{-1}(E)) = \nu(E)} for all {E \in {\mathcal Y}}).

    Let us now abstract the algebraic features of these spaces as follows; for want of a better name, I will refer to this abstraction as an algebraic probability space, which is very similar to the non-commutative probability spaces studied in this previous post, except that these spaces are now commutative (and real).

    Definition 1 An algebraic probability space is a pair {({\mathcal A}, \tau)} where

    • {{\mathcal A}} is a unital commutative real algebra;
    • {\tau: {\mathcal A} \rightarrow {\bf R}} is a homomorphism such that {\tau(1)=1} and {\tau( f^2 ) \geq 0} for all {f \in {\mathcal A}};
    • Every element {f} of {{\mathcal A}} is bounded in the sense that {\sup_{k \geq 1} \tau( f^{2k} )^{1/2k} < \infty}. (Technically, this isn’t an algebraic property, but I need it for technical reasons.)

    A morphism {\phi: ({\mathcal A}_1, \tau_1) \rightarrow ({\mathcal A}_2, \tau_2)} is a homomorphism {\phi^*: {\mathcal A}_2 \rightarrow {\mathcal A}_1} which is trace-preserving, in the sense that {\tau_1(\phi^*(f)) = \tau_2(f)} for all {f \in {\mathcal A}_2}.

    For want of a better name, I’ll denote the category of algebraic probability spaces as {\mathbf{AlgPrb}}. One can view this category as the opposite category to that of (a subcategory of) the category of tracial commutative real algebras. One could emphasise this opposite nature by denoting the algebraic probability space as {({\mathcal A}, \tau)^{op}} rather than {({\mathcal A},\tau)}; another suggestive (but slightly inaccurate) notation, inspired by the language of schemes, would be {\hbox{Spec}({\mathcal A},\tau)} rather than {({\mathcal A},\tau)}. However, we will not adopt these conventions here, and refer to algebraic probability spaces just by the pair {({\mathcal A},\tau)}.

    By the previous discussion, we have a covariant functor {F: \textbf{Prb} \rightarrow \textbf{AlgPrb}} that takes a classical probability space {(X, {\mathcal X}, \mu)} to its algebraic counterpart {(L^\infty(X, {\mathcal X},\mu), \tau_\mu)}, with a morphism {\phi: (X, {\mathcal X}, \mu) \rightarrow (Y, {\mathcal Y}, \nu)} of classical probability spaces mapping to a morphism {F(\phi): (L^\infty(X, {\mathcal X},\mu), \tau_\mu) \rightarrow (L^\infty(Y, {\mathcal Y},\nu), \tau_\nu)} of the corresponding algebraic probability spaces by the formula

    \displaystyle  F(\phi)^* f := f \circ \phi

    for {f \in L^\infty(Y, {\mathcal Y}, \nu)}. One easily verifies that this is a functor.

    In this post I would like to describe a functor {G: \textbf{AlgPrb} \rightarrow \textbf{Prb}} which partially inverts {F} (up to natural isomorphism), that is to say a recipe for starting with an algebraic probability space {({\mathcal A}, \tau)} and producing a classical probability space {(X, {\mathcal X}, \mu)}. This recipe is not new – it is basically the (commutative) Gelfand-Naimark-Segal construction (discussed in this previous post) combined with the Loomis-Sikorski theorem (discussed in this previous post). However, I wanted to put the construction in a single location for sake of reference. I also wanted to make the point that {F} and {G} are not complete inverses; there is a bit of information in the algebraic probability space (e.g. topological information) which is lost when passing back to the classical probability space. In some future posts, I would like to develop some ergodic theory using the algebraic foundations of probability theory rather than the classical foundations; this turns out to be convenient in the ergodic theory arising from nonstandard analysis (such as that described in this previous post), in which the groups involved are uncountable and the underlying spaces are not standard Borel spaces.

    Let us describe how to construct the functor {G}, with details postponed to below the fold.

    1. Starting with an algebraic probability space {({\mathcal A}, \tau)}, form an inner product on {{\mathcal A}} by the formula {\langle f, g \rangle := \tau(fg)}, and also form the spectral radius {\rho(f) :=\lim_{k \rightarrow \infty} \tau(f^{2^k})^{1/2^k}}.
    2. The inner product is clearly positive semi-definite. Quotienting out the null vectors and taking completions, we arrive at a real Hilbert space {L^2 = L^2({\mathcal A},\tau)}, to which the trace {\tau} may be extended.
    3. Somewhat less obviously, the spectral radius is well-defined and gives a norm on {{\mathcal A}}. Taking {L^2} limits of sequences in {{\mathcal A}} of bounded spectral radius gives us a subspace {L^\infty = L^\infty({\mathcal A},\tau)} of {L^2} that has the structure of a real commutative Banach algebra.
    4. The idempotents {1_E} of the Banach algebra {L^\infty} may be indexed by elements {E} of an abstract {\sigma}-algebra {{\mathcal B}}.
    5. The Boolean algebra homomorphisms {\delta_x: {\mathcal B} \rightarrow \{0,1\}} (or equivalently, the real algebra homomorphisms {\iota_x: L^\infty \rightarrow {\bf R}}) may be indexed by elements {x} of a space {X}.
    6. Let {{\mathcal X}} denote the {\sigma}-algebra on {X} generated by the basic sets {\overline{E} := \{ x \in X: \delta_x(E) = 1 \}} for every {E \in {\mathcal B}}.
    7. Let {{\mathcal N}} be the {\sigma}-ideal of {{\mathcal X}} generated by the sets {\bigcap_n \overline{E_n}}, where {E_n \in {\mathcal B}} is a sequence with {\bigcap_n E_n = \emptyset}.
    8. One verifies that {{\mathcal B}} is isomorphic to {{\mathcal X}/{\mathcal N}}. Using this isomorphism, the trace {\tau} on {L^\infty} can be used to construct a countably additive measure {\mu} on {{\mathcal X}}. The classical probability space {(X, {\mathcal X}, \mu)} is then {G( {\mathcal A}, \tau )}, and the abstract spaces {L^2, L^\infty} may now be identified with their concrete counterparts {L^2(X, {\mathcal X}, \mu)}, {L^\infty(X, {\mathcal X}, \mu)}.
    9. Every algebraic probability space morphism {\phi: ({\mathcal A}_1,\tau_1) \rightarrow ({\mathcal A}_2,\tau_2)} generates a classical probability morphism {G(\phi): (X_1, {\mathcal X}_1, \mu_1) \rightarrow (X_2, {\mathcal X}_2, \mu_2)} via the formula

      \displaystyle  \delta_{G(\phi)(x_1)}( E_2 ) = \delta_{x_1}( \phi^*(E_2) )

      using a pullback operation {\phi^*} on the abstract {\sigma}-algebras {{\mathcal B}_1, {\mathcal B}_2} that can be defined by density.
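To illustrate steps 1-3 on a toy example (this finite model and the code names are my own, not part of the construction above), one can watch the quantity {\tau(f^{2^k})^{1/2^k}} converge to the essential supremum of {|f|}:

```python
from fractions import Fraction

def trace(f, p):
    # tau(f) = sum_i p_i f(i): the trace is integration against the weights p.
    return sum(pi * fi for pi, fi in zip(p, f))

def spectral_radius(f, p, k=8):
    """Approximate rho(f) = lim_k tau(f^{2^k})^{1/2^k} on a toy finite
    probability space {1,...,n} with rational weights p.  In this model
    the limit is the essential supremum of |f|: the largest |f_i| over
    the points of positive weight.  (k is kept moderate so the final
    float conversion does not overflow.)"""
    g = list(f)
    for _ in range(k):
        g = [gi * gi for gi in g]       # g = f^(2^k), computed exactly
    return float(trace(g, p)) ** (1.0 / 2 ** k)

# The value 7 sits on a null point (weight 0), so it is invisible to the
# trace: the spectral radius converges to 5, not 7.
p = [Fraction(1, 4)] * 4 + [Fraction(0)]
f = [1, 3, -2, 5, 7]
```

This also shows why quotienting by null vectors in step 2 is harmless: functions supported on null points have spectral radius (and {L^2} norm) zero.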

    Remark 1 The classical probability space {X} constructed by the functor {G} has some additional structure; namely {X} is a {\sigma}-Stone space (a Stone space with the property that the closure of any countable union of clopen sets is clopen), {{\mathcal X}} is the Baire {\sigma}-algebra (generated by the clopen sets), and the null sets are the meager sets. However, we will not use this additional structure here.

    The partial inversion relationship between the functors {F: \textbf{Prb} \rightarrow \textbf{AlgPrb}} and {G: \textbf{AlgPrb} \rightarrow \textbf{Prb}} is given by the following assertion:

    1. There is a natural transformation from {F \circ G: \textbf{AlgPrb} \rightarrow \textbf{AlgPrb}} to the identity functor {I: \textbf{AlgPrb} \rightarrow \textbf{AlgPrb}}.

    More informally: if one starts with an algebraic probability space {({\mathcal A},\tau)} and converts it back into a classical probability space {(X, {\mathcal X}, \mu)}, then there is a trace-preserving algebra homomorphism of {{\mathcal A}} to {L^\infty( X, {\mathcal X}, \mu )}, which respects morphisms of the algebraic probability space. While this relationship is far weaker than an equivalence of categories (which would require that {F \circ G} and {G \circ F} are both natural isomorphisms), it is still good enough to allow many ergodic theory problems formulated using classical probability spaces to be reformulated instead as an equivalent problem in algebraic probability spaces.

    Remark 2 The opposite composition {G \circ F: \textbf{Prb} \rightarrow \textbf{Prb}} is a little odd: it takes an arbitrary probability space {(X, {\mathcal X}, \mu)} and returns a more complicated probability space {(X', {\mathcal X}', \mu')}, with {X'} being the space of homomorphisms {\iota_x: L^\infty(X, {\mathcal X}, \mu) \rightarrow {\bf R}}. While there is “morally” an embedding of {X} into {X'} using the evaluation map, this map does not exist in general because points in {X} may well have zero measure. However, if one takes a “pointless” approach and focuses just on the measure algebras {({\mathcal X}, \mu)}, {({\mathcal X}', \mu')}, then these algebras become naturally isomorphic after quotienting out by null sets.

    Remark 3 An algebraic probability space captures a bit more structure than a classical probability space, because {{\mathcal A}} may be identified with a proper subset of {L^\infty} that describes the “regular” functions (or random variables) of the space. For instance, starting with the unit circle {{\bf R}/{\bf Z}} (with the usual Haar measure and the usual trace {\tau(f) = \int_{{\bf R}/{\bf Z}} f}), any unital subalgebra {{\mathcal A}} of {L^\infty({\bf R}/{\bf Z})} that is dense in {L^2({\bf R}/{\bf Z})} will generate the same classical probability space {G( {\mathcal A}, \tau )} on applying the functor {G}, namely one will get the space {({\bf R}/{\bf Z})'} of homomorphisms from {L^\infty({\bf R}/{\bf Z})} to {{\bf R}} (with the measure induced from {\tau}). Thus for instance {{\mathcal A}} could be the continuous functions {C( {\bf R}/{\bf Z} )}, the Wiener algebra {A({\bf R}/{\bf Z})} or the full space {L^\infty({\bf R}/{\bf Z})}, but the classical space {G( {\mathcal A}, \tau )} will be unable to distinguish these spaces from each other. In particular, the functor {F \circ G} loses information (roughly speaking, this functor takes an algebraic probability space and completes it to a von Neumann algebra, but then forgets exactly what algebra was initially used to create this completion). In ergodic theory, this sort of “extra structure” is traditionally encoded in topological terms, by assuming that the underlying probability space {X} has a nice topological structure (e.g. a standard Borel space); however, with the algebraic perspective one has the freedom to have non-topological notions of extra structure, by choosing {{\mathcal A}} to be something other than an algebra {C(X)} of continuous functions on a topological space. 
I hope to discuss one such example of extra structure (coming from the Gowers-Host-Kra theory of uniformity seminorms) in a later blog post (this generalises the example of the Wiener algebra given previously, which is encoding “Fourier structure”).

    A small example of how one could use the functors {F, G} is as follows. Suppose one has a classical probability space {(X, {\mathcal X}, \mu)} with a measure-preserving action of an uncountable group {\Gamma}, which is only defined (and an action) up to almost everywhere equivalence; thus for instance for any set {E} and any {g, h \in \Gamma}, {T^{gh} E} and {T^g T^h E} might not be exactly equal, but only equal up to a null set. For similar reasons, an element {E} of the invariant factor {{\mathcal X}^\Gamma} might not be exactly invariant with respect to {\Gamma}, but instead one only has {T^g E} and {E} equal up to null sets for each {g \in \Gamma}. One might like to “clean up” the action of {\Gamma} to make it defined everywhere, and a genuine action everywhere, but this is not immediately achievable if {\Gamma} is uncountable, since the union of all the null sets where something bad occurs may cease to be a null set. However, by applying the functor {F}, each shift {T^g: X \rightarrow X} defines a morphism {T^g: L^\infty(X, {\mathcal X}, \mu) \rightarrow L^\infty(X, {\mathcal X}, \mu)} on the associated algebraic probability space (i.e. the Koopman operator), and then applying {G}, we obtain a shift {T^g: X' \rightarrow X'} on a new classical probability space {(X', {\mathcal X}', \mu')} which now gives a genuine measure-preserving action of {\Gamma}, and which is equivalent to the original action from a measure algebra standpoint. The invariant factor {{\mathcal X}^\Gamma} now consists of those sets in {{\mathcal X}'} which are genuinely {\Gamma}-invariant, not just up to null sets. (Basically, the classical probability space {(X', {\mathcal X}', \mu')} contains a Boolean algebra {\overline{\mathcal B}} with the property that every measurable set {A \in {\mathcal X}'} is equivalent up to null sets to precisely one set in {\overline{\mathcal B}}, allowing for a canonical “retraction” onto {\overline{\mathcal B}} that eliminates all null set issues.)

    More indirectly, the functors {F, G} suggest that one should be able to develop a “pointless” form of ergodic theory, in which the underlying probability spaces are given algebraically rather than classically. I hope to give some more specific examples of this in later posts.

