I’ve just uploaded to the arXiv the D.H.J. Polymath paper “Variants of the Selberg sieve, and bounded intervals containing many primes“, which is the second paper to be produced from the Polymath8 project (the first one being discussed here). We’ll refer to this latter paper here as the Polymath8b paper, and the former as the Polymath8a paper. As with Polymath8a, the Polymath8b paper is concerned with the smallest asymptotic prime gap

$$H_1 := \liminf_{n \rightarrow \infty} (p_{n+1} - p_n),$$

where $p_n$ denotes the $n^{\rm th}$ prime, as well as the more general quantities

$$H_m := \liminf_{n \rightarrow \infty} (p_{n+m} - p_n).$$
In the breakthrough paper of Goldston, Pintz, and Yildirim, the bound $H_1 \leq 16$ was obtained under the strong hypothesis of the Elliott-Halberstam conjecture. An unconditional bound on $H_1$, however, remained elusive until the celebrated work of Zhang last year, who showed that

$$H_1 \leq 70{,}000{,}000.$$
The Polymath8a paper then improved this to $H_1 \leq 4680$. After that, Maynard introduced a new multidimensional Selberg sieve argument that gave the substantial improvement

$$H_1 \leq 600$$
unconditionally, and $H_1 \leq 12$ on the Elliott-Halberstam conjecture; furthermore, bounds on $H_m$ for higher $m$ were obtained for the first time, and specifically that $H_m \ll m^3 e^{4m}$ for all $m$, with further numerical improvements on the Elliott-Halberstam conjecture. (I had independently discovered the multidimensional sieve idea, although I did not obtain Maynard’s specific numerical results, and my asymptotic bounds were a bit weaker.)
In Polymath8b, we obtain some further improvements. Unconditionally, we have $H_1 \leq 246$ and an improved asymptotic bound on $H_m$, together with some explicit bounds on $H_m$ for small $m$; on the Elliott-Halberstam conjecture we have a further asymptotic improvement together with some numerical improvements to these bounds; and assuming the generalised Elliott-Halberstam conjecture we have the bound $H_1 \leq 6$, which is best possible from sieve-theoretic methods thanks to the parity problem obstruction.
There were a variety of methods used to establish these results. Maynard’s paper obtained a criterion for bounding $H_m$ which reduced to finding a good solution to a certain multidimensional variational problem. When the dimension parameter $k$ was relatively small, we were able to obtain good numerical solutions both by continuing the method of Maynard (using a basis of symmetric polynomials), or by using a Krylov iteration scheme. For large $k$, we refined the asymptotics and obtained near-optimal solutions of the variational problem. For the $H_1$ bounds, we extended the reach of the multidimensional Selberg sieve (particularly under the assumption of the generalised Elliott-Halberstam conjecture) by allowing the function $F$ in the multidimensional variational problem to extend to a larger region of space than was previously admissible, albeit with some tricky new constraints on $F$ (and penalties in the variational problem). This required some unusual sieve-theoretic manipulations, notably an “epsilon trick”, ultimately relying on the elementary inequality $(a+b)^2 \geq a^2 + 2ab$, that allowed one to get non-trivial lower bounds for certain sieve sums even when no non-trivial estimates were otherwise available; and a way to estimate certain divisor sums even when the level of the sieve was permitted to be comparable to, or even exceed, the length of the sum, by using the fundamental theorem of arithmetic to factorise the summation variable $n$ (after restricting to the case when $n$ is almost prime). I hope that these sieve-theoretic tricks will be useful in future work in the subject.
With this paper, the Polymath8 project is almost complete; there is still a little bit of scope to push our methods further and get some modest improvement for instance to the $H_1 \leq 246$ bound, but this would require a substantial amount of effort, and it is probably best to instead wait for some new breakthrough in the subject to come along. One final task we are performing is to write up a retrospective article on both the 8a and 8b experiences, an incomplete writeup of which can be found here. If anyone wishes to contribute some commentary on these projects (whether you were an active contributor, an occasional contributor, or a silent “lurker” in the online discussion), please feel free to do so in the comments to this post.
In the traditional foundations of probability theory, one selects a probability space $\Omega$, and makes a distinction between deterministic mathematical objects, which do not depend on the sampled state $\omega \in \Omega$, and stochastic (or random) mathematical objects, which do depend (but in a measurable fashion) on the sampled state $\omega$. For instance, a deterministic real number would just be an element $x \in {\bf R}$, whereas a stochastic real number (or real random variable) would be a measurable function $x: \Omega \rightarrow {\bf R}$, where in this post ${\bf R}$ will always be endowed with the Borel $\sigma$-algebra. (For readers familiar with nonstandard analysis, the adjectives “deterministic” and “stochastic” will be used here in a manner analogous to the uses of the adjectives “standard” and “nonstandard” in nonstandard analysis. The analogy is particularly close when comparing with the “cheap nonstandard analysis” discussed in this previous blog post. We will also use “relative to $\Omega$” as a synonym for “stochastic”.)
Actually, for our purposes we will adopt the philosophy of identifying stochastic objects that agree almost surely, so if one was to be completely precise, we should define a stochastic real number to be an equivalence class $[x]$ of measurable functions $x: \Omega \rightarrow {\bf R}$, up to almost sure equivalence. However, we shall often abuse notation and write $[x]$ simply as $x$.
More generally, given any measurable space $X = (X, {\mathcal X})$, we can talk either about deterministic elements $x \in X$, or about stochastic elements of $X$, that is to say equivalence classes $[x]$ of measurable maps $x: \Omega \rightarrow X$ up to almost sure equivalence. We will use $\Gamma(X|\Omega)$ to denote the set of all stochastic elements of $X$. (For readers familiar with sheaves, it may be helpful for the purposes of this post to think of $\Gamma(X|\Omega)$ as the space of measurable global sections of the trivial $X$-bundle over $\Omega$.) Of course every deterministic element $x$ of $X$ can also be viewed as a stochastic element given by (the equivalence class of) the constant function $\omega \mapsto x$, thus giving an embedding of $X$ into $\Gamma(X|\Omega)$. We do not attempt here to give an interpretation of $\Gamma(X|\Omega)$ for sets $X$ that are not equipped with a $\sigma$-algebra ${\mathcal X}$.
Remark 1 In my previous post on the foundations of probability theory, I emphasised the freedom to extend the sample space $\Omega$ to a larger sample space whenever one wished to inject additional sources of randomness. This is of course an important freedom to possess (and in the current formalism, is the analogue of the important operation of base change in algebraic geometry), but in this post we will focus on a single fixed sample space $\Omega$, and not consider extensions of this space, so that one only has to consider two types of mathematical objects (deterministic and stochastic), as opposed to having many more such types, one for each potential choice of sample space (with the deterministic objects corresponding to the case when the sample space collapses to a point).
Any (measurable) $k$-ary operation on deterministic mathematical objects then extends to their stochastic counterparts by applying the operation pointwise. For instance, the addition operation $+: {\bf R} \times {\bf R} \rightarrow {\bf R}$ on deterministic real numbers extends to an addition operation $+: \Gamma({\bf R}|\Omega) \times \Gamma({\bf R}|\Omega) \rightarrow \Gamma({\bf R}|\Omega)$, by defining the class $[x]+[y]$ for $x,y: \Omega \rightarrow {\bf R}$ to be the equivalence class of the function $\omega \mapsto x(\omega) + y(\omega)$; this operation is easily seen to be well-defined. More generally, any measurable $k$-ary deterministic operation $f: X_1 \times \dots \times X_k \rightarrow Y$ between measurable spaces extends to a stochastic operation $f: \Gamma(X_1|\Omega) \times \dots \times \Gamma(X_k|\Omega) \rightarrow \Gamma(Y|\Omega)$ in the obvious manner.
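The pointwise lifting described above is especially transparent over a finite sample space. The following minimal sketch (my own illustration, not from the post) represents a stochastic object as a map from states to values, and lifts a deterministic operation to stochastic arguments state by state:

```python
def lift(op):
    """Lift a deterministic k-ary operation to stochastic objects,
    represented here as dicts mapping each state omega to a value."""
    def stochastic_op(*random_vars):
        states = random_vars[0].keys()
        return {w: op(*(rv[w] for rv in random_vars)) for w in states}
    return stochastic_op

# Lift deterministic addition to stochastic reals over a two-state space.
add = lift(lambda a, b: a + b)

X = {"heads": 1.0, "tails": 2.0}
Y = {"heads": 10.0, "tails": 20.0}
print(add(X, Y))  # {'heads': 11.0, 'tails': 22.0}
```

In the continuous setting one would additionally need measurability of the operation and identification up to null events, but the fibrewise picture is the same.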
There is a similar story for $k$-ary relations $R: X_1 \times \dots \times X_k \rightarrow \{ \hbox{true}, \hbox{false} \}$, although here one has to make a distinction between a deterministic reading of the relation and a stochastic one. Namely, if we are given stochastic objects $x_i \in \Gamma(X_i|\Omega)$ for $i=1,\dots,k$, the relation $R(x_1,\dots,x_k)$ does not necessarily take values in the deterministic Boolean algebra $\{ \hbox{true}, \hbox{false} \}$, but only in the stochastic Boolean algebra $\Gamma(\{ \hbox{true}, \hbox{false} \}|\Omega)$ – thus $R(x_1,\dots,x_k)$ may be true with some positive probability and also false with some positive probability (with the event that $R(x_1,\dots,x_k)$ is stochastically true being determined up to null events). Of course, the deterministic Boolean algebra embeds in the stochastic one, so we can talk about a relation $R(x_1,\dots,x_k)$ being deterministically true or deterministically false, which (due to our identification of stochastic objects that agree almost surely) means that $R(x_1(\omega),\dots,x_k(\omega))$ is almost surely true or almost surely false respectively. For instance given two stochastic objects $x, y$, one can view their equality relation $x = y$ as having a stochastic truth value. This is distinct from the way the equality symbol $=$ is used in mathematical logic, which we will now call “equality in the deterministic sense” to reduce confusion. Thus, $x = y$ in the deterministic sense if and only if the stochastic truth value of $x = y$ is equal to $\hbox{true}$, that is to say that $x(\omega) = y(\omega)$ for almost all $\omega$.
Any universal identity for deterministic operations (or universal implication between identities) extends to their stochastic counterparts: for instance, addition is commutative, associative, and cancellative on the space of deterministic reals ${\bf R}$, and is therefore commutative, associative, and cancellative on stochastic reals $\Gamma({\bf R}|\Omega)$ as well. However, one has to be more careful when working with mathematical laws that are not expressible as universal identities, or implications between identities. For instance, ${\bf R}$ is an integral domain: if $x_1, x_2$ are deterministic reals such that $x_1 x_2 = 0$, then one must have $x_1 = 0$ or $x_2 = 0$. However, if $x_1, x_2$ are stochastic reals such that $x_1 x_2 = 0$ (in the deterministic sense), then it is no longer necessarily the case that $x_1 = 0$ (in the deterministic sense) or that $x_2 = 0$ (in the deterministic sense); however, it is still true that “$x_1 = 0$ or $x_2 = 0$” is true in the deterministic sense if one interprets the boolean operator “or” stochastically, thus “$x_1(\omega) = 0$ or $x_2(\omega) = 0$” is true for almost all $\omega$. Another way to properly obtain a stochastic interpretation of the integral domain property of ${\bf R}$ is to rewrite it as

$$x_1, x_2 \in {\bf R}, \ x_1 x_2 = 0 \implies x_i = 0 \hbox{ for some } i \in \{1,2\}$$

and then make all sets stochastic to obtain the true statement

$$x_1, x_2 \in \Gamma({\bf R}|\Omega), \ x_1 x_2 = 0 \implies x_i = 0 \hbox{ for some } i \in \Gamma(\{1,2\}|\Omega),$$

thus we have to allow the index $i$ for which vanishing $x_i = 0$ occurs to also be stochastic, rather than deterministic. (A technical note: when one proves this statement, one has to select $i$ in a measurable fashion; for instance, one can choose $i(\omega)$ to equal $1$ when $x_1(\omega) = 0$, and $2$ otherwise (so that in the “tie-breaking” case when $x_1(\omega)$ and $x_2(\omega)$ both vanish, one always selects $i(\omega)$ to equal $1$).)
Similarly, the law of the excluded middle fails when interpreted deterministically, but remains true when interpreted stochastically: if $S$ is a stochastic statement, then it is not necessarily the case that $S$ is either deterministically true or deterministically false; however the sentence “$S$ or not-$S$” is still deterministically true if the boolean operator “or” is interpreted stochastically rather than deterministically.
To avoid having to keep pointing out which operations are interpreted stochastically and which ones are interpreted deterministically, we will use the following convention: if we assert that a mathematical sentence $S$ involving stochastic objects is true, then (unless otherwise specified) we mean that $S$ is deterministically true, assuming that all relations used inside $S$ are interpreted stochastically. For instance, if $x, y$ are stochastic reals, when we assert that “Exactly one of $x < y$, $x = y$, or $x > y$ is true”, then by default it is understood that the relations $<$, $=$, $>$ and the boolean operator “exactly one of” are interpreted stochastically, and the assertion is that the sentence is deterministically true.
In the above discussion, the stochastic objects being considered were elements of a deterministic space $X$, such as the reals ${\bf R}$. However, it can often be convenient to generalise this situation by allowing the ambient space to also be stochastic. For instance, one might wish to consider a stochastic vector $v(\omega)$ inside a stochastic vector space $V(\omega)$, or a stochastic edge $e(\omega)$ of a stochastic graph $G(\omega)$. In order to formally describe this situation within the classical framework of measure theory, one needs to place all the ambient spaces $V(\omega)$ inside a measurable space. This can certainly be done in many contexts (e.g. when considering random graphs on a deterministic set of vertices, or if one is willing to work up to equivalence and place the ambient spaces inside a suitable moduli space), but is not completely natural in other contexts. For instance, if one wishes to consider stochastic vector spaces of potentially unbounded dimension (in particular, potentially larger than any given cardinal that one might specify in advance), then the class of all possible vector spaces is so large that it becomes a proper class rather than a set (even if one works up to equivalence), making it problematic to give this class the structure of a measurable space; furthermore, even once one does so, one needs to take additional care to pin down what it would mean for a random vector $\omega \mapsto v(\omega)$ lying in a random vector space $\omega \mapsto V(\omega)$ to depend “measurably” on $\omega$.
Of course, in any reasonable application one can avoid the set theoretic issues at least by various ad hoc means, for instance by restricting the dimension of all spaces involved to some fixed cardinal chosen in advance. However, the measure-theoretic issues can require some additional effort to resolve properly.
In this post I would like to describe a different way to formalise stochastic spaces, and stochastic elements of these spaces, by viewing the spaces as a measure-theoretic analogue of a sheaf, but being over the probability space $\Omega$ rather than over a topological space; stochastic objects are then sections of such sheaves. Actually, for minor technical reasons it is convenient to work in the slightly more general setting in which the base space $\Omega$ is a finite measure space $(\Omega, {\mathcal F}, \mu)$ rather than a probability space, thus $\mu(\Omega)$ can take any value in $[0,+\infty)$ rather than being normalised to equal $1$. This will allow us to easily localise to subevents $E$ of $\Omega$ without the need for normalisation, even when $E$ is a null event (though we caution that the map from deterministic objects ceases to be injective in this latter case). We will however still continue to use probabilistic terminology despite the lack of normalisation; thus for instance, sets $E$ in ${\mathcal F}$ will be referred to as events, the measure $\mu(E)$ of such a set will be referred to as the probability (which is now permitted to exceed $1$ in some cases), and an event whose complement is a null event shall be said to hold almost surely. It is in fact likely that almost all of the theory below extends to base spaces which are $\sigma$-finite rather than finite (for instance, by damping the measure to become finite, without introducing any further null events), although we will not pursue this further generalisation here.
The approach taken in this post is “topos-theoretic” in nature (although we will not use the language of topoi explicitly here), and is well suited to a “pointless” or “point-free” approach to probability theory, in which the role of the stochastic state $\omega \in \Omega$ is suppressed as much as possible; instead, one strives to always adopt a “relative point of view”, with all objects under consideration being viewed as stochastic objects relative to the underlying base space $\Omega$. In this perspective, the stochastic version of a set is as follows.
Definition 1 (Stochastic set) Unless otherwise specified, we assume that we are given a fixed finite measure space $\Omega = (\Omega, {\mathcal F}, \mu)$ (which we refer to as the base space). A stochastic set $X|\Omega$ (relative to $\Omega$) is a tuple consisting of the following objects:
- A set $\Gamma(X|E)$ assigned to each event $E \in {\mathcal F}$; and
- A restriction map $x \mapsto x|_F$ from $\Gamma(X|E)$ to $\Gamma(X|F)$ to each pair $F \subset E$ of nested events. (Strictly speaking, one should indicate the dependence on $E$ in the notation for the restriction map, e.g. using $x \mapsto x|_{E \rightarrow F}$ instead of $x \mapsto x|_F$, but we will abuse notation by omitting the dependence on $E$.)
We refer to elements of $\Gamma(X|E)$ as local stochastic elements of the stochastic set $X|\Omega$, localised to the event $E$, and elements of $\Gamma(X|\Omega)$ as global stochastic elements (or simply elements) of the stochastic set. (In the language of sheaves, one would use “sections” instead of “elements” here, but I prefer to use the latter terminology here, for compatibility with conventional probabilistic notation, where for instance measurable maps from $\Omega$ to ${\bf R}$ are referred to as real random variables, rather than sections of the reals.)
Furthermore, we impose the following axioms:
- (Category) The map $x \mapsto x|_E$ from $\Gamma(X|E)$ to $\Gamma(X|E)$ is the identity map, and if $G \subset F \subset E$ are events in ${\mathcal F}$, then $(x|_F)|_G = x|_G$ for all $x \in \Gamma(X|E)$.
- (Null events trivial) If $E$ is a null event, then the set $\Gamma(X|E)$ is a singleton set. (In particular, $\Gamma(X|\emptyset)$ is always a singleton set; this is analogous to the convention that $x^0 = 1$ for any number $x$.)
- (Countable gluing) Suppose that for each natural number $n$, one has an event $E_n$ and an element $x_n \in \Gamma(X|E_n)$ such that $x_n|_{E_n \cap E_m} = x_m|_{E_n \cap E_m}$ for all $n, m$. Then there exists a unique $x \in \Gamma(X|\bigcup_n E_n)$ such that $x|_{E_n} = x_n$ for all $n$.
If $E$ is an event in $\Omega$, we define the localisation $X|E$ of the stochastic set $X|\Omega$ to $E$ to be the stochastic set formed by the sets $\Gamma(X|F)$ for events $F \subset E$, together with the associated restriction maps, relative to $E$. (Note that there is no need to renormalise the measure on $E$, as we are not demanding that our base space have total measure $1$.)
The following fact is useful for actually verifying that a given object indeed has the structure of a stochastic set:
Exercise 1 Show that to verify the countable gluing axiom of a stochastic set, it suffices to do so under the additional hypothesis that the events $E_n$ are disjoint. (Note that this is quite different from the situation with sheaves over a topological space, in which the analogous gluing axiom is often trivial in the disjoint case but has non-trivial content in the overlapping case. This is ultimately because a $\sigma$-algebra is closed under all Boolean operations, whereas a topology is only closed under union and intersection.)
Let us illustrate the concept of a stochastic set with some examples.
Example 1 (Discrete case) A simple case arises when $\Omega$ is a discrete space which is at most countable. If we assign a set $X_\omega$ to each $\omega \in \Omega$, with $X_\omega$ a singleton if $\mu(\{\omega\}) = 0$, one can then set $\Gamma(X|E) := \prod_{\omega \in E} X_\omega$, with the obvious restriction maps, giving rise to a stochastic set $X|\Omega$. (Thus, a local element $x$ of $\Gamma(X|E)$ can be viewed as a map $\omega \mapsto x(\omega)$ on $E$ that takes values in $X_\omega$ for each $\omega \in E$.) Conversely, it is not difficult to see that any stochastic set over an at most countable discrete probability space $\Omega$ is of this form up to isomorphism. In this case, one can think of $X|\Omega$ as a bundle of sets $X_\omega$ over each point $\omega$ (of positive probability) in the base space $\Omega$. One can extend this bundle interpretation of stochastic sets to reasonably nice sample spaces $\Omega$ (such as standard Borel spaces) and similarly reasonable $X$; however, I would like to avoid this interpretation in the formalism below in order to be able to easily work in settings in which $\Omega$ and $X$ are very “large” (e.g. not separable in any reasonable sense). Note that we permit some of the $X_\omega$ to be empty, thus it can be possible for $\Gamma(X|\Omega)$ to be empty whilst $\Gamma(X|E)$ for some strict subevents $E$ of $\Omega$ is non-empty. (This is analogous to how it is possible for a sheaf to have local sections but no global sections.) As such, the space $\Gamma(X|\Omega)$ of global elements does not completely determine the stochastic set $X|\Omega$; one sometimes needs to localise to an event $E$ in order to see the full structure of such a set. Thus it is important to distinguish between a stochastic set and its space of global elements. (As such, it is a slight abuse of the axiom of extensionality to refer to global elements of $X|\Omega$ simply as “elements”, but hopefully this should not cause too much confusion.)
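The discrete case of Example 1 is easy to model concretely. The sketch below (my own, assuming the at most countable discrete model) represents the fibres as a dict, local elements as choice functions on an event, and the restriction maps as forgetting coordinates outside the subevent:

```python
from itertools import product

# Fibres X_omega for each state omega of the discrete base space.
fibres = {"a": {0, 1}, "b": {"x", "y", "z"}}

def sections(event):
    """All local elements of Gamma(X|E): choice functions on the event."""
    states = sorted(event)
    return [dict(zip(states, choice))
            for choice in product(*(fibres[w] for w in states))]

def restrict(x, subevent):
    """Restriction map Gamma(X|E) -> Gamma(X|F) for F a subset of E."""
    return {w: v for w, v in x.items() if w in subevent}

x = {"a": 1, "b": "y"}            # a global element
print(restrict(x, {"a"}))          # {'a': 1}
print(len(sections({"a", "b"})))   # 2 * 3 = 6 global elements
```

If some fibre were empty, `sections` of the full space would be empty even though sections over smaller events survive, matching the remark about local sections without global ones.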
Example 2 (Measurable spaces as stochastic sets) Returning now to a general base space $\Omega$, any (deterministic) measurable space $X$ gives rise to a stochastic set $X|\Omega$, with $\Gamma(X|E)$ being defined as in the previous discussion as the measurable functions from $E$ to $X$ modulo almost everywhere equivalence (in particular, a singleton set when $E$ is null), with the usual restriction maps. The constraint of measurability on the maps $x: E \rightarrow X$, together with the quotienting by almost sure equivalence, means that $\Gamma(X|E)$ is now more complicated than a plain Cartesian product of fibres, but this still serves as a useful first approximation to what $\Gamma(X|E)$ is for the purposes of developing intuition. Indeed, the measurability constraint is so weak (as compared for instance to topological or smooth constraints in other contexts, such as sheaves of continuous or smooth sections of bundles) that the intuition of essentially independent fibres is quite an accurate one, at least if one avoids consideration of an uncountable number of objects simultaneously.
Example 3 (Extended Hilbert modules) This example is the one that motivated this post for me. Suppose that one has an extension $\Omega$ of the base space $\Omega'$, thus we have a measurable factor map $\pi: \Omega \rightarrow \Omega'$ such that the pushforward of the measure $\mu$ by $\pi$ is equal to $\mu'$. Then we have a conditional expectation operator $\pi_*: L^2(\Omega) \rightarrow L^2(\Omega')$, defined as the adjoint of the pullback map $\pi^*: L^2(\Omega') \rightarrow L^2(\Omega)$. As is well known, the conditional expectation operator also extends to a contraction $\pi_*: L^1(\Omega) \rightarrow L^1(\Omega')$; by monotone convergence we may also extend $\pi_*$ to a map from measurable functions from $\Omega$ to the extended non-negative reals $[0,+\infty]$, to measurable functions from $\Omega'$ to $[0,+\infty]$. We then define the “extended Hilbert module” $L^2(\Omega|\Omega')$ to be the space of functions $f$ with $\pi_*(|f|^2)$ finite almost everywhere. This is an extended version of the Hilbert module, which is defined similarly except that $\pi_*(|f|^2)$ is required to lie in $L^\infty(\Omega')$; this is a Hilbert module over $L^\infty(\Omega')$ which is of particular importance in the Furstenberg-Zimmer structure theory of measure-preserving systems. We can then define a stochastic set by setting $\Gamma(X|E)$ to be the space of such functions localised to the event $E$, with the obvious restriction maps. In the case that $\Omega, \Omega'$ are standard Borel spaces, one can disintegrate $\mu$ as an integral of probability measures $\mu_{\omega'}$ (supported in the fibre $\pi^{-1}(\{\omega'\})$), in which case this stochastic set can be viewed as having fibres $L^2(\mu_{\omega'})$ (though if $\Omega'$ is not discrete, there are still some measurability conditions in $\omega'$ on the local and global elements that need to be imposed). However, I am interested in the case when $\Omega, \Omega'$ are not standard Borel spaces (in fact, I will take them to be algebraic probability spaces, as defined in this previous post), in which case disintegrations are not available. However, it appears that the stochastic analysis developed in this blog post can serve as a substitute for the tool of disintegration in this context.
We make the remark that if $X|\Omega$ is a stochastic set and $E, F$ are events that are equivalent up to null events, then one can identify $\Gamma(X|E)$ with $\Gamma(X|F)$ (through their common restriction to $\Gamma(X|E \cap F)$, with the restriction maps now being bijections). As such, the notion of a stochastic set does not require the full structure of a concrete probability space $(\Omega, {\mathcal F}, \mu)$; one could also have defined the notion using only the abstract $\sigma$-algebra consisting of ${\mathcal F}$ modulo null events as the base space, or equivalently one could define stochastic sets over the algebraic probability spaces defined in this previous post. However, we will stick with the classical formalism of concrete probability spaces here so as to keep the notation reasonably familiar.
As a corollary of the above observation, we see that if the base space $\Omega$ has total measure zero, then all stochastic sets are trivial (they are just points).
Exercise 2 If $X|\Omega$ is a stochastic set, show that there exists an event $E_*$ with the property that for any event $E$, $\Gamma(X|E)$ is non-empty if and only if $E$ is contained in $E_*$ modulo null events. (In particular, $E_*$ is unique up to null events.) Hint: consider the numbers $\mu(E)$ for $E$ ranging over all events with $\Gamma(X|E)$ non-empty, and form a maximising sequence for these numbers. Then use all three axioms of a stochastic set.
One can now start to take many of the fundamental objects, operations, and results in set theory (and, hence, in most other categories of mathematics) and establish analogues relative to a finite measure space. Implicitly, what we will be doing in the next few paragraphs is endowing the category of stochastic sets with the structure of an elementary topos. However, to keep things reasonably concrete, we will not explicitly emphasise the topos-theoretic formalism here, although it is certainly lurking in the background.
Firstly, we define a stochastic function $\phi: X|\Omega \rightarrow Y|\Omega$ between two stochastic sets $X|\Omega$, $Y|\Omega$ to be a collection of maps $\phi_E: \Gamma(X|E) \rightarrow \Gamma(Y|E)$ for each event $E$ which form a natural transformation in the sense that $\phi_F(x|_F) = \phi_E(x)|_F$ for all $x \in \Gamma(X|E)$ and nested events $F \subset E$. In the case when $\Omega$ is discrete and at most countable (and after deleting all null points), a stochastic function is nothing more than a collection of functions $\phi_\omega: X_\omega \rightarrow Y_\omega$ for each $\omega \in \Omega$, with the function $\phi_E$ then being a direct sum of the factor functions $\phi_\omega$:

$$\phi_E( (x(\omega))_{\omega \in E} ) = ( \phi_\omega(x(\omega)) )_{\omega \in E}.$$
Thus (in the discrete, at most countable setting, at least) stochastic functions do not mix together information from different states $\omega$ in a sample space; the value of $\phi(x)$ at $\omega$ depends only on the value of $x$ at $\omega$. The situation is a bit more subtle for continuous probability spaces, due to the identification of stochastic objects that agree almost surely; nevertheless it is still good intuition to think of stochastic functions as essentially being “pointwise” or “local” in nature.
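In the discrete toy model, this locality is visible directly: a stochastic function is just a family of fibrewise maps, applied state by state. The following sketch (my own illustration, under that assumption) makes this explicit:

```python
# A stochastic function in the discrete model: one factor function phi_omega
# per state. Its action on a local element is the direct sum of the factors.
phi = {"a": lambda n: n + 1, "b": lambda n: 2 * n}

def apply_stochastic(phi, x):
    """Apply a stochastic function to a local element x, state by state;
    the value at each state depends only on x at that same state."""
    return {w: phi[w](v) for w, v in x.items()}

x = {"a": 3, "b": 5}
print(apply_stochastic(phi, x))  # {'a': 4, 'b': 10}
```

Naturality with respect to restriction is automatic here: applying `phi` and then dropping states gives the same result as dropping states first.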
One can now form the stochastic set $\hbox{Hom}(X \rightarrow Y)|\Omega$ of functions from $X|\Omega$ to $Y|\Omega$, by setting $\Gamma(\hbox{Hom}(X \rightarrow Y)|E)$ for any event $E$ to be the set of local stochastic functions $\phi: X|E \rightarrow Y|E$ of the localisations of $X|\Omega, Y|\Omega$ to $E$; this is a stochastic set if we use the obvious restriction maps. In the case when $\Omega$ is discrete and at most countable, the fibre at a point $\omega$ of positive measure is simply the set of functions from $X_\omega$ to $Y_\omega$.
In a similar spirit, we say that one stochastic set $Y|\Omega$ is a (stochastic) subset of another $X|\Omega$, and write $Y|\Omega \subset X|\Omega$, if we have a stochastic inclusion map, thus $\Gamma(Y|E) \subset \Gamma(X|E)$ for all events $E$, with the restriction maps being compatible. We can then define the power set $2^X|\Omega$ of a stochastic set $X|\Omega$ by setting $\Gamma(2^X|E)$ for any event $E$ to be the set of all stochastic subsets $Y|E$ of $X|E$ relative to $E$; it is easy to see that $2^X|\Omega$ is a stochastic set with the obvious restriction maps (one can also identify $2^X|\Omega$ with $\hbox{Hom}(X \rightarrow \{\hbox{true},\hbox{false}\})|\Omega$ in the obvious fashion). Again, when $\Omega$ is discrete and at most countable, the fibre of $2^X|\Omega$ at a point $\omega$ of positive measure is simply the deterministic power set $2^{X_\omega}$.
Note that if $\phi: X|\Omega \rightarrow Y|\Omega$ is a stochastic function and $Z|\Omega$ is a stochastic subset of $Y|\Omega$, then the inverse image $\phi^{-1}(Z)|\Omega$, defined by setting $\Gamma(\phi^{-1}(Z)|E)$ for any event $E$ to be the set of those $x \in \Gamma(X|E)$ with $\phi_E(x) \in \Gamma(Z|E)$, is a stochastic subset of $X|\Omega$. In particular, given a $k$-ary relation $R$, the inverse image $R^{-1}(\{\hbox{true}\})|\Omega$ is a stochastic subset of $(X_1 \times \dots \times X_k)|\Omega$, which by abuse of notation we denote as

$$\{ (x_1,\dots,x_k) \in X_1 \times \dots \times X_k: R(x_1,\dots,x_k) \hbox{ true} \}|\Omega.$$
In a similar spirit, if $Z|\Omega$ is a stochastic subset of $X|\Omega$ and $\phi: X|\Omega \rightarrow Y|\Omega$ is a stochastic function, we can define the image $\phi(Z)|\Omega$ by setting $\Gamma(\phi(Z)|E)$ to be the set of those $y \in \Gamma(Y|E)$ with $y = \phi_E(x)$ for some $x \in \Gamma(Z|E)$; one easily verifies that this is a stochastic subset of $Y|\Omega$.
Remark 2 One should caution that in the definition of the subset relation $Y|\Omega \subset X|\Omega$, it is important that $\Gamma(Y|E) \subset \Gamma(X|E)$ for all events $E$, not just the global event $E = \Omega$; in particular, just because a stochastic set $X|\Omega$ has no global sections, does not mean that it is contained in the stochastic empty set $\emptyset|\Omega$.
Now we discuss Boolean operations on stochastic subsets of a given stochastic set $X|\Omega$. Given two stochastic subsets $Y_1|\Omega, Y_2|\Omega$ of $X|\Omega$, the stochastic intersection $(Y_1 \cap Y_2)|\Omega$ is defined by setting $\Gamma((Y_1 \cap Y_2)|E)$ to be the set of $x \in \Gamma(X|E)$ that lie in both $\Gamma(Y_1|E)$ and $\Gamma(Y_2|E)$:

$$\Gamma((Y_1 \cap Y_2)|E) := \Gamma(Y_1|E) \cap \Gamma(Y_2|E).$$
This is easily verified to again be a stochastic subset of $X|\Omega$. More generally one may define stochastic countable intersections $(\bigcap_{n=1}^\infty Y_n)|\Omega$ for any sequence $Y_n|\Omega$ of stochastic subsets of $X|\Omega$. One could extend this definition to uncountable families if one wished, but I would advise against it, because some of the usual laws of Boolean algebra (e.g. the de Morgan laws) may break down in this setting.
Stochastic unions are a bit more subtle. The set $\Gamma((Y_1 \cup Y_2)|E)$ should not be defined to simply be the union of $\Gamma(Y_1|E)$ and $\Gamma(Y_2|E)$, as this would not respect the gluing axiom. Instead, we define $\Gamma((Y_1 \cup Y_2)|E)$ to be the set of all $x \in \Gamma(X|E)$ such that one can cover $E$ by measurable subevents $E_1, E_2$ such that $x|_{E_i} \in \Gamma(Y_i|E_i)$ for $i=1,2$; then $(Y_1 \cup Y_2)|\Omega$ may be verified to be a stochastic subset of $X|\Omega$. Thus for instance $\{0,1\}|\Omega$ is the stochastic union of $\{0\}|\Omega$ and $\{1\}|\Omega$. Similarly for countable unions $(\bigcup_{n=1}^\infty Y_n)|\Omega$ of stochastic subsets of $X|\Omega$, although uncountable unions are extremely problematic (they are disliked by both the measure theory and the countable gluing axiom) and will not be defined here. Finally, the stochastic difference set $\Gamma((Y_1 \backslash Y_2)|E)$ is defined as the set of all $x$ in $\Gamma(Y_1|E)$ such that $x|_F \not\in \Gamma(Y_2|F)$ for any subevent $F$ of $E$ of positive probability. One may verify that in the case when $\Omega$ is discrete and at most countable, these Boolean operations correspond to the classical Boolean operations applied separately to each fibre of the relevant sets $Y_i$. We also leave as an exercise to the reader to verify the usual laws of Boolean arithmetic, e.g. the de Morgan laws, provided that one works with at most countable unions and intersections.
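In the discrete model the covering-by-subevents definitions collapse to fibrewise set operations, as the text notes. A small sketch (my own, under that discrete assumption — not the general measure-theoretic definitions) of stochastic subsets as dicts of fibres:

```python
# Stochastic subsets of a discrete stochastic set, represented by their
# fibres A_omega; the Boolean operations act separately on each fibre.
A = {"a": {1, 2}, "b": {5}}
B = {"a": {2, 3}, "b": set()}

def fibrewise(op, A, B):
    """Apply a deterministic set operation in each fibre."""
    return {w: op(A[w], B[w]) for w in A}

print(fibrewise(set.intersection, A, B))  # {'a': {2}, 'b': set()}
print(fibrewise(set.union, A, B))         # {'a': {1, 2, 3}, 'b': {5}}
print(fibrewise(set.difference, A, B))    # {'a': {1}, 'b': {5}}
```

The subtlety in the continuous setting is precisely that a global element of the union need not lie globally in either set; it only needs to lie in one of them on each piece of a measurable cover, which is invisible at the level of a single fibre.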
One can also consider a stochastic finite union $(\bigcup_{n=1}^N Y_n)|\Omega$ in which the number $N$ of sets in the union is itself stochastic. More precisely, let $X|\Omega$ be a stochastic set, let $N$ be a stochastic natural number, and let $n \mapsto Y_n|\Omega$ be a stochastic function from the stochastic set $\{1,\dots,N\}|\Omega$ (defined by setting $\Gamma(\{1,\dots,N\}|E) := \{ n \in \Gamma({\bf N}|E): n \leq N|_E \}$) to the stochastic power set $2^X|\Omega$. Here we are considering $0$ to be a natural number, to allow for unions that are possibly empty, with ${\bf N}_+$ used for the positive natural numbers. Then we can define the stochastic union $(\bigcup_{n=1}^N Y_n)|\Omega$ by setting $\Gamma(\bigcup_{n=1}^N Y_n|E)$ for an event $E$ to be the set of local elements $x \in \Gamma(X|E)$ with the property that there exists a covering of $E$ by measurable subevents $E_{n_0}$ for $n_0 \in {\bf N}_+$, such that one has $n_0 \leq N|_{E_{n_0}}$ and $x|_{E_{n_0}} \in \Gamma(Y_{n_0}|E_{n_0})$. One can verify that this is a stochastic set (with the obvious restriction maps). Again, in the model case when $\Omega$ is discrete and at most countable, the fibre at $\omega$ is what one would expect it to be, namely the union of the first $N(\omega)$ fibres.
The Cartesian product $(X \times Y)|\Omega$ of two stochastic sets may be defined by setting $\Gamma((X \times Y)|E) := \Gamma(X|E) \times \Gamma(Y|E)$ for all events $E$, with the obvious restriction maps; this is easily seen to be another stochastic set. This lets one define the concept of a $k$-ary operation $f: (X_1 \times \dots \times X_k)|\Omega \rightarrow Y|\Omega$ from $k$ stochastic sets to another stochastic set $Y|\Omega$, or a $k$-ary relation $R: (X_1 \times \dots \times X_k)|\Omega \rightarrow \{\hbox{true},\hbox{false}\}|\Omega$. In particular, given $x_i \in \Gamma(X_i|\Omega)$ for $i=1,\dots,k$, the relation $R(x_1,\dots,x_k)$ may be deterministically true, deterministically false, or have some other stochastic truth value.
Remark 3 In the degenerate case when $\Omega$ is null, stochastic logic becomes a bit weird: all stochastic statements are deterministically true, as are their stochastic negations, since every event in $\Omega$ (even the empty set) now holds with full probability. Among other pathologies, the empty set now has a global element over $\Omega$ (this is analogous to the notorious convention $0^0 = 1$), and any two deterministic objects $x, y$ become equal over $\Omega$: $x|\Omega = y|\Omega$.
The following simple observation is crucial to subsequent discussion. If $(x_n)_{n \in {\bf N}}$ is a sequence taking values in the global elements $\Gamma(X|\Omega)$ of a stochastic space $X|\Omega$, then we may also define global elements $x_n \in \Gamma(X|\Omega)$ for stochastic indices $n \in \Gamma({\bf N}|\Omega)$ as well, by appealing to the countable gluing axiom to glue together $x_m$ restricted to the set $\{ \omega \in \Omega: n(\omega) = m \}$ for each deterministic natural number $m$ to form $x_n$. With this definition, the map $n \mapsto x_n$ is a stochastic function from ${\bf N}|\Omega$ to $X|\Omega$; indeed, this creates a one-to-one correspondence between external sequences (maps from ${\bf N}$ to $\Gamma(X|\Omega)$) and stochastic sequences (stochastic functions from ${\bf N}|\Omega$ to $X|\Omega$). Similarly with ${\bf N}$ replaced by any other at most countable set. This observation will be important in allowing many deterministic arguments involving sequences to be carried over to the stochastic setting.
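The gluing construction behind this observation can be sketched concretely in the discrete model (my own illustration, not from the post): given a sequence of global elements and a stochastic index, the glued element uses, at each state, the member of the sequence selected by the index at that state.

```python
# A sequence of global elements x_0, x_1, x_2 over a two-state space.
xs = [{"a": 10, "b": 100},
      {"a": 11, "b": 101},
      {"a": 12, "b": 102}]

# A stochastic natural number: a different index at each state.
n = {"a": 0, "b": 2}

def glue(xs, n):
    """Glue the restrictions of x_m to the events {n = m} into x_n."""
    return {w: xs[n[w]][w] for w in n}

print(glue(xs, n))  # {'a': 10, 'b': 102}
```

Here the events $\{n = m\}$ are the sets of states where the index equals $m$; the dict comprehension performs exactly the gluing over that partition.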
We now specialise from the extremely broad discipline of set theory to the more focused discipline of real analysis. There are two fundamental axioms that underlie real analysis (and in particular distinguish it from real algebra). The first is the Archimedean property, which we phrase in the “no infinitesimal” formulation as follows:
Proposition 2 (Archimedean property) Let $x \in {\bf R}$ be such that $x \leq 1/n$ for all positive natural numbers $n$. Then $x \leq 0$.
The other is the least upper bound axiom:
Proposition 3 (Least upper bound axiom) Let $S$ be a non-empty subset of ${\bf R}$ which has an upper bound $M \in {\bf R}$, thus $x \leq M$ for all $x \in S$. Then there exists a unique real number $\sup S$ with the following properties:
- $x \leq \sup S$ for all $x \in S$.
- For any real $\epsilon > 0$, there exists $x \in S$ such that $x > \sup S - \epsilon$.
- $\sup S \leq M$.
Furthermore, $\sup S$ does not depend on the choice of $M$.
The Archimedean property extends easily to the stochastic setting:
Proposition 4 (Stochastic Archimedean property) Let $x \in \Gamma({\bf R}|\Omega)$ be such that $x \leq 1/n$ for all deterministic positive natural numbers $n$. Then $x \leq 0$.
Remark 4 Here, incidentally, is one place in which this stochastic formalism deviates from the nonstandard analysis formalism, as the latter certainly permits the existence of infinitesimal elements. On the other hand, we caution that stochastic real numbers are permitted to be unbounded, so that the formulation of the Archimedean property asserting that every real number is bounded by some natural number is not valid in the stochastic setting.
The proof is easy and is left to the reader. The least upper bound axiom also extends nicely to the stochastic setting, but the proof requires more work (in particular, our argument uses the monotone convergence theorem):
Theorem 5 (Stochastic least upper bound axiom) Let $S|\Omega$ be a stochastic subset of ${\bf R}|\Omega$ which has a global upper bound $M \in \Gamma({\bf R}|\Omega)$, thus $x \leq M$ for all $x \in \Gamma(S|\Omega)$, and is globally non-empty in the sense that there is at least one global element $x \in \Gamma(S|\Omega)$. Then there exists a unique stochastic real number $\sup S \in \Gamma({\bf R}|\Omega)$ with the following properties:
- $x \leq \sup S$ for all $x \in \Gamma(S|\Omega)$.
- For any stochastic real $\epsilon > 0$, there exists $x \in \Gamma(S|\Omega)$ such that $x > \sup S - \epsilon$.
- $\sup S \leq M$.
Furthermore, $\sup S$ does not depend on the choice of $M$.
For future reference, we note that the same result holds with replaced by throughout, since the latter may be embedded in the former, for instance by mapping to and to . In applications, the above theorem serves as a reasonable substitute for the countable axiom of choice, which does not appear to hold in unrestricted generality relative to a measure space; in particular, it can be used to generate various extremising sequences for stochastic functionals on various stochastic function spaces.
Proof: Uniqueness is clear (using the Archimedean property), as is the independence of the choice of upper bound, so we turn to existence. By using an order-preserving map from to (e.g. ) we may assume that is a subset of , and that .
We observe that is a lattice: if , then and also lie in . Indeed, may be formed by appealing to the countable gluing axiom to glue (restricted to the set ) with (restricted to the set ), and similarly for . (Here we use the fact that relations such as are Borel measurable on .)
Let denote the deterministic quantity
then (by Proposition 3!) is well-defined; here we use the hypothesis that is finite. Thus we may find a sequence of elements of such that
Using the lattice property, we may assume that the are non-decreasing: whenever . If we then define (after choosing measurable representatives of each equivalence class ), then is a stochastic real with .
If , then , and so
From this and (1) we conclude that
From monotone convergence, we conclude that
and so , as required.
Now let be a stochastic real. After choosing measurable representatives of each relevant equivalence class, we see that for almost every , we can find a natural number with . If we choose to be the first such positive natural number when it exists, and (say) otherwise, then is a stochastic positive natural number and . The claim follows.
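The gluing step in the above proof can be illustrated with a toy finite model (a sketch of my own, not from the text): realise random variables over a four-point sample space as tuples of values, and build the supremum of a finite family by iterating the lattice operation.

```python
from functools import reduce

# Toy model: a four-point sample space; a random variable is the tuple of
# its values at those points. E plays the role of the stochastic subset.
E = [(0.1, 0.5, 0.2, 0.9),
     (0.3, 0.4, 0.7, 0.1),
     (0.2, 0.8, 0.1, 0.3)]

def pointwise_max(f, g):
    """The lattice operation from the proof: glue f (on the event {f >= g})
    with g (on the complementary event) to realise max(f, g) measurably."""
    return tuple(fi if fi >= gi else gi for fi, gi in zip(f, g))

# The non-decreasing maxima converge (here: stabilise after finitely many
# steps) to the stochastic supremum, as in the monotone convergence argument.
sup_E = reduce(pointwise_max, E)
```

In the infinite setting one of course takes a pointwise limit of the non-decreasing maxima rather than a finite fold, which is where monotone convergence enters.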
Remark 5 One can abstract away the role of the measure here, leaving only the ideal of null sets. The property that the measure is finite is then replaced by the more general property that given any non-empty family of measurable sets, there is an at most countable union of sets in that family that is an upper bound modulo null sets for all elements in that family.
Using Proposition 4 and Theorem 5, one can then revisit many of the other foundational results of deterministic real analysis, and develop stochastic analogues; we give some examples of this below the fold (focusing on the Heine-Borel theorem and a case of the spectral theorem). As an application of this formalism, we revisit some of the Furstenberg-Zimmer structural theory of measure-preserving systems, particularly that of relatively compact and relatively weakly mixing systems, and interpret them in this framework, basically as stochastic versions of compact and weakly mixing systems (though with the caveat that the shift map is allowed to act non-trivially on the underlying probability space). As this formalism is “point-free”, in that it avoids explicit use of fibres and disintegrations, it will be well suited for generalising this structure theory to settings in which the underlying probability spaces are not standard Borel, and the underlying groups are uncountable; I hope to discuss such generalisations in future blog posts.
Remark 6 Roughly speaking, stochastic real analysis can be viewed as a restricted subset of classical real analysis in which all operations have to be “measurable” with respect to the base space. In particular, indiscriminate application of the axiom of choice is not permitted, and one should largely restrict oneself to performing countable unions and intersections rather than arbitrary unions or intersections. Presumably one can formalise this intuition with a suitable “countable transfer principle”, but I was not able to formulate a clean and general principle of this sort, instead verifying various assertions about stochastic objects by hand rather than by direct transfer from the deterministic setting. However, it would be desirable to have such a principle, since otherwise one is faced with the tedious task of redoing all the foundations of real analysis (or whatever other base theory of mathematics one is going to be working in) in the stochastic setting by carefully repeating all the arguments.
More generally, topos theory is a good formalism for capturing precisely the informal idea of performing mathematics with certain operations, such as the axiom of choice, the law of the excluded middle, or arbitrary unions and intersections, being somehow “prohibited” or otherwise “restricted”.
Two of the most famous open problems in additive prime number theory are the twin prime conjecture and the binary Goldbach conjecture. They have quite similar forms:
- Twin prime conjecture The equation $p_1 - p_2 = 2$ has infinitely many solutions with $p_1, p_2$ prime.
- Binary Goldbach conjecture The equation $p_1 + p_2 = N$ has at least one solution with $p_1, p_2$ prime for any given even $N \geq 4$.
In view of this similarity, it is not surprising that the partial progress on these two conjectures has tracked each other fairly closely; the twin prime conjecture is generally considered slightly easier than the binary Goldbach conjecture, but broadly speaking any progress made on one of the conjectures has also led to a comparable amount of progress on the other. (For instance, Chen’s theorem has a version for the twin prime conjecture, and a version for the binary Goldbach conjecture.) Also, the notorious parity obstruction is present in both problems, preventing a solution to either conjecture by almost all known methods (see this previous blog post for more discussion).
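As a small numerical sanity check (mine, not part of the original discussion), one can verify both patterns in the small: twin prime pairs are plentiful below 100, and every even number in a modest range has at least one Goldbach representation.

```python
def is_prime(n: int) -> bool:
    """Trial division; fine for the small ranges used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def twin_primes_below(limit: int):
    """All twin prime pairs (p, p + 2) with p + 2 < limit."""
    return [(p, p + 2) for p in range(2, limit - 2)
            if is_prime(p) and is_prime(p + 2)]

def goldbach_pairs(n: int):
    """All representations n = p1 + p2 with p1 <= p2 prime (n even)."""
    return [(p, n - p) for p in range(2, n // 2 + 1)
            if is_prime(p) and is_prime(n - p)]
```

Of course, such finite verifications say nothing about the conjectures themselves; they only exhibit the parallel structure of the two problems.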
In this post, I would like to note a divergence from this general principle, with regards to bounded error versions of these two conjectures:
- Twin prime with bounded error The inequalities $0 < p_1 - p_2 \leq H$ have infinitely many solutions with $p_1, p_2$ prime for some absolute constant $H$.
- Binary Goldbach with bounded error The inequalities $N \leq p_1 + p_2 \leq N + H$ have at least one solution with $p_1, p_2$ prime for any sufficiently large $N$ and some absolute constant $H$.
The first of these statements is now a well-known theorem of Zhang, and the Polymath8b project hosted on this blog has managed to lower the constant to $H = 246$ unconditionally, and to $H = 6$ assuming the generalised Elliott-Halberstam conjecture (GEH). However, the second statement remains open; the best result that the Polymath8b project could manage in this direction is that (assuming GEH) at least one of the binary Goldbach conjecture with bounded error, or the twin prime conjecture with no error, had to be true.
All the known proofs of Zhang’s theorem proceed through sieve-theoretic means. Basically, they take as input equidistribution results that control the size of discrepancies such as
for various congruence classes and various arithmetic functions , e.g. (or more generally for various ). After taking some carefully chosen linear combinations of these discrepancies, and using the trivial positivity lower bound
one eventually obtains (for suitable ) a non-trivial lower bound of the form
where is some weight function, and is the set of such that there are at least two primes in the interval . This implies at least one solution to the inequalities with , and Zhang’s theorem follows.
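In concrete (if far weaker than Elliott-Halberstam-type) form, the kind of discrepancy being controlled looks like the following. This is a rough illustrative sketch of my own, measuring the deviation of a single progression from equidistribution, rather than the averaged, weighted versions actually used in the sieve arguments.

```python
import math

def primes_upto(n: int):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

def prime_discrepancy(n: int, q: int, a: int) -> float:
    """|#{p <= n : p = a (mod q)} - pi(n)/phi(q)| for (a, q) = 1: a crude
    stand-in for the equidistribution quantities controlled on average."""
    ps = primes_upto(n)
    count = sum(1 for p in ps if p % q == a)
    phi = sum(1 for r in range(1, q + 1) if math.gcd(r, q) == 1)
    return abs(count - len(ps) / phi)
```

The actual hypotheses control such discrepancies on average over moduli $q$ up to a power of $n$, which is far deeper than any single-modulus computation.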
In a similar vein, one could hope to use bounds on discrepancies such as (1) (for comparable to ), together with the trivial lower bound (2), to obtain (for sufficiently large , and suitable ) a non-trivial lower bound of the form
for some weight function , where is the set of such that there is at least one prime in each of the intervals and . This would imply the binary Goldbach conjecture with bounded error.
However, the parity obstruction blocks such a strategy from working (for much the same reason that it blocks any bound of the form in Zhang’s theorem, as discussed in the Polymath8b paper.) The reason is as follows. The sieve-theoretic arguments are linear with respect to the summation, and as such, any such sieve-theoretic argument would automatically also work in a weighted setting in which the summation is weighted by some non-negative weight . More precisely, if one could control the weighted discrepancies
to essentially the same accuracy as the unweighted discrepancies (1), then thanks to the trivial weighted version
of (2), any sieve-theoretic argument that was capable of proving (3) would also be capable of proving the weighted estimate
However, (4) may be defeated by a suitable choice of weight $w$, namely
$$w(n) := \prod_{h=0}^{H} \frac{1 - \lambda(n) \lambda(N+h-n)}{2},$$
where $\lambda$ is the Liouville function, which encodes the parity of the number of prime factors of a given number $n$. Since $\lambda^2 = 1$, one can expand out $w(n)$ as the sum of $2^{-(H+1)}$ and a finite number of other terms, each of which consists of the product of two or more translates (or reflections) of $\lambda$. But from the Möbius randomness principle (or its analogue for the Liouville function), such products of $\lambda$ are widely expected to be essentially orthogonal to any arithmetic function arising from a single multiplicative function, even on very short arithmetic progressions. As such, inserting the weight $w$ into (1) should have a negligible effect on the discrepancy. On the other hand, in order for $w(n)$ to be non-zero, $\lambda(n)$ has to have the same sign as $-\lambda(N+h-n)$, and hence the opposite sign to $\lambda(N+h-n)$, for each $0 \leq h \leq H$; since $\lambda(p) = -1$ for every prime $p$, the numbers $n$ and $N+h-n$ cannot then simultaneously be prime for any $n$ and $h$, and so the weighted count of bounded-error Goldbach representations vanishes identically, contradicting (4). This indirectly rules out any modification of the Goldston-Pintz-Yildirim/Zhang method for establishing the binary Goldbach conjecture with bounded error.
The above argument is not watertight, and one could envisage some ways around this problem. One of them is that the Möbius randomness principle could simply be false, in which case the parity obstruction vanishes. A good example of this is the result of Heath-Brown that shows that if there are infinitely many Siegel zeroes (which is a strong violation of the Möbius randomness principle), then the twin prime conjecture holds. Another way around the obstruction is to start controlling the discrepancy (1) for functions that are combinations of more than one multiplicative function, e.g. . However, controlling such functions looks to be at least as difficult as the twin prime conjecture (which is morally equivalent to obtaining non-trivial lower-bounds for ). A third option is not to use a sieve-theoretic argument, but to try a different method (e.g. the circle method). However, most other known methods also exhibit linearity in the “” variable and I would suspect they would be vulnerable to a similar obstruction. (In any case, the circle method specifically has some other difficulties in tackling binary problems, as discussed in this previous post.)
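The expected cancellation in correlations of the Liouville function can at least be glimpsed numerically. This is an illustrative sketch of my own: the single average tending to zero is known unconditionally (it is equivalent to the prime number theorem), while the shifted correlation is a Chowla-type sum that remains conjectural.

```python
def liouville(n: int) -> int:
    """lambda(n) = (-1)^(number of prime factors of n, with multiplicity)."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

N = 50000
mean_single = sum(liouville(n) for n in range(1, N + 1)) / N
mean_shifted = sum(liouville(n) * liouville(n + 2) for n in range(1, N + 1)) / N
# Both averages are expected to be o(1) as N grows; numerically both are
# already small at this modest range.
```

Needless to say, small numerical averages are consistent with, but very far from proving, the Möbius randomness principle.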
Due to some requests, I’m uploading to my blog the slides for my recent talk in Segovia (for the birthday conference of Michael Cowling) on “Hilbert’s fifth problem and approximate groups“. The slides cover essentially the same range of topics in this series of lecture notes, or in this text of mine, though of course in considerably less detail, given that the slides are meant to be presented in an hour.
Let $\overline{\bf Q}$ be the algebraic closure of ${\bf Q}$, that is to say the field of algebraic numbers. We fix an embedding of $\overline{\bf Q}$ into ${\bf C}$, giving rise to a complex absolute value $|z|$ for algebraic numbers $z \in \overline{\bf Q}$.
Let $\alpha \in \overline{\bf Q}$ be of degree $d \geq 2$, so that $\alpha$ is irrational. A classical theorem of Liouville gives the quantitative bound
$$\left|\alpha - \frac{p}{q}\right| \geq \frac{c}{q^d} \qquad (1)$$
for the extent to which $\alpha$ fails to be approximated by rational numbers $p/q$, where $c > 0$ depends on $\alpha$ but not on $p, q$. Indeed, if one lets $\alpha = \alpha_1, \alpha_2, \ldots, \alpha_d$ be the Galois conjugates of $\alpha$, then the quantity $q^d \prod_{i=1}^d |\alpha_i - \frac{p}{q}|$ is a non-zero natural number divided by a constant depending only on $\alpha$, and so we have the trivial lower bound
$$q^d \prod_{i=1}^d \left|\alpha_i - \frac{p}{q}\right| \gg_\alpha 1,$$
from which the bound (1) easily follows. A well known corollary of the bound (1) is that Liouville numbers are automatically transcendental.
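Liouville's bound is effective and can be checked numerically. As an illustrative sketch of my own (in floating point), for the degree-3 number $2^{1/3}$ the quantity $q^3 |\alpha - p/q|$ stays bounded away from zero uniformly in $q$:

```python
ALPHA = 2 ** (1.0 / 3.0)  # cube root of 2: algebraic of degree d = 3

def liouville_quality(max_q: int) -> float:
    """Minimum over 1 <= q <= max_q of q^3 * |ALPHA - p/q|, with p the
    nearest integer to ALPHA * q; Liouville's theorem guarantees this
    minimum stays above some fixed c > 0."""
    best = float("inf")
    for q in range(1, max_q + 1):
        p = round(ALPHA * q)
        best = min(best, q ** 3 * abs(ALPHA - p / q))
    return best
```

The minimum here is already attained at $q = 1$ (where it is $2^{1/3} - 1 \approx 0.26$), consistent with the effective constant one can extract from the conjugate-product argument above.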
The famous theorem of Thue, Siegel and Roth improves the bound (1) to
$$\left|\alpha - \frac{p}{q}\right| \geq \frac{c_\epsilon}{q^{2+\epsilon}} \qquad (2)$$
for any $\epsilon > 0$ and rationals $p/q$, where $c_\epsilon > 0$ depends on $\epsilon$ and $\alpha$ but not on $p, q$.
for any and rationals , where depends on but not on . Apart from the in the exponent and the implied constant, this bound is optimal, as can be seen from Dirichlet’s theorem. This theorem is a good example of the ineffectivity phenomenon that affects a large portion of modern number theory: the implied constant in the notation is known to be finite, but there is no explicit bound for it in terms of the coefficients of the polynomial defining (in contrast to (1), for which an effective bound may be easily established). This is ultimately due to the reliance on the “dueling conspiracy” (or “repulsion phenomenon”) strategy. We do not as yet have a good way to rule out one counterexample to (2), in which is far closer to than ; however we can rule out two such counterexamples, by playing them off of each other.
A powerful strengthening of the Thue-Siegel-Roth theorem is given by the subspace theorem, first proven by Schmidt and then generalised further by several authors. To motivate the theorem, first observe that the Thue-Siegel-Roth theorem may be rephrased as a bound of the form
for any algebraic numbers with and linearly independent (over the algebraic numbers), and any and , with the exception when or are rationally dependent (i.e. one is a rational multiple of the other), in which case one has to remove some lines (i.e. subspaces in ) of rational slope from the space of pairs to which the bound (3) does not apply (namely, those lines for which the left-hand side vanishes). Here can depend on but not on . More generally, we have
Theorem 1 (Schmidt subspace theorem) Let be a natural number. Let be linearly independent linear forms. Then for any , one has the bound
for all , outside of a finite number of proper subspaces of , where
and depends on and the , but is independent of .
Being a generalisation of the Thue-Siegel-Roth theorem, it is unsurprising that the known proofs of the subspace theorem are also ineffective with regards to the constant . (However, the number of exceptional subspaces may be bounded effectively; cf. the situation with the Skolem-Mahler-Lech theorem, discussed in this previous blog post.) Once again, the lower bound here is basically sharp except for the factor and the implied constant: given any with , a simple volume packing argument (the same one used to prove the Dirichlet approximation theorem) shows that for any sufficiently large , one can find integers , not all zero, such that
for all . Thus one can get comparable to in many different ways.
There are important generalisations of the subspace theorem to other number fields than the rationals (and to other valuations than the Archimedean valuation ); we will develop one such generalisation below.
The subspace theorem is one of many finiteness theorems in Diophantine geometry; in this case, it is the number of exceptional subspaces which is finite. It turns out that finiteness theorems are very compatible with the language of nonstandard analysis. (See this previous blog post for a review of the basics of nonstandard analysis, and in particular for the nonstandard interpretation of asymptotic notation such as and .) The reason for this is that a standard set is finite if and only if it contains no strictly nonstandard elements (that is to say, elements of ). This makes for a clean formulation of finiteness theorems in the nonstandard setting. For instance, the standard form of Bezout’s theorem asserts that if are coprime polynomials over some field, then the curves and intersect in only finitely many points. The nonstandard version of this is then
Theorem 2 (Bezout’s theorem, nonstandard form) Let be standard coprime polynomials. Then there are no strictly nonstandard solutions to .
Now we reformulate Theorem 1 in nonstandard language. We need a definition:
Definition 3 (General position) Let be nested fields. A point in is said to be in -general position if it is not contained in any hyperplane of definable over , or equivalently if one has
for any .
Theorem 4 (Schmidt subspace theorem, nonstandard version) Let be a standard natural number. Let be linearly independent standard linear forms. Let be a tuple of nonstandard integers which is in -general position (in particular, this forces to be strictly nonstandard). Then one has
where we extend from to (and also similarly extend from to ) in the usual fashion.
Observe that (as is usual when translating to nonstandard analysis) some of the epsilons and quantifiers that are present in the standard version become hidden in the nonstandard framework, being moved inside concepts such as “strictly nonstandard” or “general position”. We remark that as is in -general position, it is also in -general position (as an easy Galois-theoretic argument shows), and the requirement that the are linearly independent is thus equivalent to being -linearly independent.
Exercise 1 Verify that Theorem 1 and Theorem 4 are equivalent. (Hint: there are only countably many proper subspaces of .)
We will not prove the subspace theorem here, but instead focus on a particular application of the subspace theorem, namely to counting integer points on curves. In this paper of Corvaja and Zannier, the subspace theorem was used to give a new proof of the following basic result of Siegel:
Theorem 5 (Siegel’s theorem on integer points) Let be an irreducible polynomial of two variables, such that the affine plane curve either has genus at least one, or has at least three points on the line at infinity, or both. Then has only finitely many integer points .
This is a finiteness theorem, and as such may be easily converted to a nonstandard form:
Theorem 6 (Siegel’s theorem, nonstandard form) Let be a standard irreducible polynomial of two variables, such that the affine plane curve either has genus at least one, or has at least three points on the line at infinity, or both. Then does not contain any strictly nonstandard integer points .
Note that Siegel’s theorem can fail for genus zero curves that only meet the line at infinity at just one or two points; the key examples here are the graphs for a polynomial , and the Pell equation curves . Siegel’s theorem can be compared with the more difficult theorem of Faltings, which establishes finiteness of rational points (not just integer points), but now needs the stricter requirement that the curve has genus at least two (to avoid the additional counterexample of elliptic curves of positive rank, which have infinitely many rational points).
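To see why the Pell case must be excluded, note that Pell curves carry infinitely many integer points, all generated from a fundamental solution; a quick sketch (my own) for $x^2 - 2y^2 = 1$:

```python
def pell_solutions(k: int):
    """First k positive integer solutions of x^2 - 2 y^2 = 1, generated from
    the fundamental solution (3, 2) by the map (x, y) -> (3x + 4y, 2x + 3y),
    i.e. multiplication by the fundamental unit 3 + 2*sqrt(2)."""
    x, y = 3, 2
    sols = []
    for _ in range(k):
        sols.append((x, y))
        x, y = 3 * x + 4 * y, 2 * x + 3 * y
    return sols
```

The solutions grow geometrically (like powers of $3 + 2\sqrt{2}$), giving infinitely many integer points on this genus-zero curve with only two points at infinity.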
The standard proofs of Siegel’s theorem rely on a combination of the Thue-Siegel-Roth theorem and a number of results on abelian varieties (notably the Mordell-Weil theorem). The Corvaja-Zannier argument rebalances the difficulty of the argument by replacing the Thue-Siegel-Roth theorem by the more powerful subspace theorem (in fact, they need one of the stronger versions of this theorem alluded to earlier), while greatly reducing the reliance on results on abelian varieties. Indeed, for curves with three or more points at infinity, no theory from abelian varieties is needed at all, while for the remaining cases, one mainly needs the existence of the Abel-Jacobi embedding, together with a relatively elementary theorem of Chevalley-Weil which is used in the proof of the Mordell-Weil theorem, but is significantly easier to prove.
The Corvaja-Zannier argument (together with several further applications of the subspace theorem) is presented nicely in this Bourbaki expose of Bilu. To establish the theorem in full generality requires a certain amount of algebraic number theory machinery, such as the theory of valuations on number fields, or of relative discriminants between such number fields. However, the basic ideas can be presented without much of this machinery by focusing on simple special cases of Siegel’s theorem. For instance, we can handle irreducible cubics that meet the line at infinity at exactly three points :
Theorem 7 (Siegel’s theorem with three points at infinity) Siegel’s theorem holds when the irreducible polynomial takes the form
for some quadratic polynomial and some distinct algebraic numbers .
Proof: We use the nonstandard formalism. Suppose for sake of contradiction that we can find a strictly nonstandard integer point on a curve of the indicated form. As this point is infinitesimally close to the line at infinity, must be infinitesimally close to one of ; without loss of generality we may assume that is infinitesimally close to .
We now use a version of the polynomial method, to find some polynomials of controlled degree that vanish to high order on the “arm” of the cubic curve that asymptotes to . More precisely, let be a large integer (actually will already suffice here), and consider the -vector space of polynomials of degree at most , and of degree at most in the variable; this space has dimension . Also, as one traverses the arm of , any polynomial in grows at a rate of at most , that is to say has a pole of order at most at the point at infinity . By performing Laurent expansions around this point (which is a non-singular point of , as the are assumed to be distinct), we may thus find a basis of , with the property that has a pole of order at most at for each .
From the control of the pole at , we have
for all . The exponents here become negative for , and on multiplying them all together we see that
This exponent is negative for large enough (or just take ). If we expand
for some algebraic numbers , then we thus have
for some standard . Note that the -dimensional vectors are linearly independent in , because the are linearly independent in . Applying the Schmidt subspace theorem in the contrapositive, we conclude that the -tuple is not in -general position. That is to say, one has a non-trivial constraint of the form
for some standard rational coefficients , not all zero. But, as is irreducible and cubic in , it has no common factor with the standard polynomial , so by Bezout’s theorem (Theorem 2) the constraint (4) only has standard solutions, contradicting the strictly nonstandard nature of .
Exercise 2 Rewrite the above argument so that it makes no reference to nonstandard analysis. (In this case, the rewriting is quite straightforward; however, there will be a subsequent argument in which the standard version is significantly messier than the nonstandard counterpart, which is the reason why I am working with the nonstandard formalism in this blog post.)
A similar argument works for higher degree curves that meet the line at infinity in three or more points, though if the curve has singularities at infinity then it becomes convenient to rely on the Riemann-Roch theorem to control the dimension of the analogue of the space . Note that when there are only two or fewer points at infinity, though, one cannot get the negative exponent of needed to usefully apply the subspace theorem. To deal with this case we require some additional tricks. For simplicity we focus on the case of Mordell curves, although it will be convenient to work with more general number fields than the rationals:
Theorem 8 (Siegel’s theorem for Mordell curves) Let be a non-zero integer. Then there are only finitely many integer solutions to . More generally, for any number field , and any nonzero , there are only finitely many algebraic integer solutions to , where is the ring of algebraic integers in .
Again, we will establish the nonstandard version. We need some additional notation:
Definition 9
We define an almost rational integer to be a nonstandard such that for some standard positive integer , and write for the -algebra of almost rational integers. If is a standard number field, we define an almost -integer to be a nonstandard such that for some standard positive integer , and write for the -algebra of almost -integers. We define an almost algebraic integer to be a nonstandard such that is a nonstandard algebraic integer for some standard positive integer , and write for the -algebra of almost algebraic integers.
Theorem 10 (Siegel for Mordell, nonstandard version) Let be a non-zero standard algebraic number. Then the curve does not contain any strictly nonstandard almost algebraic integer point.
Another way of phrasing this theorem is that if are strictly nonstandard almost algebraic integers, then is either strictly nonstandard or zero.
Exercise 3 Verify that Theorem 8 and Theorem 10 are equivalent.
Due to all the ineffectivity, our proof does not supply any bound on the solutions in terms of , even if one removes all references to nonstandard analysis. It is a conjecture of Hall (a special case of the notorious ABC conjecture) that one has the bound for all (or equivalently ), but even the weaker conjecture that are of polynomial size in is open. (The best known bounds are of exponential nature, and are proven using a version of Baker’s method: see for instance this text of Sprindzuk.)
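For a concrete classical instance of the finiteness being proved: the Mordell curve $y^2 = x^3 - 2$ famously has $(3, \pm 5)$ as its only integer points. A brute-force search sketch of my own confirms what it can over a window of $x$:

```python
def mordell_points(k: int, xmax: int):
    """Integer points (x, y) with y^2 = x^3 + k, |x| <= xmax and y >= 0."""
    pts = []
    for x in range(-xmax, xmax + 1):
        rhs = x ** 3 + k
        if rhs < 0:
            continue
        y = round(rhs ** 0.5)
        # exact integer check around the floating-point square root
        for cand in (y - 1, y, y + 1):
            if cand >= 0 and cand * cand == rhs:
                pts.append((x, cand))
                break
    return pts
```

Of course, no finite search proves finiteness; that is exactly what Theorem 8 (ineffectively) supplies.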
A direct repetition of the arguments used to prove Theorem 7 will not work here, because the Mordell curve only hits the line at infinity at one point, . To get around this we will exploit the fact that the Mordell curve is an elliptic curve and thus has a group law on it. We will then divide all the integer points on this curve by two; as elliptic curves have four 2-torsion points, this will end up placing us in a situation like Theorem 7, with four points at infinity. However, there is an obstruction: it is not obvious that dividing an integer point on the Mordell curve by two will produce another integer point. However, this is essentially true (after enlarging the ring of integers slightly) thanks to a general principle of Chevalley and Weil, which can be worked out explicitly in the case of division by two on Mordell curves by relatively elementary means (relying mostly on unique factorisation of ideals of algebraic integers). We give the details below the fold.
As laid out in the foundational work of Kolmogorov, a classical probability space (or probability space for short) is a triplet , where is a set, is a -algebra of subsets of , and is a countably additive probability measure on . Given such a space, one can form a number of interesting function spaces, including
- the (real) Hilbert space of square-integrable functions , modulo -almost everywhere equivalence, and with the positive definite inner product ; and
- the unital commutative Banach algebra of essentially bounded functions , modulo -almost everywhere equivalence, with defined as the essential supremum of .
There is also a trace on defined by integration: .
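In a finite toy model (my own illustration, not from the text), these structures are just weighted sums: the trace is integration against the probability weights, and the inner product on the algebra is the trace of a product.

```python
# A finite classical probability space: three atoms with probability weights.
OMEGA = ("a", "b", "c")
MU = {"a": 0.5, "b": 0.25, "c": 0.25}

def tr(f):
    """The trace: tr(f) = integral of f with respect to mu (a finite sum)."""
    return sum(f[w] * MU[w] for w in OMEGA)

def inner(f, g):
    """The L^2 inner product <f, g> = tr(f g)."""
    return tr({w: f[w] * g[w] for w in OMEGA})

one = {w: 1.0 for w in OMEGA}          # the unit of the algebra
f = {"a": 2.0, "b": 0.0, "c": 0.0}     # a random variable with tr(f) = 1
```

This finite picture is the prototype for the abstraction in Definition 1 below: only the algebra of functions and the trace are retained, with the underlying points forgotten.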
One can form the category of classical probability spaces, by defining a morphism between probability spaces to be a function which is measurable (thus for all ) and measure-preserving (thus for all ).
Let us now abstract the algebraic features of these spaces as follows; for want of a better name, I will refer to this abstraction as an algebraic probability space, and is very similar to the non-commutative probability spaces studied in this previous post, except that these spaces are now commutative (and real).
Definition 1 An algebraic probability space is a pair where
- is a unital commutative real algebra;
- is a homomorphism such that and for all ;
- Every element of is bounded in the sense that . (Technically, this isn’t an algebraic property, but I need it for technical reasons.)
A morphism is a homomorphism which is trace-preserving, in the sense that for all .
For want of a better name, I’ll denote the category of algebraic probability spaces as . One can view this category as the opposite category to that of (a subcategory of) the category of tracial commutative real algebras. One could emphasise this opposite nature by denoting the algebraic probability space as rather than ; another suggestive (but slightly inaccurate) notation, inspired by the language of schemes, would be rather than . However, we will not adopt these conventions here, and refer to algebraic probability spaces just by the pair .
By the previous discussion, we have a covariant functor that takes a classical probability space to its algebraic counterpart , with a morphism of classical probability spaces mapping to a morphism of the corresponding algebraic probability spaces by the formula
for . One easily verifies that this is a functor.
In this post I would like to describe a functor which partially inverts (up to natural isomorphism), that is to say a recipe for starting with an algebraic probability space and producing a classical probability space . This recipe is not new – it is basically the (commutative) Gelfand-Naimark-Segal construction (discussed in this previous post) combined with the Loomis-Sikorski theorem (discussed in this previous post). However, I wanted to put the construction in a single location for sake of reference. I also wanted to make the point that and are not complete inverses; there is a bit of information in the algebraic probability space (e.g. topological information) which is lost when passing back to the classical probability space. In some future posts, I would like to develop some ergodic theory using the algebraic foundations of probability theory rather than the classical foundations; this turns out to be convenient in the ergodic theory arising from nonstandard analysis (such as that described in this previous post), in which the groups involved are uncountable and the underlying spaces are not standard Borel spaces.
Let us describe how to construct the functor , with details postponed to below the fold.
- Starting with an algebraic probability space , form an inner product on by the formula , and also form the spectral radius .
- The inner product is clearly positive semi-definite. Quotienting out the null vectors and taking completions, we arrive at a real Hilbert space , to which the trace may be extended.
- Somewhat less obviously, the spectral radius is well-defined and gives a norm on . Taking limits of sequences in of bounded spectral radius gives us a subspace of that has the structure of a real commutative Banach algebra.
- The idempotents of the Banach algebra may be indexed by elements of an abstract -algebra .
- The Boolean algebra homomorphisms (or equivalently, the real algebra homomorphisms ) may be indexed by elements of a space .
- Let denote the -algebra on generated by the basic sets for every .
- Let be the -ideal of generated by the sets , where is a sequence with .
- One verifies that is isomorphic to . Using this isomorphism, the trace on can be used to construct a countably additive measure on . The classical probability space is then , and the abstract spaces may now be identified with their concrete counterparts , .
- Every algebraic probability space morphism generates a classical probability morphism via the formula
using a pullback operation on the abstract -algebras that can be defined by density.
Remark 1 The classical probability space constructed by the functor has some additional structure; namely is a -Stone space (a Stone space with the property that the closure of any countable union of clopen sets is clopen), is the Baire -algebra (generated by the clopen sets), and the null sets are the meager sets. However, we will not use this additional structure here.
The partial inversion relationship between the functors and is given by the following assertion:
- There is a natural transformation from to the identity functor .
More informally: if one starts with an algebraic probability space and converts it back into a classical probability space , then there is a trace-preserving algebra homomorphism of to , which respects morphisms of the algebraic probability space. While this relationship is far weaker than an equivalence of categories (which would require that and are both natural isomorphisms), it is still good enough to allow many ergodic theory problems formulated using classical probability spaces to be reformulated instead as an equivalent problem in algebraic probability spaces.
Remark 2 The opposite composition is a little odd: it takes an arbitrary probability space and returns a more complicated probability space , with being the space of homomorphisms . While there is “morally” an embedding of into using the evaluation map, this map does not exist in general because points in may well have zero measure. However, if one takes a “pointless” approach and focuses just on the measure algebras , , then these algebras become naturally isomorphic after quotienting out by null sets.
Remark 3 An algebraic probability space captures a bit more structure than a classical probability space, because may be identified with a proper subset of that describes the “regular” functions (or random variables) of the space. For instance, starting with the unit circle (with the usual Haar measure and the usual trace ), any unital subalgebra of that is dense in will generate the same classical probability space on applying the functor , namely one will get the space of homomorphisms from to (with the measure induced from ). Thus for instance could be the continuous functions , the Wiener algebra or the full space , but the classical space will be unable to distinguish these spaces from each other. In particular, the functor loses information (roughly speaking, this functor takes an algebraic probability space and completes it to a von Neumann algebra, but then forgets exactly what algebra was initially used to create this completion). In ergodic theory, this sort of “extra structure” is traditionally encoded in topological terms, by assuming that the underlying probability space has a nice topological structure (e.g. a standard Borel space); however, with the algebraic perspective one has the freedom to have non-topological notions of extra structure, by choosing to be something other than an algebra of continuous functions on a topological space. I hope to discuss one such example of extra structure (coming from the Gowers-Host-Kra theory of uniformity seminorms) in a later blog post (this generalises the example of the Wiener algebra given previously, which is encoding “Fourier structure”).
A small example of how one could use the functors is as follows. Suppose one has a classical probability space with a measure-preserving action of an uncountable group , which is only defined (and an action) up to almost everywhere equivalence; thus for instance for any set and any , and might not be exactly equal, but only equal up to a null set. For similar reasons, an element of the invariant factor might not be exactly invariant with respect to , but instead one only has and equal up to null sets for each . One might like to “clean up” the action of to make it defined everywhere, and a genuine action everywhere, but this is not immediately achievable if is uncountable, since the union of all the null sets where something bad occurs may cease to be a null set. However, by applying the functor , each shift defines a morphism on the associated algebraic probability space (i.e. the Koopman operator), and then applying , we obtain a shift on a new classical probability space which now gives a genuine measure-preserving action of , and which is equivalent to the original action from a measure algebra standpoint. The invariant factor now consists of those sets in which are genuinely -invariant, not just up to null sets. (Basically, the classical probability space contains a Boolean algebra with the property that every measurable set is equivalent up to null sets to precisely one set in , allowing for a canonical “retraction” onto that eliminates all null set issues.)
More indirectly, the functors suggest that one should be able to develop a “pointless” form of ergodic theory, in which the underlying probability spaces are given algebraically rather than classically. I hope to give some more specific examples of this in later posts.
There are a number of ways to construct the real numbers , for instance
- as the metric completion of (thus, is defined as the set of Cauchy sequences of rationals, modulo Cauchy equivalence);
- as the space of Dedekind cuts on the rationals ;
- as the space of quasimorphisms on the integers, quotiented by bounded functions. (I believe this construction first appears in this paper of Street, who credits the idea to Schanuel, though the germ of this construction arguably goes all the way back to Eudoxus.)
There is also a fourth family of constructions that proceeds via nonstandard analysis, as a special case of what is known as the nonstandard hull construction. (Here I will assume some basic familiarity with nonstandard analysis and ultraproducts, as covered for instance in this previous blog post.) Given an unbounded nonstandard natural number , one can define two external additive subgroups of the nonstandard integers :
- The group of all nonstandard integers of magnitude less than or comparable to ; and
- The group of nonstandard integers of magnitude infinitesimally smaller than .
The group is a subgroup of , so we may form the quotient group . This space is isomorphic to the reals , and can in fact be used to construct the reals:
Proposition 1 For any coset of , there is a unique real number with the property that . The map is then an isomorphism between the additive groups and .
Proof: Uniqueness is clear. For existence, observe that the set is a Dedekind cut, and its supremum can be verified to have the required properties for .
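The construction can be summarised symbolically. The notation below is my own gloss (write $O(N)$ and $o(N)$ for the two external subgroups defined above, with $N$ the chosen unbounded nonstandard natural number); it is a sketch of the map from Proposition 1, not a verbatim formula from the proposition:

```latex
% Hypothesised notation: O(N), o(N) are the two external subgroups above.
\mathrm{st}\colon O(N)/o(N) \to \mathbb{R}, \qquad
\mathrm{st}\bigl(n + o(N)\bigr) := \sup\{\, q \in \mathbb{Q} : qN \le n \,\},
```

with the supremum taken of the Dedekind cut appearing in the existence argument.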
In a similar vein, we can view the unit interval in the reals as the quotient
where is the nonstandard (i.e. internal) set ; of course, is not a group, so one should interpret as the image of under the quotient map (or , if one prefers). Or to put it another way, (1) asserts that is the image of with respect to the map .
In this post I would like to record a nice measure-theoretic version of the equivalence (1), which essentially appears already in standard texts on Loeb measure (see e.g. this text of Cutland). To describe the results, we must first quickly recall the construction of Loeb measure on . Given an internal subset of , we may define the elementary measure of by the formula
This is a finitely additive probability measure on the Boolean algebra of internal subsets of . We can then construct the Loeb outer measure of any subset in complete analogy with Lebesgue outer measure by the formula
where ranges over all sequences of internal subsets of that cover . We say that a subset of is Loeb measurable if, for any (standard) , one can find an internal subset of which differs from by a set of Loeb outer measure at most , and in that case we define the Loeb measure of to be . It is a routine matter to show (e.g. using the Carathéodory extension theorem) that the space of Loeb measurable sets is a -algebra, and that is a countably additive probability measure on this space that extends the elementary measure . Thus now has the structure of a probability space .
Now, the group acts (Loeb-almost everywhere) on the probability space by the addition map, thus for and (excluding a set of Loeb measure zero where the sum exits the underlying space). This action is clearly seen to be measure-preserving. As such, we can form the invariant factor , defined by restricting attention to those Loeb measurable sets with the property that is equal -almost everywhere to for each .
The claim is then that this invariant factor is equivalent (up to almost everywhere equivalence) to the unit interval with Lebesgue measure (and the trivial action of ), by the same factor map used in (1). More precisely:
Theorem 2 Given a set , there exists a Lebesgue measurable set , unique up to -a.e. equivalence, such that is -a.e. equivalent to the set . Conversely, if is Lebesgue measurable, then is in , and .
More informally, we have the measure-theoretic version
of (1).
Proof: We first prove the converse. It is clear that is -invariant, so it suffices to show that is Loeb measurable with Loeb measure . This is easily verified when is an elementary set (a finite union of intervals). By countable subadditivity of outer measure, this implies that the Loeb outer measure of is bounded by the Lebesgue outer measure of for any set ; since every Lebesgue measurable set differs from an elementary set by a set of arbitrarily small Lebesgue outer measure, the claim follows.
Now we establish the forward claim. Uniqueness is clear from the converse claim, so it suffices to show existence. Let . Let be an arbitrary standard real number, then we can find an internal set which differs from by a set of Loeb measure at most . As is -invariant, we conclude that for every , and differ by a set of Loeb measure (and hence elementary measure) at most . By the (contrapositive of the) underspill principle, there must exist a standard such that and differ by a set of elementary measure at most for all . If we then define the nonstandard function by the formula
then from the (nonstandard) triangle inequality we have
(say). On the other hand, has the Lipschitz continuity property
and so in particular we see that
for some Lipschitz continuous function . If we then let be the set where , one can check that differs from by a set of Loeb outer measure , and hence does so also. Sending to zero, we see (from the converse claim) that is a Cauchy sequence in and thus converges in for some Lebesgue measurable . The sets then converge in Loeb outer measure to , giving the claim.
Thanks to the Lebesgue differentiation theorem, the conditional expectation of a bounded Loeb-measurable function can be expressed (as a function on , defined -a.e.) as
By the abstract ergodic theorem from the previous post, one can also view this conditional expectation as the element in the closed convex hull of the shifts , of minimal norm. In particular, we obtain a form of the von Neumann ergodic theorem in this context: the averages for converge (as a net, rather than a sequence) in to .
If is (the standard part of) an internal function, that is to say the ultralimit of a sequence of finitary bounded functions, one can view the measurable function as a limit of the that is analogous to the “graphons” that emerge as limits of graphs (see e.g. the recent text of Lovasz on graph limits). Indeed, the measurable function is related to the discrete functions by the formula
for all , where is the nonprincipal ultrafilter used to define the nonstandard universe. In particular, from the Arzela-Ascoli diagonalisation argument there is a subsequence such that
thus is the asymptotic density function of the . For instance, if is the indicator function of a randomly chosen subset of , then the asymptotic density function would equal (almost everywhere, at least).
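As a quick illustration of the last example, here is a small Python sketch (the function name and parameters are illustrative, not from the post) that samples a random subset of a long discrete interval and averages its indicator function over windows; the empirical density hovers near 1/2, matching the claimed asymptotic density function:

```python
import random

def density_profile(n, half_window, p=0.5, seed=0):
    """Indicator function of a random subset of {0,...,n-1} (each element
    kept independently with probability p); averaging over windows gives
    an empirical version of the asymptotic density function, which for
    this random model is the constant p almost everywhere."""
    rng = random.Random(seed)
    f = [1 if rng.random() < p else 0 for _ in range(n)]
    profile = []
    for x in (0.1, 0.3, 0.5, 0.7, 0.9):   # sample points of the unit interval
        c = int(x * n)
        block = f[c - half_window : c + half_window]
        profile.append(sum(block) / len(block))
    return profile

print(density_profile(200_000, 5_000))  # each entry close to 0.5
```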
I’m continuing to look into understanding the ergodic theory of actions, as I believe this may allow one to apply ergodic theory methods to the “single-scale” or “non-asymptotic” setting (in which one averages only over scales comparable to a large parameter , rather than the traditional asymptotic approach of letting the scale go to infinity). I’m planning some further posts in this direction, though this is still a work in progress.
The von Neumann ergodic theorem (the Hilbert space version of the mean ergodic theorem) asserts that if is a unitary operator on a Hilbert space , and is a vector in that Hilbert space, then one has
in the strong topology, where is the -invariant subspace of , and is the orthogonal projection to . (See e.g. these previous lecture notes for a proof.) The same proof extends to more general amenable groups: if is a countable amenable group acting on a Hilbert space by unitary transformations , and is a vector in that Hilbert space, then one has
for any Folner sequence of , where is the -invariant subspace. Thus one can interpret as a certain average of elements of the orbit of .
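As a sanity check of the amenable case, one can take the group to be the integers with Folner sets {0, ..., N-1}, acting on the plane by rotation through an angle that is an irrational multiple of 2*pi; the invariant subspace is then trivial, so the Folner averages should collapse to the zero vector. A minimal Python sketch (the function name is illustrative):

```python
import math

def shift_average(theta, v, N):
    """Folner average (1/N) * sum_{n < N} U^n v, where U is rotation of the
    plane by angle theta.  When theta is an irrational multiple of 2*pi the
    invariant subspace is {0}, so the mean ergodic theorem predicts that
    these averages tend to the zero vector."""
    sx = sy = 0.0
    x, y = v
    c, s = math.cos(theta), math.sin(theta)
    for _ in range(N):
        sx += x
        sy += y
        x, y = c * x - s * y, s * x + c * y
    return (sx / N, sy / N)

avg = shift_average(math.sqrt(2), (1.0, 0.0), 100_000)
print(avg)  # both components of size O(1/N)
```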
I recently discovered that there is a simple variant of this ergodic theorem that holds even when the group is not amenable (or not discrete), using a more abstract notion of averaging:
Theorem 1 (Abstract ergodic theorem) Let be an arbitrary group acting unitarily on a Hilbert space , and let be a vector in . Then is the element in the closed convex hull of of minimal norm, and is also the unique element of in this closed convex hull.
Proof: As the closed convex hull of is closed, convex, and non-empty in a Hilbert space, it is a classical fact (see e.g. Proposition 1 of this previous post) that it has a unique element of minimal norm. If for some , then the midpoint of and would be in the closed convex hull and be of smaller norm, a contradiction; thus is -invariant. To finish the first claim, it suffices to show that is orthogonal to every element of . But if this were not the case for some such , we would have for all , and thus on taking convex hulls , a contradiction.
Finally, since is orthogonal to , the same is true for for any in the closed convex hull of , and this gives the second claim.
This result is due to Alaoglu and Birkhoff. It implies the amenable ergodic theorem (1); indeed, given any , Theorem 1 implies that there is a finite convex combination of shifts of which lies within (in the norm) of . By the triangle inequality, all the averages also lie within of , but by the Folner property this implies that the averages are eventually within (say) of , giving the claim.
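To see Theorem 1 in a toy finite-dimensional setting (all names below are illustrative), take the cyclic group acting on Euclidean space by coordinate shifts. The invariant subspace consists of the constant vectors, and the uniform orbit average, which is one point of the convex hull of the orbit, coincides with the orthogonal projection onto that subspace:

```python
def invariant_projection(v):
    """For the cyclic group acting on R^n by coordinate shifts, the invariant
    subspace consists of the constant vectors, and the orthogonal projection
    of v onto it is the constant vector with entries mean(v)."""
    m = sum(v) / len(v)
    return [m] * len(v)

def orbit_average(v):
    """Uniform average of the orbit {shift^k(v)} under cyclic shifts --
    one particular element of the convex hull of the orbit."""
    n = len(v)
    return [sum(v[(i + k) % n] for k in range(n)) / n for i in range(n)]

v = [3.0, -1.0, 4.0]
print(orbit_average(v), invariant_projection(v))  # both [2.0, 2.0, 2.0]
```

Consistent with Theorem 1, this common vector is the minimal-norm element of the convex hull: any other convex combination of shifts has the same projection onto the constants but a nonzero component orthogonal to them.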
It turns out to be possible to use Theorem 1 as a substitute for the mean ergodic theorem in a number of contexts, thus removing the need for an amenability hypothesis. Here is a basic application:
Corollary 2 (Relative orthogonality) Let be a group acting unitarily on a Hilbert space , and let be a -invariant subspace of . Then and are relatively orthogonal over their common subspace , that is to say the restrictions of and to the orthogonal complement of are orthogonal to each other.
Proof: By Theorem 1, we have for all , and the claim follows. (Thanks to Gergely Harcos for this short argument.)
Now we give a more advanced application of Theorem 1, to establish some “Mackey theory” over arbitrary groups . Define a -system to be a probability space together with a measure-preserving action of on ; this gives an action of on , which by abuse of notation we also call :
(In this post we follow the usual convention of defining the spaces by quotienting out by almost everywhere equivalence.) We say that a -system is ergodic if consists only of the constants.
(A technical point: the theory becomes slightly cleaner if we interpret our measure spaces abstractly (or “pointlessly“), removing the underlying space and quotienting by the -ideal of null sets, and considering maps such as only on this quotient -algebra (or on the associated von Neumann algebra or Hilbert space ). However, we will stick with the more traditional setting of classical probability spaces here to keep the notation familiar, but with the understanding that many of the statements below should be understood modulo null sets.)
A factor of a -system is another -system together with a factor map which commutes with the -action (thus for all ) and respects the measure in the sense that for all . For instance, the -invariant factor , formed by restricting to the invariant algebra , is a factor of . (This factor is the first factor in an important hierarchy, the next element of which is the Kronecker factor , but we will not discuss higher elements of this hierarchy further here.) If is a factor of , we refer to as an extension of .
From Corollary 2 we have
Corollary 3 (Relative independence) Let be a -system for a group , and let be a factor of . Then and are relatively independent over their common factor , in the sense that the spaces and are relatively orthogonal over when all these spaces are embedded into .
This has a simple consequence regarding the product of two -systems and , in the case when the action is trivial:
Lemma 4 If are two -systems, with the action of on trivial, then is isomorphic to in the obvious fashion.
This lemma is immediate for countable , since for a -invariant function , one can ensure that holds simultaneously for all outside of a null set, but is a little trickier for uncountable .
Proof: It is clear that is a factor of . To obtain the reverse inclusion, suppose that it fails, thus there is a non-zero which is orthogonal to . In particular, we have orthogonal to for any . Since lies in , we conclude from Corollary 3 (viewing as a factor of ) that is also orthogonal to . Since is an arbitrary element of , we conclude that is orthogonal to and in particular is orthogonal to itself, a contradiction. (Thanks to Gergely Harcos for this argument.)
Now we discuss the notion of a group extension.
Definition 5 (Group extension) Let be an arbitrary group, let be a -system, and let be a compact metrisable group. A -extension of is an extension whose underlying space is (with the product of and the Borel -algebra on ), the factor map is , and the shift maps are given by
where for each , is a measurable map (known as the cocycle associated to the -extension ).
An important special case of a -extension arises when the measure is the product of with the Haar measure on . In this case, also has a -action that commutes with the -action, making a -system. More generally, could be the product of with the Haar measure of some closed subgroup of , with taking values in ; then is now a system. In this latter case we will call -uniform.
If is a -extension of and is a measurable map, we can define the gauge transform of to be the -extension of whose measure is the pushforward of under the map , and whose cocycles are given by the formula
It is easy to see that is a -extension that is isomorphic to as a -extension of ; we will refer to and as equivalent systems, and as cohomologous to . We then have the following fundamental result of Mackey and of Zimmer:
Theorem 6 (Mackey-Zimmer theorem) Let be an arbitrary group, let be an ergodic -system, and let be a compact metrisable group. Then every ergodic -extension of is equivalent to an -uniform extension of for some closed subgroup of .
This theorem is usually stated for amenable groups , but by using Theorem 1 (or more precisely, Corollary 3) the result is in fact also valid for arbitrary groups; we give the proof below the fold. (In the usual formulations of the theorem, and are also required to be Lebesgue spaces, or at least standard Borel, but again with our abstract approach here, such hypotheses will be unnecessary.) Among other things, this theorem plays an important role in the Furstenberg-Zimmer structural theory of measure-preserving systems (as well as subsequent refinements of this theory by Host and Kra); see this previous blog post for some relevant discussion. One can obtain similar descriptions of non-ergodic extensions via the ergodic decomposition, but the result becomes more complicated to state, and we will not do so here.
This should be the final thread (for now, at least) for the Polymath8 project (encompassing the original Polymath8a paper, the nearly finished Polymath8b paper, and the retrospective paper), superseding the previous Polymath8b thread (which was quite full) and the Polymath8a/retrospective thread (which was more or less inactive).
On Polymath8a: I talked briefly with Andrew Granville, who is handling the paper for Algebra & Number Theory, and he said that a referee report should be coming in soon. Apparently the length of the paper is a bit of an issue (not surprising, as it is 163 pages long) and there will be some suggestions to trim the size down a bit.
In view of the length issue at A&NT, I’m now leaning towards taking up Ken Ono’s offer to submit the Polymath8b paper to the new open access journal “Research in the Mathematical Sciences”. I think the paper is almost ready to be submitted (after the current participants sign off on it, of course), but it might be worth waiting on the Polymath8a referee report in case the changes suggested impact the 8b paper.
Finally, it is perhaps time to start working on the retrospective article, and collect some impressions from participants. I wrote up a quick draft of my own experiences, and also pasted in Pace Nielsen’s thoughts, as well as a contribution from an undergraduate following the project (Andrew Gibson). Hopefully we can collect a few more (either through comments on this blog, through email, or through Dropbox), and then start working on editing them together and finding some suitable concluding points to make about the Polymath8 project, and what lessons we can take from it for future projects of this type.
Given two unit vectors in a real inner product space, one can define the correlation between these vectors to be their inner product , or in more geometric terms, the cosine of the angle subtended by and . By the Cauchy-Schwarz inequality, this is a quantity between and , with the extreme positive correlation occurring when are identical, the extreme negative correlation occurring when are diametrically opposite, and the zero correlation occurring when are orthogonal. This notion is closely related to the notion of correlation between two non-constant square-integrable real-valued random variables , which is the same as the correlation between two unit vectors lying in the Hilbert space of square-integrable random variables, with being the normalisation of defined by subtracting off the mean and then dividing by the standard deviation of , and similarly for and .
One can also define correlation for complex (Hermitian) inner product spaces by taking the real part of the complex inner product to recover a real inner product.
While reading the (highly recommended) recent popular maths book “How not to be wrong“, by my friend and co-author Jordan Ellenberg, I came across the (important) point that correlation is not necessarily transitive: if correlates with , and correlates with , then this does not imply that correlates with . A simple geometric example is provided by the three unit vectors
in the Euclidean plane : and have a positive correlation of , as does and , but and are not correlated with each other. Or: for a typical undergraduate course, it is generally true that good exam scores are correlated with a deep understanding of the course material, and memorising from flash cards is correlated with good exam scores, but this does not imply that memorising flash cards is correlated with deep understanding of the course material.
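A concrete numerical instance of this non-transitivity uses the standard triple of planar unit vectors at 45-degree angles (which may or may not be the exact triple intended above; the names are illustrative):

```python
import math

def corr(a, b):
    """Correlation of two unit vectors = their inner product."""
    return sum(x * y for x, y in zip(a, b))

s = 1 / math.sqrt(2)
u, v, w = (1.0, 0.0), (s, s), (0.0, 1.0)
# u correlates with v, v correlates with w, yet u and w are orthogonal
print(corr(u, v), corr(v, w), corr(u, w))  # about 0.707, 0.707, and exactly 0.0
```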
However, there are at least two situations in which some partial version of transitivity of correlation can be recovered. The first is in the “99%” regime in which the correlations are very close to : if are unit vectors such that is very highly correlated with , and is very highly correlated with , then this does imply that is very highly correlated with . Indeed, from the identity
(and similarly for and ) and the triangle inequality
Thus, for instance, if and , then . This is of course closely related to (though slightly weaker than) the triangle inequality for angles:
Remark 1 (Thanks to Andrew Granville for conversations leading to this observation.) The inequality (1) also holds for sub-unit vectors, i.e. vectors with . This comes by extending in directions orthogonal to all three original vectors and to each other in order to make them unit vectors, enlarging the ambient Hilbert space if necessary. More concretely, one can apply (1) to the unit vectors
in .
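A quick numerical illustration of the “99%” regime (a sketch with illustrative names): the worst case places the two outer vectors on opposite sides of the middle one at equal angles, and even then the outer pair remains highly correlated.

```python
import math

def unit(theta):
    """Unit vector in the plane at angle theta."""
    return (math.cos(theta), math.sin(theta))

def corr(a, b):
    return sum(x * y for x, y in zip(a, b))

t = math.acos(0.99)                     # angle giving correlation 0.99
u, v, w = unit(-t), unit(0.0), unit(t)  # worst case: u, w on opposite sides of v
print(corr(u, v), corr(v, w))  # both 0.99 (up to rounding)
print(corr(u, w))              # cos(2t) = 2*0.99**2 - 1 = 0.9602
```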
But even in the “” regime in which correlations are very weak, there is still a version of transitivity of correlation, known as the van der Corput lemma, which basically asserts that if a unit vector is correlated with many unit vectors , then many of the pairs will then be correlated with each other. Indeed, from the Cauchy-Schwarz inequality
Thus, for instance, if for at least values of , then must be at least , which implies that for at least pairs . Or as another example: if a random variable exhibits at least positive correlation with other random variables , then if , at least two distinct must have positive correlation with each other (although this argument does not tell you which pair are so correlated). Thus one can view this inequality as a sort of “pigeonhole principle” for correlation.
A similar argument (multiplying each by an appropriate sign ) shows the related van der Corput inequality
and this inequality is also true for complex inner product spaces. (Also, the do not need to be unit vectors for this inequality to hold.)
Geometrically, the picture is this: if positively correlates with all of the , then the are all squashed into a somewhat narrow cone centred at . The cone is still wide enough to allow a few pairs to be orthogonal (or even negatively correlated) with each other, but (when is large enough) it is not wide enough to allow all of the to be so widely separated. Remarkably, the bound here does not depend on the dimension of the ambient inner product space; while increasing the number of dimensions should in principle add more “room” to the cone, this effect is counteracted by the fact that in high dimensions, almost all pairs of vectors are close to orthogonal, and the exceptional pairs that are even weakly correlated to each other become exponentially rare. (See this previous blog post for some related discussion; in particular, Lemma 2 from that post is closely related to the van der Corput inequality presented here.)
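The Cauchy-Schwarz step underlying this picture can be checked numerically. The sketch below (names and the tilting construction are illustrative) builds many random unit vectors tilted toward a fixed unit vector and verifies that the squared total correlation with the fixed vector is dominated by the sum of all pairwise correlations:

```python
import math
import random

def van_der_corput_check(n, dim, seed=0):
    """Verify the Cauchy-Schwarz step behind the van der Corput lemma:
    for a unit vector v and unit vectors u_1..u_n,
        (sum_i <v,u_i>)^2  <=  ||sum_i u_i||^2  =  sum_{i,j} <u_i,u_j>,
    so if v correlates with all the u_i, many pairs (u_i,u_j) must
    correlate with each other."""
    rng = random.Random(seed)
    v = [1.0] + [0.0] * (dim - 1)
    us = []
    for _ in range(n):
        g = [rng.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(x * x for x in g))
        u = [0.5 * v[k] + 0.5 * g[k] / norm for k in range(dim)]  # tilt toward v
        norm = math.sqrt(sum(x * x for x in u))
        us.append([x / norm for x in u])
    lhs = sum(u[0] for u in us) ** 2                # (sum_i <v, u_i>)^2, as v = e_1
    rhs = sum(sum(a * b for a, b in zip(ui, uj)) for ui in us for uj in us)
    return lhs, rhs

lhs, rhs = van_der_corput_check(50, 100)
print(lhs <= rhs)  # True, by Cauchy-Schwarz
```

Note the bound involves no reference to the ambient dimension, matching the remark above.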
A particularly common special case of the van der Corput inequality arises when is a unit vector fixed by some unitary operator , and the are shifts of a single unit vector . In this case, the inner products are all equal, and we arrive at the useful van der Corput inequality
(In fact, one can even remove the absolute values from the right-hand side, by using (2) instead of (4).) Thus, to show that has negligible correlation with , it suffices to show that the shifts of have negligible correlation with each other.
Here is a basic application of the van der Corput inequality:
Proposition 1 (Weyl equidistribution estimate) Let be a polynomial with at least one non-constant coefficient irrational. Then one has
where .
Note that this assertion implies the more general assertion
for any non-zero integer (simply by replacing by ), which by the Weyl equidistribution criterion is equivalent to the sequence being asymptotically equidistributed in .
Proof: We induct on the degree of the polynomial , which must be at least one. If is equal to one, the claim is easily established from the geometric series formula, so suppose that and that the claim has already been proven for . If the top coefficient of is rational, say , then by partitioning the natural numbers into residue classes modulo , we see that the claim follows from the induction hypothesis; so we may assume that the top coefficient is irrational.
In order to use the van der Corput inequality as stated above (i.e. in the formalism of inner product spaces) we will need a non-principal ultrafilter (see e.g. this previous blog post for basic theory of ultrafilters); we leave it as an exercise to the reader to figure out how to present the argument below without the use of ultrafilters (or similar devices, such as Banach limits). The ultrafilter defines an inner product on bounded complex sequences by setting
Strictly speaking, this inner product is only positive semi-definite rather than positive definite, but one can quotient out by the null vectors to obtain a positive-definite inner product. To establish the claim, it will suffice to show that
for every non-principal ultrafilter .
Note that the space of bounded sequences (modulo null vectors) admits a shift , defined by
This shift becomes unitary once we quotient out by null vectors, and the constant sequence is clearly a unit vector that is invariant with respect to the shift. So by the van der Corput inequality, we have
for any . But we may rewrite . Then observe that if , is a polynomial of degree whose coefficient is irrational, so by induction hypothesis we have for . For we of course have , and so
for any . Letting , we obtain the claim.
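One can sanity-check Proposition 1 numerically. The sketch below (function name illustrative) evaluates the normalised quadratic exponential sum with the irrational coefficient sqrt(2) and watches it decay as the range grows:

```python
import cmath
import math

def weyl_sum(alpha, N):
    """Magnitude of the normalised exponential sum
    |(1/N) * sum_{n < N} e(alpha * n^2)|, with e(x) = exp(2*pi*i*x);
    for irrational alpha, Weyl's estimate says this tends to 0."""
    s = sum(cmath.exp(2j * math.pi * alpha * n * n) for n in range(N))
    return abs(s) / N

for N in (100, 1_000, 10_000):
    print(N, weyl_sum(math.sqrt(2), N))  # decays roughly like N**(-1/2)
```

By contrast, a rational (or zero) coefficient gives no cancellation at all, which is why the proof has to treat the rational-top-coefficient case separately via residue classes.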