The prime number theorem can be expressed as the assertion
$$\sum_{n \leq x} \Lambda(n) = x + o(x)$$
as $x \rightarrow \infty$, where $\Lambda$ is the von Mangoldt function. It is a basic result in analytic number theory, but requires a bit of effort to prove. One “elementary” proof of this theorem proceeds through the Selberg symmetry formula
$$\sum_{n \leq x} \Lambda_2(n) = 2 x \log x + O(x),$$
where the second von Mangoldt function $\Lambda_2$ is defined by the formula
$$\Lambda_2(n) := \Lambda(n) \log n + \sum_{d|n} \Lambda(d) \Lambda\left(\frac{n}{d}\right).$$
(We are avoiding the use of the symbol $*$ here to denote Dirichlet convolution, as we will need this symbol to denote ordinary convolution shortly.) For the convenience of the reader, we give a proof of the Selberg symmetry formula below the fold. Actually, for the purposes of proving the prime number theorem, the weaker estimate
$$\sum_{n \leq x} \Lambda_2(n) = 2 x \log x + o(x \log x)$$
suffices.
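As a concrete sanity check, the following short Python sketch (all function names are mine, not from the post) computes the von Mangoldt function and the second von Mangoldt function $\Lambda_2 = \Lambda \log + \Lambda * \Lambda$, and compares the partial sums of $\Lambda_2$ against the $2 x \log x$ main term predicted by the Selberg symmetry formula:

```python
import math

def von_mangoldt(n):
    # Lambda(n) = log p if n is a power of a single prime p, and 0 otherwise
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return math.log(n)  # n itself is prime

def second_von_mangoldt(n):
    # Lambda_2(n) = Lambda(n) log n + sum over d | n of Lambda(d) Lambda(n/d)
    total = von_mangoldt(n) * math.log(n)
    for d in range(1, n + 1):
        if n % d == 0:
            total += von_mangoldt(d) * von_mangoldt(n // d)
    return total

# Selberg symmetry: sum of Lambda_2(n) for n <= x equals 2 x log x + O(x)
x = 1000
s = sum(second_von_mangoldt(n) for n in range(1, x + 1))
print(s / (2 * x * math.log(x)))  # ratio slowly approaches 1 as x grows
```

The convergence is slow (the error term is of order $x$); the point of the experiment is only to see the $2 x \log x$ main term emerge.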
In this post I would like to record a somewhat “soft analysis” reformulation of the elementary proof of the prime number theorem in terms of Banach algebras, and specifically in Banach algebra structures on (completions of) the space $C_c({\bf R})$ of compactly supported continuous functions equipped with the convolution operation
$$f * g(t) := \int_{\bf R} f(u) g(t-u)\ du.$$
This soft argument does not easily give any quantitative decay rate in the prime number theorem, but by the same token it avoids many of the quantitative calculations in the traditional proofs of this theorem. Ultimately, the key “soft analysis” fact used is the spectral radius formula
$$\lim_{n \rightarrow \infty} \|f^n\|^{1/n} = \sup_{\lambda \in \hat{B}} |\lambda(f)|$$
for any element $f$ of a unital commutative Banach algebra $B$, where $\hat{B}$ is the space of characters (i.e., continuous unital algebra homomorphisms from $B$ to ${\bf C}$) of $B$. This formula is due to Gelfand and may be found in any text on Banach algebras; for the sake of completeness we prove it below the fold.
The connection between prime numbers and Banach algebras is given by the following consequence of the Selberg symmetry formula.
Theorem 1 (Construction of a Banach algebra norm) For any , let denote the quantity
Then is a seminorm on with the bound
for all . Furthermore, we have the Banach algebra bound
We prove this theorem below the fold. The prime number theorem then follows from Theorem 1 and the following two assertions. The first is an application of the spectral radius formula (6) and some basic Fourier analysis (in particular, the observation that contains a plentiful supply of local units):
Theorem 2 (Non-trivial Banach algebras with many local units have non-trivial spectrum) Let be a seminorm on obeying (7), (8). Suppose that is not identically zero. Then there exists such that
for all . In particular, by (7), one has
whenever is a non-negative function.
The second is a consequence of the Selberg symmetry formula and the fact that is real (as well as Mertens’ theorem, in the case), and is closely related to the non-vanishing of the Riemann zeta function on the line :
Theorem 3 (Breaking the parity barrier) Let . Then there exists such that is non-negative, and
Assuming Theorems 1, 2, 3, we may now quickly establish the prime number theorem as follows. Theorem 2 and Theorem 3 imply that the seminorm constructed in Theorem 1 is trivial, and thus
as for any Schwartz function (the decay rate in may depend on ). Specialising to functions of the form for some smooth compactly supported on , we conclude that
as ; by the smooth Urysohn lemma this implies that
as for any fixed , and the prime number theorem then follows by a telescoping series argument.
The same argument also yields the prime number theorem in arithmetic progressions, or equivalently that
for any fixed Dirichlet character ; the one difference is that the use of Mertens’ theorem is replaced by the basic fact that the quantity is non-vanishing.
In graph theory, the recently developed theory of graph limits has proven to be a useful tool for analysing large dense graphs, being a convenient reformulation of the Szemerédi regularity lemma. Roughly speaking, the theory asserts that given any sequence of finite graphs, one can extract a subsequence which converges (in a specific sense) to a continuous object known as a “graphon” – a symmetric measurable function . What “converges” means in this context is that subgraph densities converge to the associated integrals of the graphon . For instance, the edge density
converges to the integral
the triangle density
converges to the integral
the four-cycle density
converges to the integral
and so forth. One can use graph limits to prove many results in graph theory that were traditionally proven using the regularity lemma, such as the triangle removal lemma, and can also reduce many asymptotic graph theory problems to continuous problems involving multilinear integrals (although the latter problems are not necessarily easy to solve!). See this text of Lovasz for a detailed study of graph limits and their applications.
One can also express graph limits (and more generally hypergraph limits) in the language of nonstandard analysis (or of ultraproducts); see for instance this paper of Elek and Szegedy, Section 6 of this previous blog post, or this paper of Towsner. (In this post we assume some familiarity with nonstandard analysis, as reviewed for instance in the previous blog post.) Here, one starts as before with a sequence of finite graphs, and then takes an ultraproduct (with respect to some arbitrarily chosen non-principal ultrafilter ) to obtain a nonstandard graph , where is the ultraproduct of the , and similarly for the . The set can then be viewed as a symmetric subset of which is measurable with respect to the Loeb -algebra of the product (see this previous blog post for the construction of Loeb measure). A crucial point is that this -algebra is larger than the product of the Loeb -algebra of the individual vertex set . This leads to a decomposition
where the “graphon” is the orthogonal projection of onto , and the “regular error” is orthogonal to all product sets for . The graphon then captures the statistics of the nonstandard graph , in exact analogy with the more traditional graph limits: for instance, the edge density
(or equivalently, the limit of the along the ultrafilter ) is equal to the integral
where denotes Loeb measure on a nonstandard finite set ; the triangle density
(or equivalently, the limit along of the triangle densities of ) is equal to the integral
and so forth. Note that with this construction, the graphon is living on the Cartesian square of an abstract probability space , which is likely to be inseparable; but it is possible to cut down the Loeb -algebra on to a minimal countable -algebra for which remains measurable (up to null sets), and then one can identify with , bringing this construction of a graphon in line with the traditional notion of a graphon. (See Remark 5 of this previous blog post for more discussion of this point.)
Additive combinatorics, which studies things like the additive structure of finite subsets of an abelian group , has many analogies and connections with asymptotic graph theory; in particular, there is the arithmetic regularity lemma of Green which is analogous to the graph regularity lemma of Szemerédi. (There is also a higher order arithmetic regularity lemma analogous to hypergraph regularity lemmas, but this is not the focus of the discussion here.) Given this, it is natural to suspect that there is a theory of “additive limits” for large additive sets of bounded doubling, analogous to the theory of graph limits for large dense graphs. The purpose of this post is to record a candidate for such an additive limit. This limit can be used as a substitute for the arithmetic regularity lemma in certain results in additive combinatorics, at least if one is willing to settle for qualitative results rather than quantitative ones; I give a few examples of this below the fold.
It seems that to allow for the most flexible and powerful manifestation of this theory, it is convenient to use the nonstandard formulation (among other things, it allows for full use of the transfer principle, whereas a more traditional limit formulation would only allow for a transfer of those quantities continuous with respect to the notion of convergence). Here, the analogue of a nonstandard graph is an ultra approximate group in a nonstandard group , defined as the ultraproduct of finite -approximate groups for some standard . (A -approximate group is a symmetric set containing the origin such that can be covered by or fewer translates of .) We then let be the external subgroup of generated by ; equivalently, is the union of over all standard . This space has a Loeb measure , defined by setting
whenever is an internal subset of for any standard , and extended to a countably additive measure; the arguments in Section 6 of this previous blog post can be easily modified to give a construction of this measure.
The Loeb measure is a translation invariant measure on , normalised so that has Loeb measure one. As such, one should think of as being analogous to a locally compact abelian group equipped with a Haar measure. It should be noted though that is not actually a locally compact group with Haar measure, for two reasons:
- There is not an obvious topology on that makes it simultaneously locally compact, Hausdorff, and -compact. (One can get one or two out of three without difficulty, though.)
- The addition operation is not measurable from the product Loeb algebra to . Instead, it is measurable from the coarser Loeb algebra to (compare with the analogous situation for nonstandard graphs).
Nevertheless, the analogy is a useful guide for the arguments that follow.
Let denote the space of bounded Loeb measurable functions (modulo almost everywhere equivalence) that are supported on for some standard ; this is a complex algebra with respect to pointwise multiplication. There is also a convolution operation , defined by setting
whenever , are bounded nonstandard functions (extended by zero to all of ), and then extending to arbitrary elements of by density. Equivalently, is the pushforward of the -measurable function under the map .
The basic structural theorem is then as follows.
Theorem 1 (Kronecker factor) Let be an ultra approximate group. Then there exists a (standard) locally compact abelian group of the form
for some standard and some compact abelian group , equipped with a Haar measure and a measurable homomorphism (using the Loeb -algebra on and the Borel -algebra on ), with the following properties:
- (i) has dense image, and is the pushforward of Loeb measure by .
- (ii) There exist sets with open and compact, such that
- (iii) Whenever with compact and open, there exists a nonstandard finite set such that
- (iv) If , then we have the convolution formula
where are the pushforwards of to , the convolution on the right-hand side is convolution using , and is the pullback map from to . In particular, if , then for all .
One can view the locally compact abelian group as a “model” or “Kronecker factor” for the ultra approximate group (in close analogy with the Kronecker factor from ergodic theory). In the case that is a genuine nonstandard finite group rather than an ultra approximate group, the non-compact components of the Kronecker group are trivial, and this theorem was implicitly established by Szegedy. The compact group is quite large, and in particular is likely to be inseparable; but as with the case of graphons, when one is only studying at most countably many functions , one can cut down the size of this group to be separable (or equivalently, second countable or metrisable) if desired, so one often works with a “reduced Kronecker factor” which is a quotient of the full Kronecker factor .
Given any sequence of uniformly bounded functions for some fixed , we can view the function defined by
as an “additive limit” of the , in much the same way that graphons are limits of the indicator functions . The additive limits capture some of the statistics of the , for instance the normalised means
converge (along the ultrafilter ) to the mean
and for three sequences of functions, the normalised correlation
converges along to the correlation
the normalised Gowers norm
converges along to the Gowers norm
and so forth. We caution however that some correlations that involve evaluating more than one function at the same point will not necessarily be preserved in the additive limit; for instance the normalised norm
does not necessarily converge to the norm
but can converge instead to a larger quantity, due to the presence of the orthogonal projection in the definition (4) of .
An important special case of an additive limit occurs when the functions involved are indicator functions of some subsets of . The additive limit does not necessarily remain an indicator function, but instead takes values in (much as a graphon takes values in even though the original indicators take values in ). The convolution is then the ultralimit of the normalised convolutions ; in particular, the measure of the support of provides a lower bound on the limiting normalised cardinality of a sumset. In many situations this lower bound is an equality, but this is not necessarily the case, because the sumset could contain a large number of elements which have very few () representations as the sum of two elements of , and in the limit these portions of the sumset fall outside of the support of . (One can think of the support of as describing the “essential” sumset of , discarding those elements that have only very few representations.) Similarly for higher convolutions of . Thus one can use additive limits to partially control the growth of iterated sumsets of subsets of approximate groups , in the regime where stays bounded and goes to infinity.
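The distinction between the sumset and the “essential” sumset can be illustrated numerically. In the toy example below (my own construction, not from the text), $A$ is a long interval together with one far-away outlier; the portions of $A + A$ involving the outlier have only one or two representations each, so they lie in the sumset but drop out of the essential sumset once elements with fewer than $\eta |A|$ representations are discarded:

```python
from collections import Counter

def rep_counts(A):
    # r(x) = number of ordered pairs (a, b) in A x A with a + b = x
    r = Counter()
    for a in A:
        for b in A:
            r[a + b] += 1
    return r

# A long interval plus one distant outlier
N, M = 100, 10 ** 6
A = list(range(N + 1)) + [M]
r = rep_counts(A)
eta = 0.1
essential = {x for x, c in r.items() if c >= eta * len(A)}
print(len(r), len(essential))  # the sumset is much larger than the essential sumset
```

Here the sumset has $303$ elements but the essential sumset only $181$: the translate $\{M, \dots, M+N\}$ and the point $2M$ are invisible at the level of the normalised convolution.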
Theorem 1 can be proven by Fourier-analytic means (combined with Freiman’s theorem from additive combinatorics), and we will do so below the fold. For now, we give some illustrative examples of additive limits.
Example 1 (Bohr sets) We take to be the intervals , where is a sequence going to infinity; these are -approximate groups for all . Let be an irrational real number, let be an interval in , and for each natural number let be the Bohr set
In this case, the (reduced) Kronecker factor can be taken to be the infinite cylinder with the usual Lebesgue measure . The additive limits of and end up being and , where is the finite cylinder
and is the rectangle
Geometrically, one should think of and as being wrapped around the cylinder via the homomorphism , and then one sees that is converging in some normalised weak sense to , and similarly for and . In particular, the additive limit predicts the growth rate of the iterated sumsets to be quadratic in until becomes comparable to , at which point the growth transitions to linear growth, in the regime where is bounded and is large.
If were rational instead of irrational, then one would need to replace by the finite subgroup here.
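Here is a quick numerical illustration of Example 1 (a rough sketch; the parameter choices are mine). With $\alpha = \sqrt{2}$ and the interval $[0, 0.05)$, the Bohr set $B$ has about $0.05 N$ elements, and in the quadratic-growth regime the iterated sumsets $B + B$ and $B + B + B$ have sizes roughly $4|B|$ and $9|B|$, matching the prediction of the additive limit:

```python
import math

alpha = math.sqrt(2)
N = 2000

def frac(x):
    return x - math.floor(x)

# Bohr set: n in [1, N] with alpha * n lying in [0, 0.05) mod 1
B = {n for n in range(1, N + 1) if frac(alpha * n) < 0.05}
B2 = {a + b for a in B for b in B}
B3 = {a + b for a in B2 for b in B}
print(len(B), len(B2) / len(B), len(B3) / len(B))  # ratios roughly 4 and 9
```

The quadratic growth $|kB| \approx k^2 |B|$ persists only while the doubled interval has length less than one; after that the frequency condition becomes vacuous and the growth becomes linear, as described above.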
Example 2 (Structured subsets of progressions) We take to be the rank two progression
where is a sequence going to infinity; these are -approximate groups for all . Let be the subset
Then the (reduced) Kronecker factor can be taken to be with Lebesgue measure , and the additive limits of the and are then and , where is the square
and is the circle
Geometrically, the picture is similar to the Bohr set one, except now one uses a Freiman homomorphism for to embed the original sets into the plane . In particular, one now expects the growth rate of the iterated sumsets and to be quadratic in , in the regime where is bounded and is large.
Example 3 (Dissociated sets) Let be a fixed natural number, and take
where are randomly chosen elements of a large cyclic group , where is a sequence of primes going to infinity. These are -approximate groups. The (reduced) Kronecker factor can (almost surely) then be taken to be with counting measure, and the additive limit of is , where and is the standard basis of . In particular, the growth rates of should grow approximately like for bounded and large.
Example 4 (Random subsets of groups) Let be a sequence of finite additive groups whose order is going to infinity. Let be a random subset of of some fixed density . Then (almost surely) the Kronecker factor here can be reduced all the way to the trivial group , and the additive limit of the is the constant function . The convolutions then converge in the ultralimit (modulo almost everywhere equivalence) to the pullback of ; this reflects the fact that of the elements of can be represented as the sum of two elements of in ways. In particular, occupies a proportion of .
Example 5 (Trigonometric series) Take for a sequence of primes going to infinity, and for each let be an infinite sequence of frequencies chosen uniformly and independently from . Let denote the random trigonometric series
Then (almost surely) we can take the reduced Kronecker factor to be the infinite torus (with the Haar probability measure ), and the additive limit of the then becomes the function defined by the formula
In fact, the pullback is the ultralimit of the . As such, for any standard exponent , the normalised norm
can be seen to converge to the limit
The reader is invited to consider combinations of the above examples, e.g. random subsets of Bohr sets, to get a sense of the general case of Theorem 1.
It is likely that this theorem can be extended to the noncommutative setting, using the noncommutative Freiman theorem of Emmanuel Breuillard, Ben Green, and myself, but I have not attempted to do so here (though see this recent preprint of Anush Tserunyan for some related explorations); in a separate direction, there should be extensions that can control higher Gowers norms, in the spirit of the work of Szegedy.
Note: the arguments below will presume some familiarity with additive combinatorics and with nonstandard analysis, and will be a little sketchy in places.
One of the first basic theorems in group theory is Cayley’s theorem, which links abstract finite groups with concrete finite groups (otherwise known as permutation groups).
Theorem 1 (Cayley’s theorem) Let $G$ be a group of some finite order $n$. Then $G$ is isomorphic to a subgroup $\tilde{G}$ of the symmetric group $S_n$ on $n$ elements $\{1, \dots, n\}$. Furthermore, this subgroup is simply transitive: given two elements $x, y$ of $\{1, \dots, n\}$, there is precisely one element $\sigma$ of $\tilde{G}$ such that $\sigma(x) = y$.
One can therefore think of as a sort of “universal” group that contains (up to isomorphism) all the possible groups of order .
Proof: The group $G$ acts on itself by multiplication on the left, thus each element $g \in G$ may be identified with a permutation $\rho_g$ on $G$ given by the map $h \mapsto gh$. This can be easily verified to identify $G$ with a simply transitive permutation group on $G$. The claim then follows by arbitrarily identifying $G$ with $\{1, \dots, n\}$.
More explicitly, the permutation group arises by arbitrarily enumerating as and then associating to each group element the permutation defined by the formula
The simply transitive group given by Cayley’s theorem is not unique, due to the arbitrary choice of identification of with , but is unique up to conjugation by an element of . On the other hand, it is easy to see that every simply transitive subgroup of is of order , and that two such groups are isomorphic if and only if they are conjugate by an element of . Thus Cayley’s theorem in fact identifies the moduli space of groups of order (up to isomorphism) with the simply transitive subgroups of (up to conjugacy by elements of ).
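The proof of Cayley's theorem is easy to animate in code. The following Python sketch (names are mine) builds the left-regular representation of the cyclic group $\mathbb{Z}/4\mathbb{Z}$, written additively, and verifies simple transitivity directly:

```python
n = 4
elements = list(range(n))  # the cyclic group Z/4Z, written additively

def perm_of(g):
    # the permutation h -> g + h given by left "multiplication" (here: addition)
    return tuple((g + h) % n for h in elements)

G_tilde = {perm_of(g) for g in elements}

# simple transitivity: for each pair (x, y), exactly one permutation in
# G_tilde sends x to y
for x in elements:
    for y in elements:
        assert len([p for p in G_tilde if p[x] == y]) == 1
print("simply transitive subgroup of S_4 of order", len(G_tilde))
```

Replacing the addition by any other group law on four elements produces a different (generally non-conjugate) simply transitive subgroup, which is the moduli-space picture described above.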
One can generalise Cayley’s theorem to groups of infinite order without much difficulty. But in this post, I would like to note an (easy) generalisation of Cayley’s theorem in a different direction, in which the group is no longer assumed to be of order , but rather to have an index subgroup that is isomorphic to a fixed group . The generalisation is:
Theorem 2 (Cayley’s theorem for -sets) Let be a group, and let be a group that contains an index subgroup isomorphic to . Then is isomorphic to a subgroup of the semidirect product , defined explicitly as the set of tuples with product
and inverse
(This group is a wreath product of with , and is sometimes denoted , or more precisely .) Furthermore, is simply transitive in the following sense: given any two elements of and , there is precisely one in such that and .
Of course, Theorem 1 is the special case of Theorem 2 when is trivial. This theorem allows one to view as a “universal” group for modeling all groups containing a copy of as an index subgroup, in exactly the same way that is a universal group for modeling groups of order . This observation is not at all deep, but I had not seen it before, so I thought I would record it here. (EDIT: as pointed out in comments, this is a slight variant of the universal embedding theorem of Krasner and Kaloujnine, which covers the case when is normal, in which case one can embed into the wreath product , which is a subgroup of .)
Proof: The basic idea here is to replace the category of sets in Theorem 1 by the category of -sets, by which we mean sets with a right-action of the group . A morphism between two -sets is a function which respects the right action of , thus for all and .
Observe that if contains a copy of as a subgroup, then one can view as an -set, using the right-action of (which we identify with the indicated subgroup of ). The left action of on itself commutes with the right-action of , and so we can represent by -set automorphisms on the -set .
As has index in , we see that is (non-canonically) isomorphic (as an -set) to the -set with the obvious right action of : . It is easy to see that the group of -set automorphisms of can be identified with , with the latter group acting on the former -set by the rule
(it is routine to verify that this is indeed an action of by -set automorphisms). It is then a routine matter to verify the claims (the simple transitivity of follows from the simple transitivity of the action of on itself).
More explicitly, the group arises by arbitrarily enumerating the left-cosets of in as and then associating to each group element the element , where the permutation and the elements are defined by the formula
By noting that is an index normal subgroup of , we recover the classical result of Poincaré that any group that contains as an index subgroup contains a normal subgroup of index dividing that is contained in . (Quotienting out the right-action, we recover also the classical proof of this result, as the action of on itself then collapses to the action of on the quotient space , the stabiliser of which is .)
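Theorem 2 can likewise be tested on a small example. The sketch below (all names are ad hoc, and the left/right conventions are one possible choice) embeds $G = \mathbb{Z}/6\mathbb{Z}$ into $H^2 \rtimes S_2$, where $H = \{0, 2, 4\}$ is the index-two subgroup, using the coset recipe just described, and checks that the map is an injective homomorphism for the semidirect product law:

```python
# Embed G = Z/6Z into H^2 x| S_2, where H = {0, 2, 4} is the index-two
# subgroup of G (isomorphic to Z/3Z).
G = list(range(6))
H = [0, 2, 4]
reps = [0, 1]          # left-coset representatives: H and 1 + H
k = len(reps)

def op(a, b):
    return (a + b) % 6

def embed(g):
    # g + reps[i] lies in the coset reps[sigma(i)] + H, with H-component hs[i]
    sigma, hs = [None] * k, [None] * k
    for i in range(k):
        x = op(g, reps[i])
        for j in range(k):
            h = (x - reps[j]) % 6
            if h in H:
                sigma[i], hs[i] = j, h
                break
    return (tuple(sigma), tuple(hs))

def wreath_mul(a, b):
    # product law (sigma, h)(sigma', h') = (sigma o sigma', (h_{sigma'(i)} + h'_i)_i)
    (sa, ha), (sb, hb) = a, b
    return (tuple(sa[sb[i]] for i in range(k)),
            tuple(op(ha[sb[i]], hb[i]) for i in range(k)))

images = {embed(g) for g in G}
assert len(images) == len(G)  # the embedding is injective
for g1 in G:
    for g2 in G:
        assert wreath_mul(embed(g1), embed(g2)) == embed(op(g1, g2))
print("Z/6Z embeds into H wr S_2")
```

The same code works verbatim for any finite group and finite-index subgroup once `op`, `H`, and the coset representatives are supplied.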
Exercise 1 Show that a simply transitive subgroup of contains a copy of as an index subgroup; in particular, there is a canonical embedding of into , and can be viewed as an -set.
Exercise 2 Show that any two simply transitive subgroups of are isomorphic simultaneously as groups and as -sets (that is, there is a bijection that is simultaneously a group isomorphism and an -set isomorphism) if and only if they are conjugate by an element of .
[UPDATE: Exercises corrected; thanks to Keith Conrad for some additional corrections and comments.]
Analytic number theory is often concerned with the asymptotic behaviour of various arithmetic functions: functions $f: {\bf N} \rightarrow {\bf R}$ or $f: {\bf N} \rightarrow {\bf C}$ from the natural numbers ${\bf N}$ to the real numbers ${\bf R}$ or complex numbers ${\bf C}$. In this post, we will focus on the purely algebraic properties of these functions, and for reasons that will become clear later, it will be convenient to generalise the notion of an arithmetic function to functions taking values in some abstract commutative ring $R$. In this setting, we can add or multiply two arithmetic functions $f, g$ to obtain further arithmetic functions $f+g, fg$, and we can also form the Dirichlet convolution $f * g$ by the usual formula
$$f * g(n) := \sum_{d|n} f(d) g\left(\frac{n}{d}\right).$$
Regardless of which commutative ring $R$ is used here, we observe that Dirichlet convolution is commutative, associative, and bilinear over $R$.
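For readers who like to experiment, Dirichlet convolution is a few lines of Python (the sieve-like double loop below runs in $O(N \log N)$ time; the function names are mine). Convolving the constant function $1$ with itself gives the divisor function, and convolving it with the identity function gives the sum-of-divisors function:

```python
def dirichlet(f, g, N):
    # (f * g)(n) = sum over d | n of f(d) g(n / d), computed for all n <= N;
    # functions are represented as lists indexed by n, with index 0 unused
    h = [0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):
            h[m] += f[d] * g[m // d]
    return h

N = 50
one = [0] + [1] * N             # the constant function 1
ident = list(range(N + 1))      # the function n -> n

d = dirichlet(one, one, N)      # 1 * 1 = the divisor function
sigma = dirichlet(one, ident, N)  # 1 * n = the sum-of-divisors function
print(d[12], sigma[12])  # prints 6 28
```

Commutativity and bilinearity are immediate from the symmetric role of $d$ and $n/d$ in the sum; associativity takes a short computation (or a numerical check).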
An important class of arithmetic functions in analytic number theory are the multiplicative functions, that is to say the arithmetic functions $f$ such that $f(1) = 1$ and
$$f(nm) = f(n) f(m)$$
for all coprime $n, m$. A subclass of these functions are the completely multiplicative functions, in which the restriction that $n, m$ be coprime is dropped. Basic examples of completely multiplicative functions (in the classical setting $R = {\bf C}$) include
- the Kronecker delta $\delta$, defined by setting $\delta(n) = 1$ for $n = 1$ and $\delta(n) = 0$ otherwise;
- the constant function and the linear function (which by abuse of notation we denote by );
- more generally monomials for any fixed complex number (in particular, the “Archimedean characters” for any fixed ), which by abuse of notation we denote by ;
- Dirichlet characters ;
- the Liouville function ;
- the indicator function of the -smooth numbers (numbers whose prime factors are all at most ), for some given ; and
- the indicator function of the -rough numbers (numbers whose prime factors are all greater than ), for some given .
Examples of multiplicative functions that are not completely multiplicative include
- the Möbius function ;
- the divisor function (also referred to as );
- more generally, the higher order divisor functions for ;
- the Euler totient function ;
- the number of roots of a given polynomial defined over ;
- more generally, the point counting function of a given algebraic variety defined over (closely tied to the Hasse-Weil zeta function of );
- the function that counts the number of representations of as the sum of two squares;
- more generally, the function that maps a natural number to the number of ideals in a given number field of absolute norm (closely tied to the Dedekind zeta function of ).
These multiplicative functions interact well with the multiplication and convolution operations: if $f, g$ are multiplicative, then so are $fg$ and $f * g$, and if $h$ is completely multiplicative, then we also have
$$h(f * g) = (hf) * (hg). \qquad (1)$$
Finally, the product of completely multiplicative functions is again completely multiplicative. On the other hand, the sum of two multiplicative functions will never be multiplicative (just look at what happens at $n = 1$), and the convolution of two completely multiplicative functions will usually just be multiplicative rather than completely multiplicative.
The specific multiplicative functions listed above are also related to each other by various important identities, for instance $\mu * 1 = \delta$ and the Möbius inversion formula
$$f * 1 * \mu = f,$$
where $f$ is an arbitrary arithmetic function.
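As a quick check of the identity $\mu * 1 = \delta$, which is the engine behind Möbius inversion, here is a short sketch (the function names are mine):

```python
def mobius(n):
    # mu(n) = (-1)^k if n is a product of k distinct primes, and 0 otherwise
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # repeated prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

# verify mu * 1 = delta, i.e. the divisor sum of mu vanishes except at n = 1
for n in range(1, 201):
    s = sum(mobius(d) for d in range(1, n + 1) if n % d == 0)
    assert s == (1 if n == 1 else 0)
print("mu * 1 = delta holds for n up to 200")
```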
On the other hand, analytic number theory also is very interested in certain arithmetic functions that are not exactly multiplicative (and certainly not completely multiplicative). One particularly important such function is the von Mangoldt function $\Lambda$. This function is certainly not multiplicative, but is clearly closely related to such functions via such identities as $L = \Lambda * 1$ and $\Lambda = \mu * L$, where $L(n) := \log n$ is the natural logarithm function. The purpose of this post is to point out that functions such as the von Mangoldt function lie in a class closely related to multiplicative functions, which I will call the derived multiplicative functions. More precisely:
Definition 1 A derived multiplicative function is an arithmetic function that can be expressed as the formal derivative
at the origin of a family of multiplicative functions parameterised by a formal parameter . Equivalently, is a derived multiplicative function if it is the coefficient of a multiplicative function in the extension of by a nilpotent infinitesimal ; in other words, there exists an arithmetic function $g$ such that the arithmetic function $g + \epsilon f$ is multiplicative, or equivalently that $g$ is multiplicative and one has the Leibniz rule
$$f(nm) = f(n) g(m) + g(n) f(m) \qquad (2)$$
for all coprime $n, m$.
More generally, for any , a -derived multiplicative function is an arithmetic function that can be expressed as the formal derivative
at the origin of a family of multiplicative functions parameterised by formal parameters . Equivalently, is the coefficient of a multiplicative function in the extension of by nilpotent infinitesimals .
We define the notion of a -derived completely multiplicative function similarly by replacing “multiplicative” with “completely multiplicative” in the above discussion.
There are Leibniz rules similar to (2) but they are harder to state; for instance, a doubly derived multiplicative function comes with singly derived multiplicative functions and a multiplicative function such that
for all coprime .
One can then check that the von Mangoldt function $\Lambda$ is a derived multiplicative function, because $\delta + \epsilon \Lambda$ is multiplicative in the ring ${\bf C}[\epsilon]$ with one infinitesimal $\epsilon$ (thus $\epsilon^2 = 0$). Similarly, the logarithm function $L$ is derived completely multiplicative, because $n^\epsilon = 1 + \epsilon L(n)$ is completely multiplicative in ${\bf C}[\epsilon]$. More generally, any additive function $f$ is derived multiplicative, because it is the top order coefficient of the multiplicative function $1 + \epsilon f$.
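The “nilpotent infinitesimal” bookkeeping can be carried out quite literally with dual numbers $a + b\epsilon$, $\epsilon^2 = 0$. The sketch below (a minimal implementation of my own, not from the post) verifies that $\delta + \epsilon \Lambda$ is multiplicative at coprime arguments, while $n^\epsilon = 1 + \epsilon \log n$ is completely multiplicative:

```python
import math
from math import gcd

class Dual:
    # dual numbers a + b*eps with eps^2 = 0
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __mul__(self, other):
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    def close(self, other, tol=1e-9):
        return abs(self.a - other.a) < tol and abs(self.b - other.b) < tol

def von_mangoldt(n):
    if n < 2:
        return 0.0
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
        p += 1
    return math.log(n)

f = lambda x: Dual(1.0 if x == 1 else 0.0, von_mangoldt(x))  # delta + eps*Lambda
g = lambda x: Dual(1.0, math.log(x))                         # n^eps = 1 + eps*log n

for n in range(1, 60):
    for m in range(1, 60):
        if gcd(n, m) == 1:
            assert f(n * m).close(f(n) * f(m))   # multiplicative at coprime pairs
        assert g(n * m).close(g(n) * g(m))       # completely multiplicative
print("dual-number multiplicativity checks passed")
```

Note that the check for $f$ genuinely needs the coprimality restriction: at $n = m = p$ the left side is $\epsilon \log p$ while the right side vanishes, which is exactly the failure of $\Lambda$ to be completely derived multiplicative in this sense.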
Remark 1 One can also phrase these concepts in terms of the formal Dirichlet series associated to an arithmetic function . A function is multiplicative if admits a (formal) Euler product; is derived multiplicative if is the (formal) first derivative of an Euler product with respect to some parameter (not necessarily , although this is certainly an option); and so forth.
Using the definition of a $k$-derived multiplicative function as the top order coefficient of a multiplicative function of a ring with infinitesimals, it is easy to see that the product or convolution of a $j$-derived multiplicative function and a $k$-derived multiplicative function is necessarily a $(j+k)$-derived multiplicative function (again taking values in ${\bf C}$). Thus, for instance, the higher-order von Mangoldt functions $\Lambda_k := \mu * L^k$ are $k$-derived multiplicative functions, because $L^k$ is a $k$-derived completely multiplicative function. More explicitly, $L^k$ is the top order coefficient of the completely multiplicative function $n^{\epsilon_1 + \dots + \epsilon_k}$, and $\Lambda_k$ is the top order coefficient of the multiplicative function $\mu * n^{\epsilon_1 + \dots + \epsilon_k}$, with both functions taking values in the ring ${\bf C}[\epsilon_1, \dots, \epsilon_k]$ of complex numbers with $k$ infinitesimals attached.
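The identity $\mu * L^2 = \Lambda L + \Lambda * \Lambda$, one standard way of writing the second von Mangoldt function $\Lambda_2$, can be confirmed numerically; the sketch below assumes that form of the identity, with all helper names my own:

```python
import math

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def von_mangoldt(n):
    if n < 2:
        return 0.0
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
        p += 1
    return math.log(n)

def lambda2_via_mobius(n):      # (mu * L^2)(n)
    return sum(mobius(d) * math.log(n // d) ** 2 for d in divisors(n))

def lambda2_via_selberg(n):     # Lambda(n) log n + (Lambda * Lambda)(n)
    return von_mangoldt(n) * math.log(n) + sum(
        von_mangoldt(d) * von_mangoldt(n // d) for d in divisors(n))

for n in range(1, 300):
    assert abs(lambda2_via_mobius(n) - lambda2_via_selberg(n)) < 1e-8
print("mu * L^2 = Lambda L + Lambda * Lambda verified for n up to 300")
```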
It then turns out that most (if not all) of the basic identities used by analytic number theorists concerning derived multiplicative functions can in fact be viewed as coefficients of identities involving purely multiplicative functions, with the latter identities being provable primarily from multiplicative identities such as (1). This phenomenon is analogous to the one in linear algebra discussed in this previous blog post, in which many of the trace identities used there are derivatives of determinant identities. For instance, the Leibniz rule
$$L(f * g) = (Lf) * g + f * (Lg)$$
for any arithmetic functions $f, g$ can be viewed as the top order term in
$$n^\epsilon (f * g) = (n^\epsilon f) * (n^\epsilon g)$$
in the ring with one infinitesimal $\epsilon$, and then we see that the Leibniz rule is a special case (or a derivative) of (1), since $n^\epsilon$ is completely multiplicative. Similarly, the formulae
are top order terms of
and the variant formula is the top order term of
which can then be deduced from the previous identities by noting that the completely multiplicative function inverts multiplicatively, and also noting that annihilates . The Selberg symmetry formula
$$\Lambda L + \Lambda * \Lambda = \mu * L^2 \qquad (3)$$
which plays a key role in the Erdös-Selberg elementary proof of the prime number theorem (as discussed in this previous blog post), is the top order term of the identity
involving the multiplicative functions , , , with two infinitesimals , and this identity can be proven while staying purely within the realm of multiplicative functions, by using the identities
and (1). Similarly for higher identities such as
which arise from expanding out using (1) and the above identities; we leave this as an exercise to the interested reader.
An analogous phenomenon arises for identities that are not purely multiplicative in nature due to the presence of truncations, such as the Vaughan identity
$$\Lambda_{>V} = \mu_{\leq U} * L - \mu_{\leq U} * \Lambda_{\leq V} * 1 + \mu_{>U} * \Lambda_{>V} * 1 \qquad (4)$$
for any $U, V \geq 1$, where $f_{>V}$ denotes the restriction of an arithmetic function $f$ to the natural numbers greater than $V$, and similarly for $f_{\leq V}$, $\mu_{\leq U}$, $\mu_{>U}$. In this particular case, (4) is the top order coefficient of the identity
which can be easily derived from the identities and . Similarly for the Heath-Brown identity
valid for natural numbers up to , where and are arbitrary parameters and denotes the -fold convolution of , and discussed in this previous blog post; this is the top order coefficient of
and arises by first observing that
vanishes up to , and then expanding the left-hand side using the binomial formula and the identity .
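The truncated identities can be tested the same way. The sketch below assumes the standard form of the Vaughan identity, $\Lambda_{>V} = \mu_{\leq U} * L - \mu_{\leq U} * \Lambda_{\leq V} * 1 + \mu_{>U} * \Lambda_{>V} * 1$, and verifies it exactly (it holds for every $n$, not just in a range) for small truncation parameters:

```python
import math

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def von_mangoldt(n):
    if n < 2:
        return 0.0
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
        p += 1
    return math.log(n)

def conv(f, g):
    # Dirichlet convolution of two functions given pointwise
    return lambda n: sum(f(d) * g(n // d) for d in divisors(n))

U, V = 5, 5
mu_le = lambda n: mobius(n) if n <= U else 0
mu_gt = lambda n: mobius(n) if n > U else 0
lam_le = lambda n: von_mangoldt(n) if n <= V else 0.0
lam_gt = lambda n: von_mangoldt(n) if n > V else 0.0
L = lambda n: math.log(n)
one = lambda n: 1.0

rhs = lambda n: (conv(mu_le, L)(n)
                 - conv(conv(mu_le, lam_le), one)(n)
                 + conv(conv(mu_gt, lam_gt), one)(n))

for n in range(1, 301):
    assert abs(lam_gt(n) - rhs(n)) < 1e-8
print("Vaughan identity verified exactly for n up to 300")
```

The derivation uses only $\mu * 1 = \delta$ and $\Lambda * 1 = L$, which is why the identity is exact rather than asymptotic; the truncation parameters only matter when the identity is used to estimate sums.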
One consequence of this phenomenon is that identities involving derived multiplicative functions tend to have a dimensional consistency property: all terms in the identity have the same order of derivation in them. For instance, all the terms in the Selberg symmetry formula (3) are doubly derived functions, all the terms in the Vaughan identity (4) or the Heath-Brown identity (5) are singly derived functions, and so forth. One can then use dimensional analysis to help ensure that one has written down a key identity involving such functions correctly, much as is done in physics.
In addition to the dimensional analysis arising from the order of derivation, there is another dimensional analysis coming from the value of multiplicative functions at primes (which is more or less equivalent to the order of pole of the Dirichlet series at ). Let us say that a multiplicative function has a pole of order if one has on the average for primes , where we will be a bit vague as to what “on the average” means as it usually does not matter in applications. Thus for instance, or has a pole of order (a simple pole), or has a pole of order (i.e. neither a zero nor a pole), Dirichlet characters also have a pole of order (although this is slightly nontrivial, requiring Dirichlet’s theorem), has a pole of order (a simple zero), has a pole of order , and so forth. Note that the convolution of a multiplicative function with a pole of order with a multiplicative function with a pole of order will be a multiplicative function with a pole of order . If there is no oscillation in the primes (e.g. if for all primes , rather than on the average), it is also true that the product of a multiplicative function with a pole of order with a multiplicative function with a pole of order will be a multiplicative function with a pole of order . The situation is significantly different though in the presence of oscillation; for instance, if is a quadratic character then has a pole of order even though has a pole of order .
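As a concrete instance of the additivity of pole orders under convolution: the constant function 1 has a pole of order one, so the divisor function d = 1*1 should have a pole of order two, and indeed Dirichlet's asymptotic gives the divisor sum as x log x + (2γ−1)x + O(√x). A quick Python check (the name `divisor_sum` is mine):

```python
import math

def divisor_sum(N):
    """Sum of d(n) over n <= N, using the identity
    sum_{n<=N} d(n) = sum_{a<=N} floor(N/a)."""
    return sum(N // a for a in range(1, N + 1))

N = 100000
S = divisor_sum(N)
gamma = 0.5772156649015329  # Euler-Mascheroni constant
main = N * math.log(N) + (2 * gamma - 1) * N
print(S, main, S - main)  # discrepancy is O(sqrt(N)), empirically tiny
```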
A -derived multiplicative function will then be said to have an underived pole of order if it is the top order coefficient of a multiplicative function with a pole of order ; in terms of Dirichlet series, this roughly means that the Dirichlet series has a pole of order at . For instance, the singly derived multiplicative function has an underived pole of order , because it is the top order coefficient of , which has a pole of order ; similarly has an underived pole of order , being the top order coefficient of . More generally, and have underived poles of order and respectively for any .
By taking top order coefficients, we then see that the convolution of a -derived multiplicative function with underived pole of order and a -derived multiplicative function with underived pole of order is a -derived multiplicative function with underived pole of order . If there is no oscillation in the primes, the product of these functions will similarly have an underived pole of order , for instance has an underived pole of order . We then have the dimensional consistency property that in any of the standard identities involving derived multiplicative functions, all terms not only have the same derived order, but also the same underived pole order. For instance, in (3), (4), (5) all terms have underived pole order (with any Mobius function terms being counterbalanced by a matching term of or ). This gives a second way to use dimensional analysis as a consistency check. For instance, any identity that involves a linear combination of and is suspect because the underived pole orders do not match (being and respectively), even though the derived orders match (both are ).
One caveat, though: this latter dimensional consistency breaks down for identities that involve infinitely many terms, such as Linnik’s identity
In this case, one can still rewrite things in terms of multiplicative functions as
so the former dimensional consistency is still maintained.
I thank Andrew Granville, Kannan Soundararajan, and Emmanuel Kowalski for helpful conversations on these topics.
In the traditional foundations of probability theory, one selects a probability space , and makes a distinction between deterministic mathematical objects, which do not depend on the sampled state , and stochastic (or random) mathematical objects, which do depend (but in a measurable fashion) on the sampled state . For instance, a deterministic real number would just be an element , whereas a stochastic real number (or real random variable) would be a measurable function , where in this post will always be endowed with the Borel -algebra. (For readers familiar with nonstandard analysis, the adjectives “deterministic” and “stochastic” will be used here in a manner analogous to the uses of the adjectives “standard” and “nonstandard” in nonstandard analysis. The analogy is particularly close when comparing with the “cheap nonstandard analysis” discussed in this previous blog post. We will also use “relative to ” as a synonym for “stochastic”.)
Actually, for our purposes we will adopt the philosophy of identifying stochastic objects that agree almost surely, so if one was to be completely precise, we should define a stochastic real number to be an equivalence class of measurable functions , up to almost sure equivalence. However, we shall often abuse notation and write simply as .
More generally, given any measurable space , we can talk either about deterministic elements , or about stochastic elements of , that is to say equivalence classes of measurable maps up to almost sure equivalence. We will use to denote the set of all stochastic elements of . (For readers familiar with sheaves, it may be helpful for the purposes of this post to think of as the space of measurable global sections of the trivial -bundle over .) Of course every deterministic element of can also be viewed as a stochastic element given by (the equivalence class of) the constant function , thus giving an embedding of into . We do not attempt here to give an interpretation of for sets that are not equipped with a -algebra .
Remark 1 In my previous post on the foundations of probability theory, I emphasised the freedom to extend the sample space to a larger sample space whenever one wished to inject additional sources of randomness. This is of course an important freedom to possess (and in the current formalism, is the analogue of the important operation of base change in algebraic geometry), but in this post we will focus on a single fixed sample space , and not consider extensions of this space, so that one only has to consider two types of mathematical objects (deterministic and stochastic), as opposed to having many more such types, one for each potential choice of sample space (with the deterministic objects corresponding to the case when the sample space collapses to a point).
Any (measurable) -ary operation on deterministic mathematical objects then extends to their stochastic counterparts by applying the operation pointwise. For instance, the addition operation on deterministic real numbers extends to an addition operation , by defining the class for to be the equivalence class of the function ; this operation is easily seen to be well-defined. More generally, any measurable -ary deterministic operation between measurable spaces extends to a stochastic operation in the obvious manner.
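In the toy case of a finite sample space (where every function is measurable and no quotienting by null events is needed), the pointwise extension of operations can be written out explicitly. A small illustrative Python sketch, with all names mine:

```python
# Toy model: Omega is a finite sample space; a "stochastic real" is a
# map from states to reals, represented as a dict.  Any k-ary
# deterministic operation lifts pointwise to stochastic objects.
Omega = ["heads", "tails"]

def lift(op):
    """Lift a k-ary operation on deterministic reals to stochastic reals."""
    def lifted(*xs):
        return {omega: op(*(x[omega] for x in xs)) for omega in Omega}
    return lifted

def embed(c):
    """A deterministic real, viewed as a constant stochastic real."""
    return {omega: c for omega in Omega}

add = lift(lambda a, b: a + b)
mul = lift(lambda a, b: a * b)

x = {"heads": 1.0, "tails": -2.0}
y = {"heads": 3.0, "tails": 5.0}
z = add(x, y)           # {'heads': 4.0, 'tails': 3.0}
w = mul(x, embed(2.0))  # {'heads': 2.0, 'tails': -4.0}
```

Universal identities such as commutativity of addition are inherited automatically from the fibrewise operations, matching the discussion below.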
There is a similar story for -ary relations , although here one has to make a distinction between a deterministic reading of the relation and a stochastic one. Namely, if we are given stochastic objects for , the relation does not necessarily take values in the deterministic Boolean algebra , but only in the stochastic Boolean algebra – thus may be true with some positive probability and also false with some positive probability (with the event that being stochastically true being determined up to null events). Of course, the deterministic Boolean algebra embeds in the stochastic one, so we can talk about a relation being deterministically true or deterministically false, which (due to our identification of stochastic objects that agree almost surely) means that is almost surely true or almost surely false respectively. For instance given two stochastic objects , one can view their equality relation as having a stochastic truth value. This is distinct from the way the equality symbol is used in mathematical logic, which we will now call “equality in the deterministic sense” to reduce confusion. Thus, in the deterministic sense if and only if the stochastic truth value of is equal to , that is to say that for almost all .
Any universal identity for deterministic operations (or universal implication between identities) extends to their stochastic counterparts: for instance, addition is commutative, associative, and cancellative on the space of deterministic reals , and is therefore commutative, associative, and cancellative on stochastic reals as well. However, one has to be more careful when working with mathematical laws that are not expressible as universal identities, or implications between identities. For instance, is an integral domain: if are deterministic reals such that , then one must have or . However, if are stochastic reals such that (in the deterministic sense), then it is no longer necessarily the case that (in the deterministic sense) or that (in the deterministic sense); however, it is still true that “ or ” is true in the deterministic sense if one interprets the boolean operator “or” stochastically, thus “ or ” is true for almost all . Another way to properly obtain a stochastic interpretation of the integral domain property of is to rewrite it as
and then make all sets stochastic to obtain the true statement
thus we have to allow the index for which vanishing occurs to also be stochastic, rather than deterministic. (A technical note: when one proves this statement, one has to select in a measurable fashion; for instance, one can choose to equal when , and otherwise (so that in the “tie-breaking” case when and both vanish, one always selects to equal ).)
Similarly, the law of the excluded middle fails when interpreted deterministically, but remains true when interpreted stochastically: if is a stochastic statement, then it is not necessarily the case that is either deterministically true or deterministically false; however the sentence “ or not-” is still deterministically true if the boolean operator “or” is interpreted stochastically rather than deterministically.
To avoid having to keep pointing out which operations are interpreted stochastically and which ones are interpreted deterministically, we will use the following convention: if we assert that a mathematical sentence involving stochastic objects is true, then (unless otherwise specified) we mean that is deterministically true, assuming that all relations used inside are interpreted stochastically. For instance, if are stochastic reals, when we assert that “Exactly one of , , or is true”, then by default it is understood that the relations , , and the boolean operator “exactly one of” are interpreted stochastically, and the assertion is that the sentence is deterministically true.
In the above discussion, the stochastic objects being considered were elements of a deterministic space , such as the reals . However, it can often be convenient to generalise this situation by allowing the ambient space to also be stochastic. For instance, one might wish to consider a stochastic vector inside a stochastic vector space , or a stochastic edge of a stochastic graph . In order to formally describe this situation within the classical framework of measure theory, one needs to place all the ambient spaces inside a measurable space. This can certainly be done in many contexts (e.g. when considering random graphs on a deterministic set of vertices, or if one is willing to work up to equivalence and place the ambient spaces inside a suitable moduli space), but is not completely natural in other contexts. For instance, if one wishes to consider stochastic vector spaces of potentially unbounded dimension (in particular, potentially larger than any given cardinal that one might specify in advance), then the class of all possible vector spaces is so large that it becomes a proper class rather than a set (even if one works up to equivalence), making it problematic to give this class the structure of a measurable space; furthermore, even once one does so, one needs to take additional care to pin down what it would mean for a random vector lying in a random vector space to depend “measurably” on .
Of course, in any reasonable application one can avoid the set theoretic issues at least by various ad hoc means, for instance by restricting the dimension of all spaces involved to some fixed cardinal such as . However, the measure-theoretic issues can require some additional effort to resolve properly.
In this post I would like to describe a different way to formalise stochastic spaces, and stochastic elements of these spaces, by viewing the spaces as a measure-theoretic analogue of a sheaf, but being over the probability space rather than over a topological space; stochastic objects are then sections of such sheaves. Actually, for minor technical reasons it is convenient to work in the slightly more general setting in which the base space is a finite measure space rather than a probability space, thus can take any value in rather than being normalised to equal . This will allow us to easily localise to subevents of without the need for normalisation, even when is a null event (though we caution that the map from deterministic objects ceases to be injective in this latter case). We will however still continue to use probabilistic terminology despite the lack of normalisation; thus for instance, sets in will be referred to as events, the measure of such a set will be referred to as the probability (which is now permitted to exceed in some cases), and an event whose complement is a null event shall be said to hold almost surely. It is in fact likely that almost all of the theory below extends to base spaces which are -finite rather than finite (for instance, by damping the measure to become finite, without introducing any further null events), although we will not pursue this further generalisation here.
The approach taken in this post is “topos-theoretic” in nature (although we will not use the language of topoi explicitly here), and is well suited to a “pointless” or “point-free” approach to probability theory, in which the role of the stochastic state is suppressed as much as possible; instead, one strives to always adopt a “relative point of view”, with all objects under consideration being viewed as stochastic objects relative to the underlying base space . In this perspective, the stochastic version of a set is as follows.
Definition 1 (Stochastic set) Unless otherwise specified, we assume that we are given a fixed finite measure space (which we refer to as the base space). A stochastic set (relative to ) is a tuple consisting of the following objects:
- A set assigned to each event ; and
- A restriction map from to assigned to each pair of nested events . (Strictly speaking, one should indicate the dependence on in the notation for the restriction map, e.g. using instead of , but we will abuse notation by omitting the dependence.)
We refer to elements of as local stochastic elements of the stochastic set , localised to the event , and elements of as global stochastic elements (or simply elements) of the stochastic set. (In the language of sheaves, one would use “sections” instead of “elements” here, but I prefer to use the latter terminology here, for compatibility with conventional probabilistic notation, where for instance measurable maps from to are referred to as real random variables, rather than sections of the reals.)
Furthermore, we impose the following axioms:
- (Category) The map from to is the identity map, and if are events in , then for all .
- (Null events trivial) If is a null event, then the set is a singleton set. (In particular, is always a singleton set; this is analogous to the convention that for any number .)
- (Countable gluing) Suppose that for each natural number , one has an event and an element such that for all . Then there exists a unique such that for all .
If is an event in , we define the localisation of the stochastic set to to be the stochastic set
relative to . (Note that there is no need to renormalise the measure on , as we are not demanding that our base space have total measure .)
The following fact is useful for actually verifying that a given object indeed has the structure of a stochastic set:
Exercise 1 Show that to verify the countable gluing axiom of a stochastic set, it suffices to do so under the additional hypothesis that the events are disjoint. (Note that this is quite different from the situation with sheaves over a topological space, in which the analogous gluing axiom is often trivial in the disjoint case but has non-trivial content in the overlapping case. This is ultimately because a -algebra is closed under all Boolean operations, whereas a topology is only closed under union and intersection.)
Let us illustrate the concept of a stochastic set with some examples.
Example 1 (Discrete case) A simple case arises when is a discrete space which is at most countable. Here we assign a set to each , with a singleton if ; one then sets , with the obvious restriction maps, giving rise to a stochastic set . (Thus, a local element of can be viewed as a map on that takes values in for each .) Conversely, it is not difficult to see that any stochastic set over an at most countable discrete probability space is of this form up to isomorphism. In this case, one can think of as a bundle of sets over each point (of positive probability) in the base space . One can extend this bundle interpretation of stochastic sets to reasonably nice sample spaces (such as standard Borel spaces) and similarly reasonable ; however, I would like to avoid this interpretation in the formalism below in order to be able to easily work in settings in which and are very “large” (e.g. not separable in any reasonable sense). Note that we permit some of the to be empty, thus it can be possible for to be empty whilst for some strict subevents of to be non-empty. (This is analogous to how it is possible for a sheaf to have local sections but no global sections.) As such, the space of global elements does not completely determine the stochastic set ; one sometimes needs to localise to an event in order to see the full structure of such a set. Thus it is important to distinguish between a stochastic set and its space of global elements. (As such, it is a slight abuse of the axiom of extensionality to refer to global elements of simply as “elements”, but hopefully this should not cause too much confusion.)
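The discrete case of Example 1 is simple enough to implement directly: a stochastic set is a bundle of fibres, a local element over an event is a choice of a point in each fibre over that event, restriction is restriction of partial functions, and gluing is union of compatible partial functions. A Python sketch, with class and method names mine:

```python
class DiscreteStochasticSet:
    """A stochastic set over a finite discrete sample space, as in Example 1.

    fibres: dict mapping each state omega to the fibre set over omega.
    A local element over an event E (a set of states) is a dict assigning
    to each omega in E a point of fibres[omega].  (Illustrative code only.)
    """

    def __init__(self, fibres):
        self.fibres = fibres

    def is_local_element(self, E, x):
        return set(x) == set(E) and all(x[w] in self.fibres[w] for w in E)

    def restrict(self, x, E):
        """Restriction map down to the subevent E."""
        return {w: x[w] for w in E}

    def glue(self, pieces):
        """Glue compatible local elements (event, element) into a single
        element over the union event; uniqueness is automatic here."""
        glued = {}
        for E, x in pieces:
            for w in E:
                assert w not in glued or glued[w] == x[w], "incompatible pieces"
                glued[w] = x[w]
        return glued

X = DiscreteStochasticSet({1: {"a", "b"}, 2: {"c"}, 3: {"d", "e"}})
x12 = {1: "a", 2: "c"}
x23 = {2: "c", 3: "e"}
x = X.glue([(frozenset({1, 2}), x12), (frozenset({2, 3}), x23)])
# x == {1: 'a', 2: 'c', 3: 'e'}, and restricting recovers the pieces
```

Note the pieces here overlap on the state 2, so the compatibility check in `glue` is genuinely exercised; as Exercise 1 observes, over a σ-algebra one could always reduce to the disjoint case.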
Example 2 (Measurable spaces as stochastic sets) Returning now to a general base space , any (deterministic) measurable space gives rise to a stochastic set , with being defined, as in the previous discussion, as the measurable functions from to modulo almost everywhere equivalence (in particular, a singleton set when is null), with the usual restriction maps. The constraint of measurability on the maps , together with the quotienting by almost sure equivalence, means that is now more complicated than a plain Cartesian product of fibres, but this still serves as a useful first approximation to what is for the purposes of developing intuition. Indeed, the measurability constraint is so weak (as compared for instance to topological or smooth constraints in other contexts, such as sheaves of continuous or smooth sections of bundles) that the intuition of essentially independent fibres is quite an accurate one, at least if one avoids consideration of an uncountable number of objects simultaneously.
Example 3 (Extended Hilbert modules) This example is the one that motivated this post for me. Suppose that one has an extension of the base space , thus we have a measurable factor map such that the pushforward of the measure by is equal to . Then we have a conditional expectation operator , defined as the adjoint of the pullback map . As is well known, the conditional expectation operator also extends to a contraction ; by monotone convergence we may also extend to a map from measurable functions from to the extended non-negative reals , to measurable functions from to . We then define the “extended Hilbert module” to be the space of functions with finite almost everywhere. This is an extended version of the Hilbert module , which is defined similarly except that is required to lie in ; this is a Hilbert module over which is of particular importance in the Furstenberg-Zimmer structure theory of measure-preserving systems. We can then define the stochastic set by setting
with the obvious restriction maps. In the case that are standard Borel spaces, one can disintegrate as an integral of probability measures (supported in the fibre ), in which case this stochastic set can be viewed as having fibres (though if is not discrete, there are still some measurability conditions in on the local and global elements that need to be imposed). However, I am interested in the case when are not standard Borel spaces (in fact, I will take them to be algebraic probability spaces, as defined in this previous post), in which case disintegrations are not available. However, it appears that the stochastic analysis developed in this blog post can serve as a substitute for the tool of disintegration in this context.
We make the remark that if is a stochastic set and are events that are equivalent up to null events, then one can identify with (through their common restriction to , with the restriction maps now being bijections). As such, the notion of a stochastic set does not require the full structure of a concrete probability space ; one could also have defined the notion using only the abstract -algebra consisting of modulo null events as the base space, or equivalently one could define stochastic sets over the algebraic probability spaces defined in this previous post. However, we will stick with the classical formalism of concrete probability spaces here so as to keep the notation reasonably familiar.
As a corollary of the above observation, we see that if the base space has total measure , then all stochastic sets are trivial (they are just points).
Exercise 2 If is a stochastic set, show that there exists an event with the property that for any event , is non-empty if and only if is contained in modulo null events. (In particular, is unique up to null events.) Hint: consider the numbers for ranging over all events with non-empty, and form a maximising sequence for these numbers. Then use all three axioms of a stochastic set.
One can now start take many of the fundamental objects, operations, and results in set theory (and, hence, in most other categories of mathematics) and establish analogues relative to a finite measure space. Implicitly, what we will be doing in the next few paragraphs is endowing the category of stochastic sets with the structure of an elementary topos. However, to keep things reasonably concrete, we will not explicitly emphasise the topos-theoretic formalism here, although it is certainly lurking in the background.
Firstly, we define a stochastic function between two stochastic sets to be a collection of maps for each which form a natural transformation in the sense that for all and nested events . In the case when is discrete and at most countable (and after deleting all null points), a stochastic function is nothing more than a collection of functions for each , with the function then being a direct sum of the factor functions :
Thus (in the discrete, at most countable setting, at least) stochastic functions do not mix together information from different states in a sample space; the value of at depends only on the value of at . The situation is a bit more subtle for continuous probability spaces, due to the identification of stochastic objects that agree almost surely; nevertheless, it is still good intuition to think of stochastic functions as essentially being “pointwise” or “local” in nature.
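Continuing the discrete toy model, a stochastic function is just a family of fibre maps applied state by state. A minimal Python illustration (names mine):

```python
# In the discrete model, a stochastic function is a family of fibre maps
# f_omega, one per state, applied pointwise to local elements.
fibre_maps = {1: lambda v: v.upper(), 2: lambda v: v + "!", 3: len}

def apply_stochastic(f, x):
    """Apply a stochastic function to a local element, fibre by fibre;
    the value at omega depends only on the value of x at omega."""
    return {w: f[w](x[w]) for w in x}

x = {1: "a", 2: "c", 3: "dog"}
y = apply_stochastic(fibre_maps, x)   # {1: 'A', 2: 'c!', 3: 3}

# Naturality: applying and then restricting to a subevent agrees with
# restricting first and then applying.
sub = {w: x[w] for w in (1, 2)}
```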
One can now form the stochastic set of functions from to , by setting for any event to be the set of local stochastic functions of the localisations of to ; this is a stochastic set if we use the obvious restriction maps. In the case when is discrete and at most countable, the fibre at a point of positive measure is simply the set of functions from to .
In a similar spirit, we say that one stochastic set is a (stochastic) subset of another , and write , if we have a stochastic inclusion map, thus for all events , with the restriction maps being compatible. We can then define the power set of a stochastic set by setting for any event to be the set of all stochastic subsets of relative to ; it is easy to see that is a stochastic set with the obvious restriction maps (one can also identify with in the obvious fashion). Again, when is discrete and at most countable, the fibre of at a point of positive measure is simply the deterministic power set .
Note that if is a stochastic function and is a stochastic subset of , then the inverse image , defined by setting for any event to be the set of those with , is a stochastic subset of . In particular, given a -ary relation , the inverse image is a stochastic subset of , which by abuse of notation we denote as
In a similar spirit, if is a stochastic subset of and is a stochastic function, we can define the image by setting to be the set of those with ; one easily verifies that this is a stochastic subset of .
Remark 2 One should caution that in the definition of the subset relation , it is important that for all events , not just the global event ; in particular, just because a stochastic set has no global sections, does not mean that it is contained in the stochastic empty set .
Now we discuss Boolean operations on stochastic subsets of a given stochastic set . Given two stochastic subsets of , the stochastic intersection is defined by setting to be the set of that lie in both and :
This is easily verified to again be a stochastic subset of . More generally one may define stochastic countable intersections for any sequence of stochastic subsets of . One could extend this definition to uncountable families if one wished, but I would advise against it, because some of the usual laws of Boolean algebra (e.g. the de Morgan laws) may break down in this setting.
Stochastic unions are a bit more subtle. The set should not be defined to simply be the union of and , as this would not respect the gluing axiom. Instead, we define to be the set of all such that one can cover by measurable subevents such that for ; then may be verified to be a stochastic subset of . Thus for instance is the stochastic union of and . Similarly for countable unions of stochastic subsets of , although uncountable unions are extremely problematic (they are disliked by both the measure theory and the countable gluing axiom) and will not be defined here. Finally, the stochastic difference set is defined as the set of all in such that for any subevent of of positive probability. One may verify that in the case when is discrete and at most countable, these Boolean operations correspond to the classical Boolean operations applied separately to each fibre of the relevant sets . We also leave as an exercise to the reader to verify the usual laws of Boolean arithmetic, e.g. the de Morgan laws, provided that one works with at most countable unions and intersections.
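In the discrete model one can check directly that these Boolean operations are the classical ones applied fibre by fibre; the representation below of stochastic subsets as dictionaries of fibre subsets is mine:

```python
# Fibrewise Boolean operations on stochastic subsets in the discrete
# model: a stochastic subset assigns to each state a subset of the
# fibre over that state.
A = {1: {"a"}, 2: {"c"}, 3: set()}
B = {1: {"b"}, 2: {"c"}, 3: {"d"}}

union        = {w: A[w] | B[w] for w in A}   # stochastic union
intersection = {w: A[w] & B[w] for w in A}   # stochastic intersection
difference   = {w: A[w] - B[w] for w in A}   # stochastic difference
```

Note that the fibre of the union at state 1 contains both "a" and "b" even though no single one of A, B contains both; this is the fibrewise shadow of the covering-by-subevents definition above.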
One can also consider a stochastic finite union in which the number of sets in the union is itself stochastic. More precisely, let be a stochastic set, let be a stochastic natural number, and let be a stochastic function from the stochastic set (defined by setting ) to the stochastic power set . Here we are considering to be a natural number, to allow for unions that are possibly empty, with used for the positive natural numbers. We also write for the stochastic function . Then we can define the stochastic union by setting for an event to be the set of local elements with the property that there exists a covering of by measurable subevents for , such that one has and . One can verify that is a stochastic set (with the obvious restriction maps). Again, in the model case when is discrete and at most countable, the fibre is what one would expect it to be, namely .
The Cartesian product of two stochastic sets may be defined by setting for all events , with the obvious restriction maps; this is easily seen to be another stochastic set. This lets one define the concept of a -ary operation from stochastic sets to another stochastic set , or a -ary relation . In particular, given for , the relation may be deterministically true, deterministically false, or have some other stochastic truth value.
Remark 3 In the degenerate case when is null, stochastic logic becomes a bit weird: all stochastic statements are deterministically true, as are their stochastic negations, since every event in (even the empty set) now holds with full probability. Among other pathologies, the empty set now has a global element over (this is analogous to the notorious convention ), and any two deterministic objects become equal over : .
The following simple observation is crucial to subsequent discussion. If is a sequence taking values in the global elements of a stochastic space , then we may also define global elements for stochastic indices as well, by appealing to the countable gluing axiom to glue together restricted to the set for each deterministic natural number to form . With this definition, the map is a stochastic function from to ; indeed, this creates a one-to-one correspondence between external sequences (maps from to ) and stochastic sequences (stochastic functions from to ). Similarly with replaced by any other at most countable set. This observation will be important in allowing many deterministic arguments involving sequences to be carried over to the stochastic setting.
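In the discrete toy model, gluing a sequence along a stochastic index amounts to fancy indexing: on the event where the index equals k, one uses the k-th term of the sequence. A small Python sketch (names mine):

```python
# Gluing a sequence of stochastic reals along a stochastic index:
# on the event {omega : n(omega) = k}, the glued element agrees with x_k.
Omega = [0, 1, 2, 3]

# a deterministic-index sequence of stochastic reals, x_n : Omega -> R
x = [{w: n * 10 + w for w in Omega} for n in range(5)]

# a stochastic natural number n : Omega -> N
n = {0: 2, 1: 0, 2: 4, 3: 2}

# the glued global element x_{n(omega)}(omega)
x_n = {w: x[n[w]][w] for w in Omega}
# x_n == {0: 20, 1: 1, 2: 42, 3: 23}
```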
We now specialise from the extremely broad discipline of set theory to the more focused discipline of real analysis. There are two fundamental axioms that underlie real analysis (and in particular distinguishes it from real algebra). The first is the Archimedean property, which we phrase in the “no infinitesimal” formulation as follows:
Proposition 2 (Archimedean property) Let be such that for all positive natural numbers . Then .
The other is the least upper bound axiom:
Proposition 3 (Least upper bound axiom) Let be a non-empty subset of which has an upper bound , thus for all . Then there exists a unique real number with the following properties:
- for all .
- For any real , there exists such that .
- .
Furthermore, does not depend on the choice of .
The Archimedean property extends easily to the stochastic setting:
Proposition 4 (Stochastic Archimedean property) Let be such that for all deterministic natural numbers . Then .
Remark 4 Here, incidentally, is one place in which this stochastic formalism deviates from the nonstandard analysis formalism, as the latter certainly permits the existence of infinitesimal elements. On the other hand, we caution that stochastic real numbers are permitted to be unbounded, so that the formulation of the Archimedean property is not valid in the stochastic setting.
The proof is easy and is left to the reader. The least upper bound axiom also extends nicely to the stochastic setting, but the proof requires more work (in particular, our argument uses the monotone convergence theorem):
Theorem 5 (Stochastic least upper bound axiom) Let be a stochastic subset of which has a global upper bound , thus for all , and is globally non-empty in the sense that there is at least one global element . Then there exists a unique stochastic real number with the following properties:
- for all .
- For any stochastic real , there exists such that .
- .
Furthermore, does not depend on the choice of .
For future reference, we note that the same result holds with replaced by throughout, since the latter may be embedded in the former, for instance by mapping to and to . In applications, the above theorem serves as a reasonable substitute for the countable axiom of choice, which does not appear to hold in unrestricted generality relative to a measure space; in particular, it can be used to generate various extremising sequences for stochastic functionals on various stochastic function spaces.
Proof: Uniqueness is clear (using the Archimedean property), as is the independence of the choice of , so we turn to existence. By using an order-preserving map from to (e.g. ) we may assume that is a subset of , and that .
We observe that is a lattice: if , then and also lie in . Indeed, may be formed by appealing to the countable gluing axiom to glue (restricted to the set ) with (restricted to the set ), and similarly for . (Here we use the fact that relations such as are Borel measurable on .)
Let denote the deterministic quantity
then (by Proposition 3!) is well-defined; here we use the hypothesis that is finite. Thus we may find a sequence of elements of such that
Using the lattice property, we may assume that the are non-decreasing: whenever . If we then define (after choosing measurable representatives of each equivalence class ), then is a stochastic real with .
If , then , and so
From this and (1) we conclude that
From monotone convergence, we conclude that
and so , as required.
Now let be a stochastic real. After choosing measurable representatives of each relevant equivalence class, we see that for almost every , we can find a natural number with . If we choose to be the first such positive natural number when it exists, and (say) otherwise, then is a stochastic positive natural number and . The claim follows.
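The lattice-and-monotone-convergence argument above can be made concrete on a toy finite probability space. The following sketch (all names and the particular family are illustrative, not from the post) replaces a bounded family of random variables by a non-decreasing sequence of pointwise maxima and checks that the expectations converge monotonically, in the spirit of the proof:

```python
# Toy model: a finite probability space Omega = {0, ..., M-1} with uniform
# measure, where a "stochastic real" is just a function Omega -> R.
M = 1000

# A hypothetical bounded family of stochastic reals X_1, X_2, ...
# (illustrative only), each bounded above by 10.
def X(n, w):
    return 10.0 * (1.0 - 1.0 / (n + (w % 7) + 1))

# The lattice trick from the proof: replace the family by the non-decreasing
# sequence Y_n = max(X_1, ..., X_n), then take the pointwise limit.
def Y(n, w):
    return max(X(k, w) for k in range(1, n + 1))

def expectation(f):
    return sum(f(w) for w in range(M)) / M

# Monotone convergence: the expectations E[Y_n] increase towards that of the
# stochastic supremum.
vals = [expectation(lambda w, n=n: Y(n, w)) for n in (1, 2, 5, 20, 100)]
assert all(a <= b for a, b in zip(vals, vals[1:]))
assert vals[-1] > 9.0
```

Of course, on a genuine (infinite) measure space one must also choose measurable representatives and invoke the countable gluing axiom, which the finite toy model trivialises.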
Remark 5 One can abstract away the role of the measure here, leaving only the ideal of null sets. The property that the measure is finite is then replaced by the more general property that given any non-empty family of measurable sets, there is an at most countable union of sets in that family that is an upper bound modulo null sets for all elements in that family.
Using Proposition 4 and Theorem 5, one can then revisit many of the other foundational results of deterministic real analysis, and develop stochastic analogues; we give some examples of this below the fold (focusing on the Heine-Borel theorem and a case of the spectral theorem). As an application of this formalism, we revisit some of the Furstenberg-Zimmer structural theory of measure-preserving systems, particularly that of relatively compact and relatively weakly mixing systems, and interpret them in this framework, basically as stochastic versions of compact and weakly mixing systems (though with the caveat that the shift map is allowed to act non-trivially on the underlying probability space). As this formalism is “point-free”, in that it avoids explicit use of fibres and disintegrations, it will be well suited for generalising this structure theory to settings in which the underlying probability spaces are not standard Borel, and the underlying groups are uncountable; I hope to discuss such generalisations in future blog posts.
Remark 6 Roughly speaking, stochastic real analysis can be viewed as a restricted subset of classical real analysis in which all operations have to be “measurable” with respect to the base space. In particular, indiscriminate application of the axiom of choice is not permitted, and one should largely restrict oneself to performing countable unions and intersections rather than arbitrary unions or intersections. Presumably one can formalise this intuition with a suitable “countable transfer principle”, but I was not able to formulate a clean and general principle of this sort, instead verifying various assertions about stochastic objects by hand rather than by direct transfer from the deterministic setting. However, it would be desirable to have such a principle, since otherwise one is faced with the tedious task of redoing all the foundations of real analysis (or whatever other base theory of mathematics one is going to be working in) in the stochastic setting by carefully repeating all the arguments.
More generally, topos theory is a good formalism for capturing precisely the informal idea of performing mathematics with certain operations, such as the axiom of choice, the law of the excluded middle, or arbitrary unions and intersections, being somehow “prohibited” or otherwise “restricted”.
Two of the most famous open problems in additive prime number theory are the twin prime conjecture and the binary Goldbach conjecture. They have quite similar forms:
- Twin prime conjecture The equation has infinitely many solutions with prime.
- Binary Goldbach conjecture The equation has at least one solution with prime for any given even .
In view of this similarity, it is not surprising that the partial progress on these two conjectures has tracked the other fairly closely; the twin prime conjecture is generally considered slightly easier than the binary Goldbach conjecture, but broadly speaking any progress made on one of the conjectures has also led to a comparable amount of progress on the other. (For instance, Chen’s theorem has a version for the twin prime conjecture, and a version for the binary Goldbach conjecture.) Also, the notorious parity obstruction is present in both problems, preventing a solution to either conjecture by almost all known methods (see this previous blog post for more discussion).
In this post, I would like to note a divergence from this general principle, with regards to bounded error versions of these two conjectures:
- Twin prime with bounded error The inequalities have infinitely many solutions with prime for some absolute constant .
- Binary Goldbach with bounded error The inequalities have at least one solution with prime for any sufficiently large and some absolute constant .
The first of these statements is now a well-known theorem of Zhang, and the Polymath8b project hosted on this blog has managed to lower to unconditionally, and to assuming the generalised Elliott-Halberstam conjecture. However, the second statement remains open; the best result that the Polymath8b project could manage in this direction is that (assuming GEH) at least one of the binary Goldbach conjecture with bounded error, or the twin prime conjecture with no error, had to be true.
All the known proofs of Zhang’s theorem proceed through sieve-theoretic means. Basically, they take as input equidistribution results that control the size of discrepancies such as
for various congruence classes and various arithmetic functions , e.g. (or more generally for various ). After taking some carefully chosen linear combinations of these discrepancies, and using the trivial positivity lower bound
one eventually obtains (for suitable ) a non-trivial lower bound of the form
where is some weight function, and is the set of such that there are at least two primes in the interval . This implies at least one solution to the inequalities with , and Zhang’s theorem follows.
In a similar vein, one could hope to use bounds on discrepancies such as (1) (for comparable to ), together with the trivial lower bound (2), to obtain (for sufficiently large , and suitable ) a non-trivial lower bound of the form
for some weight function , where is the set of such that there is at least one prime in each of the intervals and . This would imply the binary Goldbach conjecture with bounded error.
However, the parity obstruction blocks such a strategy from working (for much the same reason that it blocks any bound of the form in Zhang’s theorem, as discussed in the Polymath8b paper.) The reason is as follows. The sieve-theoretic arguments are linear with respect to the summation, and as such, any such sieve-theoretic argument would automatically also work in a weighted setting in which the summation is weighted by some non-negative weight . More precisely, if one could control the weighted discrepancies
to essentially the same accuracy as the unweighted discrepancies (1), then thanks to the trivial weighted version
of (2), any sieve-theoretic argument that was capable of proving (3) would also be capable of proving the weighted estimate
However, (4) may be defeated by a suitable choice of weight , namely
where is the Liouville function, which counts the parity of the number of prime factors of a given number . Since , one can expand out as the sum of and a finite number of other terms, each of which consists of the product of two or more translates (or reflections) of . But from the Möbius randomness principle (or its analogue for the Liouville function), such products of are widely expected to be essentially orthogonal to any arithmetic function that is arising from a single multiplicative function such as , even on very short arithmetic progressions. As such, replacing by in (1) should have a negligible effect on the discrepancy. On the other hand, in order for to be non-zero, has to have the same sign as , and hence the opposite sign to ; it follows that and cannot simultaneously be prime for any , and so vanishes identically, contradicting (4). This indirectly rules out any modification of the Goldston-Pintz-Yildirim/Zhang method for establishing the binary Goldbach conjecture with bounded error.
The above argument is not watertight, and one could envisage some ways around this problem. One of them is that the Möbius randomness principle could simply be false, in which case the parity obstruction vanishes. A good example of this is the result of Heath-Brown that shows that if there are infinitely many Siegel zeroes (which is a strong violation of the Möbius randomness principle), then the twin prime conjecture holds. Another way around the obstruction is to start controlling the discrepancy (1) for functions that are combinations of more than one multiplicative function, e.g. . However, controlling such functions looks to be at least as difficult as the twin prime conjecture (which is morally equivalent to obtaining non-trivial lower-bounds for ). A third option is not to use a sieve-theoretic argument, but to try a different method (e.g. the circle method). However, most other known methods also exhibit linearity in the “” variable and I would suspect they would be vulnerable to a similar obstruction. (In any case, the circle method specifically has some other difficulties in tackling binary problems, as discussed in this previous post.)
Let be the algebraic closure of , that is to say the field of algebraic numbers. We fix an embedding of into , giving rise to a complex absolute value for algebraic numbers .
Let be of degree , so that is irrational. A classical theorem of Liouville gives the quantitative bound
for the extent to which fails to be approximated by rational numbers , where depends on but not on . Indeed, if one lets be the Galois conjugates of , then the quantity is a non-zero natural number divided by a constant, and so we have the trivial lower bound
from which the bound (1) easily follows. A well known corollary of the bound (1) is that Liouville numbers are automatically transcendental.
The famous theorem of Thue, Siegel and Roth improves the bound (1) to
for any and rationals , where depends on but not on . Apart from the in the exponent and the implied constant, this bound is optimal, as can be seen from Dirichlet’s theorem. This theorem is a good example of the ineffectivity phenomenon that affects a large portion of modern number theory: the implied constant in the notation is known to be finite, but there is no explicit bound for it in terms of the coefficients of the polynomial defining (in contrast to (1), for which an effective bound may be easily established). This is ultimately due to the reliance on the “dueling conspiracy” (or “repulsion phenomenon”) strategy. We do not as yet have a good way to rule out one counterexample to (2), in which is far closer to than ; however we can rule out two such counterexamples, by playing them off of each other.
A powerful strengthening of the Thue-Siegel-Roth theorem is given by the subspace theorem, first proven by Schmidt and then generalised further by several authors. To motivate the theorem, first observe that the Thue-Siegel-Roth theorem may be rephrased as a bound of the form
for any algebraic numbers with and linearly independent (over the algebraic numbers), and any and , with the exception when or are rationally dependent (i.e. one is a rational multiple of the other), in which case one has to remove some lines (i.e. subspaces in ) of rational slope from the space of pairs to which the bound (3) does not apply (namely, those lines for which the left-hand side vanishes). Here can depend on but not on . More generally, we have
Theorem 1 (Schmidt subspace theorem) Let be a natural number. Let be linearly independent linear forms. Then for any , one has the bound
for all , outside of a finite number of proper subspaces of , where
and depends on and the , but is independent of .
Being a generalisation of the Thue-Siegel-Roth theorem, it is unsurprising that the known proofs of the subspace theorem are also ineffective with regards to the constant . (However, the number of exceptional subspaces may be bounded effectively; cf. the situation with the Skolem-Mahler-Lech theorem, discussed in this previous blog post.) Once again, the lower bound here is basically sharp except for the factor and the implied constant: given any with , a simple volume packing argument (the same one used to prove the Dirichlet approximation theorem) shows that for any sufficiently large , one can find integers , not all zero, such that
for all . Thus one can get comparable to in many different ways.
There are important generalisations of the subspace theorem to other number fields than the rationals (and to other valuations than the Archimedean valuation ); we will develop one such generalisation below.
The subspace theorem is one of many finiteness theorems in Diophantine geometry; in this case, it is the number of exceptional subspaces which is finite. It turns out that finiteness theorems are very compatible with the language of nonstandard analysis. (See this previous blog post for a review of the basics of nonstandard analysis, and in particular for the nonstandard interpretation of asymptotic notation such as and .) The reason for this is that a standard set is finite if and only if it contains no strictly nonstandard elements (that is to say, elements of ). This makes for a clean formulation of finiteness theorems in the nonstandard setting. For instance, the standard form of Bezout’s theorem asserts that if are coprime polynomials over some field, then the curves and intersect in only finitely many points. The nonstandard version of this is then
Theorem 2 (Bezout’s theorem, nonstandard form) Let be standard coprime polynomials. Then there are no strictly nonstandard solutions to .
Now we reformulate Theorem 1 in nonstandard language. We need a definition:
Definition 3 (General position) Let be nested fields. A point in is said to be in -general position if it is not contained in any hyperplane of definable over , or equivalently if one has
for any .
Theorem 4 (Schmidt subspace theorem, nonstandard version) Let be a standard natural number. Let be linearly independent standard linear forms. Let be a tuple of nonstandard integers which is in -general position (in particular, this forces to be strictly nonstandard). Then one has
where we extend from to (and also similarly extend from to ) in the usual fashion.
Observe that (as is usual when translating to nonstandard analysis) some of the epsilons and quantifiers that are present in the standard version become hidden in the nonstandard framework, being moved inside concepts such as “strictly nonstandard” or “general position”. We remark that as is in -general position, it is also in -general position (as an easy Galois-theoretic argument shows), and the requirement that the are linearly independent is thus equivalent to being -linearly independent.
Exercise 1 Verify that Theorem 1 and Theorem 4 are equivalent. (Hint: there are only countably many proper subspaces of .)
We will not prove the subspace theorem here, but instead focus on a particular application of the subspace theorem, namely to counting integer points on curves. In this paper of Corvaja and Zannier, the subspace theorem was used to give a new proof of the following basic result of Siegel:
Theorem 5 (Siegel’s theorem on integer points) Let be an irreducible polynomial of two variables, such that the affine plane curve either has genus at least one, or has at least three points on the line at infinity, or both. Then has only finitely many integer points .
This is a finiteness theorem, and as such may be easily converted to a nonstandard form:
Theorem 6 (Siegel’s theorem, nonstandard form) Let be a standard irreducible polynomial of two variables, such that the affine plane curve either has genus at least one, or has at least three points on the line at infinity, or both. Then does not contain any strictly nonstandard integer points .
Note that Siegel’s theorem can fail for genus zero curves that meet the line at infinity in just one or two points; the key examples here are the graphs for a polynomial , and the Pell equation curves . Siegel’s theorem can be compared with the more difficult theorem of Faltings, which establishes finiteness of rational points (not just integer points), but now needs the stricter requirement that the curve has genus at least two (to avoid the additional counterexample of elliptic curves of positive rank, which have infinitely many rational points).
The standard proofs of Siegel’s theorem rely on a combination of the Thue-Siegel-Roth theorem and a number of results on abelian varieties (notably the Mordell-Weil theorem). The Corvaja-Zannier argument rebalances the difficulty of the argument by replacing the Thue-Siegel-Roth theorem by the more powerful subspace theorem (in fact, they need one of the stronger versions of this theorem alluded to earlier), while greatly reducing the reliance on results on abelian varieties. Indeed, for curves with three or more points at infinity, no theory from abelian varieties is needed at all, while for the remaining cases, one mainly needs the existence of the Abel-Jacobi embedding, together with a relatively elementary theorem of Chevalley-Weil which is used in the proof of the Mordell-Weil theorem, but is significantly easier to prove.
The Corvaja-Zannier argument (together with several further applications of the subspace theorem) is presented nicely in this Bourbaki expose of Bilu. To establish the theorem in full generality requires a certain amount of algebraic number theory machinery, such as the theory of valuations on number fields, or of relative discriminants between such number fields. However, the basic ideas can be presented without much of this machinery by focusing on simple special cases of Siegel’s theorem. For instance, we can handle irreducible cubics that meet the line at infinity at exactly three points :
Theorem 7 (Siegel’s theorem with three points at infinity) Siegel’s theorem holds when the irreducible polynomial takes the form
for some quadratic polynomial and some distinct algebraic numbers .
Proof: We use the nonstandard formalism. Suppose for sake of contradiction that we can find a strictly nonstandard integer point on a curve of the indicated form. As this point is infinitesimally close to the line at infinity, must be infinitesimally close to one of ; without loss of generality we may assume that is infinitesimally close to .
We now use a version of the polynomial method, to find some polynomials of controlled degree that vanish to high order on the “arm” of the cubic curve that asymptotes to . More precisely, let be a large integer (actually will already suffice here), and consider the -vector space of polynomials of degree at most , and of degree at most in the variable; this space has dimension . Also, as one traverses the arm of , any polynomial in grows at a rate of at most , that is to say has a pole of order at most at the point at infinity . By performing Laurent expansions around this point (which is a non-singular point of , as the are assumed to be distinct), we may thus find a basis of , with the property that has a pole of order at most at for each .
From the control of the pole at , we have
for all . The exponents here become negative for , and on multiplying them all together we see that
This exponent is negative for large enough (or just take ). If we expand
for some algebraic numbers , then we thus have
for some standard . Note that the -dimensional vectors are linearly independent in , because the are linearly independent in . Applying the Schmidt subspace theorem in the contrapositive, we conclude that the -tuple is not in -general position. That is to say, one has a non-trivial constraint of the form
for some standard rational coefficients , not all zero. But, as is irreducible and cubic in , it has no common factor with the standard polynomial , so by Bezout’s theorem (Theorem 2) the constraint (4) only has standard solutions, contradicting the strictly nonstandard nature of .
Exercise 2 Rewrite the above argument so that it makes no reference to nonstandard analysis. (In this case, the rewriting is quite straightforward; however, there will be a subsequent argument in which the standard version is significantly messier than the nonstandard counterpart, which is the reason why I am working with the nonstandard formalism in this blog post.)
A similar argument works for higher degree curves that meet the line at infinity in three or more points, though if the curve has singularities at infinity then it becomes convenient to rely on the Riemann-Roch theorem to control the dimension of the analogue of the space . Note that when there are only two or fewer points at infinity, though, one cannot get the negative exponent of needed to usefully apply the subspace theorem. To deal with this case we require some additional tricks. For simplicity we focus on the case of Mordell curves, although it will be convenient to work with more general number fields than the rationals:
Theorem 8 (Siegel’s theorem for Mordell curves) Let be a non-zero integer. Then there are only finitely many integer solutions to . More generally, for any number field , and any nonzero , there are only finitely many algebraic integer solutions to , where is the ring of algebraic integers in .
Again, we will establish the nonstandard version. We need some additional notation:
Definition 9
We define an almost rational integer to be a nonstandard such that for some standard positive integer , and write for the -algebra of almost rational integers. If is a standard number field, we define an almost -integer to be a nonstandard such that for some standard positive integer , and write for the -algebra of almost -integers. We define an almost algebraic integer to be a nonstandard such that is a nonstandard algebraic integer for some standard positive integer , and write for the -algebra of almost algebraic integers.
Theorem 10 (Siegel for Mordell, nonstandard version) Let be a non-zero standard algebraic number. Then the curve does not contain any strictly nonstandard almost algebraic integer point.
Another way of phrasing this theorem is that if are strictly nonstandard almost algebraic integers, then is either strictly nonstandard or zero.
Exercise 3 Verify that Theorem 8 and Theorem 10 are equivalent.
Due to all the ineffectivity, our proof does not supply any bound on the solutions in terms of , even if one removes all references to nonstandard analysis. It is a conjecture of Hall (a special case of the notorious ABC conjecture) that one has the bound for all (or equivalently ), but even the weaker conjecture that are of polynomial size in is open. (The best known bounds are of exponential nature, and are proven using a version of Baker’s method: see for instance this text of Sprindzuk.)
A direct repetition of the arguments used to prove Theorem 7 will not work here, because the Mordell curve only hits the line at infinity at one point, . To get around this we will exploit the fact that the Mordell curve is an elliptic curve and thus has a group law on it. We will then divide all the integer points on this curve by two; as elliptic curves have four 2-torsion points, this will end up placing us in a situation like Theorem 7, with four points at infinity. However, there is an obstruction: it is not obvious that dividing an integer point on the Mordell curve by two will produce another integer point. However, this is essentially true (after enlarging the ring of integers slightly) thanks to a general principle of Chevalley and Weil, which can be worked out explicitly in the case of division by two on Mordell curves by relatively elementary means (relying mostly on unique factorisation of ideals of algebraic integers). We give the details below the fold.
As laid out in the foundational work of Kolmogorov, a classical probability space (or probability space for short) is a triplet , where is a set, is a -algebra of subsets of , and is a countably additive probability measure on . Given such a space, one can form a number of interesting function spaces, including
- the (real) Hilbert space of square-integrable functions , modulo -almost everywhere equivalence, and with the positive definite inner product ; and
- the unital commutative Banach algebra of essentially bounded functions , modulo -almost everywhere equivalence, with defined as the essential supremum of .
There is also a trace on defined by integration: .
One can form the category of classical probability spaces, by defining a morphism between probability spaces to be a function which is measurable (thus for all ) and measure-preserving (thus for all ).
Let us now abstract the algebraic features of these spaces as follows; for want of a better name, I will refer to this abstraction as an algebraic probability space; it is very similar to the non-commutative probability spaces studied in this previous post, except that these spaces are now commutative (and real).
Definition 1 An algebraic probability space is a pair where
- is a unital commutative real algebra;
- is a homomorphism such that and for all ;
- Every element of is bounded in the sense that . (Technically, this isn’t an algebraic property, but I need it for technical reasons.)
A morphism is a homomorphism which is trace-preserving, in the sense that for all .
For want of a better name, I’ll denote the category of algebraic probability spaces as . One can view this category as the opposite category to that of (a subcategory of) the category of tracial commutative real algebras. One could emphasise this opposite nature by denoting the algebraic probability space as rather than ; another suggestive (but slightly inaccurate) notation, inspired by the language of schemes, would be rather than . However, we will not adopt these conventions here, and refer to algebraic probability spaces just by the pair .
By the previous discussion, we have a covariant functor that takes a classical probability space to its algebraic counterpart , with a morphism of classical probability spaces mapping to a morphism of the corresponding algebraic probability spaces by the formula
for . One easily verifies that this is a functor.
In this post I would like to describe a functor which partially inverts (up to natural isomorphism), that is to say a recipe for starting with an algebraic probability space and producing a classical probability space . This recipe is not new – it is basically the (commutative) Gelfand-Naimark-Segal construction (discussed in this previous post) combined with the Loomis-Sikorski theorem (discussed in this previous post). However, I wanted to put the construction in a single location for sake of reference. I also wanted to make the point that and are not complete inverses; there is a bit of information in the algebraic probability space (e.g. topological information) which is lost when passing back to the classical probability space. In some future posts, I would like to develop some ergodic theory using the algebraic foundations of probability theory rather than the classical foundations; this turns out to be convenient in the ergodic theory arising from nonstandard analysis (such as that described in this previous post), in which the groups involved are uncountable and the underlying spaces are not standard Borel spaces.
Let us describe how to construct the functor , with details postponed to below the fold.
- Starting with an algebraic probability space , form an inner product on by the formula , and also form the spectral radius .
- The inner product is clearly positive semi-definite. Quotienting out the null vectors and taking completions, we arrive at a real Hilbert space , to which the trace may be extended.
- Somewhat less obviously, the spectral radius is well-defined and gives a norm on . Taking limits of sequences in of bounded spectral radius gives us a subspace of that has the structure of a real commutative Banach algebra.
- The idempotents of the Banach algebra may be indexed by elements of an abstract -algebra .
- The Boolean algebra homomorphisms (or equivalently, the real algebra homomorphisms ) may be indexed by elements of a space .
- Let denote the -algebra on generated by the basic sets for every .
- Let be the -ideal of generated by the sets , where is a sequence with .
- One verifies that is isomorphic to . Using this isomorphism, the trace on can be used to construct a countably additive measure on . The classical probability space is then , and the abstract spaces may now be identified with their concrete counterparts , .
- Every algebraic probability space morphism generates a classical probability morphism via the formula
using a pullback operation on the abstract -algebras that can be defined by density.
Remark 1 The classical probability space constructed by the functor has some additional structure; namely is a -Stone space (a Stone space with the property that the closure of any countable union of clopen sets is clopen), is the Baire -algebra (generated by the clopen sets), and the null sets are the meager sets. However, we will not use this additional structure here.
The partial inversion relationship between the functors and is given by the following assertion:
- There is a natural transformation from to the identity functor .
More informally: if one starts with an algebraic probability space and converts it back into a classical probability space , then there is a trace-preserving algebra homomorphism of to , which respects morphisms of the algebraic probability space. While this relationship is far weaker than an equivalence of categories (which would require that and are both natural isomorphisms), it is still good enough to allow many ergodic theory problems formulated using classical probability spaces to be reformulated instead as an equivalent problem in algebraic probability spaces.
Remark 2 The opposite composition is a little odd: it takes an arbitrary probability space and returns a more complicated probability space , with being the space of homomorphisms . While there is “morally” an embedding of into using the evaluation map, this map does not exist in general because points in may well have zero measure. However, if one takes a “pointless” approach and focuses just on the measure algebras , , then these algebras become naturally isomorphic after quotienting out by null sets.
Remark 3 An algebraic probability space captures a bit more structure than a classical probability space, because may be identified with a proper subset of that describes the “regular” functions (or random variables) of the space. For instance, starting with the unit circle (with the usual Haar measure and the usual trace ), any unital subalgebra of that is dense in will generate the same classical probability space on applying the functor , namely one will get the space of homomorphisms from to (with the measure induced from ). Thus for instance could be the continuous functions , the Wiener algebra or the full space , but the classical space will be unable to distinguish these spaces from each other. In particular, the functor loses information (roughly speaking, this functor takes an algebraic probability space and completes it to a von Neumann algebra, but then forgets exactly what algebra was initially used to create this completion). In ergodic theory, this sort of “extra structure” is traditionally encoded in topological terms, by assuming that the underlying probability space has a nice topological structure (e.g. a standard Borel space); however, with the algebraic perspective one has the freedom to have non-topological notions of extra structure, by choosing to be something other than an algebra of continuous functions on a topological space. I hope to discuss one such example of extra structure (coming from the Gowers-Host-Kra theory of uniformity seminorms) in a later blog post (this generalises the example of the Wiener algebra given previously, which is encoding “Fourier structure”).
A small example of how one could use the functors is as follows. Suppose one has a classical probability space with a measure-preserving action of an uncountable group $G$, which is only defined (and an action) up to almost everywhere equivalence; thus for instance for any measurable set $E$ and any $g, h \in G$, the sets $T^g T^h E$ and $T^{gh} E$ might not be exactly equal, but only equal up to a null set. For similar reasons, an element $E$ of the invariant factor might not be exactly invariant with respect to the action, but instead one only has $T^g E$ and $E$ equal up to null sets for each $g \in G$. One might like to "clean up" the action to make it defined everywhere, and a genuine action everywhere, but this is not immediately achievable if $G$ is uncountable, since the union of all the null sets where something bad occurs may cease to be a null set. However, by applying the first functor, each shift $T^g$ defines a morphism on the associated algebraic probability space (i.e. the Koopman operator), and then applying the second functor, we obtain a shift on a new classical probability space which now gives a genuine measure-preserving action of $G$, and which is equivalent to the original action from a measure algebra standpoint. The invariant factor now consists of those sets which are genuinely $G$-invariant, not just up to null sets. (Basically, the new classical probability space contains a Boolean algebra with the property that every measurable set is equivalent up to null sets to precisely one set in this algebra, allowing for a canonical "retraction" that eliminates all null set issues.)
More indirectly, the functors suggest that one should be able to develop a “pointless” form of ergodic theory, in which the underlying probability spaces are given algebraically rather than classically. I hope to give some more specific examples of this in later posts.
There are a number of ways to construct the real numbers ${\bf R}$, for instance
- as the metric completion of ${\bf Q}$ (thus, ${\bf R}$ is defined as the set of Cauchy sequences of rationals, modulo Cauchy equivalence);
- as the space of Dedekind cuts on the rationals ${\bf Q}$;
- as the space of quasimorphisms $\phi: {\bf Z} \to {\bf Z}$ on the integers, quotiented by bounded functions. (I believe this construction first appears in this paper of Street, who credits the idea to Schanuel, though the germ of this construction arguably goes all the way back to Eudoxus.)
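As a concrete illustration of the third construction, here is a small Python sketch (my own, with made-up helper names, not from the post): the quasimorphism $n \mapsto \lfloor \alpha n \rfloor$ represents the real number $\alpha$, its "defect" $f(m+n)-f(m)-f(n)$ stays bounded, and $f(n)/n$ recovers $\alpha$.

```python
import math

def floor_quasimorphism(alpha):
    """The model quasimorphism n -> floor(alpha * n) representing alpha."""
    return lambda n: math.floor(alpha * n)

def defect(f, trials=300):
    """Largest observed |f(m+n) - f(m) - f(n)|; bounded iff f is a quasimorphism."""
    return max(abs(f(m + n) - f(m) - f(n))
               for m in range(-trials, trials) for n in range(-trials, trials, 7))

def represented_real(f, n=10**6):
    """The real number represented by f is the limit of f(n)/n."""
    return f(n) / n

f = floor_quasimorphism(3.25)
assert defect(f) <= 1                 # floor quasimorphisms have defect at most 1
assert abs(represented_real(f) - 3.25) < 1e-6
```

A bounded perturbation of `f` changes neither the boundedness of the defect nor the limit of $f(n)/n$, which is exactly why one quotients by bounded functions.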
There is also a fourth family of constructions that proceeds via nonstandard analysis, as a special case of what is known as the nonstandard hull construction. (Here I will assume some basic familiarity with nonstandard analysis and ultraproducts, as covered for instance in this previous blog post.) Given an unbounded nonstandard natural number $N \in {}^* {\bf N}$, one can define two external additive subgroups of the nonstandard integers ${}^* {\bf Z}$:
- The group $O(N) := \{ n \in {}^* {\bf Z}: |n| \leq CN \hbox{ for some standard } C \}$ of all nonstandard integers of magnitude less than or comparable to $N$; and
- The group $o(N) := \{ n \in {}^* {\bf Z}: |n| \leq \epsilon N \hbox{ for every standard } \epsilon > 0 \}$ of nonstandard integers of magnitude infinitesimally smaller than $N$.
The group $o(N)$ is a subgroup of $O(N)$, so we may form the quotient group $O(N)/o(N)$. This space is isomorphic to the reals ${\bf R}$, and can in fact be used to construct the reals:
Proposition 1 For any coset $n + o(N)$ of $o(N)$ (with $n \in O(N)$), there is a unique real number $x$ with the property that $n = xN + o(N)$ (i.e. $|n - xN| = o(N)$). The map $n + o(N) \mapsto x$ is then an isomorphism between the additive groups $O(N)/o(N)$ and ${\bf R}$.
Proof: Uniqueness is clear. For existence, observe that the set $\{ q \in {\bf Q}: qN \leq n \}$ is a Dedekind cut, and its supremum $x$ can be verified to have the required properties for $n$.
In a similar vein, we can view the unit interval $[0,1]$ in the reals as the quotient

$\displaystyle [0,1] \equiv [N] / o(N) \ \ \ \ \ (1)$

where $[N]$ is the nonstandard (i.e. internal) set $\{1,\dots,N\}$; of course, $[N]$ is not a group, so one should interpret $[N]/o(N)$ as the image of $[N]$ under the quotient map ${}^* {\bf Z} \to {}^* {\bf Z}/o(N)$ (or $O(N)/o(N)$, if one prefers). Or to put it another way, (1) asserts that $[0,1]$ is the image of $[N]$ with respect to the map $\pi: n \mapsto \hbox{st}(n/N)$, where $\hbox{st}$ denotes the standard part.
In this post I would like to record a nice measure-theoretic version of the equivalence (1), which essentially appears already in standard texts on Loeb measure (see e.g. this text of Cutland). To describe the results, we must first quickly recall the construction of Loeb measure on $[N]$. Given an internal subset $A$ of $[N]$, we may define the elementary measure $\mu_0(A)$ of $A$ by the formula

$\displaystyle \mu_0(A) := \hbox{st} \frac{|A|}{N}.$

This is a finitely additive probability measure on the Boolean algebra of internal subsets of $[N]$. We can then construct the Loeb outer measure $\mu^*(E)$ of any subset $E$ of $[N]$, in complete analogy with Lebesgue outer measure, by the formula

$\displaystyle \mu^*(E) := \inf \sum_{n=1}^\infty \mu_0(A_n)$

where $(A_n)_{n=1}^\infty$ ranges over all sequences of internal subsets of $[N]$ that cover $E$. We say that a subset $E$ of $[N]$ is Loeb measurable if, for any (standard) $\epsilon > 0$, one can find an internal subset $A$ of $[N]$ which differs from $E$ by a set of Loeb outer measure at most $\epsilon$, and in that case we define the Loeb measure $\mu(E)$ of $E$ to be $\mu^*(E)$. It is a routine matter to show (e.g. using the Carathéodory extension theorem) that the space ${\mathcal L}$ of Loeb measurable sets is a $\sigma$-algebra, and that $\mu$ is a countably additive probability measure on this space that extends the elementary measure $\mu_0$. Thus $[N]$ now has the structure of a probability space $([N], {\mathcal L}, \mu)$.
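To make the elementary measure concrete, here is a small finitary Python model (my own illustration, not from the post, with the internal set replaced by the standard finite set $\{0,\dots,N-1\}$ and the standard part omitted): the measure $|A|/N$ is finitely additive and recovers natural densities.

```python
N = 10**4

def elementary_measure(A):
    """The elementary measure mu(A) = |A| / N of a subset A of {0, ..., N-1}."""
    return len(A) / N

evens = {n for n in range(N) if n % 2 == 0}
odd_multiples_of_3 = {n for n in range(N) if n % 2 == 1 and n % 3 == 0}

# Finite additivity on disjoint sets:
assert evens.isdisjoint(odd_multiples_of_3)
lhs = elementary_measure(evens | odd_multiples_of_3)
rhs = elementary_measure(evens) + elementary_measure(odd_multiples_of_3)
assert abs(lhs - rhs) < 1e-12

# The measure recovers the natural density of a set:
multiples_of_3 = {n for n in range(N) if n % 3 == 0}
assert abs(elementary_measure(multiples_of_3) - 1/3) < 1e-3
```

The passage to Loeb measure is exactly what restores countable additivity, which this finite model cannot see.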
Now, the (external) group $o(N)$ acts (Loeb-almost everywhere) on the probability space $[N]$ by the addition map, thus $T^h n := n + h$ for $n \in [N]$ and $h \in o(N)$ (excluding a set of Loeb measure zero where $n+h$ exits $[N]$). This action is clearly seen to be measure-preserving. As such, we can form the invariant factor $([N], {\mathcal L}^{o(N)}, \mu)$, defined by restricting attention to those Loeb measurable sets $E$ with the property that $T^h E$ is equal $\mu$-almost everywhere to $E$ for each $h \in o(N)$.

The claim is then that this invariant factor is equivalent (up to almost everywhere equivalence) to the unit interval $[0,1]$ with Lebesgue measure $m$ (and the trivial action of $o(N)$), by the same factor map $\pi: n \mapsto \hbox{st}(n/N)$ used in (1). More precisely:
Theorem 2 Given a set $E \in {\mathcal L}^{o(N)}$, there exists a Lebesgue measurable set $F \subset [0,1]$, unique up to $m$-a.e. equivalence, such that $E$ is $\mu$-a.e. equivalent to the set $\pi^{-1}(F)$. Conversely, if $F \subset [0,1]$ is Lebesgue measurable, then $\pi^{-1}(F)$ is in ${\mathcal L}^{o(N)}$, and $\mu(\pi^{-1}(F)) = m(F)$.
More informally, we have the measure-theoretic version

$\displaystyle ([N], {\mathcal L}^{o(N)}, \mu) \equiv ([0,1], m)$

of (1).
Proof: We first prove the converse claim. It is clear that $\pi^{-1}(F)$ is invariant with respect to the $o(N)$ action, so it suffices to show that $\pi^{-1}(F)$ is Loeb measurable with Loeb measure $m(F)$. This is easily verified when $F$ is an elementary set (a finite union of intervals). By countable subadditivity of outer measure, this implies that the Loeb outer measure of $\pi^{-1}(F)$ is bounded by the Lebesgue outer measure of $F$ for any set $F \subset [0,1]$; since every Lebesgue measurable set differs from an elementary set by a set of arbitrarily small Lebesgue outer measure, the claim follows.
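The elementary-set computation in this converse direction can be checked finitarily; in the following Python sketch (my own, with a standard finite $N$ in place of the unbounded one and $\{0,\dots,N-1\}$ as the model of $[N]$), the pull-back of an interval $[a,b)$ under $n \mapsto n/N$ has elementary measure within $O(1/N)$ of its length $b-a$.

```python
N = 10**5

def pullback_elementary_measure(a, b):
    """Elementary measure of the pull-back {n in [0, N) : n/N in [a, b)}
    of the interval [a, b) under the map n -> n/N."""
    count = sum(1 for n in range(N) if a <= n / N < b)
    return count / N

# The pull-back of an interval has measure within O(1/N) of its length:
for a, b in [(0.0, 0.5), (0.25, 0.75), (1/3, 2/3)]:
    assert abs(pullback_elementary_measure(a, b) - (b - a)) < 2 / N
```

In the nonstandard setting the $O(1/N)$ error becomes infinitesimal, so the standard part of the elementary measure is exactly the length of the interval.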
Now we establish the forward claim. Uniqueness is clear from the converse claim, so it suffices to show existence. Let $E \in {\mathcal L}^{o(N)}$, and let $\epsilon > 0$ be an arbitrary standard real number; then we can find an internal set $A$ which differs from $E$ by a set of Loeb measure at most $\epsilon$. As $E$ is $o(N)$-invariant, we conclude that for every $h \in o(N)$, $A$ and $A+h$ differ by a set of Loeb measure (and hence elementary measure) at most $2\epsilon$. By the (contrapositive of the) underspill principle, there must exist a standard $\delta > 0$ such that $A$ and $A+h$ differ by a set of elementary measure at most $3\epsilon$ for all $|h| \leq \delta N$. If we then define the nonstandard function $f$ by the formula
then from the (nonstandard) triangle inequality we have
(say). On the other hand, has the Lipschitz continuity property
and so in particular we see that
for some Lipschitz continuous function . If we then let be the set where , one can check that differs from by a set of Loeb outer measure , and hence does so also. Sending to zero, we see (from the converse claim) that is a Cauchy sequence in and thus converges in for some Lebesgue measurable . The sets then converge in Loeb outer measure to , giving the claim.
Thanks to the Lebesgue differentiation theorem, the conditional expectation ${\bf E}(f | {\mathcal L}^{o(N)})$ of a bounded Loeb-measurable function $f: [N] \to {\bf R}$ can be expressed (as a function on $[0,1]$, defined $m$-a.e.) as

$\displaystyle {\bf E}(f | {\mathcal L}^{o(N)})(x) = \lim_{\epsilon \to 0} {\bf E}_{n \in [N]: |n/N - x| \leq \epsilon} f(n).$

By the abstract ergodic theorem from the previous post, one can also view this conditional expectation as the element of minimal norm in the closed convex hull of the shifts $T^h f$, $h \in o(N)$. In particular, we obtain a form of the von Neumann ergodic theorem in this context: the averages ${\bf E}_{1 \leq h \leq H} T^h f$ for $H \in o(N)$ converge (as a net, rather than a sequence) in $L^2$ to ${\bf E}(f | {\mathcal L}^{o(N)})$.
If $f$ is (the standard part of) an internal function, that is to say the ultralimit of a sequence $f_i$ of finitary bounded functions, one can view the measurable function $F := {\bf E}(f | {\mathcal L}^{o(N)})$ as a limit of the $f_i$ that is analogous to the "graphons" that emerge as limits of graphs (see e.g. the recent text of Lovász on graph limits). Indeed, the measurable function $F$ is related to the discrete functions $f_i$ by the formula

$\displaystyle \int_a^b F(x)\ dx = \lim_{i \to p} {\bf E}_{a N_i \leq n \leq b N_i} f_i(n)$

for all $0 \leq a < b \leq 1$, where $p$ is the nonprincipal ultrafilter used to define the nonstandard universe. In particular, from the Arzelà-Ascoli diagonalisation argument there is a subsequence $f_{i_j}$ such that

$\displaystyle \int_a^b F(x)\ dx = \lim_{j \to \infty} {\bf E}_{a N_{i_j} \leq n \leq b N_{i_j}} f_{i_j}(n),$

thus $F$ is the asymptotic density function of the $f_{i_j}$. For instance, if $f_i$ is the indicator function of a randomly chosen subset of $[i]$, then the asymptotic density function would equal $1/2$ (almost everywhere, at least).
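The random-subset example can be simulated directly; in this Python sketch (my own finitary stand-in, using a pseudorandom subset of $\{0,\dots,N-1\}$), the local averages that define the density function all concentrate around $1/2$.

```python
import random

random.seed(0)
N = 10**5
f = [random.randint(0, 1) for _ in range(N)]   # indicator of a random subset

def local_average(x, eps):
    """Average of f over the window {n : x - eps <= n/N < x + eps}."""
    lo, hi = max(0, int((x - eps) * N)), min(N, int((x + eps) * N))
    return sum(f[lo:hi]) / (hi - lo)

# The local averages, hence the asymptotic density function, are close to 1/2:
for x in [0.1, 0.3, 0.5, 0.7, 0.9]:
    assert abs(local_average(x, 0.05) - 0.5) < 0.03
```

Each window contains about $0.1 N$ samples, so the fluctuations around $1/2$ are of size $O(N^{-1/2})$, well inside the tolerance used here.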
I’m continuing to look into understanding the ergodic theory of $o(N)$ actions, as I believe this may allow one to apply ergodic theory methods to the “single-scale” or “non-asymptotic” setting (in which one averages only over scales comparable to a large parameter $N$, rather than the traditional asymptotic approach of letting the scale go to infinity). I’m planning some further posts in this direction, though this is still a work in progress.
The von Neumann ergodic theorem (the Hilbert space version of the mean ergodic theorem) asserts that if $U: H \to H$ is a unitary operator on a Hilbert space $H$, and $v$ is a vector in that Hilbert space, then one has

$\displaystyle \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N U^n v = \pi(v)$

in the strong topology, where $H^U := \{ w \in H: Uw = w \}$ is the $U$-invariant subspace of $H$, and $\pi$ is the orthogonal projection to $H^U$. (See e.g. these previous lecture notes for a proof.) The same proof extends to more general amenable groups: if $G$ is a countable amenable group acting on a Hilbert space $H$ by unitary transformations $T^g$, $g \in G$, and $v$ is a vector in that Hilbert space, then one has

$\displaystyle \lim_{n \to \infty} {\bf E}_{g \in \Phi_n} T^g v = \pi(v) \ \ \ \ \ (1)$

for any Folner sequence $\Phi_n$ of $G$, where now $H^G := \{ w \in H: T^g w = w \hbox{ for all } g \in G \}$ is the $G$-invariant subspace and $\pi$ the orthogonal projection to it. Thus one can interpret $\pi(v)$ as a certain average of elements of the orbit $\{ T^g v: g \in G \}$ of $v$.
I recently discovered that there is a simple variant of this ergodic theorem that holds even when the group $G$ is not amenable (or not discrete), using a more abstract notion of averaging:

Theorem 1 (Abstract ergodic theorem) Let $G$ be an arbitrary group acting unitarily on a Hilbert space $H$, and let $v$ be a vector in $H$. Then $\pi(v)$ is the element in the closed convex hull of $\{ T^g v: g \in G \}$ of minimal norm, and is also the unique element of the invariant subspace $H^G$ in this closed convex hull, where $\pi$ denotes the orthogonal projection to $H^G$.
Proof: As the closed convex hull of $\{ T^g v: g \in G \}$ is closed, convex, and non-empty in a Hilbert space, it is a classical fact (see e.g. Proposition 1 of this previous post) that it has a unique element $F$ of minimal norm. If $T^g F \neq F$ for some $g \in G$, then the midpoint $\frac{1}{2}(T^g F + F)$ would be in the closed convex hull and be of strictly smaller norm (by the parallelogram law), a contradiction; thus $F$ is $G$-invariant. To finish the first claim, it suffices to show that $v - F$ is orthogonal to every element $w$ of $H^G$. But if this were not the case for some such $w$, we would have $\langle T^g v - F, w \rangle = \langle v - F, w \rangle \neq 0$ for all $g \in G$, and thus on taking convex hulls $\langle F - F, w \rangle = \langle v - F, w \rangle \neq 0$, a contradiction.

Finally, since $T^g v - F$ is orthogonal to $H^G$ for every $g \in G$, the same is true of $v' - F$ for any $v'$ in the closed convex hull of $\{ T^g v: g \in G \}$; if such a $v'$ also lies in $H^G$, then $v' - F$ lies in $H^G$ and is orthogonal to $H^G$, hence vanishes, and this gives the second claim.
This result is due to Alaoglu and Birkhoff. It implies the amenable ergodic theorem (1); indeed, given any $\epsilon > 0$, Theorem 1 implies that there is a finite convex combination $w$ of shifts $T^g v$ of $v$ which lies within $\epsilon$ (in the $H$ norm) of $\pi(v)$. By the triangle inequality, all the averages ${\bf E}_{g \in \Phi_n} T^g w$ also lie within $\epsilon$ of $\pi(v)$, but by the Folner property this implies that the averages ${\bf E}_{g \in \Phi_n} T^g v$ are eventually within $2\epsilon$ (say) of $\pi(v)$, giving the claim.
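A toy numerical check of Theorem 1 (my own example, not from the post): for the cyclic coordinate shift $U$ on ${\bf R}^d$, the invariant subspace consists of the constant vectors, the orbit average equals the projection of $v$ onto the constants, and every convex combination of the orbit has norm at least that of this projection.

```python
import random

d = 5
v = [3.0, -1.0, 4.0, 1.0, 5.0]

def shift(w, k):
    """Apply U^k, where U is the cyclic coordinate shift."""
    return w[k % len(w):] + w[:k % len(w)]

def norm(w):
    return sum(x * x for x in w) ** 0.5

orbit = [shift(v, k) for k in range(d)]
orbit_average = [sum(u[i] for u in orbit) / d for i in range(d)]

# The invariant subspace of U is the constants, so the projection of v onto it
# is the constant vector whose value is the mean of v:
projection = [sum(v) / d] * d
assert all(abs(a - b) < 1e-12 for a, b in zip(orbit_average, projection))

# Every convex combination of the orbit has norm >= that of the projection,
# so the projection is the minimal-norm element of the closed convex hull:
random.seed(2)
for _ in range(200):
    w = [random.random() for _ in range(d)]
    s = sum(w)
    combo = [sum(w[k] * orbit[k][i] for k in range(d)) / s for i in range(d)]
    assert norm(combo) >= norm(projection) - 1e-9
```

The inequality holds because every element of the convex hull has the same mean as $v$, and the Pythagorean theorem then forces its norm to be at least that of the projection.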
It turns out to be possible to use Theorem 1 as a substitute for the mean ergodic theorem in a number of contexts, thus removing the need for an amenability hypothesis. Here is a basic application:
Corollary 2 (Relative orthogonality) Let $G$ be a group acting unitarily on a Hilbert space $H$, and let $V$ be a $G$-invariant closed subspace of $H$. Then $V$ and $H^G$ are relatively orthogonal over their common subspace $V^G$, that is to say the restrictions of $V$ and $H^G$ to the orthogonal complement of $V^G$ are orthogonal to each other.
Proof: By Theorem 1, we have $\pi_{H^G}(v) = \pi_{V^G}(v)$ for all $v \in V$, and the claim follows. (Thanks to Gergely Harcos for this short argument.)
Now we give a more advanced application of Theorem 1, to establish some "Mackey theory" over arbitrary groups $G$. Define a $G$-system to be a probability space $X = (X, \mu)$ together with a measure-preserving action $(T^g)_{g \in G}$ of $G$ on $X$; this gives an action of $G$ on the spaces $L^p(X)$, which by abuse of notation we also call $T$:

$\displaystyle T^g f := f \circ T^{g^{-1}}.$

(In this post we follow the usual convention of defining the $L^p$ spaces by quotienting out by almost everywhere equivalence.) We say that a $G$-system is ergodic if the invariant space $L^2(X)^G$ consists only of the constants.

(A technical point: the theory becomes slightly cleaner if we interpret our measure spaces abstractly (or "pointlessly"), removing the underlying space $X$ and quotienting by the $\sigma$-ideal of null sets, and considering maps such as $T^g$ only on this quotient $\sigma$-algebra (or on the associated von Neumann algebra $L^\infty(X)$ or Hilbert space $L^2(X)$). However, we will stick with the more traditional setting of classical probability spaces here to keep the notation familiar, but with the understanding that many of the statements below should be understood modulo null sets.)
A factor of a $G$-system $X$ is another $G$-system $Y$ together with a factor map $\pi: X \to Y$ which commutes with the $G$-action (thus $\pi \circ T^g = T^g \circ \pi$ for all $g \in G$) and respects the measure in the sense that $\mu_X(\pi^{-1}(E)) = \mu_Y(E)$ for all measurable $E \subset Y$. For instance, the $G$-invariant factor $X^G$, formed by restricting to the invariant $\sigma$-algebra, is a factor of $X$. (This factor is the first factor in an important hierarchy, the next element of which is the Kronecker factor, but we will not discuss higher elements of this hierarchy further here.) If $Y$ is a factor of $X$, we refer to $X$ as an extension of $Y$.
From Corollary 2 we have
Corollary 3 (Relative independence) Let $X$ be a $G$-system for a group $G$, and let $Y$ be a factor of $X$. Then $Y$ and the invariant factor $X^G$ are relatively independent over their common factor $Y^G$, in the sense that the spaces $L^2(Y)$ and $L^2(X^G)$ are relatively orthogonal over $L^2(Y^G)$ when all these spaces are embedded into $L^2(X)$.
This has a simple consequence regarding the product $X \times Y$ of two $G$-systems $X$ and $Y$, in the case when the $G$-action on $Y$ is trivial:

Lemma 4 If $X, Y$ are two $G$-systems, with the action of $G$ on $Y$ trivial, then $(X \times Y)^G$ is isomorphic to $X^G \times Y$ in the obvious fashion.
This lemma is immediate for countable $G$, since for a $G$-invariant function $f$, one can ensure that $T^g f = f$ holds simultaneously for all $g \in G$ outside of a null set, but is a little trickier for uncountable $G$.
Proof: It is clear that $X^G \times Y$ is a factor of $(X \times Y)^G$. To obtain the reverse inclusion, suppose that it fails, thus there is a non-zero $f \in L^2((X \times Y)^G)$ which is orthogonal to $L^2(X^G \times Y)$. In particular, we have $fg$ orthogonal to $L^2(X^G)$ for any $g \in L^\infty(Y)$. Since $fg$ lies in $L^2((X \times Y)^G)$, we conclude from Corollary 3 (viewing $X$ as a factor of $X \times Y$) that $fg$ is also orthogonal to $L^2(X)$. Since $g$ is an arbitrary element of $L^\infty(Y)$, we conclude that $f$ is orthogonal to $L^2(X \times Y)$ and in particular is orthogonal to itself, a contradiction. (Thanks to Gergely Harcos for this argument.)
Now we discuss the notion of a group extension.
Definition 5 (Group extension) Let $G$ be an arbitrary group, let $Y = (Y, \nu)$ be a $G$-system, and let $K$ be a compact metrisable group. A $K$-extension of $Y$ is an extension $X = (X, \mu)$ whose underlying space is $X = Y \times K$ (with the product of the $\sigma$-algebra on $Y$ and the Borel $\sigma$-algebra on $K$), the factor map is $\pi: (y,k) \mapsto y$, and the shift maps $T^g$ are given by

$\displaystyle T^g(y, k) := (T^g y, \rho_g(y) k)$

where for each $g \in G$, $\rho_g: Y \to K$ is a measurable map (known as the cocycle associated to the $K$-extension $X$).
An important special case of a $K$-extension arises when the measure $\mu$ is the product of $\nu$ with the Haar measure on $K$. In this case, $X$ also has a right $K$-action $(y,k) \mapsto (y, kk')$ that commutes with the $G$-action, making $X$ a $G \times K$-system. More generally, $\mu$ could be the product of $\nu$ with the Haar measure of some closed subgroup $H$ of $K$, with $\rho_g$ taking values in $H$; then $X$ is now a $G \times H$-system. In this latter case we will call the $K$-extension $H$-uniform.
If $X$ is a $K$-extension of $Y$ and $F: Y \to K$ is a measurable map, we can define the gauge transform $X^F$ of $X$ to be the $K$-extension of $Y$ whose measure $\mu^F$ is the pushforward of $\mu$ under the map $(y,k) \mapsto (y, F(y) k)$, and whose cocycles $\rho^F_g: Y \to K$ are given by the formula

$\displaystyle \rho^F_g(y) := F(T^g y) \rho_g(y) F(y)^{-1}.$

It is easy to see that $X^F$ is a $K$-extension that is isomorphic to $X$ as a $K$-extension of $Y$; we will refer to $X$ and $X^F$ as equivalent systems, and $\rho^F$ as cohomologous to $\rho$. We then have the following fundamental result of Mackey and of Zimmer:
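The cocycle equation and the gauge transform can be verified mechanically in a toy finite setting (my own choice of groups, not from the post): take $G = Y = {\bf Z}/12$ with the rotation action and $K = {\bf Z}/6$ written additively, so that a cocycle satisfies $\rho_{g+h}(y) = \rho_g(y+h) + \rho_h(y)$ and a gauge transform adds a coboundary.

```python
import random

random.seed(1)
n, m = 12, 6      # G = Y = Z/12 (rotation action), K = Z/6 written additively

# A sample cocycle: a homomorphism part (g, using that 6 divides 12)
# plus a coboundary part built from a random B: Y -> K.
B = [random.randrange(m) for _ in range(n)]
def rho(g, y):
    return (g + B[(y + g) % n] - B[y % n]) % m

def is_cocycle(r):
    """Check the cocycle equation r(g+h, y) = r(g, y+h) + r(h, y) mod m."""
    return all((r((g + h) % n, y) - r(g, (y + h) % n) - r(h, y)) % m == 0
               for g in range(n) for h in range(n) for y in range(n))

assert is_cocycle(rho)

# Gauge transforming by an arbitrary F: Y -> K yields a cohomologous cocycle:
F = [random.randrange(m) for _ in range(n)]
def rho_gauged(g, y):
    return (F[(y + g) % n] + rho(g, y) - F[y % n]) % m

assert is_cocycle(rho_gauged)
```

The abelian additive notation here is just for convenience; for a nonabelian $K$ one would check the multiplicative identities instead.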
Theorem 6 (Mackey-Zimmer theorem) Let $G$ be an arbitrary group, let $Y$ be an ergodic $G$-system, and let $K$ be a compact metrisable group. Then every ergodic $K$-extension $X$ of $Y$ is equivalent to an $H$-uniform extension of $Y$ for some closed subgroup $H$ of $K$.
This theorem is usually stated for amenable groups $G$, but by using Theorem 1 (or more precisely, Corollary 3) the result is in fact also valid for arbitrary groups $G$; we give the proof below the fold. (In the usual formulations of the theorem, $X$ and $Y$ are also required to be Lebesgue spaces, or at least standard Borel, but again with our abstract approach here, such hypotheses will be unnecessary.) Among other things, this theorem plays an important role in the Furstenberg-Zimmer structural theory of measure-preserving systems (as well as subsequent refinements of this theory by Host and Kra); see this previous blog post for some relevant discussion. One can obtain similar descriptions of non-ergodic extensions via the ergodic decomposition, but the result becomes more complicated to state, and we will not do so here.