You are currently browsing the category archive for the ‘math.DS’ category.
The von Neumann ergodic theorem (the Hilbert space version of the mean ergodic theorem) asserts that if $U: H \to H$ is a unitary operator on a Hilbert space $H$, and $v$ is a vector in that Hilbert space, then one has

$$ \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N U^n v = \pi(v) $$

in the strong topology, where $H^U := \{ w \in H: Uw = w \}$ is the $U$-invariant subspace of $H$, and $\pi: H \to H^U$ is the orthogonal projection to $H^U$. (See e.g. these previous lecture notes for a proof.) The same proof extends to more general amenable groups: if $G$ is a countable amenable group acting on a Hilbert space $H$ by unitary transformations $T_g: H \to H$ for $g \in G$, and $v$ is a vector in that Hilbert space, then one has

$$ \lim_{N \to \infty} \frac{1}{|\Phi_N|} \sum_{g \in \Phi_N} T_g v = \pi(v) \ \ \ \ \ (1)$$

for any Følner sequence $\Phi_N$ of $G$, where $H^G := \{ w \in H: T_g w = w \hbox{ for all } g \in G \}$ is the $G$-invariant subspace and $\pi: H \to H^G$ is the orthogonal projection to it. Thus one can interpret $\pi(v)$ as a certain average of elements of the orbit $\{ T_g v: g \in G \}$ of $v$.
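To see the mean ergodic theorem in action numerically, here is a minimal Python sketch (the unitary, the vector, and the truncation level are arbitrary choices for illustration, not taken from the text): for $U = \mathrm{diag}(1, e^{i\theta})$ on $\mathbb{C}^2$ with $\theta$ an irrational multiple of $\pi$, the invariant subspace is the first coordinate axis, so the Cesàro averages of $U^n v$ should converge to the projection $(v_1, 0)$.

```python
import numpy as np

# Illustration (arbitrary choices): U = diag(1, e^{i*theta}) acting on C^2.
# The U-invariant subspace is spanned by the first coordinate, so the
# averages (1/N) sum_{n=1}^N U^n v should converge to (v[0], 0).
theta = np.sqrt(2) * np.pi           # irrational angle: second coordinate is not invariant
U = np.diag([1.0, np.exp(1j * theta)])
v = np.array([0.7 + 0.2j, 1.3 - 0.5j])

N = 100000
avg = np.zeros(2, dtype=complex)
w = v.copy()
for _ in range(N):
    w = U @ w                        # w = U^n v after n steps
    avg += w
avg /= N

projection = np.array([v[0], 0.0])   # orthogonal projection of v onto H^U
print(np.abs(avg - projection).max())  # small, since the geometric sum in the
                                       # second coordinate is O(1/N)
```

The first coordinate of the average is exactly $v_1$, while the second coordinate is a truncated geometric series of size $O(1/N)$.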
I recently discovered that there is a simple variant of this ergodic theorem that holds even when the group $G$ is not amenable (or not discrete), using a more abstract notion of averaging:
Theorem 1 (Abstract ergodic theorem) Let $G$ be an arbitrary group acting unitarily on a Hilbert space $H$, and let $v$ be a vector in $H$. Then $\pi(v)$ is the element in the closed convex hull of the orbit $\{ T_g v: g \in G \}$ of minimal norm, and is also the unique element of $H^G$ in this closed convex hull.
Proof: As the closed convex hull of $\{ T_g v: g \in G \}$ is closed, convex, and non-empty in a Hilbert space, it is a classical fact (see e.g. Proposition 1 of this previous post) that it has a unique element $w$ of minimal norm. If $T_g w \neq w$ for some $g$, then the midpoint $\frac{1}{2}(w + T_g w)$ of $w$ and $T_g w$ would be in the closed convex hull and be of smaller norm, a contradiction; thus $w$ is $G$-invariant. To finish the first claim, it suffices to show that $v - w$ is orthogonal to every element $u$ of $H^G$. But if this were not the case for some such $u$, we would have $\langle T_g v - w, u \rangle = \langle v - w, u \rangle \neq 0$ for all $g \in G$, and thus on taking convex hulls $\langle w' - w, u \rangle = \langle v - w, u \rangle \neq 0$ for all $w'$ in the closed convex hull of the orbit; setting $w' := w$ gives a contradiction.
Finally, since $T_g v - w$ is orthogonal to $H^G$ for every $g$, the same is true for $w' - w$ for any $w'$ in the closed convex hull of $\{ T_g v: g \in G \}$; thus any such $w'$ that also lies in $H^G$ must equal $w$, and this gives the second claim. $\Box$
This result is due to Alaoglu and Birkhoff. It implies the amenable ergodic theorem (1); indeed, given any $\epsilon > 0$, Theorem 1 implies that there is a finite convex combination $\sum_i c_i T_{g_i} v$ of shifts of $v$ which lies within $\epsilon$ (in the $H$ norm) of $\pi(v)$. By the triangle inequality, all the averages $\frac{1}{|\Phi_N|} \sum_{g \in \Phi_N} \sum_i c_i T_{g g_i} v$ also lie within $\epsilon$ of $\pi(v)$, but by the Følner property this implies that the averages $\frac{1}{|\Phi_N|} \sum_{g \in \Phi_N} T_g v$ are eventually within $2\epsilon$ (say) of $\pi(v)$, giving the claim.
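Theorem 1 can be checked by hand in a small finite example; here is a Python sketch (the group, representation, and vector are arbitrary illustrative choices). Take $G = \mathbb{Z}/4$ acting on $\mathbb{R}^3$ by rotation through $90^\circ$ about the $z$-axis, so the invariant subspace is the $z$-axis; the minimal-norm point of the convex hull of the orbit of $v$ should then be the projection of $v$ onto that axis.

```python
import numpy as np

# Illustration (arbitrary choices): Z/4 acts on R^3 by 90-degree rotation
# about the z-axis; the invariant subspace is the z-axis, so Theorem 1
# predicts that the minimal-norm point of the convex hull of the orbit of v
# is the orthogonal projection of v onto that axis.
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])          # rotation by 90 degrees about z
v = np.array([1., 2., 3.])
orbit = np.array([np.linalg.matrix_power(R, k) @ v for k in range(4)])

pi_v = orbit.mean(axis=0)              # uniform average over the finite group
print(pi_v)                            # prints [0. 0. 3.]

# Sanity check: no convex combination of the orbit beats the projection.
rng = np.random.default_rng(0)
for _ in range(1000):
    c = rng.random(4); c /= c.sum()    # random convex weights
    assert np.linalg.norm(c @ orbit) >= np.linalg.norm(pi_v) - 1e-9
```

Every point of the convex hull has third coordinate $3$, so its norm is at least $3 = \|\pi(v)\|$, with equality only at the center, in agreement with the theorem.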
It turns out to be possible to use Theorem 1 as a substitute for the mean ergodic theorem in a number of contexts, thus removing the need for an amenability hypothesis. Here is a basic application:
Corollary 2 (Relative orthogonality) Let $G$ be a group acting unitarily on a Hilbert space $H$, and let $V$ be a closed $G$-invariant subspace of $H$. Then $V$ and $H^G$ are relatively orthogonal over their common subspace $V^G$, that is to say the restrictions of $V$ and $H^G$ to the orthogonal complement of $V^G$ are orthogonal to each other.
Proof: By Theorem 1, we have $\pi_{H^G}(v) = \pi_{V^G}(v)$ for all $v \in V$, since both projections are the minimal-norm element of the closed convex hull of the orbit $\{T_g v: g \in G\}$, which lies in $V$; the claim follows. (Thanks to Gergely Harcos for this short argument.) $\Box$
Now we give a more advanced application of Theorem 1, to establish some “Mackey theory” over arbitrary groups $G$. Define a $G$-system $(X, \mathcal{X}, \mu, (T_g)_{g \in G})$ to be a probability space $(X, \mathcal{X}, \mu)$ together with a measure-preserving action $(T_g)_{g \in G}$ of $G$ on $X$; this gives an action of $G$ on $L^2(X) = L^2(X, \mathcal{X}, \mu)$, which by abuse of notation we also call $T_g$:

$$ T_g f := f \circ T_{g^{-1}}.$$
(In this post we follow the usual convention of defining the $L^p$ spaces by quotienting out by almost everywhere equivalence.) We say that a $G$-system is ergodic if $L^2(X)^G$ consists only of the constants.
(A technical point: the theory becomes slightly cleaner if we interpret our measure spaces abstractly (or “pointlessly”), removing the underlying space $X$ and quotienting the $\sigma$-algebra $\mathcal{X}$ by the $\sigma$-ideal of null sets, and considering maps such as $T_g$ only on this quotient $\sigma$-algebra (or on the associated von Neumann algebra $L^\infty(X)$ or Hilbert space $L^2(X)$). However, we will stick with the more traditional setting of classical probability spaces here to keep the notation familiar, but with the understanding that many of the statements below should be understood modulo null sets.)
A factor of a $G$-system $X$ is another $G$-system $Y = (Y, \mathcal{Y}, \nu, (S_g)_{g \in G})$ together with a factor map $\pi: X \to Y$ which commutes with the $G$-action (thus $\pi \circ T_g = S_g \circ \pi$ for all $g \in G$) and respects the measure in the sense that $\mu(\pi^{-1}(E)) = \nu(E)$ for all $E \in \mathcal{Y}$. For instance, the $G$-invariant factor $(X, \mathcal{X}^G, \mu, (T_g)_{g \in G})$, formed by restricting to the invariant $\sigma$-algebra $\mathcal{X}^G$, is a factor of $X$. (This factor is the first factor in an important hierarchy, the next element of which is the Kronecker factor, but we will not discuss higher elements of this hierarchy further here.) If $Y$ is a factor of $X$, we refer to $X$ as an extension of $Y$.
From Corollary 2 we have
Corollary 3 (Relative independence) Let $X$ be a $G$-system for a group $G$, and let $Y$ be a factor of $X$. Then $Y$ and the invariant factor $(X, \mathcal{X}^G)$ are relatively independent over their common factor $(Y, \mathcal{Y}^G)$, in the sense that the spaces $L^2(Y)$ and $L^2(X)^G$ are relatively orthogonal over $L^2(Y)^G$ when all these spaces are embedded into $L^2(X)$.
This has a simple consequence regarding the product $X \times Y$ of two $G$-systems $X$ and $Y$, in the case when the $Y$ action is trivial:

Lemma 4 Let $X$ be a $G$-system, and let $Y$ be a probability space with trivial $G$-action. Then the invariant factor of the product system $X \times Y$ is (modulo null sets) the product of the invariant factor of $X$ with $Y$.

This lemma is immediate for countable $G$, since for a $G$-invariant function $f$, one can ensure that the invariance $T_g f = f$ holds simultaneously for all $g \in G$ outside of a single null set, but is a little trickier for uncountable $G$.
Proof: It is clear that the product of the invariant factor of $X$ with $Y$ is a factor of the invariant factor of $X \times Y$. To obtain the reverse inclusion, suppose that it fails, thus there is a non-zero $f \in L^2(X \times Y)^G$ which is orthogonal to $L^2((X, \mathcal{X}^G) \times Y)$. In particular, we have $f$ orthogonal to $e \otimes \psi$ for any $e \in L^2(X, \mathcal{X}^G)$ and $\psi \in L^\infty(Y)$. Since $f$ lies in $L^2(X \times Y)^G$, we conclude from Corollary 3 (viewing $X$ as a factor of $X \times Y$) that $f$ is also orthogonal to $e \otimes \psi$ for any $e \in L^2(X)$. Since $\psi$ is an arbitrary element of $L^\infty(Y)$, we conclude that $f$ is orthogonal to $L^2(X \times Y)$ and in particular is orthogonal to itself, a contradiction. (Thanks to Gergely Harcos for this argument.) $\Box$
Now we discuss the notion of a group extension.
Definition 5 (Group extension) Let $G$ be an arbitrary group, let $X = (X, \mathcal{X}, \mu, (T_g)_{g \in G})$ be a $G$-system, and let $K$ be a compact metrisable group. A $K$-extension of $X$ is an extension $Y$ whose underlying space is $X \times K$ (with $\sigma$-algebra the product of $\mathcal{X}$ and the Borel $\sigma$-algebra on $K$), the factor map is $\pi: (x, k) \mapsto x$, and the shift maps $S_g$ are given by

$$ S_g(x, k) := (T_g x, \rho_g(x) k),$$

where for each $g \in G$, $\rho_g: X \to K$ is a measurable map (known as the cocycle associated to the $K$-extension $Y$).
An important special case of a $K$-extension arises when the measure $\mu_Y$ is the product of $\mu$ with the Haar measure $\hbox{Haar}_K$ on $K$. In this case, $Y$ also has a right $K$-action $(x, k) h := (x, kh)$ that commutes with the $G$-action, making $Y$ a $G \times K$-system. More generally, $\mu_Y$ could be the product of $\mu$ with the Haar measure $\hbox{Haar}_H$ of some closed subgroup $H$ of $K$, with $\rho_g$ taking values in $H$; then $Y$ is now a $G \times H$-system. In this latter case we will call the $K$-extension $H$-uniform.
If $Y$ is a $K$-extension of $X$ and $\phi: X \to K$ is a measurable map, we can define the gauge transform $Y^\phi$ of $Y$ to be the $K$-extension of $X$ whose measure $\mu_{Y^\phi}$ is the pushforward of $\mu_Y$ under the map $(x, k) \mapsto (x, \phi(x) k)$, and whose cocycles $\rho_g^\phi: X \to K$ are given by the formula

$$ \rho_g^\phi(x) := \phi(T_g x) \rho_g(x) \phi(x)^{-1}.$$
It is easy to see that $Y^\phi$ is a $K$-extension that is isomorphic to $Y$ as a $K$-extension of $X$; we will refer to $Y$ and $Y^\phi$ as equivalent systems, and $\rho_g^\phi$ as cohomologous to $\rho_g$. We then have the following fundamental result of Mackey and of Zimmer:
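As a toy numerical check of these definitions (the rotation, cocycle generator, and gauge below are all arbitrary illustrative choices, not from the text): for $G = \mathbb{Z}$ acting on the circle by $Tx = x + \alpha \bmod 1$, with $K$ the unit circle in $\mathbb{C}$, a cocycle is determined by its generator $\rho_1$ via $\rho_n(x) = \rho_1(T^{n-1}x) \cdots \rho_1(x)$, and one can verify the cocycle equation $\rho_{n+m}(x) = \rho_n(T^m x)\, \rho_m(x)$ both before and after a gauge transform.

```python
import numpy as np

# Toy check (all choices arbitrary): K is the unit circle in C, G = Z acts
# on [0,1) by the rotation T x = x + alpha mod 1.  We verify the cocycle
# equation rho_{n+m}(x) = rho_n(T^m x) * rho_m(x), and that it survives
# the gauge transform rho^phi_n(x) = phi(T^n x) rho_n(x) phi(x)^{-1}.
alpha = np.sqrt(5) - 2                       # irrational rotation number
def e(x): return np.exp(2j * np.pi * x)
def T(x, n=1): return (x + n * alpha) % 1.0

def rho(n, x, rho1):
    out = 1.0 + 0j
    for j in range(n):                       # rho_n = rho1(T^{n-1}x)...rho1(x)
        out *= rho1(T(x, j))
    return out

rho1 = lambda x: e(3 * x)                    # generator of the cocycle
phi = lambda x: e(np.sin(2 * np.pi * x))     # an arbitrary gauge
def rho_gauged(n, x):
    return phi(T(x, n)) * rho(n, x, rho1) / phi(x)

x = 0.1234
for n, m in [(2, 3), (5, 7)]:
    assert abs(rho(n + m, x, rho1) - rho(n, T(x, m), rho1) * rho(m, x, rho1)) < 1e-9
    assert abs(rho_gauged(n + m, x) - rho_gauged(n, T(x, m)) * rho_gauged(m, x)) < 1e-9
print("cocycle equations verified")
```

Since $K$ is abelian here, the gauge formula simplifies; for non-abelian $K$ the order of the factors $\phi(T_g x)\rho_g(x)\phi(x)^{-1}$ matters.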
Theorem 6 (Mackey-Zimmer theorem) Let $G$ be an arbitrary group, let $X$ be an ergodic $G$-system, and let $K$ be a compact metrisable group. Then every ergodic $K$-extension $Y$ of $X$ is equivalent to an $H$-uniform extension of $X$ for some closed subgroup $H$ of $K$.
This theorem is usually stated for amenable groups $G$, but by using Theorem 1 (or more precisely, Corollary 3) the result is in fact also valid for arbitrary groups; we give the proof below the fold. (In the usual formulations of the theorem, $X$ and $Y$ are also required to be Lebesgue spaces, or at least standard Borel, but again with our abstract approach here, such hypotheses will be unnecessary.) Among other things, this theorem plays an important role in the Furstenberg-Zimmer structural theory of measure-preserving systems (as well as subsequent refinements of this theory by Host and Kra); see this previous blog post for some relevant discussion. One can obtain similar descriptions of non-ergodic extensions via the ergodic decomposition, but the result becomes more complicated to state, and we will not do so here.
(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis“. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry“. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)
Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.
The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:
| Discrete theory | Continuous counterpart | Limiting procedure |
| --- | --- | --- |
| Ramsey theory | Topological dynamics | Compactness |
| Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle |
| Graph/hypergraph regularity | Measure theory | Graph limits |
| Polynomial regularity | Linear algebra | Ultralimits |
| Structural decompositions | Hilbert space geometry | Ultralimits |
| Fourier analysis | Spectral theory | Direct and inverse limits |
| Quantitative algebraic geometry | Algebraic geometry | Schemes |
| Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits |
| Approximate group theory | Topological group theory | Model theory |
As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:
- Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence $x_1, x_2, x_3, \ldots$ (or perhaps a net) of objects in a common space $X$, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object $\lim_{n \to \infty} x_n$, which remains in the same space $X$, and is “close” to many of the original objects $x_n$ with respect to the given metric or topology.
- Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects $x_n$ in a category $X$, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit $\varinjlim x_n$ or the inverse limit $\varprojlim x_n$ of these objects, which is another object in the same category $X$, and is connected to the original objects by various morphisms.
- Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects $x_n$ or of spaces $X_n$, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, the $X_n$ might be groups and the $x_n$ might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object $x_\infty$ or a new space $X_\infty$, which is still a model of the same language (e.g. if the spaces $X_n$ were all groups, then the limiting space $X_\infty$ will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space will also be true for many of the original objects or spaces, and conversely. (For instance, if $X_\infty$ is an abelian group, then the $X_n$ will also be abelian groups for many $n$.)
The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects $x_n$ to all lie in a common space $X$ in order to form an ultralimit $x_\infty$; they are permitted to lie in different spaces $X_n$; this is more natural in many discrete contexts, e.g. when considering graphs on $n$ vertices in the limit when $n$ goes to infinity. Also, no convergence properties on the $x_n$ are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces $X_n$ involved are required in order to construct the ultraproduct.
With so few requirements on the objects $x_n$ or spaces $X_n$, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly useful for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś's theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the $x_n$ will be exactly obeyed by the limit object $x_\infty$; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is closely analogous to compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses. This is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”). To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.
Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.
Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.
Tamar Ziegler and I have just uploaded to the arXiv our joint paper “A multi-dimensional Szemerédi theorem for the primes via a correspondence principle“. This paper is related to an earlier result of Ben Green and mine in which we established that the primes contain arbitrarily long arithmetic progressions. Actually, in that paper we proved a more general result:

Theorem 1 (Szemerédi's theorem in the primes) Let $A$ be a subset of the primes of positive relative upper density. Then $A$ contains arbitrarily long arithmetic progressions.

This result was based in part on an earlier paper of Green that handled the case of progressions of length three. With the primes replaced by the integers, this is of course the famous theorem of Szemerédi.
Szemerédi’s theorem has now been generalised in many different directions. One of these is the multidimensional Szemerédi theorem of Furstenberg and Katznelson, who used ergodic-theoretic techniques to show that any dense subset of $\mathbb{Z}^d$ necessarily contained infinitely many constellations of any prescribed shape. Our main result is to relativise that theorem to the primes as well:
Theorem 2 (Multidimensional Szemerédi theorem in the primes) Let $d \geq 1$, and let $A$ be a subset of the $d$-th Cartesian power $\mathcal{P}^d$ of the primes of positive relative upper density. Then for any $v_1, \ldots, v_k \in \mathbb{Z}^d$, $A$ contains infinitely many “constellations” of the form $x + r v_1, \ldots, x + r v_k$ with $x \in \mathbb{Z}^d$ and $r$ a positive integer.
In the case when $A$ is itself a Cartesian product of one-dimensional sets (in particular, if $A$ is all of $\mathcal{P}^d$), this result already follows from Theorem 1, but there does not seem to be a similarly easy argument to deduce the general case of Theorem 2 from previous results. Simultaneously with this paper, an independent proof of Theorem 2 using a somewhat different method has been established by Cook, Magyar, and Titichetrakun.
The result is reminiscent of an earlier result of mine on finding constellations in the Gaussian primes (or dense subsets thereof). That paper followed closely the arguments of my original paper with Ben Green, namely it first enclosed (a W-tricked version of) the primes or Gaussian primes (in a sieve-theoretic sense) by a slightly larger set (or more precisely, a weight function $\nu$) of almost primes or almost Gaussian primes, which one could then verify (using methods closely related to the sieve-theoretic methods in the ongoing Polymath8 project) to obey certain pseudorandomness conditions, known as the linear forms condition and the correlation condition. Very roughly speaking, these conditions assert statements of the following form: if $n$ is a randomly selected integer, then the events of $n + h_1, \ldots, n + h_k$ simultaneously being almost primes (or almost Gaussian primes) are approximately independent for most choices of $h_1, \ldots, h_k$. Once these conditions are satisfied, one can then run a transference argument (initially based on ergodic-theory methods, but nowadays there are simpler transference results based on the Hahn-Banach theorem, due to Gowers and Reingold-Trevisan-Tulsiani-Vadhan) to obtain relative Szemerédi-type theorems from their absolute counterparts.
However, when one tries to adapt these arguments to sets such as $\mathcal{P}^2$, a new difficulty occurs: the natural analogue of the almost primes would be the Cartesian square $A^2$ of the set $A$ of almost primes – pairs $(n, m)$ whose entries are both almost primes. (Actually, for technical reasons, one does not work directly with a set of almost primes, but would instead work with a weight function such as $\nu(n) \nu(m)$ that is concentrated on a set such as $A^2$, but let me ignore this distinction for now.) However, this set $A^2$ does not enjoy as many pseudorandomness conditions as one would need for a direct application of the transference strategy to work. More specifically, given any fixed $h$ and $k$, and random $(n, m)$, the four events

$$ (n, m) \in A^2; \quad (n+h, m) \in A^2; \quad (n, m+k) \in A^2; \quad (n+h, m+k) \in A^2$$
do not behave independently (as they would if $A^2$ were replaced for instance by the Gaussian almost primes), because any three of these events imply the fourth. This blocks the transference strategy for constellations which contain some right angles to them (e.g. constellations of the form $(x, y), (x + r, y), (x, y + r)$), as such constellations soon turn into rectangles such as the one above after applying the Cauchy-Schwarz inequality a few times. (But a few years ago, Cook and Magyar showed that if one restricts attention to constellations which are in general position, in the sense that any coordinate hyperplane contains at most one element of the constellation, then this obstruction does not occur and one can establish Theorem 2 in this case through the transference argument.) It’s worth noting that very recently, Conlon, Fox, and Zhao have succeeded in removing one of the pseudorandomness conditions (namely the correlation condition) from the transference principle, leaving only the linear forms condition as the remaining pseudorandomness condition to be verified, but unfortunately this does not completely solve the above problem because the linear forms condition also fails for $A^2$ (or for weights concentrated on $A^2$) when applied to rectangular patterns.
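The logical dependence among the four rectangle events is elementary and can be checked exhaustively; here is a Python sketch (the toy set $A$ below is an arbitrary stand-in for the almost primes, chosen only to illustrate the set-theoretic point):

```python
from itertools import product

# The rectangle obstruction: for ANY set A of integers, the four events
# (n,m), (n+h,m), (n,m+k), (n+h,m+k) all landing in A x A are highly
# dependent, since any three of them together already force all four of
# n, n+h, m, m+k to lie in A, hence imply the fourth event.
A = {n for n in range(100) if n % 2 == 1 and n % 3 != 0}  # arbitrary toy stand-in

violations = 0
for n, m, h, k in product(range(20), range(20), range(1, 10), range(1, 10)):
    events = [(n in A) and (m in A),
              (n + h in A) and (m in A),
              (n in A) and (m + k in A),
              (n + h in A) and (m + k in A)]
    if sum(events) == 3:          # exactly three events holding is impossible
        violations += 1
print(violations)                 # prints 0
```

Since the same tautology holds for any choice of $A$, no amount of sieve-theoretic cleverness in constructing the almost primes can restore independence for rectangular patterns.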
There are now two ways known to get around this problem and establish Theorem 2 in full generality. The approach of Cook, Magyar, and Titichetrakun proceeds by starting with one of the known proofs of the multidimensional Szemerédi theorem – namely, the proof that proceeds through hypergraph regularity and hypergraph removal – and attaching pseudorandom weights directly within the proof itself, rather than trying to add the weights to the result of that proof through a transference argument. (A key technical issue is that weights have to be added to all the levels of the hypergraph – not just the vertices and top-order edges – in order to circumvent the failure of naive pseudorandomness.) As one has to modify the entire proof of the multidimensional Szemerédi theorem, rather than use that theorem as a black box, the Cook-Magyar-Titichetrakun argument is lengthier than ours; on the other hand, it is more general and does not rely on some difficult theorems about primes that are used in our paper.
In our approach, we continue to use the multidimensional Szemerédi theorem (or more precisely, the equivalent theorem of Furstenberg and Katznelson concerning multiple recurrence for commuting shifts) as a black box. The difference is that instead of using a transference principle to connect the relative multidimensional Szemerédi theorem we need to the multiple recurrence theorem, we instead proceed by a version of the Furstenberg correspondence principle, similar to the one that connects the absolute multidimensional Szemerédi theorem to the multiple recurrence theorem. I had discovered this approach many years ago in an unpublished note, but had abandoned it because it required an infinite number of linear forms conditions (in contrast to the transference technique, which only needed a finite number of linear forms conditions and (until the recent work of Conlon-Fox-Zhao) a correlation condition). The reason for this infinite number of conditions is that the correspondence principle has to build a probability measure on an entire $\sigma$-algebra; for this, it is not enough to specify the measure of a single set such as $A$, but one also has to specify the measure of “cylinder sets” formed by intersecting arbitrarily many shifts of such sets. The more shifts that are involved, the more linear forms conditions one needs to keep the correspondence under control.
With the sieve weights we were using at the time, standard sieve theory methods could indeed provide a finite number of linear forms conditions, but not an infinite number, so my idea was abandoned. However, with my later work with Green and Ziegler on linear equations in primes (and related work on the Möbius-nilsequences conjecture and the inverse conjecture on the Gowers norm), Tamar and I realised that the primes themselves obey an infinite number of linear forms conditions, so one can basically use the primes (or a proxy for the primes, such as the von Mangoldt function $\Lambda$) as the enveloping sieve weight, rather than a classical sieve. Thus my old idea of using the Furstenberg correspondence principle to transfer Szemerédi-type theorems to the primes could actually be realised. In the one-dimensional case, this simply produces a much more complicated proof of Theorem 1 than the existing one; but it turns out that the argument works as well in higher dimensions and yields Theorem 2 relatively painlessly, except for the fact that it needs the results on linear equations in primes, the known proofs of which are extremely lengthy (and also require some of the transference machinery mentioned earlier). The problem of correlations in rectangles is avoided in the correspondence principle approach because one can compensate for such correlations by performing a suitable weighted limit to compute the measure of cylinder sets, with each cylinder set requiring a different weighted correction. (This may be related to the Cook-Magyar-Titichetrakun strategy of weighting all of the facets of the hypergraph in order to recover pseudorandomness, although our contexts are rather different.)
Vitaly Bergelson, Tamar Ziegler, and I have just uploaded to the arXiv our joint paper “Multiple recurrence and convergence results associated to $\mathbb{F}_p^\omega$-actions“. This paper is primarily concerned with limit formulae in the theory of multiple recurrence in ergodic theory. Perhaps the most basic formula of this type is the mean ergodic theorem, which (among other things) asserts that if $(X, \mathcal{X}, \mu, T)$ is a measure-preserving $\mathbb{Z}$-system (which, in this post, means that $(X, \mathcal{X}, \mu)$ is a probability space and $T: X \to X$ is measure-preserving and invertible, thus giving an action of the integers), and $f, g \in L^2(X)$ are functions, and the system is ergodic (which means that $L^2(X)$ contains no $T$-invariant functions other than the constants (up to almost everywhere equivalence, of course)), then the average

$$ \frac{1}{N} \sum_{n=1}^N \int_X f(x)\, g(T^n x)\ d\mu(x) \ \ \ \ \ (1)$$

converges as $N \to \infty$ to the expression

$$ \left(\int_X f\ d\mu\right) \left(\int_X g\ d\mu\right);$$
see e.g. this previous blog post. Informally, one can interpret this limit formula as an equidistribution result: if $x$ is drawn at random from $X$ (using the probability measure $\mu$), and $n$ is drawn at random from $\{1, \ldots, N\}$ for some large $N$, then the pair $(x, T^n x)$ becomes uniformly distributed in the product space $X \times X$ (using product measure $\mu \times \mu$) in the limit as $N \to \infty$.
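One can check this limit formula numerically in the model case of an irrational circle rotation (the choices of rotation number and test function below are arbitrary). With $Tx = x + \alpha \bmod 1$ and $f = g = \cos(2\pi x)$, the product-to-sum identity gives $\int_0^1 f(x) f(x + n\alpha)\,dx = \tfrac{1}{2}\cos(2\pi n \alpha)$ exactly, and the Cesàro averages of these correlations should tend to $(\int f)^2 = 0$.

```python
import numpy as np

# Sketch (arbitrary choices): irrational rotation T x = x + alpha mod 1,
# f = g = cos(2*pi*x).  Each correlation int f(x) f(T^n x) dx equals
# cos(2*pi*n*alpha)/2 exactly, and the average over n = 1..N should tend
# to (int f d mu)*(int g d mu) = 0, in accordance with the mean ergodic theorem.
alpha = np.sqrt(2) - 1
N = 200000
n = np.arange(1, N + 1)
correlations = 0.5 * np.cos(2 * np.pi * n * alpha)
average = correlations.mean()
print(abs(average))                  # small: the partial sums of cos(2*pi*n*alpha)
                                     # are bounded, so the average is O(1/N)
```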
If we allow the system to be non-ergodic, then we still have a limit formula, but it is a bit more complicated. Let $\mathcal{X}^T$ be the $\sigma$-algebra of $T$-invariant measurable sets in $\mathcal{X}$; the system $(X, \mathcal{X}^T, \mu, T)$ can then be viewed as a factor of the original system $(X, \mathcal{X}, \mu, T)$, which is equivalent (in the sense of measure-preserving systems) to a trivial system (known as the invariant factor) in which the shift is trivial. There is then a projection map $\pi$ to the invariant factor $Y$ which is a factor map, and the average (1) converges in the limit to the expression

$$ \int_Y (\pi_* f)(y)\, (\pi_* g)(y)\ d(\pi_* \mu)(y), \ \ \ \ \ (2)$$

where $\pi_*$ is the pushforward map associated to the map $\pi$; see e.g. this previous blog post. We can interpret this as an equidistribution result. If $(x, T^n x)$ is a pair as before, then we no longer expect complete equidistribution in $X \times X$ in the non-ergodic case, because there are now non-trivial constraints relating $x$ with $T^n x$; indeed, for any $T$-invariant function $h: X \to \mathbb{C}$, we have the constraint $h(x) = h(T^n x)$; putting all these constraints together we see that $\pi(x) = \pi(T^n x)$ (for almost every $x$, at least). The limit (2) can be viewed as an assertion that these constraints are in some sense the “only” constraints between $x$ and $T^n x$, and that the pair is uniformly distributed relative to these constraints.
Limit formulae are significantly more difficult to establish for multiple averages. The next simplest case is the triple average

$$ \frac{1}{N} \sum_{n=1}^N \int_X f(x)\, g(T^n x)\, h(T^{2n} x)\ d\mu(x) \ \ \ \ \ (3)$$

for three functions $f, g, h \in L^\infty(X)$; this is analogous to the combinatorial task of counting length three progressions in various sets. For simplicity we assume the system to be ergodic. Naively one might expect this limit to then converge to

$$ \left(\int_X f\ d\mu\right) \left(\int_X g\ d\mu\right) \left(\int_X h\ d\mu\right),$$
which would roughly speaking correspond to an assertion that the triplet $(x, T^n x, T^{2n} x)$ is asymptotically equidistributed in $X \times X \times X$. However, even in the ergodic case there can be additional constraints on this triplet that cannot be seen at the level of the individual pairs $(x, T^n x)$, $(x, T^{2n} x)$. The key obstruction here is that of eigenfunctions of the shift $T$, that is to say non-trivial functions $\phi: X \to \mathbb{C}$ that obey the eigenfunction equation $T \phi = \lambda \phi$ almost everywhere for some constant (or $T$-invariant) $\lambda$. Each such eigenfunction generates a constraint

$$ \frac{\phi(x)\, \phi(T^{2n} x)}{\phi(T^n x)^2} = 1 \ \ \ \ \ (4)$$
tying together $x$, $T^n x$, and $T^{2n} x$. However, it turns out that these are in some sense the only constraints on this triplet that are relevant for the limit (3). More precisely, if one sets $\mathcal{Z}^1$ to be the sub-$\sigma$-algebra of $\mathcal{X}$ generated by the eigenfunctions of $T$, then it turns out that the factor $(X, \mathcal{Z}^1, \mu, T)$ is isomorphic to a shift system known as the Kronecker factor, for some compact abelian group $Z = (Z, +)$ and some (irrational) shift $\alpha \in Z$; the factor map $\pi: X \to Z$ pushes eigenfunctions forward to (affine) characters on $Z$. It is then known that the limit of (3) is

$$ \int_W (\pi_* f)(z_1)\, (\pi_* g)(z_2)\, (\pi_* h)(z_3)\ d\mu_W(z_1, z_2, z_3),$$

where $W \leq Z^3$ is the closed subgroup

$$ W := \{ (z_1, z_2, z_3) \in Z^3: z_1 - 2 z_2 + z_3 = 0 \}$$
and $\mu_W$ is the Haar probability measure on $W$; see this previous blog post. The equation $z_1 - 2z_2 + z_3 = 0$ defining $W$ corresponds to the constraint (4) mentioned earlier. Among other things, this limit formula implies Roth’s theorem, which in the context of ergodic theory is the assertion that the limit (or at least the limit inferior) of (3) is positive when $f = g = h$ is non-negative and not identically vanishing.
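The constraint (4) can be verified numerically in the model case of an irrational circle rotation (all numerical choices below are arbitrary illustrations): with $Tx = x + \alpha \bmod 1$, the function $\phi(x) = e^{2\pi i x}$ is an eigenfunction with eigenvalue $e^{2\pi i \alpha}$, and the combination $\phi(x)\phi(T^{2n}x)/\phi(T^n x)^2$ is identically $1$, matching the subgroup equation $z_1 - 2z_2 + z_3 = 0$.

```python
import numpy as np

# Sketch (arbitrary choices): for the rotation T x = x + alpha, the
# eigenfunction phi(x) = exp(2*pi*i*x) ties the triplet (x, T^n x, T^{2n} x)
# together via phi(x) * phi(T^{2n} x) / phi(T^n x)^2 = 1, which is the
# constraint (4) behind the subgroup z1 - 2*z2 + z3 = 0 in the limit formula.
def e(x):
    return np.exp(2j * np.pi * x)

rng = np.random.default_rng(1)
alpha = np.sqrt(3) - 1
for _ in range(100):
    x = rng.random()
    n = int(rng.integers(1, 10_000))
    lhs = e(x) * e(x + 2 * n * alpha) / e(x + n * alpha) ** 2
    assert abs(lhs - 1) < 1e-8      # holds exactly, up to floating-point error
print("constraint (4) holds")
```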
If one considers instead the quadruple average

$$ \frac{1}{N} \sum_{n=1}^N \int_X f(x)\, g(T^n x)\, h(T^{2n} x)\, k(T^{3n} x)\ d\mu(x) \ \ \ \ \ (5)$$

(analogous to counting length four progressions) then the situation becomes more complicated still, even in the ergodic case. In addition to the (linear) eigenfunctions that already showed up in the computation of the triple average (3), a new type of constraint also arises from quadratic eigenfunctions $\phi: X \to \mathbb{C}$, which obey an eigenfunction equation $T\phi = \lambda \phi$ in which $\lambda$ is no longer constant, but is now a linear eigenfunction. For such functions, $\phi(T^n x)$ behaves quadratically in $n$, and one can compute the existence of a constraint

$$ \frac{\phi(x)\, \phi(T^{2n} x)^3}{\phi(T^n x)^3\, \phi(T^{3n} x)} = 1$$
between $x$, $T^n x$, $T^{2n} x$, and $T^{3n} x$ that is not detected at the triple average level. As it turns out, this is not the only type of constraint relevant for (5); there is a more general class of constraint involving two-step nilsystems which we will not detail here, but see e.g. this previous blog post for more discussion. Nevertheless there is still a similar limit formula to the previous examples, involving a special factor which turns out to be an inverse limit of two-step nilsystems; this limit theorem can be extracted from the structural theory in this paper of Host and Kra combined with a limit formula for nilsystems obtained by Lesigne, but will not be reproduced here. The pattern continues to higher averages (and higher step nilsystems); this was first done explicitly by Ziegler, and can also in principle be extracted from the structural theory of Host-Kra combined with nilsystem equidistribution results of Leibman. These sorts of limit formulae can lead to various recurrence results refining Roth’s theorem in various ways; see this paper of Bergelson, Host, and Kra for some examples of this.
The above discussion was concerned with $\mathbb{Z}$-systems, but one can adapt much of the theory to measure-preserving $G$-systems for other discrete countable abelian groups $G$, in which one now has a family $(T_g)_{g \in G}$ of shifts indexed by $G$ rather than a single shift, obeying the compatibility relation $T_{g+h} = T_g T_h$. The role of the intervals $\{1, \ldots, N\}$ in this more general setting is replaced by that of Følner sequences. For arbitrary countable abelian $G$, the theory for double averages (1) and triple averages (3) is essentially identical to the $\mathbb{Z}$-system case. But when one turns to quadruple and higher averages, the situation becomes more complicated (and, for arbitrary $G$, still not fully understood). However one model case which is now well understood is the finite field case when $G = \mathbb{F}_p^\omega$ is an infinite-dimensional vector space over a finite field $\mathbb{F}_p$ (with the finite subspaces then being a good choice for the Følner sequence). Here, the analogue of the structural theory of Host and Kra was worked out by Vitaly, Tamar, and myself in these previous papers (treating the high characteristic and low characteristic cases respectively). In the finite field setting, it turns out that nilsystems no longer appear, and one only needs to deal with linear, quadratic, and higher order eigenfunctions (known collectively as phase polynomials). It is then natural to look for a limit formula that asserts, roughly speaking, that if $x$ is drawn at random from a $G$-system and $g$ drawn randomly from a large subspace of $G$, then the only constraints between the points $x, T_g x, T_{2g} x, \ldots$ are those that arise from phase polynomials. The main theorem of this paper is to establish this limit formula (which, again, is a little complicated to state explicitly and will not be done here). In particular, we establish for the first time that the limit actually exists (a result which, for $\mathbb{Z}$-systems, was one of the main results of this paper of Host and Kra).
As a consequence, we can recover finite field analogues of most of the results of Bergelson-Host-Kra, though interestingly some of the counterexamples demonstrating sharpness of their results for -systems (based on Behrend set constructions) do not seem to be present in the finite field setting (cf. this previous blog post on the cap set problem). In particular, we are able to largely settle the question of when one has a Khintchine-type theorem that asserts that for any measurable set in an ergodic -system and any , one has
for a syndetic set of , where are distinct residue classes. It turns out that Khintchine-type theorems always hold for (and for ergodicity is not required), and for it holds whenever form a parallelogram, but not otherwise (though the counterexample here was such a painful computation that we ended up removing it from the paper, and may end up putting it online somewhere instead), and for larger we could show that the Khintchine property failed for generic choices of , though the problem of determining exactly the tuples for which the Khintchine property failed looked to be rather messy and we did not completely settle it.
One of the basic objects of study in combinatorics is the finite string $(a_n)_{n=0}^{N-1}$ or infinite string $(a_n)_{n=0}^{\infty}$ of symbols $a_n$ from some given alphabet $A$, which could be either finite or infinite (but which we shall usually take to be compact). For instance, a set $E$ of natural numbers can be identified with the infinite string of $0$s and $1$s formed by the indicator of $E$, e.g. the even numbers can be identified with the string $101010\ldots$ from the alphabet $\{0,1\}$, the multiples of three can be identified with the string $100100100\ldots$, and so forth. One can also consider doubly infinite strings $(a_n)_{n \in \mathbb{Z}}$, which among other things can be used to describe arbitrary subsets of integers.
On the other hand, the basic object of study in dynamics (and in related fields, such as ergodic theory) is that of a dynamical system $(X, T)$, that is to say a space $X$ together with a shift map $T: X \to X$ (which is often assumed to be invertible, although one can certainly study non-invertible dynamical systems as well). One often adds additional structure to this dynamical system, such as topological structure (giving rise to topological dynamics), measure-theoretic structure (giving rise to ergodic theory), complex structure (giving rise to complex dynamics), and so forth. A dynamical system gives rise to an action of the natural numbers $\mathbb{N}$ on the space $X$ by using the iterates $T^n$ of $T$ for $n \in \mathbb{N}$; if $T$ is invertible, we can extend this action to an action of the integers $\mathbb{Z}$ on the same space. One can certainly also consider dynamical systems whose underlying group (or semi-group) is something other than $\mathbb{N}$ or $\mathbb{Z}$ (e.g. one can consider continuous dynamical systems in which the evolution group is $\mathbb{R}$), but we will restrict attention to the classical situation of $\mathbb{N}$ or $\mathbb{Z}$ actions here.
There is a fundamental correspondence principle connecting the study of strings (or subsets of natural numbers or integers) with the study of dynamical systems. In one direction, given a dynamical system $(X, T)$, an observable $F: X \to A$ taking values in some alphabet $A$, and some initial datum $x_0 \in X$, we can first form the forward orbit $x_0, Tx_0, T^2 x_0, \ldots$ of $x_0$, and then observe this orbit using $F$ to obtain an infinite string $F(x_0), F(Tx_0), F(T^2 x_0), \ldots$. If the shift $T$ in this system is invertible, one can extend this infinite string into a doubly infinite string. Thus we see that every quadruplet $(X, T, F, x_0)$ consisting of a dynamical system $(X, T)$, an observable $F$, and an initial datum $x_0$ creates an infinite string.
Example 1 If is the three-element set with the shift map , is the observable that takes the value at the residue class and zero at the other two classes, and one starts with the initial datum , then the observed string becomes the indicator of the multiples of three.
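Example 1 is small enough to run directly. Here is a minimal sketch in code (the function names are mine, not from the post) of the three-element system on $\{0,1,2\}$ with the shift $x \mapsto x+1 \bmod 3$, the observable equal to $1$ on the residue class $0$, and initial datum $0$:

```python
# Sketch of Example 1: X = Z/3Z, T(x) = x + 1 mod 3,
# F = indicator of the residue class 0, initial datum x0 = 0.

def generate_string(T, F, x0, length):
    """Observe the forward orbit x0, T(x0), T^2(x0), ... through F."""
    out, x = [], x0
    for _ in range(length):
        out.append(F(x))
        x = T(x)
    return out

T = lambda x: (x + 1) % 3          # the shift on Z/3Z
F = lambda x: 1 if x == 0 else 0   # observable: 1 at the class 0, else 0
string = generate_string(T, F, 0, 9)
print(string)  # [1, 0, 0, 1, 0, 0, 1, 0, 0]: the indicator of multiples of 3
```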
In the converse direction, every infinite string $(a_n)_{n \in \mathbb{N}}$ in some alphabet $A$ arises (in a decidedly non-unique fashion) from a quadruple $(X, T, F, x_0)$ in the above fashion. This can be easily seen by the following “universal” construction: take $X$ to be the set $A^{\mathbb{N}}$ of infinite strings in the alphabet $A$, let $T: X \to X$ be the shift map
$$T(b_0, b_1, b_2, \ldots) := (b_1, b_2, b_3, \ldots),$$
let $F: X \to A$ be the observable
$$F(b_0, b_1, b_2, \ldots) := b_0,$$
and let $x_0$ be the initial point
$$x_0 := (a_0, a_1, a_2, \ldots).$$
Then one easily sees that the observed string is nothing more than the original string $(a_n)_{n \in \mathbb{N}}$. Note also that this construction can easily be adapted to doubly infinite strings by using $A^{\mathbb{Z}}$ instead of $A^{\mathbb{N}}$, at which point the shift map now becomes invertible. An important variant of this construction also attaches an invariant probability measure to $X$ that is associated to the limiting density of various sets associated to the string, and leads to the Furstenberg correspondence principle, discussed for instance in these previous blog posts. Such principles allow one to rigorously pass back and forth between the combinatorics of strings and the dynamics of systems; for instance, Furstenberg famously used his correspondence principle to demonstrate the equivalence of Szemerédi’s theorem on arithmetic progressions with what is now known as the Furstenberg multiple recurrence theorem in ergodic theory.
In the case when the alphabet is the binary alphabet $\{0,1\}$, and (for technical reasons related to the infamous non-injectivity of the decimal representation system) the string does not end with an infinite string of $1$s, then one can reformulate the above universal construction by taking $X$ to be the interval $[0,1)$, $T$ to be the doubling map $Tx := 2x \hbox{ mod } 1$, $F$ to be the observable that takes the value $1$ on $[1/2, 1)$ and $0$ on $[0, 1/2)$ (that is, $F(x)$ is the first binary digit of $x$), and $x_0$ is the real number $x_0 := \sum_{n=0}^\infty a_n 2^{-n-1}$ (that is, $x_0 = 0.a_0a_1a_2\ldots$ in binary).
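One can check the unit-interval model numerically; exact rational arithmetic avoids floating-point drift under repeated doubling. A sketch (assuming, as computed from the binary expansion, that $0.100100100\ldots_2 = 4/7$ encodes the multiples of three):

```python
# Unit-interval model: X = [0,1), T(x) = 2x mod 1, F(x) = first binary digit.
from fractions import Fraction

def doubling(x):
    return (2 * x) % 1              # exact on Fraction inputs

def first_binary_digit(x):
    return 1 if x >= Fraction(1, 2) else 0

x = Fraction(4, 7)                  # 0.100100100... in binary
digits = []
for _ in range(9):
    digits.append(first_binary_digit(x))
    x = doubling(x)
print(digits)  # [1, 0, 0, 1, 0, 0, 1, 0, 0]: indicator of multiples of 3
```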
The above universal construction is very easy to describe, and is well suited for “generic” strings that have no further obvious structure to them, but it often leads to dynamical systems that are much larger and more complicated than is actually needed to produce the desired string, and also often obscures some of the key dynamical features associated to that sequence. For instance, to generate the indicator of the multiples of three that was mentioned previously, the above universal construction requires an uncountable space $X$ and a dynamics which does not obviously reflect the key features of the sequence such as its periodicity. (Using the unit interval model, the dynamics arise from the orbit of $4/7$ under the doubling map, which is a rather artificial way to describe the indicator function of the multiples of three.)
A related aesthetic objection to the universal construction is that of the four components of the quadruplet used to generate the sequence , three of the components are completely universal (in that they do not depend at all on the sequence ), leaving only the initial datum to carry all the distinctive features of the original sequence. While there is nothing wrong with this mathematically, from a conceptual point of view it would make sense to make all four components of the quadruplet to be adapted to the sequence, in order to take advantage of the accumulated intuition about various special dynamical systems (and special observables), not just special initial data.
One step in this direction can be made by restricting $X$ to the orbit of the initial datum $x_0$ (actually for technical reasons it is better to restrict to the topological closure of this orbit, in order to keep $X$ compact). For instance, starting with the sequence $100100100\ldots$, the orbit now consists of just three points $100100\ldots$, $001001\ldots$, $010010\ldots$, bringing the system more in line with the example in Example 1. Technically, this is the “optimal” representation of the sequence by a quadruplet $(X, T, F, x_0)$, because any other such representation is a factor of this representation (in the sense that there is a unique map between the systems intertwining the shifts, observables, and initial data). However, from a conceptual point of view this representation is still somewhat unsatisfactory, given that the elements of the system are interpreted as infinite strings rather than elements of a more geometrically or algebraically rich object (e.g. points in a circle, torus, or other homogeneous space).
For general sequences , locating relevant geometric or algebraic structure in a dynamical system generating that sequence is an important but very difficult task (see e.g. this paper of Host and Kra, which is more or less devoted to precisely this task in the context of working out what component of a dynamical system controls the multiple recurrence behaviour of that system). However, for specific examples of sequences , one can use an informal procedure of educated guesswork in order to produce a more natural-looking quadruple that generates that sequence. This is not a particularly difficult or deep operation, but I found it very helpful in internalising the intuition behind the correspondence principle. Being non-rigorous, this procedure does not seem to be emphasised in most presentations of the correspondence principle, so I thought I would describe it here.
I’ve just uploaded to the arXiv my joint paper with Vitaly Bergelson, “Multiple recurrence in quasirandom groups“, which is submitted to Geom. Func. Anal. This paper builds upon a paper of Gowers in which he introduced the concept of a quasirandom group, and established some mixing (or recurrence) properties of such groups. A $D$-quasirandom group is a finite group with no non-trivial unitary representations of dimension at most $D$. We will informally refer to a “quasirandom group” as a $D$-quasirandom group with the quasirandomness parameter $D$ large (more formally, one can work with a sequence of $D_n$-quasirandom groups with $D_n$ going to infinity). A typical example of a quasirandom group is $SL_2(\mathbb{F}_p)$ where $p$ is a large prime. Quasirandom groups are discussed in depth in this blog post. One of the key properties of quasirandom groups $G$ established in Gowers’ paper is the following “weak mixing” property: if $A, B$ are subsets of $G$, then for “almost all” $g \in G$, one has
where denotes the density of in . Here, we use to informally represent an estimate of the form (where is a quantity that goes to zero when the quasirandomness parameter goes to infinity), and “almost all ” denotes “for all in a subset of of density “. As a corollary, if have positive density in (by which we mean that is bounded away from zero, uniformly in the quasirandomness parameter , and similarly for ), then (if the quasirandomness parameter is sufficiently large) we can find elements such that , , . In fact we can find approximately such pairs . To put it another way: if we choose uniformly and independently at random from , then the events , , are approximately independent (thus the random variable resembles a uniformly distributed random variable on in some weak sense). One can also express this mixing property in integral form as
for any bounded functions . (Of course, with being finite, one could replace the integrals here by finite averages if desired.) Or in probabilistic language, we have
where are drawn uniformly and independently at random from .
As observed in Gowers’ paper, one can iterate this observation to find “parallelepipeds” of any given dimension in dense subsets of . For instance, applying (1) with replaced by , , and one can assert (after some relabeling) that for chosen uniformly and independently at random from , the events , , , , , , are approximately independent whenever are dense subsets of ; thus the tuple resembles a uniformly distributed random variable in in some weak sense.
However, there are other tuples for which the above iteration argument does not seem to apply. One of the simplest tuples in this vein is the tuple in , when are drawn uniformly at random from a quasirandom group . Here, one does not expect the tuple to behave as if it were uniformly distributed in , because there is an obvious constraint connecting the last two components of this tuple: they must lie in the same conjugacy class! In particular, if is a subset of that is the union of conjugacy classes, then the events , are perfectly correlated, so that is equal to rather than . Our main result, though, is that in a quasirandom group, this is (approximately) the only constraint on the tuple. More precisely, we have
where goes to zero as , are drawn uniformly and independently at random from , and is drawn uniformly at random from the conjugates of for each fixed choice of .
This is the probabilistic formulation of the above theorem; one can also phrase the theorem in other formulations (such as an integral formulation), and this is detailed in the paper. This theorem leads to a number of recurrence results; for instance, as a corollary of this result, we have
for almost all , and any dense subsets of ; the lower and upper bounds are sharp, with the lower bound being attained when is randomly distributed, and the upper bound when is conjugation-invariant.
To me, the more interesting thing here is not the result itself, but how it is proven. Vitaly and I were not able to find a purely finitary way to establish this mixing theorem. Instead, we had to first use the machinery of ultraproducts (as discussed in this previous post) to convert the finitary statement about a quasirandom group to an infinitary statement about a type of infinite group which we call an ultra quasirandom group (basically, an ultraproduct of increasingly quasirandom finite groups). This is analogous to how the Furstenberg correspondence principle is used to convert a finitary combinatorial problem into an infinitary ergodic theory problem.
Ultra quasirandom groups come equipped with a finite, countably additive measure known as Loeb measure, which is very analogous to the Haar measure of a compact group, except that in the case of ultra quasirandom groups one does not quite have a topological structure that would give compactness. Instead, one has a slightly weaker structure known as a $\sigma$-topology, which is like a topology except that open sets are only closed under countable unions rather than arbitrary ones. There are some interesting measure-theoretic and topological issues regarding the distinction between topologies and $\sigma$-topologies (and between Haar measure and Loeb measure), but for this post it is perhaps best to gloss over these issues and pretend that ultra quasirandom groups come with a Haar measure. One can then recast Theorem 1 as a mixing theorem for the left and right actions of the ultra quasirandom group on itself, which roughly speaking is the assertion that
for “almost all” , if are bounded measurable functions on , with having zero mean on all conjugacy classes of , where are the left and right translation operators
To establish this mixing theorem, we use the machinery of idempotent ultrafilters, which is a particularly useful tool for understanding the ergodic theory of actions of countable groups that need not be amenable; in the non-amenable setting the classical ergodic averages do not make much sense, but ultrafilter-based averages are still available. To oversimplify substantially, the idempotent ultrafilter arguments let one establish mixing estimates of the form (2) for “many” elements of an infinite-dimensional parallelepiped known as an IP system (provided that the actions of this IP system obey some technical mixing hypotheses, but let’s ignore that for sake of this discussion). The claim then follows by using the quasirandomness hypothesis to show that if the estimate (2) failed for a large set of , then this large set would then contain an IP system, contradicting the previous claim.
Idempotent ultrafilters are an extremely infinitary type of mathematical object (one has to use Zorn’s lemma no fewer than three times just to construct one of these objects!). So it is quite remarkable that they can be used to establish a finitary theorem such as Theorem 1, though as is often the case with such infinitary arguments, one gets absolutely no quantitative control whatsoever on the error terms appearing in that theorem. (It is also mildly amusing to note that our arguments involve the use of ultrafilters in two completely different ways: firstly in order to set up the ultraproduct that converts the finitary mixing problem to an infinitary one, and secondly to solve the infinitary mixing problem. Despite some superficial similarities, there appear to be no substantial commonalities between these two usages of ultrafilters.) There is already a fair amount of literature on using idempotent ultrafilter methods in infinitary ergodic theory, and perhaps by further development of ultraproduct correspondence principles, one can use such methods to obtain further finitary consequences (although the state of the art for idempotent ultrafilter ergodic theory has not advanced much beyond the analysis of two commuting shifts currently, which is the main reason why our arguments only handle the pattern and not more sophisticated patterns).
We also have some miscellaneous other results in the paper. It turns out that by using the triangle removal lemma from graph theory, one can obtain a recurrence result that asserts that whenever is a dense subset of a finite group (not necessarily quasirandom), then there are pairs such that all lie in . Using a hypergraph generalisation of the triangle removal lemma known as the hypergraph removal lemma, one can obtain more complicated versions of this statement; for instance, if is a dense subset of , then one can find triples such that all lie in . But the method is tailored to the specific types of patterns given here, and we do not have a general method for obtaining recurrence or mixing properties for arbitrary patterns of words in some finite alphabet such as .
We also give some properties of a model example of an ultra quasirandom group, namely the ultraproduct of where is a sequence of primes going off to infinity. Thanks to the substantial recent progress (by Helfgott, Bourgain, Gamburd, Breuillard, and others) on understanding the expansion properties of the finite groups , we have a fair amount of knowledge on the ultraproduct as well; for instance any two elements of will almost surely generate a spectral gap. We don’t have any direct application of this particular ultra quasirandom group, but it might be interesting to study it further.
One of the most notorious problems in elementary mathematics that remains unsolved is the Collatz conjecture, concerning the function $f: \mathbb{N} \to \mathbb{N}$ defined by setting $f(n) := 3n+1$ when $n$ is odd, and $f(n) := n/2$ when $n$ is even. (Here, $\mathbb{N}$ is understood to be the positive natural numbers $\{1, 2, 3, \ldots\}$.)
Open questions with this level of notoriety can lead to what Richard Lipton calls “mathematical diseases” (and what I termed an unhealthy amount of obsession on a single famous problem). (See also this xkcd comic regarding the Collatz conjecture.) As such, most practicing mathematicians tend to spend the majority of their time on more productive research areas that are only just beyond the range of current techniques. Nevertheless, it can still be diverting to spend a day or two each year on these sorts of questions, before returning to other matters; so I recently had a go at the problem. Needless to say, I didn’t solve the problem, but I have a better appreciation of why the conjecture is (a) plausible, and (b) unlikely to be proven by current technology, and I thought I would share what I had found out here on this blog.
Let me begin with some very well known facts. If $n$ is odd, then $3n+1$ is even, and so $f(f(n)) = (3n+1)/2$. Because of this, one could replace $f$ by the function $f': \mathbb{N} \to \mathbb{N}$, defined by $f'(n) := (3n+1)/2$ when $n$ is odd, and $f'(n) := n/2$ when $n$ is even, and obtain an equivalent conjecture. Now we see that if one chooses $n$ “at random”, in the sense that it is odd with probability $1/2$ and even with probability $1/2$, then $f'$ increases $n$ by a factor of roughly $3/2$ half the time, and decreases it by a factor of $1/2$ half the time. Furthermore, if $n$ is uniformly distributed modulo $4$, one easily verifies that $f'(n)$ is uniformly distributed modulo $2$, and so $f'(f'(n))$ should be roughly $3/2$ times as large as $f'(n)$ half the time, and roughly $1/2$ times as large as $f'(n)$ the other half of the time. Continuing this at a heuristic level, we expect generically that each iterate multiplies the previous one by roughly $3/2$ half the time, and by roughly $1/2$ the other half of the time. The logarithm of this orbit can then be modeled heuristically by a random walk with steps $\log \frac{3}{2}$ and $\log \frac{1}{2}$ occurring with equal probability. The expectation
$\frac{1}{2} \log \frac{3}{2} + \frac{1}{2} \log \frac{1}{2} = \frac{1}{2} \log \frac{3}{4}$ is negative, and so (by the classic gambler’s ruin) we expect the orbit to decrease over the long term. This can be viewed as heuristic justification of the Collatz conjecture, at least in the “average case” scenario in which $n$ is chosen uniformly at random (e.g. in some large interval $[1, N]$). (It also suggests that if one modifies the problem, e.g. by replacing $3n+1$ with $5n+1$, then one can obtain orbits that tend to increase over time, and indeed numerically for this variant one sees orbits that appear to escape to infinity.) Unfortunately, one can only rigorously keep the orbit uniformly distributed modulo powers of $2$ for time about $O(\log n)$ or so; after that, the system is too complicated for naive methods to control at anything other than a heuristic level.
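The random-walk heuristic is easy to probe empirically. A quick sketch (my own experiment, not from the post) that iterates the accelerated map $f'$ from random large starting points and compares the average logarithmic step against the predicted drift $\frac{1}{2}\log\frac{3}{4} \approx -0.144$:

```python
# Empirical check of the drift heuristic for f'(n) = (3n+1)/2 (n odd), n/2 (n even).
import math
import random

def f_prime(n):
    return (3 * n + 1) // 2 if n % 2 == 1 else n // 2

drift = 0.5 * math.log(3 / 2) + 0.5 * math.log(1 / 2)  # = (1/2) log(3/4) < 0

random.seed(0)
steps, total = 0, 0.0
for _ in range(200):
    n = random.randrange(10**6, 10**7)
    for _ in range(50):
        m = f_prime(n)
        if m <= 1:          # stop before entering the 1 -> 2 -> 1 cycle
            break
        total += math.log(m) - math.log(n)
        n, steps = m, steps + 1
print(drift, total / steps)  # the empirical mean step is close to the drift
```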
Remark 1 One can obtain a rigorous analogue of the above arguments by extending $f'$ from the integers to the $2$-adics $\mathbb{Z}_2$. This compact abelian group comes with a Haar probability measure, and one can verify that this measure is invariant with respect to $f'$; with a bit more effort one can verify that it is ergodic. This suggests the introduction of ergodic theory methods. For instance, using the pointwise ergodic theorem, we see that if $n$ is a random $2$-adic integer, then almost surely the orbit of $n$ will be even half the time and odd half the time asymptotically, thus supporting the above heuristics. Unfortunately, this does not directly tell us much about the dynamics on $\mathbb{N}$, as this is a measure zero subset of $\mathbb{Z}_2$. More generally, unless a dynamical system is somehow “polynomial”, “nilpotent”, or “unipotent” in nature, the current state of ergodic theory is usually only able to say something meaningful about generic orbits, but not about all orbits. For instance, the very simple system $x \mapsto 10x \hbox{ mod } 1$ on the unit circle is well understood from ergodic theory (in particular, almost all orbits will be uniformly distributed), but the orbit of a specific point, e.g. $\pi \hbox{ mod } 1$, is still nearly impossible to understand (this particular problem being equivalent to the notorious unsolved question of whether the digits of $\pi$ are uniformly distributed).
The above heuristic argument only suggests decreasing orbits for almost all $n$ (though even this remains unproven; the state of the art is that the number of $n$ up to $x$ that eventually go to $1$ is $\gg x^{0.84}$, a result of Krasikov and Lagarias). It leaves open the possibility of some very rare exceptional $n$ for which the orbit goes to infinity, or gets trapped in a periodic loop. Since the only loop that $1$ lies in is $1 \to 4 \to 2 \to 1$ (for $f$) or $1 \to 2 \to 1$ (for $f'$), we thus may isolate a weaker consequence of the Collatz conjecture:
Of course, we may replace with (and delete ““) and obtain an equivalent conjecture.
This weaker version of the Collatz conjecture is also unproven. However, it was observed by Bohm and Sontacchi that this weak conjecture is equivalent to a divisibility problem involving powers of and :
such that is a positive integer that is a proper divisor of
Proof: To see this, it is convenient to reformulate Conjecture 2 slightly. Define an equivalence relation on by declaring if for some integer , thus giving rise to the quotient space of equivalence classes (which can be placed, if one wishes, in one-to-one correspondence with the odd natural numbers). We can then define a function by declaring
for any , where is the largest power of that divides . It is easy to see that is well-defined (it is essentially the Syracuse function, after identifying with the odd natural numbers), and that periodic orbits of correspond to periodic orbits of or . Thus, Conjecture 2 is equivalent to the conjecture that is the only periodic orbit of .
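The Syracuse function just described is easy to implement; a minimal sketch (function names are mine) that also spot-checks that no small odd number other than $1$ is periodic, by verifying that every orbit reaches $1$:

```python
# Syracuse function on the odd naturals: Syr(n) = (3n+1) / 2^a,
# where 2^a is the largest power of two dividing 3n+1.

def syracuse(n):
    assert n % 2 == 1
    m = 3 * n + 1
    while m % 2 == 0:       # strip the largest power of 2 dividing 3n+1
        m //= 2
    return m

assert syracuse(1) == 1     # the trivial fixed point

# Every odd n up to 10^5 reaches 1, so no other periodic orbit exists there.
for n in range(3, 10**5, 2):
    m = n
    for _ in range(1000):
        m = syracuse(m)
        if m == 1:
            break
    assert m == 1
print("no non-trivial periodic orbit below 10^5")
```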
Now suppose that Conjecture 2 failed, thus there exists such that for some . Without loss of generality we may take to be odd, then . It is easy to see that is the only fixed point of , and so . An easy induction using (2) shows that
Since , we have
Conversely, suppose that Conjecture 3 failed. Then we have , integers
and a natural number such that (1) holds. As , we see that the right-hand side of (1) is odd, so is odd also. If we then introduce the natural numbers by the formula (3), then an easy induction using (4) shows that
with the periodic convention for . As the are increasing in (even for ), we see that is the largest power of that divides the right-hand side of (5); as is odd, we conclude that is also the largest power of that divides . We conclude that
and thus is a periodic orbit of . Since is an odd number larger than , this contradicts Conjecture 3.
Call a counterexample a tuple that contradicts Conjecture 3, i.e. an integer and an increasing set of integers
Proof: The first bound is immediate from the positivity of . To prove the second bound, observe from the proof of Proposition 4 that the counterexample will generate a counterexample to Conjecture 2, i.e. a non-trivial periodic orbit . As the conjecture is true for all , all terms in this orbit must be at least . An inspection of the proof of Proposition 4 reveals that this orbit consists of steps of the form , and steps of the form . As all terms are at least , the former steps can increase magnitude by a multiplicative factor of at most . As the orbit returns to where it started, we conclude that
whence the claim.
The Collatz conjecture has already been verified for many values of (up to at least , according to this web site). Inserting this into the above lemma, one can get lower bounds on . For instance, by methods such as this, it is known that any non-trivial periodic orbit has length at least , as shown in Garner’s paper (and this bound, which uses the much smaller value that was available in 1981, can surely be improved using the most recent computational bounds).
Now we can perform a heuristic count on the number of counterexamples. If we fix and , then , and from basic combinatorics we see that there are different ways to choose the remaining integers
to form a potential counterexample . As a crude heuristic, one expects that for a “random” such choice of integers, the expression (1) has a probability of holding for some integer . (Note that is not divisible by or , and so one does not expect the special structure of the right-hand side of (1) with respect to those moduli to be relevant. There will be some choices of where the right-hand side in (1) is too small to be divisible by , but using the estimates in Lemma 5, one expects this to occur very infrequently.) Thus, the total expected number of solutions for this choice of is
where, in view of Lemma 5, one should restrict the double summation to the heuristic regime , with the approximation here accurate to many decimal places.
where is the entropy function
A brief computation shows that
and so (ignoring all subexponential terms)
which makes the series (6) convergent. (Actually, one does not need the full strength of Lemma 5 here; anything that kept well away from would suffice. In particular, one does not need an enormous value of ; even (say) would be more than sufficient to obtain the heuristic that there are finitely many counterexamples.) Heuristically applying the Borel-Cantelli lemma, we thus expect that there are only a finite number of counterexamples to the weak Collatz conjecture (and inserting a bound such as , one in fact expects it to be extremely likely that there are no counterexamples at all).
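The entropy computation underlying this convergence can be checked numerically. A sketch (my own arithmetic, under the assumption from the surrounding discussion that $k/n$ is forced to be close to $\log 2/\log 3$): since $\binom{n}{k} \approx \exp(n\, h(k/n))$ with $h(t) = -t\log t - (1-t)\log(1-t)$, and $h(\log 2/\log 3) < \log 2$, the ratio $\binom{n}{k}/2^n$ decays exponentially in $n$, which is what makes the heuristic count summable:

```python
# Check that h(log 2 / log 3) < log 2, and that binom(n,k)/2^n decays.
import math

def entropy(t):
    return -t * math.log(t) - (1 - t) * math.log(1 - t)

t = math.log(2) / math.log(3)        # ~ 0.6309, the forced ratio k/n
print(entropy(t), math.log(2))       # ~ 0.6585 < ~ 0.6931

ratios = []
for n in (50, 100, 200):
    k = round(t * n)
    ratios.append(math.comb(n, k) / 2 ** n)
print(ratios)                        # strictly decreasing toward 0
```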
This, of course, is far short of any rigorous proof of Conjecture 2. In order to make rigorous progress on this conjecture, it seems that one would need to somehow exploit the structural properties of numbers of the form
In some very special cases, this can be done. For instance, suppose that one had with at most one exception (this is essentially what is called a -cycle by Steiner). Then (8) simplifies via the geometric series formula to a combination of just a bounded number of powers of and , rather than an unbounded number. In that case, one can start using tools from transcendence theory such as Baker’s theorem to obtain good results; for instance, in the above-referenced paper of Steiner, it was shown that -cycles cannot actually occur, and similar methods have been used to show that -cycles (in which there are at most exceptions to ) do not occur for any , as was shown by Simons and de Weger. However, for general increasing tuples of integers , there is no such representation by bounded numbers of powers, and it does not seem that methods from transcendence theory will be sufficient to control the expressions (8) to the extent that one can understand their divisibility properties by quantities such as .
Amusingly, there is a slight connection to Littlewood-Offord theory in additive combinatorics – the study of the random sums
generated by some elements of an additive group , or equivalently, the vertices of an -dimensional parallelepiped inside . Here, the relevant group is . The point is that if one fixes and (and hence ), and lets vary inside the simplex
then the set of all sums of the form (8) (viewed as an element of ) contains many large parallelepipeds. (Note, incidentally, that once one fixes , all the sums of the form (8) are distinct; because given (8) and , one can read off as the largest power of that divides (8), and then subtracting off one can then read off , and so forth.) This is because the simplex contains many large cubes. Indeed, if one picks a typical element of , then one expects (thanks to Lemma 5) that there will be indices such that for , which allows one to adjust each of the independently by if desired and still remain inside . This gives a cube in of dimension , which then induces a parallelepiped of the same dimension in . A short computation shows that the generators of this parallelepiped consist of products of a power of and a power of , and in particular will be coprime to .
If the weak Collatz conjecture is true, then the set must avoid the residue class in . Let us suppose temporarily that we did not know about Baker’s theorem (and the associated bound (7)), so that could potentially be quite small. Then we would have a large parallelepiped inside a small cyclic group that did not cover all of , which would not be possible for small enough. Indeed, an easy induction shows that a -dimensional parallelepiped in , with all generators coprime to , has cardinality at least . This argument already shows the lower bound . In other words, we have
Proposition 6 Suppose the weak Collatz conjecture is true. Then for any natural numbers with , one has .
This bound is very weak when compared against the unconditional bound (7). However, I know of no way to get a nontrivial separation property between powers of and powers of other than via transcendence theory methods. Thus, this result strongly suggests that any proof of the Collatz conjecture must either use existing results in transcendence theory, or else must contribute a new method to give non-trivial results in transcendence theory. (This already rules out a lot of possible approaches to solve the Collatz conjecture.)
By using more sophisticated tools in additive combinatorics, one can improve the above proposition (though it is still well short of the transcendence theory bound (7)):
Proposition 7 Suppose the weak Collatz conjecture is true. Then for any natural numbers with , one has for some absolute constant .
Proof: (Informal sketch only) Suppose not, then we can find with of size . We form the set as before, which contains parallelepipeds in of large dimension that avoid . We can count the number of times occurs in one of these parallelepipeds by a standard Fourier-analytic computation involving Riesz products (see Chapter 7 of my book with Van Vu, or this recent preprint of Maples). Using this Fourier representation, the fact that this parallelepiped avoids (and the fact that ) forces the generators to be concentrated in a Bohr set, in that one can find a non-zero frequency such that of the generators lie in the set . However, one can choose the generators to essentially have the structure of a (generalised) geometric progression (up to scaling, it resembles something like for ranging over a generalised arithmetic progression, and a fixed irrational), and one can show that such progressions cannot be concentrated in Bohr sets (this is similar in spirit to the exponential sum estimates of Bourgain on approximate multiplicative subgroups of , though one can use more elementary methods here due to the very strong nature of the Bohr set concentration (being of the “ concentration” variety rather than the “ concentration”).). This furnishes the required contradiction.
Thus we see that any proposed proof of the Collatz conjecture must either use transcendence theory, or introduce new techniques that are powerful enough to create exponential separation between powers of $2$ and powers of $3$.
Unfortunately, once one uses the transcendence theory bound (7), the size of the cyclic group becomes larger than the volume of any cube in , and Littlewood-Offord techniques are no longer of much use (they can be used to show that is highly equidistributed in , but this does not directly give any way to prevent from containing ).
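To get a rough numerical feel for how close powers of $2$ and powers of $3$ actually come to each other, here is a quick exact-arithmetic experiment (the code and its function names are mine, and are illustrative only; they play no role in the argument above):

```python
# Exploratory check: for each exponent n, find the power of 3 nearest to
# 2^n, and track the smallest relative gap |2^n - 3^m| / 2^n seen so far.
# Exact integer arithmetic throughout.

def nearest_power_of_3(x: int) -> int:
    """Return the power of 3 closest to the positive integer x."""
    p = 1
    while p * 3 <= x:
        p *= 3
    # now p <= x < 3p; the nearest power of 3 is p or 3p
    return p if x - p <= 3 * p - x else 3 * p

def min_gap(max_n: int):
    """Smallest relative gap |2^n - 3^m| / 2^n over 1 <= n <= max_n."""
    best = None
    for n in range(1, max_n + 1):
        x = 2 ** n
        g = abs(x - nearest_power_of_3(x))
        rel = g / x
        if best is None or rel < best[0]:
            best = (rel, n, g)
    return best

rel, n, g = min_gap(200)
print(n, g, rel)
```

The close approaches occur at the continued fraction convergents of $\log_3 2$ (e.g. $2^{84}$ versus $3^{53}$), which is consistent with the expectation that the gaps decay no faster than polynomially in $1/n$ relative to $2^n$, as in the transcendence-theory bound.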
One possible toy model problem for the (weak) Collatz conjecture is a conjecture of Erdős asserting that for $n \geq 9$, the base $3$ representation of $2^n$ contains at least one $2$. (See this paper of Lagarias for some work on this conjecture and on related problems.) To put it another way, the conjecture asserts that there are no integer solutions to
$$2^n = 3^{a_1} + 3^{a_2} + \ldots + 3^{a_k}$$
with $n \geq 9$ and $0 \leq a_1 < a_2 < \ldots < a_k$. (When $n = 8$, of course, one has $2^8 = 3^0 + 3^1 + 3^2 + 3^5$.) In this form we see a resemblance to Conjecture 3, but it looks like a simpler problem to attack (though one which is still a fair distance beyond what one can do with current technology). Note that one has a similar heuristic support for this conjecture as one does for Conjecture 3; a number of magnitude $2^n$ has about $n \log 2 / \log 3$ base $3$ digits, so the heuristic probability that none of these digits are equal to $2$ is $(2/3)^{n \log 2/\log 3} = 2^{-cn}$ for some absolute constant $c > 0$, which is absolutely summable.
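One can at least verify the Erdős conjecture numerically in a small range; the following brute-force check (illustrative only, and of course no evidence beyond the range tested) confirms that $n = 8$ is the last exception up to $n = 1000$:

```python
# Sanity check of the Erdos conjecture (not a proof): for 9 <= n <= 1000,
# the base 3 representation of 2^n contains at least one digit 2.

def base3_digits(x: int):
    """Base 3 digits of x, least significant first."""
    digits = []
    while x > 0:
        x, d = divmod(x, 3)
        digits.append(d)
    return digits

def has_digit_two(n: int) -> bool:
    """Does the base 3 representation of 2^n contain a 2?"""
    return 2 in base3_digits(2 ** n)

# n = 8 is the last known exception: 2^8 = 256 = 3^5 + 3^2 + 3^1 + 3^0
print(has_digit_two(8))                               # False
print(all(has_digit_two(n) for n in range(9, 1001)))  # True
```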
In 1977, Furstenberg established his multiple recurrence theorem:
Theorem 1 (Furstenberg multiple recurrence) Let $(X, {\mathcal X}, \mu, T)$ be a measure-preserving system, thus $(X, {\mathcal X}, \mu)$ is a probability space and $T: X \to X$ is a measure-preserving bijection such that $T$ and $T^{-1}$ are both measurable. Let $E$ be a measurable subset of $X$ of positive measure $\mu(E) > 0$. Then for any $k \geq 1$, there exists $n \geq 1$ such that
$$\mu( E \cap T^{-n} E \cap T^{-2n} E \cap \ldots \cap T^{-(k-1)n} E ) > 0.$$
Equivalently, there exists $n \geq 1$ and $x \in X$ such that
$$x, T^n x, T^{2n} x, \ldots, T^{(k-1)n} x \in E.$$
The multiple recurrence theorem is proven, roughly speaking, by an induction on the "complexity" of the system $(X, {\mathcal X}, \mu, T)$. Indeed, for very simple systems, such as periodic systems (in which $T^n$ is the identity for some $n \geq 1$, which is for instance the case for the circle shift $T: x \mapsto x + \alpha$ on ${\bf R}/{\bf Z}$ with a rational shift $\alpha$), the theorem is trivial; at a slightly more advanced level, the case of almost periodic (or compact) systems (in which the orbit $\{ T^n f: n \in {\bf Z} \}$ is a precompact subset of $L^2(X)$ for every $f \in L^2(X)$, which is for instance the case for irrational circle shifts) is also quite easy. One then shows that the multiple recurrence property is preserved under various extension operations (specifically, compact extensions, weakly mixing extensions, and limits of chains of extensions), which then gives the multiple recurrence theorem as a consequence of the Furstenberg-Zimmer structure theorem for measure-preserving systems. See these lecture notes for further discussion.
From a high-level perspective, this is still one of the most conceptual proofs known of Szemerédi’s theorem. However, the individual components of the proof are still somewhat intricate. Perhaps the most difficult step is the demonstration that the multiple recurrence property is preserved under compact extensions; see for instance these lecture notes, which are devoted entirely to this step. This step requires quite a bit of measure-theoretic and/or functional analytic machinery, such as the theory of disintegrations, relatively almost periodic functions, or Hilbert modules.
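The almost periodic case can be seen concretely by brute force: for an irrational circle shift, one can simply search for a multiple recurrence pattern directly. The sketch below (the interval, shift, and starting point are arbitrary toy choices of mine) finds the first $n$ for which $x, T^n x, T^{2n} x, T^{3n} x$ all return to a short interval:

```python
import math

# Multiple recurrence for the irrational circle shift T: x -> x + alpha
# mod 1.  We take E = [0, 0.1) and search for the first n >= 1 with
# x, T^n x, T^{2n} x, T^{3n} x all lying in E.

alpha = math.sqrt(2) - 1   # an irrational shift
E = (0.0, 0.1)             # the positive-measure set
x = 0.05                   # a starting point inside E
k = 4                      # length of the recurrence pattern

def in_E(y: float) -> bool:
    y = y % 1.0
    return E[0] <= y < E[1]

n = next(n for n in range(1, 100000)
         if all(in_E(x + i * n * alpha) for i in range(k)))
print(n, [round((x + i * n * alpha) % 1.0, 4) for i in range(k)])
```

The search succeeds at $n = 29$, a denominator of a good rational approximation to $\alpha$; this is the almost periodicity at work.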
However, I recently realised that there is a special case of the compact extension step – namely that of finite extensions – which avoids almost all of these technical issues while still capturing the essence of the argument (and in particular, the key idea of using van der Waerden’s theorem). As such, this may serve as a pedagogical device for motivating this step of the proof of the multiple recurrence theorem.
Let us first explain what a finite extension is. Given a measure-preserving system $X = (X, {\mathcal X}, \mu, T)$, a finite set $Y$, and a measurable map $\rho: X \to \mathrm{Sym}(Y)$ from $X$ to the permutation group of $Y$, one can form the finite extension
$$X \ltimes_\rho Y = (X \times Y, {\mathcal X} \times {\mathcal Y}, \mu \times \nu, S),$$
where ${\mathcal Y}$ is the discrete $\sigma$-algebra on $Y$, $\nu$ is the uniform probability measure on $Y$, and $S$ is the skew shift $S(x,y) := (Tx, \rho(x) y)$.
One easily verifies that this is indeed a measure-preserving system. We refer to $\rho$ as the cocycle of the system.
An example of finite extensions comes from group theory. Suppose we have a short exact sequence
$$0 \to K \to G \to Q \to 0$$
of finite groups. Let $g$ be a group element of $G$, and let $q$ be its projection in $Q$. Then the shift map $x \mapsto gx$ on $G$ (with the discrete $\sigma$-algebra and uniform probability measure) can be viewed as a finite extension of the shift map $z \mapsto qz$ on $Q$ (again with the discrete $\sigma$-algebra and uniform probability measure), by arbitrarily selecting a section $\phi: Q \to G$ that inverts the projection map, identifying $G$ with $Q \times K$ by identifying $\phi(z) k$ with $(z, k)$ for $z \in Q$, $k \in K$, and using the cocycle
$$\rho(z): k \mapsto \phi(qz)^{-1} g \phi(z) k.$$
Thus, for instance, the unit shift $x \mapsto x + 1$ on ${\bf Z}/N{\bf Z}$ can be thought of as a finite extension of the unit shift on ${\bf Z}/M{\bf Z}$ whenever $N$ is a multiple of $M$.
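This cyclic example can be checked directly. In the sketch below (the identification and the "carry" cocycle are one concrete choice of section, made by me for illustration), the unit shift on ${\bf Z}/12{\bf Z}$ is exhibited as a finite extension of the unit shift on ${\bf Z}/4{\bf Z}$ with fibre ${\bf Z}/3{\bf Z}$:

```python
# The unit shift on Z/12Z as a finite extension of the unit shift on
# Z/4Z with fibre Z/3Z, via the section q -> q in {0,1,2,3} and a
# "carry" cocycle that advances the fibre when the base wraps around.

M, N = 4, 12
K = N // M   # fibre size, here 3

def phi(q, y):
    """Identify the pair (q, y) in Z/4 x Z/3 with q + 4y in Z/12."""
    return (q + M * y) % N

def skew_shift(q, y):
    """Base shift q -> q+1 on Z/4, with a carry into the fibre at q = 3."""
    carry = 1 if q == M - 1 else 0
    return ((q + 1) % M, (y + carry) % K)

# Check the conjugacy: phi o skew_shift = (unit shift on Z/12) o phi.
assert all(phi(*skew_shift(q, y)) == (phi(q, y) + 1) % N
           for q in range(M) for y in range(K))
print("Z/12 is a finite extension of Z/4 with fibre Z/3")
```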
Another example comes from Riemannian geometry. If $M$ is a Riemannian manifold that is a finite cover of another Riemannian manifold $N$ (with the metric on $M$ being the pullback of that on $N$), then (unit time) geodesic flow on the cosphere bundle of $M$ is a finite extension of the corresponding flow on $N$.
Here, then, is the finite extension special case of the compact extension step in the proof of the multiple recurrence theorem:
Proposition 2 (Finite extensions) Let $X \ltimes_\rho Y$ be a finite extension of a measure-preserving system $X$. If $X$ obeys the conclusion of Theorem 1, then so does $X \ltimes_\rho Y$.
Before we prove this proposition, let us first give the combinatorial analogue.
Lemma 3 Let $A$ be a subset of the integers that contains arbitrarily long arithmetic progressions, and let $A = A_1 \cup \ldots \cup A_M$ be a colouring of $A$ by $M$ colours (or equivalently, a partition of $A$ into $M$ colour classes $A_1, \ldots, A_M$). Then at least one of the $A_i$ contains arbitrarily long arithmetic progressions.
Proof: By the infinite pigeonhole principle, it suffices to show that for each $k \geq 1$, one of the colour classes $A_i$ contains an arithmetic progression of length $k$.
Let $N$ be a large integer (depending on $k$ and $M$) to be chosen later. Then $A$ contains an arithmetic progression of length $N$, which may be identified with $\{1, \ldots, N\}$. The colouring of $A$ then induces a colouring of $\{1, \ldots, N\}$ into $M$ colour classes. Applying (the finitary form of) van der Waerden’s theorem, we conclude that if $N$ is sufficiently large depending on $k$ and $M$, then one of these colour classes contains an arithmetic progression of length $k$; undoing the identification, we conclude that one of the $A_i$ contains an arithmetic progression of length $k$, as desired.
Of course, by specialising to the case $A = {\bf Z}$, we see that the above Lemma is in fact equivalent to van der Waerden’s theorem.
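As a toy demonstration of the Lemma (a brute-force search; the set and colouring below are arbitrary choices of mine):

```python
# Colour a set with long arithmetic progressions (the multiples of 5)
# by 2 colours, and find a monochromatic k-term AP inside one colour
# class by exhaustive search.

A = [5 * i for i in range(1, 200)]          # contains long APs
colour = {a: (a // 5) % 2 for a in A}       # an arbitrary 2-colouring

def find_mono_ap(A, colour, k):
    """Return a k-term AP lying inside one colour class, or None."""
    S = set(A)
    for a in A:
        for d in range(5, 500, 5):
            prog = [a + i * d for i in range(k)]
            if all(p in S for p in prog) and \
               len({colour[p] for p in prog}) == 1:
                return prog
    return None

print(find_mono_ap(A, colour, 3))   # [5, 15, 25]
```

Of course, the point of van der Waerden's theorem is that such a progression always exists once the ambient progression is long enough, no matter how adversarial the colouring.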
Now we prove Proposition 2.
Proof: Fix $k \geq 1$. Let $E$ be a positive measure subset of $X \ltimes_\rho Y$, and let $S$ denote the shift on $X \ltimes_\rho Y$. By Fubini’s theorem, we have
$$(\mu \times \nu)(E) = \int_X \nu(E_x)\ d\mu(x),$$
where $\nu$ is the uniform probability measure on $Y$ and $E_x := \{ y \in Y: (x,y) \in E \}$ is the fibre of $E$ at $x$. Since $(\mu \times \nu)(E)$ is positive, we conclude that the set
$$A := \{ x \in X: E_x \neq \emptyset \}$$
is a positive measure subset of $X$. Note that for each $x \in A$, we can find an element $f(x) \in Y$ such that $(x, f(x)) \in E$. While not strictly necessary for this argument, one can ensure if one wishes that the function $f$ is measurable by totally ordering $Y$, and then letting $f(x)$ be the minimal element of $Y$ for which $(x, f(x)) \in E$.
Let $N$ be a large integer (which will depend on $k$ and the cardinality of $Y$) to be chosen later. Because $X$ obeys the multiple recurrence theorem, we can find a positive integer $n$ and a point $x \in X$ such that
$$x, T^n x, T^{2n} x, \ldots, T^{(N-1)n} x \in A. \quad (1)$$
Now consider the sequence of $N$ points
$$S^{-in}\big( T^{in} x, f(T^{in} x) \big)$$
for $i = 0, 1, \ldots, N-1$. From (1), we see that
$$S^{-in}\big( T^{in} x, f(T^{in} x) \big) = (x, y_i) \quad (2)$$
for some sequence $y_0, y_1, \ldots, y_{N-1}$ of elements of $Y$. This can be viewed as a colouring of $\{0, 1, \ldots, N-1\}$ by $M$ colours, where $M$ is the cardinality of $Y$. Applying van der Waerden’s theorem, we conclude (if $N$ is sufficiently large depending on $k$ and $M$) that there is an arithmetic progression $a, a+r, \ldots, a+(k-1)r$ in $\{0, \ldots, N-1\}$ with $r > 0$ such that
$$y_a = y_{a+r} = \ldots = y_{a+(k-1)r} = y$$
for some $y \in Y$. If we then let $(x', y') := S^{an}(x, y)$, we see from (2) that
$$S^{irn}(x', y') = S^{(a+ir)n}(x, y) = \big( T^{(a+ir)n} x, f(T^{(a+ir)n} x) \big) \in E$$
for all $i = 0, \ldots, k-1$, and the claim follows.
Remark 1 One can also phrase this argument in terms of Lemma 3: the set of return times
$$B := \{ n: T^n x \in A \}$$
can be partitioned into the classes
$$B_y := \{ n \in B: S^{-n}( T^n x, f(T^n x) ) = (x, y) \}, \quad y \in Y,$$
where $\Gamma := \{ (x, f(x)): x \in A \}$ is the graph of $f$, used to select one point of $E$ in each fibre. The multiple recurrence property for $X$ ensures that $B$ contains arbitrarily long arithmetic progressions, and so therefore one of the $B_y$ must also, which gives the multiple recurrence property for $X \ltimes_\rho Y$.
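One can watch the proof of Proposition 2 in action on a small example. In the sketch below (the base, fibre, cocycle, and target set are all arbitrary toy choices of mine), the base is the cyclic shift on ${\bf Z}/5{\bf Z}$, the fibre is ${\bf Z}/2{\bf Z}$, and multiple recurrence into a one-point set is found by direct search:

```python
# A toy finite extension: base = cyclic shift on Z/5, fibre = Z/2,
# cocycle flips the fibre exactly when the base point is 0.  We verify
# multiple recurrence for the set E = {(0, 0)} by exhaustive search.

B, K = 5, 2

def T(x, y):
    """Skew product shift: advance the base, flip the fibre at x == 0."""
    return ((x + 1) % B, (y + (1 if x == 0 else 0)) % K)

def iterate(p, n):
    for _ in range(n):
        p = T(*p)
    return p

E = {(0, 0)}
k = 3   # look for p, T^n p, T^{2n} p all in E

n = next(n for n in range(1, 100)
         if all(iterate((0, 0), i * n) in E for i in range(k)))
print(n)   # 10
```

The orbit of $(0,0)$ has period $10$ (five base steps per fibre flip, two flips to return), so the first recurrence time is $n = 10$.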
Remark 2 Compact extensions can be viewed as a generalisation of finite extensions, in which the fibres are no longer finite sets, but are themselves measure spaces obeying an additional property, which roughly speaking asserts that for many functions $f$ on the extension, the shifts $T^n f$ of $f$ behave in an almost periodic fashion on most fibres, so that the orbits $\{ T^n f \}$ become totally bounded on each fibre. This total boundedness allows one to obtain an analogue of the above colouring map $i \mapsto y_i$ to which van der Waerden’s theorem can be applied.
Let $G$ be an abelian countable discrete group. A measure-preserving $G$-system $X = (X, {\mathcal X}, \mu, (T_g)_{g \in G})$ (or $G$-system for short) is a probability space $(X, {\mathcal X}, \mu)$, equipped with a measure-preserving action $(T_g)_{g \in G}$ of the group $G$, thus
$$\mu( T_g(E) ) = \mu(E)$$
for all $g \in G$ and $E \in {\mathcal X}$, and
$$T_g T_h = T_{g+h}$$
for all $g, h \in G$, with $T_0$ equal to the identity map. Classically, ergodic theory has focused on the cyclic case $G = {\bf Z}$ (in which the $T_n$ are iterates $T^n$ of a single map $T = T_1$, with elements of $G$ being interpreted as a time parameter), but one can certainly consider actions of other groups also (including continuous or non-abelian groups).
A $G$-system is said to be strongly $2$-mixing, or strongly mixing for short, if one has
$$\mu( A \cap T_g B ) \to \mu(A) \mu(B) \hbox{ as } g \to \infty$$
for all $A, B \in {\mathcal X}$, where the convergence is with respect to the one-point compactification of $G$ (thus, for every $\varepsilon > 0$, there exists a compact (hence finite) subset $K$ of $G$ such that $|\mu(A \cap T_g B) - \mu(A)\mu(B)| \leq \varepsilon$ for all $g \notin K$).
Similarly, we say that a $G$-system is strongly $3$-mixing if one has
$$\mu( A \cap T_g B \cap T_h C ) \to \mu(A) \mu(B) \mu(C) \hbox{ as } g, h, h-g \to \infty$$
for all $A, B, C \in {\mathcal X}$, thus for every $\varepsilon > 0$, there exists a finite subset $K$ of $G$ such that
$$|\mu( A \cap T_g B \cap T_h C ) - \mu(A)\mu(B)\mu(C)| \leq \varepsilon$$
whenever $g$, $h$, $h-g$ all lie outside $K$.
It is obvious that a strongly $3$-mixing system is necessarily strongly $2$-mixing. In the case of ${\bf Z}$-systems, it has been an open problem for some time, due to Rohlin, whether the converse is true:
Problem 1 (Rohlin’s problem) Is every strongly mixing ${\bf Z}$-system necessarily strongly $3$-mixing?
This is a surprisingly difficult problem. In the positive direction, a routine application of the Cauchy-Schwarz inequality (via van der Corput’s inequality) shows that every strongly mixing system is weakly $3$-mixing, which roughly speaking means that $\mu(A \cap T_g B \cap T_h C)$ converges to $\mu(A)\mu(B)\mu(C)$ for most $g, h$. Indeed, every weakly mixing system is in fact weakly mixing of all orders; see for instance this blog post of Carlos Matheus, or these lecture notes of myself. So the problem is to exclude the possibility of correlation between $A$, $T_g B$, and $T_h C$ for a small but non-trivial number of pairs $(g, h)$.
It is also known that the answer to Rohlin’s problem is affirmative for rank one transformations (a result of Kalikow) and for shifts with purely singular continuous spectrum (a result of Host; note that strongly mixing systems cannot have any non-trivial point spectrum). Indeed, any counterexample to the problem, if it exists, is likely to be highly pathological.
In the other direction, Rohlin’s problem is known to have a negative answer for ${\bf Z}^2$-systems, by a well-known counterexample of Ledrappier which can be described as follows. One can view a ${\bf Z}^2$-system as being essentially equivalent to a stationary process $(x_n)_{n \in {\bf Z}^2}$ of random variables $x_n$ in some range space $\Omega$ indexed by ${\bf Z}^2$, with $X$ being $\Omega^{{\bf Z}^2}$ with the obvious shift map
$$T_g (x_n)_{n \in {\bf Z}^2} := (x_{n-g})_{n \in {\bf Z}^2}.$$
In Ledrappier’s example, the $x_n$ take values in the finite field ${\bf F}_2$ of two elements, and are selected uniformly at random subject to the “Pascal’s triangle” linear constraints
$$x_n + x_{n+e_1} + x_{n+e_2} = 0,$$
where $e_1, e_2$ are the standard generators of ${\bf Z}^2$. A routine application of the Kolmogorov extension theorem allows one to build such a process. The point is that due to the properties of Pascal’s triangle modulo $2$ (known as Sierpinski’s triangle), one has
$$x_n + x_{n + 2^k e_1} + x_{n + 2^k e_2} = 0$$
for all powers of two $2^k$. This is enough to destroy strong $3$-mixing, because it shows a strong correlation between $x_n$, $x_{n + 2^k e_1}$, and $x_{n + 2^k e_2}$ for arbitrarily large $k$ and randomly chosen $n$. On the other hand, one can still show that any two of these random variables become asymptotically uncorrelated as their separation goes to infinity, giving strong $2$-mixing. Unfortunately, there are significant obstructions to converting Ledrappier’s example from a ${\bf Z}^2$-system to a ${\bf Z}$-system, as pointed out by de la Rue.
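The Sierpinski-triangle fact underlying this correlation is easy to check numerically: in row $2^k$ of Pascal’s triangle modulo $2$, only the two endpoint entries are odd (a consequence of Lucas’ theorem), so iterating the basic constraint $2^k$ times relates only the three variables $x_n$, $x_{n + 2^k e_1}$, $x_{n + 2^k e_2}$. A quick verification:

```python
from math import comb

# In row 2^k of Pascal's triangle mod 2, only the endpoint entries
# binomial(2^k, 0) and binomial(2^k, 2^k) are odd; all interior entries
# are even.  This is the mechanism behind Ledrappier's correlation.

def odd_entries(n):
    """Indices j with binomial(n, j) odd."""
    return [j for j in range(n + 1) if comb(n, j) % 2 == 1]

for k in range(1, 8):
    assert odd_entries(2 ** k) == [0, 2 ** k]
print("rows 2^k of Pascal's triangle mod 2 are odd only at the endpoints")
```

By contrast, rows whose index is not a power of two generally have interior odd entries, so no such three-term constraint survives for generic shifts.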
In this post, I would like to record a “finite field” variant of Ledrappier’s construction, in which ${\bf Z}$ is replaced by the function field ring ${\bf F}_3[t]$, which is a “dyadic” (or more precisely, “triadic”) model for the integers (cf. this earlier blog post of mine). In other words:
Theorem 2 There exists an ${\bf F}_3[t]$-system that is strongly $2$-mixing but not strongly $3$-mixing.
for all and all . Again, this system is manifestly not strongly $3$-mixing, but can be shown to be strongly $2$-mixing; I give details below the fold.
As I discussed in this previous post, in many cases the dyadic model serves as a good guide for the non-dyadic model. However, in this case there is a curious rigidity phenomenon that seems to prevent Ledrappier-type examples from being transferable to the one-dimensional non-dyadic setting; once one restores the Archimedean nature of the underlying group, the constraints (1) not only reinforce each other strongly, but also force so much linearity on the system that one loses the strong mixing property.