
(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.

The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

(Discrete) | (Continuous) | (Limit method)
Ramsey theory | Topological dynamics | Compactness
Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle
Graph/hypergraph regularity | Measure theory | Graph limits
Polynomial regularity | Linear algebra | Ultralimits
Structural decompositions | Hilbert space geometry | Ultralimits
Fourier analysis | Spectral theory | Direct and inverse limits
Quantitative algebraic geometry | Algebraic geometry | Schemes
Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits
Approximate group theory | Topological group theory | Model theory

As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

  • Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects {x_n} in a common space {X}, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object {\lim_{n \rightarrow \infty} x_n}, which remains in the same space, and is “close” to many of the original objects {x_n} with respect to the given metric or topology.
  • Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects {x_n} in a category {X}, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit {\varinjlim x_n} or the inverse limit {\varprojlim x_n} of these objects, which is another object in the same category {X}, and is connected to the original objects {x_n} by various morphisms.
  • Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects {x_{\bf n}} or of spaces {X_{\bf n}}, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, {X_{\bf n}} might be groups and {x_{\bf n}} might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}} or a new space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}, which is still a model of the same language (e.g. if the spaces {X_{\bf n}} were all groups, then the limiting space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} is an abelian group, then the {X_{\bf n}} will also be abelian groups for many {{\bf n}}.)

The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects {x_{\bf n}} to all lie in a common space {X} in order to form an ultralimit {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; they are permitted to lie in different spaces {X_{\bf n}}; this is more natural in many discrete contexts, e.g. when considering graphs on {{\bf n}} vertices in the limit when {{\bf n}} goes to infinity. Also, no convergence properties on the {x_{\bf n}} are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces {X_{\bf n}} involved are required in order to construct the ultraproduct.
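
For the reader’s convenience, let us briefly recall the construction in the form used in this post; this is the standard definition, recorded here for reference. Fix a non-principal ultrafilter {\alpha} on the natural numbers {{\bf N}}. Given a sequence of non-empty spaces {X_{\bf n}}, the ultraproduct {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} is the space of all sequences {(x_{\bf n})_{{\bf n} \in {\bf N}}} with {x_{\bf n} \in X_{\bf n}}, with two sequences {(x_{\bf n})} and {(y_{\bf n})} identified if they agree for {\alpha}-most {{\bf n}}, that is to say if

\displaystyle  \{ {\bf n} \in {\bf N}: x_{\bf n} = y_{\bf n} \} \in \alpha.

The equivalence class of {(x_{\bf n})} is denoted {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}} and called the ultralimit of the {x_{\bf n}}; when all the {X_{\bf n}} are equal to a single space {X}, the ultraproduct is known as an ultrapower of {X}.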

With so few requirements on the objects {x_{\bf n}} or spaces {X_{\bf n}}, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly well suited for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the {x_{\bf n}}, will be exactly obeyed by the limit object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking the contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”). To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.
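
To state the first of these properties slightly more precisely (in its standard form, recorded here for reference): Łoś’s theorem asserts that for any first-order formula {\phi} in the relevant language, the statement {\phi(\lim_{{\bf n} \rightarrow \alpha} x_{\bf n})} holds in the ultraproduct {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} if and only if {\phi(x_{\bf n})} holds in {X_{\bf n}} for {\alpha}-most {{\bf n}}, i.e. if and only if

\displaystyle  \{ {\bf n} \in {\bf N}: X_{\bf n} \models \phi(x_{\bf n}) \} \in \alpha.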

Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.

Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.


Tamar Ziegler and I have just uploaded to the arXiv our joint paper “A multi-dimensional Szemerédi theorem for the primes via a correspondence principle“. This paper is related to an earlier result of Ben Green and myself in which we established that the primes contain arbitrarily long arithmetic progressions. Actually, in that paper we proved a more general result:

Theorem 1 (Szemerédi’s theorem in the primes) Let {A} be a subset of the primes {{\mathcal P}} of positive relative density, thus {\limsup_{N \rightarrow \infty} \frac{|A \cap [N]|}{|{\mathcal P} \cap [N]|} > 0}. Then {A} contains arbitrarily long arithmetic progressions.

This result was based in part on an earlier paper of Green that handled the case of progressions of length three. With the primes replaced by the integers, this is of course the famous theorem of Szemerédi.

Szemerédi’s theorem has now been generalised in many different directions. One of these is the multidimensional Szemerédi theorem of Furstenberg and Katznelson, who used ergodic-theoretic techniques to show that any dense subset of {{\bf Z}^d} necessarily contained infinitely many constellations of any prescribed shape. Our main result is to relativise that theorem to the primes as well:

Theorem 2 (Multidimensional Szemerédi theorem in the primes) Let {d \geq 1}, and let {A} be a subset of the {d^{th}} Cartesian power {{\mathcal P}^d} of the primes of positive relative density, thus

\displaystyle  \limsup_{N \rightarrow \infty} \frac{|A \cap [N]^d|}{|{\mathcal P}^d \cap [N]^d|} > 0.

Then for any {v_1,\ldots,v_k \in {\bf Z}^d}, {A} contains infinitely many “constellations” of the form {a+r v_1, \ldots, a + rv_k} with {a \in {\bf Z}^d} and {r} a positive integer.

In the case when {A} is itself a Cartesian product of one-dimensional sets (in particular, if {A} is all of {{\mathcal P}^d}), this result already follows from Theorem 1, but there does not seem to be a similarly easy argument to deduce the general case of Theorem 2 from previous results. Simultaneously with this paper, an independent proof of Theorem 2 using a somewhat different method has been established by Cook, Magyar, and Titichetrakun.

The result is reminiscent of an earlier result of mine on finding constellations in the Gaussian primes (or dense subsets thereof). That paper followed closely the arguments of my original paper with Ben Green, namely, it first enclosed (a W-tricked version of) the primes or Gaussian primes (in a sieve-theoretic sense) by a slightly larger set (or more precisely, a weight function {\nu}) of almost primes or almost Gaussian primes, which one could then verify (using methods closely related to the sieve-theoretic methods in the ongoing Polymath8 project) to obey certain pseudorandomness conditions, known as the linear forms condition and the correlation condition. Very roughly speaking, these conditions assert statements of the following form: if {n} is a randomly selected integer, then the events that {n+h_1,\ldots,n+h_k} are each almost prime (or almost Gaussian prime) are approximately independent for most choices of {h_1,\ldots,h_k}. Once these conditions are satisfied, one can then run a transference argument (initially based on ergodic-theory methods, but nowadays there are simpler transference results based on the Hahn-Banach theorem, due to Gowers and Reingold-Trevisan-Tulsiani-Vadhan) to obtain relative Szemerédi-type theorems from their absolute counterparts.
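
For orientation, the linear forms condition is (very roughly speaking) an assertion of the following shape; the precise formulation, with its various quantitative parameters, is in the papers cited above. For any small fixed collection {\psi_1,\ldots,\psi_t: {\bf Z}^d \rightarrow {\bf Z}} of affine-linear forms, no two of which are parallel, the weight {\nu} should satisfy

\displaystyle  \mathop{\bf E}_{n \in [N]^d} \nu(\psi_1(n)) \ldots \nu(\psi_t(n)) = 1 + o(1),

which is what one would expect if {\nu} behaved like (a suitably normalised version of) a random weight of comparable density. The correlation condition gives a related, but less precise, control on products {\nu(n+h_1) \ldots \nu(n+h_m)}, in which the upper bound is allowed to degrade for exceptional choices of the shifts {h_1,\ldots,h_m} so long as it remains bounded on average.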

However, when one tries to adapt these arguments to sets such as {{\mathcal P}^2}, a new difficulty occurs: the natural analogue of the almost primes would be the Cartesian square {{\mathcal A}^2} of the almost primes – pairs {(n,m)} whose entries are both almost primes. (Actually, for technical reasons, one does not work directly with a set of almost primes, but would instead work with a weight function such as {\nu(n) \nu(m)} that is concentrated on a set such as {{\mathcal A}^2}, but let me ignore this distinction for now.) However, this set {{\mathcal A}^2} does not enjoy as many pseudorandomness conditions as one would need for a direct application of the transference strategy to work. More specifically, given any fixed {h, k}, and random {(n,m)}, the four events

\displaystyle  (n,m) \in {\mathcal A}^2

\displaystyle  (n+h,m) \in {\mathcal A}^2

\displaystyle  (n,m+k) \in {\mathcal A}^2

\displaystyle  (n+h,m+k) \in {\mathcal A}^2

do not behave independently (as they would if {{\mathcal A}^2} were replaced for instance by the Gaussian almost primes), because any three of these events imply the fourth (indeed, membership of a pair in {{\mathcal A}^2} factors into the two separate events that each coordinate lies in {{\mathcal A}}, and any three of the above events already force all four of the single-coordinate events {n, n+h, m, m+k \in {\mathcal A}}). This blocks the transference strategy for constellations which contain right angles (e.g. constellations of the form {(n,m), (n+r,m), (n,m+r)}) as such constellations soon turn into rectangles such as the one above after applying Cauchy-Schwarz a few times. (But a few years ago, Cook and Magyar showed that if one restricted attention to constellations which were in general position in the sense that any coordinate hyperplane contained at most one element in the constellation, then this obstruction does not occur and one can establish Theorem 2 in this case through the transference argument.) It’s worth noting that very recently, Conlon, Fox, and Zhao have succeeded in removing one of the pseudorandomness conditions (namely the correlation condition) from the transference principle, leaving only the linear forms condition as the remaining pseudorandomness condition to be verified, but unfortunately this does not completely solve the above problem because the linear forms condition also fails for {{\mathcal A}^2} (or for weights concentrated on {{\mathcal A}^2}) when applied to rectangular patterns.

There are now two ways known to get around this problem and establish Theorem 2 in full generality. The approach of Cook, Magyar, and Titichetrakun proceeds by starting with one of the known proofs of the multidimensional Szemerédi theorem – namely, the proof that proceeds through hypergraph regularity and hypergraph removal – and attaching pseudorandom weights directly within the proof itself, rather than trying to add the weights to the result of that proof through a transference argument. (A key technical issue is that weights have to be added to all the levels of the hypergraph – not just the vertices and top-order edges – in order to circumvent the failure of naive pseudorandomness.) As one has to modify the entire proof of the multidimensional Szemerédi theorem, rather than use that theorem as a black box, the Cook-Magyar-Titichetrakun argument is lengthier than ours; on the other hand, it is more general and does not rely on some difficult theorems about primes that are used in our paper.

In our approach, we continue to use the multidimensional Szemerédi theorem (or more precisely, the equivalent theorem of Furstenberg and Katznelson concerning multiple recurrence for commuting shifts) as a black box. The difference is that instead of using a transference principle to connect the relative multidimensional Szemerédi theorem we need to the multiple recurrence theorem, we instead proceed by a version of the Furstenberg correspondence principle, similar to the one that connects the absolute multidimensional Szemerédi theorem to the multiple recurrence theorem. I had discovered this approach many years ago in an unpublished note, but had abandoned it because it required an infinite number of linear forms conditions (in contrast to the transference technique, which only needed a finite number of linear forms conditions and (until the recent work of Conlon-Fox-Zhao) a correlation condition). The reason for this infinite number of conditions is that the correspondence principle has to build a probability measure on an entire {\sigma}-algebra; for this, it is not enough to specify the measure {\mu(A)} of a single set such as {A}, but one also has to specify the measure {\mu( T^{n_1} A \cap \ldots \cap T^{n_m} A)} of “cylinder sets” such as {T^{n_1} A \cap \ldots \cap T^{n_m} A} where {m} could be arbitrarily large. The larger {m} gets, the more linear forms conditions one needs to keep the correspondence under control.

With the sieve weights {\nu} we were using at the time, standard sieve theory methods could indeed provide a finite number of linear forms conditions, but not an infinite number, so my idea was abandoned. However, with my later work with Green and Ziegler on linear equations in primes (and related work on the Mobius-nilsequences conjecture and the inverse conjecture on the Gowers norm), Tamar and I realised that the primes themselves obey an infinite number of linear forms conditions, so one can basically use the primes (or a proxy for the primes, such as the von Mangoldt function {\Lambda}) as the enveloping sieve weight, rather than a classical sieve. Thus my old idea of using the Furstenberg correspondence principle to transfer Szemerédi-type theorems to the primes could actually be realised. In the one-dimensional case, this simply produces a much more complicated proof of Theorem 1 than the existing one; but it turns out that the argument works as well in higher dimensions and yields Theorem 2 relatively painlessly, except for the fact that it needs the results on linear equations in primes, the known proofs of which are extremely lengthy (and also require some of the transference machinery mentioned earlier). The problem of correlations in rectangles is avoided in the correspondence principle approach because one can compensate for such correlations by performing a suitable weighted limit to compute the measure {\mu( T^{n_1} A \cap \ldots \cap T^{n_m} A)} of cylinder sets, with each {m} requiring a different weighted correction. (This may be related to the Cook-Magyar-Titichetrakun strategy of weighting all of the facets of the hypergraph in order to recover pseudorandomness, although our contexts are rather different.)

Vitaly Bergelson, Tamar Ziegler, and I have just uploaded to the arXiv our joint paper “Multiple recurrence and convergence results associated to {{\bf F}_{p}^{\omega}}-actions“. This paper is primarily concerned with limit formulae in the theory of multiple recurrence in ergodic theory. Perhaps the most basic formula of this type is the mean ergodic theorem, which (among other things) asserts that if {(X,{\mathcal X}, \mu,T)} is a measure-preserving {{\bf Z}}-system (which, in this post, means that {(X,{\mathcal X}, \mu)} is a probability space and {T: X \rightarrow X} is measure-preserving and invertible, thus giving an action {(T^n)_{n \in {\bf Z}}} of the integers), and {f,g \in L^2(X,{\mathcal X}, \mu)} are functions, and {X} is ergodic (which means that {L^2(X,{\mathcal X}, \mu)} contains no {T}-invariant functions other than the constants (up to almost everywhere equivalence, of course)), then the average

\displaystyle  \frac{1}{N} \sum_{n=1}^N \int_X f(x) g(T^n x)\ d\mu \ \ \ \ \ (1)

converges as {N \rightarrow \infty} to the expression

\displaystyle  (\int_X f(x)\ d\mu) (\int_X g(x)\ d\mu);

see e.g. this previous blog post. Informally, one can interpret this limit formula as an equidistribution result: if {x} is drawn at random from {X} (using the probability measure {\mu}), and {n} is drawn at random from {\{1,\ldots,N\}} for some large {N}, then the pair {(x, T^n x)} becomes uniformly distributed in the product space {X \times X} (using product measure {\mu \times \mu}) in the limit as {N \rightarrow \infty}.
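
To make this equidistribution heuristic concrete, here is a small numerical illustration of my own (not taken from the paper under discussion): we take the ergodic system to be the unit circle {X = {\bf R}/{\bf Z}} with the irrational rotation {Tx = x+\alpha}, together with two test functions of known mean, and watch the averages (1) approach the product of those means. The choices of {\alpha}, of the test functions, and of the discretisation grid are assumptions made purely for illustration.

```python
import numpy as np

# Irrational rotation T x = x + alpha on the circle R/Z (an illustrative choice).
alpha = np.sqrt(2) - 1
x = np.linspace(0.0, 1.0, 2048, endpoint=False)   # grid used to approximate integrals over X

f = lambda t: 1.0 + np.cos(2 * np.pi * t)          # integral of f over X is 1
g = lambda t: 0.5 + np.sin(2 * np.pi * t)          # integral of g over X is 0.5

for N in (10, 100, 1000, 10000):
    # the ergodic average (1/N) sum_n int_X f(x) g(T^n x) dx, approximated on the grid
    avg = np.mean([np.mean(f(x) * g((x + n * alpha) % 1.0)) for n in range(1, N + 1)])
    print(N, avg)   # approaches (int f)(int g) = 1 * 0.5 = 0.5 as N grows
```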

If we allow {(X,\mu)} to be non-ergodic, then we still have a limit formula, but it is a bit more complicated. Let {{\mathcal X}^T} be the {T}-invariant measurable sets in {{\mathcal X}}; the {{\bf Z}}-system {(X, {\mathcal X}^T, \mu, T)} can then be viewed as a factor of the original system {(X, {\mathcal X}, \mu, T)}, which is equivalent (in the sense of measure-preserving systems) to a trivial system {(Z_0, {\mathcal Z}_0, \mu_{Z_0}, 1)} (known as the invariant factor) in which the shift is trivial. There is then a projection map {\pi_0: X \rightarrow Z_0} to the invariant factor which is a factor map, and the average (1) converges in the limit to the expression

\displaystyle  \int_{Z_0} (\pi_0)_* f(z) (\pi_0)_* g(z)\ d\mu_{Z_0}(z), \ \ \ \ \ (2)

where {(\pi_0)_*: L^2(X,{\mathcal X},\mu) \rightarrow L^2(Z_0,{\mathcal Z}_0,\mu_{Z_0})} is the pushforward map associated to the map {\pi_0: X \rightarrow Z_0}; see e.g. this previous blog post. We can interpret this as an equidistribution result. If {(x,T^n x)} is a pair as before, then we no longer expect complete equidistribution in {X \times X} in the non-ergodic case, because there are now non-trivial constraints relating {x} with {T^n x}; indeed, for any {T}-invariant function {f: X \rightarrow {\bf C}}, we have the constraint {f(x) = f(T^n x)}; putting all these constraints together we see that {\pi_0(x) = \pi_0(T^n x)} (for almost every {x}, at least). The limit (2) can be viewed as an assertion that this constraint {\pi_0(x) = \pi_0(T^n x)} is in some sense the “only” constraint between {x} and {T^n x}, and that the pair {(x,T^n x)} is uniformly distributed relative to this constraint.

Limit formulae are known for multiple ergodic averages as well, although the statement becomes more complicated. For instance, consider the expression

\displaystyle  \frac{1}{N} \sum_{n=1}^N \int_X f(x) g(T^n x) h(T^{2n} x)\ d\mu \ \ \ \ \ (3)

for three functions {f,g,h \in L^\infty(X, {\mathcal X}, \mu)}; this is analogous to the combinatorial task of counting length three progressions in various sets. For simplicity we assume the system {(X,{\mathcal X},\mu,T)} to be ergodic. Naively one might expect this limit to then converge to

\displaystyle  (\int_X f\ d\mu) (\int_X g\ d\mu) (\int_X h\ d\mu)

which would roughly speaking correspond to an assertion that the triplet {(x,T^n x, T^{2n} x)} is asymptotically equidistributed in {X \times X \times X}. However, even in the ergodic case there can be additional constraints on this triplet that cannot be seen at the level of the individual pairs {(x,T^n x)}, {(x, T^{2n} x)}. The key obstruction here is that of eigenfunctions of the shift {T: X \rightarrow X}, that is to say non-trivial functions {f: X \rightarrow S^1} that obey the eigenfunction equation {Tf = \lambda f} almost everywhere for some constant (or {T}-invariant) {\lambda}. Each such eigenfunction generates a constraint

\displaystyle  f(x) \overline{f(T^n x)}^2 f(T^{2n} x) = 1 \ \ \ \ \ (4)

tying together {x}, {T^n x}, and {T^{2n} x}. However, it turns out that these are in some sense the only constraints on {x,T^n x, T^{2n} x} that are relevant for the limit (3). More precisely, if one sets {{\mathcal X}_1} to be the sub-{\sigma}-algebra of {{\mathcal X}} generated by the eigenfunctions of {T}, then it turns out that the factor {(X, {\mathcal X}_1, \mu, T)} is isomorphic to a shift system {(Z_1, {\mathcal Z}_1, \mu_{Z_1}, x \mapsto x+\alpha)} known as the Kronecker factor, for some compact abelian group {Z_1 = (Z_1,+)} and some (irrational) shift {\alpha \in Z_1}; the factor map {\pi_1: X \rightarrow Z_1} pushes eigenfunctions forward to (affine) characters on {Z_1}. It is then known that the limit of (3) is

\displaystyle  \int_\Sigma (\pi_1)_* f(x_1) (\pi_1)_* g(x_2) (\pi_1)_* h(x_3)\ d\mu_\Sigma(x_1,x_2,x_3)

where {\Sigma \subset Z_1^3} is the closed subgroup

\displaystyle  \Sigma = \{ (x_1,x_2,x_3) \in Z_1^3: x_1-2x_2+x_3=0 \}

and {\mu_\Sigma} is the Haar probability measure on {\Sigma}; see this previous blog post. The equation {x_1-2x_2+x_3=0} defining {\Sigma} corresponds to the constraint (4) mentioned earlier. Among other things, this limit formula implies Roth’s theorem, which in the context of ergodic theory is the assertion that the limit (or at least the limit inferior) of (3) is positive when {f=g=h} is non-negative and not identically vanishing.
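
As a quick check of where the constraint (4) comes from: if {Tf = \lambda f} with {f} taking values in {S^1}, and (as one may take in the ergodic case) {\lambda} a constant of unit modulus, then {f(T^n x) = \lambda^n f(x)} for all {n}, and hence

\displaystyle  f(x) \overline{f(T^n x)}^2 f(T^{2n} x) = (\lambda \overline{\lambda})^{2n} f(x)^2 \overline{f(x)}^2 = |\lambda|^{4n} |f(x)|^4 = 1.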

If one considers a quadruple average

\displaystyle  \frac{1}{N} \sum_{n=1}^N \int_X f(x) g(T^n x) h(T^{2n} x) k(T^{3n} x)\ d\mu \ \ \ \ \ (5)

(analogous to counting length four progressions), then the situation becomes more complicated still, even in the ergodic case. In addition to the (linear) eigenfunctions that already showed up in the computation of the triple average (3), a new type of constraint also arises from quadratic eigenfunctions {f: X \rightarrow S^1}, which obey an eigenfunction equation {Tf = \lambda f} in which {\lambda} is no longer constant, but is now a linear eigenfunction. For such functions, {f(T^n x)} behaves quadratically in {n}, and a short computation reveals a constraint

\displaystyle  f(x) \overline{f(T^n x)}^3 f(T^{2n} x)^3 \overline{f(T^{3n} x)} = 1 \ \ \ \ \ (6)

between {x}, {T^n x}, {T^{2n} x}, and {T^{3n} x} that is not detected at the triple average level. As it turns out, this is not the only type of constraint relevant for (5); there is a more general class of constraint involving two-step nilsystems which we will not detail here, but see e.g. this previous blog post for more discussion. Nevertheless there is still a similar limit formula to previous examples, involving a special factor {(Z_2, {\mathcal Z}_2, \mu_{Z_2}, S)} which turns out to be an inverse limit of two-step nilsystems; this limit theorem can be extracted from the structural theory in this paper of Host and Kra combined with a limit formula for nilsystems obtained by Lesigne, but will not be reproduced here. The pattern continues to higher averages (and higher step nilsystems); this was first done explicitly by Ziegler, and can also in principle be extracted from the structural theory of Host-Kra combined with nilsystem equidistribution results of Leibman. These sorts of limit formulae can lead to various recurrence results refining Roth’s theorem in various ways; see this paper of Bergelson, Host, and Kra for some examples of this.
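
As with (4), the shape of the constraint (6) can be traced to a finite difference computation (this is the precise sense in which {f(T^n x)} “behaves quadratically in {n}”): the exponents {1, -3, 3, -1} appearing in (6) are the alternating binomial coefficients {(-1)^j \binom{3}{j}}, and if one writes {f(T^m x) = e^{2\pi i Q(m)}} with {Q} a quadratic polynomial in {m} (for fixed {x}), then (6) is the exponentiated form of the identity

\displaystyle  Q(0) - 3 Q(n) + 3 Q(2n) - Q(3n) = 0,

which expresses the vanishing of the third finite difference of a quadratic polynomial along the arithmetic progression {0, n, 2n, 3n}.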

The above discussion was concerned with {{\bf Z}}-systems, but one can adapt much of the theory to measure-preserving {G}-systems for other discrete countable abelian groups {G}, in which one now has a family {(T_g)_{g \in G}} of shifts indexed by {G} rather than a single shift, obeying the compatibility relation {T_{g+h}=T_g T_h}. The role of the intervals {\{1,\ldots,N\}} in this more general setting is replaced by that of Følner sequences. For arbitrary countable abelian {G}, the theory for double averages (1) and triple averages (3) is essentially identical to the {{\bf Z}}-system case. But when one turns to quadruple and higher limits, the situation becomes more complicated (and, for arbitrary {G}, still not fully understood). However, one model case which is now well understood is the finite field case when {G = {\bf F}_p^\omega = \bigcup_{n=1}^\infty {\bf F}_p^n} is an infinite-dimensional vector space over a finite field {{\bf F}_p} (with the finite subspaces {{\bf F}_p^n} then being a good choice for the Følner sequence). Here, the analogue of the structural theory of Host and Kra was worked out by Vitaly, Tamar, and myself in these previous papers (treating the high characteristic and low characteristic cases respectively). In the finite field setting, it turns out that nilsystems no longer appear, and one only needs to deal with linear, quadratic, and higher order eigenfunctions (known collectively as phase polynomials). It is then natural to look for a limit formula that asserts, roughly speaking, that if {x} is drawn at random from an {{\bf F}_p^\omega}-system and {n} drawn randomly from a large subspace of {{\bf F}_p^\omega}, then the only constraints between {x, T^n x, \ldots, T^{(p-1)n} x} are those that arise from phase polynomials. The main theorem of this paper establishes this limit formula (which, again, is a little complicated to state explicitly and will not be done here). In particular, we establish for the first time that the limit actually exists (a result which, for {{\bf Z}}-systems, was one of the main results of this paper of Host and Kra).

As a consequence, we can recover finite field analogues of most of the results of Bergelson-Host-Kra, though interestingly some of the counterexamples demonstrating sharpness of their results for {{\bf Z}}-systems (based on Behrend set constructions) do not seem to be present in the finite field setting (cf. this previous blog post on the cap set problem). In particular, we are able to largely settle the question of when one has a Khintchine-type theorem that asserts that for any measurable set {A} in an ergodic {{\bf F}_p^\omega}-system and any {\epsilon>0}, one has

\displaystyle  \mu( T_{c_1 n} A \cap \ldots \cap T_{c_k n} A ) > \mu(A)^k - \epsilon

for a syndetic set of {n}, where {c_1,\ldots,c_k \in {\bf F}_p} are distinct residue classes. It turns out that Khintchine-type theorems always hold for {k=1,2,3} (and for {k=1,2} ergodicity is not required), and for {k=4} it holds whenever {c_1,c_2,c_3,c_4} form a parallelogram, but not otherwise (though the counterexample here was such a painful computation that we ended up removing it from the paper, and may end up putting it online somewhere instead), and for larger {k} we could show that the Khintchine property failed for generic choices of {c_1,\ldots,c_k}, though the problem of determining exactly the tuples for which the Khintchine property failed looked to be rather messy and we did not completely settle it.

Among the basic objects of study in combinatorics are finite strings {(a_n)_{n=0}^N} or infinite strings {(a_n)_{n=0}^\infty} of symbols {a_n} from some given alphabet {{\mathcal A}}, which could be either finite or infinite (but which we shall usually take to be compact). For instance, a set {A} of natural numbers can be identified with the infinite string {(1_A(n))_{n=0}^\infty} of {0}s and {1}s formed by the indicator of {A}, e.g. the even numbers can be identified with the string {1010101\ldots} from the alphabet {\{0,1\}}, the multiples of three can be identified with the string {100100100\ldots}, and so forth. One can also consider doubly infinite strings {(a_n)_{n \in {\bf Z}}}, which among other things can be used to describe arbitrary subsets of integers.

On the other hand, the basic object of study in dynamics (and in related fields, such as ergodic theory) is that of a dynamical system {(X,T)}, that is to say a space {X} together with a shift map {T: X \rightarrow X} (which is often assumed to be invertible, although one can certainly study non-invertible dynamical systems as well). One often adds additional structure to this dynamical system, such as topological structure (giving rise to topological dynamics), measure-theoretic structure (giving rise to ergodic theory), complex structure (giving rise to complex dynamics), and so forth. A dynamical system gives rise to an action of the natural numbers {{\bf N}} on the space {X} by using the iterates {T^n: X \rightarrow X} of {T} for {n=0,1,2,\ldots}; if {T} is invertible, we can extend this action to an action of the integers {{\bf Z}} on the same space. One can certainly also consider dynamical systems whose underlying group (or semi-group) is something other than {{\bf N}} or {{\bf Z}} (e.g. one can consider continuous dynamical systems in which the evolution group is {{\bf R}}), but we will restrict attention to the classical situation of {{\bf N}} or {{\bf Z}} actions here.

There is a fundamental correspondence principle connecting the study of strings (or subsets of natural numbers or integers) with the study of dynamical systems. In one direction, given a dynamical system {(X,T)}, an observable {c: X \rightarrow {\mathcal A}} taking values in some alphabet {{\mathcal A}}, and some initial datum {x_0 \in X}, we can first form the forward orbit {(T^n x_0)_{n=0}^\infty} of {x_0}, and then observe this orbit using {c} to obtain an infinite string {(c(T^n x_0))_{n=0}^\infty}. If the shift {T} in this system is invertible, one can extend this infinite string into a doubly infinite string {(c(T^n x_0))_{n \in {\bf Z}}}. Thus we see that every quadruplet {(X,T,c,x_0)} consisting of a dynamical system {(X,T)}, an observable {c}, and an initial datum {x_0} creates an infinite string.

Example 1 If {X} is the three-element set {X = {\bf Z}/3{\bf Z}} with the shift map {Tx := x+1}, {c: {\bf Z}/3{\bf Z} \rightarrow \{0,1\}} is the observable that takes the value {1} at the residue class {0 \hbox{ mod } 3} and zero at the other two classes, and one starts with the initial datum {x_0 = 0 \hbox{ mod } 3}, then the observed string {(c(T^n x_0))_{n=0}^\infty} becomes the indicator {100100100\ldots} of the multiples of three.

In the converse direction, every infinite string {(a_n)_{n=0}^\infty} in some alphabet {{\mathcal A}} arises (in a decidedly non-unique fashion) from a quadruple {(X,T,c,x_0)} in the above fashion. This can be easily seen by the following “universal” construction: take {X} to be the set {X:= {\mathcal A}^{\bf N}} of infinite strings {(b_n)_{n=0}^\infty} in the alphabet {{\mathcal A}}, let {T: X \rightarrow X} be the shift map

\displaystyle  T(b_n)_{n=0}^\infty := (b_{n+1})_{n=0}^\infty,

let {c: X \rightarrow {\mathcal A}} be the observable

\displaystyle  c((b_n)_{n=0}^\infty) := b_0,

and let {x_0 \in X} be the initial point

\displaystyle  x_0 := (a_n)_{n=0}^\infty.

Then one easily sees that the observed string {(c(T^n x_0))_{n=0}^\infty} is nothing more than the original string {(a_n)_{n=0}^\infty}. Note also that this construction can easily be adapted to doubly infinite strings by using {{\mathcal A}^{\bf Z}} instead of {{\mathcal A}^{\bf N}}, at which point the shift map {T} now becomes invertible. An important variant of this construction also attaches an invariant probability measure to {X} that is associated to the limiting density of various sets associated to the string {(a_n)_{n=0}^\infty}, and leads to the Furstenberg correspondence principle, discussed for instance in these previous blog posts. Such principles allow one to rigorously pass back and forth between the combinatorics of strings and the dynamics of systems; for instance, Furstenberg famously used his correspondence principle to demonstrate the equivalence of Szemerédi’s theorem on arithmetic progressions with what is now known as the Furstenberg multiple recurrence theorem in ergodic theory.
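
As a toy illustration of the universal construction (a sketch of my own, not drawn from any particular source), one can model infinite strings lazily as functions {n \mapsto b_n}, with the shift map and the observable acting exactly as described above; running the construction on the multiples-of-three string recovers that string, as claimed.

```python
# A minimal sketch of the universal construction, with infinite strings in the
# alphabet {0,1} represented lazily as Python functions n -> b_n.
def shift(s):                 # the shift map T: (b_n)_{n>=0} -> (b_{n+1})_{n>=0}
    return lambda n: s(n + 1)

def c(s):                     # the observable: read off the zeroth symbol b_0
    return s(0)

a = lambda n: 1 if n % 3 == 0 else 0   # the string 100100100... (multiples of three)
x, observed = a, []                    # initial datum x_0 = (a_n)_{n>=0}
for _ in range(9):
    observed.append(c(x))
    x = shift(x)
print(observed)               # [1, 0, 0, 1, 0, 0, 1, 0, 0]: the original string is recovered
```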

In the case when the alphabet {{\mathcal A}} is the binary alphabet {\{0,1\}}, and (for technical reasons related to the infamous non-injectivity {0.999\ldots = 1.00\ldots} of the decimal representation system) the string {(a_n)_{n=0}^\infty} does not end with an infinite string of {1}s, then one can reformulate the above universal construction by taking {X} to be the interval {[0,1)}, {T} to be the doubling map {Tx := 2x \hbox{ mod } 1}, {c: X \rightarrow \{0,1\}} to be the observable that takes the value {1} on {[1/2,1)} and {0} on {[0,1/2)} (that is, {c(x)} is the first binary digit of {x}), and {x_0} is the real number {x_0 := \sum_{n=0}^\infty a_n 2^{-n-1}} (that is, {x_0 = 0.a_0a_1\ldots} in binary).
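
Here is the same toy example in the unit-interval model just described (again a sketch of my own): the initial datum is computed from the formula {x_0 = \sum_{n=0}^\infty a_n 2^{-n-1}}, which for the string {100100100\ldots} sums to {x_0 = 4/7}, a point on the three-element cycle through {2/7} mentioned in the next paragraph. Exact rational arithmetic is used because floating point loses one binary digit of accuracy with each application of the doubling map.

```python
from fractions import Fraction

def T(x):                         # the doubling map T x = 2x mod 1
    return (2 * x) % 1

def c(x):                         # the first binary digit of x
    return 1 if x >= Fraction(1, 2) else 0

x, digits = Fraction(4, 7), []    # x_0 = 0.100100100... in binary
for _ in range(12):
    digits.append(c(x))
    x = T(x)
print(digits)                     # [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
```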

The above universal construction is very easy to describe, and is well suited for “generic” strings {(a_n)_{n=0}^\infty} that have no further obvious structure to them, but it often leads to dynamical systems that are much larger and more complicated than is actually needed to produce the desired string {(a_n)_{n=0}^\infty}, and also often obscures some of the key dynamical features associated to that sequence. For instance, to generate the indicator {100100100\ldots} of the multiples of three that were mentioned previously, the above universal construction requires an uncountable space {X} and a dynamics which does not obviously reflect the key features of the sequence such as its periodicity. (Using the unit interval model, the dynamics arise from the orbit of {2/7} under the doubling map, which is a rather artificial way to describe the indicator function of the multiples of three.)

A related aesthetic objection to the universal construction is that of the four components {X,T,c,x_0} of the quadruplet {(X,T,c,x_0)} used to generate the sequence {(a_n)_{n=0}^\infty}, three of the components {X,T,c} are completely universal (in that they do not depend at all on the sequence {(a_n)_{n=0}^\infty}), leaving only the initial datum {x_0} to carry all the distinctive features of the original sequence. While there is nothing wrong with this mathematically, from a conceptual point of view it would make sense for all four components of the quadruplet to be adapted to the sequence, in order to take advantage of the accumulated intuition about various special dynamical systems (and special observables), not just special initial data.

One step in this direction can be made by restricting {X} to the orbit {\{ T^n x_0: n \in {\bf N} \}} of the initial datum {x_0} (actually for technical reasons it is better to restrict to the topological closure {\overline{\{ T^n x_0: n \in {\bf N} \}}} of this orbit, in order to keep {X} compact). For instance, starting with the sequence {100100100\ldots}, the orbit now consists of just three points {100100100\ldots}, {010010010\ldots}, {001001001\ldots}, bringing the system more in line with Example 1. Technically, this is the “optimal” (i.e. minimal) representation of the sequence by a quadruplet {(X,T,c,x_0)}, because it is a factor of any other such representation {(X',T',c',x'_0)} (in the sense that, after restricting {X'} to the orbit closure of {x'_0}, there is a unique map {\pi: X' \rightarrow X} with {T \circ \pi = \pi \circ T'}, {c \circ \pi = c'}, and {x_0 = \pi(x'_0)}). However, from a conceptual point of view this representation is still somewhat unsatisfactory, given that the elements of the system {X} are interpreted as infinite strings rather than elements of a more geometrically or algebraically rich object (e.g. points in a circle, torus, or other homogeneous space).

For general sequences {(a_n)_{n=0}^\infty}, locating relevant geometric or algebraic structure in a dynamical system generating that sequence is an important but very difficult task (see e.g. this paper of Host and Kra, which is more or less devoted to precisely this task in the context of working out what component of a dynamical system controls the multiple recurrence behaviour of that system). However, for specific examples of sequences {(a_n)_{n=0}^\infty}, one can use an informal procedure of educated guesswork in order to produce a more natural-looking quadruple {(X,T,c,x_0)} that generates that sequence. This is not a particularly difficult or deep operation, but I found it very helpful in internalising the intuition behind the correspondence principle. Being non-rigorous, this procedure does not seem to be emphasised in most presentations of the correspondence principle, so I thought I would describe it here.


I’ve just uploaded to the arXiv my joint paper with Vitaly Bergelson, “Multiple recurrence in quasirandom groups“, which is submitted to Geom. Func. Anal. This paper builds upon a paper of Gowers in which he introduced the concept of a quasirandom group, and established some mixing (or recurrence) properties of such groups. A {D}-quasirandom group is a finite group with no non-trivial unitary representations of dimension at most {D}. We will informally refer to a {D}-quasirandom group with the quasirandomness parameter {D} large simply as a “quasirandom group” (more formally, one can work with a sequence of {D_n}-quasirandom groups with {D_n} going to infinity). A typical example of a quasirandom group is {SL_2(F_p)} where {p} is a large prime. Quasirandom groups are discussed in depth in this blog post. One of the key properties of quasirandom groups established in Gowers’ paper is the following “weak mixing” property: if {A, B} are subsets of {G}, then for “almost all” {g \in G}, one has

\displaystyle  \mu( A \cap gB ) \approx \mu(A) \mu(B) \ \ \ \ \ (1)

where {\mu(A) := |A|/|G|} denotes the density of {A} in {G}. Here, we use {x \approx y} to informally represent an estimate of the form {x=y+o(1)} (where {o(1)} is a quantity that goes to zero when the quasirandomness parameter {D} goes to infinity), and “almost all {g \in G}” denotes “for all {g} in a subset of {G} of density {1-o(1)}“. As a corollary, if {A,B,C} have positive density in {G} (by which we mean that {\mu(A)} is bounded away from zero, uniformly in the quasirandomness parameter {D}, and similarly for {B,C}), then (if the quasirandomness parameter {D} is sufficiently large) we can find elements {g, x \in G} such that {g \in A}, {x \in B}, {gx \in C}. In fact we can find approximately {\mu(A)\mu(B)\mu(C) |G|^2} such pairs {(g,x)}. To put it another way: if we choose {g,x} uniformly and independently at random from {G}, then the events {g \in A}, {x \in B}, {gx \in C} are approximately independent (thus the random variable {(g,x,gx) \in G^3} resembles a uniformly distributed random variable on {G^3} in some weak sense). One can also express this mixing property in integral form as

\displaystyle  \int_G \int_G f_1(g) f_2(x) f_3(gx)\ d\mu(g) d\mu(x) \approx (\int_G f_1\ d\mu) (\int_G f_2\ d\mu) (\int_G f_3\ d\mu)

for any bounded functions {f_1,f_2,f_3: G \rightarrow {\bf R}}. (Of course, with {G} being finite, one could replace the integrals here by finite averages if desired.) Or in probabilistic language, we have

\displaystyle  \mathop{\bf E} f_1(g) f_2(x) f_3(gx) \approx \mathop{\bf E} f_1(x_1) f_2(x_2) f_3(x_3)

where {g, x, x_1, x_2, x_3} are drawn uniformly and independently at random from {G}.
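
As a crude numerical sanity check of (1), here is a small computation of my own (not taken from Gowers’ paper): we take {G = SL_2(F_p)} for a smallish prime {p} (by a classical fact, the smallest non-trivial unitary representation of this group has dimension {(p-1)/2}, so these groups become increasingly quasirandom as {p} grows), pick two “structured” dense subsets {A, B} by hand, and check that {\mu(A \cap gB)} stays close to {\mu(A) \mu(B)} for most {g}. The particular prime and sets below are arbitrary illustrative choices; the deviations should shrink as {p} increases.

```python
import itertools, random

p = 13
# Enumerate G = SL_2(F_p) as tuples (a, b, c, d) with ad - bc = 1 mod p.
G = [(a, b, c, d)
     for a, b, c, d in itertools.product(range(p), repeat=4)
     if (a * d - b * c) % p == 1]

def mul(g, h):                        # matrix multiplication mod p
    a, b, c, d = g
    e, f, u, v = h
    return ((a * e + b * u) % p, (a * f + b * v) % p,
            (c * e + d * u) % p, (c * f + d * v) % p)

A = {g for g in G if g[0] < p // 2}   # "top-left entry is small"
B = {g for g in G if g[1] < p // 2}   # "top-right entry is small"
muA, muB = len(A) / len(G), len(B) / len(G)

random.seed(0)
devs = []
for g in random.sample(G, 100):       # a sample of group elements g
    gB = {mul(g, x) for x in B}
    devs.append(abs(len(A & gB) / len(G) - muA * muB))
devs.sort()
# product of densities, then the median and worst observed deviation of mu(A meet gB) from it
print(muA * muB, devs[len(devs) // 2], devs[-1])
```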

As observed in Gowers’ paper, one can iterate this observation to find “parallelopipeds” of any given dimension in dense subsets of {G}. For instance, applying (1) with {A,B,C} replaced by {A \cap hB}, {C \cap hD}, and {E \cap hF}, one can assert (after some relabeling) that for {g,h,x} chosen uniformly and independently at random from {G}, the events {g \in A}, {h \in B}, {gh \in C}, {x \in D}, {gx \in E}, {hx \in F}, {ghx \in H} are approximately independent whenever {A,B,C,D,E,F,H} are dense subsets of {G}; thus the tuple {(g,h,gh,x,gx,hx,ghx)} resembles a uniformly distributed random variable in {G^7} in some weak sense.

However, there are other tuples for which the above iteration argument does not seem to apply. One of the simplest tuples in this vein is the tuple {(g, x, xg, gx)} in {G^4}, when {g, x} are drawn uniformly at random from a quasirandom group {G}. Here, one does not expect the tuple to behave as if it were uniformly distributed in {G^4}, because there is an obvious constraint connecting the last two components {gx, xg} of this tuple: they must lie in the same conjugacy class! In particular, if {A} is a subset of {G} that is the union of conjugacy classes, then the events {gx \in A}, {xg \in A} are perfectly correlated, so that {\mu( gx \in A, xg \in A)} is equal to {\mu(A)} rather than {\mu(A)^2}. Our main result, though, is that in a quasirandom group, this is (approximately) the only constraint on the tuple. More precisely, we have

Theorem 1 Let {G} be a {D}-quasirandom group, and let {g, x} be drawn uniformly at random from {G}. Then for any {f_1,f_2,f_3,f_4: G \rightarrow [-1,1]}, we have

\displaystyle  \mathop{\bf E} f_1(g) f_2(x) f_3(gx) f_4(xg) = \mathop{\bf E} f_1(x_1) f_2(x_2) f_3(x_3) f_4(x_4) + o(1)

where {o(1)} goes to zero as {D \rightarrow \infty}, {x_1,x_2,x_3} are drawn uniformly and independently at random from {G}, and {x_4} is drawn uniformly at random from the conjugates of {x_3} for each fixed choice of {x_1,x_2,x_3}.

This is the probabilistic formulation of the above theorem; one can also phrase the theorem in other formulations (such as an integral formulation), and this is detailed in the paper. This theorem leads to a number of recurrence results; for instance, as a corollary of this result, we have

\displaystyle  \mu(A) \mu(B)^2 - o(1) \leq \mu( A \cap gB \cap Bg ) \leq \mu(A) \mu(B) + o(1)

for almost all {g \in G}, and any dense subsets {A, B} of {G}; the lower and upper bounds are sharp, with the lower bound being attained when {B} is randomly distributed, and the upper bound when {B} is conjugation-invariant.

To me, the more interesting thing here is not the result itself, but how it is proven. Vitaly and I were not able to find a purely finitary way to establish this mixing theorem. Instead, we had to first use the machinery of ultraproducts (as discussed in this previous post) to convert the finitary statement about a quasirandom group to an infinitary statement about a type of infinite group which we call an ultra quasirandom group (basically, an ultraproduct of increasingly quasirandom finite groups). This is analogous to how the Furstenberg correspondence principle is used to convert a finitary combinatorial problem into an infinitary ergodic theory problem.

Ultra quasirandom groups come equipped with a finite, countably additive measure known as Loeb measure {\mu_G}, which is very analogous to the Haar measure of a compact group, except that in the case of ultra quasirandom groups one does not quite have a topological structure that would give compactness. Instead, one has a slightly weaker structure known as a {\sigma}-topology, which is like a topology except that open sets are only closed under countable unions rather than arbitrary ones. There are some interesting measure-theoretic and topological issues regarding the distinction between topologies and {\sigma}-topologies (and between Haar measure and Loeb measure), but for this post it is perhaps best to gloss over these issues and pretend that ultra quasirandom groups {G} come with a Haar measure. One can then recast Theorem 1 as a mixing theorem for the left and right actions of the ultra quasirandom group {G} on itself, which roughly speaking is the assertion that

\displaystyle  \int_G f_1(x) L_g f_2(x) L_g R_g f_3(x)\ d\mu_G(x) \approx 0 \ \ \ \ \ (2)

for “almost all” {g \in G}, if {f_1, f_2, f_3} are bounded measurable functions on {G}, with {f_3} having zero mean on all conjugacy classes of {G}, where {L_g, R_g} are the left and right translation operators

\displaystyle  L_g f(x) := f(g^{-1} x); \quad R_g f(x) := f(xg).

To establish this mixing theorem, we use the machinery of idempotent ultrafilters, which is a particularly useful tool for understanding the ergodic theory of actions of countable groups {G} that need not be amenable; in the non-amenable setting the classical ergodic averages do not make much sense, but ultrafilter-based averages are still available. To oversimplify substantially, the idempotent ultrafilter arguments let one establish mixing estimates of the form (2) for “many” elements {g} of an infinite-dimensional parallelopiped known as an IP system (provided that the actions {L_g,R_g} of this IP system obey some technical mixing hypotheses, but let’s ignore that for sake of this discussion). The claim then follows by using the quasirandomness hypothesis to show that if the estimate (2) failed for a large set of {g \in G}, then this large set would then contain an IP system, contradicting the previous claim.

Idempotent ultrafilters are an extremely infinitary type of mathematical object (one has to use Zorn’s lemma no fewer than three times just to construct one of these objects!). So it is quite remarkable that they can be used to establish a finitary theorem such as Theorem 1, though as is often the case with such infinitary arguments, one gets absolutely no quantitative control whatsoever on the error terms {o(1)} appearing in that theorem. (It is also mildly amusing to note that our arguments involve the use of ultrafilters in two completely different ways: firstly in order to set up the ultraproduct that converts the finitary mixing problem to an infinitary one, and secondly to solve the infinitary mixing problem. Despite some superficial similarities, there appear to be no substantial commonalities between these two usages of ultrafilters.) There is already a fair amount of literature on using idempotent ultrafilter methods in infinitary ergodic theory, and perhaps by further development of ultraproduct correspondence principles, one can use such methods to obtain further finitary consequences (although the state of the art for idempotent ultrafilter ergodic theory has not advanced much beyond the analysis of two commuting shifts {L_g, R_g} currently, which is the main reason why our arguments only handle the pattern {(g,x,xg,gx)} and not more sophisticated patterns).

We also have some miscellaneous other results in the paper. It turns out that by using the triangle removal lemma from graph theory, one can obtain a recurrence result that asserts that whenever {A} is a dense subset of a finite group {G} (not necessarily quasirandom), then there are {\gg |G|^2} pairs {(x,g)} such that {x, gx, xg} all lie in {A}. Using a hypergraph generalisation of the triangle removal lemma known as the hypergraph removal lemma, one can obtain more complicated versions of this statement; for instance, if {A} is a dense subset of {G^2}, then one can find {\gg |G|^2} triples {(x,y,g)} such that {(x,y), (gx, y), (gx, gy), (gxg^{-1}, gyg^{-1})} all lie in {A}. But the method is tailored to the specific types of patterns given here, and we do not have a general method for obtaining recurrence or mixing properties for arbitrary patterns of words in some finite alphabet such as {g,x,y}.

We also give some properties of a model example of an ultra quasirandom group, namely the ultraproduct {SL_2(F)} of {SL_2(F_{p_n})} where {p_n} is a sequence of primes going off to infinity. Thanks to the substantial recent progress (by Helfgott, Bourgain, Gamburd, Breuillard, and others) on understanding the expansion properties of the finite groups {SL_2(F_{p_n})}, we have a fair amount of knowledge on the ultraproduct {SL_2(F)} as well; for instance any two elements of {SL_2(F)} will almost surely generate a spectral gap. We don’t have any direct application of this particular ultra quasirandom group, but it might be interesting to study it further.

Two weeks ago I was at Oberwolfach, for the Arbeitsgemeinschaft in Ergodic Theory and Combinatorial Number Theory that I was one of the organisers for. At this workshop, I learned the details of a very nice recent convergence result of Miguel Walsh (who, incidentally, is an informal grandstudent of mine, as his advisor, Roman Sasyk, was my informal student), which considerably strengthens and generalises a number of previous convergence results in ergodic theory (including one of my own), with a remarkably simple proof. Walsh’s argument is phrased in a finitary language (somewhat similar, in fact, to the approach used in my paper mentioned previously), and (among other things) relies on the concept of metastability of sequences, a variant of the notion of convergence which is useful in situations in which one does not expect a uniform convergence rate; see this previous blog post for some discussion of metastability. When interpreted in a finitary setting, this concept requires a fair amount of “epsilon management” to manipulate; also, Walsh’s argument uses some other epsilon-intensive finitary arguments, such as a decomposition lemma of Gowers based on the Hahn-Banach theorem. As such, I was tempted to try to rewrite Walsh’s argument in the language of nonstandard analysis to see the extent to which these sorts of issues could be managed. As it turns out, the argument gets cleaned up rather nicely, with the notion of metastability being replaced with the simpler notion of external Cauchy convergence (which we will define below the fold).

Let’s first state Walsh’s theorem. This theorem is a norm convergence theorem in ergodic theory, and can be viewed as a substantial generalisation of one of the most fundamental theorems of this type, namely the mean ergodic theorem:

Theorem 1 (Mean ergodic theorem) Let {(X,\mu,T)} be a measure-preserving system (a probability space {(X,\mu)} with an invertible measure-preserving transformation {T}). Then for any {f \in L^2(X,\mu)}, the averages {\frac{1}{N} \sum_{n=1}^N T^n f} converge in {L^2(X,\mu)} norm as {N \rightarrow \infty}, where {T^n f(x) := f(T^{-n} x)}.

In this post, all functions in {L^2(X,\mu)} and similar spaces will be taken to be real instead of complex-valued for simplicity, though the extension to the complex setting is routine.

Actually, we have a precise description of the limit of these averages, namely the orthogonal projection of {f} onto the space of {T}-invariant functions. (See for instance my lecture notes on this theorem.) While this theorem ostensibly involves measure theory, it can be abstracted to the more general setting of unitary operators on a Hilbert space:

Theorem 2 (von Neumann mean ergodic theorem) Let {H} be a Hilbert space, and let {U: H \rightarrow H} be a unitary operator on {H}. Then for any {f \in H}, the averages {\frac{1}{N} \sum_{n=1}^N U^n f} converge strongly in {H} as {N \rightarrow \infty}.

Again, see my lecture notes (or just about any text in ergodic theory) for a proof.
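
For intuition, here is a tiny finite-dimensional instance of Theorem 2 (an illustration of my own, not from the lecture notes): a unitary operator {U} on {{\bf R}^3} with a one-dimensional invariant subspace and an irrational rotation on the orthogonal complement. The averages converge to the orthogonal projection of {f} onto the invariant subspace, in accordance with the description of the limit mentioned earlier; the specific matrix and vector are assumptions chosen purely for illustration.

```python
import numpy as np

theta = 2 * np.pi * (np.sqrt(2) - 1)        # an irrational angle
U = np.array([[1.0, 0.0, 0.0],              # e_1 is fixed by U (the invariant subspace)
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
f = np.array([1.0, 1.0, 0.0])

for N in (10, 100, 1000, 10000):
    v, s = f.copy(), np.zeros(3)
    for _ in range(N):
        v = U @ v                           # v runs through U f, U^2 f, ..., U^N f
        s += v
    print(N, s / N)   # approaches (1, 0, 0), the projection of f onto the invariant subspace
```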

Now we turn to Walsh’s theorem.

Theorem 3 (Walsh’s convergence theorem) Let {(X,\mu)} be a probability space with a measure-preserving action of a nilpotent group {G}. Let {g_1,\ldots,g_k: {\bf Z} \rightarrow G} be polynomial sequences in {G} (i.e. each {g_i} takes the form {g_i(n) = a_{i,1}^{p_{i,1}(n)} \ldots a_{i,j}^{p_{i,j}(n)}} for some {a_{i,1},\ldots,a_{i,j} \in G} and polynomials {p_{i,1},\ldots,p_{i,j}: {\bf Z} \rightarrow {\bf Z}}). Then for any {f_1,\ldots,f_k \in L^\infty(X,\mu)}, the averages {\frac{1}{N} \sum_{n=1}^N (g_1(n) f_1) \ldots (g_k(n) f_k)} converge in {L^2(X,\mu)} norm as {N \rightarrow \infty}, where {g(n) f(x) := f(g(n)^{-1} x)}.

It turns out that this theorem can also be abstracted to some extent, although due to the multiplication in the summand {(g_1(n) f_1) \ldots (g_k(n) f_k)}, one cannot work purely with Hilbert spaces as in the von Neumann mean ergodic theorem, but must also work with something like the Banach algebra {L^\infty(X,\mu)}. There are a number of ways to formulate this abstraction (which will be of some minor convenience to us, as it will allow us to reduce the need to invoke the nonstandard measure theory of Loeb, discussed for instance in this blog post); we will use the notion of a (real) commutative probability space {({\mathcal A},\tau)}, which for us will be a commutative unital algebra {{\mathcal A}} over the reals together with a linear functional {\tau: {\mathcal A} \rightarrow {\bf R}} which maps {1} to {1} and obeys the non-negativity axiom {\tau(f^2) \ge 0} for all {f}. The key example to keep in mind here is the algebra {{\mathcal A} = L^\infty(X,\mu)} of essentially bounded real-valued measurable functions with the supremum norm, and with the trace {\tau(f) := \int_X f\ d\mu}. We will also assume in our definition of commutative probability spaces that all elements {f} of {{\mathcal A}} are bounded in the sense that the spectral radius {\rho(f) := \lim_{k \rightarrow \infty} \tau(f^{2k})^{1/2k}} is finite. (In the concrete case of {L^\infty(X,\mu)}, the spectral radius is just the {L^\infty} norm.)
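
To justify the parenthetical claim in the concrete case: when {{\mathcal A} = L^\infty(X,\mu)} and {\tau(f) = \int_X f\ d\mu} for a probability measure {\mu}, one has {\tau(f^{2k})^{1/2k} = \|f\|_{L^{2k}(X,\mu)}}, and the {L^p} norms on a probability space increase to the {L^\infty} norm as {p \rightarrow \infty}, so that

\displaystyle  \rho(f) = \lim_{k \rightarrow \infty} \|f\|_{L^{2k}(X,\mu)} = \|f\|_{L^\infty(X,\mu)}.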

Given a commutative probability space, we can form an inner product {\langle, \rangle_{L^2(\tau)}} on it by the formula

\displaystyle  \langle f, g \rangle_{L^2(\tau)} := \tau(fg).

This is a positive semi-definite form, and gives a (possibly degenerate) inner product structure on {{\mathcal A}}. We could complete this structure into a Hilbert space {L^2(\tau)} (after quotienting out the elements of zero norm), but we will not do so here, instead just viewing {L^2(\tau)} as providing a semi-metric on {{\mathcal A}}. For future reference we record the inequalities

\displaystyle  \rho(fg) \leq \rho(f) \rho(g)

\displaystyle  \rho(f+g) \leq \rho(f) + \rho(g)

\displaystyle  \| fg\|_{L^2(\tau)} \leq \|f\|_{L^2(\tau)} \rho(g)

for any {f,g}, which we will use in the sequel without further comment; see e.g. these previous blog notes for proofs. (Actually, for the purposes of proving Theorem 3, one can specialise to the {L^\infty(X,\mu)} case (and ultraproducts thereof), in which case these inequalities are just the triangle and Hölder inequalities.)
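As a concrete (and purely illustrative) finite model of a commutative probability space, one can take {{\mathcal A}} to be the real-valued functions on a finite set with the uniform measure and {\tau} the expectation; the following Python sketch (with arbitrarily chosen test functions) exhibits the spectral radius as a limit of moments and spot-checks the last of the above inequalities.

```python
import numpy as np

# Toy model: A = real functions on {0,...,m-1}, tau = expectation with respect
# to the uniform measure.  This is the finite analogue of the L^infty(X,mu)
# example; the test functions f, g below are arbitrary.
rng = np.random.default_rng(0)
m = 1000
f = rng.uniform(-2, 2, size=m)
g = rng.uniform(-3, 3, size=m)

tau = lambda h: h.mean()
l2 = lambda h: np.sqrt(tau(h * h))          # the L^2(tau) (semi-)norm

for k in [1, 5, 20, 100]:
    print(k, tau(f ** (2 * k)) ** (1 / (2 * k)))   # tends to max|f| as k grows
print("max|f| =", np.abs(f).max())

rho_g = np.abs(g).max()                     # spectral radius of g in this model
print(l2(f * g), "<=", l2(f) * rho_g)       # the inequality ||fg|| <= ||f|| rho(g)
```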

The abstract version of Theorem 3 is then

Theorem 4 (Walsh’s theorem, abstract version) Let {({\mathcal A},\tau)} be a commutative probability space, and let {G} be a nilpotent group acting on {{\mathcal A}} by isomorphisms (preserving the algebra, conjugation, and trace structure, and thus also preserving the spectral radius and {L^2(\tau)} norm). Let {g_1,\ldots,g_k: {\bf Z} \rightarrow G} be polynomial sequences. Then for any {f_1,\ldots,f_k \in {\mathcal A}}, the averages {\frac{1}{N} \sum_{n=1}^N (g_1(n) f_1) \ldots (g_k(n) f_k)} form a Cauchy sequence in {L^2(\tau)} (semi-)norm as {N \rightarrow \infty}.

It is easy to see that this theorem generalises Theorem 3. Conversely, one can use the commutative Gelfand-Naimark theorem to deduce Theorem 4 from Theorem 3, although we will not need this implication. Note how we are abandoning all attempts to discern what the limit of the sequence actually is, instead contenting ourselves with demonstrating that it is merely a Cauchy sequence. With this phrasing, it is tempting to ask whether there is any analogue of Walsh’s theorem for noncommutative probability spaces, but unfortunately the answer to that question is negative for all but the simplest of averages, as was worked out in this paper of Austin, Eisner, and myself.

Our proof of Theorem 4 will proceed as follows. Firstly, in order to avoid the epsilon management alluded to earlier, we will take an ultraproduct to rephrase the theorem in the language of nonstandard analysis; for reasons that will be clearer later, we will also convert the convergence problem to a problem of obtaining metastability (external Cauchy convergence). Then, we observe that (the nonstandard counterpart of) the expression {\|\frac{1}{N} \sum_{n=1}^N (g_1(n) f_1) \ldots (g_k(n) f_k)\|_{L^2(\tau)}^2} can be viewed as the inner product of (say) {f_k} with a certain type of expression, which we call a dual function. By performing an orthogonal projection to the span of the dual functions, we can split {f_k} into the sum of an expression orthogonal to all dual functions (the “pseudorandom” component), and a function that can be well approximated by finite linear combinations of dual functions (the “structured” component). The contribution of the pseudorandom component is asymptotically negligible, so we can reduce to consideration of the structured component. But by a little bit of rearrangement, this can be viewed as an average of expressions similar to the initial average {\frac{1}{N} \sum_{n=1}^N (g_1(n) f_1) \ldots (g_k(n) f_k)}, except with the polynomials {g_1,\ldots,g_k} replaced by a “lower complexity” set of such polynomials, which can be greater in number, but which have slightly lower degrees in some sense. One can iterate this (using “PET induction”) until all the polynomials become trivial, at which point the claim follows.


One of the most notorious problems in elementary mathematics that remains unsolved is the Collatz conjecture, concerning the function {f_0: {\bf N} \rightarrow {\bf N}} defined by setting {f_0(n) := 3n+1} when {n} is odd, and {f_0(n) := n/2} when {n} is even. (Here, {{\bf N}} is understood to be the positive natural numbers {\{1,2,3,\ldots\}}.)

Conjecture 1 (Collatz conjecture) For any given natural number {n}, the orbit {n, f_0(n), f^2_0(n), f^3_0(n), \ldots} passes through {1} (i.e. {f^k_0(n)=1} for some {k}).
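Before turning to heuristics, here is a minimal (purely illustrative) Python sketch of what the conjecture asserts, checking by brute force that the orbit of every starting value up to {10^5} does indeed reach {1} (the cutoffs here are arbitrary):

```python
# Illustrative brute-force check of Conjecture 1 for small n (the bounds 10^5
# and 10^6 are arbitrary cutoffs; this is of course no substitute for a proof).
def f0(n):
    return 3 * n + 1 if n % 2 == 1 else n // 2

def reaches_one(n, max_steps=10**6):
    steps = 0
    while n != 1 and steps < max_steps:
        n = f0(n)
        steps += 1
    return n == 1

assert all(reaches_one(n) for n in range(1, 10**5))
```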

Open questions with this level of notoriety can lead to what Richard Lipton calls “mathematical diseases” (and what I termed an unhealthy amount of obsession on a single famous problem). (See also this xkcd comic regarding the Collatz conjecture.) As such, most practicing mathematicians tend to spend the majority of their time on more productive research areas that are only just beyond the range of current techniques. Nevertheless, it can still be diverting to spend a day or two each year on these sorts of questions, before returning to other matters; so I recently had a go at the problem. Needless to say, I didn’t solve the problem, but I have a better appreciation of why the conjecture is (a) plausible, and (b) unlikely to be proven by current technology, and I thought I would share what I had found out here on this blog.

Let me begin with some very well known facts. If {n} is odd, then {f_0(n) = 3n+1} is even, and so {f_0^2(n) = \frac{3n+1}{2}}. Because of this, one could replace {f_0} by the function {f_1: {\bf N} \rightarrow {\bf N}}, defined by {f_1(n) = \frac{3n+1}{2}} when {n} is odd, and {f_1(n)=n/2} when {n} is even, and obtain an equivalent conjecture. Now we see that if one chooses {n} “at random”, in the sense that it is odd with probability {1/2} and even with probability {1/2}, then {f_1} increases {n} by a factor of roughly {3/2} half the time, and decreases it by a factor of {1/2} half the time. Furthermore, if {n} is uniformly distributed modulo {4}, one easily verifies that {f_1(n)} is uniformly distributed modulo {2}, and so {f_1^2(n)} should be roughly {3/2} times as large as {f_1(n)} half the time, and roughly {1/2} times as large as {f_1(n)} the other half of the time. Continuing this at a heuristic level, we expect generically that {f_1^{k+1}(n) \approx \frac{3}{2} f_1^k(n)} half the time, and {f_1^{k+1}(n) \approx \frac{1}{2} f_1^k(n)} the other half of the time. The logarithm {\log f_1^k(n)} of this orbit can then be modeled heuristically by a random walk with steps {\log \frac{3}{2}} and {\log \frac{1}{2}} occurring with equal probability. The expectation

\displaystyle  \frac{1}{2} \log \frac{3}{2} + \frac{1}{2} \log \frac{1}{2} = \frac{1}{2} \log \frac{3}{4}

is negative, and so (by the classic gambler’s ruin) we expect the orbit to decrease over the long term. This can be viewed as heuristic justification of the Collatz conjecture, at least in the “average case” scenario in which {n} is chosen uniformly at random (e.g. in some large interval {\{1,\ldots,N\}}). (It also suggests that if one modifies the problem, e.g. by replacing {3n+1} with {5n+1}, then one can obtain orbits that tend to increase over time, and indeed numerically for this variant one sees orbits that appear to escape to infinity.) Unfortunately, one can only rigorously keep the orbit uniformly distributed modulo {2} for time about {O(\log N)} or so; after that, the system is too complicated for naive methods to control at anything other than a heuristic level.
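One can at least observe this heuristic numerically. The following Python sketch (all seed sizes and step counts are arbitrary choices) iterates {f_1} from large random odd seeds and compares the average logarithmic step with the predicted drift {\frac{1}{2} \log \frac{3}{4} \approx -0.144}:

```python
import math, random

# Illustration of the random walk heuristic for f_1: along orbits started at
# large random odd seeds, the empirical mean of log(f_1(n)/n) should be close
# to (1/2)log(3/2) + (1/2)log(1/2) = (1/2)log(3/4), at least while the orbit
# stays large.  The seeds and step counts are arbitrary.
def f1(n):
    return (3 * n + 1) // 2 if n % 2 == 1 else n // 2

random.seed(0)
log_steps = []
for _ in range(200):
    n = random.randrange(10**40, 10**41) | 1    # a large random odd seed
    for _ in range(50):
        m = f1(n)
        log_steps.append(math.log(m) - math.log(n))
        n = m

print(sum(log_steps) / len(log_steps))          # empirical drift per step
print(0.5 * math.log(3 / 4))                    # heuristic prediction ~ -0.1438
```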

Remark 1 One can obtain a rigorous analogue of the above arguments by extending {f_1} from the integers {{\bf Z}} to the {2}-adics {{\bf Z}_2}. This compact abelian group comes with a Haar probability measure, and one can verify that this measure is invariant with respect to {f_1}; with a bit more effort one can verify that it is ergodic. This suggests the introduction of ergodic theory methods. For instance, using the pointwise ergodic theorem, we see that if {n} is a random {2}-adic integer, then almost surely the orbit {n, f_1(n), f_1^2(n), \ldots} will be even half the time and odd half the time asymptotically, thus supporting the above heuristics. Unfortunately, this does not directly tell us much about the dynamics on {{\bf Z}}, as this is a measure zero subset of {{\bf Z}_2}. More generally, unless a dynamical system is somehow “polynomial”, “nilpotent”, or “unipotent” in nature, the current state of ergodic theory is usually only able to say something meaningful about generic orbits, but not about all orbits. For instance, the very simple system {x \rightarrow 10x} on the unit circle {{\bf R}/{\bf Z}} is well understood from ergodic theory (in particular, almost all orbits will be uniformly distributed), but the orbit of a specific point, e.g. {\pi\hbox{ mod } 1}, is still nearly impossible to understand (this particular problem being equivalent to the notorious unsolved question of whether the digits of {\pi} are uniformly distributed).

The above heuristic argument only suggests decreasing orbits for almost all {n} (though even this remains unproven, the state of the art is that the number of {n} in {\{1,\ldots,N\}} that eventually go to {1} is {\gg N^{0.84}}, a result of Krasikov and Lagarias). It leaves open the possibility of some very rare exceptional {n} for which the orbit goes to infinity, or gets trapped in a periodic loop. Since the only loop that {1} lies in is {1,4,2} (for {f_0}) or {1,2} (for {f_1}), we thus may isolate a weaker consequence of the Collatz conjecture:

Conjecture 2 (Weak Collatz conjecture) Suppose that {n} is a natural number such that {f^k_0(n)=n} for some {k \geq 1}. Then {n} is equal to {1}, {2}, or {4}.

Of course, we may replace {f_0} with {f_1} (and delete “{4}”) and obtain an equivalent conjecture.

This weaker version of the Collatz conjecture is also unproven. However, it was observed by Bohm and Sontacchi that this weak conjecture is equivalent to a divisibility problem involving powers of {2} and {3}:

Conjecture 3 (Reformulated weak Collatz conjecture) There does not exist {k \geq 1} and integers

\displaystyle 0 = a_1 < a_2 < \ldots < a_{k+1}

such that {2^{a_{k+1}}-3^k} is a positive integer that is a proper divisor of

\displaystyle  3^{k-1} 2^{a_1} + 3^{k-2} 2^{a_2} + \ldots + 2^{a_k},

i.e.

\displaystyle  (2^{a_{k+1}} - 3^k) n = 3^{k-1} 2^{a_1} + 3^{k-2} 2^{a_2} + \ldots + 2^{a_k} \ \ \ \ \ (1)

for some natural number {n > 1}.

Proposition 4 Conjecture 2 and Conjecture 3 are equivalent.

Proof: To see this, it is convenient to reformulate Conjecture 2 slightly. Define an equivalence relation {\sim} on {{\bf N}} by declaring {a \sim b} if {a/b = 2^m} for some integer {m}, thus giving rise to the quotient space {{\bf N}/\sim} of equivalence classes {[n]} (which can be placed, if one wishes, in one-to-one correspondence with the odd natural numbers). We can then define a function {f_2: {\bf N}/\sim \rightarrow {\bf N}/\sim} by declaring

\displaystyle  f_2( [n] ) := [3n + 2^a] \ \ \ \ \ (2)

for any {n \in {\bf N}}, where {2^a} is the largest power of {2} that divides {n}. It is easy to see that {f_2} is well-defined (it is essentially the Syracuse function, after identifying {{\bf N}/\sim} with the odd natural numbers), and that periodic orbits of {f_2} correspond to periodic orbits of {f_1} or {f_0}. Thus, Conjecture 2 is equivalent to the conjecture that {[1]} is the only periodic orbit of {f_2}.
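In concrete terms (identifying each class {[n]} with its odd representative), {f_2} sends an odd number {n} to the odd part of {3n+1}; here is a small, purely illustrative Python sketch of this map and of the orbit of {7}:

```python
# The map f_2 on odd representatives: for odd n the largest power of 2 dividing
# n is 2^0, so f_2([n]) = [3n+1], whose odd representative is 3n+1 with all
# factors of 2 removed (the Syracuse function).  Illustrative only.
def syracuse(n):
    assert n % 2 == 1
    m = 3 * n + 1
    while m % 2 == 0:
        m //= 2
    return m

orbit = [7]
while orbit[-1] != 1:
    orbit.append(syracuse(orbit[-1]))
print(orbit)     # [7, 11, 17, 13, 5, 1]
```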

Now suppose that Conjecture 2 failed, thus there exists {[n] \neq [1]} such that {f_2^k([n])=[n]} for some {k \geq 1}. Without loss of generality we may take {n} to be odd, then {n>1}. It is easy to see that {[1]} is the only fixed point of {f_2}, and so {k>1}. An easy induction using (2) shows that

\displaystyle  f_2^k([n]) = [3^k n + 3^{k-1} 2^{a_1} + 3^{k-2} 2^{a_2} + \ldots + 2^{a_k}]

where, for each {1 \leq i \leq k}, {2^{a_i}} is the largest power of {2} that divides

\displaystyle  n_i := 3^{i-1} n + 3^{i-2} 2^{a_1} + \ldots + 2^{a_{i-1}}. \ \ \ \ \ (3)

In particular, as {n_1 = n} is odd, {a_1=0}. Using the recursion

\displaystyle  n_{i+1} = 3n_i + 2^{a_i}, \ \ \ \ \ (4)

we see from induction that {2^{a_i+1}} divides {n_{i+1}}, and thus {a_{i+1}>a_i}:

\displaystyle  0 = a_1 < a_2 < \ldots < a_k.

Since {f_2^k([n]) = [n]}, we have

\displaystyle  2^{a_{k+1}} n = 3^k n + 3^{k-1} 2^{a_1} + 3^{k-2} 2^{a_2} + \ldots + 2^{a_k} = 3 n_k + 2^{a_k}

for some integer {a_{k+1}}. Since {3 n_k + 2^{a_k}} is divisible by {2^{a_k+1}}, and {n} is odd, we conclude {a_{k+1} > a_k}; if we rearrange the above equation as (1), then we obtain a counterexample to Conjecture 3.

Conversely, suppose that Conjecture 3 failed. Then we have {k \geq 1}, integers

\displaystyle 0 = a_1 < a_2 < \ldots < a_{k+1}

and a natural number {n > 1} such that (1) holds. As {a_1=0}, we see that the right-hand side of (1) is odd, so {n} is odd also. If we then introduce the natural numbers {n_i} by the formula (3), then an easy induction using (4) shows that

\displaystyle  (2^{a_{k+1}} - 3^k) n_i = 3^{k-1} 2^{a_i} + 3^{k-2} 2^{a_{i+1}} + \ldots + 2^{a_{i+k-1}} \ \ \ \ \ (5)

with the periodic convention {a_{k+j} := a_j + a_{k+1}} for {j>1}. As the {a_i} are increasing in {i} (even for {i \geq k+1}), we see that {2^{a_i}} is the largest power of {2} that divides the right-hand side of (5); as {2^{a_{k+1}}-3^k} is odd, we conclude that {2^{a_i}} is also the largest power of {2} that divides {n_i}. We conclude that

\displaystyle f_2([n_i]) = [3n_i + 2^{a_i}] = [n_{i+1}]

and thus {[n]} is a periodic orbit of {f_2}. Since {n} is an odd number larger than {1}, this contradicts Conjecture 2. \Box

Call a counterexample a tuple {(k,a_1,\ldots,a_{k+1})} that contradicts Conjecture 3, i.e. an integer {k \geq 1} and an increasing set of integers

\displaystyle 0 = a_1 < a_2 < \ldots < a_{k+1}

such that (1) holds for some natural number {n > 1}. We record a simple bound on such counterexamples, due to Terras and to Garner:

Lemma 5 (Exponent bounds) Let {N \geq 1}, and suppose that the Collatz conjecture is true for all {n < N}. Let {(k,a_1,\ldots,a_{k+1})} be a counterexample. Then

\displaystyle  \frac{\log 3}{\log 2} k < a_{k+1} < \frac{\log(3+\frac{1}{N})}{\log 2} k.

Proof: The first bound is immediate from the positivity of {2^{a_{k+1}}-3^k}. To prove the second bound, observe from the proof of Proposition 4 that the counterexample {(k,a_1,\ldots,a_{k+1})} will generate a counterexample to Conjecture 2, i.e. a non-trivial periodic orbit {n, f_0(n), \ldots, f_0^K(n) = n}. As the Collatz conjecture is true for all {n < N}, all terms in this orbit must be at least {N}. An inspection of the proof of Proposition 4 reveals that this orbit consists of {k} steps of the form {x \mapsto 3x+1}, and {a_{k+1}} steps of the form {x \mapsto x/2}. As all terms are at least {N}, the former steps can increase magnitude by a multiplicative factor of at most {3+\frac{1}{N}}. As the orbit returns to where it started, we conclude that

\displaystyle  1 \leq (3+\frac{1}{N})^k (\frac{1}{2})^{a_{k+1}}

whence the claim. \Box

The Collatz conjecture has already been verified for many values of {n} (up to at least {N = 5 \times 10^{18}}, according to this web site). Inserting this into the above lemma, one can get lower bounds on {k}. For instance, by methods such as this, it is known that any non-trivial periodic orbit has length at least {105{,}000}, as shown in Garner’s paper (and this bound, which uses the much smaller value {N = 2 \times 10^9} that was available in 1981, can surely be improved using the most recent computational bounds).
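As a sanity check (and purely as an illustration; the parameter ranges below are tiny and arbitrary), one can also search by brute force for counterexample tuples with very small {k} and {a_{k+1}}; consistently with the above, none exist in this range:

```python
from itertools import combinations

# Brute-force search for counterexample tuples (k, a_1, ..., a_{k+1}) to
# Conjecture 3 with small parameters: for each tuple we test whether
# q = 2^{a_{k+1}} - 3^k is a proper divisor of the right-hand side of (1),
# i.e. whether (1) holds for some n > 1.  Parameter ranges are arbitrary.
def find_counterexamples(max_k, max_a):
    hits = []
    for k in range(1, max_k + 1):
        for a_last in range(k + 1, max_a + 1):
            q = 2 ** a_last - 3 ** k
            if q <= 0:
                continue
            for middle in combinations(range(1, a_last), k - 1):
                a = (0,) + middle                      # a_1 = 0 < a_2 < ... < a_k
                rhs = sum(3 ** (k - 1 - i) * 2 ** a[i] for i in range(k))
                if rhs % q == 0 and rhs // q > 1:
                    hits.append((k, a + (a_last,), rhs // q))
    return hits

print(find_counterexamples(max_k=6, max_a=14))   # expected output: []
```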

Now we can perform a heuristic count on the number of counterexamples. If we fix {k} and {a := a_{k+1}}, then {2^a > 3^k}, and from basic combinatorics we see that there are {\binom{a-1}{k-1}} different ways to choose the remaining integers

\displaystyle 0 = a_1 < a_2 < \ldots < a_{k+1}

to form a potential counterexample {(k,a_1,\ldots,a_{k+1})}. As a crude heuristic, one expects that for a “random” such choice of integers, the expression (1) has a probability {1/q} of holding for some integer {n}, where {q := 2^{a_{k+1}} - 3^k = 2^a - 3^k}. (Note that {q} is not divisible by {2} or {3}, and so one does not expect the special structure of the right-hand side of (1) with respect to those moduli to be relevant. There will be some choices of {a_1,\ldots,a_k} where the right-hand side in (1) is too small to be divisible by {q}, but using the estimates in Lemma 5, one expects this to occur very infrequently.) Thus, the total expected number of solutions for this choice of {a, k} is

\displaystyle  \frac{1}{q} \binom{a-1}{k-1}.

The heuristic number of solutions overall is then expected to be

\displaystyle  \sum_{a,k} \frac{1}{q} \binom{a-1}{k-1}, \ \ \ \ \ (6)

where, in view of Lemma 5, one should restrict the double summation to the heuristic regime {a \approx \frac{\log 3}{\log 2} k}, with the approximation here accurate to many decimal places.

We need a lower bound on {q}. Here, we will use Baker’s theorem (as discussed in this previous post), which among other things gives the lower bound

\displaystyle  q = 2^a - 3^k \gg 2^a / a^C \ \ \ \ \ (7)

for some absolute constant {C}. Meanwhile, Stirling’s formula (as discussed in this previous post) combined with the approximation {k \approx \frac{\log 2}{\log 3} a} gives

\displaystyle  \binom{a-1}{k-1} \approx \exp(h(\frac{\log 2}{\log 3}))^a

where {h} is the entropy function

\displaystyle  h(x) := - x \log x - (1-x) \log (1-x).

A brief computation shows that

\displaystyle  \exp(h(\frac{\log 2}{\log 3})) \approx 1.9318 \ldots

and so (ignoring all subexponential terms)

\displaystyle  \frac{1}{q} \binom{a-1}{k-1} \approx (0.9659\ldots)^a

which makes the series (6) convergent. (Actually, one does not need the full strength of Lemma 5 here; anything that kept {k} well away from {a/2} would suffice. In particular, one does not need an enormous value of {N}; even {N=5} (say) would be more than sufficient to obtain the heuristic that there are finitely many counterexamples.) Heuristically applying the Borel-Cantelli lemma, we thus expect that there are only a finite number of counterexamples to the weak Collatz conjecture (and inserting a bound such as {k \geq 105{,}000}, one in fact expects it to be extremely likely that there are no counterexamples at all).
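The numerical values quoted above are easy to reproduce; the following short Python computation (illustrative only) evaluates {\exp(h(\frac{\log 2}{\log 3}))} and the resulting base of the geometric-type series:

```python
import math

# Reproducing the constants in the heuristic count: with x = log2/log3,
# exp(h(x)) is about 1.9318..., so each term (1/q) C(a-1,k-1) is roughly
# (exp(h(x))/2)^a = (0.9659...)^a up to subexponential factors.
x = math.log(2) / math.log(3)
h = -x * math.log(x) - (1 - x) * math.log(1 - x)
print(math.exp(h))       # ~ 1.9318...
print(math.exp(h) / 2)   # ~ 0.9659...
```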

This, of course, is far short of any rigorous proof of Conjecture 2. In order to make rigorous progress on this conjecture, it seems that one would need to somehow exploit the structural properties of numbers of the form

\displaystyle  3^{k-1} 2^{a_1} + 3^{k-2} 2^{a_2} + \ldots + 2^{a_k}. \ \ \ \ \ (8)

In some very special cases, this can be done. For instance, suppose that one had {a_{i+1}=a_i+1} with at most one exception (this is essentially what is called a {1}-cycle by Steiner). Then (8) simplifies via the geometric series formula to a combination of just a bounded number of powers of {2} and {3}, rather than an unbounded number. In that case, one can start using tools from transcendence theory such as Baker’s theorem to obtain good results; for instance, in the above-referenced paper of Steiner, it was shown that {1}-cycles cannot actually occur, and similar methods have been used to show that {m}-cycles (in which there are at most {m} exceptions to {a_{i+1}=a_i+1}) do not occur for any {m \leq 63}, as was shown by Simons and de Weger. However, for general increasing tuples of integers {a_1,\ldots,a_k}, there is no such representation by bounded numbers of powers, and it does not seem that methods from transcendence theory will be sufficient to control the expressions (8) to the extent that one can understand their divisibility properties by quantities such as {2^a-3^k}.

Amusingly, there is a slight connection to Littlewood-Offord theory in additive combinatorics – the study of the {2^n} random sums

\displaystyle  \pm v_1 \pm v_2 \pm \ldots \pm v_n

generated by some elements {v_1,\ldots,v_n} of an additive group {G}, or equivalently, the vertices of an {n}-dimensional parallelepiped inside {G}. Here, the relevant group is {{\bf Z}/q{\bf Z}}. The point is that if one fixes {k} and {a_{k+1}} (and hence {q}), and lets {a_1,\ldots,a_k} vary inside the simplex

\displaystyle  \Delta := \{ (a_1,\ldots,a_k) \in {\bf N}^k: 0 = a_1 < \ldots < a_k < a_{k+1}\}

then the set {S} of all sums of the form (8) (viewed as an element of {{\bf Z}/q{\bf Z}}) contains many large parallelepipeds. (Note, incidentally, that once one fixes {k}, all the sums of the form (8) are distinct, because given (8) and {k}, one can read off {2^{a_1}} as the largest power of {2} that divides (8), and then subtracting off {3^{k-1} 2^{a_1}} one can then read off {2^{a_2}}, and so forth.) This is because the simplex {\Delta} contains many large cubes. Indeed, if one picks a typical element {(a_1,\ldots,a_k)} of {\Delta}, then one expects (thanks to Lemma 5) that there will be {\gg k} indices {1 \leq i_1 < \ldots < i_m \leq k} such that {a_{i_j+1} > a_{i_j}+1} for {j=1,\ldots,m}, which allows one to adjust each of the {a_{i_j}} independently by {1} if desired and still remain inside {\Delta}. This gives a cube in {\Delta} of dimension {\gg k}, which then induces a parallelepiped of the same dimension in {S}. A short computation shows that the generators of this parallelepiped consist of products of a power of {2} and a power of {3}, and in particular will be coprime to {q}.

If the weak Collatz conjecture is true, then the set {S} must avoid the residue class {0} in {{\bf Z}/q{\bf Z}}. Let us suppose temporarily that we did not know about Baker’s theorem (and the associated bound (7)), so that {q} could potentially be quite small. Then we would have a large parallelepiped inside a small cyclic group {{\bf Z}/q{\bf Z}} that did not cover all of {{\bf Z}/q{\bf Z}}, which would not be possible for {q} small enough. Indeed, an easy induction shows that a {d}-dimensional parallelepiped in {{\bf Z}/q{\bf Z}}, with all generators coprime to {q}, has cardinality at least {\min(q, d+1)}. This argument already shows the lower bound {q \gg k}. In other words, we have

Proposition 6 Suppose the weak Collatz conjecture is true. Then for any natural numbers {a, k} with {2^a > 3^k}, one has {2^a-3^k \gg k}.

This bound is very weak when compared against the unconditional bound (7). However, I know of no way to get a nontrivial separation property between powers of {2} and powers of {3} other than via transcendence theory methods. Thus, this result strongly suggests that any proof of the Collatz conjecture must either use existing results in transcendence theory, or else must contribute a new method to give non-trivial results in transcendence theory. (This already rules out a lot of possible approaches to solve the Collatz conjecture.)

By using more sophisticated tools in additive combinatorics, one can improve the above proposition (though it is still well short of the transcendence theory bound (7)):

Proposition 7 Suppose the weak Collatz conjecture is true. Then for any natural numbers {a, k} with {2^a > 3^k}, one has {2^a-3^k \gg (1+\epsilon)^k} for some absolute constant {\epsilon > 0}.

Proof: (Informal sketch only) Suppose not; then we can find {a, k} with {q := 2^a-3^k} of size {(1+o(1))^k = \exp(o(k))}. We form the set {S} as before, which contains parallelepipeds in {{\bf Z}/q{\bf Z}} of large dimension {d \gg k} that avoid {0}. We can count the number of times {0} occurs in one of these parallelepipeds by a standard Fourier-analytic computation involving Riesz products (see Chapter 7 of my book with Van Vu, or this recent preprint of Maples). Using this Fourier representation, the fact that this parallelepiped avoids {0} (and the fact that {q = \exp(o(k)) = \exp(o(d))}) forces the generators {v_1,\ldots,v_d} to be concentrated in a Bohr set, in that one can find a non-zero frequency {\xi \in {\bf Z}/q{\bf Z}} such that {(1-o(1))d} of the {d} generators lie in the set {\{ v: \xi v = o(q) \hbox{ mod } q \}}. However, one can choose the generators to essentially have the structure of a (generalised) geometric progression (up to scaling, it resembles something like {2^i 3^{\lfloor \alpha i \rfloor}} for {i} ranging over a generalised arithmetic progression, and {\alpha} a fixed irrational), and one can show that such progressions cannot be concentrated in Bohr sets (this is similar in spirit to the exponential sum estimates of Bourgain on approximate multiplicative subgroups of {{\bf Z}/q{\bf Z}}, though one can use more elementary methods here due to the very strong nature of the Bohr set concentration, being of the “{99\%} concentration” variety rather than the “{1\%} concentration”). This furnishes the required contradiction. \Box

Thus we see that any proposed proof of the Collatz conjecture must either use transcendence theory, or introduce new techniques that are powerful enough to create exponential separation between powers of {2} and powers of {3}.

Unfortunately, once one uses the transcendence theory bound (7), the size {q} of the cyclic group {{\bf Z}/q{\bf Z}} becomes larger than the volume of any cube in {S}, and Littlewood-Offord techniques are no longer of much use (they can be used to show that {S} is highly equidistributed in {{\bf Z}/q{\bf Z}}, but this does not directly give any way to prevent {S} from containing {0}).

One possible toy model problem for the (weak) Collatz conjecture is a conjecture of Erdos asserting that for {n>8}, the base {3} representation of {2^n} contains at least one {2}. (See this paper of Lagarias for some work on this conjecture and on related problems.) To put it another way, the conjecture asserts that there are no integer solutions to

\displaystyle  2^n = 3^{a_1} + 3^{a_2} + \ldots + 3^{a_k}

with {n > 8} and {0 \leq a_1 < \ldots < a_k}. (When {n=8}, of course, one has {2^8 = 3^0 + 3^1 + 3^2 + 3^5}.) In this form we see a resemblance to Conjecture 3, but it looks like a simpler problem to attack (though one which is still a fair distance beyond what one can do with current technology). Note that one has a similar heuristic support for this conjecture as one does for Conjecture 3; a number of magnitude {2^n} has about {n \frac{\log 2}{\log 3}} base {3} digits, each of which should heuristically avoid the value {2} with probability {2/3}, so the heuristic probability that none of these digits is equal to {2} is about {(2/3)^{n\frac{\log 2}{\log 3}}}, which decays exponentially in {n} and is in particular absolutely summable.
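For what it is worth, this conjecture is also easy to test numerically for small {n}; the following Python sketch (with an arbitrary cutoff of {2000}) lists the exponents {n} for which {2^n} has no digit {2} in base {3}, and conjecturally should print only {2} and {8}:

```python
# Illustrative finite check of the Erdos conjecture mentioned above: list the
# n <= 2000 for which the base 3 expansion of 2^n avoids the digit 2.  The
# cutoff 2000 is arbitrary; conjecturally only n = 2 and n = 8 are printed.
def base3_digits(m):
    digits = []
    while m:
        digits.append(m % 3)
        m //= 3
    return digits

for n in range(1, 2001):
    if 2 not in base3_digits(2 ** n):
        print(n)
```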

In 1977, Furstenberg established his multiple recurrence theorem:

Theorem 1 (Furstenberg multiple recurrence) Let {(X, {\mathcal X}, \mu, T)} be a measure-preserving system, thus {(X,{\mathcal X},\mu)} is a probability space and {T: X \rightarrow X} is a measure-preserving bijection such that {T} and {T^{-1}} are both measurable. Let {E} be a measurable subset of {X} of positive measure {\mu(E) > 0}. Then for any {k \geq 1}, there exists {n > 0} such that

\displaystyle  E \cap T^{-n} E \cap \ldots \cap T^{-(k-1)n} E \neq \emptyset.

Equivalently, there exists {n > 0} and {x \in X} such that

\displaystyle  x, T^n x, \ldots, T^{(k-1)n} x \in E.

As is well known, the Furstenberg multiple recurrence theorem is equivalent to Szemerédi’s theorem, thanks to the Furstenberg correspondence principle; see for instance these lecture notes of mine.

The multiple recurrence theorem is proven, roughly speaking, by an induction on the “complexity” of the system {(X,{\mathcal X},\mu,T)}. Indeed, for very simple systems, such as periodic systems (in which {T^n} is the identity for some {n>0}, which is for instance the case for the circle shift {X = {\bf R}/{\bf Z}}, {Tx := x+\alpha} with a rational shift {\alpha}), the theorem is trivial; at a slightly more advanced level, the case of almost periodic (or compact) systems (in which {\{ T^n f: n \in {\bf Z} \}} is a precompact subset of {L^2(X)} for every {f \in L^2(X)}, which is for instance the case for irrational circle shifts) is also quite easy. One then shows that the multiple recurrence property is preserved under various extension operations (specifically, compact extensions, weakly mixing extensions, and limits of chains of extensions), which then gives the multiple recurrence theorem as a consequence of the Furstenberg-Zimmer structure theorem for measure-preserving systems. See these lecture notes for further discussion.

From a high-level perspective, this is still one of the most conceptual proofs known of Szemerédi’s theorem. However, the individual components of the proof are still somewhat intricate. Perhaps the most difficult step is the demonstration that the multiple recurrence property is preserved under compact extensions; see for instance these lecture notes, which is devoted entirely to this step. This step requires quite a bit of measure-theoretic and/or functional analytic machinery, such as the theory of disintegrations, relatively almost periodic functions, or Hilbert modules.

However, I recently realised that there is a special case of the compact extension step – namely that of finite extensions – which avoids almost all of these technical issues while still capturing the essence of the argument (and in particular, the key idea of using van der Waerden’s theorem). As such, this may serve as a pedagogical device for motivating this step of the proof of the multiple recurrence theorem.

Let us first explain what a finite extension is. Given a measure-preserving system {X = (X,{\mathcal X},\mu,T)}, a finite set {Y}, and a measurable map {\rho: X \rightarrow \hbox{Sym}(Y)} from {X} to the permutation group of {Y}, one can form the finite extension

\displaystyle X \ltimes_\rho Y = (X \times Y, {\mathcal X} \times {\mathcal Y}, \mu \times \nu, S),

which as a probability space is the product of {(X,{\mathcal X},\mu)} with the finite probability space {Y = (Y, {\mathcal Y},\nu)} (with the discrete {\sigma}-algebra and uniform probability measure), and with shift map

\displaystyle  S(x, y) := (Tx, \rho(x) y). \ \ \ \ \ (1)

One easily verifies that this is indeed a measure-preserving system. We refer to {\rho} as the cocycle of the system.

An example of finite extensions comes from group theory. Suppose we have a short exact sequence

\displaystyle  0 \rightarrow K \rightarrow G \rightarrow H \rightarrow 0

of finite groups. Let {g} be a group element of {G}, and let {h} be its projection in {H}. Then the shift map {x \mapsto gx} on {G} (with the discrete {\sigma}-algebra and uniform probability measure) can be viewed as a finite extension of the shift map {y \mapsto hy} on {H} (again with the discrete {\sigma}-algebra and uniform probability measure), by arbitrarily selecting a section {\phi: H \rightarrow G} that inverts the projection map, identifying {G} with {H \times K} by identifying {k \phi(y)} with {(y,k)} for {y \in H, k \in K}, and using the cocycle

\displaystyle  \rho(y) := \phi(hy)^{-1} g \phi(y).

Thus, for instance, the unit shift {x \mapsto x+1} on {{\bf Z}/N{\bf Z}} can be thought of as a finite extension of the unit shift {x \mapsto x+1} on {{\bf Z}/M{\bf Z}} whenever {N} is a multiple of {M}.
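Here is a small Python sketch of that last example (with the illustrative choices {N=12}, {M=4}, {g=1}, and the obvious section {\phi(y)=y} for {0 \leq y < M}); the cocycle vanishes except when {y} wraps around, in which case it “carries” into the fibre, and the resulting skew product is indeed conjugate to the unit shift on {{\bf Z}/N{\bf Z}}:

```python
# Illustrative finite extension: G = Z/NZ, H = Z/MZ, K = MZ/NZ, g = 1, with
# section phi(y) = y for 0 <= y < M and the identification phi(y) + k <-> (y,k)
# (written additively).  The cocycle rho(y) = phi(y) + 1 - phi(y+1) is 0 unless
# y = M-1, in which case it equals M.  The choices N = 12, M = 4 are arbitrary.
N, M = 12, 4

def phi(y):                      # a section H -> G
    return y

def rho(y):                      # the cocycle, taking values in K = {0, M, 2M, ...}
    return (phi(y) + 1 - phi((y + 1) % M)) % N

def S(y, k):                     # the shift map (1) of the extension
    return ((y + 1) % M, (k + rho(y)) % N)

# check that S is conjugate to the unit shift x -> x + 1 on G = Z/NZ
for x in range(N):
    y, k = x % M, x - x % M      # the identification G <-> H x K
    y2, k2 = S(y, k)
    assert (phi(y2) + k2) % N == (x + 1) % N
```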

Another example comes from Riemannian geometry. If {M} is a Riemannian manifold that is a finite cover of another Riemannian manifold {N} (with the metric on {M} being the pullback of that on {N}), then (unit time) geodesic flow on the cosphere bundle of {M} is a finite extension of the corresponding flow on {N}.

Here, then, is the finite extension special case of the compact extension step in the proof of the multiple recurrence theorem:

Proposition 2 (Finite extensions) Let {X \ltimes_\rho Y} be a finite extension of a measure-preserving system {X}. If {X} obeys the conclusion of the Furstenberg multiple recurrence theorem, then so does {X \ltimes_\rho Y}.

Before we prove this proposition, let us first give the combinatorial analogue.

Lemma 3 Let {A} be a subset of the integers that contains arbitrarily long arithmetic progressions, and let {A = A_1 \cup \ldots \cup A_M} be a colouring of {A} by {M} colours (or equivalently, a partition of {A} into {M} colour classes {A_i}). Then at least one of the {A_i} contains arbitrarily long arithmetic progressions.

Proof: By the infinite pigeonhole principle, it suffices to show that for each {k \geq 1}, one of the colour classes {A_i} contains an arithmetic progression of length {k}.

Let {N} be a large integer (depending on {k} and {M}) to be chosen later. Then {A} contains an arithmetic progression of length {N}, which may be identified with {\{0,\ldots,N-1\}}. The colouring of {A} then induces a colouring on {\{0,\ldots,N-1\}} into {M} colour classes. Applying (the finitary form of) van der Waerden’s theorem, we conclude that if {N} is sufficiently large depending on {M} and {k}, then one of these colouring classes contains an arithmetic progression of length {k}; undoing the identification, we conclude that one of the {A_i} contains an arithmetic progression of length {k}, as desired. \Box

Of course, by specialising to the case {A={\bf Z}}, we see that the above Lemma is in fact equivalent to van der Waerden’s theorem.
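Since the finitary form of van der Waerden's theorem is doing all the work here, it may be worth illustrating it computationally; the following Python sketch (with the tiny classical parameters of {3}-term progressions, {2} colours, and {N=9}) exhaustively verifies that every {2}-colouring of {\{0,\ldots,8\}} contains a monochromatic progression of length {3}:

```python
from itertools import product

# Brute-force search for a monochromatic k-term arithmetic progression in a
# colouring c of {0,...,N-1}; used here only to illustrate the finitary van der
# Waerden theorem on a tiny example (the van der Waerden number for 3-term
# progressions and 2 colours is 9).
def mono_ap(c, k):
    N = len(c)
    for r in range(1, N):
        for a in range(N - (k - 1) * r):
            if len({c[a + i * r] for i in range(k)}) == 1:
                return a, r
    return None

# every 2-colouring of {0,...,8} contains a monochromatic 3-term progression
assert all(mono_ap(c, 3) is not None for c in product(range(2), repeat=9))
```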

Now we prove Proposition 2.

Proof: Fix {k}. Let {E} be a positive measure subset of {X \ltimes_\rho Y = (X \times Y, {\mathcal X} \times {\mathcal Y}, \mu \times \nu, S)}. By Fubini’s theorem, we have

\displaystyle  \mu \times \nu(E) = \int_X f(x)\ d\mu(x)

where {f(x) := \nu(E_x)} and {E_x := \{ y \in Y: (x,y) \in E \}} is the fibre of {E} at {x}. Since {\mu \times \nu(E)} is positive, we conclude that the set

\displaystyle F := \{ x \in X: f(x) > 0 \} = \{ x \in X: E_x \neq \emptyset \}

is a positive measure subset of {X}. Note that for each {x \in F}, we can find an element {g(x) \in Y} such that {(x,g(x)) \in E}. While not strictly necessary for this argument, one can ensure if one wishes that the function {g} is measurable by totally ordering {Y}, and then letting {g(x)} be the minimal element of {Y} for which {(x,g(x)) \in E}.

Let {N} be a large integer (which will depend on {k} and the cardinality {M} of {Y}) to be chosen later. Because {X} obeys the multiple recurrence theorem, we can find a positive integer {n} and {x \in X} such that

\displaystyle  x, T^n x, T^{2n} x, \ldots, T^{(N-1) n} x \in F.

Now consider the sequence of {N} points

\displaystyle  S^{-mn}( T^{mn} x, g(T^{mn} x) )

for {m = 0,\ldots,N-1}. From (1), we see that

\displaystyle  S^{-mn}( T^{mn} x, g(T^{mn} x) ) = (x, c(m)) \ \ \ \ \ (2)

for some sequence {c(0),\ldots,c(N-1) \in Y}. This can be viewed as a colouring of {\{0,\ldots,N-1\}} by {M} colours, where {M} is the cardinality of {Y}. Applying van der Waerden’s theorem, we conclude (if {N} is sufficiently large depending on {k} and {|Y|}) that there is an arithmetic progression {a, a+r,\ldots,a+(k-1)r} in {\{0,\ldots,N-1\}} with {r>0} such that

\displaystyle  c(a) = c(a+r) = \ldots = c(a+(k-1)r) = c

for some {c \in Y}. If we then let {y = (x,c)}, we see from (2) that

\displaystyle  S^{an + irn} y = ( T^{(a+ir)n} x, g(T^{(a+ir)n} x) ) \in E

for all {i=0,\ldots,k-1}, and the claim follows. \Box

Remark 1 The precise connection between Lemma 3 and Proposition 2 arises from the following observation: with {E, F, g} as in the proof of Proposition 2, and {x \in X}, the set

\displaystyle  A := \{ n \in {\bf Z}: T^n x \in F \}

can be partitioned into the classes

\displaystyle  A_i := \{ n \in {\bf Z}: S^n (x,i) \in E' \}

where {E' := \{ (x,g(x)): x \in F \} \subset E} is the graph of {g}. The multiple recurrence property for {X} ensures that {A} contains arbitrarily long arithmetic progressions, and therefore one of the {A_i} must also, which gives the multiple recurrence property for {X \ltimes_\rho Y}.

Remark 2 Compact extensions can be viewed as a generalisation of finite extensions, in which the fibres are no longer finite sets, but are themselves measure spaces obeying an additional property, which roughly speaking asserts that for many functions {f} on the extension, the shifts {T^n f} of {f} behave in an almost periodic fashion on most fibres, so that the orbits {T^n f} become totally bounded on each fibre. This total boundedness allows one to obtain an analogue of the above colouring map {c()} to which van der Waerden’s theorem can be applied.

Let {G = (G,+)} be an abelian countable discrete group. A measure-preserving {G}-system {X = (X, {\mathcal X}, \mu, (T_g)_{g \in G})} (or {G}-system for short) is a probability space {(X, {\mathcal X}, \mu)}, equipped with a measure-preserving action {T_g: X \rightarrow X} of the group {G}, thus

\displaystyle  \mu( T_g(E) ) = \mu(E)

for all {E \in {\mathcal X}} and {g \in G}, and

\displaystyle  T_g T_h = T_{g+h}

for all {g, h \in G}, with {T_0} equal to the identity map. Classically, ergodic theory has focused on the cyclic case {G={\bf Z}} (in which the {T_g} are iterates of a single map {T = T_1}, with elements of {G} being interpreted as a time parameter), but one can certainly consider actions of other groups {G} also (including continuous or non-abelian groups).

A {G}-system is said to be strongly {2}-mixing, or strongly mixing for short, if one has

\displaystyle  \lim_{g \rightarrow \infty} \mu( A \cap T_g B ) = \mu(A) \mu(B)

for all {A, B \in {\mathcal X}}, where the convergence is with respect to the one-point compactification of {G} (thus, for every {\epsilon > 0}, there exists a compact (hence finite) subset {K} of {G} such that {|\mu(A \cap T_g B) - \mu(A)\mu(B)| \leq \epsilon} for all {g \not \in K}).

Similarly, we say that a {G}-system is strongly {3}-mixing if one has

\displaystyle  \lim_{g,h,h-g \rightarrow \infty} \mu( A \cap T_g B \cap T_h C ) = \mu(A) \mu(B) \mu(C)

for all {A,B,C \in {\mathcal X}}, thus for every {\epsilon > 0}, there exists a finite subset {K} of {G} such that

\displaystyle  |\mu( A \cap T_g B \cap T_h C ) - \mu(A) \mu(B) \mu(C)| \leq \epsilon

whenever {g, h, h-g} all lie outside {K}.

It is obvious that a strongly {3}-mixing system is necessarily strongly {2}-mixing. In the case of {{\bf Z}}-systems, it has been an open problem for some time, due to Rohlin, whether the converse is true:

Problem 1 (Rohlin’s problem) Is every strongly mixing {{\bf Z}}-system necessarily strongly {3}-mixing?

This is a surprisingly difficult problem. In the positive direction, a routine application of the Cauchy-Schwarz inequality (via van der Corput’s inequality) shows that every strongly mixing system is weakly {3}-mixing, which roughly speaking means that {\mu(A \cap T_g B \cap T_h C)} converges to {\mu(A) \mu(B) \mu(C)} for most {g, h \in {\bf Z}}. Indeed, every weakly mixing system is in fact weakly mixing of all orders; see for instance this blog post of Carlos Matheus, or these lecture notes of mine. So the problem is to exclude the possibility of correlation between {A}, {T_g B}, and {T_h C} for a small but non-trivial number of pairs {(g,h)}.

It is also known that the answer to Rohlin’s problem is affirmative for rank one transformations (a result of Kalikow) and for shifts with purely singular continuous spectrum (a result of Host; note that strongly mixing systems cannot have any non-trivial point spectrum). Indeed, any counterexample to the problem, if it exists, is likely to be highly pathological.

In the other direction, Rohlin’s problem is known to have a negative answer for {{\bf Z}^2}-systems, by a well-known counterexample of Ledrappier which can be described as follows. One can view a {{\bf Z}^2}-system as being essentially equivalent to a stationary process {(x_{n,m})_{(n,m) \in {\bf Z}^2}} of random variables {x_{n,m}} in some range space {\Omega} indexed by {{\bf Z}^2}, with {X} being {\Omega^{{\bf Z}^2}} with the obvious shift map

\displaystyle  T_{(g,h)} (x_{n,m})_{(n,m) \in {\bf Z}^2} := (x_{n-g,m-h})_{(n,m) \in {\bf Z}^2}.

In Ledrappier’s example, the {x_{n,m}} take values in the finite field {{\bf F}_2} of two elements, and are selected uniformly at random subject to the “Pascal’s triangle” linear constraints

\displaystyle  x_{n,m} = x_{n-1,m} + x_{n,m-1}.

A routine application of the Kolmogorov extension theorem allows one to build such a process. The point is that due to the properties of Pascal’s triangle modulo {2} (known as Sierpinski’s triangle), one has

\displaystyle  x_{n,m} = x_{n-2^k,m} + x_{n,m-2^k}

for all powers of two {2^k}. This is enough to destroy strong {3}-mixing, because it shows a strong correlation between {x}, {T_{(2^k,0)} x}, and {T_{(0,2^k)} x} for arbitrarily large {k} and randomly chosen {x \in X}. On the other hand, one can still show that {x} and {T_g x} are asymptotically uncorrelated for large {g}, giving strong {2}-mixing. Unfortunately, there are significant obstructions to converting Ledrappier’s example from a {{\bf Z}^2}-system to a {{\bf Z}}-system, as pointed out by de la Rue.
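Incidentally, the “Pascal's triangle modulo {2}” property used above is simply the fact that all the middle binomial coefficients {\binom{2^k}{j}}, {0 < j < 2^k}, are even, so that iterating the defining constraint {2^k} times leaves only the two end terms; this is trivial to confirm numerically, as in the following illustrative Python sketch:

```python
from math import comb

# Iterating x_{n,m} = x_{n-1,m} + x_{n,m-1} a total of J times over F_2 gives
# x_{n,m} = sum_j C(J,j) x_{n-j, m-(J-j)}; when J = 2^k all middle binomial
# coefficients are even, leaving x_{n,m} = x_{n-2^k,m} + x_{n,m-2^k}.
for k in range(1, 8):
    J = 2 ** k
    assert all(comb(J, j) % 2 == 0 for j in range(1, J))
print("checked k = 1,...,7")
```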

In this post, I would like to record a “finite field” variant of Ledrappier’s construction, in which {{\bf Z}^2} is replaced by the function field ring {{\bf F}_3[t]}, which is a “dyadic” (or more precisely, “triadic”) model for the integers (cf. this earlier blog post of mine). In other words:

Theorem 2 There exists an {{\bf F}_3[t]}-system that is strongly {2}-mixing but not strongly {3}-mixing.

The idea is much the same as that of Ledrappier; one builds a stationary {{\bf F}_3[t]}-process {(x_n)_{n \in {\bf F}_3[t]}} in which {x_n \in {\bf F}_3} are chosen uniformly at random subject to the constraints

\displaystyle  x_n + x_{n + t^k} + x_{n + 2t^k} = 0 \ \ \ \ \ (1)

for all {n \in {\bf F}_3[t]} and all {k \geq 0}. Again, this system is manifestly not strongly {3}-mixing, but can be shown to be strongly {2}-mixing; I give details below the fold.

As I discussed in this previous post, in many cases the dyadic model serves as a good guide for the non-dyadic model. However, in this case there is a curious rigidity phenomenon that seems to prevent Ledrappier-type examples from being transferable to the one-dimensional non-dyadic setting; once one restores the Archimedean nature of the underlying group, the constraints (1) not only reinforce each other strongly, but also force so much linearity on the system that one loses the strong mixing property.

Tanja Eisner and I have just uploaded to the arXiv our paper “Large values of the Gowers-Host-Kra seminorms“, submitted to Journal d’Analyse Mathematique. This paper is concerned with the properties of three closely related families of (semi)norms, indexed by a positive integer {k}:

  • The Gowers uniformity norms {\|f\|_{U^k(G)}} of a (bounded, measurable, compactly supported) function {f: G \rightarrow {\bf C}} defined on a locally compact abelian group {G}, equipped with a Haar measure {\mu};
  • The Gowers uniformity norms {\|f\|_{U^k([N])}} of a function {f: [N] \rightarrow {\bf C}} on a discrete interval {\{1,\ldots,N\}}; and
  • The Gowers-Host-Kra seminorms {\|f\|_{U^k(X)}} of a function {f \in L^\infty(X)} on an ergodic measure-preserving system {X = (X,{\mathcal X},\mu,T)}.

These norms have been discussed in depth in previous blog posts, so I will just quickly review the definition of the first norm here (the other two (semi)norms are defined similarly). The {U^k(G)} norm is defined recursively by setting

\displaystyle  \| f \|_{U^1(G)} := |\int_G f\ d\mu|

and

\displaystyle  \|f\|_{U^k(G)}^{2^k} := \int_G \| \Delta_h f \|_{U^{k-1}(G)}^{2^{k-1}}\ d\mu(h)

where {\Delta_h f(x) := f(x+h) \overline{f(x)}}. Equivalently, one has

\displaystyle  \|f\|_{U^k(G)} := (\int_G \ldots \int_G \Delta_{h_1} \ldots \Delta_{h_k} f(x)\ d\mu(x) d\mu(h_1) \ldots d\mu(h_k))^{1/2^k}.

Informally, the Gowers uniformity norm {\|f\|_{U^k(G)}} measures the extent to which the phase of {f} behaves like a polynomial of degree less than {k}. Indeed, if {\|f\|_{L^\infty(G)} \leq 1} and {G} is compact with normalised Haar measure {\mu(G)=1}, it is not difficult to show that {\|f\|_{U^k(G)}} is at most {1}, with equality if and only if {f} takes the form {f = e(P) := e^{2\pi iP}} almost everywhere, where {P: G \rightarrow {\bf R}/{\bf Z}} is a polynomial of degree less than {k} (which means that {\partial_{h_1} \ldots \partial_{h_k} P(x) = 0} for all {x,h_1,\ldots,h_k \in G}).
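For the special case {G = {\bf Z}/N{\bf Z}} (with normalised counting measure), the recursive definition above can be implemented directly; the following Python sketch (illustrative only, with an arbitrarily chosen quadratic phase) checks that such a phase has {U^3} norm equal to {1} but small {U^2} norm:

```python
import numpy as np

def gowers_uk(f, k):
    """Gowers U^k norm of f on Z/NZ (normalised counting measure), from the
    recursion ||f||_{U^1} = |E_x f(x)| and
    ||f||_{U^k}^{2^k} = E_h ||Delta_h f||_{U^{k-1}}^{2^{k-1}},
    where Delta_h f(x) = f(x+h) conj(f(x))."""
    N = len(f)
    if k == 1:
        return abs(f.mean())
    total = 0.0
    for h in range(N):
        delta_h = np.roll(f, -h) * np.conj(f)
        total += gowers_uk(delta_h, k - 1) ** (2 ** (k - 1))
    return (total / N) ** (1.0 / 2 ** k)

# an arbitrary quadratic phase e(P) on Z/NZ: its U^3 norm is 1, while its U^2
# norm is of size about N^{-1/4}
N = 101
x = np.arange(N)
f = np.exp(2j * np.pi * (3 * x * x + 5 * x) / N)
print(gowers_uk(f, 3))   # ~ 1 (up to floating point error)
print(gowers_uk(f, 2))   # noticeably smaller than 1
```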

Our first result is to show that this result is robust, uniformly over all choices of group {G}:

Theorem 1 ({L^\infty}-near extremisers) Let {G} be a compact abelian group with normalised Haar measure {\mu(G)=1}, and let {f \in L^\infty(G)} be such that {\|f\|_{L^\infty(G)} \leq 1} and {\|f\|_{U^k(G)} \geq 1-\epsilon} for some {\epsilon > 0} and {k \geq 1}. Then there exists a polynomial {P: G \rightarrow {\bf R}/{\bf Z}} of degree at most {k-1} such that {\|f-e(P)\|_{L^1(G)} = o(1)}, where {o(1)} is bounded by a quantity {c_k(\epsilon)} that goes to zero as {\epsilon \rightarrow 0} for fixed {k}.

The quantity {o(1)} can be described effectively (it is of polynomial size in {\epsilon}), but we did not seek to optimise it here. This result was already known in the case of vector spaces {G = {\bf F}_p^n} over a fixed finite field {{\bf F}_p} (where it is essentially equivalent to the assertion that the property of being a polynomial of degree at most {k-1} is locally testable); the extension to general groups {G} turns out to be fairly routine. The basic idea is to use the recursive structure of the Gowers norms, which tells us in particular that if {\|f\|_{U^k(G)}} is close to one, then {\|\Delta_h f\|_{U^{k-1}(G)}} is close to one for most {h}, which by induction implies that {\Delta_h f} is close to {e(Q_h)} for some polynomials {Q_h} of degree at most {k-2} and for most {h}. (Actually, it is not difficult to use cocycle equations such as {\Delta_{h+k} f = \Delta_h f \times T^h \Delta_k f} (when {|f|=1}) to upgrade “for most {h}” to “for all {h}”.) To finish the job, one would like to express the {Q_h} as derivatives {Q_h = \partial_h P} of a polynomial {P} of degree at most {k-1}. This turns out to be equivalent to requiring that the {Q_h} obey the cocycle equation

\displaystyle  Q_{h+k} = Q_h + T^h Q_k

where {T^h F(x) := F(x+h)} is the translate of {F} by {h}. (In the paper, the sign conventions are reversed, so that {T^h F(x) := F(x-h)}, in order to be compatible with ergodic theory notation, but this makes no substantial difference to the arguments or results.) However, one does not quite get this right away; instead, by using some separation properties of polynomials, one can show the weaker statement that

\displaystyle  Q_{h+k} = Q_h + T^h Q_k + c_{h,k} \ \ \ \ \ (1)

where the {c_{h,k}} are small real constants. To eliminate these constants, one exploits the trivial cohomology of the real line. From (1) one soon concludes that the {c_{h,k}} obey the {2}-cocycle equation

\displaystyle  c_{h,k} + c_{h+k,l} = c_{h,k+l} + c_{k,l}

and an averaging argument (for instance, averaging the {2}-cocycle equation in {l}, and taking {b_h} to be the negative of the average of {c_{h,l}} in {l}) then shows that {c_{h,k}} is a {2}-coboundary in the sense that

\displaystyle  c_{h,k} = b_{h+k} - b_h - b_k

for some small scalar {b_h} depending on {h}. Subtracting {b_h} from {Q_h} then gives the claim.

Similar results and arguments also hold for the {U^k([N])} and {U^k(X)} norms, which we will not detail here.

Dimensional analysis reveals that the {L^\infty} norm is not actually the most natural norm against which to compare the {U^k} norms. An application of Young’s convolution inequality in fact reveals that one has the inequality

\displaystyle  \|f\|_{U^k(G)} \leq \|f\|_{L^{p_k}(G)} \ \ \ \ \ (2)

where {p_k} is the critical exponent {p_k := 2^k/(k+1)}, without any compactness or normalisation hypothesis on the group {G} and the Haar measure {\mu}. This allows us to extend the {U^k(G)} norm to all of {L^{p_k}(G)}. There is then a stronger inverse theorem available:

Theorem 2 ({L^{p_k}}-near extremisers) Let {G} be a locally compact abelian group, and let {f \in L^{p_k}(G)} be such that {\|f\|_{L^{p_k}(G)} \leq 1} and {\|f\|_{U^k(G)} \geq 1-\epsilon} for some {\epsilon > 0} and {k \geq 1}. Then there exists a coset {x_0+H} of a compact open subgroup {H} of {G}, and a polynomial {P: G \rightarrow {\bf R}/{\bf Z}} of degree at most {k-1}, such that {\|f-e(P) 1_{x_0+H}\|_{L^{p_k}(G)} = o(1)}.

Conversely, it is not difficult to show that equality in (2) is attained when {f} takes the form {e(P) 1_{x_0+H}} as above. The main idea of the proof is to use an inverse theorem for Young’s inequality due to Fournier to reduce matters to the {L^\infty} case that was already established. An analogous result is also obtained for the {U^k(X)} norm on an ergodic system; but for technical reasons, the methods do not seem to apply easily to the {U^k([N])} norm. (This norm is essentially equivalent to the {U^k({\bf Z}/\tilde N{\bf Z})} norm up to constants, with {\tilde N} comparable to {N}, but when working with near-extremisers, norms that are only equivalent up to constants can have quite different near-extremal behaviour.)

In the case when {G} is a Euclidean group {{\bf R}^d}, it is possible to use the sharp Young inequality of Beckner and of Brascamp-Lieb to improve (2) somewhat. For instance, when {k=3}, one has

\displaystyle  \|f\|_{U^3({\bf R}^d)} \leq 2^{-d/8} \|f\|_{L^2({\bf R}^d)}

with equality attained if and only if {f} is a gaussian modulated by a quadratic polynomial phase. This additional gain of {2^{-d/8}} allows one to pinpoint the threshold {1-\epsilon} for the previous near-extremiser results in the case of {U^3} norms. For instance, by using the Host-Kra machinery of characteristic factors for the {U^3(X)} norm, combined with an explicit and concrete analysis of the {2}-step nilsystems generated by that machinery, we can show that

\displaystyle  \|f\|_{U^3(X)} \leq 2^{-1/8} \|f\|_{L^2(X)}

whenever {X} is a totally ergodic system and {f} is orthogonal to all linear and quadratic eigenfunctions (which would otherwise form immediate counterexamples to the above inequality), with the factor {2^{-1/8}} being best possible. We can also establish analogous results for the {U^3([N])} and {U^3({\bf Z}/N{\bf Z})} norms (using the inverse {U^3} theorem of Ben Green and myself, in place of the Host-Kra machinery), although it is not clear to us whether the {2^{-1/8}} threshold remains best possible in this case.
