
Ben Krause, Mariusz Mirek, and I have uploaded to the arXiv our paper Pointwise ergodic theorems for non-conventional bilinear polynomial averages. This paper is a contribution to the decades-long program of extending the classical ergodic theorems to “non-conventional” ergodic averages. Here, the focus is on pointwise convergence theorems, and in particular looking for extensions of the pointwise ergodic theorem of Birkhoff:

Theorem 1 (Birkhoff ergodic theorem) Let ${(X,\mu,T)}$ be a measure-preserving system (by which we mean ${(X,\mu)}$ is a ${\sigma}$-finite measure space, and ${T: X \rightarrow X}$ is invertible and measure-preserving), and let ${f \in L^p(X)}$ for any ${1 \leq p < \infty}$. Then the averages ${\frac{1}{N} \sum_{n=1}^N f(T^n x)}$ converge pointwise for ${\mu}$-almost every ${x \in X}$.
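As a quick numerical illustration (not part of any proof), one can watch these averages converge for the ergodic circle rotation ${x \mapsto x + \alpha \hbox{ mod } 1}$; the rotation number and test function below are of course just illustrative choices:

```python
import math

# Numerical illustration of Theorem 1: the irrational rotation
# T(x) = x + alpha mod 1 on [0,1) with Lebesgue measure is
# measure-preserving (and ergodic), so the Birkhoff averages of f
# should converge to the mean of f, which here is 0.

alpha = math.sqrt(2) - 1                    # an irrational rotation number
f = lambda x: math.cos(2 * math.pi * x)     # mean zero over [0,1)

def birkhoff_average(x, N):
    """Compute (1/N) * sum_{n=1}^N f(T^n x)."""
    total, y = 0.0, x
    for _ in range(N):
        y = (y + alpha) % 1.0               # apply the shift T once
        total += f(y)
    return total / N

for N in [10, 1000, 100000]:
    print(N, birkhoff_average(0.123, N))    # tends to the mean 0
```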

Pointwise ergodic theorems have an inherently harmonic-analytic content, as they are closely tied to maximal inequalities. For instance, the Birkhoff ergodic theorem is closely tied to the Hardy-Littlewood maximal inequality.

The above theorem was generalized by Bourgain (conceding the endpoint ${p=1}$, where pointwise almost everywhere convergence is now known to fail) to polynomial averages:

Theorem 2 (Pointwise ergodic theorem for polynomial averages) Let ${(X,\mu,T)}$ be a measure-preserving system, and let ${f \in L^p(X)}$ for any ${1 < p < \infty}$. Let ${P \in {\bf Z}[{\mathrm n}]}$ be a polynomial with integer coefficients. Then the averages ${\frac{1}{N} \sum_{n=1}^N f(T^{P(n)} x)}$ converge pointwise for ${\mu}$-almost every ${x \in X}$.

For bilinear averages, we have a separate 1990 result of Bourgain (for ${L^\infty}$ functions), extended to other ${L^p}$ spaces by Lacey, and with an alternate proof given by Demeter:

Theorem 3 (Pointwise ergodic theorem for two linear polynomials) Let ${(X,\mu,T)}$ be a measure-preserving system with finite measure, and let ${f \in L^{p_1}(X)}$, ${g \in L^{p_2}(X)}$ for some ${1 < p_1,p_2 \leq \infty}$ with ${\frac{1}{p_1}+\frac{1}{p_2} < \frac{3}{2}}$. Then for any integers ${a,b}$, the averages ${\frac{1}{N} \sum_{n=1}^N f(T^{an} x) g(T^{bn} x)}$ converge pointwise almost everywhere.

It has been an open question for some time (see e.g., Problem 11 of this survey of Frantzikinakis) to extend this result to other bilinear ergodic averages. In our paper we are able to achieve this in the partially linear case:

Theorem 4 (Pointwise ergodic theorem for one linear and one nonlinear polynomial) Let ${(X,\mu,T)}$ be a measure-preserving system, and let ${f \in L^{p_1}(X)}$, ${g \in L^{p_2}(X)}$ for some ${1 < p_1,p_2 < \infty}$ with ${\frac{1}{p_1}+\frac{1}{p_2} \leq 1}$. Then for any polynomial ${P \in {\bf Z}[{\mathrm n}]}$ of degree ${d \geq 2}$, the averages ${\frac{1}{N} \sum_{n=1}^N f(T^{n} x) g(T^{P(n)} x)}$ converge pointwise almost everywhere.

We actually prove a bit more than this, namely a maximal function estimate and a variational estimate, together with some additional estimates that “break duality” by applying in certain ranges with ${\frac{1}{p_1}+\frac{1}{p_2}>1}$, but we will not discuss these extensions here. A good model case to keep in mind is when ${p_1=p_2=2}$ and ${P(n) = n^2}$ (which is the case we started with). We note that norm convergence for these averages was established much earlier by Furstenberg and Weiss (in the ${d=2}$ case at least), and in fact norm convergence for arbitrary polynomial averages is now known thanks to the work of Host-Kra, Leibman, and Walsh.

Our proof of Theorem 4 is much closer in spirit to Theorem 2 than to Theorem 3. The property of the averages shared in common by Theorems 2, 4 is that they have “true complexity zero”, in the sense that they can only be large if the functions ${f,g}$ involved are “major arc” or “profinite”, in that they behave periodically over very long intervals (or like a linear combination of such periodic functions). In contrast, the averages in Theorem 3 have “true complexity one”, in the sense that they can also be large if ${f,g}$ are “almost periodic” (a linear combination of eigenfunctions, or plane waves), and as such all proofs of the latter theorem have relied (either explicitly or implicitly) on some form of time-frequency analysis. In principle, the true complexity zero property reduces one to studying the behaviour of averages on major arcs. However, until recently the available estimates to quantify this true complexity zero property were not strong enough to achieve a good reduction of this form, and even once one was in the major arc setting the bilinear averages in Theorem 4 were still quite complicated, exhibiting a mixture of both continuous and arithmetic aspects, both of which are genuinely bilinear in nature.

After applying standard reductions such as the Calderón transference principle, the key task is to establish a suitably “scale-invariant” maximal (or variational) inequality on the integer shift system (in which ${X = {\bf Z}}$ with counting measure, and ${T(n) = n-1}$). A model problem is to establish the maximal inequality

$\displaystyle \| \sup_N |A_N(f,g)| \|_{\ell^1({\bf Z})} \lesssim \|f\|_{\ell^2({\bf Z})}\|g\|_{\ell^2({\bf Z})} \ \ \ \ \ (1)$

where ${N}$ ranges over powers of two and ${A_N}$ is the bilinear operator

$\displaystyle A_N(f,g)(x) := \frac{1}{N} \sum_{n=1}^N f(x-n) g(x-n^2).$
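For concreteness, here is a direct (and deliberately naive) computation of this model operator and of its dyadic maximal function for finitely supported ${f,g}$; the specific test functions are just illustrative:

```python
# A direct implementation of the model bilinear average
#   A_N(f,g)(x) = (1/N) * sum_{n=1}^N f(x-n) g(x-n^2)
# for finitely supported f, g : Z -> R, stored as dicts with default value 0.

def bilinear_average(f, g, N, x):
    return sum(f.get(x - n, 0.0) * g.get(x - n * n, 0.0)
               for n in range(1, N + 1)) / N

def maximal_function(f, g, x, max_exponent=10):
    """sup over dyadic scales N = 2^k, 0 <= k <= max_exponent, of |A_N(f,g)(x)|."""
    return max(abs(bilinear_average(f, g, 2 ** k, x))
               for k in range(max_exponent + 1))

# Example: f = g = indicator of {0,...,99}.  When x = 20 and N = 4, all
# sampled points x-n and x-n^2 lie inside the support, so the average is 1.
f = {n: 1.0 for n in range(100)}
g = dict(f)
print(bilinear_average(f, g, 4, 20))   # prints 1.0
print(maximal_function(f, g, 20))      # prints 1.0
```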

The single scale estimate

$\displaystyle \| A_N(f,g) \|_{\ell^1({\bf Z})} \lesssim \|f\|_{\ell^2({\bf Z})}\|g\|_{\ell^2({\bf Z})}$

or equivalently (by duality)

$\displaystyle \frac{1}{N} \sum_{n=1}^N \sum_{x \in {\bf Z}} h(x) f(x-n) g(x-n^2) \lesssim \|f\|_{\ell^2({\bf Z})}\|g\|_{\ell^2({\bf Z})} \|h\|_{\ell^\infty({\bf Z})} \ \ \ \ \ (2)$

is immediate from Hölder’s inequality; the difficulty is how to take the supremum over scales ${N}$.
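To spell out the single-scale bound: for each fixed ${n}$, Hölder's inequality followed by the Cauchy-Schwarz inequality (together with the fact that translations are isometries of ${\ell^2({\bf Z})}$) give

$\displaystyle \Big| \sum_{x \in {\bf Z}} h(x) f(x-n) g(x-n^2) \Big| \leq \|h\|_{\ell^\infty({\bf Z})} \sum_{x \in {\bf Z}} |f(x-n)| |g(x-n^2)| \leq \|h\|_{\ell^\infty({\bf Z})} \|f\|_{\ell^2({\bf Z})} \|g\|_{\ell^2({\bf Z})},$

and averaging over ${1 \leq n \leq N}$ recovers (2).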

The first step is to understand when the single-scale estimate (2) can come close to equality. A key example to keep in mind is when ${f(x) = e(ax/q) F(x)}$, ${g(x) = e(bx/q) G(x)}$, ${h(x) = e(cx/q) H(x)}$ where ${q=O(1)}$ is a small modulus, ${a,b,c}$ are such that ${a+b+c=0 \hbox{ mod } q}$, ${G}$ is a smooth cutoff to an interval ${I}$ of length ${O(N^2)}$, and ${F=H}$ is also supported on ${I}$ and behaves like a constant on intervals of length ${O(N)}$. Then one can check that (barring some unusual cancellation) (2) is basically sharp for this example. A remarkable result of Peluse and Prendiville (generalised to arbitrary nonlinear polynomials ${P}$ by Peluse) asserts, roughly speaking, that this example is basically the only way in which (2) can be saturated, at least when ${f,g,h}$ are supported on a common interval ${I}$ of length ${O(N^2)}$ and are normalised in ${\ell^\infty}$ rather than ${\ell^2}$. (Strictly speaking, the above paper of Peluse and Prendiville only establishes something like this regarding the ${f,h}$ factors; the corresponding statement for ${g}$ was established in a subsequent paper of Peluse and Prendiville.) The argument requires tools from additive combinatorics such as the Gowers uniformity norms, and hinges in particular on the “degree lowering argument” of Peluse and Prendiville, which I discussed in this previous blog post. Crucially for our application, the estimates are very quantitative, with all bounds being polynomial in the ratio between the left and right hand sides of (2) (or more precisely, the ${\ell^\infty}$-normalized version of (2)).

For our applications we had to extend the ${\ell^\infty}$ inverse theory of Peluse and Prendiville to an ${\ell^2}$ theory. This turned out to require a certain amount of “sleight of hand”. Firstly, one can dualise the theorem of Peluse and Prendiville to show that the “dual function”

$\displaystyle A^*_N(h,g)(x) = \frac{1}{N} \sum_{n=1}^N h(x+n) g(x+n-n^2)$

can be well approximated in ${\ell^1}$ by a function that has Fourier support on “major arcs” if ${g,h}$ enjoy ${\ell^\infty}$ control. To get the required extension to ${\ell^2}$ in the ${f}$ aspect one has to improve the control on the error from ${\ell^1}$ to ${\ell^2}$; this can be done by some interpolation theory combined with the useful Fourier multiplier theory of Ionescu and Wainger on major arcs. Then, by further interpolation using recent ${\ell^p({\bf Z})}$ improving estimates of Han, Kovac, Lacey, Madrid, and Yang for linear averages such as ${x \mapsto \frac{1}{N} \sum_{n=1}^N g(x+n-n^2)}$, one can relax the ${\ell^\infty}$ hypothesis on ${g}$ to an ${\ell^2}$ hypothesis, and then by undoing the duality one obtains a good inverse theorem for (2) for the function ${f}$; a modification of the arguments also gives something similar for ${g}$.

Using these inverse theorems (and the Ionescu-Wainger multiplier theory) one still has to understand the “major arc” portion of (1); a model case arises when ${f,g}$ are supported near rational numbers ${a/q}$ with ${q \sim 2^l}$ for some moderately large ${l}$. The inverse theory gives good control (with an exponential decay in ${l}$) on individual scales ${N}$, and one can leverage this with a Rademacher-Menshov type argument (see e.g., this blog post) and some closer analysis of the bilinear Fourier symbol of ${A_N}$ to eventually handle all “small” scales, with ${N}$ ranging up to say ${2^{2^u}}$ where ${u = C 2^{\rho l}}$ for some small constant ${\rho}$ and large constant ${C}$. For the “large” scales, it becomes feasible to place all the major arcs simultaneously under a single common denominator ${Q}$, and then a quantitative version of the Shannon sampling theorem allows one to transfer the problem from the integers ${{\bf Z}}$ to the locally compact abelian group ${{\bf R} \times {\bf Z}/Q{\bf Z}}$. Actually it was conceptually clearer for us to work instead with the adelic integers ${{\mathbf A}_{\bf Z} = {\bf R} \times \hat {\bf Z}}$, which is the inverse limit of the groups ${{\bf R} \times {\bf Z}/Q{\bf Z}}$. Once one transfers to the adelic integers, the bilinear operators involved split up as tensor products of the “continuous” bilinear operator

$\displaystyle A_{N,{\bf R}}(f,g)(x) := \frac{1}{N} \int_0^N f(x-t) g(x-t^2)\ dt$

on ${{\bf R}}$, and the “arithmetic” bilinear operator

$\displaystyle A_{\hat {\bf Z}}(f,g)(x) := \int_{\hat {\bf Z}} f(x-y) g(x-y^2) d\mu_{\hat {\bf Z}}(y)$

on the profinite integers ${\hat {\bf Z}}$, equipped with probability Haar measure ${\mu_{\hat {\bf Z}}}$. After a number of standard manipulations (interpolation, Fubini’s theorem, Hölder’s inequality, variational inequalities, etc.) the task of estimating this tensor product boils down to establishing an ${L^q}$ improving estimate

$\displaystyle \| A_{\hat {\bf Z}}(f,g) \|_{L^q(\hat {\bf Z})} \lesssim \|f\|_{L^2(\hat {\bf Z})} \|g\|_{L^2(\hat {\bf Z})}$

for some ${q>2}$. Splitting the profinite integers ${\hat {\bf Z}}$ into the product of the ${p}$-adic integers ${{\bf Z}_p}$, it suffices to establish this claim for each ${{\bf Z}_p}$ separately (so long as we keep the implied constant equal to ${1}$ for sufficiently large ${p}$). This turns out to be possible using an arithmetic version of the Peluse-Prendiville inverse theorem as well as an arithmetic ${L^q}$ improving estimate for linear averaging operators which ultimately arises from some estimates on the distribution of polynomials on the ${p}$-adic field ${{\bf Q}_p}$, which are a variant of some estimates of Kowalski and Wright.

I’ve just uploaded to the arXiv my paper “Failure of the ${L^1}$ pointwise and maximal ergodic theorems for the free group”, submitted to Forum of Mathematics, Sigma. This paper concerns a variant of the pointwise ergodic theorem of Birkhoff, which asserts that if one has a measure-preserving shift map ${T: X \rightarrow X}$ on a probability space ${X = (X,\mu)}$, then for any ${f \in L^1(X)}$, the averages ${\frac{1}{N} \sum_{n=1}^N f \circ T^{-n}}$ converge pointwise almost everywhere. (In the important case when the shift map ${T}$ is ergodic, the pointwise limit is simply the mean ${\int_X f\ d\mu}$ of the original function ${f}$.)

The pointwise ergodic theorem can be extended to measure-preserving actions of other amenable groups, if one uses a suitably “tempered” Følner sequence of averages; see this paper of Lindenstrauss for more details. (I also wrote up some notes on that paper here, back in 2006 before I had started this blog.) But the arguments used to handle the amenable case break down completely for non-amenable groups, and in particular for the free non-abelian group ${F_2}$ on two generators.

Nevo and Stein studied this problem and obtained a number of pointwise ergodic theorems for ${F_2}$-actions ${(T_g)_{g \in F_2}}$ on probability spaces ${(X,\mu)}$. For instance, for the spherical averaging operators

$\displaystyle {\mathcal A}_n f := \frac{1}{4 \times 3^{n-1}} \sum_{g \in F_2: |g| = n} f \circ T_g^{-1}$

(where ${|g|}$ denotes the length of the reduced word that forms ${g}$), they showed that ${{\mathcal A}_{2n} f}$ converged pointwise almost everywhere provided that ${f}$ was in ${L^p(X)}$ for some ${p>1}$. (The need to restrict to spheres of even radius can be seen by considering the action of ${F_2}$ on the two-element set ${\{0,1\}}$ in which both generators of ${F_2}$ act by interchanging the elements, in which case ${{\mathcal A}_n}$ is determined by the parity of ${n}$.) This result was reproven with a different and simpler proof by Bufetov, who also managed to relax the condition ${f \in L^p(X)}$ to the weaker condition ${f \in L \log L(X)}$.
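Both the sphere count ${4 \times 3^{n-1}}$ and the parity obstruction are easy to verify by brute force; the sketch below (an illustration of my own, with hypothetical helper names) enumerates reduced words in ${F_2}$:

```python
# Brute-force check of the sphere sizes |{g in F_2 : |g| = n}| = 4 * 3^(n-1),
# together with the parity example: F_2 acts on {0,1} with both generators
# swapping the two elements.

GENS = ['a', 'A', 'b', 'B']                  # A = a^{-1}, B = b^{-1}
INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def sphere(n):
    """All reduced words of length n in F_2, as tuples of generators."""
    words = [()]
    for _ in range(n):
        words = [w + (s,) for w in words for s in GENS
                 if not w or s != INV[w[-1]]]
    return words

for n in range(1, 6):
    assert len(sphere(n)) == 4 * 3 ** (n - 1)

# Since each generator swaps 0 and 1, a word of length n sends x to
# (x + n) mod 2, so the spherical average A_n f(x) equals f((x+n) mod 2):
# it depends only on the parity of n, and A_n f cannot converge as n -> oo
# unless f(0) = f(1).
f = {0: 1.0, 1: 0.0}
def spherical_average(f, x, n):
    S = sphere(n)
    return sum(f[(x + len(w)) % 2] for w in S) / len(S)

print(spherical_average(f, 0, 2), spherical_average(f, 0, 3))  # prints 1.0 0.0
```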

The question remained open as to whether the pointwise ergodic theorem for ${F_2}$-actions held if one only assumed that ${f}$ was in ${L^1(X)}$. Nevo and Stein were able to establish this for the Cesàro averages ${\frac{1}{N} \sum_{n=1}^N {\mathcal A}_n}$, but not for ${{\mathcal A}_n}$ itself. About six years ago, Assaf Naor and I tried our hand at this problem, and were able to show an associated maximal inequality on ${\ell^1(F_2)}$, but due to the non-amenability of ${F_2}$, this inequality did not transfer to ${L^1(X)}$ and did not have any direct impact on this question, despite a fair amount of effort on our part to attack it.

Inspired by some recent conversations with Lewis Bowen, I returned to this problem. This time around, I tried to construct a counterexample to the ${L^1}$ pointwise ergodic theorem – something Assaf and I had not seriously attempted to do (perhaps due to being a bit too enamoured of our ${\ell^1(F_2)}$ maximal inequality). I knew of an existing counterexample of Ornstein regarding a failure of an ${L^1}$ ergodic theorem for iterates ${P^n}$ of a self-adjoint Markov operator – in fact, I had written some notes on this example back in 2007. Upon revisiting my notes, I soon discovered that the Ornstein construction was adaptable to the ${F_2}$ setting, thus settling the problem in the negative:

Theorem 1 (Failure of ${L^1}$ pointwise ergodic theorem) There exists a measure-preserving ${F_2}$-action on a probability space ${X}$ and a non-negative function ${f \in L^1(X)}$ such that ${\sup_n {\mathcal A}_{2n} f(x) = +\infty}$ for almost every ${x}$.

To describe the proof of this theorem, let me first briefly sketch the main ideas of Ornstein’s construction, which gave an example of a self-adjoint Markov operator ${P}$ on a probability space ${X}$ and a non-negative ${f \in L^1(X)}$ such that ${\sup_n P^n f(x) = +\infty}$ for almost every ${x}$. By some standard manipulations, it suffices to show that for any given ${\alpha > 0}$ and ${\varepsilon>0}$, there exists a self-adjoint Markov operator ${P}$ on a probability space ${X}$ and a non-negative ${f \in L^1(X)}$ with ${\|f\|_{L^1(X)} \leq \alpha}$, such that ${\sup_n P^n f \geq 1-\varepsilon}$ on a set of measure at least ${1-\varepsilon}$. Actually, it will be convenient to replace the Markov chain ${(P^n f)_{n \geq 0}}$ with an ancient Markov chain ${(f_n)_{n \in {\bf Z}}}$ – that is to say, a sequence of non-negative functions ${f_n}$ for both positive and negative ${n}$, such that ${f_{n+1} = P f_n}$ for all ${n \in {\bf Z}}$. The purpose of requiring the Markov chain to be ancient (that is, to extend infinitely far back in time) is to allow for the Markov chain to be shifted arbitrarily in time, which is key to Ornstein’s construction. (Technically, Ornstein’s original argument only uses functions that go back to a large negative time, rather than being infinitely ancient, but I will gloss over this point for sake of discussion, as it turns out that the ${F_2}$ version of the argument can be run using infinitely ancient chains.)

For any ${\alpha>0}$, let ${P(\alpha)}$ denote the claim that for any ${\varepsilon>0}$, there exists an ancient Markov chain ${(f_n)_{n \in {\bf Z}}}$ with ${\|f_n\|_{L^1(X)} = \alpha}$ such that ${\sup_{n \in {\bf Z}} f_n \geq 1-\varepsilon}$ on a set of measure at least ${1-\varepsilon}$. Clearly ${P(1)}$ holds since we can just take ${f_n=1}$ for all ${n}$. Our objective is to show that ${P(\alpha)}$ holds for arbitrarily small ${\alpha}$. The heart of Ornstein’s argument is then the implication

$\displaystyle P(\alpha) \implies P( \alpha (1 - \frac{\alpha}{4}) ) \ \ \ \ \ (1)$

for any ${0 < \alpha \leq 1}$, which upon iteration quickly gives the desired claim.
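One can see numerically how this iteration drives the mass to zero: starting from ${\alpha_0 = 1}$ (the trivial case ${P(1)}$) and applying ${\alpha \mapsto \alpha(1-\alpha/4)}$ repeatedly, the mass decays roughly like ${4/k}$ after ${k}$ steps. A minimal sketch:

```python
# Iterating Ornstein's implication P(alpha) => P(alpha * (1 - alpha/4)):
# starting from the trivial case alpha = 1, the masses decrease to zero,
# roughly like 4/k after k steps, which is exactly what "P(alpha) holds
# for arbitrarily small alpha" requires.

def iterate_mass(alpha=1.0, steps=100):
    for _ in range(steps):
        alpha *= 1 - alpha / 4
    return alpha

for k in [10, 100, 1000]:
    print(k, iterate_mass(1.0, k))   # decays like 4/k
```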

Let’s see informally how (1) works. By hypothesis, and ignoring epsilons, we can find an ancient Markov chain ${(f_n)_{n \in {\bf Z}}}$ on some probability space ${X}$ of total mass ${\|f_n\|_{L^1(X)} = \alpha}$, such that ${\sup_n f_n}$ attains the value of ${1}$ or greater almost everywhere. Assuming that the Markov process is irreducible, the ${f_n}$ will eventually converge as ${n \rightarrow \infty}$ to the constant value of ${\|f_n\|_{L^1(X)}}$; in particular the final state will essentially stay above ${\alpha}$ (up to small errors).

Now suppose we duplicate the Markov process by replacing ${X}$ with a double copy ${X \times \{1,2\}}$ (giving ${\{1,2\}}$ the uniform probability measure), and using the disjoint sum of the Markov operators on ${X \times \{1\}}$ and ${X \times \{2\}}$ as the propagator, so that there is no interaction between the two components of this new system. Then the functions ${f'_n(x,i) := f_n(x) 1_{i=1}}$ form an ancient Markov chain of mass at most ${\alpha/2}$ that lives solely in the first half ${X \times \{1\}}$ of this copy, and ${\sup_n f'_n}$ attains the value of ${1}$ or greater on almost all of the first half ${X \times \{1\}}$, but is zero on the second half. The final state of ${f'_n}$ will be to stay above ${\alpha}$ in the first half ${X \times \{1\}}$, but be zero on the second half.

Now we modify the above example by allowing an infinitesimal amount of interaction between the two halves ${X \times \{1\}}$, ${X \times \{2\}}$ of the system (I mentally think of ${X \times \{1\}}$ and ${X \times \{2\}}$ as two identical boxes that a particle can bounce around in, and now we wish to connect the boxes by a tiny tube). The precise way in which this interaction is inserted is not terribly important so long as the new Markov process is irreducible. Once one does so, then the ancient Markov chain ${(f'_n)_{n \in {\bf Z}}}$ in the previous example gets replaced by a slightly different ancient Markov chain ${(f''_n)_{n \in {\bf Z}}}$ which is more or less identical with ${f'_n}$ for negative times ${n}$, or for bounded positive times ${n}$, but for very large values of ${n}$ the final state is now constant across the entire state space ${X \times \{1,2\}}$, and will stay above ${\alpha/2}$ on this space.
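The “two boxes joined by a tiny tube” picture can be simulated directly. In the toy model below (an illustrative model of my own, not the construction from the paper), each box is a cycle ${{\bf Z}/m{\bf Z}}$ carrying a lazy symmetric random walk, and with small probability ${\varepsilon}$ a particle hops to the corresponding site of the other box; mass started in the first box stays there for times much less than ${1/\varepsilon}$ but eventually equidistributes over both boxes:

```python
# Toy "two boxes joined by a tiny tube": two copies of the cycle Z/mZ,
# each carrying a lazy symmetric random walk, with a small probability eps
# of hopping to the matching site of the other copy.  The transition matrix
# is symmetric and doubly stochastic, so the operator is a self-adjoint
# Markov operator with respect to the uniform measure.

m, eps = 20, 0.01

def step(f):
    """Apply the Markov operator once to a density f on {copy 1, copy 2} x Z/m."""
    g = [[0.0] * m for _ in range(2)]
    for i in range(2):
        for x in range(m):
            stay = (1 - eps) * f[i][x]
            g[i][x] += stay / 2                  # lazy part
            g[i][(x - 1) % m] += stay / 4        # walk left
            g[i][(x + 1) % m] += stay / 4        # walk right
            g[1 - i][x] += eps * f[i][x]         # tiny tube to the other box
    return g

f = [[1.0] * m, [0.0] * m]           # unit density on the first box only
for _ in range(5):
    f = step(f)
mass_early = sum(f[0]) / m           # after a few steps: still nearly 1
for _ in range(2000):
    f = step(f)
mass_late = sum(f[0]) / m            # after ~2000 >> 1/eps steps: about 1/2
print(mass_early, mass_late)
```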

Finally, we consider an ancient Markov chain ${F_n}$ which is basically of the form

$\displaystyle F_n(x,i) \approx f''_n(x,i) + (1 - \frac{\alpha}{2}) f_{n-M}(x) 1_{i=2}$

for some large parameter ${M}$ and for all ${n \leq M}$ (the approximation becomes increasingly inaccurate for ${n}$ much larger than ${M}$, but never mind this for now). This is basically two copies of the original Markov process in separate, barely interacting state spaces ${X \times \{1\}, X \times \{2\}}$, but with the second copy delayed by a large time delay ${M}$, and also attenuated in amplitude by a factor of ${1-\frac{\alpha}{2}}$. The total mass of this process is now ${\frac{\alpha}{2} + \frac{\alpha}{2} (1 -\frac{\alpha}{2}) = \alpha (1 - \alpha/4)}$. Because of the ${f''_n}$ component of ${F_n}$, we see that ${\sup_n F_n}$ basically attains the value of ${1}$ or greater on the first half ${X \times \{1\}}$. On the second half ${X \times \{2\}}$, we work with times ${n}$ close to ${M}$. If ${M}$ is large enough, ${f''_n}$ would have averaged out to about ${\alpha/2}$ at such times, but the ${(1 - \frac{\alpha}{2}) f_{n-M}(x)}$ component can get as large as ${1-\alpha/2}$ here. Summing (and continuing to ignore various epsilon losses), we see that ${\sup_n F_n}$ can get as large as ${1}$ on almost all of the second half ${X \times \{2\}}$. This concludes the rough sketch of how one establishes the implication (1).

It was observed by Bufetov that the spherical averages ${{\mathcal A}_n}$ for a free group action can be lifted up to become powers ${P^n}$ of a Markov operator, basically by randomly assigning a “velocity vector” ${s \in \{a,b,a^{-1},b^{-1}\}}$ to one’s base point ${x}$ and then applying the Markov process that moves ${x}$ along that velocity vector (and then randomly changing the velocity vector at each time step, subject to the “reduced word” condition that the velocity never flips from ${s}$ to ${s^{-1}}$). Thus the spherical average problem has a Markov operator interpretation, which opens the door to adapting the Ornstein construction to the setting of ${F_2}$ systems. This turns out to be doable after a certain amount of technical artifice; the main thing is to work with ${F_2}$-measure preserving systems that admit ancient Markov chains that are initially supported in a very small region in the “interior” of the state space, so that one can couple such systems to each other “at the boundary” in the fashion needed to establish the analogue of (1) without disrupting the ancient dynamics of such chains. The initial such system (used to establish the base case ${P(1)}$) comes from basically considering the action of ${F_2}$ on a (suitably renormalised) “infinitely large ball” in the Cayley graph, after suitably gluing together the boundary of this ball to complete the action. The ancient Markov chain associated to this system starts at the centre of this infinitely large ball at infinite negative time ${n=-\infty}$, and only reaches the boundary of this ball at the time ${n=0}$.