In graph theory, the recently developed theory of graph limits has proven to be a useful tool for analysing large dense graphs, being a convenient reformulation of the Szemerédi regularity lemma. Roughly speaking, the theory asserts that given any sequence ${G_n = (V_n, E_n)}$ of finite graphs, one can extract a subsequence ${G_{n_j} = (V_{n_j}, E_{n_j})}$ which converges (in a specific sense) to a continuous object known as a “graphon” – a symmetric measurable function ${p\colon [0,1] \times [0,1] \rightarrow [0,1]}$. What “converges” means in this context is that subgraph densities converge to the associated integrals of the graphon ${p}$. For instance, the edge density

$\displaystyle \frac{1}{|V_{n_j}|^2} |E_{n_j}|$

converges to the integral

$\displaystyle \int_0^1 \int_0^1 p(x,y)\ dx dy,$

the triangle density

$\displaystyle \frac{1}{|V_{n_j}|^3} \lvert \{ (v_1,v_2,v_3) \in V_{n_j}^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_{n_j} \} \rvert$

converges to the integral

$\displaystyle \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ dx_1 dx_2 dx_3,$

the four-cycle density

$\displaystyle \frac{1}{|V_{n_j}|^4} \lvert \{ (v_1,v_2,v_3,v_4) \in V_{n_j}^4: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_4\}, \{v_4,v_1\} \in E_{n_j} \} \rvert$

converges to the integral

$\displaystyle \int_0^1 \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_4) p(x_4,x_1)\ dx_1 dx_2 dx_3 dx_4,$

and so forth. One can use graph limits to prove many results in graph theory that were traditionally proven using the regularity lemma, such as the triangle removal lemma, and can also reduce many asymptotic graph theory problems to continuous problems involving multilinear integrals (although the latter problems are not necessarily easy to solve!). See this text of Lovasz for a detailed study of graph limits and their applications.
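As a quick numerical sanity check (my own illustration, not part of the theory above), one can observe this convergence for Erdős–Rényi random graphs ${G(n,1/2)}$, which converge to the constant graphon ${p \equiv 1/2}$: the edge and triangle densities should then be close to ${1/2}$ and ${(1/2)^3 = 1/8}$ respectively. A minimal Python sketch:

```python
import random

def sample_graph(n, p, rng):
    """Adjacency matrix of an Erdos-Renyi random graph G(n, p)."""
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i][j] = adj[j][i] = True
    return adj

def edge_density(adj):
    # ordered pairs, matching the |E|/|V|^2 normalisation used above
    n = len(adj)
    return sum(adj[i][j] for i in range(n) for j in range(n)) / n ** 2

def triangle_density(adj):
    # ordered triples (v1, v2, v3) forming a triangle, divided by |V|^3
    n = len(adj)
    count = sum(1 for i in range(n) for j in range(n) if adj[i][j]
                for k in range(n) if adj[j][k] and adj[k][i])
    return count / n ** 3

rng = random.Random(0)
adj = sample_graph(120, 0.5, rng)
print(edge_density(adj))      # close to 1/2
print(triangle_density(adj))  # close to 1/8
```

Up to ${O(1/n)}$ boundary effects (diagonal pairs and repeated vertices), the computed densities match the graphon integrals.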

One can also express graph limits (and more generally hypergraph limits) in the language of nonstandard analysis (or of ultraproducts); see for instance this paper of Elek and Szegedy, Section 6 of this previous blog post, or this paper of Towsner. (In this post we assume some familiarity with nonstandard analysis, as reviewed for instance in the previous blog post.) Here, one starts as before with a sequence ${G_n = (V_n,E_n)}$ of finite graphs, and then takes an ultraproduct (with respect to some arbitrarily chosen non-principal ultrafilter ${\alpha \in\beta {\bf N} \backslash {\bf N}}$) to obtain a nonstandard graph ${G_\alpha = (V_\alpha,E_\alpha)}$, where ${V_\alpha = \prod_{n\rightarrow \alpha} V_n}$ is the ultraproduct of the ${V_n}$, and similarly for the ${E_\alpha}$. The set ${E_\alpha}$ can then be viewed as a symmetric subset of ${V_\alpha \times V_\alpha}$ which is measurable with respect to the Loeb ${\sigma}$-algebra ${{\mathcal L}_{V_\alpha \times V_\alpha}}$ of the product ${V_\alpha \times V_\alpha}$ (see this previous blog post for the construction of Loeb measure). A crucial point is that this ${\sigma}$-algebra is larger than the product ${{\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha}}$ of the Loeb ${\sigma}$-algebra of the individual vertex set ${V_\alpha}$. This leads to a decomposition

$\displaystyle 1_{E_\alpha} = p + e$

where the “graphon” ${p}$ is the orthogonal projection of ${1_{E_\alpha}}$ onto ${L^2( {\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha} )}$, and the “regular error” ${e}$ is orthogonal to all product sets ${A \times B}$ for ${A, B \in {\mathcal L}_{V_\alpha}}$. The graphon ${p\colon V_\alpha \times V_\alpha \rightarrow [0,1]}$ then captures the statistics of the nonstandard graph ${G_\alpha}$, in exact analogy with the more traditional graph limits: for instance, the edge density

$\displaystyle \hbox{st} \frac{1}{|V_\alpha|^2} |E_\alpha|$

(or equivalently, the limit of the ${\frac{1}{|V_n|^2} |E_n|}$ along the ultrafilter ${\alpha}$) is equal to the integral

$\displaystyle \int_{V_\alpha} \int_{V_\alpha} p(x,y)\ d\mu_{V_\alpha}(x) d\mu_{V_\alpha}(y)$

where ${d\mu_V}$ denotes Loeb measure on a nonstandard finite set ${V}$; the triangle density

$\displaystyle \hbox{st} \frac{1}{|V_\alpha|^3} \lvert \{ (v_1,v_2,v_3) \in V_\alpha^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_\alpha \} \rvert$

(or equivalently, the limit along ${\alpha}$ of the triangle densities of ${E_n}$) is equal to the integral

$\displaystyle \int_{V_\alpha} \int_{V_\alpha} \int_{V_\alpha} p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ d\mu_{V_\alpha}(x_1) d\mu_{V_\alpha}(x_2) d\mu_{V_\alpha}(x_3),$

and so forth. Note that with this construction, the graphon ${p}$ is living on the Cartesian square of an abstract probability space ${V_\alpha}$, which is likely to be inseparable; but it is possible to cut down the Loeb ${\sigma}$-algebra on ${V_\alpha}$ to a minimal countably generated ${\sigma}$-algebra for which ${p}$ remains measurable (up to null sets), and then one can identify ${V_\alpha}$ with ${[0,1]}$, bringing this construction of a graphon in line with the traditional notion of a graphon. (See Remark 5 of this previous blog post for more discussion of this point.)

Additive combinatorics, which studies things like the additive structure of finite subsets ${A}$ of an abelian group ${G = (G,+)}$, has many analogies and connections with asymptotic graph theory; in particular, there is the arithmetic regularity lemma of Green which is analogous to the graph regularity lemma of Szemerédi. (There is also a higher order arithmetic regularity lemma analogous to hypergraph regularity lemmas, but this is not the focus of the discussion here.) Given this, it is natural to suspect that there is a theory of “additive limits” for large additive sets of bounded doubling, analogous to the theory of graph limits for large dense graphs. The purpose of this post is to record a candidate for such an additive limit. This limit can be used as a substitute for the arithmetic regularity lemma in certain results in additive combinatorics, at least if one is willing to settle for qualitative results rather than quantitative ones; I give a few examples of this below the fold.

It seems that to allow for the most flexible and powerful manifestation of this theory, it is convenient to use the nonstandard formulation (among other things, it allows for full use of the transfer principle, whereas a more traditional limit formulation would only allow for a transfer of those quantities continuous with respect to the notion of convergence). Here, the analogue of a nonstandard graph is an ultra approximate group ${A_\alpha}$ in a nonstandard group ${G_\alpha = \prod_{n \rightarrow \alpha} G_n}$, defined as the ultraproduct of finite ${K}$-approximate groups ${A_n \subset G_n}$ for some standard ${K}$. (A ${K}$-approximate group ${A_n}$ is a symmetric set containing the origin such that ${A_n+A_n}$ can be covered by ${K}$ or fewer translates of ${A_n}$.) We then let ${O(A_\alpha)}$ be the external subgroup of ${G_\alpha}$ generated by ${A_\alpha}$; equivalently, ${O(A_\alpha)}$ is the union of ${A_\alpha^m}$ over all standard ${m}$. This space has a Loeb measure ${\mu_{O(A_\alpha)}}$, defined by setting

$\displaystyle \mu_{O(A_\alpha)}(E_\alpha) := \hbox{st} \frac{|E_\alpha|}{|A_\alpha|}$

whenever ${E_\alpha}$ is an internal subset of ${A_\alpha^m}$ for any standard ${m}$, and extended to a countably additive measure; the arguments in Section 6 of this previous blog post can be easily modified to give a construction of this measure.

The Loeb measure ${\mu_{O(A_\alpha)}}$ is a translation invariant measure on ${O(A_{\alpha})}$, normalised so that ${A_\alpha}$ has Loeb measure one. As such, one should think of ${O(A_\alpha)}$ as being analogous to a locally compact abelian group equipped with a Haar measure. It should be noted though that ${O(A_\alpha)}$ is not actually a locally compact group with Haar measure, for two reasons:

• There is not an obvious topology on ${O(A_\alpha)}$ that makes it simultaneously locally compact, Hausdorff, and ${\sigma}$-compact. (One can get one or two out of three without difficulty, though.)
• The addition operation ${+\colon O(A_\alpha) \times O(A_\alpha) \rightarrow O(A_\alpha)}$ is not measurable from the product Loeb algebra ${{\mathcal L}_{O(A_\alpha)} \times {\mathcal L}_{O(A_\alpha)}}$ to ${{\mathcal L}_{O(A_\alpha)}}$. Instead, it is measurable from the coarser Loeb algebra ${{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}}$ to ${{\mathcal L}_{O(A_\alpha)}}$ (compare with the analogous situation for nonstandard graphs).

Nevertheless, the analogy is a useful guide for the arguments that follow.

Let ${L(O(A_\alpha))}$ denote the space of bounded Loeb measurable functions ${f\colon O(A_\alpha) \rightarrow {\bf C}}$ (modulo almost everywhere equivalence) that are supported on ${A_\alpha^m}$ for some standard ${m}$; this is a complex algebra with respect to pointwise multiplication. There is also a convolution operation ${\star\colon L(O(A_\alpha)) \times L(O(A_\alpha)) \rightarrow L(O(A_\alpha))}$, defined by setting

$\displaystyle \hbox{st} f \star \hbox{st} g(x) := \hbox{st} \frac{1}{|A_\alpha|} \sum_{y \in A_\alpha^m} f(y) g(x-y)$

whenever ${f\colon A_\alpha^m \rightarrow {}^* {\bf C}}$, ${g\colon A_\alpha^l \rightarrow {}^* {\bf C}}$ are bounded nonstandard functions (extended by zero to all of ${O(A_\alpha)}$), and then extending to arbitrary elements of ${L(O(A_\alpha))}$ by density. Equivalently, ${f \star g}$ is the pushforward of the ${{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}}$-measurable function ${(x,y) \mapsto f(x) g(y)}$ under the map ${(x,y) \mapsto x+y}$.
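To get a feel for this operation, here is a finite toy analogue in Python (purely illustrative, not part of the construction above): the normalised convolution of the indicator of an interval with itself on a cyclic group ${{\bf Z}/N{\bf Z}}$, which produces the familiar triangle function of height one:

```python
def normalised_convolution(f, g, N, A_size):
    """Finite analogue of (f * g)(x) = (1/|A|) sum_y f(y) g(x - y) on Z/NZ."""
    return [sum(f[y] * g[(x - y) % N] for y in range(N)) / A_size
            for x in range(N)]

N, m = 100, 10
indicator = [1 if x < m else 0 for x in range(N)]  # indicator of the interval [0, m)
conv = normalised_convolution(indicator, indicator, N, m)
# triangle of height 1 peaking at x = m - 1, value 1/m at the endpoints
print(conv[0], conv[m - 1], conv[2 * m - 2])  # 0.1 1.0 0.1
```

The normalisation by ${|A|}$ (here `A_size`) is what keeps the convolution bounded, mirroring the ${\frac{1}{|A_\alpha|}}$ factor in the definition above.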

The basic structural theorem is then as follows.

Theorem 1 (Kronecker factor) Let ${A_\alpha}$ be an ultra approximate group. Then there exists a (standard) locally compact abelian group ${G}$ of the form

$\displaystyle G = {\bf R}^d \times {\bf Z}^m \times T$

for some standard ${d,m}$ and some compact abelian group ${T}$, equipped with a Haar measure ${\mu_G}$ and a measurable homomorphism ${\pi\colon O(A_\alpha) \rightarrow G}$ (using the Loeb ${\sigma}$-algebra on ${O(A_\alpha)}$ and the Baire ${\sigma}$-algebra on ${G}$), with the following properties:

• (i) ${\pi}$ has dense image, and ${\mu_G}$ is the pushforward of Loeb measure ${\mu_{O(A_\alpha)}}$ by ${\pi}$.
• (ii) There exist sets ${\{0\} \subset U_0 \subset K_0 \subset G}$ with ${U_0}$ open and ${K_0}$ compact, such that

$\displaystyle \pi^{-1}(U_0) \subset 4A_\alpha \subset \pi^{-1}(K_0). \ \ \ \ \ (1)$

• (iii) Whenever ${K \subset U \subset G}$ with ${K}$ compact and ${U}$ open, there exists a nonstandard finite set ${B}$ such that

$\displaystyle \pi^{-1}(K) \subset B \subset \pi^{-1}(U). \ \ \ \ \ (2)$

• (iv) If ${f, g \in L}$, then we have the convolution formula

$\displaystyle f \star g = \pi^*( (\pi_* f) \star (\pi_* g) ) \ \ \ \ \ (3)$

where ${\pi_* f,\pi_* g}$ are the pushforwards of ${f,g}$ to ${L^2(G, \mu_G)}$, the convolution ${\star}$ on the right-hand side is convolution using ${\mu_G}$, and ${\pi^*}$ is the pullback map from ${L^2(G,\mu_G)}$ to ${L^2(O(A_\alpha), \mu_{O(A_\alpha)})}$. In particular, if ${\pi_* f = 0}$, then ${f \star g = 0}$ for all ${g \in L}$.

One can view the locally compact abelian group ${G}$ as a “model” or “Kronecker factor” for the ultra approximate group ${A_\alpha}$ (in close analogy with the Kronecker factor from ergodic theory). In the case that ${A_\alpha}$ is a genuine nonstandard finite group rather than an ultra approximate group, the non-compact components ${{\bf R}^d \times {\bf Z}^m}$ of the Kronecker group ${G}$ are trivial, and this theorem was implicitly established by Szegedy. The compact group ${T}$ is quite large, and in particular is likely to be inseparable; but as with the case of graphons, when one is only studying at most countably many functions ${f}$, one can cut down the size of this group to be separable (or equivalently, second countable or metrisable) if desired, so one often works with a “reduced Kronecker factor” which is a quotient of the full Kronecker factor ${G}$. Once one is in the separable case, the Baire ${\sigma}$-algebra is identical with the more familiar Borel ${\sigma}$-algebra.

Given any sequence of uniformly bounded functions ${f_n\colon A_n^m \rightarrow {\bf C}}$ for some fixed ${m}$, we can view the function ${f \in L}$ defined by

$\displaystyle f := \pi_* \hbox{st} \lim_{n \rightarrow \alpha} f_n \ \ \ \ \ (4)$

as an “additive limit” of the ${f_n}$, in much the same way that graphons ${p\colon V_\alpha \times V_\alpha \rightarrow [0,1]}$ are limits of the indicator functions ${1_{E_n}\colon V_n \times V_n \rightarrow \{0,1\}}$. The additive limits capture some of the statistics of the ${f_n}$, for instance the normalised means

$\displaystyle \frac{1}{|A_n|} \sum_{x \in A_n^m} f_n(x)$

converge (along the ultrafilter ${\alpha}$) to the mean

$\displaystyle \int_G f(x)\ d\mu_G(x),$

and for three sequences ${f_n,g_n,h_n\colon A_n^m \rightarrow {\bf C}}$ of functions, the normalised correlation

$\displaystyle \frac{1}{|A_n|^2} \sum_{x,y \in A_n^m} f_n(x) g_n(y) h_n(x+y)$

converges along ${\alpha}$ to the correlation

$\displaystyle \int_G \int_G f(x) g(y) h(x+y)\ d\mu_G(x) d\mu_G(y),$

the normalised ${U^2}$ Gowers norm

$\displaystyle ( \frac{1}{|A_n|^3} \sum_{x,y,z,w \in A_n^m: x+w=y+z} f_n(x) \overline{f_n(y)} \overline{f_n(z)} f_n(w))^{1/4}$

converges along ${\alpha}$ to the ${U^2}$ Gowers norm

$\displaystyle ( \int_{G \times G \times G} f(x) \overline{f(y)} \overline{f(z)} f(x+y-z)\ d\mu_G(x) d\mu_G(y) d\mu_G(z))^{1/4}$

and so forth. We caution however that some correlations that involve evaluating more than one function at the same point will not necessarily be preserved in the additive limit; for instance the normalised ${\ell^2}$ norm

$\displaystyle (\frac{1}{|A_n|} \sum_{x \in A_n^m} |f_n(x)|^2)^{1/2}$

does not necessarily converge to the ${L^2}$ norm

$\displaystyle (\int_G |f(x)|^2\ d\mu_G(x))^{1/2},$

but can converge instead to a larger quantity, due to the presence of the orthogonal projection ${\pi_*}$ in the definition (4) of ${f}$.

An important special case of an additive limit occurs when the functions ${f_n\colon A_n^m \rightarrow {\bf C}}$ involved are indicator functions ${f_n = 1_{E_n}}$ of some subsets ${E_n}$ of ${A_n^m}$. The additive limit ${f \in L}$ does not necessarily remain an indicator function, but instead takes values in ${[0,1]}$ (much as a graphon ${p}$ takes values in ${[0,1]}$ even though the original indicators ${1_{E_n}}$ take values in ${\{0,1\}}$). The convolution ${f \star f\colon G \rightarrow [0,1]}$ is then the ultralimit of the normalised convolutions ${\frac{1}{|A_n|} 1_{E_n} \star 1_{E_n}}$; in particular, the measure of the support of ${f \star f}$ provides a lower bound on the limiting normalised cardinality ${\frac{1}{|A_n|} |E_n + E_n|}$ of a sumset. In many situations this lower bound is an equality, but this is not necessarily the case, because the sumset ${2E_n = E_n + E_n}$ could contain a large number of elements which have very few (${o(|A_n|)}$) representations as the sum of two elements of ${E_n}$, and in the limit these portions of the sumset fall outside of the support of ${f \star f}$. (One can think of the support of ${f \star f}$ as describing the “essential” sumset of ${2E_n = E_n + E_n}$, discarding those elements that have only very few representations.) Similarly for higher convolutions of ${f}$. Thus one can use additive limits to partially control the growth ${k E_n}$ of iterated sumsets of subsets ${E_n}$ of approximate groups ${A_n}$, in the regime where ${k}$ stays bounded and ${n}$ goes to infinity.
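The distinction between the sumset and its “essential” part can be illustrated by a small Python experiment (a toy example of my own devising, not drawn from the text): take ${E}$ to be an interval together with one far-away element. The translated copy of the interval inside ${E+E}$ then has only ${O(1)}$ representations per element, and would fall outside the support of ${f \star f}$ in the limit, while the bulk of the interval-plus-interval portion has many representations:

```python
m, t, p = 100, 10 ** 6, 2 ** 31 - 1   # illustrative parameters
E = set(range(m)) | {t}               # an interval plus one far-away point

reps = {}                             # ordered representation counts in E + E
for a in E:
    for b in E:
        z = (a + b) % p
        reps[z] = reps.get(z, 0) + 1

total = len(reps)                                         # |E + E|
essential = sum(1 for r in reps.values() if r >= m // 10)  # many representations
print(total, essential)  # 300 181
```

Here a third of the sumset consists of elements with only one or two representations; in the asymptotic regime ${m \rightarrow \infty}$ these would be invisible to ${f \star f}$.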

Theorem 1 can be proven by Fourier-analytic means (combined with Freiman’s theorem from additive combinatorics), and we will do so below the fold. For now, we give some illustrative examples of additive limits.

Example 2 (Bohr sets) We take ${A_n}$ to be the intervals ${A_n := \{ x \in {\bf Z}: |x| \leq N_n \}}$, where ${N_n}$ is a sequence going to infinity; these are ${2}$-approximate groups for all ${n}$. Let ${\theta}$ be an irrational real number, let ${I}$ be an interval in ${{\bf R}/{\bf Z}}$, and for each natural number ${n}$ let ${B_n}$ be the Bohr set

$\displaystyle B_n := \{ x \in A_n: \theta x \hbox{ mod } 1 \in I \}.$

In this case, the (reduced) Kronecker factor ${G}$ can be taken to be the infinite cylinder ${{\bf R} \times {\bf R}/{\bf Z}}$ with the usual Lebesgue measure ${\mu_G}$. The additive limits of ${1_{A_n}}$ and ${1_{B_n}}$ end up being ${1_A}$ and ${1_B}$, where ${A}$ is the finite cylinder

$\displaystyle A := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]\}$

and ${B}$ is the rectangle

$\displaystyle B := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]; t \in I \}.$

Geometrically, one should think of ${A_n}$ and ${B_n}$ as being wrapped around the cylinder ${{\bf R} \times {\bf R}/{\bf Z}}$ via the homomorphism ${x \mapsto (\frac{x}{N_n}, \theta x \hbox{ mod } 1)}$, and then one sees that ${B_n}$ is converging in some normalised weak sense to ${B}$, and similarly for ${A_n}$ and ${A}$. In particular, the additive limit predicts the growth rate of the iterated sumsets ${kB_n}$ to be quadratic in ${k}$ until ${k|I|}$ becomes comparable to ${1}$, at which point the growth transitions to linear growth, in the regime where ${k}$ is bounded and ${n}$ is large.

If ${\theta = \frac{p}{q}}$ were rational instead of irrational, then one would need to replace ${{\bf R}/{\bf Z}}$ by the finite subgroup ${\frac{1}{q}{\bf Z}/{\bf Z}}$ here.
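One can observe the predicted quadratic growth of ${kB_n}$ numerically in Python (parameters are illustrative; ${\theta}$ is taken to be the golden ratio conjugate and ${I = [0, 0.1)}$, so the quadratic regime persists while ${k/10 < 1}$):

```python
import math

N = 2000
theta = (math.sqrt(5) - 1) / 2        # an irrational rotation number
B = {x for x in range(-N, N + 1) if (theta * x) % 1 < 0.1}  # Bohr set, I = [0, 0.1)

def sumset(X, Y):
    return {x + y for x in X for y in Y}

kB, sizes = B, [len(B)]
for _ in range(3):
    kB = sumset(kB, B)
    sizes.append(len(kB))
# while k|I| < 1 the growth is roughly quadratic: |kB| ~ k^2 |B|
print([round(s / sizes[0], 1) for s in sizes])
```

The printed ratios come out close to ${1, 4, 9, 16}$, matching the measures of the supports of the iterated convolutions of ${1_B}$ on the cylinder.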

Example 3 (Structured subsets of progressions) We take ${A_n}$ to be the rank two progression

$\displaystyle A_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|, |b| \leq N_n \},$

where ${N_n}$ is a sequence going to infinity; these are ${4}$-approximate groups for all ${n}$. Let ${B_n}$ be the subset

$\displaystyle B_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|^2 + |b|^2 \leq N_n^2 \}.$

Then the (reduced) Kronecker factor can be taken to be ${G = {\bf R}^2}$ with Lebesgue measure ${\mu_G}$, and the additive limits of the ${1_{A_n}}$ and ${1_{B_n}}$ are then ${1_A}$ and ${1_B}$, where ${A}$ is the square

$\displaystyle A := \{ (a,b) \in {\bf R}^2: |a|, |b| \leq 1 \}$

and ${B}$ is the disk

$\displaystyle B := \{ (a,b) \in {\bf R}^2: a^2+b^2 \leq 1 \}.$

Geometrically, the picture is similar to the Bohr set one, except now one uses a Freiman homomorphism ${a + b N_n^2 \mapsto (\frac{a}{N_n}, \frac{b}{N_n})}$ for ${a,b = O( N_n )}$ to embed the original sets ${A_n, B_n}$ into the plane ${{\bf R}^2}$. In particular, one now expects the growth rate of the iterated sumsets ${k A_n}$ and ${k B_n}$ to be quadratic in ${k}$, in the regime where ${k}$ is bounded and ${n}$ is large.

Example 4 (Dissociated sets) Let ${d}$ be a fixed natural number, and take

$\displaystyle A_n = \{0, v_1,\dots,v_d,-v_1,\dots,-v_d \}$

where ${v_1,\dots,v_d}$ are randomly chosen elements of a large cyclic group ${{\bf Z}/p_n{\bf Z}}$, where ${p_n}$ is a sequence of primes going to infinity. These are ${O(d)}$-approximate groups. The (reduced) Kronecker factor ${G}$ can (almost surely) then be taken to be ${{\bf Z}^d}$ with counting measure, and the additive limit of ${1_{A_n}}$ is ${1_A}$, where ${A = \{ 0, e_1,\dots,e_d,-e_1,\dots,-e_d\}}$ and ${e_1,\dots,e_d}$ is the standard basis of ${{\bf Z}^d}$. In particular, the growth rates of ${k A_n}$ should grow approximately like ${k^d}$ for ${k}$ bounded and ${n}$ large.
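A concrete Python sketch of this example with ${d=2}$ (the generators below are fixed “generic” values of my choosing rather than genuinely random ones, selected so that no collisions occur for small ${k}$):

```python
p = 2 ** 31 - 1                     # a large prime (illustrative)
v1, v2 = 10 ** 6 + 3, 10 ** 7 + 19  # fixed "generic" stand-ins for random generators
A = {0, v1, v2, p - v1, p - v2}     # {0, v1, v2, -v1, -v2} in Z/pZ

def sumset(X, Y):
    return {(x + y) % p for x in X for y in Y}

kA, sizes = A, []
for k in range(1, 5):
    sizes.append(len(kA))
    kA = sumset(kA, A)
# matches the l^1 ball of radius k in Z^2, which has 2k^2 + 2k + 1 points
print(sizes)  # [5, 13, 25, 41]
```

Since every element of ${kA}$ with ${k \leq 4}$ is of the form ${a v_1 + b v_2}$ with ${|a|+|b| \leq k}$, and these values are all distinct modulo ${p}$ for the generators above, the sizes agree exactly with the ${\ell^1}$ ball count ${2k^2+2k+1}$, exhibiting the ${\approx k^d}$ growth.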

Example 5 (Random subsets of groups) Let ${A_n = G_n}$ be a sequence of finite additive groups whose order is going to infinity. Let ${B_n}$ be a random subset of ${G_n}$ of some fixed density ${0 \leq \lambda \leq 1}$. Then (almost surely) the Kronecker factor here can be reduced all the way to the trivial group ${\{0\}}$, and the additive limit of the ${1_{B_n}}$ is the constant function ${\lambda}$. The convolutions ${\frac{1}{|G_n|} 1_{B_n} * 1_{B_n}}$ then converge in the ultralimit (modulo almost everywhere equivalence) to the pullback of ${\lambda^2}$; this reflects the fact that ${(1-o(1))|G_n|}$ of the elements of ${G_n}$ can be represented as the sum of two elements of ${B_n}$ in ${(\lambda^2 + o(1)) |G_n|}$ ways. In particular, ${B_n+B_n}$ occupies a proportion ${1-o(1)}$ of ${G_n}$.
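A quick Python simulation of this example (with illustrative parameters) confirms both that ${B_n+B_n}$ covers almost all of ${G_n}$ and that the normalised representation counts concentrate around ${\lambda^2}$:

```python
import random

rng = random.Random(1)
p, lam = 997, 0.3                  # |G_n| and the density lambda (illustrative)
B = {x for x in range(p) if rng.random() < lam}

reps = [0] * p                     # ordered representations of each z as b1 + b2
for b1 in B:
    for b2 in B:
        reps[(b1 + b2) % p] += 1

coverage = sum(1 for r in reps if r > 0) / p
mean_density = sum(reps) / p ** 2  # concentrates around lambda^2
print(coverage, round(mean_density, 3))
```

Each element has about ${\lambda^2 p \approx 90}$ representations, so the coverage is essentially complete even at this modest size.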

Example 6 (Trigonometric series) Take ${A_n = G_n = {\bf Z}/p_n {\bf Z}}$ for a sequence ${p_n}$ of primes going to infinity, and for each ${n}$ let ${\xi_{n,1},\xi_{n,2},\dots}$ be an infinite sequence of frequencies chosen uniformly and independently from ${{\bf Z}/p_n{\bf Z}}$. Let ${f_n\colon {\bf Z}/p_n{\bf Z} \rightarrow {\bf C}}$ denote the random trigonometric series

$\displaystyle f_n(x) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i \xi_{n,j} x / p_n }.$

Then (almost surely) we can take the reduced Kronecker factor ${G}$ to be the infinite torus ${({\bf R}/{\bf Z})^{\bf N}}$ (with the Haar probability measure ${\mu_G}$), and the additive limit of the ${f_n}$ then becomes the function ${f\colon ({\bf R}/{\bf Z})^{\bf N} \rightarrow {\bf C}}$ defined by the formula

$\displaystyle f( (x_j)_{j=1}^\infty ) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i x_j}.$

In fact, the pullback ${\pi^* f}$ is the ultralimit of the ${f_n}$. As such, for any standard exponent ${1 \leq q < \infty}$, the normalised ${\ell^q}$ norm

$\displaystyle (\frac{1}{p_n} \sum_{x \in {\bf Z}/p_n{\bf Z}} |f_n(x)|^q)^{1/q}$

can be seen to converge to the limit

$\displaystyle (\int_{({\bf R}/{\bf Z})^{\bf N}} |f(x)|^q\ d\mu_G(x))^{1/q}.$
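For ${q=2}$ this convergence reduces to Parseval's identity and can be checked numerically after truncating the series at ${J}$ terms: as long as the frequencies are distinct, the normalised ${\ell^2}$ norm squared is exactly ${\sum_{j=1}^J 4^{-j}}$, which tends to ${1/3}$ as ${J \rightarrow \infty}$. A Python sketch (parameters are illustrative):

```python
import cmath, random

rng = random.Random(2)
p, J = 409, 8                       # prime modulus and truncation point (illustrative)
xi = rng.sample(range(p), J)        # J distinct frequencies

def f_n(x):
    return sum(2.0 ** -j * cmath.exp(2 * cmath.pi * 1j * xi[j - 1] * x / p)
               for j in range(1, J + 1))

# Parseval: with distinct frequencies, the normalised l^2 norm squared
# equals sum_{j=1}^J 4^{-j} exactly
l2_sq = sum(abs(f_n(x)) ** 2 for x in range(p)) / p
print(l2_sq, (1 - 4.0 ** -J) / 3)
```

The two printed numbers agree to floating-point accuracy, matching the claimed equality of the normalised ${\ell^2}$ norm of ${f_n}$ and the ${L^2(G,\mu_G)}$ norm of the (truncated) limit ${f}$.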

The reader is invited to consider combinations of the above examples, e.g. random subsets of Bohr sets, to get a sense of the general case of Theorem 1.

It is likely that this theorem can be extended to the noncommutative setting, using the noncommutative Freiman theorem of Emmanuel Breuillard, Ben Green, and myself, but I have not attempted to do so here (see though this recent preprint of Anush Tserunyan for some related explorations); in a separate direction, there should be extensions that can control higher Gowers norms, in the spirit of the work of Szegedy.

Note: the arguments below will presume some familiarity with additive combinatorics and with nonstandard analysis, and will be a little sketchy in places.

There are a number of ways to construct the real numbers ${{\bf R}}$, for instance

• as the metric completion of ${{\bf Q}}$ (thus, ${{\bf R}}$ is defined as the set of Cauchy sequences of rationals, modulo Cauchy equivalence);
• as the space of Dedekind cuts on the rationals ${{\bf Q}}$;
• as the space of quasimorphisms ${\phi: {\bf Z} \rightarrow {\bf Z}}$ on the integers, quotiented by bounded functions. (I believe this construction first appears in this paper of Street, who credits the idea to Schanuel, though the germ of this construction arguably goes all the way back to Eudoxus.)

There is also a fourth family of constructions that proceeds via nonstandard analysis, as a special case of what is known as the nonstandard hull construction. (Here I will assume some basic familiarity with nonstandard analysis and ultraproducts, as covered for instance in this previous blog post.) Given an unbounded nonstandard natural number ${N \in {}^* {\bf N} \backslash {\bf N}}$, one can define two external additive subgroups of the nonstandard integers ${{}^* {\bf Z}}$:

• The group ${O(N) := \{ n \in {}^* {\bf Z}: |n| \leq CN \hbox{ for some } C \in {\bf N} \}}$ of all nonstandard integers of magnitude less than or comparable to ${N}$; and
• The group ${o(N) := \{ n \in {}^* {\bf Z}: |n| \leq C^{-1} N \hbox{ for all } C \in {\bf N} \}}$ of nonstandard integers of magnitude infinitesimally smaller than ${N}$.

The group ${o(N)}$ is a subgroup of ${O(N)}$, so we may form the quotient group ${O(N)/o(N)}$. This space is isomorphic to the reals ${{\bf R}}$, and can in fact be used to construct the reals:

Proposition 1 For any coset ${n + o(N)}$ of ${O(N)/o(N)}$, there is a unique real number ${\hbox{st} \frac{n}{N}}$ with the property that ${\frac{n}{N} = \hbox{st} \frac{n}{N} + o(1)}$. The map ${n + o(N) \mapsto \hbox{st} \frac{n}{N}}$ is then an isomorphism between the additive groups ${O(N)/o(N)}$ and ${{\bf R}}$.

Proof: Uniqueness is clear. For existence, observe that the set ${\{ x \in {\bf R}: Nx \leq n + o(N) \}}$ is a Dedekind cut, and its supremum can be verified to have the required properties for ${\hbox{st} \frac{n}{N}}$. $\Box$

In a similar vein, we can view the unit interval ${[0,1]}$ in the reals as the quotient

$\displaystyle [0,1] \equiv [N] / o(N) \ \ \ \ \ (1)$

where ${[N]}$ is the nonstandard (i.e. internal) set ${\{ n \in {\bf N}: n \leq N \}}$; of course, ${[N]}$ is not a group, so one should interpret ${[N]/o(N)}$ as the image of ${[N]}$ under the quotient map ${{}^* {\bf Z} \rightarrow {}^* {\bf Z} / o(N)}$ (or ${O(N) \rightarrow O(N)/o(N)}$, if one prefers). Or to put it another way, (1) asserts that ${[0,1]}$ is the image of ${[N]}$ with respect to the map ${\pi: n \mapsto \hbox{st} \frac{n}{N}}$.

In this post I would like to record a nice measure-theoretic version of the equivalence (1), which essentially appears already in standard texts on Loeb measure (see e.g. this text of Cutland). To describe the results, we must first quickly recall the construction of Loeb measure on ${[N]}$. Given an internal subset ${A}$ of ${[N]}$, we may define the elementary measure ${\mu_0(A)}$ of ${A}$ by the formula

$\displaystyle \mu_0(A) := \hbox{st} \frac{|A|}{N}.$

This is a finitely additive probability measure on the Boolean algebra of internal subsets of ${[N]}$. We can then construct the Loeb outer measure ${\mu^*(A)}$ of any subset ${A \subset [N]}$ in complete analogy with Lebesgue outer measure by the formula

$\displaystyle \mu^*(A) := \inf \sum_{n=1}^\infty \mu_0(A_n)$

where ${(A_n)_{n=1}^\infty}$ ranges over all sequences of internal subsets of ${[N]}$ that cover ${A}$. We say that a subset ${A}$ of ${[N]}$ is Loeb measurable if, for any (standard) ${\epsilon>0}$, one can find an internal subset ${B}$ of ${[N]}$ which differs from ${A}$ by a set of Loeb outer measure at most ${\epsilon}$, and in that case we define the Loeb measure ${\mu(A)}$ of ${A}$ to be ${\mu^*(A)}$. It is a routine matter to show (e.g. using the Carathéodory extension theorem) that the space ${{\mathcal L}}$ of Loeb measurable sets is a ${\sigma}$-algebra, and that ${\mu}$ is a countably additive probability measure on this space that extends the elementary measure ${\mu_0}$. Thus ${[N]}$ now has the structure of a probability space ${([N], {\mathcal L}, \mu)}$.

Now, the group ${o(N)}$ acts (Loeb-almost everywhere) on the probability space ${[N]}$ by the addition map, thus ${T^h n := n+h}$ for ${n \in [N]}$ and ${h \in o(N)}$ (excluding a set of Loeb measure zero where ${n+h}$ exits ${[N]}$). This action is clearly seen to be measure-preserving. As such, we can form the invariant factor ${Z^0_{o(N)}([N]) = ([N], {\mathcal L}^{o(N)}, \mu\downharpoonright_{{\mathcal L}^{o(N)}})}$, defined by restricting attention to those Loeb measurable sets ${A \subset [N]}$ with the property that ${T^h A}$ is equal ${\mu}$-almost everywhere to ${A}$ for each ${h \in o(N)}$.

The claim is then that this invariant factor is equivalent (up to almost everywhere equivalence) to the unit interval ${[0,1]}$ with Lebesgue measure ${m}$ (and the trivial action of ${o(N)}$), by the same factor map ${\pi: n \mapsto \hbox{st} \frac{n}{N}}$ used in (1). More precisely:

Theorem 2 Given a set ${A \in {\mathcal L}^{o(N)}}$, there exists a Lebesgue measurable set ${B \subset [0,1]}$, unique up to ${m}$-a.e. equivalence, such that ${A}$ is ${\mu}$-a.e. equivalent to the set ${\pi^{-1}(B) := \{ n \in [N]: \hbox{st} \frac{n}{N} \in B \}}$. Conversely, if ${B \subset [0,1]}$ is Lebesgue measurable, then ${\pi^{-1}(B)}$ is in ${{\mathcal L}^{o(N)}}$, and ${\mu( \pi^{-1}(B) ) = m( B )}$.

This theorem can thus be viewed as the measure-theoretic analogue

$\displaystyle [0,1] \equiv Z^0_{o(N)}( [N] )$

of (1).

Proof: We first prove the converse. It is clear that ${\pi^{-1}(B)}$ is ${o(N)}$-invariant, so it suffices to show that ${\pi^{-1}(B)}$ is Loeb measurable with Loeb measure ${m(B)}$. This is easily verified when ${B}$ is an elementary set (a finite union of intervals). By countable subadditivity of outer measure, this implies that the Loeb outer measure of ${\pi^{-1}(E)}$ is bounded by the Lebesgue outer measure of ${E}$ for any set ${E \subset [0,1]}$; since every Lebesgue measurable set differs from an elementary set by a set of arbitrarily small Lebesgue outer measure, the claim follows.

Now we establish the forward claim. Uniqueness is clear from the converse claim, so it suffices to show existence. Let ${A \in {\mathcal L}^{o(N)}}$, and let ${\epsilon>0}$ be an arbitrary standard real number; then we can find an internal set ${A_\epsilon \subset [N]}$ which differs from ${A}$ by a set of Loeb measure at most ${\epsilon}$. As ${A}$ is ${o(N)}$-invariant, we conclude that for every ${h \in o(N)}$, ${A_\epsilon}$ and ${T^h A_\epsilon}$ differ by a set of Loeb measure (and hence elementary measure) at most ${2\epsilon}$. By the (contrapositive of the) underspill principle, there must exist a standard ${\delta>0}$ such that ${A_\epsilon}$ and ${T^h A_\epsilon}$ differ by a set of elementary measure at most ${2\epsilon}$ for all ${|h| \leq \delta N}$. If we then define the nonstandard function ${f: [N] \rightarrow {}^* {\bf R}}$ by the formula

$\displaystyle f(n) := \frac{1}{\delta N} \sum_{m \in [N]: m \leq n \leq m+\delta N} 1_{A_\epsilon}(m),$

then from the (nonstandard) triangle inequality we have

$\displaystyle \frac{1}{N} \sum_{n \in [N]} |f(n) - 1_{A_\epsilon}(n)| \leq 3\epsilon$

(say). On the other hand, ${f}$ has the Lipschitz continuity property

$\displaystyle |f(n)-f(m)| \leq \frac{2|n-m|}{\delta N}$

and so in particular we see that

$\displaystyle \hbox{st} f(n) = \tilde f( \hbox{st} \frac{n}{N} )$

for some Lipschitz continuous function ${\tilde f: [0,1] \rightarrow [0,1]}$. If we then let ${E_\epsilon}$ be the set where ${\tilde f \geq 1 - \sqrt{\epsilon}}$, one can check that ${A_\epsilon}$ differs from ${\pi^{-1}(E_\epsilon)}$ by a set of Loeb outer measure ${O(\sqrt{\epsilon})}$, and hence ${A}$ does so also. Sending ${\epsilon}$ to zero, we see (from the converse claim) that ${1_{E_\epsilon}}$ is a Cauchy sequence in ${L^1}$ and thus converges in ${L^1}$ to ${1_E}$ for some Lebesgue measurable set ${E}$. The sets ${A_\epsilon}$ then converge in Loeb outer measure to ${\pi^{-1}(E)}$, giving the claim. $\Box$

Thanks to the Lebesgue differentiation theorem, the conditional expectation ${{\bf E}( f | Z^0_{o(N)}([N]))}$ of a bounded Loeb-measurable function ${f: [N] \rightarrow {\bf R}}$ can be expressed (as a function on ${[0,1]}$, defined ${m}$-a.e.) as

$\displaystyle {\bf E}( f | Z^0_{o(N)}([N]))(x) := \lim_{\epsilon \rightarrow 0} \frac{1}{2\epsilon} \int_{[(x-\epsilon) N,(x+\epsilon) N]} f\ d\mu.$

By the abstract ergodic theorem from the previous post, one can also view this conditional expectation as the element in the closed convex hull of the shifts ${T^h f}$, ${h \in o(N)}$, of minimal ${L^2}$ norm. In particular, we obtain a form of the von Neumann ergodic theorem in this context: the averages ${\frac{1}{H} \sum_{h=1}^H T^h f}$ for ${H = o(N)}$ converge (as a net, rather than a sequence) in ${L^2}$ to ${{\bf E}( f | Z^0_{o(N)}([N]))}$.

If ${f: [N] \rightarrow [-1,1]}$ is (the standard part of) an internal function, that is to say the ultralimit of a sequence ${f_n: [N_n] \rightarrow [-1,1]}$ of finitary bounded functions, one can view the measurable function ${F := {\bf E}( f | Z^0_{o(N)}([N]))}$ as a limit of the ${f_n}$ that is analogous to the “graphons” that emerge as limits of graphs (see e.g. the recent text of Lovasz on graph limits). Indeed, the measurable function ${F: [0,1] \rightarrow [-1,1]}$ is related to the discrete functions ${f_n: [N_n] \rightarrow [-1,1]}$ by the formula

$\displaystyle \int_a^b F(x)\ dx = \hbox{st} \lim_{n \rightarrow p} \frac{1}{N_n} \sum_{a N_n \leq m \leq b N_n} f_n(m)$

for all ${0 \leq a < b \leq 1}$, where ${p}$ is the nonprincipal ultrafilter used to define the nonstandard universe. In particular, from the Arzela-Ascoli diagonalisation argument there is a subsequence ${n_j}$ such that

$\displaystyle \int_a^b F(x)\ dx = \lim_{j \rightarrow \infty} \frac{1}{N_{n_j}} \sum_{a N_{n_j} \leq m \leq b N_{n_j}} f_{n_j}(m),$

thus ${F}$ is the asymptotic density function of the ${f_n}$. For instance, if ${f_n}$ is the indicator function of a randomly chosen subset of ${[N_n]}$, then the asymptotic density function would equal ${1/2}$ (almost everywhere, at least).
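This closing example is easy to simulate. In the hedged sketch below (function names invented for illustration), the partial sums over ${[aN, bN]}$ of a random ${0/1}$ sequence concentrate around ${(b-a)/2}$, which is exactly the statement that ${\int_a^b F = (b-a)/2}$, i.e. ${F = 1/2}$ a.e.:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def partial_density(bits, a, b):
    """(1/N) * sum over aN <= m <= bN of f(m), with f given as a 0/1 list."""
    N = len(bits)
    return sum(bits[int(a * N):int(b * N)]) / N

N = 10**5
bits = [random.randint(0, 1) for _ in range(N)]  # indicator of a random subset of [N]

print(partial_density(bits, 0.2, 0.7))
# Concentrates near (0.7 - 0.2) * 1/2 = 0.25, reflecting that the asymptotic
# density function F of a random subset equals 1/2 almost everywhere.
```

By a standard concentration argument the deviation from ${(b-a)/2}$ is ${O(N^{-1/2})}$ with high probability, so the "almost everywhere, at least" caveat in the text only concerns the measure-zero ambiguity in ${F}$ itself.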

I’m continuing to look into understanding the ergodic theory of ${o(N)}$ actions, as I believe this may allow one to apply ergodic theory methods to the “single-scale” or “non-asymptotic” setting (in which one averages only over scales comparable to a large parameter ${N}$, rather than the traditional asymptotic approach of letting the scale go to infinity). I’m planning some further posts in this direction, though this is still a work in progress.

(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. There are basically two approaches to applying results and ideas from continuous mathematics in discrete settings. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which can then be used to deduce quantitative (though often ineffective) consequences for the original discrete objects. The latter approach is the focus of this talk.

The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

| (Discrete) | (Continuous) | (Limit method) |
| --- | --- | --- |
| Ramsey theory | Topological dynamics | Compactness |
| Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle |
| Graph/hypergraph regularity | Measure theory | Graph limits |
| Polynomial regularity | Linear algebra | Ultralimits |
| Structural decompositions | Hilbert space geometry | Ultralimits |
| Fourier analysis | Spectral theory | Direct and inverse limits |
| Quantitative algebraic geometry | Algebraic geometry | Schemes |
| Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits |
| Approximate group theory | Topological group theory | Model theory |

As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

• Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects ${x_n}$ in a common space ${X}$, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object ${\lim_{n \rightarrow \infty} x_n}$, which remains in the same space, and is “close” to many of the original objects ${x_n}$ with respect to the given metric or topology.
• Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects ${x_n}$ in a category ${X}$, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit ${\varinjlim x_n}$ or the inverse limit ${\varprojlim x_n}$ of these objects, which is another object in the same category ${X}$, and is connected to the original objects ${x_n}$ by various morphisms.
• Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects ${x_{\bf n}}$ or of spaces ${X_{\bf n}}$, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, ${X_{\bf n}}$ might be groups and ${x_{\bf n}}$ might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$ or a new space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$, which is still a model of the same language (e.g. if the spaces ${X_{\bf n}}$ were all groups, then the limiting space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space will also be true for many of the original objects or spaces, and conversely. (For instance, if ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ is an abelian group, then the ${X_{\bf n}}$ will also be abelian groups for many ${{\bf n}}$.)

The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects ${x_{\bf n}}$ to all lie in a common space ${X}$ in order to form an ultralimit ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; they are permitted to lie in different spaces ${X_{\bf n}}$; this is more natural in many discrete contexts, e.g. when considering graphs on ${{\bf n}}$ vertices in the limit when ${{\bf n}}$ goes to infinity. Also, no convergence properties on the ${x_{\bf n}}$ are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces ${X_{\bf n}}$ involved are required in order to construct the ultraproduct.

With so few requirements on the objects ${x_{\bf n}}$ or spaces ${X_{\bf n}}$, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two properties which make it particularly useful for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the ${x_{\bf n}}$ will be exactly obeyed by the limit object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is closely analogous to compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses. This is particularly useful for upgrading qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus significantly reducing the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”).
To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.
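To make the role of the ultrafilter in Łoś’s theorem concrete, here is a hedged toy computation (all function names invented here): the first-order sentence “there exists ${x \neq 0}$ with ${x + x = 0}$” holds in ${{\bf Z}/n{\bf Z}}$ exactly when ${n}$ is even, so whether it holds in the ultraproduct ${\prod_{{\bf n} \rightarrow \alpha} {\bf Z}/{\bf n}{\bf Z}}$ depends on whether the ultrafilter ${\alpha}$ concentrates on the even numbers:

```python
# Toy check of the finitary side of Łoś's theorem (the theorem itself is
# non-constructive): which groups Z/nZ satisfy the first-order sentence
# "exists x != 0 with x + x = 0"?

def has_element_of_order_two(n):
    """True iff Z/nZ contains a nonzero x with x + x = 0 (i.e. iff n is even)."""
    return any(x != 0 and (x + x) % n == 0 for x in range(n))

satisfied = [n for n in range(1, 20) if has_element_of_order_two(n)]
print(satisfied)  # exactly the even n; by Łoś, the ultraproduct satisfies the
                  # sentence iff the set of even numbers lies in the ultrafilter
```

This also illustrates the “asymptotically obeyed” hypothesis: no finite amount of checking determines the limit object by itself; the ultrafilter is what converts “true for many ${\bf n}$” into a definite truth value for the limit.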

Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.

Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.