You are currently browsing the category archive for the ‘math.LO’ category.

In orthodox first-order logic, variables and expressions are only allowed to take one value at a time; a variable {x}, for instance, is not allowed to equal {+3} and {-3} simultaneously. We will call such variables completely specified. If one really wants to deal with multiple values of objects simultaneously, one is encouraged to use the language of set theory and/or logical quantifiers to do so.

However, the ability to allow expressions to become only partially specified is undeniably convenient, and also rather intuitive. A classic example here is that of the quadratic formula:

\displaystyle  \hbox{If } x,a,b,c \in {\bf R} \hbox{ with } a \neq 0, \hbox{ then }

\displaystyle  ax^2+bx+c=0 \hbox{ if and only if } x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}. \ \ \ \ \ (1)

Strictly speaking, the expression {x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}} is not well-formed according to the grammar of first-order logic; one should instead use something like

\displaystyle x = \frac{-b - \sqrt{b^2-4ac}}{2a} \hbox{ or } x = \frac{-b + \sqrt{b^2-4ac}}{2a}

or

\displaystyle x \in \left\{ \frac{-b - \sqrt{b^2-4ac}}{2a}, \frac{-b + \sqrt{b^2-4ac}}{2a} \right\}

or

\displaystyle x = \frac{-b + \epsilon \sqrt{b^2-4ac}}{2a} \hbox{ for some } \epsilon \in \{-1,+1\}

in order to strictly adhere to this grammar. But none of these three reformulations are as compact or as conceptually clear as the original one. In a similar spirit, a mathematical English sentence such as

\displaystyle  \hbox{The sum of two odd numbers is an even number} \ \ \ \ \ (2)

is also not a first-order sentence; one would instead have to write something like

\displaystyle  \hbox{For all odd numbers } x, y, \hbox{ the number } x+y \hbox{ is even} \ \ \ \ \ (3)

or

\displaystyle  \hbox{For all odd numbers } x,y \hbox{ there exists an even number } z \ \ \ \ \ (4)

\displaystyle  \hbox{ such that } x+y=z

instead. These reformulations are not all that hard to decipher, but they do have the aesthetically displeasing effect of cluttering an argument with temporary variables such as {x,y,z} which are used once and then discarded.
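The set-valued reading of (1) is easy to check mechanically; here is a small Python sketch (the function name {quadratic\_roots} and the test polynomial are my choices, purely for illustration):

```python
import math

def quadratic_roots(a, b, c):
    """Return the set of real roots of a*x^2 + b*x + c = 0 (a != 0),
    i.e. the two values packed into the partially specified formula (1)."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return set()
    r = math.sqrt(disc)
    # the two choices of sign hidden in the "±"
    return {(-b - r) / (2 * a), (-b + r) / (2 * a)}

roots = quadratic_roots(1, -1, -6)   # x^2 - x - 6 = (x - 3)(x + 2)
assert roots == {3.0, -2.0}
# every value taken by the partially specified expression solves the equation
assert all(abs(x * x - x - 6) < 1e-9 for x in roots)
```

This is precisely the second reformulation above: the "±" becomes a two-element set of completely specified values.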

Another example of partially specified notation is the innocuous {\ldots} notation. For instance, the assertion

\displaystyle \pi=3.14\ldots,

when written formally using first-order logic, would become something like

\displaystyle \pi = 3 + \frac{1}{10} + \frac{4}{10^2} + \sum_{n=3}^\infty \frac{a_n}{10^n} \hbox{ for some sequence } (a_n)_{n=3}^\infty

\displaystyle  \hbox{ with } a_n \in \{0,1,2,3,4,5,6,7,8,9\} \hbox{ for all } n,

which is not exactly an elegant reformulation. Similarly with statements such as

\displaystyle \tan x = x + \frac{x^3}{3} + \ldots \hbox{ for } |x| < \pi/2

or

\displaystyle \tan x = x + \frac{x^3}{3} + O(|x|^5) \hbox{ for } |x| < \pi/2.

Below the fold I’ll try to assign a formal meaning to partially specified expressions such as (1), for instance allowing one to condense (2), (3), (4) to just

\displaystyle  \hbox{odd} + \hbox{odd} = \hbox{even}.

When combined with another common (but often implicit) extension of first-order logic, namely the ability to reason using ambient parameters, we become able to formally introduce asymptotic notation such as the big-O notation {O()} or the little-o notation {o()}. We will explain how to do this at the end of this post.
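The big-O claim for {\tan x} above can be tested numerically; in the sketch below the constant {C = 0.2} is my choice (the next Taylor term is {2x^5/15 \approx 0.133 x^5}, so this constant suffices on the sampled range):

```python
import math

# The O(|x|^5) claim: there is a constant C with
# |tan x - (x + x^3/3)| <= C |x|^5 on, say, |x| <= 1/2.
C = 0.2
for k in range(1, 501):
    x = k / 1000.0               # sample x in (0, 0.5]
    err = abs(math.tan(x) - (x + x**3 / 3))
    assert err <= C * abs(x) ** 5, x
```

Of course a numerical check on a grid is no proof, but it illustrates what the partially specified expression {O(|x|^5)} is asserting: some completely specified constant exists making the inequality true.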


Rachel Greenfeld and I have just uploaded to the arXiv our preprint “Undecidable translational tilings with only two tiles, or one nonabelian tile“. This paper studies the following question: given a finitely generated group {G}, a (periodic) subset {E} of {G}, and finite sets {F_1,\dots,F_J} in {G}, is it possible to tile {E} by translations {a_j+F_j} of the tiles {F_1,\dots,F_J}? That is to say, is there a solution {\mathrm{X}_1 = A_1, \dots, \mathrm{X}_J = A_J} to the (translational) tiling equation

\displaystyle  (\mathrm{X}_1 \oplus F_1) \uplus \dots \uplus (\mathrm{X}_J \oplus F_J) = E \ \ \ \ \ (1)

for some subsets {A_1,\dots,A_J} of {G}, where {A \oplus F} denotes the set of sums {\{a+f: a \in A, f \in F \}} if the sums {a+f} are all distinct (and is undefined otherwise), and {\uplus} denotes disjoint union. (One can also write the tiling equation in the language of convolutions as {1_{\mathrm{X}_1} * 1_{F_1} + \dots + 1_{\mathrm{X}_J} * 1_{F_J} = 1_E}.)
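In finite toy settings the operations {\oplus} and {\uplus} are easy to implement directly; the following Python sketch (function names {oplus} and {tiles}, and the domino example, are mine) checks a small instance of the tiling equation (1):

```python
from itertools import product

def oplus(A, F):
    """A ⊕ F: all componentwise sums a+f, defined only if the sums are distinct."""
    sums = [tuple(x + y for x, y in zip(a, f)) for a, f in product(A, F)]
    return set(sums) if len(sums) == len(set(sums)) else None

def tiles(As, Fs, E):
    """Check the tiling equation (X_1 ⊕ F_1) ⊎ ... ⊎ (X_J ⊕ F_J) = E."""
    pieces = [oplus(A, F) for A, F in zip(As, Fs)]
    if any(p is None for p in pieces):
        return False                      # some ⊕ was undefined
    union = [s for p in pieces for s in p]
    # ⊎ requires the pieces to be pairwise disjoint and to cover E exactly
    return len(union) == len(set(union)) and set(union) == E

# tile the 4 x 2 block E in Z^2 by two tiles: a horizontal and a vertical domino
E = {(i, j) for i in range(4) for j in range(2)}
F1, F2 = [(0, 0), (1, 0)], [(0, 0), (0, 1)]
A1 = [(0, 0), (2, 0), (0, 1), (2, 1)]     # place four horizontal dominoes
A2 = []                                   # the vertical domino goes unused
assert tiles([A1, A2], [F1, F2], E)
```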

A bit more specifically, the paper studies the decidability of the above question. There are two slightly different types of decidability one could consider here:

  • Logical decidability. For a given {G, E, J, F_1,\dots,F_J}, one can ask whether the solvability of the tiling equation (1) is provable or disprovable in ZFC (where we encode all the data {G, E, F_1,\dots,F_J} by appropriate constructions in ZFC). If this is the case we say that the tiling equation (1) (or more precisely, the solvability of this equation) is logically decidable, otherwise it is logically undecidable.
  • Algorithmic decidability. For data {G,E,J, F_1,\dots,F_J} in some specified class (and encoded somehow as binary strings), one can ask whether the solvability of the tiling equation (1) can be correctly determined for all choices of data in this class by the output of some Turing machine that takes the data as input (encoded as a binary string) and halts in finite time, returning either YES if the equation can be solved or NO otherwise. If this is the case, we say the tiling problem of solving (1) for data in the given class is algorithmically decidable, otherwise it is algorithmically undecidable.

Note that the notion of logical decidability is “pointwise” in the sense that it pertains to a single choice of data {G,E,J,F_1,\dots,F_J}, whereas the notion of algorithmic decidability pertains instead to classes of data, and is only interesting when this class is infinite. Indeed, any tiling problem with a finite class of data is trivially decidable because one could simply code a Turing machine that is basically a lookup table that returns the correct answer for each choice of data in the class. (This is akin to how a student with a good memory could pass any exam if the questions are drawn from a finite list, merely by memorising an answer key for that list of questions.)
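The lookup-table decider in this analogy is literally a dictionary; a minimal (and purely illustrative) sketch, with hypothetical instance encodings:

```python
# A "Turing machine" deciding a finite class of instances is just a lookup
# table: keys encode the data, values the correct YES/NO answer.
answer_key = {
    "instance-1": "YES",
    "instance-2": "NO",
}

def decide(encoded_data):
    return answer_key[encoded_data]   # always halts: one dictionary lookup

assert decide("instance-1") == "YES"
```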

The two notions are related as follows: if a tiling problem (1) is algorithmically undecidable for some class of data, then the tiling equation must be logically undecidable for at least one choice of data in this class. For if this were not the case, one could algorithmically decide the tiling problem by searching for proofs or disproofs that the equation (1) is solvable for a given choice of data; the logical decidability of all such solvability questions would ensure that this algorithm always terminates in finite time.

One can use the Gödel completeness theorem to interpret logical decidability in terms of universes (also known as structures or models) of ZFC. In addition to the “standard” universe {{\mathfrak U}} of sets that we believe satisfies the axioms of ZFC, there are also other “nonstandard” universes {{\mathfrak U}^*} that also obey the axioms of ZFC. If the solvability of a tiling equation (1) is logically undecidable, this means that such a tiling exists in some universes of ZFC, but not in others.

(To continue the exam analogy, we thus see that a yes-no exam question is logically undecidable if the answer to the question is yes in some parallel universes, but not in others. A course syllabus is algorithmically undecidable if there is no way to prepare for the final exam for the course in a way that guarantees a perfect score (in the standard universe).)

Questions of decidability are also related to the notion of aperiodicity. For a given {G, E, J, F_1,\dots,F_J}, a tiling equation (1) is said to be aperiodic if the equation (1) is solvable (in the standard universe {{\mathfrak U}} of ZFC), but none of the solutions (in that universe) are completely periodic (i.e., there are no solutions {\mathrm{X}_1 = A_1,\dots, \mathrm{X}_J = A_J} where all of the {A_1,\dots,A_J} are periodic). Perhaps the most well-known example of an aperiodic tiling (in the context of {{\bf R}^2}, and using rotations as well as translations) comes from the Penrose tilings, but there are many others besides.

It was (essentially) observed by Hao Wang in the 1960s that if a tiling equation is logically undecidable, then it must necessarily be aperiodic. Indeed, if a tiling equation fails to be aperiodic, then (in the standard universe) either there is a periodic tiling, or there are no tilings whatsoever. In the former case, the periodic tiling can be used to give a finite proof that the tiling equation is solvable; in the latter case, the compactness theorem implies that there is some finite fragment of {E} that is not compatible with being tiled by {F_1,\dots,F_J}, and this provides a finite proof that the tiling equation is unsolvable. Thus in either case the tiling equation is logically decidable.

This observation of Wang clarifies somewhat how logically undecidable tiling equations behave in the various universes of ZFC. In the standard universe, tilings exist, but none of them will be periodic. In nonstandard universes, tilings may or may not exist, and the tilings that do exist may be periodic (albeit with a nonstandard period); but there must be at least one universe in which no tiling exists at all.

In one dimension when {G={\bf Z}} (or more generally {G = {\bf Z} \times G_0} with {G_0} a finite group), a simple pigeonholing argument shows that no tiling equations are aperiodic, and hence all tiling equations are decidable. However the situation changes in two dimensions. In 1966, Berger (a student of Wang) famously showed that there exist tiling equations (1) in the discrete plane {E = G = {\bf Z}^2} that are aperiodic, or even logically undecidable; in fact he showed that the tiling problem in this case (with arbitrary choices of data {J, F_1,\dots,F_J}) was algorithmically undecidable. (Strictly speaking, Berger established this for a variant of the tiling problem known as the domino problem, but later work of Golomb showed that the domino problem could be easily encoded within the tiling problem.) This was accomplished by encoding the halting problem for Turing machines into the tiling problem (or domino problem); the former is well known to be algorithmically undecidable (and thus to have logically undecidable instances), and so the latter is also. However, the number of tiles {J} required for Berger’s construction was quite large: his construction of an aperiodic tiling required {J = 20426} tiles, and his construction of a logically undecidable tiling required an even larger (and not explicitly specified) collection of tiles. Subsequent work by many authors did reduce the number of tiles required; in the {E=G={\bf Z}^2} setting, the current world record for the smallest number of tiles in an aperiodic tiling is {J=8} (due to Ammann, Grunbaum, and Shephard) and for a logically undecidable tiling is {J=11} (due to Ollinger). On the other hand, it is conjectured (see Grunbaum-Shephard and Lagarias-Wang) that one cannot lower {J} all the way to {1}:

Conjecture 1 (Periodic tiling conjecture) If {E} is a periodic subset of a finitely generated abelian group {G}, and {F} is a finite subset of {G}, then the tiling equation {\mathrm{X} \oplus F = E} is not aperiodic.

This conjecture is known to be true in two dimensions (by work of Bhattacharya when {G=E={\bf Z}^2}, and more recently by us when {E \subset G = {\bf Z}^2}), but remains open in higher dimensions. By the preceding discussion, the conjecture implies that every tiling equation with a single tile is logically decidable, and the problem of whether a given periodic set can be tiled by a single tile is algorithmically decidable.

In this paper we show on the other hand that aperiodic and undecidable tilings exist when {J=2}, at least if one is permitted to enlarge the group {G} a bit:

Theorem 2 (Logically undecidable tilings)
  • (i) There exists a group {G} of the form {G = {\bf Z}^2 \times G_0} for some finite abelian {G_0}, a subset {E_0} of {G_0}, and finite sets {F_1, F_2 \subset G} such that the tiling equation {(\mathbf{X}_1 \oplus F_1) \uplus (\mathbf{X}_2 \oplus F_2) = {\bf Z}^2 \times E_0} is logically undecidable (and hence also aperiodic).
  • (ii) There exists a dimension {d}, a periodic subset {E} of {{\bf Z}^d}, and finite sets {F_1, F_2 \subset {\bf Z}^d} such that the tiling equation {(\mathbf{X}_1 \oplus F_1) \uplus (\mathbf{X}_2 \oplus F_2) = E} is logically undecidable (and hence also aperiodic).
  • (iii) There exists a non-abelian finite group {G_0} (with the group law still written additively), a subset {E_0} of {G_0}, and a finite set {F \subset {\bf Z}^2 \times G_0} such that the nonabelian tiling equation {\mathbf{X} \oplus F = {\bf Z}^2 \times E_0} is logically undecidable (and hence also aperiodic).

We also have algorithmic versions of this theorem. For instance, the algorithmic version of (i) is that the problem of determining solvability of the tiling equation {(\mathbf{X}_1 \oplus F_1) \uplus (\mathbf{X}_2 \oplus F_2) = {\bf Z}^2 \times E_0} for a given choice of finite abelian group {G_0}, subset {E_0} of {G_0}, and finite sets {F_1, F_2 \subset {\bf Z}^2 \times G_0} is algorithmically undecidable. Similarly for (ii), (iii).

This result (together with a negative result discussed below) suggests to us that there is a significant qualitative difference between the {J=1} theory of tiling by a single (abelian) tile, and the {J \geq 2} theory of tiling with multiple tiles (or one non-abelian tile). (The positive results on the periodic tiling conjecture certainly rely heavily on the fact that there is only one tile; in particular there is a “dilation lemma”, only available in this setting, that is of key importance in the two-dimensional theory.) It would be nice to eliminate the group {G_0} from (i) (or to set {d=2} in (ii)), but I think this would require a fairly significant modification of our methods.

Like many other undecidability results, the proof of Theorem 2 proceeds by a sequence of reductions, in which the undecidability of one problem is shown to follow from the undecidability of another, more “expressive” problem that can be encoded inside the original problem, until one reaches a problem that is so expressive that it encodes a problem already known to be undecidable. Indeed, all three undecidability results are ultimately obtained from Berger’s undecidability result on the domino problem.

The first step in increasing expressiveness is to observe that the undecidability of a single tiling equation follows from the undecidability of a system of tiling equations. More precisely, suppose we have non-empty finite subsets {F_j^{(m)}} of a finitely generated group {G} for {j=1,\dots,J} and {m=1,\dots,M}, as well as periodic sets {E^{(m)}} of {G} for {m=1,\dots,M}, such that it is logically undecidable whether the system of tiling equations

\displaystyle  (\mathrm{X}_1 \oplus F_1^{(m)}) \uplus \dots \uplus (\mathrm{X}_J \oplus F_J^{(m)}) = E^{(m)} \ \ \ \ \ (2)

for {m=1,\dots,M} has a solution {\mathrm{X}_1 = A_1,\dots, \mathrm{X}_J = A_J} in {G}. Then, for any {N>M}, we can “stack” these equations into a single tiling equation in the larger group {G \times {\bf Z}/N{\bf Z}}, specifically the equation

\displaystyle  (\mathrm{X}_1 \oplus F_1) \uplus \dots \uplus (\mathrm{X}_J \oplus F_J) = E \ \ \ \ \ (3)

where

\displaystyle  F_j := \biguplus_{m=1}^M F_j^{(m)} \times \{m\}

and

\displaystyle  E := \biguplus_{m=1}^M E^{(m)} \times \{m\}.

It is a routine exercise to check that the system of equations (2) admits a solution in {G} if and only if the single equation (3) admits a solution in {G \times {\bf Z}/N{\bf Z}}. Thus, to prove the undecidability of a single equation of the form (3) it suffices to establish undecidability of a system of the form (2); note how the freedom to enlarge the group (here by the auxiliary factor {{\bf Z}/N{\bf Z}}) is important here.
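The stacking construction can be illustrated on a tiny finite model; in the sketch below I take {G = {\bf Z}/4{\bf Z}} (a finite stand-in for a finitely generated group), a system of {M=2} equations with {J=1}, and {N=3} (all choices mine):

```python
from itertools import product

L, N = 4, 3          # ambient group Z_L x Z_N, with N = 3 > M = 2 equations

def add(g, h):
    return ((g[0] + h[0]) % L, (g[1] + h[1]) % N)

def oplus(A, F):
    sums = [add(a, f) for a, f in product(A, F)]
    return set(sums) if len(sums) == len(set(sums)) else None

# a toy system (2) over Z_4 (J = 1): X ⊕ {0,1} = Z_4  and  X ⊕ {0} = {0,2}
A = {0, 2}                                   # a common solution
# the stacked data of (3): F := ⊎_m F^(m) × {m},  E := ⊎_m E^(m) × {m}
F = {(0, 1), (1, 1)} | {(0, 2)}
E = {(x, 1) for x in range(4)} | {(0, 2), (2, 2)}
# the lifted solution X = A × {0} solves the single stacked equation (3)
assert oplus({(a, 0) for a in A}, F) == E
```

Each value {m} of the second coordinate records which equation of the system a given piece of the tile enforces, which is exactly the point of the construction.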

We view systems of the form (2) as belonging to a kind of “language” in which each equation in the system is a “sentence” in the language imposing additional constraints on a tiling. One can now pick and choose various sentences in this language to try to encode various interesting problems. For instance, one can encode the concept of a function {f: {\bf Z}^2 \rightarrow G_0} taking values in a finite group {G_0} as a single tiling equation

\displaystyle  \mathrm{X} \oplus (\{0\} \times G_0) = {\bf Z}^2 \times G_0 \ \ \ \ \ (4)

since the solutions to this equation are precisely the graphs

\displaystyle  \mathrm{X} = \{ (n, f(n)): n \in {\bf Z}^2 \}

of a function {f: {\bf Z}^2 \rightarrow G_0}. By adding more tiling equations to this equation to form a larger system, we can start imposing additional constraints on this function {f}. For instance, if {x+H} is a coset of some subgroup {H} of {G_0}, we can impose the additional equation

\displaystyle  \mathrm{X} \oplus (\{0\} \times H) = {\bf Z}^2 \times (x+H) \ \ \ \ \ (5)

to impose the additional constraint that {f(n) \in x+H} for all {n \in {\bf Z}^2}, if we desire. If {G_0} happens to contain two distinct elements {1, -1}, and {h \in {\bf Z}^2}, then the additional equation

\displaystyle  \mathrm{X} \oplus (\{0,h\} \times \{0\}) = {\bf Z}^2 \times \{-1,1\} \ \ \ \ \ (6)

imposes the additional constraints that {f(n) \in \{-1,1\}} for all {n \in {\bf Z}^2}, and additionally that

\displaystyle  f(n+h) = -f(n)

for all {n \in {\bf Z}^2}.
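These encodings can be checked by brute force on a finite model; the following sketch replaces {{\bf Z}^2} by {({\bf Z}/4{\bf Z})^2}, takes {G_0 = {\bf Z}/3{\bf Z}} with {1} and {-1} the elements {1,2}, and uses a parity-based function {f} (all choices mine):

```python
from itertools import product

L = 4                          # finite model (Z_L)^2 standing in for Z^2
G0 = (0, 1, 2)                 # G0 = Z_3; the elements "1" and "-1" are 1 and 2

def add(p, q):                 # addition in (Z_L)^2 x Z_3
    return ((p[0] + q[0]) % L, (p[1] + q[1]) % L, (p[2] + q[2]) % 3)

def oplus(A, F):
    sums = [add(a, f) for a, f in product(A, F)]
    return set(sums) if len(sums) == len(set(sums)) else None

# the graph of f(n) = ±1 according to the parity of n1 + n2
f = lambda n1, n2: 1 if (n1 + n2) % 2 == 0 else 2
X = {(n1, n2, f(n1, n2)) for n1 in range(L) for n2 in range(L)}

# (4): X ⊕ ({0} × G0) = Z^2 × G0 holds exactly because X is a graph
assert oplus(X, {(0, 0, g) for g in G0}) == set(product(range(L), range(L), G0))

# (6) with h = (1,0): X ⊕ ({0,h} × {0}) = Z^2 × {-1,1}, which forces
# f(n) ∈ {±1} and f(n + h) = -f(n); our parity-based f obeys both
h = (1, 0, 0)
assert oplus(X, {(0, 0, 0), h}) == set(product(range(L), range(L), (1, 2)))
```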

This begins to resemble the equations that come up in the domino problem. Here one has a finite set of Wang tiles – unit squares {T} where each of the four sides is colored with a color {c_N(T), c_S(T), c_E(T), c_W(T)} (corresponding to the four cardinal directions North, South, East, and West) from some finite set {{\mathcal C}} of colors. The domino problem is then to tile the plane with copies of these tiles in such a way that adjacent sides match. In terms of equations, one is seeking to find functions {c_N, c_S, c_E, c_W: {\bf Z}^2 \rightarrow {\mathcal C}} obeying the pointwise constraint

\displaystyle  (c_N(n), c_S(n), c_E(n), c_W(n)) \in {\mathcal W} \ \ \ \ \ (7)

for all {n \in {\bf Z}^2} where {{\mathcal W}} is the set of colors associated to the set of Wang tiles being used, and the matching constraints

\displaystyle  c_S(n+(0,1)) = c_N(n); \quad c_W(n+(1,0)) = c_E(n) \ \ \ \ \ (8)

for all {n \in {\bf Z}^2}. As it turns out, the pointwise constraint (7) can be encoded by tiling equations that are fancier versions of (4), (5), (6) that involve only one unknown tiling set {{\mathrm X}}, but in order to encode the matching constraints (8) we were forced to introduce a second tile (or work with nonabelian tiling equations). This appears to be an inherent feature of the method, since we found a partial rigidity result for tilings by a single tile in one dimension that obstructs this encoding strategy from working when one only has one tile available. The result is as follows:

Proposition 3 (Swapping property) Consider the solutions to a tiling equation

\displaystyle  \mathrm{X} \oplus F = E \ \ \ \ \ (9)

in a one-dimensional group {G = {\bf Z} \times G_0} (with {G_0} a finite abelian group, {F} finite, and {E} periodic). Suppose there are two solutions {\mathrm{X} = A_0, \mathrm{X} = A_1} to this equation that agree on the left in the sense that

\displaystyle A_0 \cap (\{0, -1, -2, \dots\} \times G_0) = A_1 \cap (\{0, -1, -2, \dots\} \times G_0).

For any function {\omega: {\bf Z} \rightarrow \{0,1\}}, define the “swap” {A_\omega} of {A_0} and {A_1} to be the set

\displaystyle  A_\omega := \{ (n, g): n \in {\bf Z}, (n,g) \in A_{\omega(n)} \}

Then {A_\omega} also solves the equation (9).

One can think of {A_0} and {A_1} as “genes” with “nucleotides” {\{ g \in G_0: (n,g) \in A_0\}}, {\{ g \in G_0: (n,g) \in A_1\}} at each position {n \in {\bf Z}}, and {A_\omega} is a new gene formed by choosing one of the nucleotides from the “parent” genes {A_0}, {A_1} at each position. The above proposition then says that the solutions to the equation (9) must be closed under “genetic transfer” among any pair of genes that agree on the left. This seems to present an obstruction to trying to encode equations such as

\displaystyle  c(n+1) = c'(n)

for two functions {c, c': {\bf Z} \rightarrow \{-1,1\}} (say), which is a toy version of the matching constraint (8), since the class of solutions to this equation turns out not to obey this swapping property. On the other hand, it is easy to encode such equations using two tiles instead of one, and an elaboration of this construction is used to prove our main theorem.
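The swapping property can be observed concretely in a toy model: below, a finite cyclic window stands in for {{\bf Z}}, with {G_0 = {\bf Z}/2{\bf Z}} and a single vertical tile, so that solutions are exactly graphs of functions (all choices mine, and in this finite model "agree on the left" is only mimicked on half the window):

```python
from itertools import product

L = 8                               # finite window Z_L standing in for Z
G0 = (0, 1)                         # G0 = Z_2

def add(p, q):
    return ((p[0] + q[0]) % L, (p[1] + q[1]) % 2)

def oplus(A, F):
    sums = [add(a, f) for a, f in product(A, F)]
    return set(sums) if len(sums) == len(set(sums)) else None

F = {(0, 0), (0, 1)}                # one tile: a full vertical fibre
E = set(product(range(L), G0))

# two solutions of X ⊕ F = E agreeing "on the left" (positions n <= L//2)
A0 = {(n, 0) for n in range(L)}
A1 = {(n, 0) if n <= L // 2 else (n, 1) for n in range(L)}
assert oplus(A0, F) == E and oplus(A1, F) == E

# an arbitrary swap ω: take the "nucleotide" of A0 or A1 at each position n
omega = [0, 1, 1, 0, 1, 0, 0, 1]
A_omega = {(n, g) for (n, g) in A0 | A1 if (n, g) in (A0, A1)[omega[n]]}
assert oplus(A_omega, F) == E       # the swap still solves the equation
```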

Abdul Basit, Artem Chernikov, Sergei Starchenko, Chieu-Minh Tran and I have uploaded to the arXiv our paper Zarankiewicz’s problem for semilinear hypergraphs. This paper is in the spirit of a number of results in extremal graph theory in which the bounds for various graph-theoretic problems or results can be greatly improved if one makes some additional hypotheses regarding the structure of the graph, for instance by requiring that the graph be “definable” with respect to some theory with good model-theoretic properties.

A basic motivating example is the question of counting the number of incidences between points and lines (or between points and other geometric objects). Suppose one has {n} points and {n} lines in a space. How many incidences can there be between these points and lines? The utterly trivial bound is {n^2}, but by using the basic fact that two points determine a line (or two lines intersect in at most one point), a simple application of Cauchy-Schwarz improves this bound to {n^{3/2}}. In graph theoretic terms, the point is that the bipartite incidence graph between points and lines does not contain a copy of {K_{2,2}} (there do not exist two points and two lines that are all incident to each other). Without any further hypotheses, this bound is basically sharp: consider for instance the collection of {p^2} points and {p^2+p} lines in a finite plane {{\bf F}_p^2}, which has {p^3+p^2} incidences (one can make the situation more symmetric by working with a projective plane rather than an affine plane). If however one considers lines in the real plane {{\bf R}^2}, the famous Szemerédi-Trotter theorem improves the incidence bound further from {n^{3/2}} to {O(n^{4/3})}. Thus the incidence graph between real points and lines contains more structure than merely the absence of {K_{2,2}}.
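The finite-field example can be verified directly; a short sketch (the choice {p=5} and the line encoding are mine):

```python
p = 5
points = [(x, y) for x in range(p) for y in range(p)]           # p^2 points
# all lines in the affine plane F_p^2: y = a x + b, plus the verticals x = c
lines = [("slope", a, b) for a in range(p) for b in range(p)] \
      + [("vert", c, 0) for c in range(p)]                      # p^2 + p lines

def incident(pt, ln):
    x, y = pt
    kind, a, b = ln
    return y == (a * x + b) % p if kind == "slope" else x == a

count = sum(incident(pt, ln) for pt in points for ln in lines)
assert count == p**3 + p**2      # each line carries exactly p points
```

With {n} comparable to {p^2}, this gives {\sim n^{3/2}} incidences, matching the Cauchy-Schwarz bound up to constants.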

More generally, bounding the size of bipartite graphs (or multipartite hypergraphs) not containing a copy of some complete bipartite subgraph {K_{k,k}} (or {K_{k,\dots,k}} in the hypergraph case) is known as Zarankiewicz’s problem. We have results for all {k} and all orders of hypergraph, but for sake of this post I will focus on the bipartite {k=2} case.

In our paper we improve the {n^{3/2}} bound to a near-linear bound in the case that the incidence graph is “semilinear”. A model case occurs when one considers incidences between points and axis-parallel rectangles in the plane. Now the {K_{2,2}} condition is not automatic (it is of course possible for two distinct points to both lie in two distinct rectangles), so we impose this condition by fiat:

Theorem 1 Suppose one has {n} points and {n} axis-parallel rectangles in the plane, whose incidence graph contains no {K_{2,2}}‘s, for some large {n}.
  • (i) The total number of incidences is {O(n \log^4 n)}.
  • (ii) If all the rectangles are dyadic, the bound can be improved to {O( n \frac{\log n}{\log\log n} )}.
  • (iii) The bound in (ii) is best possible (up to the choice of implied constant).

We don’t know whether the bound in (i) is similarly tight for non-dyadic boxes; the usual tricks for reducing the non-dyadic case to the dyadic case strangely fail to apply here. One can generalise to higher dimensions, replacing rectangles by polytopes with faces in some fixed finite set of orientations, at the cost of adding several more logarithmic factors; also, one can replace the reals by other ordered division rings, and replace polytopes by other sets of bounded “semilinear descriptive complexity”, e.g., unions of boundedly many polytopes, or which are cut out by boundedly many functions that enjoy coordinatewise monotonicity properties. For certain specific graphs we can remove the logarithmic factors entirely. We refer to the preprint for precise details.

The proof techniques are combinatorial. The proof of (i) relies primarily on the order structure of {{\bf R}} to implement a “divide and conquer” strategy in which one can efficiently control incidences between {n} points and rectangles by incidences between approximately {n/2} points and boxes. For (ii) there is additional order-theoretic structure one can work with: first there is an easy pruning device to reduce to the case when no rectangle is completely contained inside another, and then one can impose the “tile partial order” in which one dyadic rectangle {I \times J} is less than another {I' \times J'} if {I \subset I'} and {J' \subset J}. The point is that this order is “locally linear” in the sense that for any two dyadic rectangles {R_-, R_+}, the set {[R_-,R_+] := \{ R: R_- \leq R \leq R_+\}} is linearly ordered, and this can be exploited by elementary double counting arguments to obtain a bound which eventually becomes {O( n \frac{\log n}{\log\log n})} after optimising certain parameters in the argument. The proof also suggests how to construct the counterexample in (iii), which is achieved by an elementary iterative construction.
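The "local linearity" of the tile partial order can be brute-forced on a small example. In the sketch below, restricting to dyadic rectangles of a fixed area stands in for the pruning step (after pruning, no rectangle is contained inside another, and incomparable area classes are exactly what the containment argument excludes); all parameters are mine:

```python
from itertools import product

K = 3
# dyadic rectangles I x J inside [0, 2^K)^2 of fixed area 2^K: an antichain
# under set inclusion, mimicking the pruned configuration
def intervals(i):                     # dyadic intervals of length 2^i
    return [(a * 2**i, (a + 1) * 2**i) for a in range(2**(K - i))]

rects = [(I, J) for i in range(K + 1)
         for I in intervals(i) for J in intervals(K - i)]

def subset(I, J):
    return J[0] <= I[0] and I[1] <= J[1]

def tile_le(R, S):                    # I x J <= I' x J'  iff  I ⊆ I' and J' ⊆ J
    return subset(R[0], S[0]) and subset(S[1], R[1])

# local linearity: every order interval [R-, R+] is totally ordered
for Rm, Rp in product(rects, rects):
    if tile_le(Rm, Rp):
        between = [R for R in rects if tile_le(Rm, R) and tile_le(R, Rp)]
        assert all(tile_le(R, S) or tile_le(S, R)
                   for R, S in product(between, between))
```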

Asgar Jamneshan and I have just uploaded to the arXiv our paper “An uncountable Moore-Schmidt theorem“. This paper revisits a classical theorem of Moore and Schmidt in measurable cohomology of measure-preserving systems. To state the theorem, let {X = (X,{\mathcal X},\mu)} be a probability space, and {\mathrm{Aut}(X, {\mathcal X}, \mu)} be the group of measure-preserving automorphisms of this space, that is to say the invertible bimeasurable maps {T: X \rightarrow X} that preserve the measure {\mu}: {T_* \mu = \mu}. To avoid some ambiguity later in this post when we introduce abstract analogues of measure theory, we will refer to measurable maps as concrete measurable maps, and measurable spaces as concrete measurable spaces. (One could also call {X = (X,{\mathcal X}, \mu)} a concrete probability space, but we will not need to do so here as we will not be working explicitly with abstract probability spaces.)

Let {\Gamma = (\Gamma,\cdot)} be a discrete group. A (concrete) measure-preserving action of {\Gamma} on {X} is a group homomorphism {\gamma \mapsto T^\gamma} from {\Gamma} to {\mathrm{Aut}(X, {\mathcal X}, \mu)}, thus {T^1} is the identity map and {T^{\gamma_1} \circ T^{\gamma_2} = T^{\gamma_1 \gamma_2}} for all {\gamma_1,\gamma_2 \in \Gamma}. A large portion of ergodic theory is concerned with the study of such measure-preserving actions, especially in the classical case when {\Gamma} is the integers (with the additive group law).

Let {K = (K,+)} be a compact Hausdorff abelian group, which we can endow with the Borel {\sigma}-algebra {{\mathcal B}(K)}. A (concrete measurable) {K}-valued cocycle is a collection {\rho = (\rho_\gamma)_{\gamma \in \Gamma}} of concrete measurable maps {\rho_\gamma: X \rightarrow K} obeying the cocycle equation

\displaystyle \rho_{\gamma_1 \gamma_2}(x) = \rho_{\gamma_1} \circ T^{\gamma_2}(x) + \rho_{\gamma_2}(x) \ \ \ \ \ (1)

for {\mu}-almost every {x \in X}. (Here we are glossing over a measure-theoretic subtlety that we will return to later in this post – see if you can spot it before then!) Cocycles arise naturally in the theory of group extensions of dynamical systems; in particular (and ignoring the aforementioned subtlety), each cocycle induces a measure-preserving action {\gamma \mapsto S^\gamma} on {X \times K} (which we endow with the product of {\mu} with Haar probability measure on {K}), defined by

\displaystyle S^\gamma( x, k ) := (T^\gamma x, k + \rho_\gamma(x) ).

This connection with group extensions was the original motivation for our study of measurable cohomology, but is not the focus of the current paper.
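The cocycle equation (1) and the action property of the skew product {S^\gamma} can be checked in a finite toy model: take {X = {\bf Z}/6{\bf Z}} with the shift, {\Gamma = {\bf Z}}, {K = {\bf Z}/4{\bf Z}}, and a cocycle generated by an arbitrary function {\sigma} (all choices mine):

```python
N, M = 6, 4                          # X = Z_N with shift T x = x + 1; K = Z_M

def sigma(x):                        # an arbitrary generating function X -> K
    return (x * x + 1) % M

def rho(n, x):                       # cocycle over Z: Birkhoff sum of sigma
    return sum(sigma((x + j) % N) for j in range(n)) % M

# the cocycle equation (1): rho_{n+m}(x) = rho_n(T^m x) + rho_m(x)
for n in range(5):
    for m in range(5):
        for x in range(N):
            assert rho(n + m, x) == (rho(n, (x + m) % N) + rho(m, x)) % M

def S(n, x, k):                      # the induced skew-product action on X x K
    return ((x + n) % N, (k + rho(n, x)) % M)

# S is indeed an action: S^{n+m} = S^n ∘ S^m
for n in range(5):
    for m in range(5):
        for x in range(N):
            for k in range(M):
                assert S(n + m, x, k) == S(n, *S(m, x, k))
```

(Here only nonnegative {n} are tested, which suffices to illustrate the identity; negative times would be handled by the usual inverse-orbit sum.)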

A special case of a {K}-valued cocycle is a (concrete measurable) {K}-valued coboundary, in which {\rho_\gamma} for each {\gamma \in \Gamma} takes the special form

\displaystyle \rho_\gamma(x) = F \circ T^\gamma(x) - F(x)

for {\mu}-almost every {x \in X}, where {F: X \rightarrow K} is some measurable function; note that (ignoring the aforementioned subtlety), every function of this form is automatically a concrete measurable {K}-valued cocycle. One of the first basic questions in measurable cohomology is to try to characterize which {K}-valued cocycles are in fact {K}-valued coboundaries. This is a difficult question in general. However, there is a general result of Moore and Schmidt that at least allows one to reduce to the model case when {K} is the unit circle {\mathbf{T} = {\bf R}/{\bf Z}}, by taking advantage of the Pontryagin dual group {\hat K} of characters {\hat k: K \rightarrow \mathbf{T}}, that is to say the collection of continuous homomorphisms {\hat k: k \mapsto \langle \hat k, k \rangle} to the unit circle. More precisely, we have

Theorem 1 (Countable Moore-Schmidt theorem) Let {\Gamma} be a discrete group acting in a concrete measure-preserving fashion on a probability space {X}. Let {K} be a compact Hausdorff abelian group. Assume the following additional hypotheses:
  • (i) {\Gamma} is at most countable.
  • (ii) {X} is a standard Borel space.
  • (iii) {K} is metrisable.

Then a {K}-valued concrete measurable cocycle {\rho = (\rho_\gamma)_{\gamma \in \Gamma}} is a concrete coboundary if and only if for each character {\hat k \in \hat K}, the {\mathbf{T}}-valued cocycles {\langle \hat k, \rho \rangle = ( \langle \hat k, \rho_\gamma \rangle )_{\gamma \in \Gamma}} are concrete coboundaries.

The hypotheses (i), (ii), (iii) assert, roughly speaking, that the data {\Gamma, X, K} are not too “large”; in all three cases they say that the data are only “countably complicated”. For instance, (iii) is equivalent to {K} being second countable, and (ii) is equivalent to {X} being modeled by a complete separable metric space. It is because of this restriction that we refer to this result as a “countable” Moore-Schmidt theorem. This theorem is a useful tool in several other applications, such as the Host-Kra structure theorem for ergodic systems; I hope to return to these subsequent applications in a future post.

Let us very briefly sketch the main ideas of the proof of Theorem 1. Ignore for now issues of measurability, and pretend that something that holds almost everywhere in fact holds everywhere. The hard direction is to show that if each {\langle \hat k, \rho \rangle} is a coboundary, then so is {\rho}. By hypothesis, we then have an equation of the form

\displaystyle \langle \hat k, \rho_\gamma(x) \rangle = \alpha_{\hat k} \circ T^\gamma(x) - \alpha_{\hat k}(x) \ \ \ \ \ (2)

for all {\hat k, \gamma, x} and some functions {\alpha_{\hat k}: X \rightarrow {\mathbf T}}, and our task is then to produce a function {F: X \rightarrow K} for which

\displaystyle \rho_\gamma(x) = F \circ T^\gamma(x) - F(x)

for all {\gamma,x}.

Comparing the two equations, the task would be easy if we could find an {F: X \rightarrow K} for which

\displaystyle \langle \hat k, F(x) \rangle = \alpha_{\hat k}(x) \ \ \ \ \ (3)

for all {\hat k, x}. However there is an obstruction to this: the left-hand side of (3) is additive in {\hat k}, so the right-hand side would have to be also in order to obtain such a representation. In other words, for this strategy to work, one would have to first establish the identity

\displaystyle \alpha_{\hat k_1 + \hat k_2}(x) - \alpha_{\hat k_1}(x) - \alpha_{\hat k_2}(x) = 0 \ \ \ \ \ (4)

for all {\hat k_1, \hat k_2, x}. On the other hand, the good news is that if we somehow manage to obtain the equation, then we can obtain a function {F} obeying (3), thanks to Pontryagin duality, which gives a one-to-one correspondence between {K} and the homomorphisms of the (discrete) group {\hat K} to {\mathbf{T}}.

Now, it turns out that one cannot derive the equation (4) directly from the given information (2). However, the left-hand side of (2) is additive in {\hat k}, so the right-hand side must be also. Manipulating this fact, we eventually arrive at

\displaystyle (\alpha_{\hat k_1 + \hat k_2} - \alpha_{\hat k_1} - \alpha_{\hat k_2}) \circ T^\gamma(x) = (\alpha_{\hat k_1 + \hat k_2} - \alpha_{\hat k_1} - \alpha_{\hat k_2})(x).

In other words, we don’t get to show that the left-hand side of (4) vanishes, but we do at least get to show that it is {\Gamma}-invariant. Now let us assume for sake of argument that the action of {\Gamma} is ergodic, which (ignoring issues about sets of measure zero) basically asserts that the only {\Gamma}-invariant functions are constant. So now we get a weaker version of (4), namely

\displaystyle \alpha_{\hat k_1 + \hat k_2}(x) - \alpha_{\hat k_1}(x) - \alpha_{\hat k_2}(x) = c_{\hat k_1, \hat k_2} \ \ \ \ \ (5)

for some constants {c_{\hat k_1, \hat k_2} \in \mathbf{T}}.

Now we need to eliminate the constants. This can be done by the following group-theoretic projection. Let {L^0({\bf X} \rightarrow {\bf T})} denote the space of concrete measurable maps {\alpha} from {{\bf X}} to {{\bf T}}, up to almost everywhere equivalence; this is an abelian group in which the various terms in (5) naturally live. Inside this group we have the subgroup {{\bf T}} of constant functions (up to almost everywhere equivalence); this is where the right-hand side of (5) lives. Because {{\bf T}} is a divisible group, an application of Zorn’s lemma (a good exercise for those who are not acquainted with these things) shows that there exists a retraction {w: L^0({\bf X} \rightarrow {\bf T}) \rightarrow {\bf T}}, that is to say a group homomorphism that is the identity on the subgroup {{\bf T}}. We can use this retraction, or more precisely the complementary projection {\alpha \mapsto \alpha - w(\alpha)}, to eliminate the constant in (5). Indeed, if we set

\displaystyle \tilde \alpha_{\hat k}(x) := \alpha_{\hat k}(x) - w(\alpha_{\hat k})

then from (5) we see that

\displaystyle \tilde \alpha_{\hat k_1 + \hat k_2}(x) - \tilde \alpha_{\hat k_1}(x) - \tilde \alpha_{\hat k_2}(x) = 0

while from (2) one has

\displaystyle \langle \hat k, \rho_\gamma(x) \rangle = \tilde \alpha_{\hat k} \circ T^\gamma(x) - \tilde \alpha_{\hat k}(x)

and now the previous strategy works with {\alpha_{\hat k}} replaced by {\tilde \alpha_{\hat k}}. This concludes the sketch of proof of Theorem 1.
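In a finite toy setting, where all measurability issues disappear, one can verify the (easy) converse direction of this argument numerically: if {\rho} is a coboundary, then each {\langle \hat k, \rho \rangle} is one as well, with {\alpha_{\hat k} = \langle \hat k, F \rangle} witnessing equation (2). A minimal Python sketch, with entirely illustrative choices of {X}, {T} and {K}:

```python
from fractions import Fraction

# Toy setting: X = Z/M with the shift T(x) = x+1 generating the Gamma-action,
# and K = Z/N with pairing <k_hat, k> = k_hat*k/N in T = R/Z.
M, N = 10, 12

def pairing(k_hat, k):
    return Fraction(k_hat * k, N) % 1

# Pick an arbitrary F: X -> K and form the coboundary rho_gamma = F o T^gamma - F.
F = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]

def rho(gamma, x):
    return (F[(x + gamma) % M] - F[x]) % N

# Then alpha_{k_hat} := <k_hat, F> satisfies equation (2):
#   <k_hat, rho_gamma(x)> = alpha_{k_hat}(T^gamma x) - alpha_{k_hat}(x)  in T.
for k_hat in range(N):
    alpha = [pairing(k_hat, F[x]) for x in range(M)]
    for gamma in range(M):
        for x in range(M):
            lhs = pairing(k_hat, rho(gamma, x))
            rhs = (alpha[(x + gamma) % M] - alpha[x]) % 1
            assert lhs == rhs
print("each <k_hat, rho> is a coboundary")
```

The hard direction of the theorem is precisely the reverse implication: reassembling a single {F} from the family {\alpha_{\hat k}}, which is where the additivity issue (4) enters.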

In making the above argument rigorous, the hypotheses (i)-(iii) are used in several places. For instance, to reduce to the ergodic case one relies on the ergodic decomposition, which requires the hypothesis (ii). Also, most of the above equations only hold outside of a set of measure zero, and hypotheses (i) and (iii) (the latter being equivalent to {\hat K} being at most countable) are needed to avoid the problem that an uncountable union of sets of measure zero could have positive measure (or fail to be measurable at all).

My co-author Asgar Jamneshan and I are working on a long-term project to extend many results in ergodic theory (such as the aforementioned Host-Kra structure theorem) to “uncountable” settings in which hypotheses analogous to (i)-(iii) are omitted; thus we wish to consider actions of uncountable groups, on spaces that are not standard Borel, and cocycles taking values in groups that are not metrisable. Such uncountable contexts naturally arise when trying to apply ergodic theory techniques to combinatorial problems (such as the inverse conjecture for the Gowers norms), as one often relies on the ultraproduct construction (or something similar) to generate an ergodic theory translation of these problems, and these constructions usually give “uncountable” objects rather than “countable” ones. (For instance, the ultraproduct of finite groups is a hyperfinite group, which is usually uncountable.) This paper marks the first step in this project by extending the Moore-Schmidt theorem to the uncountable setting.

If one simply drops the hypotheses (i)-(iii) and tries to prove the Moore-Schmidt theorem, several serious difficulties arise. We have already mentioned the loss of the ergodic decomposition and the possibility that one has to control an uncountable union of null sets. But there is in fact a more basic problem when one deletes (iii): the addition operation {+: K \times K \rightarrow K}, while still continuous, can fail to be measurable as a map from {(K \times K, {\mathcal B}(K) \otimes {\mathcal B}(K))} to {(K, {\mathcal B}(K))}! Thus for instance the sum of two measurable functions {F: X \rightarrow K} need not remain measurable, which makes even the very definition of a measurable cocycle or measurable coboundary problematic (or at least unnatural). This phenomenon is known as the Nedoma pathology. A standard example arises when {K} is the uncountable torus {{\mathbf T}^{{\bf R}}}, endowed with the product topology. Crucially, the Borel {\sigma}-algebra {{\mathcal B}(K)} generated by this uncountable product is not the product {{\mathcal B}(\mathbf{T})^{\otimes {\bf R}}} of the factor Borel {\sigma}-algebras (the discrepancy ultimately arises from the fact that topologies permit uncountable unions, but {\sigma}-algebras do not); relating to this, the product {\sigma}-algebra {{\mathcal B}(K) \otimes {\mathcal B}(K)} is not the same as the Borel {\sigma}-algebra {{\mathcal B}(K \times K)}, but is instead a strict sub-algebra. If the group operations on {K} were measurable, then the diagonal set

\displaystyle K^\Delta := \{ (k,k') \in K \times K: k = k' \} = \{ (k,k') \in K \times K: k - k' = 0 \}

would be measurable in {{\mathcal B}(K) \otimes {\mathcal B}(K)}. But it is an easy exercise in manipulation of {\sigma}-algebras to show that if {(X, {\mathcal X}), (Y, {\mathcal Y})} are any two measurable spaces and {E \subset X \times Y} is measurable in {{\mathcal X} \otimes {\mathcal Y}}, then the fibres {E_x := \{ y \in Y: (x,y) \in E \}} of {E} are contained in some countably generated subalgebra of {{\mathcal Y}}. Thus if {K^\Delta} were {{\mathcal B}(K) \otimes {\mathcal B}(K)}-measurable, then all the points of {K} would lie in a single countably generated {\sigma}-algebra. But the cardinality of such an algebra is at most {2^{\aleph_0}} while the cardinality of {K} is {2^{2^{\aleph_0}}}, and Cantor’s theorem then gives a contradiction.

To resolve this problem, we give {K} a coarser {\sigma}-algebra than the Borel {\sigma}-algebra, namely the Baire {\sigma}-algebra {{\mathcal B}^\otimes(K)}, thus coarsening the measurable space structure on {K = (K,{\mathcal B}(K))} to a new measurable space {K_\otimes := (K, {\mathcal B}^\otimes(K))}. In the case of compact Hausdorff abelian groups, {{\mathcal B}^{\otimes}(K)} can be defined as the {\sigma}-algebra generated by the characters {\hat k: K \rightarrow {\mathbf T}}; for more general compact abelian groups, one can define {{\mathcal B}^{\otimes}(K)} as the {\sigma}-algebra generated by all continuous maps into metric spaces. This {\sigma}-algebra is equal to {{\mathcal B}(K)} when {K} is metrisable but can be smaller for other {K}. With this measurable structure, {K_\otimes} becomes a measurable group; it seems that once one leaves the metrisable world, {K_\otimes} is a superior (or at least equally good) space to work with than {K} for analysis, as it avoids the Nedoma pathology. (For instance, from Plancherel’s theorem, we see that if {m_K} is the Haar probability measure on {K}, then {L^2(K,m_K) = L^2(K_\otimes,m_K)} (thus, every {K}-measurable set is equivalent modulo {m_K}-null sets to a {K_\otimes}-measurable set), so there is no damage to Plancherel caused by passing to the Baire {\sigma}-algebra.)

Passing to the Baire {\sigma}-algebra {K_\otimes} fixes the most severe problems with an uncountable Moore-Schmidt theorem, but one is still faced with an issue of having to potentially take an uncountable union of null sets. To avoid this sort of problem, we pass to the framework of abstract measure theory, in which we remove explicit mention of “points” and can easily delete all null sets at a very early stage of the formalism. In this setup, the category of concrete measurable spaces is replaced with the larger category of abstract measurable spaces, which we formally define as the opposite category of the category of {\sigma}-algebras (with Boolean algebra homomorphisms). Thus, we define an abstract measurable space to be an object of the form {{\mathcal X}^{\mathrm{op}}}, where {{\mathcal X}} is an (abstract) {\sigma}-algebra and {\mathrm{op}} is a formal placeholder symbol that signifies use of the opposite category, and an abstract measurable map {T: {\mathcal X}^{\mathrm{op}} \rightarrow {\mathcal Y}^{\mathrm{op}}} is an object of the form {(T^*)^{\mathrm{op}}}, where {T^*: {\mathcal Y} \rightarrow {\mathcal X}} is a Boolean algebra homomorphism and {\mathrm{op}} is again used as a formal placeholder; we call {T^*} the pullback map associated to {T}.  [UPDATE: It turns out that this definition of a measurable map led to technical issues.  In a forthcoming revision of the paper we also impose the requirement that the abstract measurable map be \sigma-complete (i.e., it respects countable joins).] The composition {S \circ T: {\mathcal X}^{\mathrm{op}} \rightarrow {\mathcal Z}^{\mathrm{op}}} of two abstract measurable maps {T: {\mathcal X}^{\mathrm{op}} \rightarrow {\mathcal Y}^{\mathrm{op}}}, {S: {\mathcal Y}^{\mathrm{op}} \rightarrow {\mathcal Z}^{\mathrm{op}}} is defined by the formula {S \circ T := (T^* \circ S^*)^{\mathrm{op}}}, or equivalently {(S \circ T)^* = T^* \circ S^*}.
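The opposite-category bookkeeping can be sketched in a few lines of Python; this is only an informal illustration (the class and variable names are my own, not from the paper), but it makes visible how an abstract measurable map is "stored" as its pullback homomorphism, and how composition of abstract maps reverses the order of the pullbacks:

```python
# An abstract measurable map T: X^op -> Y^op is stored as its pullback
# T*: Y -> X, and composition reverses order: (S o T)* = T* o S*.
# Sigma-algebras are modeled crudely here as collections of frozensets.

class AbstractMap:
    def __init__(self, pullback):
        self.pullback = pullback  # a function from events of Y to events of X

    def __matmul__(self, other):  # self o other, i.e. apply `other` first
        return AbstractMap(lambda E: other.pullback(self.pullback(E)))

# Concrete example: X = {0,...,5}, Y = Z = {0,1}, T(x) = x % 2, S(y) = 1 - y.
T = AbstractMap(lambda E: frozenset(x for x in range(6) if x % 2 in E))
S = AbstractMap(lambda E: frozenset(y for y in range(2) if 1 - y in E))

# (S o T)*({1}) should be T*(S*({1})) = T*({0}) = the even numbers.
composed = (S @ T).pullback(frozenset({1}))
assert composed == frozenset({0, 2, 4})
print(sorted(composed))
```

Note that nothing in this sketch requires the pullbacks to come from actual point maps; that is exactly the extra freedom the abstract category provides.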

Every concrete measurable space {X = (X,{\mathcal X})} can be identified with an abstract counterpart {{\mathcal X}^{\mathrm{op}}}, and similarly every concrete measurable map {T: X \rightarrow Y} can be identified with an abstract counterpart {(T^*)^{\mathrm{op}}}, where {T^*: {\mathcal Y} \rightarrow {\mathcal X}} is the pullback map {T^* E := T^{-1}(E)}. Thus the category of concrete measurable spaces can be viewed as a subcategory of the category of abstract measurable spaces. The advantage of working in the abstract setting is that it gives us access to spaces that cannot be directly defined in the concrete setting. Most importantly for us, we have a new abstract space, the opposite measure algebra {X_\mu} of {X}, defined as {({\mathcal X}/{\mathcal N})^{\mathrm{op}}} where {{\mathcal N}} is the ideal of null sets in {{\mathcal X}}. Informally, {X_\mu} is the space {X} with all the null sets removed; there is a canonical abstract embedding map {\iota: X_\mu \rightarrow X}, which allows one to convert any concrete measurable map {f: X \rightarrow Y} into an abstract one {[f]: X_\mu \rightarrow Y}. One can then define the notion of an abstract action, abstract cocycle, and abstract coboundary by replacing every occurrence of the category of concrete measurable spaces with their abstract counterparts, and replacing {X} with the opposite measure algebra {X_\mu}; see the paper for details. Our main theorem is then

Theorem 2 (Uncountable Moore-Schmidt theorem) Let {\Gamma} be a discrete group acting abstractly on a {\sigma}-finite measure space {X}. Let {K} be a compact Hausdorff abelian group. Then a {K_\otimes}-valued abstract measurable cocycle {\rho = (\rho_\gamma)_{\gamma \in \Gamma}} is an abstract coboundary if and only if for each character {\hat k \in \hat K}, the {\mathbf{T}}-valued cocycles {\langle \hat k, \rho \rangle = ( \langle \hat k, \rho_\gamma \rangle )_{\gamma \in \Gamma}} are abstract coboundaries.

With the abstract formalism, the proof of the uncountable Moore-Schmidt theorem is almost identical to the countable one (in fact we were able to make some simplifications, such as avoiding the use of the ergodic decomposition). A key tool is what we call a “conditional Pontryagin duality” theorem, which asserts that if one has an abstract measurable map {\alpha_{\hat k}: X_\mu \rightarrow {\bf T}} for each {\hat k \in \hat K} obeying the identity { \alpha_{\hat k_1 + \hat k_2} - \alpha_{\hat k_1} - \alpha_{\hat k_2} = 0} for all {\hat k_1,\hat k_2 \in \hat K}, then there is an abstract measurable map {F: X_\mu \rightarrow K_\otimes} such that {\alpha_{\hat k} = \langle \hat k, F \rangle} for all {\hat k \in \hat K}. This is derived from the usual Pontryagin duality and some other tools, most notably the completeness of the {\sigma}-algebra of {X_\mu}, and the Sikorski extension theorem.

We feel that it is natural to stay within the abstract measure theory formalism whenever dealing with uncountable situations. However, it is still an interesting question as to when one can guarantee that the abstract objects constructed in this formalism are representable by concrete analogues. The basic questions in this regard are:

  • (i) Suppose one has an abstract measurable map {f: X_\mu \rightarrow Y} into a concrete measurable space. Does there exist a representation of {f} by a concrete measurable map {\tilde f: X \rightarrow Y}? Is it unique up to almost everywhere equivalence?
  • (ii) Suppose one has a concrete cocycle that is an abstract coboundary. When can it be represented by a concrete coboundary?

For (i) the answer is somewhat interesting (as I learned after posing this MathOverflow question):

  • If the measurable structure on {Y} does not separate points, or if {Y} is not compact metrisable or Polish, there can be counterexamples to uniqueness. If {Y} is not compact or Polish, there can be counterexamples to existence.
  • If {Y} is a compact metric space or a Polish space, then one always has existence and uniqueness.
  • If {Y} is a compact Hausdorff abelian group, one always has existence.
  • If {X} is a complete measure space, then one always has existence (from a theorem of Maharam).
  • If {X} is the unit interval with the Borel {\sigma}-algebra and Lebesgue measure, then one has existence for all compact Hausdorff {Y} assuming the continuum hypothesis (from a theorem of von Neumann) but existence can fail under other extensions of ZFC (from a theorem of Shelah, using the method of forcing).
  • For more general {X}, existence for all compact Hausdorff {Y} is equivalent to the existence of a lifting from the {\sigma}-algebra {\mathcal{X}/\mathcal{N}} to {\mathcal{X}} (or, in the language of abstract measurable spaces, the existence of an abstract retraction from {X} to {X_\mu}).
  • It is a long-standing open question (posed for instance by Fremlin) whether it is relatively consistent with ZFC that existence holds whenever {Y} is compact Hausdorff.

Our understanding of (ii) is much less complete:

  • If {K} is metrisable, the answer is “always” (which among other things establishes the countable Moore-Schmidt theorem as a corollary of the uncountable one).
  • If {\Gamma} is at most countable and {X} is a complete measure space, then the answer is again “always”.

In view of the answers to (i), I would not be surprised if the full answer to (ii) was also sensitive to axioms of set theory. However, such set theoretic issues seem to be almost completely avoided if one sticks with the abstract formalism throughout; they only arise when trying to pass back and forth between the abstract and concrete categories.

As readers who have followed my previous post will know, I have been spending the last few weeks extending my previous interactive text on propositional logic (entitled “QED”) to also cover first-order logic.  The text has now reached what seems to be a stable form, with a complete set of deductive rules for first-order logic with equality, and no major bugs as far as I can tell (apart from one weird visual bug I can’t eradicate, in that some graphics elements can occasionally temporarily disappear when one clicks on an item).  So it will likely not change much going forward.

I feel though that there could be more that could be done with this sort of framework (e.g., improved GUI, modification to other logics, developing the ability to write one’s own texts and libraries, exploring mathematical theories such as Peano arithmetic, etc.).  But writing this text (particularly the first-order logic sections) has brought me close to the limit of my programming ability, as the number of bugs introduced with each new feature implemented has begun to grow at an alarming rate.  I would like to repackage the code so that it can be re-used by more adept programmers for further possible applications, though I have never done something like this before and would appreciate advice on how to do so.   The code is already available under a Creative Commons licence, but I am not sure how readable and modifiable it will be to others currently.  [Update: it is now on GitHub.]

[One thing I noticed is that I would probably have to make more of a decoupling between the GUI elements, the underlying logical elements, and the interactive text.  For instance, at some point I made the decision (convenient at the time) to use some GUI elements to store some of the state variables of the text, e.g. the exercise buttons are currently storing the status of what exercises are unlocked or not.  This is presumably not an example of good programming practice, though it would be relatively easy to fix.  More seriously, due to my inability to come up with a good general-purpose matching algorithm (or even specification of such an algorithm) for the laws of first-order logic, many of the laws have to be hard-coded into the matching routine, so one cannot currently remove them from the text.  It may well be that the best thing to do in fact is to rework the entire codebase from scratch using more professional software design methods.]

[Update, Aug 23: links moved to GitHub version.]

About six years ago on this blog, I started thinking about trying to make a web-based game based around high-school algebra, and ended up using Scratch to write a short but playable puzzle game in which one solves linear equations for an unknown {x} using a restricted set of moves. (At almost the same time, there were a number of more professionally made games released along similar lines, most notably Dragonbox.)

Since then, I have thought a couple of times about whether there were other parts of mathematics which could be gamified in a similar fashion. Shortly after my first blog posts on this topic, I experimented with a similar gamification of Lewis Carroll’s classic list of logic puzzles, but the results were quite clunky, and I was never satisfied with the outcome.

Over the last few weeks I returned to this topic though, thinking in particular about how to gamify the rules of inference of propositional logic, in a manner that at least vaguely resembles how mathematicians actually go about making logical arguments (e.g., splitting into cases, arguing by contradiction, using previous results as lemmas to help with subsequent ones, and so forth). The rules of inference are a list of a dozen or so deductive rules concerning propositional sentences (things like “({A} AND {B}) OR (NOT {C})”, where {A,B,C} are some formulas). A typical such rule is Modus Ponens: if the sentence {A} is known to be true, and the implication “{A} IMPLIES {B}” is also known to be true, then one can deduce that {B} is also true. Furthermore, in this deductive calculus it is possible to temporarily introduce some unproven statements as an assumption, only to discharge them later. In particular, we have the deduction theorem: if, after making an assumption {A}, one is able to derive the statement {B}, then one can conclude that the implication “{A} IMPLIES {B}” is true without any further assumption.
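The two rules just described can be sketched in a few lines of Python; this is a toy proof-state, not the actual QED code, and all names are illustrative:

```python
# Sentences are modeled as nested tuples; a proof environment is a set of
# sentences currently known to be true.

def implies(a, b):
    return ("IMPLIES", a, b)

def modus_ponens(known, sentence):
    """If `sentence` and IMPLIES(sentence, b) are both known, deduce b."""
    new = set(known)
    for s in known:
        if isinstance(s, tuple) and s[0] == "IMPLIES" and s[1] == sentence:
            new.add(s[2])
    return new

def deduction_theorem(hypotheses, assumption, derive):
    """Assume `assumption` in a sub-environment, run `derive` there, then
    discharge: every derived sentence b yields IMPLIES(assumption, b)."""
    derived = derive(set(hypotheses) | {assumption})
    return set(hypotheses) | {implies(assumption, b) for b in derived}

# Example: from "A IMPLIES B" and "B IMPLIES C", prove "A IMPLIES C".
hyps = {implies("A", "B"), implies("B", "C")}

def derive(env):
    env = modus_ponens(env, "A")  # from A and A->B, get B
    env = modus_ponens(env, "B")  # from B and B->C, get C
    return env

result = deduction_theorem(hyps, "A", derive)
assert implies("A", "C") in result
```

The nested-box metaphor mentioned below corresponds exactly to the sub-environment created inside `deduction_theorem` here: statements proved inside the box only survive outside it in discharged (implication) form.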

It took a while for me to come up with a workable game-like graphical interface for all of this, but I finally managed to set one up, now using Javascript instead of Scratch (which would be hopelessly inadequate for this task); indeed, part of the motivation of this project was to finally learn how to program in Javascript, which turned out to be not as formidable as I had feared (certainly having experience with other C-like languages like C++, Java, or lua, as well as some prior knowledge of HTML, was very helpful). The main code for this project is available here. Using this code, I have created an interactive textbook in the style of a computer game, which I have titled “QED”. This text contains thirty-odd exercises arranged in twelve sections that function as game “levels”, in which one has to use a given set of rules of inference, together with a given set of hypotheses, to reach a desired conclusion. The set of available rules increases as one advances through the text; in particular, each new section gives one or more rules, and additionally each exercise one solves automatically becomes a new deduction rule one can exploit in later levels, much as lemmas and propositions are used in actual mathematics to prove more difficult theorems. The text automatically tries to match available deduction rules to the sentences one clicks on or drags, to try to minimise the amount of manual input one needs to actually make a deduction.

Most of one’s proof activity takes place in a “root environment” of statements that are known to be true (under the given hypothesis), but for more advanced exercises one has to also work in sub-environments in which additional assumptions are made. I found the graphical metaphor of nested boxes to be useful to depict this tree of sub-environments, and it seems to combine well with the drag-and-drop interface.

The text also logs one’s moves in a more traditional proof format, which shows how the mechanics of the game correspond to a traditional mathematical argument. My hope is that this will give students a way to understand the underlying concept of forming a proof in a manner that is more difficult to achieve using traditional, non-interactive textbooks.

I have tried to organise the exercises in a game-like progression in which one first works with easy levels that train the player on a small number of moves, and then introduce more advanced moves one at a time. As such, the order in which the rules of inference are introduced is a little idiosyncratic. The most powerful rule (the law of the excluded middle, which is what separates classical logic from intuitionistic logic) is saved for the final section of the text.

Anyway, I am now satisfied enough with the state of the code and the interactive text that I am willing to make both available (and open source; I selected a CC-BY licence for both), and would be happy to receive feedback on any aspect of either. In principle one could extend the game mechanics to other mathematical topics than the propositional calculus – the rules of inference for first-order logic being an obvious next candidate – but it seems to make sense to focus just on propositional logic for now.

In graph theory, the recently developed theory of graph limits has proven to be a useful tool for analysing large dense graphs, being a convenient reformulation of the Szemerédi regularity lemma. Roughly speaking, the theory asserts that given any sequence {G_n = (V_n, E_n)} of finite graphs, one can extract a subsequence {G_{n_j} = (V_{n_j}, E_{n_j})} which converges (in a specific sense) to a continuous object known as a “graphon” – a symmetric measurable function {p\colon [0,1] \times [0,1] \rightarrow [0,1]}. What “converges” means in this context is that subgraph densities converge to the associated integrals of the graphon {p}. For instance, the edge density

\displaystyle  \frac{1}{|V_{n_j}|^2} |E_{n_j}|

converges to the integral

\displaystyle  \int_0^1 \int_0^1 p(x,y)\ dx dy,

the triangle density

\displaystyle  \frac{1}{|V_{n_j}|^3} \lvert \{ (v_1,v_2,v_3) \in V_{n_j}^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_{n_j} \} \rvert

converges to the integral

\displaystyle  \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ dx_1 dx_2 dx_3,

the four-cycle density

\displaystyle  \frac{1}{|V_{n_j}|^4} \lvert \{ (v_1,v_2,v_3,v_4) \in V_{n_j}^4: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_4\}, \{v_4,v_1\} \in E_{n_j} \} \rvert

converges to the integral

\displaystyle  \int_0^1 \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_4) p(x_4,x_1)\ dx_1 dx_2 dx_3 dx_4,

and so forth. One can use graph limits to prove many results in graph theory that were traditionally proven using the regularity lemma, such as the triangle removal lemma, and can also reduce many asymptotic graph theory problems to continuous problems involving multilinear integrals (although the latter problems are not necessarily easy to solve!). See this text of Lovasz for a detailed study of graph limits and their applications.
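As a concrete sanity check of these convergence statements, here is a short Python sketch (illustrative names only) using the step graphon {p(x,y) = 1} when {x,y} lie in opposite halves of {[0,1]} (and {0} otherwise), whose natural finite approximations are the complete bipartite graphs {K_{m,m}}. For this graphon the edge integral is {1/2} and the triangle integral is {0}, and the finite densities agree exactly:

```python
from itertools import product

def densities(m):
    """Edge and triangle densities of K_{m,m}, over ordered tuples of vertices,
    matching the normalisations used in the displayed formulas above."""
    V = range(2 * m)
    side = lambda v: v // m                  # which half a vertex lies in
    edge = lambda u, v: side(u) != side(v)   # K_{m,m} adjacency
    edge_d = sum(edge(u, v) for u, v in product(V, V)) / (2 * m) ** 2
    tri_d = sum(edge(u, v) and edge(v, w) and edge(w, u)
                for u, v, w in product(V, V, V)) / (2 * m) ** 3
    return edge_d, tri_d

# Graphon integrals: edge integral = 1/2; triangle integral = 0 (bipartite).
edge_d, tri_d = densities(20)
assert edge_d == 0.5 and tri_d == 0.0
print(edge_d, tri_d)
```

Of course, for a general sequence of graphs one only gets convergence of the densities to the integrals, rather than exact agreement; the blow-up construction used here is special in that regard.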

One can also express graph limits (and more generally hypergraph limits) in the language of nonstandard analysis (or of ultraproducts); see for instance this paper of Elek and Szegedy, Section 6 of this previous blog post, or this paper of Towsner. (In this post we assume some familiarity with nonstandard analysis, as reviewed for instance in the previous blog post.) Here, one starts as before with a sequence {G_n = (V_n,E_n)} of finite graphs, and then takes an ultraproduct (with respect to some arbitrarily chosen non-principal ultrafilter {\alpha \in\beta {\bf N} \backslash {\bf N}}) to obtain a nonstandard graph {G_\alpha = (V_\alpha,E_\alpha)}, where {V_\alpha = \prod_{n\rightarrow \alpha} V_n} is the ultraproduct of the {V_n}, and similarly for the {E_\alpha}. The set {E_\alpha} can then be viewed as a symmetric subset of {V_\alpha \times V_\alpha} which is measurable with respect to the Loeb {\sigma}-algebra {{\mathcal L}_{V_\alpha \times V_\alpha}} of the product {V_\alpha \times V_\alpha} (see this previous blog post for the construction of Loeb measure). A crucial point is that this {\sigma}-algebra is larger than the product {{\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha}} of the Loeb {\sigma}-algebra of the individual vertex set {V_\alpha}. This leads to a decomposition

\displaystyle  1_{E_\alpha} = p + e

where the “graphon” {p} is the orthogonal projection of {1_{E_\alpha}} onto {L^2( {\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha} )}, and the “regular error” {e} is orthogonal to all product sets {A \times B} for {A, B \in {\mathcal L}_{V_\alpha}}. The graphon {p\colon V_\alpha \times V_\alpha \rightarrow [0,1]} then captures the statistics of the nonstandard graph {G_\alpha}, in exact analogy with the more traditional graph limits: for instance, the edge density

\displaystyle  \hbox{st} \frac{1}{|V_\alpha|^2} |E_\alpha|

(or equivalently, the limit of the {\frac{1}{|V_n|^2} |E_n|} along the ultrafilter {\alpha}) is equal to the integral

\displaystyle  \int_{V_\alpha} \int_{V_\alpha} p(x,y)\ d\mu_{V_\alpha}(x) d\mu_{V_\alpha}(y)

where {d\mu_V} denotes Loeb measure on a nonstandard finite set {V}; the triangle density

\displaystyle  \hbox{st} \frac{1}{|V_\alpha|^3} \lvert \{ (v_1,v_2,v_3) \in V_\alpha^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_\alpha \} \rvert

(or equivalently, the limit along {\alpha} of the triangle densities of {E_n}) is equal to the integral

\displaystyle  \int_{V_\alpha} \int_{V_\alpha} \int_{V_\alpha} p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ d\mu_{V_\alpha}(x_1) d\mu_{V_\alpha}(x_2) d\mu_{V_\alpha}(x_3),

and so forth. Note that with this construction, the graphon {p} is living on the Cartesian square of an abstract probability space {V_\alpha}, which is likely to be inseparable; but it is possible to cut down the Loeb {\sigma}-algebra on {V_\alpha} to a minimal countably generated {\sigma}-algebra for which {p} remains measurable (up to null sets), and then one can identify {V_\alpha} with {[0,1]}, bringing this construction of a graphon in line with the traditional notion of a graphon. (See Remark 5 of this previous blog post for more discussion of this point.)

Additive combinatorics, which studies things like the additive structure of finite subsets {A} of an abelian group {G = (G,+)}, has many analogies and connections with asymptotic graph theory; in particular, there is the arithmetic regularity lemma of Green which is analogous to the graph regularity lemma of Szemerédi. (There is also a higher order arithmetic regularity lemma analogous to hypergraph regularity lemmas, but this is not the focus of the discussion here.) Given this, it is natural to suspect that there is a theory of “additive limits” for large additive sets of bounded doubling, analogous to the theory of graph limits for large dense graphs. The purpose of this post is to record a candidate for such an additive limit. This limit can be used as a substitute for the arithmetic regularity lemma in certain results in additive combinatorics, at least if one is willing to settle for qualitative results rather than quantitative ones; I give a few examples of this below the fold.

It seems that to allow for the most flexible and powerful manifestation of this theory, it is convenient to use the nonstandard formulation (among other things, it allows for full use of the transfer principle, whereas a more traditional limit formulation would only allow for a transfer of those quantities continuous with respect to the notion of convergence). Here, the analogue of a nonstandard graph is an ultra approximate group {A_\alpha} in a nonstandard group {G_\alpha = \prod_{n \rightarrow \alpha} G_n}, defined as the ultraproduct of finite {K}-approximate groups {A_n \subset G_n} for some standard {K}. (A {K}-approximate group {A_n} is a symmetric set containing the origin such that {A_n+A_n} can be covered by {K} or fewer translates of {A_n}.) We then let {O(A_\alpha)} be the external subgroup of {G_\alpha} generated by {A_\alpha}; equivalently, {O(A_\alpha)} is the union of {A_\alpha^m} over all standard {m}. This space has a Loeb measure {\mu_{O(A_\alpha)}}, defined by setting

\displaystyle \mu_{O(A_\alpha)}(E_\alpha) := \hbox{st} \frac{|E_\alpha|}{|A_\alpha|}

whenever {E_\alpha} is an internal subset of {A_\alpha^m} for any standard {m}, and extended to a countably additive measure; the arguments in Section 6 of this previous blog post can be easily modified to give a construction of this measure.

The Loeb measure {\mu_{O(A_\alpha)}} is a translation invariant measure on {O(A_{\alpha})}, normalised so that {A_\alpha} has Loeb measure one. As such, one should think of {O(A_\alpha)} as being analogous to a locally compact abelian group equipped with a Haar measure. It should be noted though that {O(A_\alpha)} is not actually a locally compact group with Haar measure, for two reasons:

  • There is not an obvious topology on {O(A_\alpha)} that makes it simultaneously locally compact, Hausdorff, and {\sigma}-compact. (One can get one or two out of three without difficulty, though.)
  • The addition operation {+\colon O(A_\alpha) \times O(A_\alpha) \rightarrow O(A_\alpha)} is not measurable from the product Loeb algebra {{\mathcal L}_{O(A_\alpha)} \times {\mathcal L}_{O(A_\alpha)}} to {{\mathcal L}_{O(A_\alpha)}}. Instead, it is measurable from the coarser Loeb algebra {{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}} to {{\mathcal L}_{O(A_\alpha)}} (compare with the analogous situation for nonstandard graphs).

Nevertheless, the analogy is a useful guide for the arguments that follow.

Let {L(O(A_\alpha))} denote the space of bounded Loeb measurable functions {f\colon O(A_\alpha) \rightarrow {\bf C}} (modulo almost everywhere equivalence) that are supported on {A_\alpha^m} for some standard {m}; this is a complex algebra with respect to pointwise multiplication. There is also a convolution operation {\star\colon L(O(A_\alpha)) \times L(O(A_\alpha)) \rightarrow L(O(A_\alpha))}, defined by setting

\displaystyle  \hbox{st} f \star \hbox{st} g(x) := \hbox{st} \frac{1}{|A_\alpha|} \sum_{y \in A_\alpha^m} f(y) g(x-y)

whenever {f\colon A_\alpha^m \rightarrow {}^* {\bf C}}, {g\colon A_\alpha^l \rightarrow {}^* {\bf C}} are bounded nonstandard functions (extended by zero to all of {O(A_\alpha)}), and then extending to arbitrary elements of {L(O(A_\alpha))} by density. Equivalently, {f \star g} is the pushforward of the {{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}}-measurable function {(x,y) \mapsto f(x) g(y)} under the map {(x,y) \mapsto x+y}.
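In the toy case where {A} is the interval {\{-N,\dots,N\}} in {{\bf Z}} (a {2}-approximate group), this normalised convolution can be computed explicitly. The following Python sketch (with illustrative names) checks that {1_A \star 1_A} is the expected normalised “tent” function supported on {A+A}:

```python
from fractions import Fraction

# Toy model: A = {-N,...,N} inside Z, with the normalised convolution
#   (f * g)(x) = (1/|A|) * sum_y f(y) g(x - y),   |A| = 2N+1.
N = 50
A = set(range(-N, N + 1))

def indicator(S):
    return lambda x: 1 if x in S else 0

def convolve(f, g, x):
    # sum over a window large enough to contain the support of f
    return Fraction(sum(f(y) * g(x - y) for y in range(-2 * N, 2 * N + 1)),
                    2 * N + 1)

one_A = indicator(A)

# 1_A * 1_A peaks at 0 with value 1, decays linearly, and is supported on
# A + A = {-2N,...,2N}.
assert convolve(one_A, one_A, 0) == 1
assert convolve(one_A, one_A, N) == Fraction(N + 1, 2 * N + 1)
assert convolve(one_A, one_A, 2 * N + 1) == 0
```

This is the discrete analogue of convolving an interval's indicator with itself on the real line, and it previews the convolution formula (3) below: pushing forward to the model group {G = {\bf R}}, the tent function is exactly the convolution of {1_{[-1,1]}} with itself.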

The basic structural theorem is then as follows.

Theorem 1 (Kronecker factor) Let {A_\alpha} be an ultra approximate group. Then there exists a (standard) locally compact abelian group {G} of the form

\displaystyle  G = {\bf R}^d \times {\bf Z}^m \times T

for some standard {d,m} and some compact abelian group {T}, equipped with a Haar measure {\mu_G} and a measurable homomorphism {\pi\colon O(A_\alpha) \rightarrow G} (using the Loeb {\sigma}-algebra on {O(A_\alpha)} and the Baire {\sigma}-algebra on {G}), with the following properties:

  • (i) {\pi} has dense image, and {\mu_G} is the pushforward of Loeb measure {\mu_{O(A_\alpha)}} by {\pi}.
  • (ii) There exist sets {\{0\} \subset U_0 \subset K_0 \subset G} with {U_0} open and {K_0} compact, such that

    \displaystyle  \pi^{-1}(U_0) \subset 4A_\alpha \subset \pi^{-1}(K_0). \ \ \ \ \ (1)

  • (iii) Whenever {K \subset U \subset G} with {K} compact and {U} open, there exists a nonstandard finite set {B} such that

    \displaystyle  \pi^{-1}(K) \subset B \subset \pi^{-1}(U). \ \ \ \ \ (2)

  • (iv) If {f, g \in L}, then we have the convolution formula

    \displaystyle  f \star g = \pi^*( (\pi_* f) \star (\pi_* g) ) \ \ \ \ \ (3)

    where {\pi_* f,\pi_* g} are the pushforwards of {f,g} to {L^2(G, \mu_G)}, the convolution {\star} on the right-hand side is convolution using {\mu_G}, and {\pi^*} is the pullback map from {L^2(G,\mu_G)} to {L^2(O(A_\alpha), \mu_{O(A_\alpha)})}. In particular, if {\pi_* f = 0}, then {f \star g = 0} for all {g \in L}.

One can view the locally compact abelian group {G} as a “model” or “Kronecker factor” for the ultra approximate group {A_\alpha} (in close analogy with the Kronecker factor from ergodic theory). In the case that {A_\alpha} is a genuine nonstandard finite group rather than an ultra approximate group, the non-compact components {{\bf R}^d \times {\bf Z}^m} of the Kronecker group {G} are trivial, and this theorem was implicitly established by Szegedy. The compact group {T} is quite large, and in particular is likely to be inseparable; but as with the case of graphons, when one is studying at most countably many functions {f}, one can cut down the size of this group to be separable (or equivalently, second countable or metrisable) if desired, so one often works with a “reduced Kronecker factor”, which is a quotient of the full Kronecker factor {G}. Once one is in the separable case, the Baire sigma algebra is identical with the more familiar Borel sigma algebra.

Given any sequence of uniformly bounded functions {f_n\colon A_n^m \rightarrow {\bf C}} for some fixed {m}, we can view the function {f \in L} defined by

\displaystyle  f := \pi_* \hbox{st} \lim_{n \rightarrow \alpha} f_n \ \ \ \ \ (4)

as an “additive limit” of the {f_n}, in much the same way that graphons {p\colon V_\alpha \times V_\alpha \rightarrow [0,1]} are limits of the indicator functions {1_{E_n}\colon V_n \times V_n \rightarrow \{0,1\}}. The additive limits capture some of the statistics of the {f_n}, for instance the normalised means

\displaystyle  \frac{1}{|A_n|} \sum_{x \in A_n^m} f_n(x)

converge (along the ultrafilter {\alpha}) to the mean

\displaystyle  \int_G f(x)\ d\mu_G(x),

and for three sequences {f_n,g_n,h_n\colon A_n^m \rightarrow {\bf C}} of functions, the normalised correlation

\displaystyle  \frac{1}{|A_n|^2} \sum_{x,y \in A_n^m} f_n(x) g_n(y) h_n(x+y)

converges along {\alpha} to the correlation

\displaystyle  \int_G \int_G f(x) g(y) h(x+y)\ d\mu_G(x) d\mu_G(y),

the normalised {U^2} Gowers norm

\displaystyle  ( \frac{1}{|A_n|^3} \sum_{x,y,z,w \in A_n^m: x+w=y+z} f_n(x) \overline{f_n(y)} \overline{f_n(z)} f_n(w))^{1/4}

converges along {\alpha} to the {U^2} Gowers norm

\displaystyle  ( \int_{G \times G \times G} f(x) \overline{f(y)} \overline{f(z)} f(x+y-z)\ d\mu_G(x) d\mu_G(y) d\mu_G(z))^{1/4}

and so forth. We caution however that some correlations that involve evaluating more than one function at the same point will not necessarily be preserved in the additive limit; for instance the normalised {\ell^2} norm

\displaystyle  (\frac{1}{|A_n|} \sum_{x \in A_n^m} |f_n(x)|^2)^{1/2}

does not necessarily converge to the {L^2} norm

\displaystyle  (\int_G |f(x)|^2\ d\mu_G(x))^{1/2},

but can converge instead to a larger quantity, due to the presence of the orthogonal projection {\pi_*} in the definition (4) of {f}.

An important special case of an additive limit occurs when the functions {f_n\colon A_n^m \rightarrow {\bf C}} involved are indicator functions {f_n = 1_{E_n}} of some subsets {E_n} of {A_n^m}. The additive limit {f \in L} does not necessarily remain an indicator function, but instead takes values in {[0,1]} (much as a graphon {p} takes values in {[0,1]} even though the original indicators {1_{E_n}} take values in {\{0,1\}}). The convolution {f \star f\colon G \rightarrow [0,1]} is then the ultralimit of the normalised convolutions {\frac{1}{|A_n|} 1_{E_n} \star 1_{E_n}}; in particular, the measure of the support of {f \star f} provides a lower bound on the limiting normalised cardinality {\frac{1}{|A_n|} |E_n + E_n|} of a sumset. In many situations this lower bound is an equality, but this is not necessarily the case, because the sumset {2E_n = E_n + E_n} could contain a large number of elements which have very few ({o(|A_n|)}) representations as the sum of two elements of {E_n}, and in the limit these portions of the sumset fall outside of the support of {f \star f}. (One can think of the support of {f \star f} as describing the “essential” sumset of {2E_n = E_n + E_n}, discarding those elements that have only very few representations.) Similarly for higher convolutions of {f}. Thus one can use additive limits to partially control the growth {k E_n} of iterated sumsets of subsets {E_n} of approximate groups {A_n}, in the regime where {k} stays bounded and {n} goes to infinity.
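The way boundary elements of a sumset drop out of the support of {f \star f} can be seen in an entirely finitary toy computation (no nonstandard analysis involved; the interval {E_n = \{0,\dots,N-1\}}, the size {N=1000}, and the cutoff {0.01} below are all merely illustrative choices):

```python
# Finite caricature of the "essential sumset": elements of E+E with only
# o(N) representations vanish from the support of f * f in the limit.
from collections import Counter

N = 1000
E = range(N)                          # E_n = {0, 1, ..., N-1}, an interval in Z

reps = Counter(a + b for a in E for b in E)   # representation counts on E + E
sumset = set(reps)                            # E + E = {0, 1, ..., 2N-2}

conv = {x: reps[x] / N for x in sumset}       # normalised convolution (1/N) 1_E * 1_E
thin = [x for x in sumset if conv[x] < 0.01]  # elements with very few representations

print(len(sumset), len(thin))                 # prints: 1999 18
```

Here the normalised convolution is the familiar triangle function; only the extreme edges of the sumset are “thin”, but for sets with long sparse tentacles the discrepancy between the sumset and the essential sumset can be far larger.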

Theorem 1 can be proven by Fourier-analytic means (combined with Freiman’s theorem from additive combinatorics), and we will do so below the fold. For now, we give some illustrative examples of additive limits.

Example 2 (Bohr sets) We take {A_n} to be the intervals {A_n := \{ x \in {\bf Z}: |x| \leq N_n \}}, where {N_n} is a sequence going to infinity; these are {2}-approximate groups for all {n}. Let {\theta} be an irrational real number, let {I} be an interval in {{\bf R}/{\bf Z}}, and for each natural number {n} let {B_n} be the Bohr set

\displaystyle  B_n := \{ x \in A_n: \theta x \hbox{ mod } 1 \in I \}.

In this case, the (reduced) Kronecker factor {G} can be taken to be the infinite cylinder {{\bf R} \times {\bf R}/{\bf Z}} with the usual Lebesgue measure {\mu_G}. The additive limits of {1_{A_n}} and {1_{B_n}} end up being {1_A} and {1_B}, where {A} is the finite cylinder

\displaystyle  A := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]\}

and {B} is the rectangle

\displaystyle  B := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]; t \in I \}.

Geometrically, one should think of {A_n} and {B_n} as being wrapped around the cylinder {{\bf R} \times {\bf R}/{\bf Z}} via the homomorphism {x \mapsto (\frac{x}{N_n}, \theta x \hbox{ mod } 1)}, and then one sees that {B_n} is converging in some normalised weak sense to {B}, and similarly for {A_n} and {A}. In particular, the additive limit predicts the growth rate of the iterated sumsets {kB_n} to be quadratic in {k} until {k|I|} becomes comparable to {1}, at which point the growth transitions to linear growth, in the regime where {k} is bounded and {n} is large.

If {\theta = \frac{p}{q}} were rational instead of irrational, then one would need to replace {{\bf R}/{\bf Z}} by the finite subgroup {\frac{1}{q}{\bf Z}/{\bf Z}} here.
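The predicted quadratic-then-linear growth is easy to observe numerically. The following Python sketch uses the purely illustrative choices {\theta = \sqrt{2}}, {|I| = 0.2} and {N = 200} (so the transition should occur around {k = 5}); it is a finite simulation, not the nonstandard construction itself:

```python
import math

theta = math.sqrt(2)            # an irrational frequency (illustrative choice)
N = 200                         # stands in for N_n; larger N sharpens the picture
I = (0.0, 0.2)                  # an arc of length 0.2 in R/Z

B = {x for x in range(-N, N + 1) if I[0] <= (theta * x) % 1.0 < I[1]}

sizes = []
S = {0}
for k in range(1, 9):
    S = {s + b for s in S for b in B}      # S is now the k-fold sumset kB
    sizes.append(len(S))

# |kB| ~ 2kN * min(k|I|, 1): quadratic while k|I| < 1, then linear
print(sizes)
```

The printed sizes grow roughly like {k^2 |I|} times {2N} for {k \leq 5} and roughly linearly in {k} afterwards, matching the cylinder picture.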

Example 3 (Structured subsets of progressions) We take {A_n} to be the rank two progression

\displaystyle  A_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|, |b| \leq N_n \},

where {N_n} is a sequence going to infinity; these are {4}-approximate groups for all {n}. Let {B_n} be the subset

\displaystyle  B_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|^2 + |b|^2 \leq N_n^2 \}.

Then the (reduced) Kronecker factor can be taken to be {G = {\bf R}^2} with Lebesgue measure {\mu_G}, and the additive limits of the {1_{A_n}} and {1_{B_n}} are then {1_A} and {1_B}, where {A} is the square

\displaystyle  A := \{ (a,b) \in {\bf R}^2: |a|, |b| \leq 1 \}

and {B} is the unit disk

\displaystyle  B := \{ (a,b) \in {\bf R}^2: a^2+b^2 \leq 1 \}.

Geometrically, the picture is similar to the Bohr set one, except now one uses a Freiman homomorphism {a + b N_n^2 \mapsto (\frac{a}{N_n}, \frac{b}{N_n})} for {a,b = O( N_n )} to embed the original sets {A_n, B_n} into the plane {{\bf R}^2}. In particular, one now expects the growth rate of the iterated sumsets {k A_n} and {k B_n} to be quadratic in {k}, in the regime where {k} is bounded and {n} is large.
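For {A_n} itself this quadratic growth can be confirmed exactly in small cases: as long as {2kN_n < N_n^2}, the two “digits” {a} and {b} cannot interact, so {kA_n} has exactly {(2kN_n+1)^2} elements. A short Python check (with the small illustrative value {M = 10} standing in for {N_n}):

```python
M = 10                       # stands in for N_n (illustrative size)
A = {a + b * M * M for a in range(-M, M + 1) for b in range(-M, M + 1)}

S = {0}
for k in range(1, 4):
    S = {s + x for s in S for x in A}
    # for 2kM < M^2 the "digits" a and b cannot interact, so kA is exactly
    # { a + b*M^2 : |a|, |b| <= k*M }, of cardinality (2kM+1)^2 ~ 4 k^2 M^2
    assert len(S) == (2 * k * M + 1) ** 2
print("quadratic growth in k, as predicted by the additive limit")
```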

Example 4 (Dissociated sets) Let {d} be a fixed natural number, and take

\displaystyle  A_n = \{0, v_1,\dots,v_d,-v_1,\dots,-v_d \}

where {v_1,\dots,v_d} are randomly chosen elements of a large cyclic group {{\bf Z}/p_n{\bf Z}}, where {p_n} is a sequence of primes going to infinity. These are {O(d)}-approximate groups. The (reduced) Kronecker factor {G} can (almost surely) then be taken to be {{\bf Z}^d} with counting measure, and the additive limit of {1_{A_n}} is {1_A}, where {A = \{ 0, e_1,\dots,e_d,-e_1,\dots,-e_d\}} and {e_1,\dots,e_d} is the standard basis of {{\bf Z}^d}. In particular, the growth rates of {k A_n} should grow approximately like {k^d} for {k} bounded and {n} large.
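This {k^d} growth is also easy to test by simulation. In the sketch below the modulus {p} (a large Mersenne prime, chosen purely for convenience), the value {d=3}, and the random seed are all illustrative; the point is that bounded sums of generic generators almost surely never collide, so {kA_n} has exactly the cardinality of the {\ell^1}-ball of radius {k} in {{\bf Z}^d}:

```python
import random
from math import comb

random.seed(0)
p = (1 << 61) - 1            # a large Mersenne prime, standing in for p_n
d = 3                        # fixed number of generators
v = [random.randrange(1, p) for _ in range(d)]
A = {0} | {x % p for x in v} | {(-x) % p for x in v}

def l1_ball(d, k):
    # number of lattice points c in Z^d with |c_1| + ... + |c_d| <= k
    return sum((1 << i) * comb(d, i) * comb(k, i) for i in range(min(d, k) + 1))

S = {0}
for k in range(1, 5):
    S = {(s + a) % p for s in S for a in A}
    # almost surely (p being enormous) bounded sums of the v_i never collide,
    # so kA has exactly as many elements as the l1-ball of radius k in Z^d
    assert len(S) == l1_ball(d, k)
print("kA grows like k^d, matching the Kronecker factor Z^d")
```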

Example 5 (Random subsets of groups) Let {A_n = G_n} be a sequence of finite additive groups whose order is going to infinity. Let {B_n} be a random subset of {G_n} of some fixed density {0 \leq \lambda \leq 1}. Then (almost surely) the Kronecker factor here can be reduced all the way to the trivial group {\{0\}}, and the additive limit of the {1_{B_n}} is the constant function {\lambda}. The convolutions {\frac{1}{|G_n|} 1_{B_n} \star 1_{B_n}} then converge in the ultralimit (modulo almost everywhere equivalence) to the pullback of {\lambda^2}; this reflects the fact that {(1-o(1))|G_n|} of the elements of {G_n} can be represented as the sum of two elements of {B_n} in {(\lambda^2 + o(1)) |G_n|} ways. In particular, {B_n+B_n} occupies a proportion {1-o(1)} of {G_n}.
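This concentration of representation counts is visible already at modest sizes. The following Python simulation (the group order {n = 5000}, density {\lambda = 0.3}, and seed are all arbitrary illustrative choices) checks that {B_n+B_n} covers the whole group and that each element has about {\lambda^2 n} representations:

```python
import random
random.seed(1)

n, lam = 5000, 0.3           # |G_n| and the density of B_n (illustrative values)
B = [x for x in range(n) if random.random() < lam]

# representation counts r(x) = #{(b1, b2) in B x B : b1 + b2 = x mod n}
r = [0] * n
for b1 in B:
    for b2 in B:
        r[(b1 + b2) % n] += 1

# every element of G_n is a sum of two elements of B_n, in ~ lam^2 * n ways
assert min(r) > 0
assert all(abs(cnt / n - lam * lam) < 0.05 for cnt in r)
print("B+B = G, with each element having roughly lam^2 * n representations")
```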

Example 6 (Trigonometric series) Take {A_n = G_n = {\bf Z}/p_n {\bf Z}} for a sequence {p_n} of primes going to infinity, and for each {n} let {\xi_{n,1},\xi_{n,2},\dots} be an infinite sequence of frequencies chosen uniformly and independently from {{\bf Z}/p_n{\bf Z}}. Let {f_n\colon {\bf Z}/p_n{\bf Z} \rightarrow {\bf C}} denote the random trigonometric series

\displaystyle  f_n(x) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i \xi_{n,j} x / p_n }.

Then (almost surely) we can take the reduced Kronecker factor {G} to be the infinite torus {({\bf R}/{\bf Z})^{\bf N}} (with the Haar probability measure {\mu_G}), and the additive limit of the {f_n} then becomes the function {f\colon ({\bf R}/{\bf Z})^{\bf N} \rightarrow {\bf C}} defined by the formula

\displaystyle  f( (x_j)_{j=1}^\infty ) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i x_j}.

In fact, the pullback {\pi^* f} is the ultralimit of the {f_n}. As such, for any standard exponent {1 \leq q < \infty}, the normalised {\ell^q} norm

\displaystyle  (\frac{1}{p_n} \sum_{x \in {\bf Z}/p_n{\bf Z}} |f_n(x)|^q)^{1/q}

can be seen to converge to the limit

\displaystyle  (\int_{({\bf R}/{\bf Z})^{\bf N}} |f(x)|^q\ d\mu_G(x))^{1/q}.

The reader is invited to consider combinations of the above examples, e.g. random subsets of Bohr sets, to get a sense of the general case of Theorem 1.

It is likely that this theorem can be extended to the noncommutative setting, using the noncommutative Freiman theorem of Emmanuel Breuillard, Ben Green, and myself, but I have not attempted to do so here (see though this recent preprint of Anush Tserunyan for some related explorations); in a separate direction, there should be extensions that can control higher Gowers norms, in the spirit of the work of Szegedy.

Note: the arguments below will presume some familiarity with additive combinatorics and with nonstandard analysis, and will be a little sketchy in places.

Read the rest of this entry »

Let {\bar{{\bf Q}}} be the algebraic closure of {{\bf Q}}, that is to say the field of algebraic numbers. We fix an embedding of {\bar{{\bf Q}}} into {{\bf C}}, giving rise to a complex absolute value {z \mapsto |z|} for algebraic numbers {z \in \bar{{\bf Q}}}.

Let {\alpha \in \bar{{\bf Q}}} be of degree {D > 1}, so that {\alpha} is irrational. A classical theorem of Liouville gives the quantitative bound

\displaystyle  |\alpha - \frac{p}{q}| \geq c \frac{1}{|q|^D} \ \ \ \ \ (1)

on the extent to which {\alpha} can be approximated by rational numbers {p/q}, where {c>0} depends on {\alpha,D} but not on {p,q}. Indeed, if one lets {\alpha = \alpha_1, \alpha_2, \dots, \alpha_D} be the Galois conjugates of {\alpha}, then the quantity {\prod_{i=1}^D |q \alpha_i - p|} is a non-zero natural number divided by a constant, and so we have the trivial lower bound

\displaystyle  \prod_{i=1}^D |q \alpha_i - p| \geq c

from which the bound (1) easily follows. A well known corollary of the bound (1) is that Liouville numbers are automatically transcendental.
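Though the argument is algebraic, the bound (1) is easy to probe numerically. The following Python snippet (with the illustrative choice {\alpha = 2^{1/3}}, so {D = 3}) computes {|q|^D |\alpha - p/q|} with {p} the nearest integer to {q\alpha}; consistently with (1), the quantity stays bounded away from zero (for this {\alpha} the minimum over small denominators is {\alpha - 1 = 0.2599\dots}, attained at {q=1}):

```python
alpha = 2.0 ** (1.0 / 3.0)     # an algebraic number of degree D = 3

def liouville_quantity(q):
    p = round(alpha * q)       # the nearest rational p/q with this denominator
    return q ** 3 * abs(alpha - p / q)

m = min(liouville_quantity(q) for q in range(1, 2001))
print(m)                       # bounded away from 0, consistently with (1)
```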

The famous theorem of Thue, Siegel and Roth improves the bound (1) to

\displaystyle  |\alpha - \frac{p}{q}| \geq c \frac{1}{|q|^{2+\epsilon}} \ \ \ \ \ (2)

for any {\epsilon>0} and rationals {\frac{p}{q}}, where {c>0} depends on {\alpha,\epsilon} but not on {p,q}. Apart from the {\epsilon} in the exponent and the constant {c}, this bound is optimal, as can be seen from Dirichlet’s approximation theorem. This theorem is a good example of the ineffectivity phenomenon that affects a large portion of modern number theory: the constant {c} is known to be positive, but there is no explicit lower bound for it in terms of the coefficients of the polynomial defining {\alpha} (in contrast to (1), for which an effective bound may be easily established). This is ultimately due to the reliance on the “dueling conspiracy” (or “repulsion phenomenon”) strategy. We do not as yet have a good way to rule out a single counterexample to (2), in which {\frac{p}{q}} is far closer to {\alpha} than {\frac{1}{|q|^{2+\epsilon}}}; however we can rule out two such counterexamples, by playing them off of each other.

A powerful strengthening of the Thue-Siegel-Roth theorem is given by the subspace theorem, first proven by Schmidt and then generalised further by several authors. To motivate the theorem, first observe that the Thue-Siegel-Roth theorem may be rephrased as a bound of the form

\displaystyle  | \alpha p - \beta q | \times | \alpha' p - \beta' q | \geq c (1 + |p| + |q|)^{-\epsilon} \ \ \ \ \ (3)

for any algebraic numbers {\alpha,\beta,\alpha',\beta'} with {(\alpha,\beta)} and {(\alpha',\beta')} linearly independent (over the algebraic numbers), and any {(p,q) \in {\bf Z}^2} and {\epsilon>0}, with the exception when {\alpha,\beta} or {\alpha',\beta'} are rationally dependent (i.e. one is a rational multiple of the other), in which case one has to remove some lines (i.e. subspaces in {{\bf Q}^2}) of rational slope from the space {{\bf Z}^2} of pairs {(p,q)} to which the bound (3) does not apply (namely, those lines for which the left-hand side vanishes). Here {c>0} can depend on {\alpha,\beta,\alpha',\beta',\epsilon} but not on {p,q}. More generally, we have

Theorem 1 (Schmidt subspace theorem) Let {d} be a natural number. Let {L_1,\dots,L_d: \bar{{\bf Q}}^d \rightarrow \bar{{\bf Q}}} be linearly independent linear forms. Then for any {\epsilon>0}, one has the bound

\displaystyle  \prod_{i=1}^d |L_i(x)| \geq c (1 + \|x\| )^{-\epsilon}

for all {x \in {\bf Z}^d}, outside of a finite number of proper subspaces of {{\bf Q}^d}, where

\displaystyle  \| (x_1,\dots,x_d) \| := \max( |x_1|, \dots, |x_d| )

and {c>0} depends on {\epsilon, d} and on the coefficients of the linear forms {L_1,\dots,L_d}, but is independent of {x}.

Being a generalisation of the Thue-Siegel-Roth theorem, it is unsurprising that the known proofs of the subspace theorem are also ineffective with regards to the constant {c}. (However, the number of exceptional subspaces may be bounded effectively; cf. the situation with the Skolem-Mahler-Lech theorem, discussed in this previous blog post.) Once again, the lower bound here is basically sharp except for the {\epsilon} factor and the implied constant: given any {\delta_1,\dots,\delta_d > 0} with {\delta_1 \dots \delta_d = 1}, a simple volume packing argument (the same one used to prove the Dirichlet approximation theorem) shows that for any sufficiently large {N \geq 1}, one can find integers {x_1,\dots,x_d \in [-N,N]}, not all zero, such that

\displaystyle  |L_i(x)| \ll \delta_i

for all {i=1,\dots,d}. Thus one can get {\prod_{i=1}^d |L_i(x)|} comparable to {1} in many different ways.
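The pigeonhole step of this volume packing argument can be illustrated concretely in the classical setting of Dirichlet’s approximation theorem. The Python sketch below (the choice {\alpha = \sqrt{2}} is purely illustrative) finds, for each {N}, a denominator {1 \leq q \leq N} with {|\alpha - p/q| \leq 1/(qN)}:

```python
import math

def dirichlet(alpha, N):
    # pigeonhole: the N+1 fractional parts of q*alpha (q = 0..N) fall into N
    # bins of width 1/N, so two collide; their difference q has ||q*alpha|| < 1/N
    bins = {}
    for q in range(N + 1):
        b = int(((alpha * q) % 1.0) * N)
        if b in bins:
            return q - bins[b]
        bins[b] = q
    raise AssertionError("pigeonhole guarantees a collision")

alpha = math.sqrt(2)                  # an arbitrary irrational, for illustration
for N in (10, 100, 1000):
    q = dirichlet(alpha, N)
    p = round(alpha * q)
    # Dirichlet: |alpha - p/q| <= 1/(qN) <= 1/q^2 for some 1 <= q <= N
    assert 1 <= q <= N
    assert abs(alpha - p / q) <= 1.0 / (q * N) + 1e-12
print("Dirichlet approximations found for N = 10, 100, 1000")
```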

There are important generalisations of the subspace theorem to other number fields than the rationals (and to other valuations than the Archimedean valuation {z \mapsto |z|}); we will develop one such generalisation below.

The subspace theorem is one of many finiteness theorems in Diophantine geometry; in this case, it is the number of exceptional subspaces which is finite. It turns out that finiteness theorems are very compatible with the language of nonstandard analysis. (See this previous blog post for a review of the basics of nonstandard analysis, and in particular for the nonstandard interpretation of asymptotic notation such as {\ll} and {o()}.) The reason for this is that a standard set {X} is finite if and only if it contains no strictly nonstandard elements (that is to say, elements of {{}^* X \backslash X}). This makes for a clean formulation of finiteness theorems in the nonstandard setting. For instance, the standard form of Bezout’s theorem asserts that if {P(x,y), Q(x,y)} are coprime polynomials over some field, then the curves {\{ (x,y): P(x,y) = 0\}} and {\{ (x,y): Q(x,y)=0\}} intersect in only finitely many points. The nonstandard version of this is then

Theorem 2 (Bezout’s theorem, nonstandard form) Let {P(x,y), Q(x,y)} be standard coprime polynomials. Then there are no strictly nonstandard solutions to {P(x,y)=Q(x,y)=0}.

Now we reformulate Theorem 1 in nonstandard language. We need a definition:

Definition 3 (General position) Let {K \subset L} be nested fields. A point {x = (x_1,\dots,x_d)} in {L^d} is said to be in {K}-general position if it is not contained in any hyperplane of {L^d} definable over {K}, or equivalently if one has

\displaystyle  a_1 x_1 + \dots + a_d x_d = 0 \iff a_1=\dots = a_d = 0

for any {a_1,\dots,a_d \in K}.

Theorem 4 (Schmidt subspace theorem, nonstandard version) Let {d} be a standard natural number. Let {L_1,\dots,L_d: \bar{{\bf Q}}^d \rightarrow \bar{{\bf Q}}} be linearly independent standard linear forms. Let {x \in {}^* {\bf Z}^d} be a tuple of nonstandard integers which is in {{\bf Q}}-general position (in particular, this forces {x} to be strictly nonstandard). Then one has

\displaystyle  \prod_{i=1}^d |L_i(x)| \gg \|x\|^{-o(1)},

where we extend {L_i} from {\bar{{\bf Q}}^d} to {{}^* \bar{{\bf Q}}^d} (and also similarly extend {\| \|} from {{\bf Z}^d} to {{}^* {\bf Z}^d}) in the usual fashion.

Observe that (as is usual when translating to nonstandard analysis) some of the epsilons and quantifiers that are present in the standard version become hidden in the nonstandard framework, being moved inside concepts such as “strictly nonstandard” or “general position”. We remark that as {x} is in {{\bf Q}}-general position, it is also in {\bar{{\bf Q}}}-general position (as an easy Galois-theoretic argument shows), and the requirement that the {L_1,\dots,L_d} are linearly independent is thus equivalent to {L_1(x),\dots,L_d(x)} being {\bar{{\bf Q}}}-linearly independent.

Exercise 1 Verify that Theorem 1 and Theorem 4 are equivalent. (Hint: there are only countably many proper subspaces of {{\bf Q}^d}.)

We will not prove the subspace theorem here, but instead focus on a particular application of the subspace theorem, namely to counting integer points on curves. In this paper of Corvaja and Zannier, the subspace theorem was used to give a new proof of the following basic result of Siegel:

Theorem 5 (Siegel’s theorem on integer points) Let {P \in {\bf Q}[x,y]} be an irreducible polynomial of two variables, such that the affine plane curve {C := \{ (x,y): P(x,y)=0\}} either has genus at least one, or has at least three points on the line at infinity, or both. Then {C} has only finitely many integer points {(x,y) \in {\bf Z}^2}.

This is a finiteness theorem, and as such may be easily converted to a nonstandard form:

Theorem 6 (Siegel’s theorem, nonstandard form) Let {P \in {\bf Q}[x,y]} be a standard irreducible polynomial of two variables, such that the affine plane curve {C := \{ (x,y): P(x,y)=0\}} either has genus at least one, or has at least three points on the line at infinity, or both. Then {C} does not contain any strictly nonstandard integer points {(x_*,y_*) \in {}^* {\bf Z}^2 \backslash {\bf Z}^2}.

Note that Siegel’s theorem can fail for genus zero curves that meet the line at infinity in just one or two points; the key examples here are the graphs {\{ (x,y): y - f(x) = 0\}} for a polynomial {f \in {\bf Z}[x]}, and the Pell equation curves {\{ (x,y): x^2 - dy^2 = 1 \}}. Siegel’s theorem can be compared with the more difficult theorem of Faltings, which establishes finiteness of rational points (not just integer points), but now needs the stricter requirement that the curve {C} has genus at least two (to avoid the additional counterexample of elliptic curves of positive rank, which have infinitely many rational points).

The standard proofs of Siegel’s theorem rely on a combination of the Thue-Siegel-Roth theorem and a number of results on abelian varieties (notably the Mordell-Weil theorem). The Corvaja-Zannier argument rebalances the difficulty of the argument by replacing the Thue-Siegel-Roth theorem by the more powerful subspace theorem (in fact, they need one of the stronger versions of this theorem alluded to earlier), while greatly reducing the reliance on results on abelian varieties. Indeed, for curves with three or more points at infinity, no theory from abelian varieties is needed at all, while for the remaining cases, one mainly needs the existence of the Abel-Jacobi embedding, together with a relatively elementary theorem of Chevalley-Weil which is used in the proof of the Mordell-Weil theorem, but is significantly easier to prove.

The Corvaja-Zannier argument (together with several further applications of the subspace theorem) is presented nicely in this Bourbaki expose of Bilu. To establish the theorem in full generality requires a certain amount of algebraic number theory machinery, such as the theory of valuations on number fields, or of relative discriminants between such number fields. However, the basic ideas can be presented without much of this machinery by focusing on simple special cases of Siegel’s theorem. For instance, we can handle irreducible cubics that meet the line at infinity at exactly three points {[1,\alpha_1,0], [1,\alpha_2,0], [1,\alpha_3,0]}:

Theorem 7 (Siegel’s theorem with three points at infinity) Siegel’s theorem holds when the irreducible polynomial {P(x,y)} takes the form

\displaystyle  P(x,y) = (y - \alpha_1 x) (y - \alpha_2 x) (y - \alpha_3 x) + Q(x,y)

for some quadratic polynomial {Q \in {\bf Q}[x,y]} and some distinct algebraic numbers {\alpha_1,\alpha_2,\alpha_3}.

Proof: We use the nonstandard formalism. Suppose for sake of contradiction that we can find a strictly nonstandard integer point {(x_*,y_*) \in {}^* {\bf Z}^2 \backslash {\bf Z}^2} on a curve {C := \{ (x,y): P(x,y)=0\}} of the indicated form. As this point is infinitesimally close to the line at infinity, {y_*/x_*} must be infinitesimally close to one of {\alpha_1,\alpha_2,\alpha_3}; without loss of generality we may assume that {y_*/x_*} is infinitesimally close to {\alpha_1}.

We now use a version of the polynomial method, to find some polynomials of controlled degree that vanish to high order on the “arm” of the cubic curve {C} that asymptotes to {[1,\alpha_1,0]}. More precisely, let {D \geq 3} be a large integer (actually {D=3} will already suffice here), and consider the {\bar{{\bf Q}}}-vector space {V} of polynomials {R(x,y) \in \bar{{\bf Q}}[x,y]} of degree at most {D}, and of degree at most {2} in the {y} variable; this space has dimension {3D}. Also, as one traverses the arm {y/x \rightarrow \alpha_1} of {C}, any polynomial {R} in {V} grows at a rate of at most {D}, that is to say {R} has a pole of order at most {D} at the point at infinity {[1,\alpha_1,0]}. By performing Laurent expansions around this point (which is a non-singular point of {C}, as the {\alpha_i} are assumed to be distinct), we may thus find a basis {R_1, \dots, R_{3D}} of {V}, with the property that {R_j} has a pole of order at most {D+1-j} at {[1,\alpha_1,0]} for each {j=1,\dots,3D}.

From the control of the pole at {[1,\alpha_1,0]}, we have

\displaystyle  |R_j(x_*,y_*)| \ll (|x_*|+|y_*|)^{D+1-j}

for all {j=1,\dots,3D}. The exponents here become negative for {j > D+1}, and on multiplying them all together we see that

\displaystyle  \prod_{j=1}^{3D} |R_j(x_*,y_*)| \ll (|x_*|+|y_*|)^{3D(D+1) - \frac{3D(3D+1)}{2}}.

This exponent is negative for {D} large enough (or just take {D=3}). If we expand

\displaystyle  R_j(x_*,y_*) = \sum_{a+b \leq D; b \leq 2} \alpha_{j,a,b} x_*^a y_*^b

for some algebraic numbers {\alpha_{j,a,b}}, then we thus have

\displaystyle  \prod_{j=1}^{3D} |\sum_{a+b \leq D; b \leq 2} \alpha_{j,a,b} x_*^a y_*^b| \ll (|x_*|+|y_*|)^{-\epsilon}

for some standard {\epsilon>0}. Note that the {3D}-dimensional vectors {(\alpha_{j,a,b})_{a+b \leq D; b \leq 2}} are linearly independent in {{\bf C}^{3D}}, because the {R_j} are linearly independent in {V}. Applying the Schmidt subspace theorem in the contrapositive, we conclude that the {3D}-tuple {( x_*^a y_*^b )_{a+b \leq D; b \leq 2} \in {}^* {\bf Z}^{3D}} is not in {{\bf Q}}-general position. That is to say, one has a non-trivial constraint of the form

\displaystyle  \sum_{a+b \leq D; b \leq 2} c_{a,b} x_*^a y_*^b = 0 \ \ \ \ \ (4)

for some standard rational coefficients {c_{a,b}}, not all zero. But, as {P} is irreducible and cubic in {y}, it has no common factor with the standard polynomial {\sum_{a+b \leq D; b \leq 2} c_{a,b} x^a y^b}, so by Bezout’s theorem (Theorem 2) the constraint (4) only has standard solutions, contradicting the strictly nonstandard nature of {(x_*,y_*)}. \Box
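Incidentally, the numerology in the above proof is simple enough to check mechanically. The following short Python computation (merely restating the dimension count and exponent calculation, not part of the proof) confirms that {\dim V = 3D} and that the total exponent {3D(D+1) - \frac{3D(3D+1)}{2}} is negative for all {D \geq 3}:

```python
# dim V counts monomials x^a y^b with a + b <= D and b <= 2; the exponent
# simplifies to 3D(1-D)/2, which is negative for every D >= 2.
for D in range(3, 20):
    dim_V = sum(1 for a in range(D + 1) for b in range(3) if a + b <= D)
    assert dim_V == 3 * D
    exponent = 3 * D * (D + 1) - 3 * D * (3 * D + 1) // 2
    assert exponent == 3 * D * (1 - D) // 2
    assert exponent < 0
```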

Exercise 2 Rewrite the above argument so that it makes no reference to nonstandard analysis. (In this case, the rewriting is quite straightforward; however, there will be a subsequent argument in which the standard version is significantly messier than the nonstandard counterpart, which is the reason why I am working with the nonstandard formalism in this blog post.)

A similar argument works for higher degree curves that meet the line at infinity in three or more points, though if the curve has singularities at infinity then it becomes convenient to rely on the Riemann-Roch theorem to control the dimension of the analogue of the space {V}. Note that when there are only two or fewer points at infinity, though, one cannot get the negative exponent of {-\epsilon} needed to usefully apply the subspace theorem. To deal with this case we require some additional tricks. For simplicity we focus on the case of Mordell curves, although it will be convenient to work with more general number fields {{\bf Q} \subset K \subset \bar{{\bf Q}}} than the rationals:

Theorem 8 (Siegel’s theorem for Mordell curves) Let {k} be a non-zero integer. Then there are only finitely many integer solutions {(x,y) \in {\bf Z}^2} to {y^2 - x^3 = k}. More generally, for any number field {K}, and any nonzero {k \in K}, there are only finitely many algebraic integer solutions {(x,y) \in {\mathcal O}_K^2} to {y^2-x^3=k}, where {{\mathcal O}_K} is the ring of algebraic integers in {K}.

Again, we will establish the nonstandard version. We need some additional notation:

Definition 9

  • We define an almost rational integer to be a nonstandard {x \in {}^* {\bf Q}} such that {Mx \in {}^* {\bf Z}} for some standard positive integer {M}, and write {{\bf Q} {}^* {\bf Z}} for the {{\bf Q}}-algebra of almost rational integers.
  • If {K} is a standard number field, we define an almost {K}-integer to be a nonstandard {x \in {}^* K} such that {Mx \in {}^* {\mathcal O}_K} for some standard positive integer {M}, and write {K {}^* {\mathcal O}_K} for the {K}-algebra of almost {K}-integers.
  • We define an almost algebraic integer to be a nonstandard {x \in {}^* \bar{{\bf Q}}} such that {Mx} is a nonstandard algebraic integer for some standard positive integer {M}, and write {\bar{{\bf Q}} {}^* {\bf Z}} for the {\bar{{\bf Q}}}-algebra of almost algebraic integers.
Theorem 10 (Siegel for Mordell, nonstandard version) Let {k} be a non-zero standard algebraic number. Then the curve {\{ (x,y): y^2 - x^3 = k \}} does not contain any strictly nonstandard almost algebraic integer point.

Another way of phrasing this theorem is that if {x,y} are strictly nonstandard almost algebraic integers, then {y^2-x^3} is either strictly nonstandard or zero.

Exercise 3 Verify that Theorem 8 and Theorem 10 are equivalent.

Due to all the ineffectivity, our proof does not supply any bound on the solutions {x,y} in terms of {k}, even if one removes all references to nonstandard analysis. It is a conjecture of Hall (a special case of the notorious ABC conjecture) that one has the bound {|x| \ll_\epsilon |k|^{2+\epsilon}} for all {\epsilon>0} (or equivalently {|y| \ll_\epsilon |k|^{3+\epsilon}}), but even the weaker conjecture that {x,y} are of polynomial size in {k} is open. (The best known bounds are of exponential nature, and are proven using a version of Baker’s method: see for instance this text of Sprindzuk.)

A direct repetition of the arguments used to prove Theorem 7 will not work here, because the Mordell curve {\{ (x,y): y^2 - x^3 = k \}} only hits the line at infinity at one point, {[0,1,0]}. To get around this we will exploit the fact that the Mordell curve is an elliptic curve and thus has a group law on it. We will then divide all the integer points on this curve by two; as elliptic curves have four 2-torsion points, this will end up placing us in a situation like Theorem 7, with four points at infinity. However, there is an obstruction: it is not obvious that dividing an integer point on the Mordell curve by two will produce another integer point. Fortunately, this is essentially true (after enlarging the ring of integers slightly) thanks to a general principle of Chevalley and Weil, which can be worked out explicitly in the case of division by two on Mordell curves by relatively elementary means (relying mostly on unique factorisation of ideals of algebraic integers). We give the details below the fold.

Read the rest of this entry »

    There are a number of ways to construct the real numbers {{\bf R}}, for instance

    • as the metric completion of {{\bf Q}} (thus, {{\bf R}} is defined as the set of Cauchy sequences of rationals, modulo Cauchy equivalence);
    • as the space of Dedekind cuts on the rationals {{\bf Q}};
    • as the space of quasimorphisms {\phi: {\bf Z} \rightarrow {\bf Z}} on the integers, quotiented by bounded functions. (I believe this construction first appears in this paper of Street, who credits the idea to Schanuel, though the germ of this construction arguably goes all the way back to Eudoxus.)

    There is also a fourth family of constructions that proceeds via nonstandard analysis, as a special case of what is known as the nonstandard hull construction. (Here I will assume some basic familiarity with nonstandard analysis and ultraproducts, as covered for instance in this previous blog post.) Given an unbounded nonstandard natural number {N \in {}^* {\bf N} \backslash {\bf N}}, one can define two external additive subgroups of the nonstandard integers {{}^* {\bf Z}}:

    • The group {O(N) := \{ n \in {}^* {\bf Z}: |n| \leq CN \hbox{ for some } C \in {\bf N} \}} of all nonstandard integers of magnitude less than or comparable to {N}; and
    • The group {o(N) := \{ n \in {}^* {\bf Z}: |n| \leq C^{-1} N \hbox{ for all positive } C \in {\bf N} \}} of nonstandard integers of magnitude infinitesimally smaller than {N}.

    The group {o(N)} is a subgroup of {O(N)}, so we may form the quotient group {O(N)/o(N)}. This space is isomorphic to the reals {{\bf R}}, and can in fact be used to construct the reals:

    Proposition 1 For any coset {n + o(N)} of {O(N)/o(N)}, there is a unique real number {\hbox{st} \frac{n}{N}} with the property that {\frac{n}{N} = \hbox{st} \frac{n}{N} + o(1)}. The map {n + o(N) \mapsto \hbox{st} \frac{n}{N}} is then an isomorphism between the additive groups {O(N)/o(N)} and {{\bf R}}.

    Proof: Uniqueness is clear. For existence, observe that the set {\{ x \in {\bf R}: Nx \leq n + o(N) \}} is a Dedekind cut, and its supremum can be verified to have the required properties for {\hbox{st} \frac{n}{N}}. \Box
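    Proposition 1 can be caricatured numerically. In the following Python sketch (an illustration only: a genuine unbounded {N} is nonstandard, so here an ordinary large integer stands in for {N}, integers much smaller than {N} play the role of {o(N)}, and plain division mimics the standard part map), one checks that {\hbox{st} \frac{n}{N}} is insensitive to perturbations within {o(N)} and respects addition, as the isomorphism requires.

```python
# Finite-N caricature of Proposition 1 (illustration only; N here is an
# ordinary large integer standing in for an unbounded nonstandard one).
N = 10**12

n = 314_159_265_358_979   # an element of "O(N)": |n| <= C*N for a small C
h = 10**4                 # plays the role of an element of "o(N)"

# the "standard part" st(n/N) is insensitive to perturbations within o(N):
x, x_perturbed = n / N, (n + h) / N
assert abs(x - x_perturbed) < 1e-7

# the map n + o(N) -> st(n/N) is additive, matching the group isomorphism:
m = 271_828_182_845_904
assert abs((n + m) / N - (n / N + m / N)) < 1e-9
```

Of course, in the finite model every "infinitesimal" error is merely small rather than zero; the quotient by {o(N)} is what turns these approximate identities into exact ones.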

    In a similar vein, we can view the unit interval {[0,1]} in the reals as the quotient

    \displaystyle  [0,1] \equiv [N] / o(N) \ \ \ \ \ (1)

    where {[N]} is the nonstandard (i.e. internal) set {\{ n \in {\bf N}: n \leq N \}}; of course, {[N]} is not a group, so one should interpret {[N]/o(N)} as the image of {[N]} under the quotient map {{}^* {\bf Z} \rightarrow {}^* {\bf Z} / o(N)} (or {O(N) \rightarrow O(N)/o(N)}, if one prefers). Or to put it another way, (1) asserts that {[0,1]} is the image of {[N]} with respect to the map {\pi: n \mapsto \hbox{st} \frac{n}{N}}.

    In this post I would like to record a nice measure-theoretic version of the equivalence (1), which essentially appears already in standard texts on Loeb measure (see e.g. this text of Cutland). To describe the results, we must first quickly recall the construction of Loeb measure on {[N]}. Given an internal subset {A} of {[N]}, we may define the elementary measure {\mu_0(A)} of {A} by the formula

    \displaystyle  \mu_0(A) := \hbox{st} \frac{|A|}{N}.

    This is a finitely additive probability measure on the Boolean algebra of internal subsets of {[N]}. We can then construct the Loeb outer measure {\mu^*(A)} of any subset {A \subset [N]} in complete analogy with Lebesgue outer measure by the formula

    \displaystyle  \mu^*(A) := \inf \sum_{n=1}^\infty \mu_0(A_n)

    where {(A_n)_{n=1}^\infty} ranges over all sequences of internal subsets of {[N]} that cover {A}. We say that a subset {A} of {[N]} is Loeb measurable if, for any (standard) {\epsilon>0}, one can find an internal subset {B} of {[N]} which differs from {A} by a set of Loeb outer measure at most {\epsilon}, and in that case we define the Loeb measure {\mu(A)} of {A} to be {\mu^*(A)}. It is a routine matter to show (e.g. using the Carathéodory extension theorem) that the space {{\mathcal L}} of Loeb measurable sets is a {\sigma}-algebra, and that {\mu} is a countably additive probability measure on this space that extends the elementary measure {\mu_0}. Thus {[N]} now has the structure of a probability space {([N], {\mathcal L}, \mu)}.
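    The elementary measure is easy to mimic in a finite model. In this Python sketch (hedged: a concrete large {N} stands in for the unbounded nonstandard one, and ordinary division for the standard part), one verifies finite additivity and total mass one for {\mu_0}:

```python
# Finite sketch of the elementary measure mu_0(A) = st(|A|/N) on [N].
# Here N is a concrete large integer and st is mimicked by plain division.
N = 10**5

def mu0(A):
    """'Elementary measure' of a subset A of {1, ..., N}."""
    return len(A) / N

A = set(range(1, N // 2 + 1))          # first half of [N]
B = set(range(N // 2 + 1, N + 1))      # second half (disjoint from A)

assert mu0(A) == 0.5 and mu0(B) == 0.5
assert mu0(A | B) == mu0(A) + mu0(B)   # finite additivity on disjoint sets
assert mu0(A | B) == 1.0               # total mass one: a probability measure
```

The genuinely nonstandard input enters only afterwards, when countable saturation upgrades this finitely additive measure to the countably additive Loeb measure.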

    Now, the group {o(N)} acts (Loeb-almost everywhere) on the probability space {[N]} by the addition map, thus {T^h n := n+h} for {n \in [N]} and {h \in o(N)} (excluding a set of Loeb measure zero where {n+h} exits {[N]}). This action is clearly seen to be measure-preserving. As such, we can form the invariant factor {Z^0_{o(N)}([N]) = ([N], {\mathcal L}^{o(N)}, \mu\downharpoonright_{{\mathcal L}^{o(N)}})}, defined by restricting attention to those Loeb measurable sets {A \subset [N]} with the property that {T^h A} is equal {\mu}-almost everywhere to {A} for each {h \in o(N)}.

    The claim is then that this invariant factor is equivalent (up to almost everywhere equivalence) to the unit interval {[0,1]} with Lebesgue measure {m} (and the trivial action of {o(N)}), by the same factor map {\pi: n \mapsto \hbox{st} \frac{n}{N}} used in (1). More precisely:

    Theorem 2 Given a set {A \in {\mathcal L}^{o(N)}}, there exists a Lebesgue measurable set {B \subset [0,1]}, unique up to {m}-a.e. equivalence, such that {A} is {\mu}-a.e. equivalent to the set {\pi^{-1}(B) := \{ n \in [N]: \hbox{st} \frac{n}{N} \in B \}}. Conversely, if {B \subset [0,1]} is Lebesgue measurable, then {\pi^{-1}(B)} is in {{\mathcal L}^{o(N)}}, and {\mu( \pi^{-1}(B) ) = m( B )}.

    More informally, we have the measure-theoretic version

    \displaystyle  [0,1] \equiv Z^0_{o(N)}( [N] )

    of (1).

    Proof: We first prove the converse. It is clear that {\pi^{-1}(B)} is {o(N)}-invariant, so it suffices to show that {\pi^{-1}(B)} is Loeb measurable with Loeb measure {m(B)}. This is easily verified when {B} is an elementary set (a finite union of intervals). By countable subadditivity of outer measure, this implies that the Loeb outer measure of {\pi^{-1}(E)} is bounded by the Lebesgue outer measure of {E} for any set {E \subset [0,1]}; since every Lebesgue measurable set differs from an elementary set by a set of arbitrarily small Lebesgue outer measure, the claim follows.

    Now we establish the forward claim. Uniqueness is clear from the converse claim, so it suffices to show existence. Let {A \in {\mathcal L}^{o(N)}}. Let {\epsilon>0} be an arbitrary standard real number; then we can find an internal set {A_\epsilon \subset [N]} which differs from {A} by a set of Loeb measure at most {\epsilon}. As {A} is {o(N)}-invariant, we conclude that for every {h \in o(N)}, {A_\epsilon} and {T^h A_\epsilon} differ by a set of Loeb measure (and hence elementary measure) at most {2\epsilon}. By the (contrapositive of the) underspill principle, there must exist a standard {\delta>0} such that {A_\epsilon} and {T^h A_\epsilon} differ by a set of elementary measure at most {2\epsilon} for all {|h| \leq \delta N}. If we then define the nonstandard function {f_\epsilon: [N] \rightarrow {}^* {\bf R}} by the formula

    \displaystyle  f_\epsilon(n) := \frac{1}{\delta N} \sum_{m \in [N]: m \leq n \leq m+\delta N} 1_{A_\epsilon}(m),

    then from the (nonstandard) triangle inequality we have

    \displaystyle  \frac{1}{N} \sum_{n \in [N]} |f_\epsilon(n) - 1_{A_\epsilon}(n)| \leq 3\epsilon

    (say). On the other hand, {f_\epsilon} has the Lipschitz continuity property

    \displaystyle  |f_\epsilon(n)-f_\epsilon(m)| \leq \frac{2|n-m|}{\delta N}

    and so in particular we see that

    \displaystyle  \hbox{st} f_\epsilon(n) = \tilde f_\epsilon( \hbox{st} \frac{n}{N} )

    for some Lipschitz continuous function {\tilde f_\epsilon: [0,1] \rightarrow [0,1]}. If we then let {E_\epsilon} be the set where {\tilde f_\epsilon \geq 1 - \sqrt{\epsilon}}, one can check that {A_\epsilon} differs from {\pi^{-1}(E_\epsilon)} by a set of Loeb outer measure {O(\sqrt{\epsilon})}, and hence {A} does so also. Sending {\epsilon} to zero, we see (from the converse claim) that {1_{E_\epsilon}} is a Cauchy sequence in {L^1} and thus converges in {L^1} to {1_E} for some Lebesgue measurable set {E}. The sets {A_\epsilon} then converge in Loeb outer measure to {\pi^{-1}(E)}, giving the claim. \Box

    Thanks to the Lebesgue differentiation theorem, the conditional expectation {{\bf E}( f | Z^0_{o(N)}([N]))} of a bounded Loeb-measurable function {f: [N] \rightarrow {\bf R}} can be expressed (as a function on {[0,1]}, defined {m}-a.e.) as

    \displaystyle  {\bf E}( f | Z^0_{o(N)}([N]))(x) := \lim_{\epsilon \rightarrow 0} \frac{1}{2\epsilon} \int_{[(x-\epsilon) N,(x+\epsilon) N]} f\ d\mu.

    By the abstract ergodic theorem from the previous post, one can also view this conditional expectation as the element of minimal {L^2} norm in the closed convex hull of the shifts {T^h f}, {h \in o(N)}. In particular, we obtain a form of the von Neumann ergodic theorem in this context: the averages {\frac{1}{H} \sum_{h=1}^H T^h f} for {H=O(N)} converge (as a net, rather than a sequence) in {L^2} to {{\bf E}( f | Z^0_{o(N)}([N]))}.
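    The smoothing effect of these ergodic averages can be simulated at a single finite scale. In this Python sketch (the values of {N} and {H}, the noise model, and the profile {F} are all illustrative choices, not taken from the text), averaging over a window of length {H} with {1 \ll H \ll N} washes out mean-zero fluctuations and approximately recovers the macroscopic profile at {\hbox{st} \frac{n}{N}}:

```python
import random

# Single-scale sketch of the ergodic averages (1/H) * sum_{h=1}^H f(n+h).
# N, H, the +-1 noise, and the profile F below are illustrative choices.
random.seed(0)
N, H = 10**5, 10**3                      # 1 << H << N
F = lambda x: x * x                      # macroscopic profile on [0,1]
f = [F(n / N) + random.choice((-1, 1)) for n in range(N)]  # profile + noise

n0 = N // 2                              # st(n0/N) = 1/2, F(1/2) = 0.25
avg = sum(f[n0 + h] for h in range(1, H + 1)) / H

# the +-1 noise averages out; the slowly varying profile survives
assert abs(avg - F(n0 / N)) < 0.2
```

The two requirements pull in opposite directions: {H} must be large enough to suppress the noise (error roughly {1/\sqrt{H}}) but small compared to {N} so that the profile barely varies across the window (error roughly {H/N}).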

    If {f: [N] \rightarrow [-1,1]} is (the standard part of) an internal function, that is to say the ultralimit of a sequence {f_n: [N_n] \rightarrow [-1,1]} of finitary bounded functions, one can view the measurable function {F := {\bf E}( f | Z^0_{o(N)}([N]))} as a limit of the {f_n} that is analogous to the “graphons” that emerge as limits of graphs (see e.g. the recent text of Lovász on graph limits). Indeed, the measurable function {F: [0,1] \rightarrow [-1,1]} is related to the discrete functions {f_n: [N_n] \rightarrow [-1,1]} by the formula

    \displaystyle  \int_a^b F(x)\ dx = \hbox{st} \lim_{n \rightarrow p} \frac{1}{N_n} \sum_{a N_n \leq m \leq b N_n} f_n(m)

    for all {0 \leq a < b \leq 1}, where {p} is the nonprincipal ultrafilter used to define the nonstandard universe. In particular, from the Arzelà-Ascoli diagonalisation argument there is a subsequence {n_j} such that

    \displaystyle  \int_a^b F(x)\ dx = \lim_{j \rightarrow \infty} \frac{1}{N_{n_j}} \sum_{a N_{n_j} \leq m \leq b N_{n_j}} f_{n_j}(m),

    thus {F} is the asymptotic density function of the {f_n}. For instance, if {f_n} is the indicator function of a randomly chosen subset of {[N_n]}, then the asymptotic density function would equal {1/2} (almost everywhere, at least).
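    The random-subset example can be checked directly. In this Python sketch (with an illustrative finite {N_n}, and ordinary pseudorandomness standing in for the random subset), the windowed averages concentrate near {\int_a^b \frac{1}{2}\,dx}, consistent with the density function being the constant {1/2}:

```python
import random

# Sketch: f is the indicator of a random subset of [N]; the averages
# (1/N) * sum_{aN <= m <= bN} f(m) concentrate near (b-a)/2, matching
# an asymptotic density function F that is identically 1/2.
random.seed(1)
N = 10**5
f = [random.randint(0, 1) for _ in range(N)]

a, b = 0.2, 0.7
avg = sum(f[int(a * N): int(b * N)]) / N

assert abs(avg - (b - a) / 2) < 0.01     # close to int_a^b (1/2) dx = 0.25
```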

    I’m continuing to look into understanding the ergodic theory of {o(N)} actions, as I believe this may allow one to apply ergodic theory methods to the “single-scale” or “non-asymptotic” setting (in which one averages only over scales comparable to a large parameter {N}, rather than the traditional asymptotic approach of letting the scale go to infinity). I’m planning some further posts in this direction, though this is still a work in progress.

    (This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis“. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry“. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

    Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.

    The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

    (Discrete) | (Continuous) | (Limit method)
    Ramsey theory | Topological dynamics | Compactness
    Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle
    Graph/hypergraph regularity | Measure theory | Graph limits
    Polynomial regularity | Linear algebra | Ultralimits
    Structural decompositions | Hilbert space geometry | Ultralimits
    Fourier analysis | Spectral theory | Direct and inverse limits
    Quantitative algebraic geometry | Algebraic geometry | Schemes
    Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits
    Approximate group theory | Topological group theory | Model theory

    As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

    • Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects {x_n} in a common space {X}, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object {\lim_{n \rightarrow \infty} x_n}, which remains in the same space, and is “close” to many of the original objects {x_n} with respect to the given metric or topology.
    • Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects {x_n} in a category {X}, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit {\varinjlim x_n} or the inverse limit {\varprojlim x_n} of these objects, which is another object in the same category {X}, and is connected to the original objects {x_n} by various morphisms.
    • Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects {x_{\bf n}} or of spaces {X_{\bf n}}, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, {X_{\bf n}} might be groups and {x_{\bf n}} might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}} or a new space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}, which is still a model of the same language (e.g. if the spaces {X_{\bf n}} were all groups, then the limiting space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} is an abelian group, then the {X_{\bf n}} will also be abelian groups for many {{\bf n}}.)
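    As a toy illustration of the logical limit (heavily hedged: a genuine ultraproduct requires a nonprincipal ultrafilter, which is non-constructive, so this Python sketch only checks properties along a long cofinite tail; the names and cutoffs are invented for the example), one can model a nonstandard integer as a sequence of integers, identified up to eventual agreement:

```python
# Toy model of an ultraproduct of the integers: a "nonstandard integer"
# is a sequence n -> x(n), and two sequences are identified when they
# agree for all large n.  (A real ultraproduct uses a nonprincipal
# ultrafilter; checking a finite cofinite tail is a crude stand-in.)

def eventually_equal(x, y, tail_from=100, check_up_to=10**4):
    # crude finite check of "x(n) = y(n) for all large n"
    return all(x(n) == y(n) for n in range(tail_from, check_up_to))

omega = lambda n: n                    # the unbounded element (1, 2, 3, ...)
omega2 = lambda n: n if n > 50 else 0  # another representative of its class
seven = lambda n: 7                    # the standard integer 7, embedded diagonally

assert eventually_equal(omega, omega2)
# transfer in miniature: "x > 7" holds for all large n, so it holds of the
# limit object omega, which exceeds every standard integer embedded this way
assert all(omega(n) > seven(n) for n in range(100, 10**4))
```

The role of the ultrafilter, invisible in this sketch, is to decide membership for sets of indices that are neither cofinite nor finite, so that every first-order question about the limit object receives a definite answer.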

    The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects {x_{\bf n}} to all lie in a common space {X} in order to form an ultralimit {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; they are permitted to lie in different spaces {X_{\bf n}}; this is more natural in many discrete contexts, e.g. when considering graphs on {{\bf n}} vertices in the limit when {{\bf n}} goes to infinity. Also, no convergence properties on the {x_{\bf n}} are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces {X_{\bf n}} involved are required in order to construct the ultraproduct.

    With so few requirements on the objects {x_{\bf n}} or spaces {X_{\bf n}}, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly well suited for extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the {x_{\bf n}}, will be exactly obeyed by the limit object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”). 
To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.

    Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.

    Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.

