
(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. There are basically two approaches to applying results and ideas from continuous mathematics in discrete settings. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which can then be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this talk.

The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

Discrete | Continuous | Limit method
Ramsey theory | Topological dynamics | Compactness
Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle
Graph/hypergraph regularity | Measure theory | Graph limits
Polynomial regularity | Linear algebra | Ultralimits
Structural decompositions | Hilbert space geometry | Ultralimits
Fourier analysis | Spectral theory | Direct and inverse limits
Quantitative algebraic geometry | Algebraic geometry | Schemes
Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits
Approximate group theory | Topological group theory | Model theory

As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

  • Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects {x_n} in a common space {X}, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object {\lim_{n \rightarrow \infty} x_n}, which remains in the same space, and is “close” to many of the original objects {x_n} with respect to the given metric or topology.
  • Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects {x_n} in a category {X}, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit {\varinjlim x_n} or the inverse limit {\varprojlim x_n} of these objects, which is another object in the same category {X}, and is connected to the original objects {x_n} by various morphisms.
  • Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects {x_{\bf n}} or of spaces {X_{\bf n}}, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, {X_{\bf n}} might be groups and {x_{\bf n}} might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}} or a new space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}, which is still a model of the same language (e.g. if the spaces {X_{\bf n}} were all groups, then the limiting space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} is an abelian group, then the {X_{\bf n}} will also be abelian groups for many {{\bf n}}.)

The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects {x_{\bf n}} to all lie in a common space {X} in order to form an ultralimit {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; they are permitted to lie in different spaces {X_{\bf n}}; this is more natural in many discrete contexts, e.g. when considering graphs on {{\bf n}} vertices in the limit when {{\bf n}} goes to infinity. Also, no convergence properties on the {x_{\bf n}} are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces {X_{\bf n}} involved are required in order to construct the ultraproduct.

With so few requirements on the objects {x_{\bf n}} or spaces {X_{\bf n}}, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly well suited to extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the {x_{\bf n}}, will be exactly obeyed by the limit object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”). To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.
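
To make Łoś’s theorem concrete, here is a toy computation (an illustration of mine, not from the original talk; of course, no genuine nonprincipal ultrafilter can be exhibited in code, so one can only tabulate the truth value of a sentence in each factor). The first-order sentence {\exists x: x^2 = -1} holds in {{\bf F}_p} exactly when {p \equiv 1 \pmod 4}, so whether it holds in an ultraproduct {\prod_{p \rightarrow \alpha} {\bf F}_p} depends on which residue class the ultrafilter {\alpha} concentrates on:

```python
# Tabulate the truth value of "exists x: x^2 = -1" in each factor F_p.
# Los's theorem then says the sentence holds in the ultraproduct iff the
# set of p for which it holds belongs to the chosen ultrafilter.
def has_sqrt_minus_one(p):
    return any((x * x + 1) % p == 0 for x in range(p))

for p in [3, 5, 7, 11, 13, 17, 19, 23, 29]:  # small primes
    print(p, p % 4, has_sqrt_minus_one(p))
# By a classical fact, this is True exactly when p % 4 == 1; an ultrafilter
# concentrating on such primes yields an ultraproduct with a root of -1.
```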

Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.

Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.


The rectification principle in arithmetic combinatorics asserts, roughly speaking, that very small subsets (or, alternatively, small structured subsets) of an additive group or a field of large characteristic can be modeled (for the purposes of arithmetic combinatorics) by subsets of a group or field of zero characteristic, such as the integers {{\bf Z}} or the complex numbers {{\bf C}}. The additive form of this principle is known as the Freiman rectification principle; it has several formulations, going back of course to the original work of Freiman. Here is one formulation as given by Bilu, Lev, and Ruzsa:

Proposition 1 (Additive rectification) Let {A} be a subset of the additive group {{\bf Z}/p{\bf Z}} for some prime {p}, and let {s \geq 1} be an integer. Suppose that {|A| \leq \log_{2s} p}. Then there exists a map {\phi: A \rightarrow A'} into a subset {A'} of the integers which is a Freiman isomorphism of order {s} in the sense that for any {x_1,\ldots,x_s,y_1,\ldots,y_s \in A}, one has

\displaystyle  x_1+\ldots+x_s = y_1+\ldots+y_s

if and only if

\displaystyle  \phi(x_1)+\ldots+\phi(x_s) = \phi(y_1)+\ldots+\phi(y_s).

Furthermore {\phi} is a right-inverse of the obvious projection homomorphism from {{\bf Z}} to {{\bf Z}/p{\bf Z}}.
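
Proposition 1 is easy to test by brute force on small examples. The following sketch is my own illustrative code (the helper name `is_freiman_iso` is not from the post); it checks the defining condition by enumerating all pairs of {s}-tuples:

```python
from itertools import product

def is_freiman_iso(A, phi, s, mod=None):
    """Check that x_1+...+x_s = y_1+...+y_s iff the phi-images agree.
    If mod is given, sums on the domain side are taken modulo mod."""
    for xs in product(A, repeat=s):
        for ys in product(A, repeat=s):
            lhs = sum(xs) - sum(ys)
            if mod is not None:
                lhs %= mod
            rhs = sum(phi[x] for x in xs) - sum(phi[y] for y in ys)
            if (lhs == 0) != (rhs == 0):
                return False
    return True

# Example: A = {1, 2, 4} in Z/101Z, lifted to the integers by the identity;
# this lift is a Freiman isomorphism of order 2 since no 2-fold sums wrap.
A = [1, 2, 4]
phi = {a: a for a in A}
print(is_freiman_iso(A, phi, s=2, mod=101))  # True
```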

The original version of the rectification principle allowed the sets involved to be substantially larger in size (cardinality up to a small constant multiple of {p}), but with the additional hypothesis of bounded doubling involved; see the above-mentioned papers, as well as this later paper of Green and Ruzsa, for further discussion.

The proof of Proposition 1 is quite short (see Theorem 3.1 of Bilu-Lev-Ruzsa); the main idea is to use Minkowski’s theorem to find a non-trivial dilate {aA} of {A} that is contained in a small neighbourhood of the origin in {{\bf Z}/p{\bf Z}}, at which point the rectification map {\phi} can be constructed by hand.
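
The idea can also be visualised computationally. The following hedged sketch (my own code, with hypothetical names) simply searches for a rectifying dilate by brute force, rather than invoking Minkowski’s theorem (which is what guarantees that such a dilate exists when {|A|} is small):

```python
def centered(x, p):
    """The representative of x mod p lying in (-p/2, p/2]."""
    x %= p
    return x if x <= p // 2 else x - p

def rectifying_dilate(A, p, radius):
    """Brute-force search for a dilation a*A inside a small box around 0."""
    for a in range(1, p):
        if all(abs(centered(a * x, p)) <= radius for x in A):
            return a
    return None

p, A = 101, [30, 60, 49]                 # A is a hidden dilate of {1, 2, 5}
a = rectifying_dilate(A, p, radius=10)
print(a, sorted(centered(a * x, p) for x in A))
# Once a*A lies in a short interval around the origin, sums of s elements
# cannot wrap around mod p; this is the key to building the map phi by hand.
```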

Very recently, Codrut Grosu obtained an arithmetic analogue of the above theorem, in which the rectification map {\phi} preserves both additive and multiplicative structure:

Theorem 2 (Arithmetic rectification) Let {A} be a subset of the finite field {{\bf F}_p} for some prime {p \geq 3}, and let {s \geq 1} be an integer. Suppose that {|A| < \log_2 \log_{2s} \log_{2s^2} p - 1}. Then there exists a map {\phi: A \rightarrow A'} into a subset {A'} of the complex numbers which is a Freiman field isomorphism of order {s} in the sense that for any {x_1,\ldots,x_n \in A} and any polynomial {P(x_1,\ldots,x_n)} of degree at most {s} with integer coefficients whose magnitudes sum to at most {s}, one has

\displaystyle  P(x_1,\ldots,x_n)=0

if and only if

\displaystyle  P(\phi(x_1),\ldots,\phi(x_n))=0.

Note that it is necessary to use an algebraically closed field such as {{\bf C}} for this theorem, in contrast to the integers used in Proposition 1, as {{\bf F}_p} can contain objects such as square roots of {-1} which can only map to {\pm i} in the complex numbers (once {s} is at least {2}).
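
For instance (a quick check of my own): {5^2 = 25 \equiv -1 \pmod{13}}, so any Freiman field isomorphism of order at least {2} defined on a set containing {5 \in {\bf F}_{13}} must send {5} to a complex root of {z^2 + 1}:

```python
p = 13
roots = [x for x in range(p) if (x * x + 1) % p == 0]
print(roots)  # [5, 8], i.e. the two square roots +-5 of -1 in F_13;
              # their images under phi are forced to be +i and -i.
```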

Using Theorem 2, one can transfer results in arithmetic combinatorics (e.g. sum-product or Szemerédi-Trotter type theorems) regarding finite subsets of {{\bf C}} to analogous results regarding sufficiently small subsets of {{\bf F}_p}; see the paper of Grosu for several examples of this. This should be compared with the paper of Vu, Wood, and Wood, which introduces a converse principle that embeds finite subsets of {{\bf C}} (or more generally, a characteristic zero integral domain) in a Freiman field-isomorphic fashion into finite subsets of {{\bf F}_p} for arbitrarily large primes {p}, allowing one to transfer arithmetic combinatorial facts from the latter setting to the former.

Grosu’s argument uses some quantitative elimination theory, and in particular a quantitative variant of a lemma of Chang that was discussed previously on this blog. In that previous blog post, it was observed that (an ineffective version of) Chang’s theorem could be obtained using only qualitative algebraic geometry (as opposed to quantitative algebraic geometry tools such as elimination theory results with explicit bounds) by means of nonstandard analysis (or, in what amounts to essentially the same thing in this context, the use of ultraproducts). One can then ask whether one can similarly establish an ineffective version of Grosu’s result by nonstandard means. The purpose of this post is to record that this can indeed be done without much difficulty, though the result obtained, being ineffective, is somewhat weaker than that in Theorem 2. More precisely, we obtain

Theorem 3 (Ineffective arithmetic rectification) Let {s, n \geq 1}. Then if {{\bf F}} is a field of characteristic at least {C_{s,n}} for some {C_{s,n}} depending on {s,n}, and {A} is a subset of {{\bf F}} of cardinality {n}, then there exists a map {\phi: A \rightarrow A'} into a subset {A'} of the complex numbers which is a Freiman field isomorphism of order {s}.

Our arguments will not provide any effective bound on the quantity {C_{s,n}} (though one could in principle eventually extract such a bound by deconstructing the proof of Proposition 4 below), making this result weaker than Theorem 2 (save for the minor generalisation that it can handle fields of prime power order as well as fields of prime order as long as the characteristic remains large).

Following the principle that ultraproducts can be used as a bridge to connect quantitative and qualitative results (as discussed in these previous blog posts), we will deduce Theorem 3 from the following (well-known) qualitative version:

Proposition 4 (Baby Lefschetz principle) Let {k} be a field of characteristic zero that is finitely generated over the rationals. Then there is an isomorphism {\phi: k \rightarrow \phi(k)} from {k} to a subfield {\phi(k)} of {{\bf C}}.

This principle (first laid out in an appendix of Lefschetz’s book), among other things, often allows one to use the methods of complex analysis (e.g. Riemann surface theory) to study many other fields of characteristic zero. There are many variants and extensions of this principle; see for instance this MathOverflow post for some discussion of these. I used this baby version of the Lefschetz principle recently in a paper on expanding polynomial maps.

Proof: We give two proofs of this fact, one using transcendence bases and the other using Hilbert’s nullstellensatz.

We begin with the former proof. As {k} is finitely generated over {{\bf Q}}, it has finite transcendence degree, thus one can find algebraically independent elements {x_1,\ldots,x_m} of {k} over {{\bf Q}} such that {k} is a finite extension of {{\bf Q}(x_1,\ldots,x_m)}, and in particular by the primitive element theorem {k} is generated by {{\bf Q}(x_1,\ldots,x_m)} and an element {\alpha} which is algebraic over {{\bf Q}(x_1,\ldots,x_m)}. (Here we use the fact that characteristic zero fields are separable.) If we then define {\phi} by first mapping {x_1,\ldots,x_m} to generic (and thus algebraically independent) complex numbers {z_1,\ldots,z_m}, and then setting {\phi(\alpha)} to be a complex root of the minimal polynomial for {\alpha} over {{\bf Q}(x_1,\ldots,x_m)} after replacing each {x_i} with the complex number {z_i}, we obtain a field isomorphism {\phi: k \rightarrow \phi(k)} with the required properties.
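
Numerically, this construction can be caricatured as follows (a cartoon of mine with hypothetical choices: take {k = {\bf Q}(x_1)(\alpha)} with {\alpha^2 = x_1 + 1}, and use a random complex number as a “generic” image of {x_1}, which is algebraically independent over {{\bf Q}} with probability one):

```python
import random, cmath

z1 = complex(random.random(), random.random())  # generic image of x1
phi_alpha = cmath.sqrt(z1 + 1)                  # a root of t^2 - (z1 + 1)
# Sanity check: the defining relation alpha^2 - (x1 + 1) = 0 is preserved.
print(abs(phi_alpha**2 - (z1 + 1)) < 1e-12)     # True
```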

Now we give the latter proof. Let {x_1,\ldots,x_m} be elements of {k} that generate that field over {{\bf Q}}, but which are not necessarily algebraically independent. Our task is then equivalent to that of finding complex numbers {z_1,\ldots,z_m} with the property that, for any polynomial {P(x_1,\ldots,x_m)} with rational coefficients, one has

\displaystyle  P(x_1,\ldots,x_m) = 0

if and only if

\displaystyle  P(z_1,\ldots,z_m) = 0.

Let {{\mathcal P}} be the collection of all polynomials {P} with rational coefficients with {P(x_1,\ldots,x_m)=0}, and {{\mathcal Q}} be the collection of all polynomials {P} with rational coefficients with {P(x_1,\ldots,x_m) \neq 0}. The set

\displaystyle  S := \{ (z_1,\ldots,z_m) \in {\bf C}^m: P(z_1,\ldots,z_m)=0 \hbox{ for all } P \in {\mathcal P} \}

is the intersection of countably many algebraic sets and is thus also an algebraic set (by the Hilbert basis theorem or the Noetherian property of algebraic sets). If the desired claim failed, then {S} could be covered by the algebraic sets {\{ (z_1,\ldots,z_m) \in {\bf C}^m: Q(z_1,\ldots,z_m) = 0 \}} for {Q \in {\mathcal Q}}. By decomposing into irreducible varieties and observing (e.g. from the Baire category theorem) that a variety of a given dimension over {{\bf C}} cannot be covered by countably many varieties of smaller dimension, we conclude that {S} must in fact be covered by a finite number of such sets, thus

\displaystyle  S \subset \bigcup_{i=1}^n \{ (z_1,\ldots,z_m) \in {\bf C}^m: Q_i(z_1,\ldots,z_m) = 0 \}

for some {Q_1,\ldots,Q_n \in {\mathcal Q}}. By the nullstellensatz, we thus have an identity of the form

\displaystyle  (Q_1 \ldots Q_n)^l = P_1 R_1 + \ldots + P_r R_r

for some natural numbers {l,r \geq 1}, polynomials {P_1,\ldots,P_r \in {\mathcal P}}, and polynomials {R_1,\ldots,R_r} with coefficients in {\overline{{\bf Q}}}. In particular, this identity also holds in the algebraic closure {\overline{k}} of {k}. Evaluating this identity at {(x_1,\ldots,x_m)} we see that the right-hand side is zero but the left-hand side is non-zero, a contradiction, and the claim follows. \Box

From Proposition 4 one can now deduce Theorem 3 by a routine ultraproduct argument (the same one used in these previous blog posts). Suppose for contradiction that Theorem 3 fails. Then there exist natural numbers {s,n \geq 1}, a sequence of finite fields {{\bf F}_i} of characteristic at least {i}, and subsets {A_i=\{a_{i,1},\ldots,a_{i,n}\}} of {{\bf F}_i} of cardinality {n} such that for each {i}, there does not exist a Freiman field isomorphism of order {s} from {A_i} to the complex numbers. Now we select a non-principal ultrafilter {\alpha \in \beta {\bf N} \backslash {\bf N}}, and construct the ultraproduct {{\bf F} := \prod_{i \rightarrow \alpha} {\bf F}_i} of the finite fields {{\bf F}_i}. This is again a field (and is a basic example of what is known as a pseudo-finite field); because the characteristic of {{\bf F}_i} goes to infinity as {i \rightarrow \infty}, it is easy to see (using Łoś’s theorem) that {{\bf F}} has characteristic zero and can thus be viewed as an extension of the rationals {{\bf Q}}.

Now let {a_j := \lim_{i \rightarrow \alpha} a_{i,j}} be the ultralimit of the {a_{i,j}}, so that {A := \{a_1,\ldots,a_n\}} is the ultraproduct of the {A_i}; then {A} is a subset of {{\bf F}} of cardinality {n}. In particular, if {k} is the field generated by {{\bf Q}} and {A}, then {k} is a finitely generated extension of the rationals and thus, by Proposition 4, there is an isomorphism {\phi: k \rightarrow \phi(k)} from {k} to a subfield {\phi(k)} of the complex numbers. In particular, {\phi(a_1),\ldots,\phi(a_n)} are complex numbers, and for any polynomial {P(x_1,\ldots,x_n)} with integer coefficients, one has

\displaystyle  P(a_1,\ldots,a_n) = 0

if and only if

\displaystyle  P(\phi(a_1),\ldots,\phi(a_n)) = 0.

By Łoś’s theorem, we then conclude that for all {i} sufficiently close to {\alpha}, and for all polynomials {P(x_1,\ldots,x_n)} of degree at most {s} whose integer coefficients have magnitudes summing to at most {s}, one has

\displaystyle  P(a_{i,1},\ldots,a_{i,n}) = 0

if and only if

\displaystyle  P(\phi(a_1),\ldots,\phi(a_n)) = 0.

But this gives a Freiman field isomorphism of order {s} between {A_i} and {\phi(A)}, contradicting the construction of {A_i}, and Theorem 3 follows.

Two weeks ago I was at Oberwolfach, for the Arbeitsgemeinschaft in Ergodic Theory and Combinatorial Number Theory that I was one of the organisers for. At this workshop, I learned the details of a very nice recent convergence result of Miguel Walsh (who, incidentally, is an informal grandstudent of mine, as his advisor, Roman Sasyk, was my informal student), which considerably strengthens and generalises a number of previous convergence results in ergodic theory (including one of my own), with a remarkably simple proof. Walsh’s argument is phrased in a finitary language (somewhat similar, in fact, to the approach used in my paper mentioned previously), and (among other things) relies on the concept of metastability of sequences, a variant of the notion of convergence which is useful in situations in which one does not expect a uniform convergence rate; see this previous blog post for some discussion of metastability. When interpreted in a finitary setting, this concept requires a fair amount of “epsilon management” to manipulate; also, Walsh’s argument uses some other epsilon-intensive finitary arguments, such as a decomposition lemma of Gowers based on the Hahn-Banach theorem. As such, I was tempted to try to rewrite Walsh’s argument in the language of nonstandard analysis to see the extent to which these sorts of issues could be managed. As it turns out, the argument gets cleaned up rather nicely, with the notion of metastability being replaced with the simpler notion of external Cauchy convergence (which we will define below the fold).

Let’s first state Walsh’s theorem. This theorem is a norm convergence theorem in ergodic theory, and can be viewed as a substantial generalisation of one of the most fundamental theorems of this type, namely the mean ergodic theorem:

Theorem 1 (Mean ergodic theorem) Let {(X,\mu,T)} be a measure-preserving system (a probability space {(X,\mu)} with an invertible measure-preserving transformation {T}). Then for any {f \in L^2(X,\mu)}, the averages {\frac{1}{N} \sum_{n=1}^N T^n f} converge in {L^2(X,\mu)} norm as {N \rightarrow \infty}, where {T^n f(x) := f(T^{-n} x)}.

In this post, all functions in {L^2(X,\mu)} and similar spaces will be taken to be real instead of complex-valued for simplicity, though the extension to the complex setting is routine.

Actually, we have a precise description of the limit of these averages, namely the orthogonal projection of {f} to the {T}-invariant factors. (See for instance my lecture notes on this theorem.) While this theorem ostensibly involves measure theory, it can be abstracted to the more general setting of unitary operators on a Hilbert space:

Theorem 2 (von Neumann mean ergodic theorem) Let {H} be a Hilbert space, and let {U: H \rightarrow H} be a unitary operator on {H}. Then for any {f \in H}, the averages {\frac{1}{N} \sum_{n=1}^N U^n f} converge strongly in {H} as {N \rightarrow \infty}.

Again, see my lecture notes (or just about any text in ergodic theory) for a proof.
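 
Although no code appears in the original post, the mean ergodic theorem is easy to witness numerically. The sketch below is my own illustration, with my own choice of system (the irrational circle rotation {Tx = x + \alpha \hbox{ mod } 1}); it shows the {L^2} distance between the ergodic averages and the limit {\int_0^1 f} (the projection onto the constants, as the rotation is ergodic) tending to zero:

```python
import numpy as np

alpha = np.sqrt(2) - 1                     # irrational rotation number
x = np.linspace(0.0, 1.0, 1000, endpoint=False)
f = lambda t: np.cos(2 * np.pi * t) + t    # has integral 1/2 over [0,1)

for N in [10, 100, 1000, 10000]:
    avg = np.zeros_like(x)
    for n in range(1, N + 1):
        avg += f((x - n * alpha) % 1.0)    # T^n f(x) = f(T^{-n} x)
    avg /= N
    print(N, np.sqrt(np.mean((avg - 0.5) ** 2)))  # L^2 distance to the limit
```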

Now we turn to Walsh’s theorem.

Theorem 3 (Walsh’s convergence theorem) Let {(X,\mu)} be a measure space with a measure-preserving action of a nilpotent group {G}. Let {g_1,\ldots,g_k: {\bf Z} \rightarrow G} be polynomial sequences in {G} (i.e. each {g_i} takes the form {g_i(n) = a_{i,1}^{p_{i,1}(n)} \ldots a_{i,j}^{p_{i,j}(n)}} for some {a_{i,1},\ldots,a_{i,j} \in G} and polynomials {p_{i,1},\ldots,p_{i,j}: {\bf Z} \rightarrow {\bf Z}}). Then for any {f_1,\ldots,f_k \in L^\infty(X,\mu)}, the averages {\frac{1}{N} \sum_{n=1}^N (g_1(n) f_1) \ldots (g_k(n) f_k)} converge in {L^2(X,\mu)} norm as {N \rightarrow \infty}, where {g(n) f(x) := f(g(n)^{-1} x)}.

It turns out that this theorem can also be abstracted to some extent, although due to the multiplication in the summand {(g_1(n) f_1) \ldots (g_k(n) f_k)}, one cannot work purely with Hilbert spaces as in the von Neumann mean ergodic theorem, but must also work with something like the Banach algebra {L^\infty(X,\mu)}. There are a number of ways to formulate this abstraction (which will be of some minor convenience to us, as it will allow us to reduce the need to invoke the nonstandard measure theory of Loeb, discussed for instance in this blog post); we will use the notion of a (real) commutative probability space {({\mathcal A},\tau)}, which for us will be a commutative unital algebra {{\mathcal A}} over the reals together with a linear functional {\tau: {\mathcal A} \rightarrow {\bf R}} which maps {1} to {1} and obeys the non-negativity axiom {\tau(f^2) \ge 0} for all {f}. The key example to keep in mind here is the algebra {{\mathcal A} = L^\infty(X,\mu)} of essentially bounded real-valued measurable functions with the supremum norm, and with the trace {\tau(f) := \int_X f\ d\mu}. We will also assume in our definition of commutative probability spaces that all elements {f} of {{\mathcal A}} are bounded in the sense that the spectral radius {\rho(f) := \lim_{k \rightarrow \infty} \tau(f^{2k})^{1/2k}} is finite. (In the concrete case of {L^\infty(X,\mu)}, the spectral radius is just the {L^\infty} norm.)

Given a commutative probability space, we can form an inner product {\langle, \rangle_{L^2(\tau)}} on it by the formula

\displaystyle  \langle f, g \rangle_{L^2(\tau)} := \tau(fg).

This is a positive semi-definite form, and gives a (possibly degenerate) inner product structure on {{\mathcal A}}. We could complete this structure into a Hilbert space {L^2(\tau)} (after quotienting out the elements of zero norm), but we will not do so here, instead just viewing {L^2(\tau)} as providing a semi-metric on {{\mathcal A}}. For future reference we record the inequalities

\displaystyle  \rho(fg) \leq \rho(f) \rho(g)

\displaystyle  \rho(f+g) \leq \rho(f) + \rho(g)

\displaystyle  \| fg\|_{L^2(\tau)} \leq \|f\|_{L^2(\tau)} \rho(g)

for any {f,g}, which we will use in the sequel without further comment; see e.g. these previous blog notes for proofs. (Actually, for the purposes of proving Theorem 3, one can specialise to the {L^\infty(X,\mu)} case (and ultraproducts thereof), in which case these inequalities are just the triangle and Hölder inequalities.)
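
As a concrete finite-dimensional toy model of these axioms (an assumption of this sketch, not of the post), one can take {{\mathcal A}} to be the real functions on {m} points with {\tau} the average value; then the spectral radius recovers the sup norm, and the displayed inequalities can be spot-checked:

```python
import numpy as np

m = 1000
tau = lambda f: f.mean()                          # the trace
inner = lambda f, g: tau(f * g)                   # <f, g>_{L^2(tau)}
l2norm = lambda f: np.sqrt(inner(f, f))
rho = lambda f, k=200: tau(f ** (2 * k)) ** (1.0 / (2 * k))  # spectral radius
# (a finite k slightly underestimates the true limit)

f = np.linspace(-1.0, 2.0, m)
g = np.sin(f)
print(l2norm(f), rho(f), np.abs(f).max())         # rho(f) ~ sup norm = 2
print(l2norm(f * g) <= l2norm(f) * rho(g))        # the Holder-type inequality
```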

The abstract version of Theorem 3 is then

Theorem 4 (Walsh’s theorem, abstract version) Let {({\mathcal A},\tau)} be a commutative probability space, and let {G} be a nilpotent group acting on {{\mathcal A}} by isomorphisms (preserving the algebra, conjugation, and trace structure, and thus also preserving the spectral radius and {L^2(\tau)} norm). Let {g_1,\ldots,g_k: {\bf Z} \rightarrow G} be polynomial sequences. Then for any {f_1,\ldots,f_k \in {\mathcal A}}, the averages {\frac{1}{N} \sum_{n=1}^N (g_1(n) f_1) \ldots (g_k(n) f_k)} form a Cauchy sequence in {L^2(\tau)} (semi-)norm as {N \rightarrow \infty}.

It is easy to see that this theorem generalises Theorem 3. Conversely, one can use the commutative Gelfand-Naimark theorem to deduce Theorem 4 from Theorem 3, although we will not need this implication. Note how we are abandoning all attempts to discern what the limit of the sequence actually is, instead contenting ourselves with demonstrating that it is merely a Cauchy sequence. With this phrasing, it is tempting to ask whether there is any analogue of Walsh’s theorem for noncommutative probability spaces, but unfortunately the answer to that question is negative for all but the simplest of averages, as was worked out in this paper of Austin, Eisner, and myself.

Our proof of Theorem 4 will proceed as follows. Firstly, in order to avoid the epsilon management alluded to earlier, we will take an ultraproduct to rephrase the theorem in the language of nonstandard analysis; for reasons that will be clearer later, we will also convert the convergence problem to a problem of obtaining metastability (external Cauchy convergence). Then, we observe that (the nonstandard counterpart of) the expression {\|\frac{1}{N} \sum_{n=1}^N (g_1(n) f_1) \ldots (g_k(n) f_k)\|_{L^2(\tau)}^2} can be viewed as the inner product of (say) {f_k} with a certain type of expression, which we call a dual function. By performing an orthogonal projection to the span of the dual functions, we can split {f_k} into the sum of an expression orthogonal to all dual functions (the “pseudorandom” component), and a function that can be well approximated by finite linear combinations of dual functions (the “structured” component). The contribution of the pseudorandom component is asymptotically negligible, so we can reduce to consideration of the structured component. But by a little bit of rearrangement, this can be viewed as an average of expressions similar to the initial average {\frac{1}{N} \sum_{n=1}^N (g_1(n) f_1) \ldots (g_k(n) f_k)}, except with the polynomials {g_1,\ldots,g_k} replaced by a “lower complexity” set of such polynomials, which can be greater in number, but which have slightly lower degrees in some sense. One can iterate this (using “PET induction”) until all the polynomials become trivial, at which point the claim follows.


Much as group theory is the study of groups, or graph theory is the study of graphs, model theory is the study of models (also known as structures) of some language {{\mathcal L}} (which, in this post, will always be a single-sorted, first-order language). A structure is a set {X}, equipped with one or more operations, constants, and relations. This is of course an extremely general type of mathematical object, but (quite remarkably) one can still say a substantial number of interesting things about very broad classes of structures.

We will observe the common abuse of notation of using the set {X} as a metonym for the entire structure, much as we usually refer to a group {(G,1,\cdot,()^{-1})} simply as {G}, a vector space {(V, 0, +, \cdot)} simply as {V}, and so forth. Following another common bending of the rules, we also allow some operations on structures (such as the multiplicative inverse operation on a group or field) to only be partially defined, and we allow use of the usual simplifying conventions for mathematical formulas (e.g. writing {a+b+c} instead of {(a+b)+c} or {a+(b+c)}, in cases where associativity is known). We will also deviate slightly from the usual practice in logic by emphasising individual structures, rather than the theory of general classes of structures; for instance, we will talk about the theory of a single field such as {{\bf R}} or {{\bf C}}, rather than the theory of all fields of a certain type (e.g. real closed fields or algebraically closed fields).

Once one has a structure {X}, one can introduce the notion of a definable subset of {X}, or more generally of a Cartesian power {X^n} of {X}, defined as a set {E \subset X^n} of the form

\displaystyle  E = \{ (x_1,\ldots,x_n): P(x_1,\ldots,x_n) \hbox{ true} \} \ \ \ \ \ (1)

for some formula {P} in the language {{\mathcal L}} with {n} free variables and any number of constants from {X} (that is, {P(x_1,\ldots,x_n)} is a well-formed formula built up from a finite number of constants {c_1,\ldots,c_m} in {X}, the relations and operations on {X}, logical connectives such as {\neg}, {\wedge}, {\implies}, and the quantifiers {\forall, \exists}). Thus, for instance, in the theory of the arithmetic of the natural numbers {{\bf N} = ({\bf N}, 0, 1, +, \times)}, the set of primes {{\mathcal P}} is a definable set, since we have

\displaystyle  {\mathcal P} = \{ x \in {\bf N}: (\exists y: x=y+2) \wedge \neg (\exists z,w: x = (z+2)(w+2)) \}.
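
One can evaluate this defining formula directly by brute force (a quick illustrative check of mine; the quantifiers over {y,z,w} may be bounded by {x}, since any witnesses are at most {x}):

```python
def P(x):
    exists_y = any(x == y + 2 for y in range(x + 1))             # x >= 2
    exists_zw = any(x == (z + 2) * (w + 2)
                    for z in range(x + 1) for w in range(x + 1))  # x composite
    return exists_y and not exists_zw

print([x for x in range(30) if P(x)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```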

In the theory of the field of reals {{\bf R} = ({\bf R}, 0, 1, +, -, \times, ()^{-1})}, the unit circle {S^1} is an example of a definable set,

\displaystyle  S^1 = \{ (x,y) \in {\bf R}^2: x^2+y^2 = 1 \},

but so is the complement of the circle,

\displaystyle  {\bf R}^2 \backslash S^1 = \{ (x,y) \in {\bf R}^2: \neg(x^2+y^2 = 1) \}

and the interval {[-1,1]}:

\displaystyle  [-1,1] = \{ x \in {\bf R}: \exists y: x^2+y^2 = 1\}.

Due to the unlimited use of constants, any finite subset of a power {X^n} of any structure {X} is, by our conventions, definable in that structure. (One can of course also consider definability without parameters (also known as {0}-definability), in which arbitrary constants are not permitted, but we will not do so here.)

We can isolate some special subclasses of definable sets:

  • An atomic definable set is a set of the form (1) in which {P()} is an atomic formula (i.e. it does not contain any logical connectives or quantifiers).
  • A quantifier-free definable set is a set of the form (1) in which {P()} is quantifier-free (i.e. it can contain logical connectives, but does not contain the quantifiers {\forall, \exists}).

Example 1 In the theory of a field such as {{\bf R}}, an atomic definable set is the same thing as an affine algebraic set (also known as an affine algebraic variety, with the understanding that varieties are not necessarily assumed to be irreducible), and a quantifier-free definable set is known as a constructible set; thus we see that algebraic geometry can be viewed in some sense as a special case of model theory. (Conversely, it can in fact be quite profitable to think of model theory as an abstraction of algebraic geometry; for instance, the concepts of Morley rank and Morley degree in model theory (discussed in this previous blog post) directly generalise the concepts of dimension and degree in algebraic geometry.) Over {{\bf R}}, the interval {[-1,1]} is a definable set, but not a quantifier-free definable set (and certainly not an atomic definable set); and similarly for the primes over {{\bf N}}.

A quantifier-free definable set in {X^n} is nothing more than a finite boolean combination of atomic definable sets; in other words, the class of quantifier-free definable sets over {X} is the smallest class that contains the atomic definable sets and is closed under boolean operations such as complementation and union (which generate all the other boolean operations). Similarly, the class of definable sets over {X} is the smallest class that contains the quantifier-free definable sets, and is also closed under the operation of projection {\pi_n: E \mapsto \pi_n(E)} from {X^{n+1}} to {X^n} for every natural number {n}, where {\pi_n: X^{n+1} \rightarrow X^n} is the map {\pi_n(x_1,\ldots,x_n,x_{n+1}) := (x_1,\ldots,x_n)}.

Some structures have the property of enjoying quantifier elimination, which means that every definable set is in fact a quantifier-free definable set, or equivalently that the projection of a quantifier-free definable set is again quantifier-free. For instance, an algebraically closed field {k} (with the field operations) has quantifier elimination (i.e. the projection of a constructible set is again constructible); this fact can be proven by the classical tool of resultants, and among other things can be used to give a proof of Hilbert’s nullstellensatz. (Note though that projection does not necessarily preserve the property of being atomic; for instance, the projection of the atomic set {\{ (x,y) \in k^2: xy=1 \}} is the non-atomic, but still quantifier-free definable, set {\{ x \in k: \neg (x=0) \}}.) In the converse direction, it is not difficult to use the nullstellensatz to deduce quantifier elimination. For the theory of the real field {{\bf R}}, which is not algebraically closed, one does not have quantifier elimination, as one can see from the example of the unit circle (which is a quantifier-free definable set) projecting down to the interval {[-1,1]} (which is definable, but not quantifier-free definable). However, if one adds the additional operation of order {<} to the reals, giving it the language of an ordered field rather than just a field, then quantifier elimination is recovered (the class of quantifier-free definable sets now enlarges to match the class of definable sets, which in this case is also the class of semi-algebraic sets); this is the famous Tarski-Seidenberg theorem.
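
Here is a quick numerical cartoon (of mine) of the Tarski-Seidenberg example just discussed, with the existential quantifier over {y} replaced by a fine grid search; the projected set agrees with the quantifier-free description {x^2 \leq 1}:

```python
# "exists y: x^2 + y^2 = 1" versus the quantifier-free condition x^2 <= 1;
# the grid search and the tolerance are, of course, numerical crutches.
ys = [i / 10000.0 for i in range(-10001, 10002)]
for x in [-1.5, -1.0, 0.0, 0.7, 1.2]:
    exists_y = any(abs(x * x + y * y - 1.0) < 1e-3 for y in ys)
    print(x, exists_y, x * x <= 1.0)  # the two truth values coincide
```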

On the other hand, many important structures do not have quantifier elimination; typically, the projection of a quantifier-free definable set is not, in general, quantifier-free definable. This failure of the projection property also shows up in many contexts outside of model theory; for instance, Lebesgue famously made the error of thinking that the projection of a Borel measurable set remained Borel measurable (it is merely an analytic set instead). Turing’s halting theorem can be viewed as an assertion that the projection of a decidable set (also known as a computable or recursive set) is not necessarily decidable (it is merely semi-decidable (or recursively enumerable) instead). The notorious P=NP problem can also be essentially viewed in this spirit; roughly speaking (and glossing over the placement of some quantifiers), it asks whether the projection of a polynomial-time decidable set is again polynomial-time decidable. And so forth. (See this blog post of Dick Lipton for further discussion of the subtleties of projections.)

Now we consider the status of quantifier elimination for the theory of a finite field {F}. If interpreted naively, quantifier elimination is trivial for a finite field {F}, since every subset of {F^n} is finite and thus quantifier-free definable. However, we can recover an interesting question in one of two (essentially equivalent) ways. One is to work in the asymptotic regime in which the field {F} is large, but the length of the formulae used to construct one’s definable sets stays bounded uniformly in the size of {F} (where we view any constant in {F} as contributing a unit amount to the length of a formula, no matter how large {F} is). A simple counting argument then shows that only a small number of subsets of {F^n} become definable in the asymptotic limit {|F| \rightarrow \infty}, since the number of definable sets clearly grows at most polynomially in {|F|} for any fixed bound on the formula length, while the number of all subsets of {F^n} grows exponentially in {|F|^n}.

Another way to proceed is to work not with a single finite field {F}, or even with a sequence {F_m} of finite fields, but with the ultraproduct {F = \prod_{m \rightarrow p} F_m} of a sequence of finite fields, and to study the properties of definable sets over this ultraproduct. (We will be using the notation of ultraproducts and nonstandard analysis from this previous blog post.) This approach is equivalent to the more finitary approach mentioned in the previous paragraph, at least if one does not care to keep track of the exact bounds on the length of the formulae involved. Indeed, thanks to Łoś’s theorem, a definable subset {E} of {F^n} is nothing more than the ultraproduct {E = \prod_{m \rightarrow p} E_m} of definable subsets {E_m} of {F_m^n} for all {m} sufficiently close to {p}, with the length of the formulae used to define {E_m} uniformly bounded in {m}. In the language of nonstandard analysis, one can view {F} as a nonstandard finite field.

The ultraproduct {F} of finite fields is an important example of a pseudo-finite field – a field that obeys all the sentences in the languages of fields that finite fields do, but is not necessarily itself a finite field. The model theory of pseudo-finite fields was first studied systematically by Ax (in the same paper where the Ax-Grothendieck theorem, discussed previously on this blog, was established), with important further contributions by Kiefe, by Fried-Sacerdote, by two papers of Chatzidakis-van den Dries-Macintyre, and many other authors.

As mentioned before, quantifier elimination trivially holds for finite fields. But for infinite pseudo-finite fields, such as the ultraproduct {F = \prod_{m \rightarrow p} F_m} of finite fields with {|F_m|} going to infinity, quantifier elimination fails. For instance, in a finite field {F_m}, the set {E_m := \{ x \in F_m: \exists y \in F_m: x=y^2 \}} of quadratic residues is a definable set, with a bounded formula length, and so in the ultraproduct {F =\prod_{m \rightarrow p} F_m}, the set {E := \prod_{m\rightarrow p} E_m} of nonstandard quadratic residues is also a definable set. However, in one dimension, we see from the factor theorem that the only atomic definable sets are either finite or the whole field {F}, and so the only constructible sets (i.e. the only quantifier-free definable sets) are either finite or cofinite in {F}. Since the quadratic residues have asymptotic density {1/2} in a large finite field, they cannot form a quantifier-free definable set, despite being definable.
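
The density-{1/2} claim is easy to verify numerically (an illustration of mine; in fact the count of squares in {{\bf F}_p}, including {0}, is exactly {(p+1)/2}):

```python
# The set {x : exists y, x = y^2} in F_p has about half of all elements,
# so asymptotically it is neither finite nor cofinite.
for p in [101, 1009, 10007]:          # small primes
    residues = {(y * y) % p for y in range(p)}
    print(p, len(residues), len(residues) / p)   # density ~ 1/2
```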

Nevertheless, there is a very nice almost quantifier elimination result for these fields, in characteristic zero at least, which we phrase here as follows:

Theorem 1 (Almost quantifier elimination) Let {F} be a nonstandard finite field of characteristic zero, and let {E \subset F^n} be a definable set over {F}. Then {E} is the union of finitely many sets of the form

\displaystyle  \{ x \in V(F): \exists t \in F: P(x,t) = 0 \} \ \ \ \ \ (2)

where {V(F)} is an atomic definable subset of {F^n} (i.e. the {F}-points of an algebraic variety {V} defined over {F} in {F^n}) and {P: F^{n+1} \rightarrow F} is a polynomial.

Results of this type are essentially due to Catarina Kiefe, although the formulation here is closer to that of Chatzidakis-van den Dries-Macintyre.

Informally, this theorem says that while we cannot quite eliminate all quantifiers from a definable set over a nonstandard finite field, we can eliminate all but one existential quantifier. Note that negation has also been eliminated in this theorem; for instance, the definable set {F \backslash \{0\} = \{ x \in F: \neg(x=0) \}} uses a negation, but can also be described using a single existential quantifier as {\{ x \in F: \exists t: xt = 1 \}}. I believe that there are more complicated analogues of this result in positive characteristic, but I have not studied this case in detail (Kiefe’s result does not assume characteristic zero, but her conclusion is slightly different from the one given here). In the one-dimensional case {n=1}, the only varieties {V} are the affine line and finite sets, and we can simplify the above statement: any definable subset of {F} takes the form {\{ x\in F: \exists t \in F: P(x,t) = 0 \}} for some polynomial {P} (i.e. definable sets in {F} are nothing more than the projections of the {F}-points of a plane curve).

There is an equivalent formulation of this theorem for standard finite fields, namely that if {F} is a finite field and {E \subset F^n} is definable using a formula of length at most {M}, then {E} can be expressed in the form (2) with the degree of {P} bounded by some quantity {C_{M,n}} depending on {M} and {n}, assuming that the characteristic of {F} is sufficiently large depending on {M, n}.

The theorem gives quite a satisfactory description of definable sets in either standard or nonstandard finite fields (at least if one does not care about effective bounds in some of the constants, and if one is willing to exclude the small characteristic case); for instance, in conjunction with the Lang-Weil bound discussed in this recent blog post, it shows that any non-empty definable subset of a nonstandard finite field has a nonstandard cardinality of {(\alpha + O(|F|^{-1/2})) |F|^d} for some positive standard rational {\alpha} and integer {d}. Equivalently, any non-empty definable subset of {F^n} for some standard finite field {F} using a formula of length at most {M} has a standard cardinality of {(\alpha + O_{M,n}(|F|^{-1/2})) |F|^d} for some positive rational {\alpha} of height {O_{M,n}(1)} and some natural number {d} between {0} and {n}. (For instance, in the example of the quadratic residues given above, {d} is equal to {1} and {\alpha} equal to {1/2}.) There is a more precise statement to this effect, namely that the Poincaré series of a definable set is rational; see Kiefe’s paper for details.

Below the fold I give a proof of Theorem 1, which relies primarily on the Lang-Weil bound mentioned above.


Nonstandard analysis is a mathematical framework in which one extends the standard mathematical universe {{\mathfrak U}} of standard numbers, standard sets, standard functions, etc. into a larger nonstandard universe {{}^* {\mathfrak U}} of nonstandard numbers, nonstandard sets, nonstandard functions, etc., somewhat analogously to how one places the real numbers inside the complex numbers, or the rationals inside the reals. This nonstandard universe enjoys many of the same properties as the standard one; in particular, we have the transfer principle that asserts that any statement in the language of first order logic is true in the standard universe if and only if it is true in the nonstandard one. (For instance, because Fermat’s last theorem is known to be true for standard natural numbers, it is automatically true for nonstandard natural numbers as well.) However, the nonstandard universe also enjoys some additional useful properties that the standard one does not, most notably the countable saturation property, which is a property somewhat analogous to the completeness property of a metric space; much as metric completeness allows one to assert that the intersection of a countable family of nested closed balls is non-empty, countable saturation allows one to assert that the intersection of a countable family of nested satisfiable formulae is simultaneously satisfiable. (See this previous blog post for more on the analogy between the use of nonstandard analysis and the use of metric completions.) Furthermore, by viewing both the standard and nonstandard universes externally (placing them both inside a larger metatheory, such as a model of Zermelo-Fraenkel-Choice (ZFC) set theory; in some more advanced set-theoretic applications one may also wish to add some large cardinal axioms), one can place some useful additional definitions and constructions on these universes, such as defining the concept of an infinitesimal nonstandard number (a number which is smaller in magnitude than any positive standard number). The ability to rigorously manipulate infinitesimals is of course one of the most well-known advantages of working with nonstandard analysis.

To build a nonstandard universe {{}^* {\mathfrak U}} from a standard one {{\mathfrak U}}, the most common approach is to take an ultrapower of {{\mathfrak U}} with respect to some non-principal ultrafilter over the natural numbers; see e.g. this blog post for details. Once one is comfortable with ultrafilters and ultrapowers, this becomes quite a simple and elegant construction, and greatly demystifies the nature of nonstandard analysis.

On the other hand, nonprincipal ultrafilters do have some unappealing features. The most notable one is that their very existence requires the axiom of choice (or more precisely, a weaker form of this axiom known as the boolean prime ideal theorem). Closely related to this is the fact that one cannot actually write down any explicit example of a nonprincipal ultrafilter, but must instead rely on nonconstructive tools such as Zorn’s lemma, the Hahn-Banach theorem, Tychonoff’s theorem, the Stone-Cech compactification, or the boolean prime ideal theorem to locate one. As such, ultrafilters definitely belong to the “infinitary” side of mathematics, and one may feel that it is inappropriate to use such tools for “finitary” mathematical applications, such as those which arise in hard analysis. From a more practical viewpoint, because of the presence of the infinitary ultrafilter, it can be quite difficult (though usually not impossible, with sufficient patience and effort) to take a finitary result proven via nonstandard analysis and coax an effective quantitative bound from it.

There is however a “cheap” version of nonstandard analysis which is less powerful than the full version, but is not as infinitary in that it is constructive (in the sense of not requiring any sort of choice-type axiom), and which can be translated into standard analysis somewhat more easily than a fully nonstandard argument; indeed, a cheap nonstandard argument can often be presented (by judicious use of asymptotic notation) in a way which is nearly indistinguishable from a standard one. It is obtained by replacing the nonprincipal ultrafilter in fully nonstandard analysis with the more classical Fréchet filter of cofinite subsets of the natural numbers, which is the filter that implicitly underlies the concept of the classical limit {\lim_{{\bf n} \rightarrow \infty} a_{\bf n}} of a sequence when the underlying asymptotic parameter {{\bf n}} goes off to infinity. As such, “cheap nonstandard analysis” aligns very well with traditional mathematics, in which one often allows one’s objects to be parameterised by some external parameter such as {{\bf n}}, which is then allowed to approach some limit such as {\infty}. The catch is that the Fréchet filter is merely a filter and not an ultrafilter, and as such some of the key features of fully nonstandard analysis are lost. Most notably, the law of the excluded middle does not transfer over perfectly from standard analysis to cheap nonstandard analysis; much as there exist bounded sequences of real numbers (such as {0,1,0,1,\ldots}) which do not converge to a (classical) limit, there exist statements in cheap nonstandard analysis which are neither true nor false (at least without passing to a subsequence, see below). The loss of such a fundamental law of mathematical reasoning may seem like a major disadvantage for cheap nonstandard analysis, and it does indeed make cheap nonstandard analysis somewhat weaker than fully nonstandard analysis. But in some situations (particularly when one is reasoning in a “constructivist” or “intuitionistic” fashion, and in particular if one is avoiding too much reliance on set theory) it turns out that one can survive the loss of this law; and furthermore, the law of the excluded middle is still available for standard analysis, and so one can often proceed by working from time to time in the standard universe to temporarily take advantage of this law, and then transferring the results obtained there back to the cheap nonstandard universe once one no longer needs to invoke the law of the excluded middle. Furthermore, the law of the excluded middle can be recovered by adopting the freedom to pass to subsequences with regards to the asymptotic parameter {{\bf n}}; this technique is already in widespread use in the analysis of partial differential equations, although it is generally referred to by names such as “the compactness method” rather than as “cheap nonstandard analysis”.
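
The Fréchet-filter viewpoint is simple enough to caricature in a few lines of code (everything here is my own illustrative design; in particular, one can only test “for all sufficiently large {{\bf n}}” up to a finite horizon, which is precisely the heuristic nature of this sketch):

```python
HORIZON = 10**5   # a finite stand-in for "for all sufficiently large n"

def eventually(pred):
    """Heuristically test that pred(n) holds for all large n up to HORIZON."""
    return all(pred(n) for n in range(HORIZON // 2, HORIZON))

# An infinitesimal: the sequence 1/n eventually drops below any fixed
# standard c > 0 (we can of course only test finitely many values of c).
for c in [0.1, 0.001, 1e-4]:
    print(c, eventually(lambda n: 1.0 / n < c))        # True each time

# Failure of the excluded middle: 0,1,0,1,... is neither eventually 0 nor
# eventually 1; one must pass to a subsequence (even/odd n) to decide.
print(eventually(lambda n: n % 2 == 0), eventually(lambda n: n % 2 == 1))
```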

Below the fold, I would like to describe this cheap version of nonstandard analysis, which I think can serve as a pedagogical stepping stone towards fully nonstandard analysis, as it is formally similar to (though weaker than) fully nonstandard analysis, but on the other hand is closer in practice to standard analysis. As we shall see below, the relation between cheap nonstandard analysis and standard analysis is analogous in many ways to the relation between probabilistic reasoning and deterministic reasoning; it also resembles somewhat the preference in much of modern mathematics for viewing mathematical objects as belonging to families (or to categories) to be manipulated en masse, rather than treating each object individually. (For instance, nonstandard analysis can be used as a partial substitute for scheme theory in order to obtain uniformly quantitative results in algebraic geometry, as discussed for instance in this previous blog post.)


Emmanuel Breuillard, Ben Green, and I have just uploaded to the arXiv our paper “The structure of approximate groups“, submitted to Pub. IHES. We had announced the main results of this paper in various forums (including this blog) for a few months now, but it had taken some time to fully write up the paper and put in various refinements and applications.

As announced previously, the main result of this paper is a (virtually, qualitatively) complete description of finite approximate groups in an arbitrary (local or global) group {G}. For simplicity let us work in the much more familiar setting of global groups, although our results also apply (but are a bit more technical to state) in the local group setting.

Recall that in a global group {G = (G,\cdot)}, a {K}-approximate group is a symmetric subset {A} of {G} containing the origin, with the property that the product set {A \cdot A} is covered by {K} left-translates of {A}. Examples of {O(1)}-approximate groups include genuine groups, convex bodies in a bounded dimensional vector space, small balls in a bounded dimensional Lie group, large balls in a discrete nilpotent group of bounded rank or step, or generalised arithmetic progressions (or more generally, coset progressions) of bounded rank in an abelian group. Specialising now to finite approximate groups, a key example of such a group is what we call a coset nilprogression: a set of the form {\pi^{-1}(P)}, where {\pi: G' \rightarrow N} is a homomorphism with finite kernel from a subgroup {G'} of {G} to a nilpotent group {N} of bounded step, and {P = P(u_1,\ldots,u_r;N_1,\ldots,N_r)} is a nilprogression with a bounded number of generators {u_1,\ldots,u_r} in {N} and some lengths {N_1,\ldots,N_r \gg 1}, where {P(u_1,\ldots,u_r;N_1,\ldots,N_r)} consists of all the words involving at most {N_1} copies of {u_1^{\pm 1}}, {N_2} copies of {u_2^{\pm 1}}, and so forth up to {N_r} copies of {u_r^{\pm 1}}. One can show (by some nilpotent algebra) that all such coset nilprogressions are {O(1)}-approximate groups so long as the step and the rank {r} are bounded (and if {N_1,\ldots,N_r} are sufficiently large).
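
As a sanity check on these definitions (an illustration of mine, not from the paper), one can verify by brute force that a generalised arithmetic progression of rank {2} in {{\bf Z}} is an {O(1)}-approximate group; the greedy cover below only bounds the number of translates from above:

```python
u, N = (1, 35), (10, 10)     # generators and lengths of a rank-2 progression
P = {n1 * u[0] + n2 * u[1]
     for n1 in range(-N[0], N[0] + 1) for n2 in range(-N[1], N[1] + 1)}
assert 0 in P and P == {-x for x in P}   # symmetric, contains the identity
PP = {a + b for a in P for b in P}       # the product (here: sum) set

uncovered, translates = set(PP), []
while uncovered:
    x = uncovered.pop()                  # greedy: cover x with x + P
    translates.append(x)
    uncovered -= {x + p for p in P}
print(len(P), len(PP), len(translates))  # few translates, uniformly in N
```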

Our main theorem (which was essentially conjectured independently by Helfgott and by Lindenstrauss) asserts, roughly speaking, that coset nilprogressions are essentially the only examples of approximate groups.

Theorem 1 Let {A} be a {K}-approximate group. Then {A^4} contains a coset nilprogression {P} of rank and step {O_K(1)}, such that {A} can be covered by {O_K(1)} left-translates of {P}.

In the torsion-free abelian case, this result is essentially Freiman’s theorem (with an alternate proof by Ruzsa); for the general abelian case, it is due to Green and Ruzsa. Various partial results in this direction for some other groups (e.g. free groups, nilpotent groups, solvable groups, or simple groups of Lie type) are also known; see these previous blog posts for a summary of several of these results.

This result has a number of applications to geometric growth theory, and in particular to variants of Gromov’s theorem on groups of polynomial growth, which asserts that a finitely generated group is of polynomial growth if and only if it is virtually nilpotent. The connection lies in the fact that if the balls {B_S(R)} associated to a finite set of generators {S} have polynomial growth, then some simple volume-packing arguments combined with the pigeonhole principle will show that {B_S(R)} will end up being an {O(1)}-approximate group for many radii {R}. In fact, since our theorem only needs a single approximate group to obtain virtually nilpotent structure, we are able to obtain some new strengthenings of Gromov’s theorem. For instance, if {A} is any {K}-approximate group in a finitely generated group {G} that contains {B_S(R_0)} for some set of generators {S} and some {R_0} that is sufficiently large depending on {K}, our theorem implies that {G} is virtually nilpotent, answering a question of Petrunin. Among other things, this gives an alternate proof of a recent result of Kapovitch and Wilking (see also this previous paper of Cheeger and Colding) that a compact manifold of bounded diameter and Ricci curvature at least {-\epsilon} necessarily has a virtually nilpotent fundamental group if {\epsilon} is sufficiently small (depending only on dimension). The main point here is that no lower bound on the injectivity radius is required. Another application is a “Margulis-type lemma”, which asserts that if a metric space {X} has “bounded packing” (in the sense that any ball of radius (say) {4} is covered by a bounded number of balls of radius {1}), and {\Gamma} is a group of isometries on {X} that acts discretely (i.e. every orbit has only finitely many elements (counting multiplicity) in each bounded set), then the near-stabiliser {\{ \gamma \in \Gamma: d(\gamma x, x) \leq \epsilon \}} of a point {x} is virtually nilpotent if {\epsilon} is small enough depending on the packing constant.

There are also some variants and refinements of the main theorem proved in the paper, such as an extension to local groups, and also an improvement of the bound on the rank and step from {O_K(1)} to {O(\log K)} (but at the cost of replacing {A^4} in the theorem with {A^{O(1)}}).

I’ll be discussing the proof of the main theorem in detail in the next few lecture notes of my current graduate course. The full proof is somewhat lengthy (occupying about 50 pages of the 90-page paper), but can be summarised in the following steps:

  1. (Hrushovski) Take an arbitrary sequence {A_n} of finite {K}-approximate groups, and show that an appropriate limit {A} of such groups can be “modeled” in some sense by an open bounded subset of a locally compact group. (The precise definition of “model” is technical, but “macroscopically faithful representation” is a good first approximation.) As discussed in the previous lecture notes, we use an ultralimit for this purpose; the paper of Hrushovski where this strategy was first employed also considered more sophisticated model-theoretic limits. To build a locally compact topology, Hrushovski used some tools from definability theory; in our paper, we instead use a combinatorial lemma of Sanders (closely related to a similar result of Croot and Sisask.)
  2. (Gleason-Yamabe) The locally compact group can in turn be “modeled” by a Lie group (possibly after shrinking the group, and thus the ultralimit {A}, slightly). (This result arose from the solution to Hilbert’s fifth problem, as discussed here. For our extension to local groups, we use a recent local version of the Gleason-Yamabe theorem, due to Goldbring.)
  3. (Gleason) Using the escape properties of the Lie model, construct a norm {\| \|} (and thus a left-invariant metric {d}) on the ultralimit approximate group {A} (and also on the finitary groups {A_n}) that obeys a number of good properties, such as a commutator estimate {\| [g,h]\| \ll \|g\| \|h\|}. (This is modeled on an analogous construction used in the theory of Hilbert’s fifth problem, as discussed in this previous set of lecture notes.) This norm is essentially an escape norm associated to a slight modification of {A} or {A_n}; see the display after this list.
  4. (Jordan-Bieberbach-Frobenius) We now take advantage of the finite nature of the {A_n} by locating the non-trivial element {e} of {A_n} with minimal escape norm (but one first has to quotient out the elements of zero escape norm). The commutator estimate mentioned previously ensures that this element is essentially “central” in {A_n}. One can then quotient out a progression {P(e;N)} generated by this central element (reducing the dimension of the Lie model by one in the process) and iterate the process until the dimension of the model drops to zero. Reversing the process, this constructs a coset nilprogression inside {A_n^4}. This argument is based on the classic proof of Jordan’s theorem due to Bieberbach and Frobenius, as discussed in this blog post.
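For concreteness, the escape norm alluded to in Step 3 is (modulo the slight modifications mentioned there) given by

\displaystyle  \|g\|_A := \inf \{ \frac{1}{n+1}: g, g^2, \ldots, g^n \in A \},

so that elements which can be raised to large powers without escaping {A} have small norm; the escape properties of the Lie model are what make this quantity behave like a norm (and in particular yield the commutator estimate mentioned above).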

One quirk of the argument is that it requires one to work in the category of local groups rather than global groups. (This is somewhat analogous to how, in the standard proofs of Freiman’s theorem, one needs to work with the category of Freiman homomorphisms, rather than group homomorphisms.) The reason for this arises when performing the quotienting step in the Jordan-Bieberbach-Frobenius leg of the argument. The obvious way to perform this step (and the thing that we tried first) would be to quotient out by the entire cyclic group {\langle e \rangle} generated by the element {e} of minimal escape norm. However, it turns out that this doesn’t work too well, because the group quotiented out is so “large” that it can create a lot of torsion in the quotient. In particular, elements which used to have positive escape norm can now become trapped in the quotient of {A_n}, thus sending their escape norm to zero. This leads to an inferior conclusion (in which a coset nilprogression is replaced by a more complicated tower of alternating extensions between central progressions and finite groups, similar to the towers encountered in my previous paper on this topic). To prevent this unwanted creation of torsion, one has to truncate the cyclic group {\langle e \rangle} before it escapes {A_n}, so that one quotients out by a geometric progression {P(e;N)} rather than the cyclic group. But the operation of quotienting out by a {P(e;N)}, which is a local group rather than a global one, cannot be formalised in the category of global groups, but only in the category of local groups. Because of this, we were forced to carry out the entire argument using the language of local groups. As it turns out, the arguments are ultimately more natural in this setting, although there is an initial investment of notation required, given that global group theory is much more familiar and well-developed than local group theory.

One interesting feature of the argument is that it does not use much of the existing theory of Freiman-type theorems, instead building the coset nilprogression directly from the geometric properties of the approximate group. In particular, our argument gives a new proof of Freiman’s theorem in the abelian case, which largely avoids Fourier analysis (except through the use of the theory of Hilbert’s fifth problem, which uses the Peter-Weyl theorem (or, in the abelian case, Pontryagin duality), which is basically a version of Fourier analysis).

I have blogged several times in the past about nonstandard analysis, which among other things is useful in allowing one to import tools from infinitary (or qualitative) mathematics in order to establish results in finitary (or quantitative) mathematics. One drawback, though, to using nonstandard analysis methods is that the bounds one obtains by such methods are usually ineffective: in particular, the conclusions of a nonstandard analysis argument may involve an unspecified constant {C} that is known to be finite but for which no explicit bound is obviously available. (In many cases, a bound can eventually be worked out by performing proof mining on the argument, and in particular by carefully unpacking the proofs of all the various results from infinitary mathematics that were used in the argument, as opposed to simply using them as “black boxes”, but this is a time-consuming task and the bounds that one eventually obtains tend to be quite poor (e.g. tower exponential or Ackermann type bounds are not uncommon).)

Because of this fact, it would seem that quantitative bounds, such as polynomial type bounds {X \leq C Y^C} that show that one quantity {X} is controlled in a polynomial fashion by another quantity {Y}, are not easily obtainable through the ineffective methods of nonstandard analysis. Actually, this is not the case; as I will demonstrate by an example below, nonstandard analysis can certainly yield polynomial type bounds. The catch is that the exponent {C} in such bounds will be ineffective; but nevertheless such bounds are still good enough for many applications.

Let us now illustrate this by reproving a lemma from this paper of Mei-Chu Chang (Lemma 2.14, to be precise), which was recently pointed out to me by Van Vu. Chang’s paper is focused primarily on the sum-product problem, but she uses a quantitative lemma from algebraic geometry which is of independent interest. To motivate the lemma, let us first establish a qualitative version:

Lemma 1 (Qualitative solvability) Let {P_1,\ldots,P_r: {\bf C}^d \rightarrow {\bf C}} be a finite number of polynomials in several variables with rational coefficients. If there is a complex solution {z = (z_1,\ldots,z_d) \in {\bf C}^d} to the simultaneous system of equations

\displaystyle  P_1(z) = \ldots = P_r(z) = 0,

then there also exists a solution {z \in \overline{{\bf Q}}^d} whose coefficients are algebraic numbers (i.e. they lie in the algebraic closure {\overline{{\bf Q}}} of the rationals).

Proof: Suppose there were no solution to {P_1(z)=\ldots=P_r(z)=0} over {\overline{{\bf Q}}}. Applying Hilbert’s nullstellensatz (which is available as {\overline{{\bf Q}}} is algebraically closed), we conclude the existence of some polynomials {Q_1,\ldots,Q_r} (with coefficients in {\overline{{\bf Q}}}) such that

\displaystyle  P_1 Q_1 + \ldots + P_r Q_r = 1

as polynomials. In particular, we have

\displaystyle  P_1(z) Q_1(z) + \ldots + P_r(z) Q_r(z) = 1

for all {z \in {\bf C}^d}. This shows that there is no solution to {P_1(z)=\ldots=P_r(z)=0} over {{\bf C}}, as required. \Box

Remark 1 Observe that in the above argument, one could replace {{\bf Q}} and {{\bf C}} by any other pair of fields, with the latter containing the algebraic closure of the former, and still obtain the same result.

The above lemma asserts that if a system of rational equations is solvable at all, then it is solvable with some algebraic solution. But it gives no bound on the complexity of that solution in terms of the complexity of the original equation. Chang’s lemma provides such a bound. If {H \geq 1} is an integer, let us say that an algebraic number has height at most {H} if its minimal polynomial (after clearing denominators) has integer coefficients of magnitude at most {H}.
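For instance (a quick sanity check of the height convention using a computer algebra system; this example is mine rather than Chang’s):

```python
# The algebraic number sqrt(2) + sqrt(3) has minimal polynomial
# t^4 - 10 t^2 + 1 over the integers, so it is algebraic of degree 4
# and has height 10 in the sense defined above.
from sympy import sqrt, minimal_polynomial, Symbol

t = Symbol('t')
print(minimal_polynomial(sqrt(2) + sqrt(3), t))  # t**4 - 10*t**2 + 1
```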

Lemma 2 (Quantitative solvability) Let {P_1,\ldots,P_r: {\bf C}^d \rightarrow {\bf C}} be a finite number of polynomials of degree at most {D} with rational coefficients, each of height at most {H}. If there is a complex solution {z = (z_1,\ldots,z_d) \in {\bf C}^d} to the simultaneous system of equations

\displaystyle  P_1(z) = \ldots = P_r(z) = 0,

then there also exists a solution {z \in \overline{{\bf Q}}^d} whose coefficients are algebraic numbers of degree at most {C} and height at most {CH^C}, where {C = C_{D,d,r}} depends only on {D}, {d} and {r}.

Chang proves this lemma by essentially establishing a quantitative version of the nullstellensatz, via elementary elimination theory (somewhat similar, actually, to the approach I took to the nullstellensatz in my own blog post). She also notes that one could also establish the result through the machinery of Gröbner bases. In each of these arguments, it was not possible to use Lemma 1 (or the closely related nullstellensatz) as a black box; one actually had to unpack one of the proofs of that lemma or nullstellensatz to get the polynomial bound. However, using nonstandard analysis, it is possible to get such polynomial bounds (albeit with an ineffective value of the constant {C}) directly from Lemma 1 (or more precisely, the generalisation in Remark 1) without having to inspect the proof, and instead simply using it as a black box, thus providing a “soft” proof of Lemma 2 that is an alternative to the “hard” proofs mentioned above.
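As a quick illustration of the Gröbner basis approach to detecting (un)solvability (a sketch of my own, not taken from Chang’s paper): over an algebraically closed field, the weak nullstellensatz says that a system has no solution precisely when {1} lies in the ideal generated by the {P_i}, which happens precisely when the reduced Gröbner basis of that ideal is {\{1\}}:

```python
# Testing solvability over C via reduced Groebner bases (sympy).
from sympy import symbols, groebner

x, y = symbols('x y')

# Solvable over C (e.g. x = y = 1/sqrt(2)): the basis is not {1}.
print(groebner([x**2 + y**2 - 1, x - y], x, y, order='lex').exprs)

# Unsolvable over C (the two equations force 1 = 0): the basis is {1},
# certifying that 1 lies in the ideal generated by the system.
print(groebner([x**2 + y**2, x**2 + y**2 - 1], x, y, order='lex').exprs)
```

Such a computation certifies solvability for one specific system; the content of Lemma 2 is a bound that is uniform over all systems of bounded degree and height.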

Here’s how the proof works. Informally, the idea is that Lemma 2 should follow from Lemma 1 after replacing the field of rationals {{\bf Q}} with “the field of rationals of polynomially bounded height”. Unfortunately, the latter object does not really make sense as a field in standard analysis; nevertheless, it is a perfectly sensible object in nonstandard analysis, and this allows the above informal argument to be made rigorous.

We turn to the details. As is common whenever one uses nonstandard analysis to prove finitary results, we use a “compactness and contradiction” argument (or more precisely, an “ultralimit and contradiction” argument). Suppose for contradiction that Lemma 2 failed. Carefully negating the quantifiers (and using the axiom of choice), we conclude that there exist {D, d, r} such that for each natural number {n}, there is a positive integer {H^{(n)}} and a family {P_1^{(n)}, \ldots, P_r^{(n)}: {\bf C}^d \rightarrow {\bf C}} of polynomials of degree at most {D} and rational coefficients of height at most {H^{(n)}}, such that there exists at least one complex solution {z^{(n)} \in {\bf C}^d} to

\displaystyle  P_1^{(n)}(z^{(n)}) = \ldots = P_r^{(n)}(z^{(n)}) = 0, \ \ \ \ \ (1)

but such that there does not exist any such solution whose coefficients are algebraic numbers of degree at most {n} and height at most {n (H^{(n)})^n}.

Now we take ultralimits (see e.g. this previous blog post for a quick review of ultralimit analysis, which we will assume knowledge of in the argument that follows). Let {p \in \beta {\bf N} \backslash {\bf N}} be a non-principal ultrafilter. For each {i=1,\ldots,r}, the ultralimit

\displaystyle  P_i := \lim_{n \rightarrow p} P_i^{(n)}

of the (standard) polynomials {P_i^{(n)}} is a nonstandard polynomial {P_i: {}^* {\bf C}^d \rightarrow {}^* {\bf C}} of degree at most {D}, whose coefficients now lie in the nonstandard rationals {{}^* {\bf Q}}. Actually, due to the height restriction, we can say more. Let {H := \lim_{n \rightarrow p} H^{(n)} \in {}^* {\bf N}} be the ultralimit of the {H^{(n)}}; this is a nonstandard natural number (which will almost certainly be unbounded, but we will not need to use this). Let us say that a nonstandard integer {a} is of polynomial size if we have {|a| \leq C H^C} for some standard natural number {C}, and say that a nonstandard rational number {a/b} is of polynomial height if {a}, {b} are of polynomial size. Let {{\bf Q}_{poly(H)}} be the collection of all nonstandard rationals of polynomial height. (In the language of nonstandard analysis, {{\bf Q}_{poly(H)}} is an external set rather than an internal one, because it is not itself an ultraproduct of standard sets; but this will not be relevant for the argument that follows.) It is easy to see that {{\bf Q}_{poly(H)}} is a field, basically because the sum or product of two integers of polynomial size remains of polynomial size. By construction, it is clear that the coefficients of {P_i} are nonstandard rationals of polynomial height, and thus {P_1,\ldots,P_r} are defined over {{\bf Q}_{poly(H)}}.
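For instance, closure of {{\bf Q}_{poly(H)}} under addition can be verified directly: if {|a_i|, |b_i| \leq C_i H^{C_i}} for some standard {C_1, C_2}, then

\displaystyle  \frac{a_1}{b_1} + \frac{a_2}{b_2} = \frac{a_1 b_2 + a_2 b_1}{b_1 b_2}, \qquad |a_1 b_2 + a_2 b_1|, |b_1 b_2| \leq 2 C_1 C_2 H^{C_1 + C_2},

and the right-hand bound is again of the form {C H^C} for a standard {C}; products and multiplicative inverses are handled in the same way.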

Meanwhile, if we let {z := \lim_{n \rightarrow p} z^{(n)} \in {}^* {\bf C}^d} be the ultralimit of the solutions {z^{(n)}} in (1), we have

\displaystyle  P_1(z) = \ldots = P_r(z) = 0,

thus {P_1,\ldots,P_r} are solvable in {{}^* {\bf C}}. Applying Lemma 1 (or more precisely, the generalisation in Remark 1), we see that {P_1,\ldots,P_r} are also solvable in {\overline{{\bf Q}_{poly(H)}}}. (Note that as {{\bf C}} is algebraically closed, {{}^*{\bf C}} is also (by Los’s theorem), and so {{}^* {\bf C}} contains {\overline{{\bf Q}_{poly(H)}}}.) Thus, there exists {w \in \overline{{\bf Q}_{poly(H)}}^d} with

\displaystyle  P_1(w) = \ldots = P_r(w) = 0.

As {\overline{{\bf Q}_{poly(H)}}^d} lies in {{}^* {\bf C}^d}, we can write {w} as an ultralimit {w = \lim_{n \rightarrow p} w^{(n)}} of standard complex vectors {w^{(n)} \in {\bf C}^d}. By construction, the coefficients of {w} each obey a non-trivial polynomial equation of degree at most {C} whose coefficients are nonstandard integers of magnitude at most {C H^C}, for some standard natural number {C}. Undoing the ultralimit, we conclude that for {n} sufficiently close to {p}, the coefficients of {w^{(n)}} obey a non-trivial polynomial equation of degree at most {C} whose coefficients are standard integers of magnitude at most {C (H^{(n)})^C}. In particular, these coefficients have height at most {C (H^{(n)})^C}. Also, we have

\displaystyle  P_1^{(n)}(w^{(n)}) = \ldots = P_r^{(n)}(w^{(n)}) = 0.

But for {n} larger than {C}, this contradicts the construction of the {P_i^{(n)}}, and the claim follows. (Note that as {p} is non-principal, any neighbourhood of {p} in {\beta {\bf N}} will contain arbitrarily large natural numbers.)

Remark 2 The same argument actually gives a slightly stronger version of Lemma 2, namely that the integer coefficients used to define the algebraic solution {z} can be taken to be polynomials in the coefficients of {P_1,\ldots,P_r}, with degree and coefficients bounded by {C_{D,d,r}}.

One of the key difficulties in performing analysis in infinite-dimensional function spaces, as opposed to finite-dimensional vector spaces, is that the Bolzano-Weierstrass theorem no longer holds: a bounded sequence in an infinite-dimensional function space need not have any convergent subsequences (when viewed using the strong topology). To put it another way, the closed unit ball in an infinite-dimensional function space usually fails to be (sequentially) compact.

As compactness is such a useful property to have in analysis, various tools have been developed over the years to try to salvage some sort of substitute for the compactness property in infinite-dimensional spaces. One of these tools is concentration compactness, which was discussed previously on this blog. This can be viewed as a compromise between weak compactness (which is true in very general circumstances, but is often too weak for applications) and strong compactness (which would be very useful in applications, but is usually false), in which one obtains convergence in an intermediate sense that involves a group of symmetries acting on the function space in question.

Concentration compactness is usually stated and proved in the language of standard analysis: epsilons and deltas, limits and suprema, and so forth. In this post, I wanted to note that one could also state and prove the basic foundations of concentration compactness in the framework of nonstandard analysis, in which one now deals with infinitesimals and ultralimits instead of epsilons and ordinary limits. This is a fairly mild change of viewpoint, but I found it to be informative to view this subject from a slightly different perspective. The nonstandard proofs require a fair amount of general machinery to set up, but conversely, once all the machinery is up and running, the proofs become slightly shorter, and can exploit tools from (standard) infinitary analysis, such as orthogonal projections in Hilbert spaces, or the continuous-pure point decomposition of measures. Because of the substantial amount of setup required, nonstandard proofs tend to have significantly more net complexity than their standard counterparts when it comes to basic results (such as those presented in this post), but the gap between the two narrows when the results become more difficult, and for particularly intricate and deep results it can happen that nonstandard proofs end up being simpler overall than their standard analogues, particularly if the nonstandard proof is able to tap the power of some existing mature body of infinitary mathematics (e.g. ergodic theory, measure theory, Hilbert space theory, or topological group theory) which is difficult to directly access in the standard formulation of the argument.


Many structures in mathematics are incomplete in one or more ways. For instance, the field of rationals {{\bf Q}} or the reals {{\bf R}} are algebraically incomplete, because there are some non-trivial algebraic equations (such as {x^2=2} in the case of the rationals, or {x^2=-1} in the case of the reals) which could potentially have solutions (because they do not imply a necessarily false statement, such as {1=0}, just using the laws of algebra), but do not actually have solutions in the specified field.

Similarly, the rationals {{\bf Q}}, when viewed now as a metric space rather than as a field, are also metrically incomplete, because there exist sequences in the rationals (e.g. the decimal approximations {3, 3.1, 3.14, 3.141, \ldots} of the irrational number {\pi}) which could potentially converge to a limit (because they form a Cauchy sequence), but do not actually converge in the specified metric space.

A third type of incompleteness is that of logical incompleteness, which applies now to formal theories rather than to fields or metric spaces. For instance, Zermelo-Fraenkel-Choice (ZFC) set theory is logically incomplete, because there exist statements (such as the consistency of ZFC) which could potentially be provable in the theory (because they do not lead to a contradiction, or at least so we believe, just from the axioms and deductive rules of the theory), but are not actually provable in this theory.

A fourth type of incompleteness, which is slightly less well known than the above three, is what I will call elementary incompleteness (and which model theorists call the failure of the countable saturation property). It applies to any structure that is describable by a first-order language, such as a field, a metric space, or a universe of sets. For instance, in the language of ordered real fields, the real line {{\bf R}} is elementarily incomplete, because there exists a sequence of statements (such as the statements {0 < x < 1/n} for natural numbers {n=1,2,\ldots}) in this language which are potentially simultaneously satisfiable (in the sense that any finite number of these statements can be satisfied by some real number {x}) but are not actually simultaneously satisfiable in this structure.
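For instance (anticipating the ultrapower construction described below), the nonstandard real {x := \lim_{n \rightarrow p} \frac{1}{n+1}} obeys

\displaystyle  0 < x < \frac{1}{m}

for every standard natural number {m}, because for each fixed {m} one has {0 < \frac{1}{n+1} < \frac{1}{m}} for all sufficiently large {n}; thus the above family of statements, while not satisfiable by any real number, becomes simultaneously satisfiable in a suitable elementary completion of {{\bf R}}.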

In each of these cases, though, it is possible to start with an incomplete structure and complete it to a much larger structure to eliminate the incompleteness. For instance, starting with an arbitrary field {k}, one can take its algebraic completion (or algebraic closure) {\overline{k}}; for instance, {{\bf C} = \overline{{\bf R}}} can be viewed as the algebraic completion of {{\bf R}}. This field is usually significantly larger than the original field {k}, but contains {k} as a subfield, and every element of {\overline{k}} can be described as the solution to some polynomial equation with coefficients in {k}. Furthermore, {\overline{k}} is now algebraically complete (or algebraically closed): every polynomial equation in {\overline{k}} which is potentially satisfiable (in the sense that it does not lead to a contradiction such as {1=0} from the laws of algebra), is actually satisfiable in {\overline{k}}.

Similarly, starting with an arbitrary metric space {X}, one can take its metric completion {\overline{X}}; for instance, {{\bf R} = \overline{{\bf Q}}} can be viewed as the metric completion of {{\bf Q}}. Again, the completion {\overline{X}} is usually much larger than the original metric space {X}, but contains {X} as a subspace, and every element of {\overline{X}} can be described as the limit of some Cauchy sequence in {X}. Furthermore, {\overline{X}} is now a complete metric space: every sequence in {\overline{X}} which is potentially convergent (in the sense of being a Cauchy sequence), is now actually convergent in {\overline{X}}.

In a similar vein, we have the Gödel completeness theorem, which implies (among other things) that for any consistent first-order theory {T} for a first-order language {L}, there exists at least one completion {\overline{T}} of that theory {T}, which is a consistent theory in which every sentence in {L} which is potentially true in {\overline{T}} (because it does not lead to a contradiction in {\overline{T}}) is actually true in {\overline{T}}. Indeed, the completeness theorem provides at least one model (or structure) {{\mathfrak U}} of the consistent theory {T}, and then the completion {\overline{T} = \hbox{Th}({\mathfrak U})} can be formed by interpreting every sentence in {L} using {{\mathfrak U}} to determine its truth value. Note, in contrast to the previous two examples, that the completion is usually not unique in any way; a theory {T} can have multiple inequivalent models {{\mathfrak U}}, giving rise to distinct completions of the same theory.

Finally, if one starts with an arbitrary structure {{\mathfrak U}}, one can form an elementary completion {{}^* {\mathfrak U}} of it, which is a significantly larger structure which contains {{\mathfrak U}} as a substructure, and such that every element of {{}^* {\mathfrak U}} is an elementary limit of a sequence of elements in {{\mathfrak U}} (I will define this term shortly). Furthermore, {{}^* {\mathfrak U}} is elementarily complete; any sequence of statements that are potentially simultaneously satisfiable in {{}^* {\mathfrak U}} (in the sense that any finite number of statements in this collection are simultaneously satisfiable), will actually be simultaneously satisfiable. As we shall see, one can form such an elementary completion by taking an ultrapower of the original structure {{\mathfrak U}}. If {{\mathfrak U}} is the standard universe of all the standard objects one considers in mathematics, then its elementary completion {{}^* {\mathfrak U}} is known as the nonstandard universe, and is the setting for nonstandard analysis.

As mentioned earlier, completion tends to make a space much larger and more complicated. If one algebraically completes a finite field, for instance, one necessarily obtains an infinite field as a consequence. If one metrically completes a countable metric space with no isolated points, such as {{\bf Q}}, then one necessarily obtains an uncountable metric space (thanks to the Baire category theorem). If one takes a logical completion of a consistent first-order theory that can model true arithmetic, then this completion is no longer describable by a recursively enumerable schema of axioms, thanks to Gödel’s incompleteness theorem. And if one takes the elementary completion of a countable structure, such as the integers {{\bf Z}}, then the resulting completion {{}^* {\bf Z}} will necessarily be uncountable.

However, there are substantial benefits to working in the completed structure which can make it well worth the massive increase in size. For instance, by working in the algebraic completion of a field, one gains access to the full power of algebraic geometry. By working in the metric completion of a metric space, one gains access to powerful tools of real analysis, such as the Baire category theorem, the Heine-Borel theorem, and (in the case of Euclidean completions) the Bolzano-Weierstrass theorem. By working in a logically and elementarily completed theory (aka a saturated model) of a first-order theory, one gains access to the branch of model theory known as definability theory, which allows one to analyse the structure of definable sets in much the same way that algebraic geometry allows one to analyse the structure of algebraic sets. Finally, when working in an elementary completion of a structure, one gains a sequential compactness property, analogous to the Bolzano-Weierstrass theorem, which can be interpreted as the foundation for much of nonstandard analysis, as well as providing a unifying framework to describe various correspondence principles between finitary and infinitary mathematics.

In this post, I wish to expand upon these above points with regard to elementary completion, and to present nonstandard analysis as a completion of standard analysis in much the same way as, say, complex algebra is a completion of real algebra, or real metric geometry is a completion of rational metric geometry.


I have blogged a number of times in the past about the relationship between finitary (or “hard”, or “quantitative”) analysis, and infinitary (or “soft”, or “qualitative”) analysis. One way to connect the two types of analysis is via compactness arguments (and more specifically, contradiction and compactness arguments); such arguments can convert qualitative properties (such as continuity) to quantitative properties (such as boundedness), basically because of the fundamental fact that continuous functions on a compact space are bounded (or the closely related fact that sequentially continuous functions on a sequentially compact space are bounded).

A key stage in any such compactness argument is the following: one has a sequence {X_n} of “quantitative” or “finitary” objects or spaces, and one has to somehow end up with a “qualitative” or “infinitary” limit object {X} or limit space. One common way to achieve this is to embed everything inside some universal space and then use some weak compactness property of that space, such as the Banach-Alaoglu theorem (or its sequential counterpart). This is for instance the idea behind the Furstenberg correspondence principle relating ergodic theory to combinatorics; see for instance this post of mine on this topic.

However, there is a slightly different approach, which I will call ultralimit analysis, which proceeds via the machinery of ultrafilters and ultraproducts; typically, the limit objects {X} one constructs are now the ultraproducts (or ultralimits) of the original objects {X_n}. There are two main facts that make ultralimit analysis powerful. The first is that one can take ultralimits of arbitrary sequences of objects, as opposed to more traditional tools such as metric completions, which only allow one to take limits of Cauchy sequences of objects. The second fact is Los’s theorem, which tells us that {X} is an elementary limit of the {X_n} (i.e. every sentence in first-order logic which is true for the {X_n} for {n} large enough, is true for {X}). This existence of elementary limits is a manifestation of the compactness theorem in logic; see this earlier blog post for more discussion. So we see that compactness methods and ultrafilter methods are closely intertwined. (See also my earlier class notes for a related connection between ultrafilters and compactness.)
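As a classical illustration of Los’s theorem: if each {X_n} is the finite field {{\bf Z}/p_n{\bf Z}} with {p_n \rightarrow \infty}, then the (first-order) field axioms hold for every {X_n}, and for each fixed {k} the sentence

\displaystyle  \underbrace{1 + \ldots + 1}_{k} \neq 0

holds for all sufficiently large {n}; by Los’s theorem, the ultraproduct {X} is therefore a field of characteristic zero.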

Ultralimit analysis is very closely related to nonstandard analysis. I already discussed some aspects of this relationship in an earlier post, and will expand upon it at the bottom of this post. Roughly speaking, the relationship between ultralimit analysis and nonstandard analysis is analogous to the relationship between measure theory and probability theory.

To illustrate how ultralimit analysis is actually used in practice, I will show later in this post how to take a qualitative infinitary theory – in this case, basic algebraic geometry – and apply ultralimit analysis to then deduce a quantitative version of this theory, in which the complexity of the various algebraic sets and varieties that appear as outputs are controlled uniformly by the complexity of the inputs. The point of this exercise is to show how ultralimit analysis allows for a relatively painless conversion back and forth between the quantitative and qualitative worlds, though in some cases the quantitative translation of a qualitative result (or vice versa) may be somewhat unexpected. In an upcoming paper of myself, Ben Green, and Emmanuel Breuillard (announced in the previous blog post), we will rely on ultralimit analysis to reduce the messiness of various quantitative arguments by replacing them with a qualitative setting in which the theory becomes significantly cleaner.

For sake of completeness, I also redo some earlier instances of the correspondence principle via ultralimit analysis, namely the deduction of the quantitative Gromov theorem from the qualitative one, and of Szemerédi’s theorem from the Furstenberg recurrence theorem, to illustrate how close the two techniques are to each other.

