(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)
Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.
The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:
| Discrete | Continuous | Limit method |
|---|---|---|
| Ramsey theory | Topological dynamics | Compactness |
| Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle |
| Graph/hypergraph regularity | Measure theory | Graph limits |
| Polynomial regularity | Linear algebra | Ultralimits |
| Structural decompositions | Hilbert space geometry | Ultralimits |
| Fourier analysis | Spectral theory | Direct and inverse limits |
| Quantitative algebraic geometry | Algebraic geometry | Schemes |
| Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits |
| Approximate group theory | Topological group theory | Model theory |
As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:
- Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects in a common space, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object, which remains in the same space, and is “close” to many of the original objects with respect to the given metric or topology.
- Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects in a category, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit or the inverse limit of these objects, which is another object in the same category, and is connected to the original objects by various morphisms.
- Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects or of spaces, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, the spaces might be groups and the objects might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object or a new space, which is still a model of the same language (e.g. if the spaces were all groups, then the limiting space will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if the limiting space is an abelian group, then the original spaces will also be abelian groups for many indices.)
The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects to all lie in a common space in order to form an ultralimit; they are permitted to lie in different spaces; this is more natural in many discrete contexts, e.g. when considering graphs on n vertices in the limit as n goes to infinity. Also, no convergence properties on the original objects are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces involved are required in order to construct the ultraproduct.
With so few requirements on the objects or spaces, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two properties which make it particularly well suited for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the original objects will be exactly obeyed by the limit object; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is closely analogous to compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses. This is particularly useful for upgrading qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus significantly reducing the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”). To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.
Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.
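The ultraproduct construction itself can be sketched in a few lines; the following notation is my own summary of the standard definition, not drawn from the talk:

```latex
% Ultraproduct of sets (or structures) X_1, X_2, X_3, \ldots with respect
% to a non-principal ultrafilter \alpha on the natural numbers:
\prod_{n \to \alpha} X_n \;:=\; \Big( \prod_{n} X_n \Big) \Big/ \sim,
\qquad (x_n) \sim (y_n) \iff \{ n : x_n = y_n \} \in \alpha.

% Los's theorem: for any first-order sentence \phi,
\prod_{n \to \alpha} X_n \models \phi
\;\iff\;
\{ n : X_n \models \phi \} \in \alpha
\quad (\text{i.e. } \phi \text{ holds for } \alpha\text{-most } n).
```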
Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.
The classical foundations of probability theory (discussed for instance in this previous blog post) rest on the notion of a probability space – a space (the sample space) equipped with a σ-algebra (the event space), together with a countably additive probability measure that assigns a real number in the interval [0, 1] to each event.
One can generalise the concept of a probability space to a finitely additive probability space, in which the event space is now only a Boolean algebra rather than a σ-algebra, and the measure μ is now only finitely additive instead of countably additive, thus μ(E ∪ F) = μ(E) + μ(F) when E, F are disjoint events. By giving up countable additivity, one loses a fair amount of measure and integration theory, and in particular the notion of the expectation of a random variable becomes problematic (unless the random variable takes only finitely many values). Nevertheless, one can still perform a fair amount of probability theory in this weaker setting.
In this post I would like to describe a further weakening of probability theory, which I will call qualitative probability theory, in which one does not assign a precise numerical probability value to each event, but instead merely records whether this probability is zero, one, or something in between. Thus μ is now a function from the event space to the set {0, *, 1}, where * is a new symbol that replaces all the elements of the open interval (0, 1). In this setting, one can no longer compute quantitative expressions, such as the mean or variance of a random variable; but one can still talk about whether an event holds almost surely, with positive probability, or with zero probability, and there are still usable notions of independence. (I will refer to classical probability theory as quantitative probability theory, to distinguish it from its qualitative counterpart.)
The main reason I want to introduce this weak notion of probability theory is that it becomes suited to talk about random variables living inside algebraic varieties, even if these varieties are defined over fields other than the reals or the complexes. In algebraic geometry one often talks about a “generic” element x of a variety V defined over a field k, which does not lie in any specified variety of lower dimension defined over k. Once V has positive dimension, such generic elements do not exist as classical, deterministic k-points x in V, since of course any such point lies in the zero-dimensional subvariety {x} of V. There are of course several established ways to deal with this problem. One way (which one might call the “Weil” approach to generic points) is to extend the field k to a sufficiently transcendental extension, in order to locate a sufficient number of generic points in the extension. Another approach (which one might dub the “Zariski” approach to generic points) is to work scheme-theoretically, and interpret a generic point in V as being associated to the zero ideal in the function ring of V. However I want to discuss a third perspective, in which one interprets a generic point not as a deterministic object, but rather as a random variable x taking values in V, but which lies in any given lower-dimensional subvariety of V with probability zero. This interpretation is intuitive, but difficult to implement in classical probability theory (except perhaps when considering varieties over the reals or complexes) due to the lack of a natural probability measure to place on algebraic varieties; however it works just fine in qualitative probability theory. In particular, the algebraic geometry notion of being “generically true” can now be interpreted probabilistically as an assertion that something is “almost surely true”.
It turns out that just as qualitative random variables may be used to interpret the concept of a generic point, they can also be used to interpret the concept of a type in model theory; the type of a random variable x is the set of all predicates that are almost surely obeyed by x. In contrast, model theorists often adopt a Weil-type approach to types, in which one works with deterministic representatives of a type, which often do not occur in the original structure of interest, but only in a sufficiently saturated extension of that structure (this is the analogue of working in a sufficiently transcendental extension of the base field). However, it seems that (in some cases at least) one can equivalently view types in terms of (qualitative) random variables on the original structure, avoiding the need to extend that structure. (Instead, one reserves the right to extend the sample space of one’s probability theory whenever necessary, as part of the “probabilistic way of thinking” discussed in this previous blog post.) We illustrate this below the fold with two related theorems that I will interpret through the probabilistic lens: the “group chunk theorem” of Weil (and later developed by Hrushovski), and the “group configuration theorem” of Zilber (and again later developed by Hrushovski). For sake of concreteness we will only consider these theorems in the theory of algebraically closed fields, although the results are quite general and can be applied to many other theories studied in model theory.
Let k be a field. A definable set over k is a set of the form

E = { x ∈ k^n : P(x) true }   (1)

where n is a natural number, and P(x) is a predicate involving the ring operations of k, the equality symbol, an arbitrary number of constants and free variables in k, the quantifiers ∃, ∀, boolean operators such as ∨, ∧, ¬, and parentheses and colons, where the quantifiers are always understood to be over the field k. Thus, for instance, the set of quadratic residues

{ x ∈ k : ∃ y : x = y × y }

is definable over k, and any algebraic variety over k is also a definable set over k. Henceforth we will abbreviate “definable over k” simply as “definable”.
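As a concrete illustration, the defining predicate of the quadratic residues can be evaluated mechanically over a small finite field (the prime 13 below is an arbitrary illustrative choice, not taken from the text):

```python
# Evaluate the definable set { x in F_p : exists y with x = y*y }
# by brute force over the quantifier, which ranges over the field F_p.

def is_quadratic_residue(x: int, p: int) -> bool:
    """Evaluate the defining predicate 'exists y: x = y*y' over F_p."""
    return any((y * y) % p == x for y in range(p))

p = 13
residues = sorted(x for x in range(p) if is_quadratic_residue(x, p))
print(residues)  # → [0, 1, 3, 4, 9, 10, 12]
```

Note that the set has (p + 1)/2 = 7 elements, a fact that foreshadows the discretised-cardinality phenomenon for definable sets discussed later in this post.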
If k is a finite field, then every subset of k^n is definable, since finite sets are automatically definable. However, we can obtain a more interesting notion in this case by restricting the complexity of a definable set. We say that E is a definable set of complexity at most M if n ≤ M, and E can be written in the form (1) for some predicate P of length at most M (where all operators, quantifiers, relations, variables, constants, and punctuation symbols are considered to have unit length). Thus, for instance, a hypersurface in n dimensions of degree d would be a definable set of complexity O(n + d). We will then be interested in the regime where the complexity M remains bounded, but the field size (or field characteristic) becomes large.
In a recent paper, I established (in the large characteristic case) the following regularity lemma for dense definable graphs, which significantly strengthens the Szemerédi regularity lemma in this context, by eliminating “bad” pairs, giving a polynomially strong regularity, and also giving definability of the cells:
Lemma 1 (Algebraic regularity lemma) Let F be a finite field, let V, W be definable non-empty sets of complexity at most M, and let E ⊂ V × W also be definable with complexity at most M. Assume that the characteristic of F is sufficiently large depending on M. Then we may partition V = V_1 ∪ … ∪ V_m and W = W_1 ∪ … ∪ W_n with m, n = O_M(1), with the following properties: the cells V_i, W_j are definable with complexity O_M(1); each cell occupies a proportion ≫_M 1 of the set containing it; and each pair of cells (V_i, W_j) is regular with respect to E with polynomially strong regularity (an error term gaining a power of the field size), with no exceptional pairs.
My original proof of this lemma was quite complicated, based on an explicit calculation of the “square” of the graph E using the Lang-Weil bound and some facts about the étale fundamental group. It was the reliance on the latter which was the main reason why the result was restricted to the large characteristic setting. (I then applied this lemma to classify expanding polynomials over finite fields of large characteristic, but I will not discuss these applications here; see this previous blog post for more discussion.)
Recently, Anand Pillay and Sergei Starchenko (and independently, Udi Hrushovski) have observed that the theory of the étale fundamental group is not necessary in the argument, and the lemma can in fact be deduced from quite general model theoretic techniques, in particular using (a local version of) the concept of stability. One of the consequences of this new proof of the lemma is that the hypothesis of large characteristic can be omitted; the lemma is now known to be valid for arbitrary finite fields (although its content is trivial if the field is not sufficiently large depending on the complexity bound M).
Inspired by this, I decided to see if I could find yet another proof of the algebraic regularity lemma, again avoiding the theory of the étale fundamental group. It turns out that the spectral proof of the Szemerédi regularity lemma (discussed in this previous blog post) adapts very nicely to this setting. The key fact needed about definable sets over finite fields is that their cardinality takes on an essentially discrete set of values. More precisely, we have the following fundamental result of Chatzidakis, van den Dries, and Macintyre:
Proposition 2 Let F be a finite field of cardinality q, and let M > 0.

- (Discretised cardinality) If E is a non-empty definable set of complexity at most M, then one has

|E| = c q^d + O_M( q^{d - 1/2} ),

where d is a natural number, and c is a positive rational number with numerator and denominator O_M(1). In particular, |E| is comparable to q^d.

- (Definable cardinality) Assume q is sufficiently large depending on M. If V and W are definable sets of complexity at most M, so that W can be viewed as a family of definable sets W_v definably parameterised by the points v of V, then for each natural number d and each positive rational c with numerator and denominator O_M(1), the set

{ v ∈ V : |W_v| = c q^d + O_M( q^{d - 1/2} ) }

is definable with complexity O_M(1), where the implied constants in the asymptotic notation used to define this set are the same as those appearing in the discretised cardinality estimate. (Informally: the “dimension” d and “measure” c of W_v depend definably on v.)
We will take this proposition as a black box; a proof can be obtained by combining the description of definable sets over pseudofinite fields (discussed in this previous post) with the Lang-Weil bound (discussed in this previous post). (The former fact is phrased using nonstandard analysis, but one can use standard compactness-and-contradiction arguments to convert such statements to statements in standard analysis, as discussed in this post.)
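To make the discretised-cardinality phenomenon concrete, here is a minimal numerical check on one definable family — the quadratic residues from earlier — whose count takes the shape c q^d + O(q^{d − 1/2}) with c = 1/2 and d = 1. The primes chosen are arbitrary illustrative values:

```python
# For an odd prime p, the definable set E_p = { x in F_p : exists y: x = y*y }
# has cardinality exactly (p + 1)/2, i.e. c = 1/2, d = 1, with an error
# term (here just 1/2) far below the permitted O(q^{d - 1/2}) = O(sqrt(p)).

def count_squares(p: int) -> int:
    """Cardinality of the set of squares in F_p, computed by brute force."""
    return len({(y * y) % p for y in range(p)})

for p in [101, 211, 503]:
    n = count_squares(p)
    assert n == (p + 1) // 2            # the exact discretised count
    assert abs(n - p / 2) <= p ** 0.5   # within the O(q^{d - 1/2}) window
print("counts match c*q^d + O(q^{d-1/2}) with c = 1/2, d = 1")
```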
The above proposition places severe restrictions on the cardinality of definable sets; for instance, it shows that one cannot have a definable set of complexity at most M and cardinality comparable to q^{1/2}, if q is sufficiently large depending on M. If E ⊂ V are definable sets of complexity at most M, it shows that |E| = (c + O_M(q^{-1/2})) |V| for some non-negative rational c with numerator and denominator O_M(1); furthermore, if c = 0, we may improve this bound to |E| = O_M( q^{-1} |V| ). In particular, we obtain the following “self-improving” properties:
- If E ⊂ V are definable of complexity at most M and |E| ≤ ε |V| for some ε > 0, then (if ε is sufficiently small depending on M and q is sufficiently large depending on M) this forces |E| = O_M( q^{-1} |V| ).
- If E ⊂ V are definable of complexity at most M and ||E| - c |V|| ≤ ε |V| for some ε > 0 and positive rational c, then (if ε is sufficiently small depending on M, c and q is sufficiently large depending on M, c) this forces |E| = c |V| + O_M( q^{-1/2} |V| ).
It turns out that these self-improving properties can be applied to the coefficients of various matrices (basically powers of the adjacency matrix associated to E) that arise in the spectral proof of the regularity lemma to significantly improve the bounds in that lemma; we describe how this is done below the fold. We also make some connections to the stability-based proofs of Pillay-Starchenko and Hrushovski.
The rectification principle in arithmetic combinatorics asserts, roughly speaking, that very small subsets (or, alternatively, small structured subsets) of an additive group or a field of large characteristic can be modeled (for the purposes of arithmetic combinatorics) by subsets of a group or field of zero characteristic, such as the integers or the complex numbers. The additive form of this principle is known as the Freiman rectification principle; it has several formulations, going back of course to the original work of Freiman. Here is one formulation as given by Bilu, Lev, and Ruzsa:
Proposition 1 (Additive rectification) Let A be a subset of the additive group Z/pZ for some prime p, and let s ≥ 1 be an integer. Suppose that A is sufficiently small, of size at most logarithmic in p (with the precise threshold depending on s). Then there exists a map φ: A → Z into a subset of the integers which is a Freiman isomorphism of order s in the sense that for any a_1, …, a_s, a'_1, …, a'_s ∈ A, one has

a_1 + … + a_s = a'_1 + … + a'_s

if and only if

φ(a_1) + … + φ(a_s) = φ(a'_1) + … + φ(a'_s).

Furthermore, φ is a right-inverse of the obvious projection homomorphism from Z to Z/pZ.
The original version of the rectification principle allowed the sets involved to be substantially larger in size (cardinality up to a small constant multiple of p), but with the additional hypothesis of bounded doubling involved; see the above-mentioned papers, as well as this later paper of Green and Ruzsa, for further discussion.
The proof of Proposition 1 is quite short (see Theorem 3.1 of Bilu-Lev-Ruzsa); the main idea is to use Minkowski’s theorem to find a non-trivial dilate of A that is contained in a small neighbourhood of the origin in Z/pZ, at which point the rectification map φ can be constructed by hand.
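The Freiman-isomorphism condition is easy to check mechanically, which makes the definition concrete; the prime and subset below are my own illustrative choices, not taken from the papers cited:

```python
from itertools import product

def is_freiman_isomorphism(A, phi, s, p):
    """Check that phi: A -> Z is a Freiman isomorphism of order s,
    i.e. sums of s elements agree in Z/pZ iff the images agree in Z."""
    for combo1 in product(A, repeat=s):
        for combo2 in product(A, repeat=s):
            lhs = sum(combo1) % p == sum(combo2) % p
            rhs = sum(phi[a] for a in combo1) == sum(phi[a] for a in combo2)
            if lhs != rhs:
                return False
    return True

# A small subset of Z/101Z, lifted to the integers by the obvious section
# of the projection Z -> Z/101Z; since all pairwise sums stay below 101,
# no wraparound occurs and the lift is a Freiman isomorphism of order 2.
p, s = 101, 2
A = [0, 1, 2, 5, 11]
phi = {a: a for a in A}  # right-inverse of the projection, restricted to A
print(is_freiman_isomorphism(A, phi, s, p))  # → True
```

By contrast, the identity lift fails once sums can wrap around the modulus (e.g. the set {0, 1, 2} modulo 3), which is exactly why Proposition 1 needs the set to be small.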
Very recently, Codrut Grosu obtained an arithmetic analogue of the above theorem, in which the rectification map preserves both additive and multiplicative structure:
Theorem 2 (Arithmetic rectification) Let A be a subset of the finite field F_p for some prime p, and let s ≥ 1 be an integer. Suppose that A is sufficiently small (with the precise threshold depending on s). Then there exists a map φ: A → C into a subset of the complex numbers which is a Freiman field isomorphism of order s in the sense that for any a_1, …, a_s ∈ A and any polynomial P of degree at most s and integer coefficients of magnitude summing to at most s, one has

P(a_1, …, a_s) = 0

if and only if

P(φ(a_1), …, φ(a_s)) = 0.
Note that it is necessary to use an algebraically closed field such as the complex numbers for this theorem, in contrast to the integers used in Proposition 1, as F_p can contain objects such as square roots of -1, which can only map to ±i in the complex numbers (once s is at least 2).
Using Theorem 2, one can transfer results in arithmetic combinatorics (e.g. sum-product or Szemerédi-Trotter type theorems) regarding finite subsets of the complex numbers to analogous results regarding sufficiently small subsets of F_p; see the paper of Grosu for several examples of this. This should be compared with the paper of Vu, Wood, and Wood, which introduces a converse principle that embeds finite subsets of the complex numbers (or more generally, of a characteristic zero integral domain) in a Freiman field-isomorphic fashion into finite subsets of F_p for arbitrarily large primes p, allowing one to transfer arithmetic combinatorial facts from the latter setting to the former.
Grosu’s argument uses some quantitative elimination theory, and in particular a quantitative variant of a lemma of Chang that was discussed previously on this blog. In that previous blog post, it was observed that (an ineffective version of) Chang’s theorem could be obtained using only qualitative algebraic geometry (as opposed to quantitative algebraic geometry tools such as elimination theory results with explicit bounds) by means of nonstandard analysis (or, in what amounts to essentially the same thing in this context, the use of ultraproducts). One can then ask whether one can similarly establish an ineffective version of Grosu’s result by nonstandard means. The purpose of this post is to record that this can indeed be done without much difficulty, though the result obtained, being ineffective, is somewhat weaker than that in Theorem 2. More precisely, we obtain
Theorem 3 (Ineffective arithmetic rectification) Let s, n ≥ 1 be natural numbers. Then if F is a field of characteristic at least C(s, n) for some quantity C(s, n) depending only on s, n, and A is a subset of F of cardinality n, then there exists a map φ: A → C into a subset of the complex numbers which is a Freiman field isomorphism of order s.
Our arguments will not provide any effective bound on the characteristic threshold (though one could in principle eventually extract such a bound by deconstructing the proof of Proposition 4 below), making this result weaker than Theorem 2 (save for the minor generalisation that it can handle fields of prime power order as well as fields of prime order, as long as the characteristic remains large).
Following the principle that ultraproducts can be used as a bridge to connect quantitative and qualitative results (as discussed in these previous blog posts), we will deduce Theorem 3 from the following (well-known) qualitative version:
Proposition 4 (Baby Lefschetz principle) Let k be a field of characteristic zero that is finitely generated over the rationals. Then there is an isomorphism from k to a subfield of the complex numbers.
This principle (first laid out in an appendix of Lefschetz’s book), among other things, often allows one to use the methods of complex analysis (e.g. Riemann surface theory) to study many other fields of characteristic zero. There are many variants and extensions of this principle; see for instance this MathOverflow post for some discussion of these. I used this baby version of the Lefschetz principle recently in a paper on expanding polynomial maps.
Proof: We give two proofs of this fact, one using transcendence bases and the other using Hilbert’s nullstellensatz.
We begin with the former proof. As k is finitely generated over the rationals, it has finite transcendence degree, thus one can find algebraically independent elements x_1, …, x_d of k over the rationals such that k is a finite extension of the field Q(x_1, …, x_d), and in particular, by the primitive element theorem, k is generated by x_1, …, x_d and an element α which is algebraic over Q(x_1, …, x_d). (Here we use the fact that characteristic zero fields are separable.) If we then define an embedding by first mapping x_1, …, x_d to generic (and thus algebraically independent) complex numbers z_1, …, z_d, and then mapping α to a complex root of the minimal polynomial for α over Q(x_1, …, x_d) after replacing each x_i with the complex number z_i, we obtain a field isomorphism from k to a subfield of the complex numbers with the required properties.
Now we give the latter proof. Let x_1, …, x_m be elements of k that generate that field over the rationals, but which are not necessarily algebraically independent. Our task is then equivalent to that of finding complex numbers z_1, …, z_m with the property that, for any polynomial P with rational coefficients, one has

P(x_1, …, x_m) = 0

if and only if

P(z_1, …, z_m) = 0.

Let I be the collection of all polynomials P with rational coefficients with P(x_1, …, x_m) = 0, and let J be the collection of all polynomials Q with rational coefficients with Q(x_1, …, x_m) ≠ 0. The set

S := { z ∈ C^m : P(z) = 0 for all P ∈ I }

is the intersection of countably many algebraic sets and is thus also an algebraic set (by the Hilbert basis theorem or the Noetherian property of algebraic sets). If the desired claim failed, then S could be covered by the algebraic sets { z : Q(z) = 0 } for Q ∈ J. By decomposing S into irreducible varieties and observing (e.g. from the Baire category theorem) that a variety of a given dimension over the complex numbers cannot be covered by countably many varieties of smaller dimension, we conclude that S must in fact be covered by a finite number of such sets, thus

S ⊂ { z ∈ C^m : Q_1(z) ⋯ Q_k(z) = 0 }

for some Q_1, …, Q_k ∈ J. By the nullstellensatz, we thus have an identity of the form

(Q_1 ⋯ Q_k)^r = R_1 P_1 + … + R_l P_l

for some natural numbers r, l, polynomials P_1, …, P_l ∈ I, and polynomials R_1, …, R_l with algebraic number coefficients. In particular, this identity also holds in the algebraic closure of k. Evaluating this identity at (x_1, …, x_m), we see that the right-hand side is zero but the left-hand side is non-zero, a contradiction, and the claim follows.
From Proposition 4 one can now deduce Theorem 3 by a routine ultraproduct argument (the same one used in these previous blog posts). Suppose for contradiction that Theorem 3 fails. Then there exist natural numbers s, n, a sequence F_j of finite fields of characteristic at least j, and subsets A_j = {a_{1,j}, …, a_{n,j}} of F_j of cardinality n, such that for each j, there does not exist a Freiman field isomorphism of order s from A_j to the complex numbers. Now we select a non-principal ultrafilter α, and construct the ultraproduct F of the finite fields F_j. This is again a field (and is a basic example of what is known as a pseudo-finite field); because the characteristic of F_j goes to infinity as j → ∞, it is easy to see (using Łoś’s theorem) that F has characteristic zero and can thus be viewed as an extension of the rationals.

Now let a_1, …, a_n be the ultralimits of the a_{1,j}, …, a_{n,j}, so that A := {a_1, …, a_n} is the ultraproduct of the A_j; then A is a subset of F of cardinality n. In particular, if k is the field generated by the rationals and a_1, …, a_n, then k is a finitely generated extension of the rationals and thus, by Proposition 4, there is an isomorphism φ from k to a subfield of the complex numbers. In particular, φ(a_1), …, φ(a_n) are complex numbers, and for any polynomial P with integer coefficients, one has

P(a_1, …, a_n) = 0

if and only if

P(φ(a_1), …, φ(a_n)) = 0.

By Łoś’s theorem, we then conclude that for all j sufficiently close to α, one has, for all polynomials P of degree at most s whose coefficients are integers whose magnitudes sum to at most s,

P(a_{1,j}, …, a_{n,j}) = 0

if and only if

P(φ(a_1), …, φ(a_n)) = 0.

But this gives a Freiman field isomorphism of order s between A_j and φ(A), contradicting the construction of the A_j, and Theorem 3 follows.
Much as group theory is the study of groups, or graph theory is the study of graphs, model theory is the study of models (also known as structures) of some language (which, in this post, will always be a single-sorted, first-order language). A structure is a set X, equipped with one or more operations, constants, and relations. This is of course an extremely general type of mathematical object, but (quite remarkably) one can still say a substantial number of interesting things about very broad classes of structures.
We will observe the common abuse of notation of using the underlying set X as a metonym for the entire structure, much as we usually refer to a group or a vector space simply by its underlying set, and so forth. Following another common bending of the rules, we also allow some operations on structures (such as the multiplicative inverse operation on a group or field) to only be partially defined, and we allow use of the usual simplifying conventions for mathematical formulas (e.g. writing xyz instead of (x(yz)) or ((xy)z), in cases where associativity is known). We will also deviate slightly from the usual practice in logic by emphasising individual structures, rather than the theory of general classes of structures; for instance, we will talk about the theory of a single field such as the reals or the complexes, rather than the theory of all fields of a certain type (e.g. real closed fields or algebraically closed fields).
Once one has a structure X, one can introduce the notion of a definable subset of X, or more generally of a Cartesian power X^n of X, defined as a set of the form

E = { x ∈ X^n : P(x) true }   (1)

for some formula P in the language with n free variables and any number of constants from X (that is, P is a well-formed formula built up from a finite number of constants in X, the relations and operations on X, logical connectives such as ∧, ∨, ¬, and the quantifiers ∃, ∀). Thus, for instance, in the theory of the arithmetic of the natural numbers, the set of primes is a definable set, since we have

{ x ∈ N : ¬(x = 0) ∧ ¬(x = 1) ∧ ∀y ∀z (x = y × z → (y = 1 ∨ z = 1)) }.
In the theory of the field of reals, the unit circle is an example of a definable set,

{ (x, y) ∈ R² : x² + y² = 1 },

but so is the complement of the circle,

{ (x, y) ∈ R² : ¬(x² + y² = 1) },

and the interval [-1, 1]:

[-1, 1] = { x ∈ R : ∃y : x² + y² = 1 }.
Due to the unlimited use of constants, any finite subset of a Cartesian power X^n of any structure X is, by our conventions, definable in that structure. (One can of course also consider definability without parameters (also known as ∅-definability), in which arbitrary constants are not permitted, but we will not do so here.)
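The first-order definition of primality above can be evaluated mechanically; the following naive sketch bounds both quantifiers by x, which suffices since any factor of x is at most x:

```python
def is_prime_by_definition(x: int) -> bool:
    """Evaluate the arithmetic predicate
    not(x = 0) and not(x = 1) and
    forall y, z: (x = y*z implies y = 1 or z = 1)."""
    if x in (0, 1):
        return False
    return all(y == 1 or z == 1
               for y in range(1, x + 1)
               for z in range(1, x + 1)
               if y * z == x)

print([x for x in range(2, 30) if is_prime_by_definition(x)])
# → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```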
We can isolate some special subclasses of definable sets:
- An atomic definable set is a set of the form (1) in which P is an atomic formula (i.e. it does not contain any logical connectives or quantifiers).
- A quantifier-free definable set is a set of the form (1) in which P is quantifier-free (i.e. it can contain logical connectives, but does not contain the quantifiers ∃, ∀).
Example 1 In the theory of a field such as the reals or the complexes, an atomic definable set is the same thing as an affine algebraic set (also known as an affine algebraic variety, with the understanding that varieties are not necessarily assumed to be irreducible), and a quantifier-free definable set is known as a constructible set; thus we see that algebraic geometry can be viewed in some sense as a special case of model theory. (Conversely, it can in fact be quite profitable to think of model theory as an abstraction of algebraic geometry; for instance, the concepts of Morley rank and Morley degree in model theory (discussed in this previous blog post) directly generalise the concepts of dimension and degree in algebraic geometry.) Over the reals, the interval [-1, 1] is a definable set, but not a quantifier-free definable set (and certainly not an atomic definable set); and similarly for the primes over the natural numbers.
A quantifier-free definable set in is nothing more than a finite boolean combination of atomic definable sets; in other words, the class of quantifier-free definable sets over is the smallest class that contains the atomic definable sets and is closed under boolean operations such as complementation and union (which generate all the other boolean operations). Similarly, the class of definable sets over is the smallest class that contains the quantifier-free definable sets, and is also closed under the operation of projection from to for every natural number , where is the map .
Some structures have the property of enjoying quantifier elimination, which means that every definable set is in fact a quantifier-free definable set, or equivalently that the projection of a quantifier-free definable set is again quantifier-free. For instance, an algebraically closed field (with the field operations) has quantifier elimination (i.e. the projection of a constructible set is again constructible); this fact can be proven by the classical tool of resultants, and among other things can be used to give a proof of Hilbert’s nullstellensatz. (Note though that projection does not necessarily preserve the property of being atomic; for instance, the projection of the atomic set is the non-atomic, but still quantifier-free definable, set .) In the converse direction, it is not difficult to use the nullstellensatz to deduce quantifier elimination. For the theory of the real field , which is not algebraically closed, one does not have quantifier elimination, as one can see from the example of the unit circle (which is a quantifier-free definable set) projecting down to the interval (which is definable, but not quantifier-free definable). However, if one adds the additional operation of order to the reals, giving it the language of an ordered field rather than just a field, then quantifier elimination is recovered (the class of quantifier-free definable sets now enlarges to match the class of definable sets, which in this case is also the class of semi-algebraic sets); this is the famous Tarski-Seidenberg theorem.
On the other hand, many important structures do not have quantifier elimination; typically, the projection of a quantifier-free definable set is not, in general, quantifier-free definable. This failure of the projection property also shows up in many contexts outside of model theory; for instance, Lebesgue famously made the error of thinking that the projection of a Borel measurable set remained Borel measurable (it is merely an analytic set instead). Turing’s halting theorem can be viewed as an assertion that the projection of a decidable set (also known as a computable or recursive set) is not necessarily decidable (it is merely semi-decidable (or recursively enumerable) instead). The notorious P=NP problem can also be essentially viewed in this spirit; roughly speaking (and glossing over the placement of some quantifiers), it asks whether the projection of a polynomial-time decidable set is again polynomial-time decidable. And so forth. (See this blog post of Dick Lipton for further discussion of the subtleties of projections.)
Now we consider the status of quantifier elimination for the theory of a finite field . If interpreted naively, quantifier elimination is trivial for a finite field , since every subset of is finite and thus quantifier-free definable. However, we can recover an interesting question in one of two (essentially equivalent) ways. One is to work in the asymptotic regime in which the field is large, but the length of the formulae used to construct one’s definable sets stays bounded uniformly in the size of (where we view any constant in as contributing a unit amount to the length of a formula, no matter how large is). A simple counting argument then shows that only a small number of subsets of become definable in the asymptotic limit , since the number of definable sets clearly grows at most polynomially in for any fixed bound on the formula length, while the number of all subsets of grows exponentially in .
Another way to proceed is to work not with a single finite field , or even with a sequence of finite fields, but with the ultraproduct of a sequence of finite fields, and to study the properties of definable sets over this ultraproduct. (We will be using the notation of ultraproducts and nonstandard analysis from this previous blog post.) This approach is equivalent to the more finitary approach mentioned in the previous paragraph, at least if one does not care to keep track of the exact bounds on the length of the formulae involved. Indeed, thanks to Łoś’s theorem, a definable subset of is nothing more than the ultraproduct of definable subsets of for all sufficiently close to , with the length of the formulae used to define uniformly bounded in . In the language of nonstandard analysis, one can view as a nonstandard finite field.
The ultraproduct of finite fields is an important example of a pseudo-finite field – a field that obeys all the sentences in the languages of fields that finite fields do, but is not necessarily itself a finite field. The model theory of pseudo-finite fields was first studied systematically by Ax (in the same paper where the Ax-Grothendieck theorem, discussed previously on this blog, was established), with important further contributions by Kiefe, by Fried-Sacerdote, by two papers of Chatzidakis-van den Dries-Macintyre, and many other authors.
As mentioned before, quantifier elimination trivially holds for finite fields. But for infinite pseudo-finite fields, such as the ultraproduct of finite fields with going to infinity, quantifier elimination fails. For instance, in a finite field , the set of quadratic residues is a definable set, with a bounded formula length, and so in the ultraproduct , the set of nonstandard quadratic residues is also a definable set. However, in one dimension, we see from the factor theorem that the only atomic definable sets are either finite or the whole field , and so the only constructible sets (i.e. the only quantifier-free definable sets) are either finite or cofinite in . Since the quadratic residues have asymptotic density in a large finite field, they cannot form a quantifier-free definable set, despite being definable.
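The density claim can be checked concretely in a few lines of Python (a sketch; the particular prime below is an arbitrary choice for illustration):

```python
# The quadratic residues of F_p are definable with one existential
# quantifier, as {x : exists y, y^2 = x}, but are neither finite nor
# cofinite: their density is roughly 1/2, so for large p they cannot
# form a quantifier-free definable (i.e. finite or cofinite) set.
p = 10007  # an arbitrary prime, chosen for illustration
residues = {(y * y) % p for y in range(p)}

# 0 is a residue, and the nonzero residues are the (p-1)/2 squares.
assert len(residues) == (p - 1) // 2 + 1
print(len(residues) / p)  # close to 0.5
```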
Nevertheless, there is a very nice almost quantifier elimination result for these fields, in characteristic zero at least, which we phrase here as follows:
Theorem 1 (Almost quantifier elimination) Let be a nonstandard finite field of characteristic zero, and let be a definable set over . Then is the union of finitely many sets of the form
where is an atomic definable subset of (i.e. the -points of an algebraic variety defined over in ) and is a polynomial.
Results of this type were essentially first obtained by Catarina Kiefe, although the formulation here is closer to that of Chatzidakis-van den Dries-Macintyre.
Informally, this theorem says that while we cannot quite eliminate all quantifiers from a definable set over a nonstandard finite field, we can eliminate all but one existential quantifier. Note that negation has also been eliminated in this theorem (for instance, the definable set uses a negation, but can also be described using a single existential quantifier as ). I believe that there are more complicated analogues of this result in positive characteristic, but I have not studied this case in detail (Kiefe’s result does not assume characteristic zero, but her conclusion is slightly different from the one given here). In the one-dimensional case , the only varieties are the affine line and finite sets, and we can simplify the above statement, namely that any definable subset of takes the form for some polynomial (i.e. definable sets in are nothing more than the projections of the -points of a plane curve).
There is an equivalent formulation of this theorem for standard finite fields, namely that if is a finite field and is definable using a formula of length at most , then can be expressed in the form (2) with the degree of bounded by some quantity depending on and , assuming that the characteristic of is sufficiently large depending on .
The theorem gives quite a satisfactory description of definable sets in either standard or nonstandard finite fields (at least if one does not care about effective bounds in some of the constants, and if one is willing to exclude the small characteristic case); for instance, in conjunction with the Lang-Weil bound discussed in this recent blog post, it shows that any non-empty definable subset of a nonstandard finite field has a nonstandard cardinality of for some positive standard rational and integer . Equivalently, any non-empty definable subset of for some standard finite field using a formula of length at most has a standard cardinality of for some positive rational of height and some natural number between and . (For instance, in the example of the quadratic residues given above, is equal to and equal to .) There is a more precise statement to this effect, namely that the Poincaré series of a definable set is rational; see Kiefe’s paper for details.
Below the fold I give a proof of Theorem 1, which relies primarily on the Lang-Weil bound mentioned above.
I had another long plane flight recently, so I decided to try making another game, to explore exactly what types of mathematical reasoning might be amenable to gamification. I decided to start with one of the simplest types of logical argument (and one of the few that avoids the disjunction problem mentioned in the previous post), namely the Aristotelian logic of syllogistic reasoning, most famously exemplified by the classic syllogism:
- Major premise: All men are mortal.
- Minor premise: Socrates is a man.
- Conclusion: Socrates is mortal.
There is a classic collection of logic puzzles by Lewis Carroll (from his book on symbolic logic), in which he presents a set of premises and asks the reader to derive a conclusion using all of the premises. Here are four examples of such sets:
- Babies are illogical;
- Nobody is despised who can manage a crocodile;
- Illogical persons are despised.
- My saucepans are the only things that I have that are made of tin;
- I find all your presents very useful;
- None of my saucepans are of the slightest use.
- No potatoes of mine, that are new, have been boiled;
- All of my potatoes in this dish are fit to eat;
- No unboiled potatoes of mine are fit to eat.
- No ducks waltz;
- No officers ever decline to waltz;
- All my poultry are ducks.
After a certain amount of effort, I was able to gamify the solution process for these sorts of puzzles in a Scratch game, although I am not fully satisfied with the results (in part due to the inherent technical limitations of the Scratch software, but also because I have not yet found a smooth user interface for this process). In order to not have to build a natural language parser, I modified Lewis Carroll’s sentences somewhat in order to make them machine-readable. Here is a typical screenshot:
Unfortunately, the gameplay is somewhat clunkier than in the algebra game, basically because one needs three or four clicks and a keyboard press in order to make a move, whereas in the algebra game each click corresponded to one move. This is in part due to Scratch not having an easy way to have drag-and-drop or right-click commands, but even with a fully featured GUI, I am unsure how to make an interface that would make the process of performing a deduction easy; one may need a “smart” interface that is able to guess some possible intended moves from a minimal amount of input from the user, and then suggest these choices (somewhat similarly to the “auto-complete” feature in a search box). This would require more effort than I could expend on a plane trip, though (as well as the use of a more powerful language than Scratch).
There are of course several existing proof assistants one could try to use as a model (Coq, Isabelle, etc.), but my impression is that the syntax for such assistants would only be easily mastered by someone who already is quite experienced with computer languages as well as proof writing, which would defeat the purpose of the games I have in mind. But perhaps it is possible to create a proof assistant for a very restricted logic (such as one without disjunction) that can be easily used by non-experts…
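As an aside, the core deduction step in these puzzles (chaining implications together with their contrapositives) is simple enough to sketch in a few lines of Python; the encoding of Carroll's first puzzle below is my own, not the one used in the Scratch game:

```python
# A minimal deduction engine for syllogistic reasoning: premises are
# implications between literals ("-x" meaning "not x"), and deduction is
# transitive chaining using the rules together with their contrapositives.
def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def chain(implications, start):
    """All literals reachable from `start` via the implications."""
    rules = list(implications) + [(neg(b), neg(a)) for a, b in implications]
    reached, frontier = {start}, [start]
    while frontier:
        lit = frontier.pop()
        for a, b in rules:
            if a == lit and b not in reached:
                reached.add(b)
                frontier.append(b)
    return reached

premises = [
    ("baby", "illogical"),                  # Babies are illogical
    ("can-manage-crocodile", "-despised"),  # Nobody despised manages a crocodile
    ("illogical", "despised"),              # Illogical persons are despised
]
print(chain(premises, "baby"))
```

Chaining from "baby" reaches "-can-manage-crocodile", recovering Carroll's intended conclusion that babies cannot manage crocodiles.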
Nonstandard analysis is a mathematical framework in which one extends the standard mathematical universe of standard numbers, standard sets, standard functions, etc. into a larger nonstandard universe of nonstandard numbers, nonstandard sets, nonstandard functions, etc., somewhat analogously to how one places the real numbers inside the complex numbers, or the rationals inside the reals. This nonstandard universe enjoys many of the same properties as the standard one; in particular, we have the transfer principle that asserts that any statement in the language of first order logic is true in the standard universe if and only if it is true in the nonstandard one. (For instance, because Fermat’s last theorem is known to be true for standard natural numbers, it is automatically true for nonstandard natural numbers as well.) However, the nonstandard universe also enjoys some additional useful properties that the standard one does not, most notably the countable saturation property, which is a property somewhat analogous to the completeness property of a metric space; much as metric completeness allows one to assert that the intersection of a countable family of nested closed balls is non-empty, countable saturation allows one to assert that the intersection of a countable family of nested satisfiable formulae is simultaneously satisfiable. (See this previous blog post for more on the analogy between the use of nonstandard analysis and the use of metric completions.) 
Furthermore, by viewing both the standard and nonstandard universes externally (placing them both inside a larger metatheory, such as a model of Zermelo-Fraenkel-Choice (ZFC) set theory; in some more advanced set-theoretic applications one may also wish to add some large cardinal axioms), one can place some useful additional definitions and constructions on these universes, such as defining the concept of an infinitesimal nonstandard number (a number which is smaller in magnitude than any positive standard number). The ability to rigorously manipulate infinitesimals is of course one of the most well-known advantages of working with nonstandard analysis.
To build a nonstandard universe from a standard one , the most common approach is to take an ultrapower of with respect to some non-principal ultrafilter over the natural numbers; see e.g. this blog post for details. Once one is comfortable with ultrafilters and ultrapowers, this becomes quite a simple and elegant construction, and greatly demystifies the nature of nonstandard analysis.
On the other hand, nonprincipal ultrafilters do have some unappealing features. The most notable one is that their very existence requires the axiom of choice (or more precisely, a weaker form of this axiom known as the boolean prime ideal theorem). Closely related to this is the fact that one cannot actually write down any explicit example of a nonprincipal ultrafilter, but must instead rely on nonconstructive tools such as Zorn’s lemma, the Hahn-Banach theorem, Tychonoff’s theorem, the Stone-Čech compactification, or the boolean prime ideal theorem to locate one. As such, ultrafilters definitely belong to the “infinitary” side of mathematics, and one may feel that it is inappropriate to use such tools for “finitary” mathematical applications, such as those which arise in hard analysis. From a more practical viewpoint, because of the presence of the infinitary ultrafilter, it can be quite difficult (though usually not impossible, with sufficient patience and effort) to take a finitary result proven via nonstandard analysis and coax an effective quantitative bound from it.
There is however a “cheap” version of nonstandard analysis which is less powerful than the full version, but is not as infinitary in that it is constructive (in the sense of not requiring any sort of choice-type axiom), and which can be translated into standard analysis somewhat more easily than a fully nonstandard argument; indeed, a cheap nonstandard argument can often be presented (by judicious use of asymptotic notation) in a way which is nearly indistinguishable from a standard one. It is obtained by replacing the nonprincipal ultrafilter in fully nonstandard analysis with the more classical Fréchet filter of cofinite subsets of the natural numbers, which is the filter that implicitly underlies the concept of the classical limit of a sequence when the underlying asymptotic parameter goes off to infinity. As such, “cheap nonstandard analysis” aligns very well with traditional mathematics, in which one often allows one’s objects to be parameterised by some external parameter such as , which is then allowed to approach some limit such as . The catch is that the Fréchet filter is merely a filter and not an ultrafilter, and as such some of the key features of fully nonstandard analysis are lost. Most notably, the law of the excluded middle does not transfer over perfectly from standard analysis to cheap nonstandard analysis; much as there exist bounded sequences of real numbers (such as ) which do not converge to a (classical) limit, there exist statements in cheap nonstandard analysis which are neither true nor false (at least without passing to a subsequence, see below). The loss of such a fundamental law of mathematical reasoning may seem like a major disadvantage for cheap nonstandard analysis, and it does indeed make cheap nonstandard analysis somewhat weaker than fully nonstandard analysis. 
But in some situations (particularly when one is reasoning in a “constructivist” or “intuitionistic” fashion, and in particular if one is avoiding too much reliance on set theory) it turns out that one can survive the loss of this law; and furthermore, the law of the excluded middle is still available for standard analysis, and so one can often proceed by working from time to time in the standard universe to temporarily take advantage of this law, and then transferring the results obtained there back to the cheap nonstandard universe once one no longer needs to invoke the law of the excluded middle. Finally, the law of the excluded middle can be recovered by adopting the freedom to pass to subsequences with regard to the asymptotic parameter ; this technique is already in widespread use in the analysis of partial differential equations, although it is generally referred to by names such as “the compactness method” rather than as “cheap nonstandard analysis”.
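The failure (and subsequential recovery) of the excluded middle can be illustrated with a toy Python computation on the alternating sequence mentioned above; a finite truncation stands in for the full sequence:

```python
# A finite truncation of the sequence x_n = (-1)^n, standing in for a
# "cheap nonstandard number".  Neither "x >= 0" nor "x < 0" holds on a
# cofinite set of indices, so the statement is undecided by the Frechet
# filter; passing to the subsequence of even n decides it.
x = [(-1) ** n for n in range(20)]
eventually_nonneg = all(v >= 0 for v in x[5:])  # fails at odd n
eventually_neg = all(v < 0 for v in x[5:])      # fails at even n
sub = x[::2]                                    # pass to the even subsequence

assert not eventually_nonneg and not eventually_neg  # excluded middle lost
assert all(v >= 0 for v in sub)                      # recovered on a subsequence
```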
Below the fold, I would like to describe this cheap version of nonstandard analysis, which I think can serve as a pedagogical stepping stone towards fully nonstandard analysis, as it is formally similar to (though weaker than) fully nonstandard analysis, but on the other hand is closer in practice to standard analysis. As we shall see below, the relation between cheap nonstandard analysis and standard analysis is analogous in many ways to the relation between probabilistic reasoning and deterministic reasoning; it also resembles somewhat the preference in much of modern mathematics for viewing mathematical objects as belonging to families (or to categories) to be manipulated en masse, rather than treating each object individually. (For instance, nonstandard analysis can be used as a partial substitute for scheme theory in order to obtain uniformly quantitative results in algebraic geometry, as discussed for instance in this previous blog post.)
In the previous set of notes, we introduced the notion of an ultra approximate group – an ultraproduct of finite -approximate groups for some independent of , where each -approximate group may lie in a distinct ambient group . Although these objects arise initially from the “finitary” objects , it turns out that ultra approximate groups can be profitably analysed by means of infinitary groups (and in particular, locally compact groups or Lie groups ), by means of certain models of (or of the group generated by ). We will define precisely what we mean by a model later, but as a first approximation one can view a model as a representation of the ultra approximate group (or of ) that is “macroscopically faithful” in that it accurately describes the “large scale” behaviour of (or equivalently, that the kernel of the representation is “microscopic” in some sense). In the next section we will see how one can use “Gleason lemma” technology to convert this macroscopic control of an ultra approximate group into microscopic control, which will be the key to classifying approximate groups.
Models of ultra approximate groups can be viewed as the multiplicative combinatorics analogue of the better-known concept of an ultralimit of metric spaces, which we briefly review below the fold as motivation.
The crucial observation is that ultra approximate groups enjoy a local compactness property which allows them to be usefully modeled by locally compact groups (and hence, through the Gleason-Yamabe theorem from previous notes, by Lie groups also). As per the Heine-Borel theorem, the local compactness will come from a combination of a completeness property and a local total boundedness property. The completeness property turns out to be a direct consequence of the countable saturation property of ultraproducts, thus illustrating one of the key advantages of the ultraproduct setting. The local total boundedness property is more interesting. Roughly speaking, it asserts that “large bounded sets” (such as or ) can be covered by finitely many translates of “small bounded sets” , where “small” is meant in a topological group sense, implying in particular that large powers of lie inside a set such as or . The easiest way to obtain such a property comes from the following lemma of Sanders:
Lemma 1 (Sanders lemma) Let be a finite -approximate group in a (global) group , and let . Then there exists a symmetric subset of with containing the identity such that .
This lemma has an elementary combinatorial proof, and is the key to endowing an ultra approximate group with locally compact structure. There is also a closely related lemma of Croot and Sisask which can achieve similar results, and which will also be discussed below. (The locally compact structure can also be established more abstractly using the much more general methods of definability theory, as was first done by Hrushovski, but we will not discuss this approach here.)
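As a toy finitary example of the notions in play here (my own illustration, not one drawn from the notes themselves), one can check directly in Python that an interval of integers is a 2-approximate group:

```python
# A toy instance of an approximate group: the interval A = {-N, ..., N}
# in the integers is a 2-approximate group, since it is symmetric,
# contains the identity, and the sumset A + A = {-2N, ..., 2N} is
# covered by just two translates of A.
N = 50
A = set(range(-N, N + 1))
sumset = {a + b for a in A for b in A}
translates = [{a - N for a in A}, {a + N for a in A}]

assert 0 in A and all(-a in A for a in A)       # symmetric, has identity
assert sumset == translates[0] | translates[1]  # A + A covered by 2 translates
```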
By combining the locally compact structure of ultra approximate groups with the Gleason-Yamabe theorem, one ends up being able to model a large “ultra approximate subgroup” of by a Lie group . Such Lie models serve a number of important purposes in the structure theory of approximate groups. Firstly, as all Lie groups have a dimension which is a natural number, they allow one to assign a natural number “dimension” to ultra approximate groups, which opens up the ability to perform “induction on dimension” arguments. Secondly, Lie groups have an escape property (which is in fact equivalent to the no small subgroups property): if a group element lies outside of a very small ball , then some power of it will escape a somewhat larger ball . Or equivalently: if a long orbit lies inside the larger ball , one can deduce that the original element lies inside the small ball . Because all Lie groups have this property, we will be able to show that all ultra approximate groups “essentially” have a similar property, in that they are “controlled” by a nearby ultra approximate group which obeys a number of escape-type properties analogous to those enjoyed by small balls in a Lie group, and which we will call a strong ultra approximate group. This will be discussed in the next set of notes, where we will also see how these escape-type properties can be exploited to create a metric structure on strong approximate groups analogous to the Gleason metrics studied in previous notes, which can in turn be exploited (together with an induction on dimension argument) to fully classify such approximate groups (in the finite case, at least).
There are some cases where the analysis is particularly simple. For instance, in the bounded torsion case, one can show that the associated Lie model is necessarily zero-dimensional, which allows for an easy classification of approximate groups of bounded torsion.
Some of the material here is drawn from my recent paper with Ben Green and Emmanuel Breuillard, which is in turn inspired by a previous paper of Hrushovski.
Roughly speaking, mathematical analysis can be divided into two major styles, namely hard analysis and soft analysis. The distinction between the two types of analysis is somewhat imprecise (and in some cases one may use a blend of the two styles), but some key differences can be listed as follows.
- Hard analysis tends to be concerned with quantitative or effective properties such as estimates, upper and lower bounds, convergence rates, and growth rates or decay rates. In contrast, soft analysis tends to be concerned with qualitative or ineffective properties such as existence and uniqueness, finiteness, measurability, continuity, differentiability, connectedness, or compactness.
- Hard analysis tends to be focused on finitary, finite-dimensional or discrete objects, such as finite sets, finitely generated groups, finite Boolean combinations of boxes or balls, or “finite-complexity” functions, such as polynomials or functions on a finite set. In contrast, soft analysis tends to be focused on infinitary, infinite-dimensional, or continuous objects, such as arbitrary measurable sets or measurable functions, or abstract locally compact groups.
- Hard analysis tends to involve explicit use of many parameters such as , , , etc. In contrast, soft analysis tends to rely instead on properties such as continuity, differentiability, compactness, etc., which implicitly are defined using a similar set of parameters, but whose parameters often do not make an explicit appearance in arguments.
- In hard analysis, it is often the case that a key lemma in the literature is not quite optimised for the application at hand, and one has to reprove a slight variant of that lemma (using a variant of the proof of the original lemma) in order for it to be suitable for applications. In contrast, in soft analysis, key results can often be used as “black boxes”, without need of further modification or inspection of the proof.
- The properties in soft analysis tend to enjoy precise closure properties; for instance, the composition or linear combination of continuous functions is again continuous, and similarly for measurability, differentiability, etc. In contrast, the closure properties in hard analysis tend to be fuzzier, in that the parameters in the conclusion are often different from the parameters in the hypotheses. For instance, the composition of two Lipschitz functions with Lipschitz constant is still Lipschitz, but now with Lipschitz constant instead of . These changes in parameters mean that hard analysis arguments often require more “bookkeeping” than their soft analysis counterparts, and are less able to utilise algebraic constructions (e.g. quotient space constructions) that rely heavily on precise closure properties.
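The "fuzziness" of these closure properties can be seen numerically; in the sketch below (with the arbitrary choices f(x) = Kx, K = 2, and a small sample grid), composing a K-Lipschitz function with itself yields Lipschitz constant K², not K:

```python
# Composition of two K-Lipschitz functions is K^2-Lipschitz, not
# K-Lipschitz: the parameter in the conclusion differs from the one in
# the hypotheses.  Checked numerically with f(x) = Kx for K = 2.
K = 2.0
f = lambda x: K * x
pts = [i / 10 for i in range(-20, 21)]  # arbitrary sample grid

def lip_const(h):
    """Largest difference quotient of h over the sample grid."""
    return max(abs(h(x) - h(y)) / abs(x - y)
               for x in pts for y in pts if x != y)

assert abs(lip_const(f) - K) < 1e-9                       # f: constant K
assert abs(lip_const(lambda x: f(f(x))) - K * K) < 1e-9   # f o f: constant K^2
```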
In the lectures so far, focusing on the theory surrounding Hilbert’s fifth problem, the results and techniques have fallen well inside the category of soft analysis. However, we will now turn to the theory of approximate groups, which is a topic that is traditionally studied using the methods of hard analysis. (Later we will also study groups of polynomial growth, which occupies an intermediate position in the spectrum between hard and soft analysis, and which can be profitably analysed using both styles of analysis.)
Despite the superficial differences between hard and soft analysis, though, there are a number of important correspondences between results in hard analysis and results in soft analysis. For instance, if one has some sort of uniform quantitative bound on some expression relating to finitary objects, one can often use limiting arguments to then conclude a qualitative bound on analogous expressions on infinitary objects, by viewing the latter objects as some sort of “limit” of the former objects. Conversely, if one has a qualitative bound on infinitary objects, one can often use compactness and contradiction arguments to recover uniform quantitative bounds on finitary objects as a corollary.
Remark 1 Another type of correspondence between hard analysis and soft analysis, which is “syntactical” rather than “semantical” in nature, arises by taking the proofs of a soft analysis result, and translating such a qualitative proof somehow (e.g. by carefully manipulating quantifiers) into a quantitative proof of an analogous hard analysis result. This type of technique is sometimes referred to as proof mining in the proof theory literature, and is discussed in this previous blog post (and its comments). We will however not employ systematic proof mining techniques here, although in later posts we will informally borrow arguments from infinitary settings (such as the methods used to construct Gleason metrics) and adapt them to finitary ones.
Let us illustrate the correspondence between hard and soft analysis results with a simple example.
Proposition 1 Let be a sequentially compact topological space, let be a dense subset of , and let be a continuous function (giving the extended half-line the usual order topology). Then the following statements are equivalent:
- (i) (Qualitative bound on infinitary objects) For all , one has .
- (ii) (Quantitative bound on finitary objects) There exists such that for all .
In applications, is typically a (non-compact) set of “finitary” (or “finite complexity”) objects of a certain class, and is some sort of “completion” or “compactification” of which admits additional “infinitary” objects that may be viewed as limits of finitary objects.
Proof: To see that (ii) implies (i), observe from density that every point in is adherent to , and so given any neighbourhood of , there exists . Since , we conclude from the continuity of that also, and the claim follows.
Conversely, to show that (i) implies (ii), we use the “compactness and contradiction” argument. Suppose for sake of contradiction that (ii) failed. Then for any natural number , there exists such that . (Here we have used the axiom of choice, which we will assume throughout this course.) Using sequential compactness, and passing to a subsequence if necessary, we may assume that the converge to a limit . By continuity of , this implies that , contradicting (i).
Remark 2 Note that the above deduction of (ii) from (i) is ineffective in that it gives no explicit bound on the uniform bound in (ii). Without any further information on how the qualitative bound (i) is proven, this is the best one can do in general (and this is one of the most significant weaknesses of infinitary methods when used to solve finitary problems); but if one has access to the proof of (i), one can often finitise or proof mine that argument to extract an effective bound for , although often the bound one obtains in the process is quite poor (particularly if the proof of (i) relied extensively on infinitary tools, such as limits). See this blog post for some related discussion.
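To see why the sequential compactness hypothesis in Proposition 1 cannot be dropped, here is a minimal Python sketch of the standard counterexample on the non-compact space (0, 1]:

```python
# On the non-compact space (0, 1], the continuous function f(x) = 1/x is
# finite at every point (the analogue of statement (i)), yet admits no
# uniform bound M on the dense subset {1/n : n >= 1} (statement (ii)
# fails): f(1/n) = n grows without bound.
f = lambda x: 1.0 / x
dense_pts = [1.0 / n for n in range(1, 1001)]

assert all(f(x) < float("inf") for x in dense_pts)  # pointwise finite
assert max(f(x) for x in dense_pts) > 999           # no uniform bound
```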
The above simple example illustrates that in order to get from an “infinitary” statement such as (i) to a “finitary” statement such as (ii), a key step is to be able to take a sequence (or in some cases, a more general net ) of finitary objects and extract a suitable infinitary limit object . In the literature, there are three main ways in which one can extract such a limit:
- (Topological limit) If the $x_n$ are all elements of some topological space (e.g. an incomplete function space) which has a suitable “compactification” or “completion” (e.g. a Banach space), then (after passing to a subsequence if necessary) one can often ensure the $x_n$ converge in a topological sense (or in a metrical sense) to a limit $x_\infty$. The use of this type of limit to pass between quantitative/finitary and qualitative/infinitary results is particularly common in the more analytical areas of mathematics (such as ergodic theory, asymptotic combinatorics, or PDE), due to the abundance of useful compactness results in analysis such as the (sequential) Banach-Alaoglu theorem, Prokhorov’s theorem, the Helly selection theorem, the Arzelà-Ascoli theorem, or even the humble Bolzano-Weierstrass theorem. However, one often has to take care with the nature of convergence, as many compactness theorems only guarantee convergence in a weak sense rather than in a strong one.
- (Categorical limit) If the $x_n$ are all objects in some category (e.g. metric spaces, groups, fields, etc.) with a number of morphisms between the $x_n$ (e.g. morphisms from $x_n$ to $x_{n+1}$, or vice versa), then one can often form a direct limit or inverse limit of these objects to form a limiting object $x_\infty$. The use of these types of limits to connect quantitative and qualitative results is common in subjects such as algebraic geometry that are particularly amenable to categorical ways of thinking. (We have seen inverse limits appear in the discussion of Hilbert’s fifth problem, although in that context they were not really used to connect quantitative and qualitative results together.)
- (Logical limit) If the $x_n$ are all distinct spaces (or elements or subsets of distinct spaces), with few morphisms connecting them together, then topological and categorical limits are often unavailable or unhelpful. In such cases, however, one can still tie together such objects using an ultraproduct construction (or similar device) to create a limiting object $x_\infty$ or limiting space $X_\infty$ that is a logical limit of the $x_n$, in the sense that various properties of the $x_n$ (particularly those that can be phrased using the language of first-order logic) are preserved in the limit. As such, logical limits are often very well suited for the task of connecting finitary and infinitary mathematics together. Ultralimit type constructions are of course used extensively in logic (particularly in model theory), but are also popular in metric geometry. They can also be used in many of the previously mentioned areas of mathematics, such as algebraic geometry (as discussed in this previous post).
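To make the first (topological) item above concrete, here is a small Python sketch, added here for illustration, of the bisection argument behind the Bolzano-Weierstrass theorem: from a bounded sequence one extracts indices of a subsequence trapped in nested intervals of halving length. The function name `bw_subsequence` and the choice of $\sin(n)$ as test data are our own illustrative inventions.

```python
import math

def bw_subsequence(x, lo, hi, steps):
    """Extract indices i_1 < i_2 < ... of a subsequence of the bounded
    sequence x, by repeatedly halving the interval [lo, hi] and keeping
    the half containing more of the remaining terms (a finitary proxy
    for "the half containing infinitely many terms")."""
    indices = []
    last = -1
    for _ in range(steps):
        mid = (lo + hi) / 2
        # Split the terms beyond the last chosen index between the two halves.
        left = [i for i in range(last + 1, len(x)) if lo <= x[i] <= mid]
        right = [i for i in range(last + 1, len(x)) if mid < x[i] <= hi]
        if len(left) >= len(right):
            hi, pool = mid, left
        else:
            lo, pool = mid, right
        if not pool:
            break
        last = pool[0]          # earliest remaining term in the chosen half
        indices.append(last)
    return indices

x = [math.sin(n) for n in range(100000)]   # a bounded sequence in [-1, 1]
idx = bw_subsequence(x, -1.0, 1.0, 12)
# The k-th chosen term lies in an interval of length 2 / 2^k, so successive
# terms of the extracted subsequence cluster together (a Cauchy subsequence).
```

Note the compactness-and-contradiction flavour: the qualitative fact (some half contains infinitely many terms) is what licenses each halving step, while the code only ever manipulates finite data.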
The three types of limits are analogous in many ways, with a number of connections between them. For instance, in the study of groups of polynomial growth, both topological limits (using the metric notion of Gromov-Hausdorff convergence) and logical limits (using the ultralimit construction) are commonly used, and to some extent the two constructions are at least partially interchangeable in this setting. (See also these previous posts for the use of ultralimits as a substitute for topological limits.) In the theory of approximate groups, though, it was observed by Hrushovski that logical limits (and in particular, ultraproducts) are the most useful type of limit to connect finitary approximate groups to their infinitary counterparts. One reason for this is that one is often interested in obtaining results on approximate groups that are uniform in the choice of ambient group $G$. As such, one often seeks to take a limit of approximate groups $A_n$ that lie in completely unrelated ambient groups $G_n$, with no obvious morphisms or metrics tying the $G_n$ to each other. As such, the topological and categorical limits are not easily usable, whereas the logical limits can still be employed without much difficulty.
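For reference, the ultraproduct construction underlying these logical limits (developed in detail in these notes) takes the following shape, where $\alpha$ denotes a nonprincipal ultrafilter on ${\bf N}$:

```latex
% The ultraproduct of spaces X_n with respect to a nonprincipal ultrafilter
% \alpha on N: sequences identified when they agree \alpha-almost everywhere.
\prod_{n \rightarrow \alpha} X_n \;:=\; \Big( \prod_{n \in {\bf N}} X_n \Big) \Big/ \sim_\alpha,
\qquad
(x_n)_{n \in {\bf N}} \sim_\alpha (y_n)_{n \in {\bf N}} \;\iff\; \{\, n \in {\bf N} : x_n = y_n \,\} \in \alpha.
```

In particular, the $X_n$ need not carry any topology, metric, or morphisms relating them to one another, which is exactly why this construction remains usable for approximate groups in unrelated ambient groups.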
Logical limits are closely tied with non-standard analysis. Indeed, by applying an ultraproduct construction to standard number systems such as the natural numbers ${\bf N}$ or the reals ${\bf R}$, one can obtain nonstandard number systems such as the nonstandard natural numbers ${}^* {\bf N}$ or the nonstandard real numbers (or hyperreals) ${}^* {\bf R}$. These nonstandard number systems behave very similarly to their standard counterparts, but also enjoy the advantage of containing the standard number systems as proper subsystems (e.g. ${\bf R}$ is a subring of ${}^* {\bf R}$), which allows for some convenient algebraic manipulations (such as the quotient space construction to create spaces such as ${}^* {\bf R} / {\bf R}$) which are not easily accessible in the purely standard universe. Nonstandard spaces also enjoy a useful completeness property, known as countable saturation, which is analogous to metric completeness (as discussed in this previous blog post) and which will be particularly useful for us in tying together the theory of approximate groups with the theory of Hilbert’s fifth problem. See this previous post for more discussion on ultrafilters and nonstandard analysis.
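The following toy Python sketch, added here for illustration, models elements of the nonstandard naturals by their sequence representatives and compares them “along the ultrafilter”. We can only decide comparisons whose truth value is eventually constant: the cofinite filter is contained in every nonprincipal ultrafilter, so such verdicts agree with the genuine ultrapower. An actual nonprincipal ultrafilter cannot be written down explicitly (its existence requires the axiom of choice, as remarked in the text), so the helpers `eventually`, `embed` and the sampling horizon are illustrative conventions of ours, not a faithful construction.

```python
def eventually(pred, horizon=10_000, tail=1_000):
    """Heuristic check that pred(n) holds for all sufficiently large n,
    by sampling a tail window of indices.  A cofinite-filter proxy."""
    return all(pred(n) for n in range(horizon - tail, horizon))

def embed(k):
    """The standard natural k, embedded as (a representative of) the
    constant sequence (k, k, k, ...)."""
    return lambda n: k

# A representative of an unbounded ("infinite") nonstandard natural:
omega = lambda n: n

# omega exceeds every standard natural k, since omega(n) > k eventually:
for k in (0, 10, 500):
    assert eventually(lambda n, k=k: omega(n) > embed(k)(n))

# Arithmetic is performed pointwise on representatives:
omega_plus_one = lambda n: omega(n) + embed(1)(n)
assert eventually(lambda n: omega_plus_one(n) == n + 1)
```

The point of the exercise: $\omega$ is larger than every element of the embedded copy of ${\bf N}$, which is exactly the sense in which ${\bf N}$ sits as a proper subsystem of ${}^* {\bf N}$.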
In these notes, we lay out the basic theory of ultraproducts and ultralimits (in particular, proving Łoś’s theorem, which roughly speaking asserts that ultralimits are limits in a logical sense, as well as the countable saturation property alluded to earlier). We also lay out some of the basic foundations of nonstandard analysis, although we will not rely too heavily on nonstandard tools in this course. Finally, we apply this general theory to approximate groups, to connect finite approximate groups to an infinitary type of approximate group which we will call an ultra approximate group. We will then study these ultra approximate groups (and models of such groups) in more detail in the next set of notes.
Remark 3 Throughout these notes (and in the rest of the course), we will assume the axiom of choice, in order to easily use ultrafilter-based tools. If one really wanted to expend the effort, though, one could eliminate the axiom of choice from the proofs of the final “finitary” results that one is ultimately interested in proving, at the cost of making the proofs significantly lengthier. Indeed, there is a general result of Gödel that any result which can be stated in the language of Peano arithmetic (which, roughly speaking, means that the result is “finitary” in nature), and can be proven in set theory using the axiom of choice (or more precisely, in the ZFC axiom system), can also be proven in set theory without the axiom of choice (i.e. in the ZF system). As this course is not focused on foundations, we shall simply assume the axiom of choice henceforth to avoid further distraction by such issues.