
Roughly speaking, mathematical analysis can be divided into two major styles, namely hard analysis and soft analysis. The dividing line between the two types of analysis is somewhat blurry (and in some cases one may use a blend of the two styles), but some key differences can be listed as follows.

  • Hard analysis tends to be concerned with quantitative or effective properties such as estimates, upper and lower bounds, convergence rates, and growth rates or decay rates. In contrast, soft analysis tends to be concerned with qualitative or ineffective properties such as existence and uniqueness, finiteness, measurability, continuity, differentiability, connectedness, or compactness.
  • Hard analysis tends to be focused on finitary, finite-dimensional or discrete objects, such as finite sets, finitely generated groups, finite Boolean combinations of boxes or balls, or “finite-complexity” functions, such as polynomials or functions on a finite set. In contrast, soft analysis tends to be focused on infinitary, infinite-dimensional, or continuous objects, such as arbitrary measurable sets or measurable functions, or abstract locally compact groups.
  • Hard analysis tends to involve explicit use of many parameters such as {\epsilon}, {\delta}, {N}, etc. In contrast, soft analysis tends to rely instead on properties such as continuity, differentiability, compactness, etc., which implicitly are defined using a similar set of parameters, but whose parameters often do not make an explicit appearance in arguments.
  • In hard analysis, it is often the case that a key lemma in the literature is not quite optimised for the application at hand, and one has to reprove a slight variant of that lemma (using a variant of the proof of the original lemma) in order for it to be suitable for applications. In contrast, in soft analysis, key results can often be used as “black boxes”, without need of further modification or inspection of the proof.
  • The properties in soft analysis tend to enjoy precise closure properties; for instance, the composition or linear combination of continuous functions is again continuous, and similarly for measurability, differentiability, etc. In contrast, the closure properties in hard analysis tend to be fuzzier, in that the parameters in the conclusion are often different from the parameters in the hypotheses. For instance, the composition of two Lipschitz functions with Lipschitz constant {K} is still Lipschitz, but now with Lipschitz constant {K^2} instead of {K}. These changes in parameters mean that hard analysis arguments often require more “bookkeeping” than their soft analysis counterparts, and are less able to utilise algebraic constructions (e.g. quotient space constructions) that rely heavily on precise closure properties.

In the lectures so far, focusing on the theory surrounding Hilbert’s fifth problem, the results and techniques have fallen well inside the category of soft analysis. However, we will now turn to the theory of approximate groups, a topic which is traditionally studied using the methods of hard analysis. (Later we will also study groups of polynomial growth, a topic which occupies an intermediate position in the spectrum between hard and soft analysis, and which can be profitably analysed using both styles of analysis.)

Despite the superficial differences between hard and soft analysis, though, there are a number of important correspondences between results in hard analysis and results in soft analysis. For instance, if one has some sort of uniform quantitative bound on some expression relating to finitary objects, one can often use limiting arguments to then conclude a qualitative bound on analogous expressions on infinitary objects, by viewing the latter objects as some sort of “limit” of the former objects. Conversely, if one has a qualitative bound on infinitary objects, one can often use compactness and contradiction arguments to recover uniform quantitative bounds on finitary objects as a corollary.

Remark 1 Another type of correspondence between hard analysis and soft analysis, which is “syntactical” rather than “semantical” in nature, arises by taking the proofs of a soft analysis result, and translating such a qualitative proof somehow (e.g. by carefully manipulating quantifiers) into a quantitative proof of an analogous hard analysis result. This type of technique is sometimes referred to as proof mining in the proof theory literature, and is discussed in this previous blog post (and its comments). We will however not employ systematic proof mining techniques here, although in later posts we will informally borrow arguments from infinitary settings (such as the methods used to construct Gleason metrics) and adapt them to finitary ones.

Let us illustrate the correspondence between hard and soft analysis results with a simple example.

Proposition 1 Let {X} be a sequentially compact topological space, let {S} be a dense subset of {X}, and let {f: X \rightarrow [0,+\infty]} be a continuous function (giving the extended half-line {[0,+\infty]} the usual order topology). Then the following statements are equivalent:

  • (i) (Qualitative bound on infinitary objects) For all {x \in X}, one has {f(x) < +\infty}.
  • (ii) (Quantitative bound on finitary objects) There exists {M < +\infty} such that {f(x) \leq M} for all {x \in S}.

In applications, {S} is typically a (non-compact) set of “finitary” (or “finite complexity”) objects of a certain class, and {X} is some sort of “completion” or “compactification” of {S} which admits additional “infinitary” objects that may be viewed as limits of finitary objects.

Proof: To see that (ii) implies (i), observe from density that every point {x} in {X} is adherent to {S}, so that every neighbourhood {U} of {x} contains a point {y \in S \cap U} with {f(y) \leq M}. Since {f} is continuous (so that the set {\{ z \in X: f(z) \leq M \}} is closed), we conclude that {f(x) \leq M} also, and the claim follows.

Conversely, to show that (i) implies (ii), we use the “compactness and contradiction” argument. Suppose for the sake of contradiction that (ii) failed. Then for each natural number {n}, there exists {x_n \in S} such that {f(x_n) \geq n}. (Here we have used the axiom of choice, which we will assume throughout this course.) Using sequential compactness, and passing to a subsequence if necessary, we may assume that the {x_n} converge to a limit {x \in X}. By continuity of {f}, this implies that {f(x) = +\infty}, contradicting (i). \Box
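To make the role of compactness in this dichotomy concrete, here is a small numerical illustration (a sketch in Python; the particular functions and sample sets are arbitrary choices of mine, not taken from the text). On the compact interval {[0,1]}, a continuous function that is finite at every point is automatically uniformly bounded on the dense set of dyadic rationals, whereas on the non-compact space {(0,1]} the function {1/x} is finite at every point but admits no uniform bound.

```python
# Illustrative sketch only: the role of (sequential) compactness in passing
# from pointwise finiteness (i) to a uniform bound (ii) in Proposition 1.
import math

def dyadics(depth):
    """A dense sample of [0,1]: the dyadic rationals k / 2^depth."""
    return [k / 2**depth for k in range(2**depth + 1)]

# Compact case: X = [0,1], f continuous and finite everywhere, hence
# uniformly bounded on the dense subset S (the sup stabilises as depth grows).
f = lambda x: math.exp(math.sin(10 * x)) / (1 + x ** 2)
for depth in (4, 8, 12):
    print("depth", depth, "sup of f on dyadics:", max(f(x) for x in dyadics(depth)))

# Non-compact case: X = (0,1], g(x) = 1/x is finite at every point of X,
# yet no uniform bound M works on the subset {1/n}: the sequence x_n = 1/n
# has no subsequence converging within (0,1], so the contradiction argument fails.
g = lambda x: 1 / x
print("values of g at 1/n:", [g(1 / n) for n in (1, 10, 100, 1000)])
```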

Remark 2 Note that the above deduction of (ii) from (i) is ineffective in that it gives no explicit bound on the uniform bound {M} in (ii). Without any further information on how the qualitative bound (i) is proven, this is the best one can do in general (and this is one of the most significant weaknesses of infinitary methods when used to solve finitary problems); but if one has access to the proof of (i), one can often finitise or proof mine that argument to extract an effective bound for {M}, although often the bound one obtains in the process is quite poor (particularly if the proof of (i) relied extensively on infinitary tools, such as limits). See this blog post for some related discussion.

The above simple example illustrates that in order to get from an “infinitary” statement such as (i) to a “finitary” statement such as (ii), a key step is to be able to take a sequence {(x_n)_{n \in {\bf N}}} (or in some cases, a more general net {(x_\alpha)_{\alpha \in A}}) of finitary objects and extract a suitable infinitary limit object {x}. In the literature, there are three main ways in which one can extract such a limit:

  • (Topological limit) If the {x_n} are all elements of some topological space {S} (e.g. an incomplete function space) which has a suitable “compactification” or “completion” {X} (e.g. a Banach space), then (after passing to a subsequence if necessary) one can often ensure the {x_n} converge in a topological sense (or in a metrical sense) to a limit {x}. The use of this type of limit to pass between quantitative/finitary and qualitative/infinitary results is particularly common in the more analytical areas of mathematics (such as ergodic theory, asymptotic combinatorics, or PDE), due to the abundance of useful compactness results in analysis such as the (sequential) Banach-Alaoglu theorem, Prokhorov’s theorem, the Helly selection theorem, the Arzelà-Ascoli theorem, or even the humble Bolzano-Weierstrass theorem. However, one often has to take care with the nature of convergence, as many compactness theorems only guarantee convergence in a weak sense rather than in a strong one.
  • (Categorical limit) If the {x_n} are all objects in some category (e.g. metric spaces, groups, fields, etc.) with a number of morphisms between the {x_n} (e.g. morphisms from {x_{n+1}} to {x_n}, or vice versa), then one can often form a direct limit {\lim_{\rightarrow} x_n} or inverse limit {\lim_{\leftarrow} x_n} of these objects to form a limiting object {x}. The use of these types of limits to connect quantitative and qualitative results is common in subjects such as algebraic geometry that are particularly amenable to categorical ways of thinking. (We have seen inverse limits appear in the discussion of Hilbert’s fifth problem, although in that context they were not really used to connect quantitative and qualitative results together.)
  • (Logical limit) If the {x_n} are all distinct spaces (or elements or subsets of distinct spaces), with few morphisms connecting them together, then topological and categorical limits are often unavailable or unhelpful. In such cases, however, one can still tie together such objects using an ultraproduct construction (or similar device) to create a limiting object {\lim_{n \rightarrow \alpha} x_n} or limiting space {\prod_{n \rightarrow \alpha} x_n} that is a logical limit of the {x_n}, in the sense that various properties of the {x_n} (particularly those that can be phrased using the language of first-order logic) are preserved in the limit. As such, logical limits are often very well suited for the task of connecting finitary and infinitary mathematics together. Ultralimit type constructions are of course used extensively in logic (particularly in model theory), but are also popular in metric geometry. They can also be used in many of the previously mentioned areas of mathematics, such as algebraic geometry (as discussed in this previous post).
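To make the last item slightly more concrete, the following schematic Python sketch models the basic definitions of the ultraproduct construction, with the ultrafilter abstracted as a membership oracle. (This is purely illustrative and not from the text: a genuinely non-principal ultrafilter cannot be exhibited algorithmically, so the runnable demonstration uses a principal ultrafilter, which degenerates to evaluation at a single coordinate; the class names and interfaces are hypothetical.)

```python
# Schematic sketch of the ultraproduct construction (illustrative only).
# A non-principal ultrafilter on N is non-constructive, so the runnable
# instantiation below uses a principal ultrafilter; the definitions, however,
# are the general ones.

class PrincipalUltrafilter:
    """The ultrafilter of all subsets of the index set containing a fixed point."""
    def __init__(self, point):
        self.point = point
    def contains(self, index_set):
        return self.point in index_set

class UltraproductElement:
    """An element of prod_{n -> alpha} M_n, represented by its coordinates x_n."""
    def __init__(self, coords, indices, ultrafilter):
        self.coords = coords        # mapping: index n -> element of M_n
        self.indices = indices      # the index set (here: a finite window of N)
        self.uf = ultrafilter

    def equals(self, other):
        # x = y in the ultraproduct iff {n : x_n = y_n} lies in the ultrafilter.
        agree = {n for n in self.indices if self.coords[n] == other.coords[n]}
        return self.uf.contains(agree)

    def satisfies(self, predicate):
        # Los-type transfer for an atomic predicate P: the ultraproduct element
        # satisfies P iff {n : P(x_n)} lies in the ultrafilter.
        where = {n for n in self.indices if predicate(self.coords[n])}
        return self.uf.contains(where)

# Demo with a principal ultrafilter concentrated at index 3.
indices = range(10)
uf = PrincipalUltrafilter(3)
x = UltraproductElement({n: n * n for n in indices}, indices, uf)
y = UltraproductElement({n: 9 for n in indices}, indices, uf)
print(x.equals(y))                        # True: x_3 = 9 = y_3
print(x.satisfies(lambda v: v % 2 == 1))  # True: x_3 = 9 is odd
```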

The three types of limits are analogous in many ways, with a number of connections between them. For instance, in the study of groups of polynomial growth, both topological limits (using the metric notion of Gromov-Hausdorff convergence) and logical limits (using the ultralimit construction) are commonly used, and to some extent the two constructions are at least partially interchangeable in this setting. (See also these previous posts for the use of ultralimits as a substitute for topological limits.) In the theory of approximate groups, though, it was observed by Hrushovski that logical limits (and in particular, ultraproducts) are the most useful type of limit to connect finitary approximate groups to their infinitary counterparts. One reason for this is that one is often interested in obtaining results on approximate groups {A} that are uniform in the choice of ambient group {G}. As such, one often seeks to take a limit of approximate groups {A_n} that lie in completely unrelated ambient groups {G_n}, with no obvious morphisms or metrics tying the {G_n} to each other. For such sequences, topological and categorical limits are not easily usable, whereas logical limits can still be employed without much difficulty.

Logical limits are closely tied with non-standard analysis. Indeed, by applying an ultraproduct construction to standard number systems such as the natural numbers {{\bf N}} or the reals {{\bf R}}, one can obtain nonstandard number systems such as the nonstandard natural numbers {{}^* {\bf N}} or the nonstandard real numbers (or hyperreals) {{}^* {\bf R}}. These nonstandard number systems behave very similarly to their standard counterparts, but also enjoy the advantage of containing the standard number systems as proper subsystems (e.g. {{\bf R}} is a subring of {{}^* {\bf R}}), which allows for some convenient algebraic manipulations (such as the quotient space construction to create spaces such as {{}^* {\bf R} / {\bf R}}) which are not easily accessible in the purely standard universe. Nonstandard spaces also enjoy a useful completeness property, known as countable saturation, which is analogous to metric completeness (as discussed in this previous blog post) and which will be particularly useful for us in tying together the theory of approximate groups with the theory of Hilbert’s fifth problem. See this previous post for more discussion on ultrafilters and nonstandard analysis.

In these notes, we lay out the basic theory of ultraproducts and ultralimits (in particular, proving Łoś’s theorem, which roughly speaking asserts that ultralimits are limits in a logical sense, as well as the countable saturation property alluded to earlier). We also lay out some of the basic foundations of nonstandard analysis, although we will not rely too heavily on nonstandard tools in this course. Finally, we apply this general theory to approximate groups, to connect finite approximate groups to an infinitary type of approximate group which we will call an ultra approximate group. We will then study these ultra approximate groups (and models of such groups) in more detail in the next set of notes.

Remark 3 Throughout these notes (and in the rest of the course), we will assume the axiom of choice, in order to easily use ultrafilter-based tools. If one really wanted to expend the effort, though, one could eliminate the axiom of choice from the proofs of the final “finitary” results that one is ultimately interested in proving, at the cost of making the proofs significantly lengthier. Indeed, there is a general result of Gödel that any result which can be stated in the language of Peano arithmetic (which, roughly speaking, means that the result is “finitary” in nature), and can be proven in set theory using the axiom of choice (or more precisely, in the ZFC axiom system), can also be proven in set theory without the axiom of choice (i.e. in the ZF system). As this course is not focused on foundations, we shall simply assume the axiom of choice henceforth to avoid further distraction by such issues.


Many structures in mathematics are incomplete in one or more ways. For instance, the field of rationals {{\bf Q}} and the field of reals {{\bf R}} are algebraically incomplete, because there are some non-trivial algebraic equations (such as {x^2=2} in the case of the rationals, or {x^2=-1} in the case of the reals) which could potentially have solutions (because they do not imply a necessarily false statement, such as {1=0}, just using the laws of algebra), but do not actually have solutions in the specified field.

Similarly, the rationals {{\bf Q}}, when viewed now as a metric space rather than as a field, are also metrically incomplete, because there exist sequences in the rationals (e.g. the decimal approximations {3, 3.1, 3.14, 3.141, \ldots} of the irrational number {\pi}) which could potentially converge to a limit (because they form a Cauchy sequence), but do not actually converge in the specified metric space.

A third type of incompleteness is that of logical incompleteness, which applies now to formal theories rather than to fields or metric spaces. For instance, Zermelo-Fraenkel-Choice (ZFC) set theory is logically incomplete, because there exist statements (such as the consistency of ZFC) which could potentially be provable in the theory (because they do not lead to a contradiction, or at least so we believe, just from the axioms and deductive rules of the theory), but are not actually provable in this theory.

A fourth type of incompleteness, which is slightly less well known than the above three, is what I will call elementary incompleteness (and which model theorists call the failure of the countable saturation property). It applies to any structure that is describable by a first-order language, such as a field, a metric space, or a universe of sets. For instance, in the language of ordered real fields, the real line {{\bf R}} is elementarily incomplete, because there exists a sequence of statements (such as the statements {0 < x < 1/n} for natural numbers {n=1,2,\ldots}) in this language which are potentially simultaneously satisfiable (in the sense that any finite number of these statements can be satisfied by some real number {x}) but are not actually simultaneously satisfiable in this structure.

In each of these cases, though, it is possible to start with an incomplete structure and complete it to a much larger structure to eliminate the incompleteness. For instance, starting with an arbitrary field {k}, one can take its algebraic completion (or algebraic closure) {\overline{k}}; for instance, {{\bf C} = \overline{{\bf R}}} can be viewed as the algebraic completion of {{\bf R}}. This field is usually significantly larger than the original field {k}, but contains {k} as a subfield, and every element of {\overline{k}} can be described as the solution to some polynomial equation with coefficients in {k}. Furthermore, {\overline{k}} is now algebraically complete (or algebraically closed): every polynomial equation in {\overline{k}} which is potentially satisfiable (in the sense that it does not lead to a contradiction such as {1=0} from the laws of algebra), is actually satisfiable in {\overline{k}}.

Similarly, starting with an arbitrary metric space {X}, one can take its metric completion {\overline{X}}; for instance, {{\bf R} = \overline{{\bf Q}}} can be viewed as the metric completion of {{\bf Q}}. Again, the completion {\overline{X}} is usually much larger than the original metric space {X}, but contains {X} as a subspace, and every element of {\overline{X}} can be described as the limit of some Cauchy sequence in {X}. Furthermore, {\overline{X}} is now a complete metric space: every sequence in {\overline{X}} which is potentially convergent (in the sense of being a Cauchy sequence), is now actually convergent in {\overline{X}}.

In a similar vein, we have the Gödel completeness theorem, which implies (among other things) that for any consistent first-order theory {T} for a first-order language {L}, there exists at least one completion {\overline{T}} of that theory {T}, which is a consistent theory in which every sentence in {L} which is potentially true in {\overline{T}} (because it does not lead to a contradiction in {\overline{T}}) is actually true in {\overline{T}}. Indeed, the completeness theorem provides at least one model (or structure) {{\mathfrak U}} of the consistent theory {T}, and then the completion {\overline{T} = \hbox{Th}({\mathfrak U})} can be formed by interpreting every sentence in {L} using {{\mathfrak U}} to determine its truth value. Note, in contrast to the previous two examples, that the completion is usually not unique in any way; a theory {T} can have multiple inequivalent models {{\mathfrak U}}, giving rise to distinct completions of the same theory.

Finally, if one starts with an arbitrary structure {{\mathfrak U}}, one can form an elementary completion {{}^* {\mathfrak U}} of it, which is a significantly larger structure which contains {{\mathfrak U}} as a substructure, and such that every element of {{}^* {\mathfrak U}} is an elementary limit of a sequence of elements in {{\mathfrak U}} (I will define this term shortly). Furthermore, {{}^* {\mathfrak U}} is elementarily complete; any sequence of statements that are potentially simultaneously satisfiable in {{}^* {\mathfrak U}} (in the sense that any finite number of statements in this collection are simultaneously satisfiable), will actually be simultaneously satisfiable. As we shall see, one can form such an elementary completion by taking an ultrapower of the original structure {{\mathfrak U}}. If {{\mathfrak U}} is the standard universe of all the standard objects one considers in mathematics, then its elementary completion {{}^* {\mathfrak U}} is known as the nonstandard universe, and is the setting for nonstandard analysis.

As mentioned earlier, completion tends to make a space much larger and more complicated. If one algebraically completes a finite field, for instance, one necessarily obtains an infinite field as a consequence. If one metrically completes a countable metric space with no isolated points, such as {{\bf Q}}, then one necessarily obtains an uncountable metric space (thanks to the Baire category theorem). If one takes a logical completion of a consistent first-order theory that can model true arithmetic, then this completion is no longer describable by a recursively enumerable schema of axioms, thanks to Gödel’s incompleteness theorem. And if one takes the elementary completion of a countable structure, such as the integers {{\bf Z}}, then the resulting completion {{}^* {\bf Z}} will necessarily be uncountable.

However, there are substantial benefits to working in the completed structure which can make it well worth the massive increase in size. For instance, by working in the algebraic completion of a field, one gains access to the full power of algebraic geometry. By working in the metric completion of a metric space, one gains access to powerful tools of real analysis, such as the Baire category theorem, the Heine-Borel theorem, and (in the case of Euclidean completions) the Bolzano-Weierstrass theorem. By working in a logically and elementarily completed model (aka a saturated model) of a first-order theory, one gains access to the branch of model theory known as definability theory, which allows one to analyse the structure of definable sets in much the same way that algebraic geometry allows one to analyse the structure of algebraic sets. Finally, when working in an elementary completion of a structure, one gains a sequential compactness property, analogous to the Bolzano-Weierstrass theorem, which can be interpreted as the foundation for much of nonstandard analysis, as well as providing a unifying framework to describe various correspondence principles between finitary and infinitary mathematics.

In this post, I wish to expand upon these above points with regard to elementary completion, and to present nonstandard analysis as a completion of standard analysis in much the same way as, say, complex algebra is a completion of real algebra, or real metric geometry is a completion of rational metric geometry.


In the course of the ongoing logic reading seminar at UCLA, I learned about the property of countable saturation. A model {M} of a language {L} is countably saturated if every countable sequence {P_1(x), P_2(x), \ldots} of formulae in {L} (involving countably many constants in {M}) which is finitely satisfiable in {M} (i.e. any finite collection {P_1(x),\ldots,P_n(x)} in the sequence has a solution {x} in {M}), is automatically satisfiable in {M} (i.e. there is a solution {x} to all {P_n(x)} simultaneously). Equivalently, a model is countably saturated if the topology generated by the definable sets is countably compact.

Update, Nov 19: I have learned that the above terminology is not quite accurate; countable saturation allows for an uncountable sequence of formulae, as long as the constants used remain finite. So, the discussion here involves a weaker property than countable saturation, which I do not know the official term for. If one chooses a special type of ultrafilter, namely a “countably incomplete” ultrafilter, one can recover the full strength of countable saturation, though it is not needed for the remarks here.

Most models are not countably saturated. Consider for instance the standard natural numbers {{\Bbb N}} as a model for arithmetic. Then the sequence of formulae “{x > n}” for {n=1,2,3,\ldots} is finitely satisfiable in {{\Bbb N}}, but not satisfiable.

However, if one takes a model {M} of {L} and passes to an ultrapower {*M}, whose elements {x} consist of sequences {(x_n)_{n \in {\Bbb N}}} in {M}, modulo equivalence with respect to some fixed non-principal ultrafilter {p}, then it turns out that such models are automatically countably saturated. Indeed, if {P_1(x), P_2(x), \ldots} are finitely satisfiable in {*M}, then they are also finitely satisfiable in {M} (either by inspection, or by appeal to Łoś’s theorem and/or the transfer principle in non-standard analysis), so for each {n} there exists {x_n \in M} which satisfies {P_1,\ldots,P_n}. Letting {x = (x_n)_{n \in {\Bbb N}} \in *M} be the ultralimit of the {x_n}, we see that {x} satisfies all of the {P_n} at once.
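The key bookkeeping step here is that the diagonal sequence {(x_n)} satisfies each fixed {P_m} on a cofinite set of indices (namely all {n \geq m}), and cofinite sets belong to every non-principal ultrafilter. One can check this index bookkeeping concretely; the following small Python sketch (using the family of predicates “{x > m}” from the earlier example) is purely illustrative.

```python
# Sketch of the diagonalisation step: predicates P_m(x): "x > m" over N.
# Any finite collection P_1, ..., P_n is satisfied in N (e.g. by x = n),
# so a diagonal witness is x_n = n.
def P(m):
    return lambda x: x > m

window = range(1, 2001)          # a finite window of the index set N
x = {n: n for n in window}       # the diagonal sequence x_n = n

# For each fixed m, the set {n : P_m(x_n) fails} = {1, ..., m} is finite, so
# {n : P_m(x_n)} is cofinite and hence lies in every non-principal ultrafilter;
# the ultralimit of (x_n) therefore satisfies every P_m simultaneously, even
# though no standard natural number does.
for m in (1, 5, 50, 500):
    failures = [n for n in window if not P(m)(x[n])]
    print(f"P_{m} fails at {len(failures)} indices (all of them at most {m})")
```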

In particular, non-standard models of mathematics, such as the non-standard model {*{\Bbb N}} of the natural numbers, are automatically countably saturated.

This has some cute consequences. For instance, suppose one has a non-standard metric space {*X} (an ultralimit of standard metric spaces), and suppose one has a standard sequence {(x_n)_{n \in {\mathbb N}}} of elements of {*X} which are standard-Cauchy, in the sense that for any standard {\varepsilon > 0} one has {d( x_n, x_m ) < \varepsilon} for all sufficiently large {n,m}. Then there exists a non-standard element {x \in *X} such that {x_n} standard-converges to {x} in the sense that for every standard {\varepsilon > 0} one has {d(x_n, x) < \varepsilon} for all sufficiently large {n}. Indeed, from the standard-Cauchy hypothesis, one can find a standard {\varepsilon(n) > 0} for each standard {n} that goes to zero (in the standard sense), such that the formulae “{d(x_n,x) < \varepsilon(n)}” are finitely satisfiable, and hence satisfiable by countable saturation. Thus we see that non-standard metric spaces are automatically “standardly complete” in some sense.

This leads to a non-standard structure theorem for Hilbert spaces, analogous to the usual orthogonal decomposition in Hilbert spaces:

Theorem 1 (Non-standard structure theorem for Hilbert spaces) Let {*H} be a non-standard Hilbert space, let {S} be a bounded (external) subset of {*H}, and let {x \in *H}. Then there exists a decomposition {x = x_S + x_{S^\perp}}, where {x_S \in *H} is “almost standard-generated by {S}” in the sense that for every standard {\varepsilon > 0}, there exists a standard finite linear combination of elements of {S} which is within {\varepsilon} of {x_S}, and {x_{S^\perp} \in *H} is “standard-orthogonal to {S}” in the sense that {\langle x_{S^\perp}, s\rangle = o(1)} for all {s \in S}.

Proof: Let {d} be the infimum of all the (standard) distances from {x} to a standard linear combination of elements of {S}; then for every standard {n} one can find a standard linear combination {x_n} of elements of {S} which lies within {d+1/n} of {x}. From the parallelogram law we see that the sequence {x_n} is standard-Cauchy, and thus standard-converges to some limit {x_S \in *H}, which is then almost standard-generated by {S} by construction. An application of Pythagoras’ theorem then shows that {x_{S^\perp} := x-x_S} is standard-orthogonal to every element of {S}. \Box
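In the finitary setting the analogous decomposition is simply orthogonal projection onto the span of {S}, computed for instance by least squares. The following Python sketch (using numpy, with an arbitrary made-up choice of vectors, so not taken from the text) illustrates the decomposition {x = x_S + x_{S^\perp}} and the two properties it is asserted to have.

```python
# Finitary analogue of Theorem 1: orthogonal decomposition in a finite-dimensional
# Hilbert space, x = x_S + x_perp, with x_S in the span of S and x_perp
# orthogonal to every element of S.
import numpy as np

rng = np.random.default_rng(0)
H_dim = 8
S = rng.standard_normal((3, H_dim))   # three spanning vectors (the set S)
x = rng.standard_normal(H_dim)

# Least squares: find coefficients c minimising |x - c @ S|, i.e. project x onto span(S).
c, *_ = np.linalg.lstsq(S.T, x, rcond=None)
x_S = c @ S
x_perp = x - x_S

print("inner products <x_perp, s> for s in S:", S @ x_perp)   # all ~0
print("norm identity (Pythagoras):",
      np.allclose(np.dot(x, x), np.dot(x_S, x_S) + np.dot(x_perp, x_perp)))
```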

This is the non-standard analogue of a combinatorial structure theorem for Hilbert spaces (see e.g. Theorem 2.6 of my FOCS paper). There is an analogous non-standard structure theorem for {\sigma}-algebras (the counterpart of Theorem 3.6 from that paper) which I will not discuss here, but I will give just one sample corollary:

Theorem 2 (Non-standard arithmetic regularity lemma) Let {*G} be a non-standardly finite abelian group, and let {f: *G \rightarrow [0,1]} be a function. Then one can split {f = f_{U^\perp} + f_U}, where {f_U: *G \rightarrow [-1,1]} is standard-uniform in the sense that all Fourier coefficients are (uniformly) {o(1)}, and {f_{U^\perp}: *G \rightarrow [0,1]} is standard-almost periodic in the sense that for every standard {\varepsilon > 0}, one can approximate {f_{U^\perp}} to error {\varepsilon} in {L^1(*G)} norm by a standard linear combination of characters (which is also bounded).

This can be used for instance to give a non-standard proof of Roth’s theorem (which is not much different from the “finitary ergodic” proof of Roth’s theorem, given for instance in Section 10.5 of my book with Van Vu). There is also a non-standard version of the Szemerédi regularity lemma which can be used, among other things, to prove the hypergraph removal lemma (the proof then becomes rather close to the infinitary proof of this lemma in this paper of mine). More generally, the above structure theorem can be used as a substitute for various “energy increment arguments” in the combinatorial literature, though it does not seem that there is a significant saving in complexity in doing so unless one is performing quite a large number of these arguments.
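For comparison, in the genuinely finite setting the analogous splitting on a cyclic group {{\Bbb Z}/N{\Bbb Z}} can be carried out by truncating the Fourier expansion: the structured part keeps the large Fourier coefficients and the uniform part keeps the rest. The Python sketch below (using numpy; the group size, test function, and threshold are arbitrary choices of mine, and the pointwise bounds {[0,1]} and {[-1,1]} asserted in Theorem 2 require a more careful argument which the sketch does not attempt) illustrates the idea.

```python
# Finitary analogue of Theorem 2 on Z/NZ: split f into a "structured" part
# supported on the large Fourier coefficients and a "uniform" part whose
# Fourier coefficients are all small.
import numpy as np

N = 1024
rng = np.random.default_rng(1)
n = np.arange(N)
# A test function: a few characters plus a little noise, clipped to [0,1].
f = np.clip(0.5 + 0.3 * np.cos(2 * np.pi * 5 * n / N)
                + 0.1 * np.cos(2 * np.pi * 37 * n / N)
                + 0.05 * rng.standard_normal(N), 0, 1)

f_hat = np.fft.fft(f) / N                 # normalised Fourier coefficients
threshold = 0.03
large = np.abs(f_hat) >= threshold

f_structured = np.real(np.fft.ifft(np.where(large, f_hat, 0) * N))
f_uniform = f - f_structured

print("number of characters in structured part:", int(large.sum()))
print("largest Fourier coefficient of uniform part:",
      float(np.max(np.abs(np.fft.fft(f_uniform) / N))))   # below the threshold
```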

One can also cast density increment arguments in a nonstandard framework. Here is a typical example. Call a non-standard subset {X} of a non-standard finite set {Y} dense if one has {|X| \geq \varepsilon |Y|} for some standard {\varepsilon > 0}.

Theorem 3 Suppose Szemerédi’s theorem (every set of integers of positive upper density contains an arithmetic progression of length {k}) fails for some {k}. Then there exists an unbounded non-standard integer {N}, a dense subset {A} of {[N] := \{1,\ldots,N\}} with no progressions of length {k}, and with the additional property that

\displaystyle  \frac{|A \cap P|}{|P|} \leq \frac{|A \cap [N]|}{N} + o(1)

for any subprogression {P} of {[N]} of unbounded size (thus there is no sizeable density increment on any large progression).

Proof: Let {B \subset {\Bbb N}} be a (standard) set of positive upper density which contains no progression of length {k}. Let {\delta := \limsup_{|P| \rightarrow \infty} |B \cap P|/|P|} be the asymptotic maximal density of {B} inside a long progression, thus {\delta > 0}. For any {n > 0}, one can then find a standard integer {N_n \geq n} and a standard subset {A_n} of {[N_n]} of density at least {\delta-1/n} such that {A_n} can be embedded (after a linear transformation) inside {B}, so in particular {A_n} has no progressions of length {k}. Applying the saturation property, one can then find an unbounded {N} and a subset {A} of {[N]} of density at least {\delta-1/n} for every standard {n} (i.e. of density at least {\delta-o(1)}) with no progressions of length {k}. By construction, we also see that for any subprogression {P} of {[N]} of unbounded size, {A} has density at most {\delta+1/n} inside {P} for every standard {n}, thus has density at most {\delta+o(1)}, and the claim follows. \Box

This can be used as the starting point for any density-increment based proof of Szemerédi’s theorem for a fixed {k}, e.g. Roth’s proof for {k=3}, Gowers’ proof for arbitrary {k}, or Szemerédi’s proof for arbitrary {k}. (It is likely that Szemerédi’s proof, in particular, simplifies a little bit when translated to the non-standard setting, though the savings are likely to be modest.)

I’m also hoping that the recent results of Hrushovski on the noncommutative Freiman problem require only countable saturation, as this makes it more likely that they can be translated to a non-standard setting and thence to a purely finitary framework.
