
A major topic of interest of analytic number theory is the asymptotic behaviour of the Riemann zeta function {\zeta} in the critical strip {\{ \sigma+it: 0 < \sigma < 1; t \in {\bf R} \}} in the limit {t \rightarrow +\infty}. For the purposes of this set of notes, it is a little simpler technically to work with the log-magnitude {\log |\zeta|: {\bf C} \rightarrow [-\infty,+\infty]} of the zeta function. (In principle, one can reconstruct a branch of {\log \zeta}, and hence {\zeta} itself, from {\log |\zeta|} using the Cauchy-Riemann equations, or tools such as the Borel-Carathéodory theorem, see Exercise 40 of Supplement 2.)

One has the classical estimate

\displaystyle \zeta(\sigma+it) = O( t^{O(1)} )

when {\sigma = O(1)} and {t \geq 10} (say), so that

\displaystyle \log |\zeta(\sigma+it)| \leq O( \log t ). \ \ \ \ \ (1)

 

(See e.g. Exercise 37 from Supplement 3.) In view of this, let us define the normalised log-magnitudes {F_T: {\bf C} \rightarrow [-\infty,+\infty]} for any {T \geq 10} by the formula

\displaystyle F_T( \sigma + it ) := \frac{1}{\log T} \log |\zeta( \sigma + i(T + t) )|;

informally, this is a normalised window into {\log |\zeta|} near {iT}. One can rephrase several assertions about the zeta function in terms of the asymptotic behaviour of {F_T}. For instance:

  • (i) The bound (1) implies that {F_T} is asymptotically locally bounded from above in the limit {T \rightarrow \infty}, thus for any compact set {K \subset {\bf C}} we have {F_T(\sigma+it) \leq O_K(1)} for {\sigma+it \in K} and {T} sufficiently large. In fact the implied constant in {K} only depends on the projection of {K} to the real axis.
  • (ii) For {\sigma > 1}, we have the bounds

    \displaystyle |\zeta(\sigma+it)|, \frac{1}{|\zeta(\sigma+it)|} \leq \zeta(\sigma)

    which imply that {F_T} converges locally uniformly as {T \rightarrow +\infty} to zero in the region {\{ \sigma+it: \sigma > 1, t \in {\bf R} \}}.

  • (iii) The functional equation, together with the symmetry {\zeta(\sigma-it) = \overline{\zeta(\sigma+it)}}, implies that

    \displaystyle |\zeta(\sigma+it)| = 2^\sigma \pi^{\sigma-1} |\sin \frac{\pi(\sigma+it)}{2}| |\Gamma(1-\sigma-it)| |\zeta(1-\sigma+it)|

    which by Exercise 17 of Supplement 3 shows that

    \displaystyle F_T( 1-\sigma+it ) = \sigma - \frac{1}{2} + F_T(\sigma+it) + o(1)

    as {T \rightarrow \infty}, locally uniformly in {\sigma+it}. In particular, when combined with the previous item, we see that {F_T(\sigma+it)} converges locally uniformly as {T \rightarrow +\infty} to {\frac{1}{2}-\sigma} in the region {\{ \sigma+it: \sigma < 0, t \in {\bf R}\}}.

  • (iv) From Jensen’s formula (Theorem 16 of Supplement 2) we see that {\log|\zeta|} is a subharmonic function, and thus {F_T} is subharmonic as well. In particular we have the mean value inequality

    \displaystyle F_T( z_0 ) \leq \frac{1}{\pi r^2} \int_{z: |z-z_0| \leq r} F_T(z)

    for any disk {\{ z: |z-z_0| \leq r \}}, where the integral is with respect to area measure. From this and (ii) we conclude that

    \displaystyle \int_{z: |z-z_0| \leq r} F_T(z) \geq -O_{z_0,r}(1)

    for any disk with {\hbox{Re}(z_0)>1} and sufficiently large {T}; combining this with (i) we conclude that {F_T} is asymptotically locally bounded in {L^1} in the limit {T \rightarrow \infty}, thus for any compact set {K \subset {\bf C}} we have {\int_K |F_T| \ll_K 1} for sufficiently large {T}.
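As a quick numerical illustration of (ii) and (iii), one can evaluate {F_T} directly; the sketch below uses the mpmath library, and the height {T = 1000} and sample points are arbitrary choices (the {o(1)} errors are still clearly visible at such a modest height):

```python
# Numerical sanity check of the normalised log-magnitude F_T.
# T = 1000 is an arbitrary sample height; convergence as T -> infinity is slow.
from mpmath import mp, zeta, log, fabs

mp.dps = 15

def F(T, z):
    """F_T(z) = log|zeta(Re z + i(T + Im z))| / log T."""
    s = mp.mpc(z.real, T + z.imag)
    return float(log(fabs(zeta(s))) / log(T))

T = 1000.0
# (ii): F_T should be close to 0 for sigma > 1.
print(F(T, complex(2.0, 0.0)))
# (iii): F_T should approach 1/2 - sigma = 3/2 at sigma = -1,
# though at this height the o(1) error is still sizeable.
print(F(T, complex(-1.0, 0.0)))
```

At larger heights the second value creeps towards {3/2}, in line with the locally uniform convergence asserted in (iii).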

From (iv) and the usual Arzelà-Ascoli diagonalisation argument, we see that the {F_T} are asymptotically compact in the topology of distributions: given any sequence {T_n} tending to {+\infty}, one can extract a subsequence such that the {F_{T_n}} converge in the sense of distributions. Let us then define a normalised limit profile of {\log|\zeta|} to be a distributional limit {F} of a sequence of {F_{T_n}}; they are analogous to limiting profiles in PDE, and also to the more recent introduction of “graphons” in the theory of graph limits. Then by taking limits in (i)-(iv) we can say a lot about such normalised limit profiles {F} (up to almost everywhere equivalence, which is an issue we will address shortly):

  • (i) {F} is bounded from above in the critical strip {\{ \sigma+it: 0 \leq \sigma \leq 1 \}}.
  • (ii) {F} vanishes on {\{ \sigma+it: \sigma \geq 1\}}.
  • (iii) We have the functional equation {F(1-\sigma+it) = \sigma - \frac{1}{2} + F(\sigma+it)} for all {\sigma+it}. In particular {F(\sigma+it) = \frac{1}{2}-\sigma} for {\sigma<0}.
  • (iv) {F} is subharmonic.

Unfortunately, (i)-(iv) fail to characterise {F} completely. For instance, one could have {F(\sigma+it) = f(\sigma)} for any convex function {f(\sigma)} of {\sigma} that equals {0} for {\sigma \geq 1}, equals {\frac{1}{2}-\sigma} for {\sigma \leq 0}, and obeys the functional equation {f(1-\sigma) = \sigma - \frac{1}{2}+f(\sigma)}, and this would be consistent with (i)-(iv). One can also perturb such examples in a region where {f} is strictly convex to create further examples of functions obeying (i)-(iv). Note from subharmonicity that the function {\sigma \mapsto \sup_t F(\sigma+it)} is always going to be convex in {\sigma}; this can be seen as a limiting case of the Hadamard three-lines theorem (Exercise 41 of Supplement 2).
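In particular, the profile {\max(0, \frac{1}{2}-\sigma)} predicted by the Lindelöf hypothesis is one such candidate; its consistency with these properties can be checked mechanically (a small grid check only; the grid is an arbitrary choice):

```python
# Check that f(sigma) = max(0, 1/2 - sigma) is consistent with the
# properties (i)-(iv) of a normalised limit profile (grid check only).
def f(sigma):
    return max(0.0, 0.5 - sigma)

grid = [i / 100.0 for i in range(-300, 301)]  # sigma in [-3, 3]

for s in grid:
    if s >= 1.0:
        assert f(s) == 0.0                    # vanishes for sigma >= 1
    if s <= 0.0:
        assert abs(f(s) - (0.5 - s)) < 1e-12  # equals 1/2 - sigma for sigma <= 0
    # functional equation linking sigma and 1 - sigma
    assert abs(f(1.0 - s) - (s - 0.5 + f(s))) < 1e-12
    # midpoint convexity (what subharmonicity gives for t-independent profiles)
    assert f(s) <= 0.5 * (f(s - 0.01) + f(s + 0.01)) + 1e-12

print("all profile checks passed")
```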

We pause to address one minor technicality. We have defined {F} as a distributional limit, and as such it is a priori only defined up to almost everywhere equivalence. However, due to subharmonicity, there is a unique upper semi-continuous representative of {F} (taking values in {[-\infty,+\infty)}), defined by the formula

\displaystyle F(z_0) = \lim_{r \rightarrow 0^+} \frac{1}{\pi r^2} \int_{B(z_0,r)} F(z)\ dz

for any {z_0 \in {\bf C}} (note from subharmonicity that the expression in the limit is monotone nonincreasing as {r \rightarrow 0}, and is also continuous in {z_0}). We will now view this upper semi-continuous representative of {F} as the canonical representative of {F}, so that {F} is now defined everywhere, rather than up to almost everywhere equivalence.
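The monotonicity of these disk averages is easy to see concretely for the model subharmonic function {\log|z|}, whose average over the disk {\{ z: |z| \leq r\}} works out in closed form to {\log r - \frac{1}{2}} (a small quadrature sketch; the radii are arbitrary choices):

```python
import math

def disk_average(r, n=4000):
    """Average of log|z| over the disk |z| <= r, by the midpoint rule in the
    radial variable (the integrand is radial, so the angular integral is trivial)."""
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * r / n          # midpoint radius
        total += s * math.log(s) * (r / n)
    return 2.0 * total / r**2          # (1/(pi r^2)) * 2*pi * radial integral

# The averages decrease as r -> 0+, consistent with subharmonicity,
# and match the closed form log r - 1/2.
for r in [1.0, 0.5, 0.25]:
    print(r, disk_average(r), math.log(r) - 0.5)
```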

By a classical theorem of Riesz, a function {F} is subharmonic if and only if the distribution {\Delta F} is a non-negative measure, where {\Delta := \frac{\partial^2}{\partial \sigma^2} + \frac{\partial^2}{\partial t^2}} is the Laplacian in the {\sigma,t} coordinates. Jensen’s formula (or Green’s theorem), when interpreted distributionally, tells us that

\displaystyle \Delta \log |\zeta| = 2\pi \sum_\rho \delta_\rho

away from the real axis, where {\rho} ranges over the non-trivial zeroes of {\zeta}. Thus, if {F} is a normalised limit profile for {\log |\zeta|} that is the distributional limit of {F_{T_n}}, then we have

\displaystyle \Delta F = \nu

where {\nu} is a non-negative measure which is the limit in the vague topology of the measures

\displaystyle \nu_{T_n} := \frac{2\pi}{\log T_n} \sum_\rho \delta_{\rho - iT_n}.

Thus {\nu} is a normalised limit profile of the zeroes of the Riemann zeta function.
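For orientation, the zero-counting asymptotics underlying this normalisation can be glimpsed numerically: classically, the number of zeroes per unit height near height {T} is roughly {\frac{1}{2\pi} \log \frac{T}{2\pi}}. The sketch below checks this against mpmath's zetazero routine; the window {[100,110]} is an arbitrary choice, and at such small heights the agreement is only rough:

```python
# Count zeta zeroes with imaginary part in [100, 110] and compare with
# the classical density ~ (1/(2*pi)) * log(T/(2*pi)) zeroes per unit height.
import math
from mpmath import zetazero

lo, hi = 100.0, 110.0
count = 0
for n in range(20, 41):            # zeroes 20..40 safely cover heights up to 110
    gamma = float(zetazero(n).imag)
    if lo <= gamma <= hi:
        count += 1

predicted = (hi - lo) / (2 * math.pi) * math.log(lo / (2 * math.pi))
print(count, predicted)            # the two should be roughly comparable
```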

Using this machinery, we can recover many classical theorems about the Riemann zeta function by “soft” arguments that do not require extensive calculation. Here are some examples:

Theorem 1 The Riemann hypothesis implies the Lindelöf hypothesis.

Proof: It suffices to show that any limiting profile {F} (arising as the limit of some {F_{T_n}}) vanishes on the critical line {\{1/2+it: t \in {\bf R}\}}. But if the Riemann hypothesis holds, then the measures {\nu_{T_n}} are supported on the critical line {\{1/2+it: t \in {\bf R}\}}, so the normalised limit profile {\nu} is also supported on this line. This implies that {F} is harmonic outside of the critical line. By (ii) and unique continuation for harmonic functions, this implies that {F} vanishes on the half-space {\{ \sigma+it: \sigma \geq \frac{1}{2} \}} (and equals {\frac{1}{2}-\sigma} on the complementary half-space, by (iii)), giving the claim. \Box

In fact, we have the following sharper statement:

Theorem 2 (Backlund) The Lindelöf hypothesis is equivalent to the assertion that for any fixed {\sigma_0 > \frac{1}{2}}, the number of zeroes in the region {\{ \sigma+it: \sigma > \sigma_0, T \leq t \leq T+1 \}} is {o(\log T)} as {T \rightarrow \infty}.

Proof: If the latter claim holds, then for any {T_n \rightarrow \infty}, the measures {\nu_{T_n}} assign a mass of {o(1)} to any region of the form {\{ \sigma+it: \sigma > \sigma_0; t_0 \leq t \leq t_0+1 \}} as {n \rightarrow \infty} for any fixed {\sigma_0>\frac{1}{2}} and {t_0 \in {\bf R}}. Thus the normalised limiting profile measure {\nu} is supported on the critical line, and we can repeat the previous argument.

Conversely, suppose the claim fails; then we can find a sequence {T_n} and {\sigma_0>\frac{1}{2}} such that {\nu_{T_n}} assigns a mass of {\gg 1} to the region {\{ \sigma+it: \sigma > \sigma_0; 0\leq t \leq 1 \}}. Extracting a normalised limiting profile, we conclude that the normalised limiting profile measure {\nu} is non-trivial somewhere to the right of the critical line, so the associated subharmonic function {F} is not harmonic everywhere to the right of the critical line. From the maximum principle and (ii) this implies that {F} has to be positive somewhere on the critical line, but this contradicts the Lindelöf hypothesis. (One has to take a bit of care in the last step since {F_{T_n}} only converges to {F} in the sense of distributions, but it turns out that the subharmonicity of all the functions involved gives enough regularity to justify the argument; we omit the details here.) \Box

Theorem 3 (Littlewood) Assume the Lindelöf hypothesis. Then for any fixed {\alpha>0}, the number of zeroes in the region {\{ \sigma+it: T \leq t \leq T+\alpha \}} is {(\frac{\alpha}{2\pi}+o(1)) \log T} as {T \rightarrow +\infty}.

Proof: By the previous arguments, the only possible normalised limiting profile for {\log |\zeta|} is {\max( 0, \frac{1}{2}-\sigma )}. Taking distributional Laplacians, we see that the only possible normalised limiting profile for the zeroes is Lebesgue measure on the critical line. Thus, {\nu_T( \{\sigma+it: 0 \leq t \leq \alpha \} )} can only converge to {\alpha} as {T \rightarrow +\infty}, and the claim follows. \Box

Even without the Lindelöf hypothesis, we have the following result:

Theorem 4 (Titchmarsh) For any fixed {\alpha>0}, there are {\gg_\alpha \log T} zeroes in the region {\{ \sigma+it: T \leq t \leq T+\alpha \}} for sufficiently large {T}.

Among other things, this theorem recovers a classical result of Littlewood that the gaps between the imaginary parts of the zeroes go to zero, even without assuming unproven conjectures such as the Riemann or Lindelöf hypotheses.

Proof: Suppose for contradiction that this were not the case; then we can find {\alpha > 0} and a sequence {T_n \rightarrow \infty} such that {\{ \sigma+it: T_n \leq t \leq T_n+\alpha \}} contains {o(\log T_n)} zeroes. Passing to a subsequence to extract a limit profile, we conclude that the normalised limit profile measure {\nu} assigns no mass to the horizontal strip {\{ \sigma+it: 0 \leq t \leq\alpha \}}. Thus the associated subharmonic function {F} is actually harmonic on this strip. But by (ii) and unique continuation this forces {F} to vanish on this strip, contradicting the functional equation (iii). \Box

Exercise 5 Use limiting profiles to obtain the matching upper bound of {O_\alpha(\log T)} for the number of zeroes in {\{ \sigma+it: T \leq t \leq T+\alpha \}} for sufficiently large {T}.

Remark 6 One can remove the need to take limiting profiles in the above arguments if one can come up with quantitative (or “hard”) substitutes for qualitative (or “soft”) results such as the unique continuation property for harmonic functions. This would also allow one to replace the qualitative decay rates {o(1)} with more quantitative decay rates such as {1/\log \log T} or {1/\log\log\log T}. Indeed, the classical proofs of the above theorems come with quantitative bounds that are typically of this form (see e.g. the text of Titchmarsh for details).

Exercise 7 Let {S(T)} denote the quantity {S(T) := \frac{1}{\pi} \hbox{arg} \zeta(\frac{1}{2}+iT)}, where the branch of the argument is taken by using a line segment connecting {\frac{1}{2}+iT} to (say) {2+iT}, and then to {2}. If we have a sequence {T_n \rightarrow \infty} producing normalised limit profiles {F, \nu} for {\log|\zeta|} and the zeroes respectively, show that {t \mapsto \frac{1}{\log T_n} S(T_n + t)} converges in the sense of distributions to the function {t \mapsto \frac{1}{\pi} \int_{1/2}^1 \frac{\partial F}{\partial t}(\sigma+it)\ d\sigma}, or equivalently

\displaystyle t \mapsto \frac{1}{2\pi} \frac{\partial}{\partial t} \int_0^1 F(\sigma+it)\ d\sigma.

Conclude in particular that if the Lindelöf hypothesis holds, then {S(T) = o(\log T)} as {T \rightarrow \infty}.

A little more is known unconditionally about the normalised limit profiles {F}, beyond (i)-(iv). For instance, from Exercise 3 of Notes 5 we have {\zeta(1/2 + it ) = O( t^{1/6+o(1)} )} as {t \rightarrow +\infty}, which implies that any normalised limit profile {F} for {\log|\zeta|} is bounded by {1/6} on the critical line, beating the bound of {1/4} coming from convexity and (ii), (iii); convexity can then be used to further bound {F} away from the critical line as well. Some further small improvements of this type are known (coming from various methods for estimating exponential sums), though they fall well short of determining {F} completely at our current level of understanding. Of course, given that we believe the Riemann hypothesis (and hence the Lindelöf hypothesis) to be true, the only limit profile that should actually occur is {\max(0,\frac{1}{2}-\sigma)} (in fact this assertion is equivalent to the Lindelöf hypothesis, by the arguments above).

Better control on limiting profiles is available if we do not insist on controlling {\zeta} for all values of the height parameter {T}, but only for most such values, thanks to the existence of several mean value theorems for the zeta function, as discussed in Notes 6; we discuss this below the fold.


(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.

The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

(Discrete) | (Continuous) | (Limit method)
Ramsey theory | Topological dynamics | Compactness
Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle
Graph/hypergraph regularity | Measure theory | Graph limits
Polynomial regularity | Linear algebra | Ultralimits
Structural decompositions | Hilbert space geometry | Ultralimits
Fourier analysis | Spectral theory | Direct and inverse limits
Quantitative algebraic geometry | Algebraic geometry | Schemes
Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits
Approximate group theory | Topological group theory | Model theory

As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

  • Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects {x_n} in a common space {X}, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object {\lim_{n \rightarrow \infty} x_n}, which remains in the same space, and is “close” to many of the original objects {x_n} with respect to the given metric or topology.
  • Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects {x_n} in a category {X}, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit {\varinjlim x_n} or the inverse limit {\varprojlim x_n} of these objects, which is another object in the same category {X}, and is connected to the original objects {x_n} by various morphisms.
  • Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects {x_{\bf n}} or of spaces {X_{\bf n}}, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, {X_{\bf n}} might be groups and {x_{\bf n}} might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}} or a new space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}, which is still a model of the same language (e.g. if the spaces {X_{\bf n}} were all groups, then the limiting space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} is an abelian group, then the {X_{\bf n}} will also be abelian groups for many {{\bf n}}.)

The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects {x_{\bf n}} to all lie in a common space {X} in order to form an ultralimit {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; they are permitted to lie in different spaces {X_{\bf n}}; this is more natural in many discrete contexts, e.g. when considering graphs on {{\bf n}} vertices in the limit when {{\bf n}} goes to infinity. Also, no convergence properties on the {x_{\bf n}} are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces {X_{\bf n}} involved are required in order to construct the ultraproduct.
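One cannot, of course, literally implement a non-principal ultrafilter in code (its existence requires Zorn's lemma); but the formal shape of the construction can be illustrated in the degenerate case of a principal ultrafilter on a finite index set, where the ultraproduct collapses to a single factor and transfer of first-order sentences becomes a checkable tautology. The sketch below is purely illustrative, with invented toy data:

```python
from itertools import permutations

# A "group" here is a pair (elements, operation).  Two toy examples:
Z4 = (list(range(4)), lambda a, b: (a + b) % 4)     # abelian
S3 = (list(permutations(range(3))),                  # non-abelian
      lambda p, q: tuple(p[q[i]] for i in range(3)))

def is_abelian(group):
    elems, op = group
    return all(op(a, b) == op(b, a) for a in elems for b in elems)

# A family of structures over a finite index set, and a *principal*
# ultrafilter U = {S : i0 in S}.  (Non-principal ultrafilters, which give
# genuinely new limit objects over infinite index sets, are non-constructive.)
family = {0: S3, 1: Z4, 2: S3}
i0 = 1
in_U = lambda S: i0 in S

# For a principal ultrafilter the ultraproduct is isomorphic to the factor
# X_{i0}, so the transfer principle reduces to a directly checkable statement:
# the sentence "abelian" holds in the ultraproduct iff its truth set lies in U.
ultraproduct = family[i0]
truth_set = {i for i in family if is_abelian(family[i])}
assert is_abelian(ultraproduct) == in_U(truth_set)
print("transfer verified (trivially) for the sentence 'abelian'")
```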

With so few requirements on the objects {x_{\bf n}} or spaces {X_{\bf n}}, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly useful for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the {x_{\bf n}}, will be exactly obeyed by the limit object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking the contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”).
To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.

Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.

Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.


I’ve just uploaded to the arXiv my joint paper with Vitaly Bergelson, “Multiple recurrence in quasirandom groups“, which is submitted to Geom. Func. Anal.. This paper builds upon a paper of Gowers in which he introduced the concept of a quasirandom group, and established some mixing (or recurrence) properties of such groups. A {D}-quasirandom group is a finite group with no non-trivial unitary representations of dimension at most {D}. We will informally refer to a “quasirandom group” as a {D}-quasirandom group with the quasirandomness parameter {D} large (more formally, one can work with a sequence of {D_n}-quasirandom groups with {D_n} going to infinity). A typical example of a quasirandom group is {SL_2(F_p)} where {p} is a large prime. Quasirandom groups are discussed in depth in this blog post. One of the key properties of quasirandom groups established in Gowers’ paper is the following “weak mixing” property: if {A, B} are subsets of {G}, then for “almost all” {g \in G}, one has

\displaystyle  \mu( A \cap gB ) \approx \mu(A) \mu(B) \ \ \ \ \ (1)

where {\mu(A) := |A|/|G|} denotes the density of {A} in {G}. Here, we use {x \approx y} to informally represent an estimate of the form {x=y+o(1)} (where {o(1)} is a quantity that goes to zero when the quasirandomness parameter {D} goes to infinity), and “almost all {g \in G}” denotes “for all {g} in a subset of {G} of density {1-o(1)}“. As a corollary, if {A,B,C} have positive density in {G} (by which we mean that {\mu(A)} is bounded away from zero, uniformly in the quasirandomness parameter {D}, and similarly for {B,C}), then (if the quasirandomness parameter {D} is sufficiently large) we can find elements {g, x \in G} such that {g \in A}, {x \in B}, {gx \in C}. In fact we can find approximately {\mu(A)\mu(B)\mu(C) |G|^2} such pairs {(g,x)}. To put it another way: if we choose {g,x} uniformly and independently at random from {G}, then the events {g \in A}, {x \in B}, {gx \in C} are approximately independent (thus the random variable {(g,x,gx) \in G^3} resembles a uniformly distributed random variable on {G^3} in some weak sense). One can also express this mixing property in integral form as

\displaystyle  \int_G \int_G f_1(g) f_2(x) f_3(gx)\ d\mu(g) d\mu(x) \approx (\int_G f_1\ d\mu) (\int_G f_2\ d\mu) (\int_G f_3\ d\mu)

for any bounded functions {f_1,f_2,f_3: G \rightarrow {\bf R}}. (Of course, with {G} being finite, one could replace the integrals here by finite averages if desired.) Or in probabilistic language, we have

\displaystyle  \mathop{\bf E} f_1(g) f_2(x) f_3(gx) \approx \mathop{\bf E} f_1(x_1) f_2(x_2) f_3(x_3)

where {g, x, x_1, x_2, x_3} are drawn uniformly and independently at random from {G}.
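This weak mixing is already visible numerically for very small quasirandomness parameters. The sketch below brute-forces {SL_2(F_7)} (for which the minimal dimension of a non-trivial unitary representation is {3}), draws seeded random dense sets {A, B}, and measures the typical deviation in (1); the prime {p=7} and the density {1/2} are arbitrary illustrative choices:

```python
import random
from itertools import product

p = 7
# Enumerate SL_2(F_p): 2x2 matrices over F_p with determinant 1.
G = [(a, b, c, d) for a, b, c, d in product(range(p), repeat=4)
     if (a * d - b * c) % p == 1]
assert len(G) == p * (p * p - 1)   # |SL_2(F_p)| = p(p^2 - 1) = 336 for p = 7

def mul(m, n):
    a, b, c, d = m
    e, f, g_, h = n
    return ((a * e + b * g_) % p, (a * f + b * h) % p,
            (c * e + d * g_) % p, (c * f + d * h) % p)

rng = random.Random(0)
A = {x for x in G if rng.random() < 0.5}
B = {x for x in G if rng.random() < 0.5}
muA, muB = len(A) / len(G), len(B) / len(G)

# For each g, compare mu(A ∩ gB) with mu(A) mu(B).
devs = []
for g in G:
    hits = sum(1 for b in B if mul(g, b) in A)
    devs.append(abs(hits / len(G) - muA * muB))
print(sum(devs) / len(devs))   # typically only a few percent at this group size
```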

As observed in Gowers’ paper, one can iterate this observation to find “parallelopipeds” of any given dimension in dense subsets of {G}. For instance, applying (1) with {A,B,C} replaced by {A \cap hB}, {C \cap hD}, and {E \cap hF} one can assert (after some relabeling) that for {g,h,x} chosen uniformly and independently at random from {G}, the events {g \in A}, {h \in B}, {gh \in C}, {x \in D}, {gx \in E}, {hx \in F}, {ghx \in H} are approximately independent whenever {A,B,C,D,E,F,H} are dense subsets of {G}; thus the tuple {(g,h,gh,x,gx,hx,ghx)} resembles a uniformly distributed random variable in {G^7} in some weak sense.

However, there are other tuples for which the above iteration argument does not seem to apply. One of the simplest tuples in this vein is the tuple {(g, x, xg, gx)} in {G^4}, when {g, x} are drawn uniformly at random from a quasirandom group {G}. Here, one does not expect the tuple to behave as if it were uniformly distributed in {G^4}, because there is an obvious constraint connecting the last two components {gx, xg} of this tuple: they must lie in the same conjugacy class! In particular, if {A} is a subset of {G} that is the union of conjugacy classes, then the events {gx \in A}, {xg \in A} are perfectly correlated, so that {\mu( gx \in A, xg \in A)} is equal to {\mu(A)} rather than {\mu(A)^2}. Our main result, though, is that in a quasirandom group, this is (approximately) the only constraint on the tuple. More precisely, we have
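The constraint here is elementary: since {xg = x (gx) x^{-1}}, the products {gx} and {xg} are always conjugate, so for conjugation-invariant {A} the two events coincide exactly. This is immediate to confirm by brute force in a small group (the choices of {S_4} and of the conjugacy class are arbitrary):

```python
from itertools import permutations

# S_4 as tuples, with composition (p*q)(i) = p(q(i)).
S4 = list(permutations(range(4)))
def mul(p, q):
    return tuple(p[q[i]] for i in range(4))

def cycle_type(p):
    seen, lengths = set(), []
    for i in range(4):
        if i not in seen:
            j, L = i, 0
            while j not in seen:
                seen.add(j); j = p[j]; L += 1
            lengths.append(L)
    return tuple(sorted(lengths))

# A = a union of conjugacy classes (here: the 6 transpositions, cycle type (1,1,2)).
A = {p for p in S4 if cycle_type(p) == (1, 1, 2)}

# The events {gx in A} and {xg in A} coincide for every pair (g, x),
# since xg = x (gx) x^{-1} and A is conjugation-invariant.
for g in S4:
    for x in S4:
        assert (mul(g, x) in A) == (mul(x, g) in A)
print("perfect correlation confirmed on all", len(S4) ** 2, "pairs")
```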

Theorem 1 Let {G} be a {D}-quasirandom group, and let {g, x} be drawn uniformly at random from {G}. Then for any {f_1,f_2,f_3,f_4: G \rightarrow [-1,1]}, we have

\displaystyle  \mathop{\bf E} f_1(g) f_2(x) f_3(gx) f_4(xg) = \mathop{\bf E} f_1(x_1) f_2(x_2) f_3(x_3) f_4(x_4) + o(1)

where {o(1)} goes to zero as {D \rightarrow \infty}, {x_1,x_2,x_3} are drawn uniformly and independently at random from {G}, and {x_4} is drawn uniformly at random from the conjugates of {x_3} for each fixed choice of {x_1,x_2,x_3}.

This is the probabilistic formulation of the above theorem; one can also phrase the theorem in other formulations (such as an integral formulation), and this is detailed in the paper. This theorem leads to a number of recurrence results; for instance, as a corollary of this result, we have

\displaystyle  \mu(A) \mu(B)^2 - o(1) \leq \mu( A \cap gB \cap Bg ) \leq \mu(A) \mu(B) + o(1)

for almost all {g \in G}, and any dense subsets {A, B} of {G}; the lower and upper bounds are sharp, with the lower bound being attained when {B} is randomly distributed, and the upper bound when {B} is conjugation-invariant.

To me, the more interesting thing here is not the result itself, but how it is proven. Vitaly and I were not able to find a purely finitary way to establish this mixing theorem. Instead, we had to first use the machinery of ultraproducts (as discussed in this previous post) to convert the finitary statement about a quasirandom group to an infinitary statement about a type of infinite group which we call an ultra quasirandom group (basically, an ultraproduct of increasingly quasirandom finite groups). This is analogous to how the Furstenberg correspondence principle is used to convert a finitary combinatorial problem into an infinitary ergodic theory problem.

Ultra quasirandom groups come equipped with a finite, countably additive measure known as Loeb measure {\mu_G}, which is very analogous to the Haar measure of a compact group, except that in the case of ultra quasirandom groups one does not quite have a topological structure that would give compactness. Instead, one has a slightly weaker structure known as a {\sigma}-topology, which is like a topology except that open sets are only closed under countable unions rather than arbitrary ones. There are some interesting measure-theoretic and topological issues regarding the distinction between topologies and {\sigma}-topologies (and between Haar measure and Loeb measure), but for this post it is perhaps best to gloss over these issues and pretend that ultra quasirandom groups {G} come with a Haar measure. One can then recast Theorem 1 as a mixing theorem for the left and right actions of the ultra quasirandom group {G} on itself, which roughly speaking is the assertion that

\displaystyle  \int_G f_1(x) L_g f_2(x) L_g R_g f_3(x)\ d\mu_G(x) \approx 0 \ \ \ \ \ (2)

for “almost all” {g \in G}, if {f_1, f_2, f_3} are bounded measurable functions on {G}, with {f_3} having zero mean on all conjugacy classes of {G}, where {L_g, R_g} are the left and right translation operators

\displaystyle  L_g f(x) := f(g^{-1} x); \quad R_g f(x) := f(xg).

To establish this mixing theorem, we use the machinery of idempotent ultrafilters, which is a particularly useful tool for understanding the ergodic theory of actions of countable groups {G} that need not be amenable; in the non-amenable setting the classical ergodic averages do not make much sense, but ultrafilter-based averages are still available. To oversimplify substantially, the idempotent ultrafilter arguments let one establish mixing estimates of the form (2) for “many” elements {g} of an infinite-dimensional parallelopiped known as an IP system (provided that the actions {L_g,R_g} of this IP system obey some technical mixing hypotheses, but let’s ignore that for sake of this discussion). The claim then follows by using the quasirandomness hypothesis to show that if the estimate (2) failed for a large set of {g \in G}, then this large set would then contain an IP system, contradicting the previous claim.

Idempotent ultrafilters are an extremely infinitary type of mathematical object (one has to use Zorn’s lemma no fewer than three times just to construct one of these objects!). So it is quite remarkable that they can be used to establish a finitary theorem such as Theorem 1, though as is often the case with such infinitary arguments, one gets absolutely no quantitative control whatsoever on the error terms {o(1)} appearing in that theorem. (It is also mildly amusing to note that our arguments involve the use of ultrafilters in two completely different ways: firstly in order to set up the ultraproduct that converts the finitary mixing problem to an infinitary one, and secondly to solve the infinitary mixing problem. Despite some superficial similarities, there appear to be no substantial commonalities between these two usages of ultrafilters.) There is already a fair amount of literature on using idempotent ultrafilter methods in infinitary ergodic theory, and perhaps by further development of ultraproduct correspondence principles, one can use such methods to obtain further finitary consequences (although the state of the art for idempotent ultrafilter ergodic theory has not advanced much beyond the analysis of two commuting shifts {L_g, R_g} currently, which is the main reason why our arguments only handle the pattern {(g,x,xg,gx)} and not more sophisticated patterns).

We also have some miscellaneous other results in the paper. It turns out that by using the triangle removal lemma from graph theory, one can obtain a recurrence result that asserts that whenever {A} is a dense subset of a finite group {G} (not necessarily quasirandom), then there are {\gg |G|^2} pairs {(x,g)} such that {x, gx, xg} all lie in {A}. Using a hypergraph generalisation of the triangle removal lemma known as the hypergraph removal lemma, one can obtain more complicated versions of this statement; for instance, if {A} is a dense subset of {G^2}, then one can find {\gg |G|^2} triples {(x,y,g)} such that {(x,y), (gx, y), (gx, gy), (gxg^{-1}, gyg^{-1})} all lie in {A}. But the method is tailored to the specific types of patterns given here, and we do not have a general method for obtaining recurrence or mixing properties for arbitrary patterns of words in some finite alphabet such as {g,x,y}.
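To see the flavour of the first recurrence statement in a toy case, one can count such pairs by brute force in the smallest non-abelian group {S_3}; the choice of dense set {A} below is an arbitrary hypothetical example (not from the paper), and of course no removal lemma is needed at this scale:

```python
from itertools import permutations

# Elements of S_3 as tuples, with composition (p*q)(i) = p[q[i]]; a
# brute-force illustration of the recurrence statement in a tiny
# non-abelian group, with an arbitrary (hypothetical) dense set A.
G = list(permutations(range(3)))
def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

A = set(G[:4])                      # a "dense" subset: 4 of the 6 elements

count = sum(1 for x in A for g in G
            if mul(g, x) in A and mul(x, g) in A)
print(count, len(G) ** 2)           # many pairs (x, g) with x, gx, xg all in A
```

(For instance, taking {x} to be the identity and {g \in A} already produces several such pairs.)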

We also give some properties of a model example of an ultra quasirandom group, namely the ultraproduct {SL_2(F)} of {SL_2(F_{p_n})} where {p_n} is a sequence of primes going off to infinity. Thanks to the substantial recent progress (by Helfgott, Bourgain, Gamburd, Breuillard, and others) on understanding the expansion properties of the finite groups {SL_2(F_{p_n})}, we have a fair amount of knowledge on the ultraproduct {SL_2(F)} as well; for instance, any two elements of {SL_2(F)} will almost surely generate a group with a spectral gap. We don’t have any direct application of this particular ultra quasirandom group, but it might be interesting to study it further.

Roughly speaking, mathematical analysis can be divided into two major styles, namely hard analysis and soft analysis. The distinction between the two types of analysis is somewhat imprecise (and in some cases one may use a blend of the two styles), but some key differences can be listed as follows.

  • Hard analysis tends to be concerned with quantitative or effective properties such as estimates, upper and lower bounds, convergence rates, and growth rates or decay rates. In contrast, soft analysis tends to be concerned with qualitative or ineffective properties such as existence and uniqueness, finiteness, measurability, continuity, differentiability, connectedness, or compactness.
  • Hard analysis tends to be focused on finitary, finite-dimensional or discrete objects, such as finite sets, finitely generated groups, finite Boolean combinations of boxes or balls, or “finite-complexity” functions, such as polynomials or functions on a finite set. In contrast, soft analysis tends to be focused on infinitary, infinite-dimensional, or continuous objects, such as arbitrary measurable sets or measurable functions, or abstract locally compact groups.
  • Hard analysis tends to involve explicit use of many parameters such as {\epsilon}, {\delta}, {N}, etc. In contrast, soft analysis tends to rely instead on properties such as continuity, differentiability, compactness, etc., which implicitly are defined using a similar set of parameters, but whose parameters often do not make an explicit appearance in arguments.
  • In hard analysis, it is often the case that a key lemma in the literature is not quite optimised for the application at hand, and one has to reprove a slight variant of that lemma (using a variant of the proof of the original lemma) in order for it to be suitable for applications. In contrast, in soft analysis, key results can often be used as “black boxes”, without need of further modification or inspection of the proof.
  • The properties in soft analysis tend to enjoy precise closure properties; for instance, the composition or linear combination of continuous functions is again continuous, and similarly for measurability, differentiability, etc. In contrast, the closure properties in hard analysis tend to be fuzzier, in that the parameters in the conclusion are often different from the parameters in the hypotheses. For instance, the composition of two Lipschitz functions with Lipschitz constant {K} is still Lipschitz, but now with Lipschitz constant {K^2} instead of {K}. These changes in parameters mean that hard analysis arguments often require more “bookkeeping” than their soft analysis counterparts, and are less able to utilise algebraic constructions (e.g. quotient space constructions) that rely heavily on precise closure properties.
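As a quick sanity check of the Lipschitz example above, here is a toy numerical verification in Python (the specific functions below are arbitrary choices of mine, not from the text): both maps are Lipschitz with constant {K=2}, and the worst-case difference quotient of their composition comes out at {K^2 = 4} rather than {K}.

```python
# Toy check of the fuzzy closure property: f and g are each Lipschitz with
# constant K = 2, and their composition is Lipschitz with constant K**2 = 4
# (and, for these particular maps, no better constant is available).
K = 2.0
f = lambda x: K * abs(x)   # Lipschitz with constant K
g = lambda x: K * x        # Lipschitz with constant K

pts = [i / 50.0 - 2.0 for i in range(201)]   # sample points in [-2, 2]
worst = max(abs(f(g(x)) - f(g(y))) / abs(x - y)
            for x in pts for y in pts if x != y)
print(worst)   # close to K**2 == 4
```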

In the lectures so far, focusing on the theory surrounding Hilbert’s fifth problem, the results and techniques have fallen well inside the category of soft analysis. However, we will now turn to the theory of approximate groups, a topic which is traditionally studied using the methods of hard analysis. (Later we will also study groups of polynomial growth, which occupy an intermediate position on the spectrum between hard and soft analysis, and which can be profitably analysed using both styles of analysis.)

Despite the superficial differences between hard and soft analysis, though, there are a number of important correspondences between results in hard analysis and results in soft analysis. For instance, if one has some sort of uniform quantitative bound on some expression relating to finitary objects, one can often use limiting arguments to then conclude a qualitative bound on analogous expressions on infinitary objects, by viewing the latter objects as some sort of “limit” of the former objects. Conversely, if one has a qualitative bound on infinitary objects, one can often use compactness and contradiction arguments to recover uniform quantitative bounds on finitary objects as a corollary.

Remark 1 Another type of correspondence between hard analysis and soft analysis, which is “syntactical” rather than “semantical” in nature, arises by taking the proofs of a soft analysis result, and translating such a qualitative proof somehow (e.g. by carefully manipulating quantifiers) into a quantitative proof of an analogous hard analysis result. This type of technique is sometimes referred to as proof mining in the proof theory literature, and is discussed in this previous blog post (and its comments). We will however not employ systematic proof mining techniques here, although in later posts we will informally borrow arguments from infinitary settings (such as the methods used to construct Gleason metrics) and adapt them to finitary ones.

Let us illustrate the correspondence between hard and soft analysis results with a simple example.

Proposition 1 Let {X} be a sequentially compact topological space, let {S} be a dense subset of {X}, and let {f: X \rightarrow [0,+\infty]} be a continuous function (giving the extended half-line {[0,+\infty]} the usual order topology). Then the following statements are equivalent:

  • (i) (Qualitative bound on infinitary objects) For all {x \in X}, one has {f(x) < +\infty}.
  • (ii) (Quantitative bound on finitary objects) There exists {M < +\infty} such that {f(x) \leq M} for all {x \in S}.

In applications, {S} is typically a (non-compact) set of “finitary” (or “finite complexity”) objects of a certain class, and {X} is some sort of “completion” or “compactification” of {S} which admits additional “infinitary” objects that may be viewed as limits of finitary objects.

Proof: To see that (ii) implies (i), observe from density that every point {x} in {X} is adherent to {S}, and so given any neighbourhood {U} of {x}, there exists {y \in S \cap U}. Since {f(y) \leq M}, we conclude from the continuity of {f} that {f(x) \leq M} also, and the claim follows.

Conversely, to show that (i) implies (ii), we use the “compactness and contradiction” argument. Suppose for sake of contradiction that (ii) failed. Then for any natural number {n}, there exists {x_n \in S} such that {f(x_n) \geq n}. (Here we have used the axiom of choice, which we will assume throughout this course.) Using sequential compactness, and passing to a subsequence if necessary, we may assume that the {x_n} converge to a limit {x \in X}. By continuity of {f}, this implies that {f(x) = +\infty}, contradicting (i). \Box
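Here is a toy numerical illustration (my own, not from the text) of the role of compactness in Proposition 1: on the compact space {[0,1]}, a continuous function that is finite everywhere is automatically uniformly bounded on a dense subset, whereas on the non-compact space {(0,1]} this implication fails.

```python
# On the compact space X = [0,1], the continuous f(x) = 1/(x + 1/2) is
# finite everywhere (statement (i)), and indeed uniformly bounded on the
# dense set S of dyadic rationals (statement (ii)).  On the NON-compact
# space (0,1], g(x) = 1/x is also finite at every point, but admits no
# uniform bound on the dense-near-zero subset {2**-n}.
f = lambda x: 1.0 / (x + 0.5)
S = [k / 2**10 for k in range(2**10 + 1)]        # dyadic rationals in [0,1]
M = max(f(x) for x in S)                          # a uniform bound, here 2.0

g = lambda x: 1.0 / x
unbounded = [g(2.0**-n) for n in range(1, 20)]    # 2, 4, 8, ...: no bound M works
print(M, max(unbounded))
```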

Remark 2 Note that the above deduction of (ii) from (i) is ineffective in that it gives no explicit bound on the uniform bound {M} in (ii). Without any further information on how the qualitative bound (i) is proven, this is the best one can do in general (and this is one of the most significant weaknesses of infinitary methods when used to solve finitary problems); but if one has access to the proof of (i), one can often finitise or proof mine that argument to extract an effective bound for {M}, although often the bound one obtains in the process is quite poor (particularly if the proof of (i) relied extensively on infinitary tools, such as limits). See this blog post for some related discussion.

The above simple example illustrates that in order to get from an “infinitary” statement such as (i) to a “finitary” statement such as (ii), a key step is to be able to take a sequence {(x_n)_{n \in {\bf N}}} (or in some cases, a more general net {(x_\alpha)_{\alpha \in A}}) of finitary objects and extract a suitable infinitary limit object {x}. In the literature, there are three main ways in which one can extract such a limit:

  • (Topological limit) If the {x_n} are all elements of some topological space {S} (e.g. an incomplete function space) which has a suitable “compactification” or “completion” {X} (e.g. a Banach space), then (after passing to a subsequence if necessary) one can often ensure the {x_n} converge in a topological sense (or in a metrical sense) to a limit {x}. The use of this type of limit to pass between quantitative/finitary and qualitative/infinitary results is particularly common in the more analytical areas of mathematics (such as ergodic theory, asymptotic combinatorics, or PDE), due to the abundance of useful compactness results in analysis such as the (sequential) Banach-Alaoglu theorem, Prokhorov’s theorem, the Helly selection theorem, the Arzelà-Ascoli theorem, or even the humble Bolzano-Weierstrass theorem. However, one often has to take care with the nature of convergence, as many compactness theorems only guarantee convergence in a weak sense rather than in a strong one.
  • (Categorical limit) If the {x_n} are all objects in some category (e.g. metric spaces, groups, fields, etc.) with a number of morphisms between the {x_n} (e.g. morphisms from {x_{n+1}} to {x_n}, or vice versa), then one can often form a direct limit {\lim_{\rightarrow} x_n} or inverse limit {\lim_{\leftarrow} x_n} of these objects to form a limiting object {x}. The use of these types of limits to connect quantitative and qualitative results is common in subjects such as algebraic geometry that are particularly amenable to categorical ways of thinking. (We have seen inverse limits appear in the discussion of Hilbert’s fifth problem, although in that context they were not really used to connect quantitative and qualitative results together.)
  • (Logical limit) If the {x_n} are all distinct spaces (or elements or subsets of distinct spaces), with few morphisms connecting them together, then topological and categorical limits are often unavailable or unhelpful. In such cases, however, one can still tie together such objects using an ultraproduct construction (or similar device) to create a limiting object {\lim_{n \rightarrow \alpha} x_n} or limiting space {\prod_{n \rightarrow \alpha} x_n} that is a logical limit of the {x_n}, in the sense that various properties of the {x_n} (particularly those that can be phrased using the language of first-order logic) are preserved in the limit. As such, logical limits are often very well suited for the task of connecting finitary and infinitary mathematics together. Ultralimit type constructions are of course used extensively in logic (particularly in model theory), but are also popular in metric geometry. They can also be used in many of the previously mentioned areas of mathematics, such as algebraic geometry (as discussed in this previous post).
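To make the first (topological) type of limit concrete, here is a toy Python sketch of the bisection argument behind the Bolzano-Weierstrass theorem, with “a half containing infinitely many terms” approximated by “the larger half of the finitely many remaining terms”; the sequence chosen is arbitrary.

```python
import math

# Extract (the location of) a limit point of a bounded sequence by
# repeated bisection: at each step, keep a half-interval containing
# "many" of the remaining terms.  After k steps the surviving interval
# has width 2/2**k and still contains infinitely many terms in the
# genuinely infinitary argument.
x = [math.sin(n) for n in range(10000)]   # a bounded, non-convergent sequence
lo, hi = -1.0, 1.0
indices = list(range(len(x)))
for _ in range(20):                        # 20 bisection steps
    mid = (lo + hi) / 2.0
    left = [i for i in indices if x[i] <= mid]
    right = [i for i in indices if x[i] > mid]
    if len(left) >= len(right):            # keep a half with "many" terms
        indices, hi = left, mid
    else:
        indices, lo = right, mid
print(len(indices), hi - lo)               # a narrow interval trapping a limit point
```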

The three types of limits are analogous in many ways, with a number of connections between them. For instance, in the study of groups of polynomial growth, both topological limits (using the metric notion of Gromov-Hausdorff convergence) and logical limits (using the ultralimit construction) are commonly used, and to some extent the two constructions are at least partially interchangeable in this setting. (See also these previous posts for the use of ultralimits as a substitute for topological limits.) In the theory of approximate groups, though, it was observed by Hrushovski that logical limits (and in particular, ultraproducts) are the most useful type of limit to connect finitary approximate groups to their infinitary counterparts. One reason for this is that one is often interested in obtaining results on approximate groups {A} that are uniform in the choice of ambient group {G}. As such, one often seeks to take a limit of approximate groups {A_n} that lie in completely unrelated ambient groups {G_n}, with no obvious morphisms or metrics tying the {G_n} to each other. As such, the topological and categorical limits are not easily usable, whereas the logical limits can still be employed without much difficulty.

Logical limits are closely tied with non-standard analysis. Indeed, by applying an ultraproduct construction to standard number systems such as the natural numbers {{\bf N}} or the reals {{\bf R}}, one can obtain nonstandard number systems such as the nonstandard natural numbers {{}^* {\bf N}} or the nonstandard real numbers (or hyperreals) {{}^* {\bf R}}. These nonstandard number systems behave very similarly to their standard counterparts, but also enjoy the advantage of containing the standard number systems as proper subsystems (e.g. {{\bf R}} is a subring of {{}^* {\bf R}}), which allows for some convenient algebraic manipulations (such as the quotient space construction to create spaces such as {{}^* {\bf R} / {\bf R}}) which are not easily accessible in the purely standard universe. Nonstandard spaces also enjoy a useful completeness property, known as countable saturation, which is analogous to metric completeness (as discussed in this previous blog post) and which will be particularly useful for us in tying together the theory of approximate groups with the theory of Hilbert’s fifth problem. See this previous post for more discussion on ultrafilters and nonstandard analysis.

In these notes, we lay out the basic theory of ultraproducts and ultralimits (in particular, proving Łoś’s theorem, which roughly speaking asserts that ultralimits are limits in a logical sense, as well as the countable saturation property alluded to earlier). We also lay out some of the basic foundations of nonstandard analysis, although we will not rely too heavily on nonstandard tools in this course. Finally, we apply this general theory to approximate groups, to connect finite approximate groups to an infinitary type of approximate group which we will call an ultra approximate group. We will then study these ultra approximate groups (and models of such groups) in more detail in the next set of notes.

Remark 3 Throughout these notes (and in the rest of the course), we will assume the axiom of choice, in order to easily use ultrafilter-based tools. If one really wanted to expend the effort, though, one could eliminate the axiom of choice from the proofs of the final “finitary” results that one is ultimately interested in proving, at the cost of making the proofs significantly lengthier. Indeed, there is a general result of Gödel that any result which can be stated in the language of Peano arithmetic (which, roughly speaking, means that the result is “finitary” in nature), and can be proven in set theory using the axiom of choice (or more precisely, in the ZFC axiom system), can also be proven in set theory without the axiom of choice (i.e. in the ZF system). As this course is not focused on foundations, we shall simply assume the axiom of choice henceforth to avoid further distraction by such issues.


I have blogged a number of times in the past about the relationship between finitary (or “hard”, or “quantitative”) analysis, and infinitary (or “soft”, or “qualitative”) analysis. One way to connect the two types of analysis is via compactness arguments (and more specifically, contradiction and compactness arguments); such arguments can convert qualitative properties (such as continuity) to quantitative properties (such as boundedness), basically because of the fundamental fact that continuous functions on a compact space are bounded (or the closely related fact that sequentially continuous functions on a sequentially compact space are bounded).

A key stage in any such compactness argument is the following: one has a sequence {X_n} of “quantitative” or “finitary” objects or spaces, and one has to somehow end up with a “qualitative” or “infinitary” limit object {X} or limit space. One common way to achieve this is to embed everything inside some universal space and then use some weak compactness property of that space, such as the Banach-Alaoglu theorem (or its sequential counterpart). This is for instance the idea behind the Furstenberg correspondence principle relating ergodic theory to combinatorics; see for instance this post of mine on this topic.

However, there is a slightly different approach, which I will call ultralimit analysis, which proceeds via the machinery of ultrafilters and ultraproducts; typically, the limit objects {X} one constructs are now the ultraproducts (or ultralimits) of the original objects {X_\alpha}. There are two main facts that make ultralimit analysis powerful. The first is that one can take ultralimits of arbitrary sequences of objects, as opposed to more traditional tools such as metric completions, which only allow one to take limits of Cauchy sequences of objects. The second fact is Łoś’s theorem, which tells us that {X} is an elementary limit of the {X_\alpha} (i.e. every sentence in first-order logic which is true for the {X_\alpha} for {\alpha} large enough, is true for {X}). This existence of elementary limits is a manifestation of the compactness theorem in logic; see this earlier blog post for more discussion. So we see that compactness methods and ultrafilter methods are closely intertwined. (See also my earlier class notes for a related connection between ultrafilters and compactness.)
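As an aside, nonprincipal ultrafilters cannot be exhibited explicitly (their construction requires Zorn’s lemma), but on a finite set every ultrafilter is principal, and there the ultrafilter axioms can be checked exhaustively. The following toy Python check (my own illustration, not from the post) verifies that the principal ultrafilter at a point obeys the axioms:

```python
from itertools import chain, combinations

# Over a finite set every ultrafilter is principal, so the axioms can be
# checked by brute force; this is NOT a construction of the nonprincipal
# ultrafilters used in ultralimit analysis, which require Zorn's lemma.
X = frozenset(range(4))
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

a = 2
U = {A for A in subsets if a in A}            # the principal ultrafilter at a

assert frozenset() not in U                                # properness
assert all(A & B in U for A in U for B in U)               # closed under intersection
assert all(B in U for A in U for B in subsets if A <= B)   # upward closed
assert all((A in U) != (X - A in U) for A in subsets)      # exactly one of A, A^c
print("ultrafilter axioms verified for the principal ultrafilter at", a)
```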

Ultralimit analysis is very closely related to nonstandard analysis. I already discussed some aspects of this relationship in an earlier post, and will expand upon it at the bottom of this post. Roughly speaking, the relationship between ultralimit analysis and nonstandard analysis is analogous to the relationship between measure theory and probability theory.

To illustrate how ultralimit analysis is actually used in practice, I will show later in this post how to take a qualitative infinitary theory – in this case, basic algebraic geometry – and apply ultralimit analysis to then deduce a quantitative version of this theory, in which the complexity of the various algebraic sets and varieties that appear as outputs are controlled uniformly by the complexity of the inputs. The point of this exercise is to show how ultralimit analysis allows for a relatively painless conversion back and forth between the quantitative and qualitative worlds, though in some cases the quantitative translation of a qualitative result (or vice versa) may be somewhat unexpected. In an upcoming paper with Ben Green and Emmanuel Breuillard (announced in the previous blog post), we will rely on ultralimit analysis to reduce the messiness of various quantitative arguments by replacing them with a qualitative setting in which the theory becomes significantly cleaner.

For sake of completeness, I also redo some earlier instances of the correspondence principle via ultralimit analysis, namely the deduction of the quantitative Gromov theorem from the qualitative one, and of Szemerédi’s theorem from the Furstenberg recurrence theorem, to illustrate how close the two techniques are to each other.


One way to study a general class of mathematical objects is to embed them into a more structured class of mathematical objects; for instance, one could study manifolds by embedding them into Euclidean spaces. In these (optional) notes we study two (related) embedding theorems for topological spaces.


A (concrete) Boolean algebra is a pair (X, {\mathcal B}), where X is a set, and {\mathcal B} is a collection of subsets of X which contains the empty set \emptyset, and which is closed under unions A, B \mapsto A \cup B, intersections A, B \mapsto A \cap B, and complements A \mapsto A^c := X \backslash A. The subset relation \subset also gives a relation on {\mathcal B}. Because {\mathcal B} is concretely represented as subsets of a space X, these relations automatically obey various axioms, in particular, for any A,B,C \in {\mathcal B}, we have:

  1. \subset is a partial ordering on {\mathcal B}, and A and B have join A \cup B and meet A \cap B.
  2. We have the distributive laws A \cup (B \cap C) = (A \cup B) \cap (A \cup C) and A \cap (B \cup C) = (A \cap B) \cup (A \cap C).
  3. \emptyset is the minimal element of the partial ordering \subset, and \emptyset^c is the maximal element.
  4. A \cap A^c = \emptyset and A \cup A^c = \emptyset^c.

(More succinctly: {\mathcal B} is a lattice which is distributive, bounded, and complemented.)
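For a small concrete example, the axioms can be checked mechanically; the following toy Python snippet (my own illustration, not part of the original discussion) verifies axioms 2-4 for the power set of a three-element set, using sets for elements of {\mathcal B} and set operations for join, meet, and complement:

```python
from itertools import chain, combinations

# The concrete Boolean algebra of ALL subsets of X = {0,1,2}; axioms 2-4
# are verified exhaustively (axiom 1 is immediate from the set operations).
X = frozenset(range(3))
B = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]
comp = lambda A: X - A

for A1 in B:
    for A2 in B:
        for A3 in B:
            # axiom 2: the two distributive laws
            assert A1 | (A2 & A3) == (A1 | A2) & (A1 | A3)
            assert A1 & (A2 | A3) == (A1 & A2) | (A1 & A3)
for A in B:
    # axiom 3: minimal and maximal elements; axiom 4: complementation
    assert frozenset() <= A <= comp(frozenset())
    assert A & comp(A) == frozenset()
    assert A | comp(A) == comp(frozenset())
print("axioms 2-4 hold for all", len(B), "elements")
```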

We can then define an abstract Boolean algebra {\mathcal B} = ({\mathcal B}, \emptyset, \cdot^c, \cup, \cap, \subset) to be an abstract set {\mathcal B} with the specified objects, operations, and relations that obey the axioms 1-4. [Of course, some of these operations are redundant; for instance, intersection can be defined in terms of complement and union by de Morgan’s laws. In the literature, different authors select different initial operations and axioms when defining an abstract Boolean algebra, but they are all easily seen to be equivalent to each other. To emphasise the abstract nature of these algebras, the symbols \emptyset, \cdot^c, \cup, \cap, \subset are often replaced with other symbols such as 0, \overline{\cdot}, \vee, \wedge, <.]

Clearly, every concrete Boolean algebra is an abstract Boolean algebra. In the converse direction, we have Stone’s representation theorem (see below), which asserts (among other things) that every abstract Boolean algebra is isomorphic to a concrete one (and even constructs this concrete representation of the abstract Boolean algebra canonically). So, up to (abstract) isomorphism, there is really no difference between a concrete Boolean algebra and an abstract one.

Now let us turn from Boolean algebras to \sigma-algebras.

A concrete \sigma-algebra (also known as a measurable space) is a pair (X,{\mathcal B}), where X is a set, and {\mathcal B} is a collection of subsets of X which contains \emptyset and is closed under countable unions, countable intersections, and complements; thus every concrete \sigma-algebra is a concrete Boolean algebra, but not conversely. As before, concrete \sigma-algebras come equipped with the structures \emptyset, \cdot^c, \cup, \cap, \subset which obey axioms 1-4, but they also come with the operations of countable union (A_n)_{n=1}^\infty \mapsto \bigcup_{n=1}^\infty A_n and countable intersection (A_n)_{n=1}^\infty \mapsto \bigcap_{n=1}^\infty A_n, which obey an additional axiom:

5. Any countable family A_1, A_2, \ldots of elements of {\mathcal B} has supremum \bigcup_{n=1}^\infty A_n and infimum \bigcap_{n=1}^\infty A_n.

As with Boolean algebras, one can now define an abstract \sigma-algebra to be a set {\mathcal B} = ({\mathcal B}, \emptyset, \cdot^c, \cup, \cap, \subset, \bigcup_{n=1}^\infty, \bigcap_{n=1}^\infty ) with the indicated objects, operations, and relations, which obeys axioms 1-5. Again, every concrete \sigma-algebra is an abstract one; but is it still true that every abstract \sigma-algebra is representable as a concrete one?
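On a finite set, countable unions reduce to finite ones, and the \sigma-algebra generated by a collection of sets can be computed by brute force. The following sketch (a hypothetical helper of my own, not from the text) iterates closure under complements and pairwise unions until nothing new appears; intersections then come for free by de Morgan's laws:

```python
# Compute the sigma-algebra on a finite set X generated by a collection of
# subsets, by closing under complement and pairwise union to a fixed point.
def generate_sigma_algebra(X, generators):
    B = {frozenset(), frozenset(X)} | {frozenset(g) for g in generators}
    while True:
        new = ({frozenset(X) - A for A in B}
               | {A | C for A in B for C in B})
        if new <= B:          # fixed point reached: B is closed
            return B
        B |= new

X = {0, 1, 2, 3}
B = generate_sigma_algebra(X, [{0}, {0, 1}])
print(len(B))   # the atoms are {0}, {1}, {2,3}, so |B| = 2**3 = 8
```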

The answer turns out to be no, but the obstruction can be described precisely (namely, one needs to quotient out an ideal of “null sets” from the concrete \sigma-algebra), and there is a satisfactory representation theorem, namely the Loomis-Sikorski representation theorem (see below). As a corollary of this representation theorem, one can also represent abstract measure spaces ({\mathcal B},\mu) (also known as measure algebras) by concrete measure spaces, (X, {\mathcal B}, \mu), after quotienting out by null sets.

In the rest of this post, I will state and prove these representation theorems. They are not actually used directly in the rest of the course (and they will also require some results that we haven’t proven yet, most notably Tychonoff’s theorem), and so these notes are optional reading; but these theorems do help explain why it is “safe” to focus attention primarily on concrete \sigma-algebras and measure spaces when doing measure theory, since the abstract analogues of these mathematical concepts are largely equivalent to their concrete counterparts. (The situation is quite different for non-commutative measure theories, such as quantum probability, in which there is basically no good representation theorem available to equate the abstract with the classically concrete, but I will not discuss these theories here.)


This post is in some ways an antithesis of my previous postings on hard and soft analysis. In those posts, the emphasis was on taking a result in soft analysis and converting it into a hard analysis statement (making it more “quantitative” or “effective”); here we shall be focusing on the reverse procedure, in which one harnesses the power of infinitary mathematics – in particular, ultrafilters and nonstandard analysis – to facilitate the proof of finitary statements.

Arguments in hard analysis are notorious for their profusion of “epsilons and deltas”. In the more sophisticated arguments of this type, one can end up having an entire army of epsilons \epsilon_1, \epsilon_2, \epsilon_3, \ldots that one needs to manage, in particular choosing each epsilon carefully to be sufficiently small compared to other parameters (including other epsilons), while of course avoiding an impossibly circular situation in which a parameter is ultimately required to be small with respect to itself, which is absurd. This art of epsilon management, once mastered, is not terribly difficult – it basically requires one to mentally keep track of which quantities are “small”, “very small”, “very very small”, and so forth – but when these arguments get particularly lengthy, then epsilon management can get rather tedious, and also has the effect of making these arguments unpleasant to read. In particular, any given assertion in hard analysis usually comes with a number of unsightly quantifiers (For every \epsilon there exists an N…) which can require some thought for a reader to parse. This is in contrast with soft analysis, in which most of the quantifiers (and the epsilons) can be cleanly concealed via the deployment of some very useful terminology; consider for instance how many quantifiers and epsilons are hidden within, say, the Heine-Borel theorem: “a subset of a Euclidean space is compact if and only if it is closed and bounded”.

For those who practice hard analysis for a living (such as myself), it is natural to wonder if one can somehow “clean up” or “automate” all the epsilon management which one is required to do, and attain levels of elegance and conceptual clarity comparable to those in soft analysis, hopefully without sacrificing too much of the “elementary” or “finitary” nature of hard analysis in the process.

