
A core foundation of the subject now known as arithmetic combinatorics (and particularly the subfield of additive combinatorics) is the family of elementary sum set estimates (sometimes known as “Ruzsa calculus”) that relate the cardinality of various sum sets

\displaystyle  A+B := \{ a+b: a \in A, b \in B \}

and difference sets

\displaystyle  A-B := \{ a-b: a \in A, b \in B \},

as well as iterated sumsets such as {3A=A+A+A}, {2A-2A=A+A-A-A}, and so forth. Here, {A, B} are finite non-empty subsets of some additive group {G = (G,+)} (classically one took {G={\bf Z}} or {G={\bf R}}, but nowadays one usually considers more general additive groups). Some basic estimates in this vein are the following:

Lemma 1 (Ruzsa covering lemma) Let {A, B} be finite non-empty subsets of {G}. Then {A} may be covered by at most {\frac{|A+B|}{|B|}} translates of {B-B}.

Proof: Consider a maximal set of disjoint translates {a+B} of {B} by elements {a \in A}. These translates have cardinality {|B|}, are disjoint, and lie in {A+B}, so there are at most {\frac{|A+B|}{|B|}} of them. By maximality, for any {a' \in A}, {a'+B} must intersect at least one of the selected {a+B}, thus {a' \in a+B-B}, and the claim follows. \Box
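As a quick numerical sanity check of Lemma 1 (purely an illustration; the specific random sets, and helper names such as `covering_translates`, are arbitrary choices), one can run the greedy maximal-disjoint-translates construction from the proof on subsets of {{\bf Z}}:

```python
import random

def sumset(A, B):
    return {a + b for a in A for b in B}

def difference_set(A, B):
    return {a - b for a in A for b in B}

def covering_translates(A, B):
    """Greedily select a maximal family of pairwise disjoint translates a+B, a in A."""
    centers, used = [], set()
    for a in sorted(A):
        translate = {a + b for b in B}
        if translate.isdisjoint(used):
            centers.append(a)
            used |= translate
    return centers

random.seed(0)
A = set(random.sample(range(100), 20))
B = set(random.sample(range(100), 10))

centers = covering_translates(A, B)
# The selected translates are disjoint subsets of A+B of size |B| each,
# so their number is at most |A+B|/|B| ...
assert len(centers) * len(B) <= len(sumset(A, B))
# ... and by maximality the translates a + (B-B) cover A.
BB = difference_set(B, B)
assert all(any(x - a in BB for a in centers) for x in A)
print(len(centers), len(sumset(A, B)) / len(B))
```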

Lemma 2 (Ruzsa triangle inequality) Let {A,B,C} be finite non-empty subsets of {G}. Then {|A-C| \leq \frac{|A-B| |B-C|}{|B|}}.

Proof: Consider the addition map {+: (x,y) \mapsto x+y} from {(A-B) \times (B-C)} to {G}. Every element {a-c} of {A - C} has a preimage {\{ (x,y) \in (A-B) \times (B-C): x+y = a-c \}} under this map of cardinality at least {|B|}, thanks to the obvious identity {a-c = (a-b) + (b-c)} for each {b \in B}. Since {(A-B) \times (B-C)} has cardinality {|A-B| |B-C|}, the claim follows. \Box
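The triangle inequality can likewise be sanity-checked numerically (again purely as an illustration of the statement, not of the double-counting proof; the random sets below are arbitrary):

```python
import random

def diff(A, B):
    """The difference set A - B."""
    return {a - b for a in A for b in B}

random.seed(1)
A = set(random.sample(range(200), 15))
B = set(random.sample(range(200), 12))
C = set(random.sample(range(200), 18))

# Lemma 2: |A-C| <= |A-B| |B-C| / |B|.
lhs = len(diff(A, C))
rhs = len(diff(A, B)) * len(diff(B, C)) / len(B)
assert lhs <= rhs
print(lhs, rhs)
```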

Such estimates (which are covered, incidentally, in Section 2 of my book with Van Vu) are particularly useful for controlling finite sets {A} of small doubling, in the sense that {|A+A| \leq K|A|} for some bounded {K}. (There are deeper theorems, most notably Freiman’s theorem, which give more control than elementary Ruzsa calculus does; however, the known bounds in the latter theorem are worse than polynomial in {K} (although it is conjectured otherwise), whereas the elementary estimates are almost all polynomial in {K}.)
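To illustrate the notion of small doubling: an arithmetic progression has {|A+A| = 2|A|-1}, so its doubling constant is below {2}, whereas a generic random set of the same cardinality has doubling constant comparable to {|A|}. A brief Python illustration (all parameters arbitrary):

```python
import random

def sumset(A, B):
    return {a + b for a in A for b in B}

# An arithmetic progression is the basic example of a set of small doubling:
A = set(range(0, 7000, 7))                    # an AP of length 1000
assert len(sumset(A, A)) == 2 * len(A) - 1    # doubling constant K = 2 - 1/|A| < 2

# By contrast, a random set of the same size has doubling constant comparable to |A|:
random.seed(2)
R = set(random.sample(range(10**9), 1000))
print(len(sumset(R, R)) / len(R))             # typically close to (|R|+1)/2
```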

However, there are some settings in which the standard sum set estimates are not quite applicable. One such setting is the continuous setting, where one is dealing with bounded open sets in an additive Lie group (e.g. {{\bf R}^n} or a torus {({\bf R}/{\bf Z})^n}) rather than a finite setting. Here, one can largely replicate the discrete sum set estimates by working with a Haar measure in place of cardinality; this is the approach taken for instance in this paper of mine. However, there is another setting, which one might dub the “discretised” setting (as opposed to the “discrete” setting or “continuous” setting), in which the sets {A} remain finite (or at least discretisable to be finite), but for which there is a certain amount of “roundoff error” coming from the discretisation. As a typical example (working now in a non-commutative multiplicative setting rather than an additive one), consider the orthogonal group {O_n({\bf R})} of orthogonal {n \times n} matrices, and let {A} be the set of matrices obtained by rounding each coefficient of each orthogonal matrix in {O_n({\bf R})} to the nearest multiple of {\epsilon}, for some small {\epsilon>0}. This forms a finite set (whose cardinality grows as {\epsilon\rightarrow 0} like a certain negative power of {\epsilon}). In the limit {\epsilon \rightarrow 0}, the set {A} is not a set of small doubling in the discrete sense. However, {A \cdot A} is still close to {A} in a metric sense, being contained in the {O_n(\epsilon)}-neighbourhood of {A}. Another key example comes from graphs {\Gamma := \{ (x, f(x)): x \in G \}} of maps {f: G \rightarrow H} from one additive group {G = (G,+)} to another {H = (H,+)}.
If {f} is “approximately additive” in the sense that for all {x,y \in G}, {f(x+y)} is close to {f(x)+f(y)} in some metric, then {\Gamma} might not have small doubling in the discrete sense (because {f(x+y)-f(x)-f(y)} could take a large number of values), but could be considered a set of small doubling in a discretised sense.

One would like to have a sum set (or product set) theory that can handle these cases, particularly in “high-dimensional” settings in which the standard methods of passing back and forth between continuous, discrete, or discretised settings behave poorly from a quantitative point of view due to the exponentially large doubling constant of balls. One way to do this is to impose a translation invariant metric {d} on the underlying group {G = (G,+)} (reverting back to additive notation), and replace the notion of cardinality by that of metric entropy. There are a number of almost equivalent ways to define this concept:

Definition 3 Let {(X,d)} be a metric space, let {E} be a subset of {X}, and let {r>0} be a radius.

  • The packing number {N^{pack}_r(E)} is the largest number of points {x_1,\dots,x_n} one can pack inside {E} such that the balls {B(x_1,r),\dots,B(x_n,r)} are disjoint.
  • The internal covering number {N^{int}_r(E)} is the smallest number of points {x_1,\dots,x_n \in E} such that the balls {B(x_1,r),\dots,B(x_n,r)} cover {E}.
  • The external covering number {N^{ext}_r(E)} is the smallest number of points {x_1,\dots,x_n \in X} such that the balls {B(x_1,r),\dots,B(x_n,r)} cover {E}.
  • The metric entropy {N^{ent}_r(E)} is the largest number of points {x_1,\dots,x_n} one can find in {E} that are {r}-separated, thus {d(x_i,x_j) \geq r} for all {i \neq j}.

It is an easy exercise to verify the inequalities

\displaystyle  N^{ent}_{2r}(E) \leq N^{pack}_r(E) \leq N^{ext}_r(E) \leq N^{int}_r(E) \leq N^{ent}_r(E)

for any {r>0}, and that {N^*_r(E)} is non-increasing in {r} and non-decreasing in {E} for the three choices {* = pack,ext,ent} (but monotonicity in {E} can fail for {*=int}!). It turns out that the external covering number {N^{ext}_r(E)} is slightly more convenient than the other notions of metric entropy, so we will abbreviate {N_r(E) = N^{ext}_r(E)}. The cardinality {|E|} can be viewed as the limit of the entropies {N^*_r(E)} as {r \rightarrow 0}.
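For concreteness, here is a brute-force computation of all four quantities for a small subset of {{\bf R}} (a hypothetical example; the code glosses over open-versus-closed ball conventions, which do not matter at the generic radii used here), verifying the displayed chain of inequalities:

```python
from itertools import combinations

E = [0.0, 0.9, 2.1, 3.3, 3.5, 6.0, 6.2, 9.7]   # an arbitrary example set
r = 1.0

def separated(S, s):
    return all(abs(x - y) >= s for x, y in combinations(S, 2))

def covers(centers, pts, rad):
    return all(any(abs(x - c) <= rad for c in centers) for x in pts)

def max_subset(pred):
    """Largest k such that some k-point subset of E satisfies pred (brute force)."""
    return max(k for k in range(1, len(E) + 1)
               if any(pred(S) for S in combinations(E, k)))

N_ent   = max_subset(lambda S: separated(S, r))       # maximal r-separated subset
N_ent2r = max_subset(lambda S: separated(S, 2 * r))   # maximal 2r-separated subset
N_pack  = N_ent2r   # in R, disjoint open r-balls centred in E <=> 2r-separated centres
N_int   = min(k for k in range(1, len(E) + 1)
              if any(covers(S, E, r) for S in combinations(E, k)))

def N_ext(pts, rad):
    """External covering number; in one dimension the greedy cover is optimal."""
    pts = sorted(pts)
    n, i = 0, 0
    while i < len(pts):
        n += 1
        right = pts[i] + 2 * rad   # a ball centred at pts[i] + rad covers [pts[i], pts[i] + 2*rad]
        while i < len(pts) and pts[i] <= right:
            i += 1
    return n

chain = [N_ent2r, N_pack, N_ext(E, r), N_int, N_ent]
assert chain == sorted(chain)   # N^ent_{2r} <= N^pack_r <= N^ext_r <= N^int_r <= N^ent_r
print(chain)
```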

If we have the bounded doubling property that {B(0,2r)} is covered by {O(1)} translates of {B(0,r)} for each {r>0}, and one has a Haar measure {m} on {G} which assigns a positive finite mass to each ball, then any of the above entropies {N^*_r(E)} is comparable to {m( E + B(0,r) ) / m(B(0,r))}, as can be seen by simple volume packing arguments. Thus in the bounded doubling setting one can usually use the measure-theoretic sum set theory to derive entropy-theoretic sumset bounds (see e.g. this paper of mine for an example of this). However, it turns out that even in the absence of bounded doubling, one still has an entropy analogue of most of the elementary sum set theory, except that one has to accept some degradation in the radius parameter {r} by some absolute constant. Such losses can be acceptable in applications in which the underlying sets {A} are largely “transverse” to the balls {B(0,r)}, so that the {N_r}-entropy of {A} is largely independent of {A}; this is a situation which arises in particular in the case of graphs {\Gamma = \{ (x,f(x)): x \in G \}} discussed above, if one works with “vertical” metrics whose balls extend primarily in the vertical direction. (I hope to present a specific application of this type here in the near future.)
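In the model case {G = {\bf Z}} with counting measure (which certainly obeys bounded doubling), the comparability of {N_r(E)} with {m(E+B(0,r))/m(B(0,r))} can be checked directly; in one dimension the greedy covering below computes {N^{ext}_r} exactly, and the volume packing argument even gives the explicit constants asserted in the code (the set and radius are arbitrary choices):

```python
import random

def N_ext(pts, r):
    """External covering number by balls of radius r (the 1D greedy cover is optimal)."""
    pts = sorted(pts)
    n, i = 0, 0
    while i < len(pts):
        n += 1
        right = pts[i] + 2 * r
        while i < len(pts) and pts[i] <= right:
            i += 1
    return n

random.seed(3)
E = set(random.sample(range(10000), 300))
r = 25
ball = range(-r, r + 1)                   # B(0,r) in Z, so m(B(0,r)) = 2r+1
thick = {x + b for x in E for b in ball}  # E + B(0,r)
ratio = len(thick) / len(ball)            # m(E + B(0,r)) / m(B(0,r))
n = N_ext(E, r)
assert n <= ratio < 2 * n                 # comparable up to absolute constants
print(n, ratio)
```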

Henceforth we work in an additive group {G} equipped with a translation-invariant metric {d}. (One can also generalise things slightly by allowing the metric to attain the values {0} or {+\infty}, without changing much of the analysis below.) By the Heine-Borel theorem, any precompact set {E} will have finite entropy {N_r(E)} for any {r>0}. We now have analogues of the two basic Ruzsa lemmas above:

Lemma 4 (Ruzsa covering lemma) Let {A, B} be precompact non-empty subsets of {G}, and let {r>0}. Then {A} may be covered by at most {\frac{N_r(A+B)}{N_r(B)}} translates of {B-B+B(0,2r)}.

Proof: Let {a_1,\dots,a_n \in A} be a maximal set of points such that the sets {a_i + B + B(0,r)} are all disjoint. Then the sets {a_i+B} are disjoint in {A+B} and have entropy {N_r(a_i+B)=N_r(B)}, and furthermore any ball of radius {r} can intersect at most one of the {a_i+B}. We conclude that {N_r(A+B) \geq n N_r(B)}, so {n \leq \frac{N_r(A+B)}{N_r(B)}}. If {a \in A}, then {a+B+B(0,r)} must intersect one of the {a_i + B + B(0,r)}, so {a \in a_i + B-B + B(0,2r)}, and the claim follows. \Box

Lemma 5 (Ruzsa triangle inequality) Let {A,B,C} be precompact non-empty subsets of {G}, and let {r>0}. Then {N_{4r}(A-C) \leq \frac{N_r(A-B) N_r(B-C)}{N_r(B)}}.

Proof: Consider the addition map {+: (x,y) \mapsto x+y} from {(A-B) \times (B-C)} to {G}. The domain {(A-B) \times (B-C)} may be covered by {N_r(A-B) N_r(B-C)} product balls {B(x,r) \times B(y,r)}. Every element {a-c} of {A - C} has a preimage {\{ (x,y) \in (A-B) \times (B-C): x+y = a-c \}} of this map which projects to a translate of {B}, and thus must meet at least {N_r(B)} of these product balls. However, if two elements of {A-C} are separated by a distance of at least {4r}, then no product ball can intersect both preimages. We thus see that {N_{4r}^{ent}(A-C) \leq \frac{N_r(A-B) N_r(B-C)}{N_r(B)}}, and the claim follows since {N_{4r}(A-C) \leq N^{ent}_{4r}(A-C)}. \Box
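As with the discrete lemmas, Lemma 5 can be sanity-checked numerically for subsets of {{\bf Z}} (where the greedy one-dimensional covering computes {N_r = N^{ext}_r} exactly; the sets and radius below are arbitrary choices):

```python
import random

def N_ext(pts, r):
    """External covering number by balls of radius r (the 1D greedy cover is optimal)."""
    pts = sorted(pts)
    n, i = 0, 0
    while i < len(pts):
        n += 1
        right = pts[i] + 2 * r
        while i < len(pts) and pts[i] <= right:
            i += 1
    return n

def diff(A, B):
    return {a - b for a in A for b in B}

random.seed(4)
A = set(random.sample(range(5000), 40))
B = set(random.sample(range(5000), 30))
C = set(random.sample(range(5000), 50))
r = 10

# Lemma 5: N_{4r}(A-C) <= N_r(A-B) N_r(B-C) / N_r(B).
lhs = N_ext(diff(A, C), 4 * r)
rhs = N_ext(diff(A, B), r) * N_ext(diff(B, C), r) / N_ext(B, r)
assert lhs <= rhs
print(lhs, rhs)
```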

Below the fold we will record some further metric entropy analogues of sum set estimates (basically redoing much of Chapter 2 of my book with Van Vu). Unfortunately there does not seem to be a direct way to abstractly deduce metric entropy results from their sum set analogues (basically due to the failure of a certain strong version of Freiman’s theorem, as discussed in this previous post); nevertheless, the proofs of the discrete arguments are elementary enough that they can be modified with a small amount of effort to handle the entropy case. (In fact, there should be a very general model-theoretic framework in which both the discrete and entropy arguments can be processed in a unified manner; see this paper of Hrushovski for one such framework.)

It is also likely that many of the arguments here extend to the non-commutative setting, but for simplicity we will not pursue such generalisations here.


(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.

The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

(Discrete) | (Continuous) | (Limit method)
Ramsey theory | Topological dynamics | Compactness
Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle
Graph/hypergraph regularity | Measure theory | Graph limits
Polynomial regularity | Linear algebra | Ultralimits
Structural decompositions | Hilbert space geometry | Ultralimits
Fourier analysis | Spectral theory | Direct and inverse limits
Quantitative algebraic geometry | Algebraic geometry | Schemes
Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits
Approximate group theory | Topological group theory | Model theory

As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

  • Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects {x_n} in a common space {X}, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object {\lim_{n \rightarrow \infty} x_n}, which remains in the same space, and is “close” to many of the original objects {x_n} with respect to the given metric or topology.
  • Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects {x_n} in a category {X}, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit {\varinjlim x_n} or the inverse limit {\varprojlim x_n} of these objects, which is another object in the same category {X}, and is connected to the original objects {x_n} by various morphisms.
  • Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects {x_{\bf n}} or of spaces {X_{\bf n}}, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, {X_{\bf n}} might be groups and {x_{\bf n}} might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}} or a new space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}, which is still a model of the same language (e.g. if the spaces {X_{\bf n}} were all groups, then the limiting space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} is an abelian group, then the {X_{\bf n}} will also be abelian groups for many {{\bf n}}.)

The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects {x_{\bf n}} to all lie in a common space {X} in order to form an ultralimit {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; they are permitted to lie in different spaces {X_{\bf n}}; this is more natural in many discrete contexts, e.g. when considering graphs on {{\bf n}} vertices in the limit when {{\bf n}} goes to infinity. Also, no convergence properties on the {x_{\bf n}} are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces {X_{\bf n}} involved are required in order to construct the ultraproduct.

With so few requirements on the objects {x_{\bf n}} or spaces {X_{\bf n}}, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly well suited to the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the {x_{\bf n}}, will be exactly obeyed by the limit object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking the contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”).
To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.

Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.

Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.


Let {F} be a field. A definable set over {F} is a set of the form

\displaystyle  \{ x \in F^n | \phi(x) \hbox{ is true} \} \ \ \ \ \ (1)

where {n} is a natural number, and {\phi(x)} is a predicate involving the ring operations {+,\times} of {F}, the equality symbol {=}, an arbitrary number of constants and free variables in {F}, the quantifiers {\forall, \exists}, boolean operators such as {\vee,\wedge,\neg}, and parentheses and colons, where the quantifiers are always understood to be over the field {F}. Thus, for instance, the set of quadratic residues

\displaystyle  \{ x \in F | \exists y: x = y \times y \}

is definable over {F}, and any algebraic variety over {F} is also a definable set over {F}. Henceforth we will abbreviate “definable over {F}” simply as “definable”.

If {F} is a finite field, then every subset of {F^n} is definable, since finite sets are automatically definable. However, we can obtain a more interesting notion in this case by restricting the complexity of a definable set. We say that {E \subset F^n} is a definable set of complexity at most {M} if {n \leq M}, and {E} can be written in the form (1) for some predicate {\phi} of length at most {M} (where all operators, quantifiers, relations, variables, constants, and punctuation symbols are considered to have unit length). Thus, for instance, a hypersurface in {n} dimensions of degree {d} would be a definable set of complexity {O_{n,d}(1)}. We will then be interested in the regime where the complexity remains bounded, but the field size (or field characteristic) becomes large.

In a recent paper, I established (in the large characteristic case) the following regularity lemma for dense definable graphs, which significantly strengthens the Szemerédi regularity lemma in this context, by eliminating “bad” pairs, giving a polynomially strong regularity, and also giving definability of the cells:

Lemma 1 (Algebraic regularity lemma) Let {F} be a finite field, let {V,W} be definable non-empty sets of complexity at most {M}, and let {E \subset V \times W} also be definable with complexity at most {M}. Assume that the characteristic of {F} is sufficiently large depending on {M}. Then we may partition {V = V_1 \cup \ldots \cup V_m} and {W = W_1 \cup \ldots \cup W_n} with {m,n = O_M(1)}, with the following properties:

  • (Definability) Each of the {V_1,\ldots,V_m,W_1,\ldots,W_n} are definable of complexity {O_M(1)}.
  • (Size) We have {|V_i| \gg_M |V|} and {|W_j| \gg_M |W|} for all {i=1,\ldots,m} and {j=1,\ldots,n}.
  • (Regularity) We have

    \displaystyle  |E \cap (A \times B)| = d_{ij} |A| |B| + O_M( |F|^{-1/4} |V| |W| ) \ \ \ \ \ (2)

    for all {i=1,\ldots,m}, {j=1,\ldots,n}, {A \subset V_i}, and {B\subset W_j}, where {d_{ij}} is a rational number in {[0,1]} with numerator and denominator {O_M(1)}.
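The conclusion of the lemma can be illustrated (though certainly not proven) by the Paley graph, i.e. the definable graph {E = \{ (v,w): v-w \hbox{ is a nonzero square} \}} over {F = {\bf F}_p} with {p \equiv 1 \pmod 4}: here no non-trivial partition is needed (one can take {m=n=1} and {d_{11}=1/2}), and the regularity estimate (2) can be tested against random pairs of sets. The Python sketch below is only a sanity check of the error bound, with arbitrarily chosen parameters:

```python
import random

p = 101                                        # an arbitrary prime with p = 1 (mod 4)
squares = {(x * x) % p for x in range(1, p)}   # nonzero quadratic residues mod p

def edges(S, T):
    """Number of pairs (s,t) in S x T with s - t a nonzero square mod p."""
    return sum(1 for s in S for t in T if (s - t) % p in squares)

random.seed(5)
for _ in range(20):
    S = random.sample(range(p), random.randint(10, p))
    T = random.sample(range(p), random.randint(10, p))
    err = abs(edges(S, T) - 0.5 * len(S) * len(T))
    assert err <= p ** 1.75    # the O_M(|F|^{-1/4}|V||W|) error in (2), with d_{11} = 1/2
print("ok")
```

(The actual discrepancy here is of the much smaller order {p^{1/2}|S|^{1/2}|T|^{1/2}}, by the quasirandomness of the Paley graph, so the {p^{7/4}} bound is comfortably satisfied.)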

My original proof of this lemma was quite complicated, based on an explicit calculation of the “square”

\displaystyle  \mu(w,w') := \{ v \in V: (v,w), (v,w') \in E \}

of {E} using the Lang-Weil bound and some facts about the étale fundamental group. It was the reliance on the latter which was the main reason why the result was restricted to the large characteristic setting. (I then applied this lemma to classify expanding polynomials over finite fields of large characteristic, but I will not discuss these applications here; see this previous blog post for more discussion.)

Recently, Anand Pillay and Sergei Starchenko (and independently, Udi Hrushovski) have observed that the theory of the étale fundamental group is not necessary in the argument, and the lemma can in fact be deduced from quite general model-theoretic techniques, in particular using (a local version of) the concept of stability. One of the consequences of this new proof of the lemma is that the hypothesis of large characteristic can be omitted; the lemma is now known to be valid for arbitrary finite fields {F} (although its content is trivial if the field is not sufficiently large depending on the complexity {M}).

Inspired by this, I decided to see if I could find yet another proof of the algebraic regularity lemma, again avoiding the theory of the étale fundamental group. It turns out that the spectral proof of the Szemerédi regularity lemma (discussed in this previous blog post) adapts very nicely to this setting. The key fact needed about definable sets over finite fields is that their cardinality takes on an essentially discrete set of values. More precisely, we have the following fundamental result of Chatzidakis, van den Dries, and Macintyre:

Proposition 2 Let {F} be a finite field, and let {M > 0}.

  • (Discretised cardinality) If {E} is a non-empty definable set of complexity at most {M}, then one has

    \displaystyle  |E| = c |F|^d + O_M( |F|^{d-1/2} ) \ \ \ \ \ (3)

    where {d = O_M(1)} is a natural number, and {c} is a positive rational number with numerator and denominator {O_M(1)}. In particular, we have {|F|^d \ll_M |E| \ll_M |F|^d}.

  • (Definable cardinality) Assume {|F|} is sufficiently large depending on {M}. If {V, W}, and {E \subset V \times W} are definable sets of complexity at most {M}, so that {E_w := \{ v \in V: (v,w) \in E \}} can be viewed as a definable subset of {V} that is definably parameterised by {w \in W}, then for each natural number {d = O_M(1)} and each positive rational {c} with numerator and denominator {O_M(1)}, the set

    \displaystyle  \{ w \in W: |E_w| = c |F|^d + O_M( |F|^{d-1/2} ) \} \ \ \ \ \ (4)

    is definable with complexity {O_M(1)}, where the implied constants in the asymptotic notation used to define (4) are the same as those appearing in (3). (Informally: the “dimension” {d} and “measure” {c} of {E_w} depend definably on {w}.)

We will take this proposition as a black box; a proof can be obtained by combining the description of definable sets over pseudofinite fields (discussed in this previous post) with the Lang-Weil bound (discussed in this previous post). (The former fact is phrased using nonstandard analysis, but one can use standard compactness-and-contradiction arguments to convert such statements to statements in standard analysis, as discussed in this post.)

The above proposition places severe restrictions on the cardinality of definable sets; for instance, it shows that one cannot have a definable set of complexity at most {M} and cardinality {|F|^{1/2}}, if {|F|} is sufficiently large depending on {M}. If {E \subset V} are definable sets of complexity at most {M}, it shows that {|E| = (c+ O_M(|F|^{-1/2})) |V|} for some rational {0\leq c \leq 1} with numerator and denominator {O_M(1)}; furthermore, if {c=0}, we may improve this bound to {|E| = O_M( |F|^{-1} |V|)}. In particular, we obtain the following “self-improving” properties:

  • If {E \subset V} are definable of complexity at most {M} and {|E| \leq \epsilon |V|} for some {\epsilon>0}, then (if {\epsilon} is sufficiently small depending on {M} and {F} is sufficiently large depending on {M}) this forces {|E| = O_M( |F|^{-1} |V| )}.
  • If {E \subset V} are definable of complexity at most {M} and {||E| - c |V|| \leq \epsilon |V|} for some {\epsilon>0} and positive rational {c}, then (if {\epsilon} is sufficiently small depending on {M,c} and {F} is sufficiently large depending on {M,c}) this forces {|E| = c |V| + O_M( |F|^{-1/2} |V| )}.
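A concrete instance of the discretised cardinality property (3): the definable set of squares {\{ x \in F: \exists y: x = y \times y \}} in {F = {\bf F}_p} has cardinality exactly {(p+1)/2} for odd {p}, matching (3) with {c = 1/2} and {d = 1}. A quick check (the primes sampled are arbitrary):

```python
def definable_count(p):
    """|{x in F_p : exists y with x = y*y}|, a definable set of bounded complexity."""
    return len({(y * y) % p for y in range(p)})

# Proposition 2 predicts |E| = c|F|^d + O_M(|F|^{d-1/2}); here c = 1/2 and d = 1,
# and in fact |E| = (p+1)/2 exactly for odd primes p.
for p in [101, 1009, 10007, 100003]:
    E = definable_count(p)
    assert E == (p + 1) // 2
    assert abs(E - 0.5 * p) <= p ** 0.5
print("ok")
```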

It turns out that these self-improving properties can be applied to the coefficients of various matrices (basically powers of the adjacency matrix associated to {E}) that arise in the spectral proof of the regularity lemma to significantly improve the bounds in that lemma; we describe how this is done below the fold. We also make some connections to the stability-based proofs of Pillay-Starchenko and Hrushovski.


I’ve just uploaded to the arXiv my article “Algebraic combinatorial geometry: the polynomial method in arithmetic combinatorics, incidence combinatorics, and number theory”, submitted to the new journal “EMS surveys in the mathematical sciences”. This is the first draft of a survey article on the polynomial method – a technique in combinatorics and number theory for controlling a relevant set of points by comparing it with the zero set of a suitably chosen polynomial, and then using tools from algebraic geometry (e.g. Bezout’s theorem) on that zero set. As such, the method combines algebraic geometry with combinatorial geometry, and could be viewed as the philosophy of a combined field which I dub “algebraic combinatorial geometry”. There is also an important extension of this method when one is working over the reals, in which methods from algebraic topology (e.g. the ham sandwich theorem and its generalisation to polynomials), and not just algebraic geometry, come into play also.

The polynomial method has been used independently many times in mathematics; for instance, it plays a key role in the proof of Baker’s theorem in transcendence theory, or in Stepanov’s method for giving an elementary proof of the Riemann hypothesis for curves over finite fields; in combinatorics, the combinatorial Nullstellensatz of Alon is another relatively early use of the polynomial method. More recently, it underlies Dvir’s proof of the Kakeya conjecture over finite fields and Guth and Katz’s near-complete solution to the Erdos distance problem in the plane, and can be used to give a short proof of the Szemeredi-Trotter theorem. One of the aims of this survey is to try to present all of these disparate applications of the polynomial method in a somewhat unified context; my hope is that there will eventually be a systematic foundation for algebraic combinatorial geometry which naturally contains all of these different instances of the polynomial method (and also suggests new instances to explore); but the field is unfortunately not at that stage of maturity yet.

This is something of a first draft, so comments and suggestions are even more welcome than usual.  (For instance, I have already had my attention drawn to some additional uses of the polynomial method in the literature that I was not previously aware of.)

Define a partition of {1} to be a finite or infinite multiset {\Sigma} of real numbers in the interval {I := (0,1]} (that is, an unordered set of real numbers in {I}, possibly with multiplicity) whose total sum is {1}: {\sum_{t \in \Sigma}t = 1}. For instance, {\{1/2,1/4,1/8,1/16,\ldots\}} is a partition of {1}. Such partitions arise naturally when trying to decompose a large object into smaller ones, for instance:

  1. (Prime factorisation) Given a natural number {n}, one can decompose it into prime factors {n = p_1 \ldots p_k} (counting multiplicity), and then the multiset

    \displaystyle  \Sigma_{PF}(n) := \{ \frac{\log p_1}{\log n}, \ldots,\frac{\log p_k}{\log n} \}

    is a partition of {1}.

  2. (Cycle decomposition) Given a permutation {\sigma \in S_n} on {n} labels {\{1,\ldots,n\}}, one can decompose {\sigma} into cycles {C_1,\ldots,C_k}, and then the multiset

    \displaystyle  \Sigma_{CD}(\sigma) := \{ \frac{|C_1|}{n}, \ldots, \frac{|C_k|}{n} \}

    is a partition of {1}.

  3. (Normalisation) Given a multiset {\Gamma} of positive real numbers whose sum {S := \sum_{x\in \Gamma}x} is finite and non-zero, the multiset

    \displaystyle  \Sigma_N( \Gamma) := \frac{1}{S} \cdot \Gamma = \{ \frac{x}{S}: x \in \Gamma \}

    is a partition of {1}.
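The first two examples are easy to generate by computer; the following sketch (with arbitrarily chosen inputs) builds {\Sigma_{PF}(n)} and {\Sigma_{CD}(\sigma)} and verifies that each is indeed a partition of {1}:

```python
import math
from fractions import Fraction

def prime_factors(n):
    """Prime factors of n, with multiplicity."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def sigma_pf(n):
    """Example 1: the partition {log p_i / log n} of 1."""
    return [math.log(p) / math.log(n) for p in prime_factors(n)]

def sigma_cd(perm):
    """Example 2: normalised cycle lengths of a permutation, given as a dict i -> perm(i)."""
    n, seen, parts = len(perm), set(), []
    for start in perm:
        if start not in seen:
            length, x = 0, start
            while x not in seen:
                seen.add(x)
                x = perm[x]
                length += 1
            parts.append(Fraction(length, n))
    return parts

assert math.isclose(sum(sigma_pf(360)), 1.0)          # 360 = 2^3 * 3^2 * 5
perm = {1: 3, 2: 2, 3: 5, 4: 1, 5: 4}                 # cycles (1 3 5 4)(2)
assert sorted(sigma_cd(perm)) == [Fraction(1, 5), Fraction(4, 5)]
assert sum(sigma_cd(perm)) == 1
print(sigma_pf(360), sigma_cd(perm))
```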

In the spirit of the universality phenomenon, one can ask what the natural distribution for a “typical” partition should look like; thus one seeks a natural probability distribution on the space of all partitions, analogous to (say) the gaussian distributions on the real line, or GUE distributions on point processes on the line, and so forth. It turns out that there is one natural such distribution which is related to all three examples above, known as the Poisson-Dirichlet distribution. To describe this distribution, we first have to deal with the problem that it is not immediately obvious how to cleanly parameterise the space of partitions, given that the cardinality of the partition can be finite or infinite, that multiplicity is allowed, and that we would like to identify two partitions that are permutations of each other.

One way to proceed is to view a random partition {\Sigma} as a type of point process on the interval {I}, with the constraint that {\sum_{x \in \Sigma} x = 1}, in which case one can study statistics such as the counting functions

\displaystyle  N_{[a,b]} := |\Sigma \cap [a,b]| = \sum_{x \in\Sigma} 1_{[a,b]}(x)

(where the cardinality here counts multiplicity). This can certainly be done, although in the case of the Poisson-Dirichlet process, the formulae for the joint distribution of such counting functions are moderately complicated. Another way to proceed is to order the elements of {\Sigma} in decreasing order

\displaystyle  t_1 \geq t_2 \geq t_3 \geq \ldots \geq 0,

with the convention that one pads the sequence {t_n} by an infinite number of zeroes if {\Sigma} is finite; this identifies the space of partitions with an infinite dimensional simplex

\displaystyle  \{ (t_1,t_2,\ldots) \in [0,1]^{\bf N}: t_1 \geq t_2 \geq \ldots; \sum_{n=1}^\infty t_n = 1 \}.

However, it turns out that the process of ordering the elements is not “smooth” (basically because functions such as {(x,y) \mapsto \max(x,y)} and {(x,y) \mapsto \min(x,y)} are not smooth) and the formulae for the joint distribution in the case of the Poisson-Dirichlet process are again complicated.

It turns out that there is a better (or at least “smoother”) way to enumerate the elements {u_1,(1-u_1)u_2,(1-u_1)(1-u_2)u_3,\ldots} of a partition {\Sigma} than the ordered method, although it is random rather than deterministic. This procedure (which I learned from this paper of Donnelly and Grimmett) works as follows.

  1. Given a partition {\Sigma}, let {u_1} be an element of {\Sigma} chosen at random, with each element {t\in \Sigma} having a probability {t} of being chosen as {u_1} (so if {t \in \Sigma} occurs with multiplicity {m}, the net probability that {t} is chosen as {u_1} is actually {mt}). Note that this is well-defined since the elements of {\Sigma} sum to {1}.
  2. Now suppose {u_1} is chosen. If {\Sigma \backslash \{u_1\}} is empty, we set {u_2,u_3,\ldots} all equal to zero and stop. Otherwise, let {u_2} be an element of {\frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})} chosen at random, with each element {t \in \frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})} having a probability {t} of being chosen as {u_2}. (For instance, if {u_1} occurred with some multiplicity {m>1} in {\Sigma}, then {u_2} can equal {\frac{u_1}{1-u_1}} with probability {(m-1)u_1/(1-u_1)}.)
  3. Now suppose {u_1,u_2} are both chosen. If {\Sigma \backslash \{u_1,u_2\}} is empty, we set {u_3, u_4, \ldots} all equal to zero and stop. Otherwise, let {u_3} be an element of {\frac{1}{1-u_1-u_2} \cdot (\Sigma\backslash \{u_1,u_2\})} chosen at random, with each element {t \in \frac{1}{1-u_1-u_2} \cdot (\Sigma\backslash \{u_1,u_2\})} having a probability {t} of being chosen as {u_3}.
  4. We continue this process indefinitely to create elements {u_1,u_2,u_3,\ldots \in [0,1]}.
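The four steps above amount to size-biased sampling without replacement, followed by rescaling. Here is a rough Python sketch (the function names are ours), together with the inverse map {\Sigma = \{u_1, (1-u_1)u_2, (1-u_1)(1-u_2)u_3,\ldots\}} used to recover the partition:

```python
import random

def enum(sigma, rng=None):
    """Random normalised enumeration Enum(Sigma) of a partition, given as a
    list of positive reals summing to 1 (a size-biased sampling sketch)."""
    rng = rng or random.Random()
    remaining = list(sigma)
    mass = 1.0                      # 1 minus the original-scale mass removed so far
    out = []
    while remaining:
        # each remaining element is chosen with probability proportional to
        # its size, i.e. with probability t after rescaling by 1/mass
        pick = rng.choices(remaining, weights=remaining)[0]
        out.append(pick / mass)     # record the rescaled element u_k
        remaining.remove(pick)
        mass -= pick
    return out

def reconstruct(us):
    """Recover Sigma from an enumeration: {u_1, (1-u_1)u_2, ...}, dropping zeroes."""
    sigma, scale = [], 1.0
    for u in us:
        if scale * u > 0:
            sigma.append(scale * u)
        scale *= 1.0 - u
    return sigma
```

A round trip `reconstruct(enum(p))` returns the original multiset `p` (up to reordering and floating-point error), whichever random order the enumeration happened to choose.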

We denote the random sequence {Enum(\Sigma) := (u_1,u_2,\ldots) \in [0,1]^{\bf N}} formed from a partition {\Sigma} in the above manner as the random normalised enumeration of {\Sigma}; this is a random variable in the infinite unit cube {[0,1]^{\bf N}}, and can be defined recursively by the formula

\displaystyle  Enum(\Sigma) = (u_1, Enum(\frac{1}{1-u_1} \cdot (\Sigma\backslash \{u_1\})))

with {u_1} drawn randomly from {\Sigma}, with each element {t \in \Sigma} chosen with probability {t}, except when {\Sigma =\{1\}} in which case we instead have

\displaystyle  Enum(\{1\}) = (1, 0,0,\ldots).

Note that one can recover {\Sigma} from any of its random normalised enumerations {Enum(\Sigma) := (u_1,u_2,\ldots)} by the formula

\displaystyle  \Sigma = \{ u_1, (1-u_1) u_2,(1-u_1)(1-u_2)u_3,\ldots\} \ \ \ \ \ (1)

with the convention that one discards any zero elements on the right-hand side. Thus {Enum} can be viewed as a (stochastic) parameterisation of the space of partitions by the unit cube {[0,1]^{\bf N}}, which is a simpler domain to work with than the infinite-dimensional simplex mentioned earlier.

Note that this random enumeration procedure can also be adapted to the three models described earlier:

  1. Given a natural number {n}, one can randomly enumerate its prime factors {n =p'_1 p'_2 \ldots p'_k} by letting each prime factor {p} of {n} be equal to {p'_1} with probability {\frac{\log p}{\log n}}, then once {p'_1} is chosen, let each remaining prime factor {p} of {n/p'_1} be equal to {p'_2} with probability {\frac{\log p}{\log n/p'_1}}, and so forth.
  2. Given a permutation {\sigma\in S_n}, one can randomly enumerate its cycles {C'_1,\ldots,C'_k} by letting each cycle {C} in {\sigma} be equal to {C'_1} with probability {\frac{|C|}{n}}, and once {C'_1} is chosen, let each remaining cycle {C} be equal to {C'_2} with probability {\frac{|C|}{n-|C'_1|}}, and so forth. Alternatively, one can traverse the elements of {\{1,\ldots,n\}} in random order, then let {C'_1} be the first cycle one encounters when performing this traversal, let {C'_2} be the next cycle (not equal to {C'_1}) one encounters when performing this traversal, and so forth.
  3. Given a multiset {\Gamma} of positive real numbers whose sum {S := \sum_{x\in\Gamma} x} is finite, we can randomly enumerate the elements {x'_1,x'_2,\ldots} of this multiset by letting each {x \in \Gamma} have a {\frac{x}{S}} probability of being set equal to {x'_1}, and then once {x'_1} is chosen, let each remaining {x \in \Gamma\backslash \{x'_1\}} have a {\frac{x}{S-x'_1}} probability of being set equal to {x'_2}, and so forth.

We then have the following result:

Proposition 1 (Existence of the Poisson-Dirichlet process) There exists a random partition {\Sigma} whose random enumeration {Enum(\Sigma) = (u_1,u_2,\ldots)} has the uniform distribution on {[0,1]^{\bf N}}, thus {u_1,u_2,\ldots} are independently and identically distributed copies of the uniform distribution on {[0,1]}.

A random partition {\Sigma} with this property will be called the Poisson-Dirichlet process. This process, first introduced by Kingman, can be described explicitly using (1) as

\displaystyle  \Sigma = \{ u_1, (1-u_1) u_2,(1-u_1)(1-u_2)u_3,\ldots\},

where {u_1,u_2,\ldots} are iid copies of the uniform distribution on {[0,1]}, although it is not immediately obvious from this definition that {Enum(\Sigma)} is indeed uniformly distributed on {[0,1]^{\bf N}}. We prove this proposition below the fold.
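This stick-breaking description can be simulated directly; the following Python sketch (ours, assuming nothing beyond the formula above) truncates the product after a fixed number of terms, losing only an exponentially small amount of mass.

```python
import random

def poisson_dirichlet_sample(rng, depth=100):
    """Sample (a truncation of) the Poisson-Dirichlet process via the
    stick-breaking formula Sigma = {u1, (1-u1)u2, (1-u1)(1-u2)u3, ...}
    with iid uniform u_i; the neglected mass after `depth` terms is
    typically of size about e^(-depth)."""
    sigma, scale = [], 1.0
    for _ in range(depth):
        u = rng.uniform(0.0, 1.0)
        sigma.append(scale * u)
        scale *= 1.0 - u
    return sigma
```

Note that the sampled elements sum to (essentially) {1}, and the first element {u_1} is uniform on {[0,1]}, so its empirical mean is close to {1/2}.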

An equivalent definition of a Poisson-Dirichlet process is a random partition {\Sigma} with the property that

\displaystyle  (u_1, \frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})) \equiv (U, \Sigma) \ \ \ \ \ (2)

where {u_1} is a random element of {\Sigma} with each {t \in\Sigma} having a probability {t} of being equal to {u_1}, {U} is a uniform variable on {[0,1]} that is independent of {\Sigma}, and {\equiv} denotes equality of distribution. This can be viewed as a sort of stochastic self-similarity property of {\Sigma}: if one randomly removes one element from {\Sigma} and rescales, one gets a new copy of {\Sigma}.

It turns out that each of the three ways to generate partitions listed above can lead to the Poisson-Dirichlet process, either directly or in a suitable limit. We begin with the third way, namely by normalising a Poisson process to have sum {1}:

Proposition 2 (Poisson-Dirichlet processes via Poisson processes) Let {a>0}, and let {\Gamma_a} be a Poisson process on {(0,+\infty)} with intensity function {t \mapsto \frac{1}{t} e^{-at}}. Then the sum {S :=\sum_{x \in \Gamma_a} x} is almost surely finite, and the normalisation {\Sigma_N(\Gamma_a) = \frac{1}{S} \cdot \Gamma_a} is a Poisson-Dirichlet process.

Again, we prove this proposition below the fold. Now we turn to the second way (a topic, incidentally, that was briefly touched upon in this previous blog post):

Proposition 3 (Large cycles of a typical permutation) For each natural number {n}, let {\sigma} be a permutation drawn uniformly at random from {S_n}. Then the random partition {\Sigma_{CD}(\sigma)} converges in the limit {n \rightarrow\infty} to a Poisson-Dirichlet process {\Sigma} in the following sense: given any fixed sequence of intervals {[a_1,b_1],\ldots,[a_k,b_k] \subset I} (independent of {n}), the joint discrete random variable {(N_{[a_1,b_1]}(\Sigma_{CD}(\sigma)),\ldots,N_{[a_k,b_k]}(\Sigma_{CD}(\sigma)))} converges in distribution to {(N_{[a_1,b_1]}(\Sigma),\ldots,N_{[a_k,b_k]}(\Sigma))}.
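As a quick numerical sanity check on Proposition 3 (this little simulation is ours, not from the text): the probability that a uniform permutation of {n} labels has a cycle of length greater than {n/2} is {\sum_{n/2 < k \leq n} \frac{1}{k} \rightarrow \log 2}, which matches the probability that the largest part of a Poisson-Dirichlet partition exceeds {1/2}.

```python
import random

def cycle_lengths(perm):
    """Cycle lengths of the permutation i -> perm[i] (helper repeated here
    so that this snippet is self-contained)."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start not in seen:
            length, i = 0, start
            while i not in seen:
                seen.add(i)
                i = perm[i]
                length += 1
            lengths.append(length)
    return lengths

def prob_giant_cycle(n, trials, rng):
    """Monte Carlo estimate of the probability that a uniformly random
    permutation of n labels has a cycle of length greater than n/2."""
    hits = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        if max(cycle_lengths(perm)) > n / 2:
            hits += 1
    return hits / trials
```

With a few thousand trials at moderate {n}, the estimate lands near {\log 2 \approx 0.693}.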

Finally, we turn to the first way:

Proposition 4 (Large prime factors of a typical number) Let {x > 0}, and let {N_x} be a random natural number chosen according to one of the following three rules:

  1. (Uniform distribution) {N_x} is drawn uniformly at random from the natural numbers in {[1,x]}.
  2. (Shifted uniform distribution) {N_x} is drawn uniformly at random from the natural numbers in {[x,2x]}.
  3. (Zeta distribution) Each natural number {n} has a probability {\frac{1}{\zeta(s)}\frac{1}{n^s}} of being equal to {N_x}, where {s := 1 +\frac{1}{\log x}} and {\zeta(s):=\sum_{n=1}^\infty \frac{1}{n^s}}.

Then {\Sigma_{PF}(N_x)} converges as {x \rightarrow \infty} to a Poisson-Dirichlet process {\Sigma} in the same fashion as in Proposition 3.

The process {\Sigma_{PF}(N_x)} was first studied by Billingsley (and also later by Knuth-Trabb Pardo and by Vershik), but the formulae were initially rather complicated; the proposition above is due to Donnelly and Grimmett, although the third case of the proposition is substantially easier and appears in the earlier work of Lloyd. We prove the proposition below the fold.

The previous two propositions suggest an interesting analogy between large random integers and large random permutations; see this ICM article of Vershik and this non-technical article of Granville (which, incidentally, was once adapted into a play) for further discussion.

As a sample application, consider the problem of estimating the number {\pi(x,x^{1/u})} of integers up to {x} which are not divisible by any prime larger than {x^{1/u}} (i.e. they are {x^{1/u}}-smooth), where {u>0} is a fixed real number. This is essentially (modulo some inessential technicalities concerning the distinction between the intervals {[x,2x]} and {[1,x]}) the probability that {\Sigma_{PF}(N_x)} avoids {[1/u,1]}, which by the above proposition converges to the probability {\rho(u)} that {\Sigma} avoids {[1/u,1]}. Below the fold we will show that this limiting probability is given by the Dickman function, defined by setting {\rho(u)=1} for {0 \leq u \leq 1} and {u\rho'(u) = -\rho(u-1)} for {u > 1}, thus recovering the classical result of Dickman that {\pi(x,x^{1/u}) = (\rho(u)+o(1))x}.
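The delay-differential equation for {\rho} is easy to integrate numerically (note the minus sign in {u\rho'(u) = -\rho(u-1)}: the function is decreasing). A rough forward-Euler sketch in Python (ours); for instance {\rho(2) = 1 - \log 2 \approx 0.3069}.

```python
def dickman(u, h=1e-4):
    """Dickman's function: rho(u) = 1 for 0 <= u <= 1, and
    u * rho'(u) = -rho(u - 1) for u > 1.
    Computed by forward Euler on a grid of spacing h (a rough numerical
    sketch, not a high-precision implementation)."""
    if u <= 1:
        return 1.0
    n = int(round(1 / h))            # grid points per unit length
    J = int(round(u / h))
    hist = [1.0] * (n + 1)           # rho(j*h) for j = 0 .. n, i.e. on [0,1]
    for j in range(n, J):
        t = j * h
        # Euler step: rho(t + h) = rho(t) - h * rho(t - 1) / t
        hist.append(hist[j] - h * hist[j - n] / t)
    return hist[J]
```

The closed form {\rho(u) = 1 - \log u} on {[1,2]} gives a convenient check of the scheme.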

I thank Andrew Granville and Anatoly Vershik for showing me the nice link between prime factors and the Poisson-Dirichlet process. The material here is standard, and (like many of the other notes on this blog) was primarily written for my own benefit, but it may be of interest to some readers. In preparing this article I found this exposition by Kingman to be helpful.

Note: this article will emphasise the computations rather than rigour, and in particular will rely on informal use of infinitesimals to avoid dealing with stochastic calculus or other technicalities. We adopt the convention that we will neglect higher order terms in infinitesimal calculations, e.g. if {dt} is infinitesimal then we will abbreviate {dt + o(dt)} simply as {dt}.

Read the rest of this entry »

Emmanuel Breuillard, Ben Green, Bob Guralnick, and I have just uploaded to the arXiv our joint paper “Expansion in finite simple groups of Lie type“. This long-delayed paper (announced way back in 2010!) is a followup to our previous paper in which we showed that, with one possible exception, generic pairs of elements of a simple algebraic group (over an uncountable field) generated a free group which was strongly dense in the sense that any nonabelian subgroup of this group was Zariski dense. The main result of this paper is to establish the analogous result for finite simple groups of Lie type (as defined in the previous blog post) and bounded rank, namely that almost all pairs {a,b} of elements of such a group generate a Cayley graph which is a (two-sided) expander, with expansion constant bounded below by a quantity depending on the rank of the group. (Informally, this means that the random walk generated by {a,b} spreads out in logarithmic time to be essentially uniformly distributed across the group, as opposed for instance to being largely trapped in an algebraic subgroup. Thus if generic elements did not generate a strongly dense group, one would probably expect expansion to fail.)

There are also some related results established in the paper. Firstly, as we discovered after writing our first paper, there was one class of algebraic groups for which our demonstration of strongly dense subgroups broke down, namely the {Sp_4} groups in characteristic three. In the current paper we provide in a pair of appendices a new argument that covers this case (or more generally, {Sp_4} in odd characteristic), by first reducing to the case of affine groups {k^2 \rtimes SL_2(k)} (which can be found inside {Sp_4} as a subgroup) and then using a ping-pong argument (in a p-adic metric) in the latter context.

Secondly, we show that the distinction between one-sided expansion and two-sided expansion (see this set of lecture notes of mine for definitions) is erased in the context of Cayley graphs of bounded degree, in the sense that such graphs are one-sided expanders if and only if they are two-sided expanders (perhaps with slightly different expansion constants). The argument turns out to be an elementary combinatorial one, based on the “pivot” argument discussed in these lecture notes of mine.

Now to the main result of the paper, namely the expansion of random Cayley graphs. This result had previously been established for {SL_2} by Bourgain and Gamburd, and Ben, Emmanuel and I had used the Bourgain-Gamburd method to achieve the same result for Suzuki groups. For the other finite simple groups of Lie type, expander graphs had been constructed by Kassabov, Lubotzky, and Nikolov, but they required more than two generators, which were placed deterministically rather than randomly. (Here, I am skipping over a large number of other results on expanding Cayley graphs; see this survey of Lubotzky for a fairly recent summary of developments.) The current paper also uses the “Bourgain-Gamburd machine”, as discussed in these lecture notes of mine, to demonstrate expansion. This machine shows how expansion of a Cayley graph follows from three basic ingredients, which we state informally as follows:

  • Non-concentration (A random walk in this graph does not concentrate in a proper subgroup);
  • Product theorem (A medium-sized subset of this group which is not trapped in a proper subgroup will expand under multiplication); and
  • Quasirandomness (The group has no small non-trivial linear representations).

Quasirandomness of arbitrary finite simple groups of Lie type was established many years ago (predating, in fact, the introduction of the term “quasirandomness” by Gowers for this property) by Landazuri-Seitz and Seitz-Zalesskii, and the product theorem was already established by Pyber-Szabo and independently by Breuillard, Green, and myself. So the main problem is to establish non-concentration: that for a random Cayley graph on a finite simple group {G} of Lie type, random walks do not concentrate in proper subgroups.

The first step was to classify the proper subgroups of {G}. Fortunately, these are all known; in particular, such groups are either contained in proper algebraic subgroups of bounded complexity of the algebraic group containing {G} (or a bounded cover thereof), or else arise (up to conjugacy) from a version {G(F')} of the same group {G =G(F)} associated to a proper subfield {F'} of the field {F}; this follows for instance from the work of Larsen and Pink, but can also be deduced using the classification of finite simple groups, together with some work of Aschbacher, Liebeck-Seitz, and Nori. We refer to these two types of subgroups as “structural subgroups” and “subfield subgroups” respectively.

To preclude concentration in a structural subgroup, we use our previous result that generic elements of an algebraic group generate a strongly dense subgroup, and so do not concentrate in any algebraic subgroup. To translate this result from the algebraic group setting to the finite group setting, we need a Schwartz-Zippel lemma for finite simple groups of Lie type. This is straightforward for Chevalley groups, but turns out to be a bit trickier for the Steinberg and Suzuki-Ree groups, and we have to go back to the Chevalley-type parameterisation of such groups in terms of (twisted) one-parameter subgroups, that can be found for instance in the text of Carter; this “twisted Schwartz-Zippel lemma” may possibly have further application to analysis on twisted simple groups of Lie type. Unfortunately, the Schwartz-Zippel estimate becomes weaker in twisted settings, and particularly in the case of triality groups {{}^3 D_4(q)}, which require a somewhat ad hoc additional treatment that relies on passing to a simpler subgroup present in a triality group, namely a central product of two different {SL_2}‘s.

To rule out concentration in a conjugate of a subfield group, we repeat an argument we introduced in our Suzuki paper and pass to a matrix model and analyse the coefficients of the characteristic polynomial of words in this Cayley graph, to prevent them from concentrating in a subfield. (Note that these coefficients are conjugation-invariant.)

The purpose of this post is to isolate a combinatorial optimisation problem regarding subset sums; any improvement upon the current known bounds for this problem would lead to numerical improvements for the quantities pursued in the Polymath8 project. (UPDATE: Unfortunately no purely combinatorial improvement is possible, see comments.) We will also record the number-theoretic details of how this combinatorial problem is used in Zhang’s argument establishing bounded prime gaps.

First, some (rough) motivational background, omitting all the number-theoretic details and focusing on the combinatorics. (But readers who just want to see the combinatorial problem can skip the motivation and jump ahead to Lemma 5.) As part of the Polymath8 project we are trying to establish a certain estimate called {MPZ[\varpi,\delta]} for as wide a range of {\varpi,\delta > 0} as possible. Currently the best result we have is:

Theorem 1 (Zhang’s theorem, numerically optimised) {MPZ[\varpi,\delta]} holds whenever {207\varpi + 43\delta< \frac{1}{4}}.

Enlarging this region would lead to better values of certain parameters {k_0}, {H}, which in turn control the best bound on asymptotic gaps between consecutive primes. See this previous post for more discussion of this. At present, the best value {k_0=23,283} of {k_0} is attained by taking {(\varpi,\delta)} sufficiently close to {(1/899,71/154628)}, so improving Theorem 1 in the neighbourhood of this value is particularly desirable.

I’ll state exactly what {MPZ[\varpi,\delta]} is below the fold. For now, suffice to say that it involves a certain number-theoretic function, the von Mangoldt function {\Lambda}. To prove the theorem, the first step is to use a certain identity (the Heath-Brown identity) to decompose {\Lambda} into a lot of pieces, which take the form

\displaystyle  \alpha_{1} \ast \ldots \ast \alpha_{n} \ \ \ \ \ (1)

for some bounded {n} (in Zhang’s paper {n} never exceeds {20}) and various weights {\alpha_{1},\ldots,\alpha_n} supported at various scales {N_1,\ldots,N_n \geq 1} that multiply up to approximately {x}:

\displaystyle  N_1 \ldots N_n \sim x.

We can write {N_i = x^{t_i}}; thus, ignoring negligible errors, {t_1,\ldots,t_n} are non-negative real numbers that add up to {1}:

\displaystyle  t_1 + \ldots + t_n = 1.

A key technical feature of the Heath-Brown identity is that the weights {\alpha_i} associated to sufficiently large values of {t_i} (e.g. {t_i \geq 1/10}) are “smooth” in a certain sense; this will be detailed below the fold.

The operation {\ast} is Dirichlet convolution, which is commutative and associative. We can thus regroup the convolution (1) in a number of ways. For instance, given any partition {\{1,\ldots,n\} = S \cup T} into disjoint sets {S,T}, we can rewrite (1) as

\displaystyle  \alpha_S \ast \alpha_T

where {\alpha_S} is the convolution of those {\alpha_i} with {i \in S}, and similarly for {\alpha_T}.
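Since these regroupings rely on the commutativity and associativity of {\ast}, it is easy to sanity-check them on truncations: representing an arithmetic function by its values at {1,\ldots,N}, Dirichlet convolution can be implemented as follows (this sketch and its names are ours).

```python
def dirichlet_convolve(a, b):
    """Dirichlet convolution of two arithmetic functions given as lists
    indexed from 1: (a*b)(n) = sum over d | n of a(d) * b(n/d).
    a[0] and b[0] hold the value at 1; length N means values at 1..N."""
    N = min(len(a), len(b))
    out = [0] * N
    for d in range(1, N + 1):
        for m in range(1, N // d + 1):
            out[d * m - 1] += a[d - 1] * b[m - 1]
    return out
```

For example, convolving the all-ones function with itself yields the divisor-counting function {d(n)}, and one can verify directly that the operation is commutative and associative on truncated lists.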

Zhang’s argument splits into two major pieces, in which estimates for certain classes of the terms (1) are established. Cheating a little bit, the following three results are established:

Theorem 2 (Type 0 estimate, informal version) The term (1) gives an acceptable contribution to {MPZ[\varpi,\delta]} whenever

\displaystyle  t_i > \frac{1}{2} + 2 \varpi

for some {i}.

Theorem 3 (Type I/II estimate, informal version) The term (1) gives an acceptable contribution to {MPZ[\varpi,\delta]} whenever one can find a partition {\{1,\ldots,n\} = S \cup T} such that

\displaystyle  \frac{1}{2} - \sigma < \sum_{i \in S} t_i \leq \sum_{i \in T} t_i < \frac{1}{2} + \sigma

where {\sigma} is a quantity such that

\displaystyle  11 \varpi + 3\delta + 2 \sigma < \frac{1}{4}.

Theorem 4 (Type III estimate, informal version) The term (1) gives an acceptable contribution to {MPZ[\varpi,\delta]} whenever one can find {t_i,t_j,t_k} with distinct {i,j,k \in \{1,\ldots,n\}} with

\displaystyle  t_i \leq t_j \leq t_k \leq \frac{1}{2}

and

\displaystyle  4t_k + 4t_j + 5t_i > 4 + 16 \varpi + \delta.

The above assertions are oversimplifications; there are some additional minor smallness hypotheses on {\varpi,\delta} that are needed but at the current (small) values of {\varpi,\delta} under consideration they are not relevant and so will be omitted.

The deduction of Theorem 1 from Theorems 2, 3, 4 is then accomplished via the following, purely combinatorial, lemma:

Lemma 5 (Subset sum lemma) Let {\varpi,\delta > 0} be such that

\displaystyle  207\varpi + 43\delta < \frac{1}{4}. \ \ \ \ \ (2)

Let {t_1,\ldots,t_n} be non-negative reals such that

\displaystyle  t_1 + \ldots + t_n = 1.

Then at least one of the following statements holds:

  • (Type 0) There is {1 \leq i \leq n} such that {t_i > \frac{1}{2} + 2 \varpi}.
  • (Type I/II) There is a partition {\{1,\ldots,n\} = S \cup T} such that

    \displaystyle  \frac{1}{2} - \sigma < \sum_{i \in S} t_i \leq \sum_{i \in T} t_i < \frac{1}{2} + \sigma

    where {\sigma} is a quantity such that

    \displaystyle  11 \varpi + 3\delta + 2 \sigma < \frac{1}{4}.

  • (Type III) One can find distinct {t_i,t_j,t_k} with

    \displaystyle  t_i \leq t_j \leq t_k \leq \frac{1}{2}

    and

    \displaystyle  4t_k + 4t_j + 5t_i > 4 + 16 \varpi + \delta.

The purely combinatorial question is whether the hypothesis (2) can be relaxed here to a weaker condition. This would allow us to improve the ranges for Theorem 1 (and hence for the values of {k_0} and {H} alluded to earlier) without needing further improvement on Theorems 2, 3, 4 (although such improvement is also going to be a focus of Polymath8 investigations in the future).

Let us review how this lemma is currently proven. The key sublemma is the following:

Lemma 6 Let {1/10 < \sigma < 1/2}, and let {t_1,\ldots,t_n} be non-negative numbers summing to {1}. Then one of the following three statements holds:

  • (Type 0) There is a {t_i} with {t_i \geq 1/2 + \sigma}.
  • (Type I/II) There is a partition {\{1,\ldots,n\} = S \cup T} such that

    \displaystyle  \frac{1}{2} - \sigma < \sum_{i \in S} t_i \leq \sum_{i \in T} t_i < \frac{1}{2} + \sigma.

  • (Type III) There exist distinct {i,j,k} with {2\sigma \leq t_i \leq t_j \leq t_k \leq 1/2-\sigma} and {t_i+t_j,t_i+t_k,t_j+t_k \geq 1/2 + \sigma}.

Proof: Suppose that Type I/II never occurs; then every partial sum {\sum_{i \in S} t_i} is either “small” in the sense that it is less than or equal to {1/2-\sigma}, or “large” in the sense that it is greater than or equal to {1/2+\sigma}, since otherwise we would be in the Type I/II case, either with {S} as is and {T} the complement of {S}, or vice versa.

Call a summand {t_i} “powerless” if it cannot be used to turn a small partial sum into a large partial sum, thus there are no {S \subset \{1,\ldots,n\} \backslash \{i\}} such that {\sum_{j \in S} t_j} is small and {t_i + \sum_{j \in S} t_j} is large. We then split {\{1,\ldots,n\} = A \cup B} where {A} are the powerless elements and {B} are the powerful elements.

By induction we see that if {S \subset B} and {\sum_{i \in S} t_i} is small, then {\sum_{i \in S} t_i + \sum_{i \in A} t_i} is also small. Thus every sum of powerful summands is either less than {1/2-\sigma-\sum_{i \in A} t_i} or greater than {1/2+\sigma}. Since a powerful element must be able to convert a small sum to a large sum (in fact it must be able to convert a small sum of powerful summands to a large sum, by stripping out the powerless summands), we conclude that every powerful element has size greater than {2\sigma + \sum_{i \in A} t_i}. We may assume we are not in Type 0; then every powerful summand is at least {2\sigma + \sum_{i \in A} t_i} and at most {1/2 - \sigma - \sum_{i \in A} t_i}. In particular, there have to be at least three powerful summands, as otherwise {\sum_{i=1}^n t_i} could not be as large as {1}. As {\sigma > 1/10}, we have {4\sigma > 1/2-\sigma}, and we conclude that the sum of any two powerful summands is large (which, incidentally, shows that there are exactly three powerful summands). Taking {t_i,t_j,t_k} to be these three powerful summands in increasing order, we land in Type III. \Box
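The case analysis in Lemma 6 can be verified by brute force for any explicit tuple {(t_1,\ldots,t_n)}, by enumerating all subset sums exactly. A Python sketch (ours) using rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations

def classify(ts, sigma):
    """Brute-force case analysis of Lemma 6 for an explicit tuple ts of
    non-negative rationals summing to 1 (exact arithmetic, no rounding)."""
    assert sum(ts) == 1
    half = Fraction(1, 2)
    # Type 0: a single summand of size at least 1/2 + sigma
    if any(t >= half + sigma for t in ts):
        return "Type 0"
    # Type I/II: a subset sum strictly inside (1/2 - sigma, 1/2 + sigma)
    n = len(ts)
    for r in range(n + 1):
        for S in combinations(range(n), r):
            s = sum(ts[i] for i in S)
            lo, hi = min(s, 1 - s), max(s, 1 - s)
            if half - sigma < lo <= hi < half + sigma:
                return "Type I/II"
    # Type III: three summands in [2*sigma, 1/2 - sigma] whose pairwise sums
    # are all at least 1/2 + sigma (it suffices to check the smallest pair)
    for trip in combinations(range(n), 3):
        ti, tj, tk = sorted(ts[i] for i in trip)
        if ti >= 2 * sigma and tk <= half - sigma and ti + tj >= half + sigma:
            return "Type III"
    return "none"   # unreachable for 1/10 < sigma < 1/2, by Lemma 6
```

For instance, three equal summands {t_1=t_2=t_3=1/3} land in Type III for {\sigma} slightly above {1/10}, since no subset sum then falls strictly between {1/2-\sigma} and {1/2+\sigma}.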

Now we see how Lemma 6 implies Lemma 5. Let {\varpi,\delta} be as in Lemma 5. We take {\sigma} almost as large as we can for the Type I/II case, thus we set

\displaystyle  \sigma := \frac{1}{8} - \frac{11}{2} \varpi - \frac{3}{2} \delta - \epsilon \ \ \ \ \ (3)

for some sufficiently small {\epsilon>0}. We observe from (2) that we certainly have

\displaystyle  \sigma > 2 \varpi

and

\displaystyle  \sigma > \frac{1}{10}

with plenty of room to spare. We then apply Lemma 6. The Type 0 case of that lemma then implies the Type 0 case of Lemma 5, while the Type I/II case of Lemma 6 also implies the Type I/II case of Lemma 5. Finally, suppose that we are in the Type III case of Lemma 6. Since

\displaystyle  4t_k + 4t_j + 5 t_i = \frac{5}{2} (t_i+t_j) + \frac{5}{2}(t_i+t_k) + \frac{3}{2} (t_j+t_k)

and since each of the pair sums {t_i+t_j}, {t_i+t_k}, {t_j+t_k} is at least {\frac{1}{2}+\sigma} in the Type III case of Lemma 6, we have

\displaystyle  4t_k + 4t_j + 5 t_i \geq \frac{13}{2} (\frac{1}{2}+\sigma)

and so we will be done if

\displaystyle  \frac{13}{2} (\frac{1}{2}+\sigma) > 4 + 16 \varpi + \delta.

Inserting (3) and taking {\epsilon} small enough, it suffices to verify that

\displaystyle  \frac{13}{2} (\frac{1}{2}+\frac{1}{8} - \frac{11}{2} \varpi - \frac{3}{2}\delta) > 4 + 16 \varpi + \delta

but after some computation this is equivalent to (2).

It seems that there is some slack in this computation; some of the conclusions of the Type III case of Lemma 5, in particular, ended up being “wasted”, and it is possible that one did not fully exploit all the partial sums that could be used to create a Type I/II situation. So there may be a way to make improvements through purely combinatorial arguments. (UPDATE: As it turns out, this is sadly not the case: consideration of the case when {n=4}, {t_1 = 1/4 - 3\sigma/2}, and {t_2=t_3=t_4 = 1/4+\sigma/2} shows that one cannot obtain any further improvement without actually improving the Type I/II and Type III analysis.)
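One can check this extremal configuration exactly in code (the specific value {\sigma = 1/9} below is an arbitrary concrete choice of ours in the admissible range): no subset sum falls strictly inside {(1/2-\sigma, 1/2+\sigma)}, and the best available triple achieves the Type III bound only with equality.

```python
from fractions import Fraction
from itertools import combinations

# Extremal configuration from the update: n = 4, t1 = 1/4 - 3*sigma/2,
# t2 = t3 = t4 = 1/4 + sigma/2, with sigma = 1/9 as a concrete choice.
sigma = Fraction(1, 9)
half = Fraction(1, 2)
ts = [Fraction(1, 4) - 3 * sigma / 2] + [Fraction(1, 4) + sigma / 2] * 3
assert sum(ts) == 1

# No Type 0: every summand is strictly below 1/2.
assert max(ts) < half

# No Type I/II: no subset sum lies strictly inside (1/2 - sigma, 1/2 + sigma);
# the sums t1 + t2 = 1/2 - sigma and t2 + t3 = 1/2 + sigma sit exactly on
# the boundary of that interval.
subset_sums = {sum(ts[i] for i in S)
               for r in range(5) for S in combinations(range(4), r)}
assert not any(half - sigma < s < half + sigma for s in subset_sums)

# Type III holds with no room to spare: the best triple gives exactly
# 4*t_k + 4*t_j + 5*t_i = 13*(1/4 + sigma/2) = (13/2)*(1/2 + sigma).
best = max(4 * c + 4 * b + 5 * a
           for trip in combinations(ts, 3)
           for (a, b, c) in [tuple(sorted(trip))])
assert best == Fraction(13, 2) * (half + sigma)
```

Since the Type III quantity equals {\frac{13}{2}(\frac{1}{2}+\sigma)} exactly rather than exceeding it, the strict inequality needed in Lemma 5 fails in the limit, confirming that the combinatorial argument is tight.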

A technical remark: for the application to Theorem 1, it is possible to enforce a bound on the number {n} of summands in Lemma 5. More precisely, we may assume that {n} is an even number of size at most {n \leq 2K} for any natural number {K} we please, at the cost of adding the additional constraint {t_i > \frac{1}{K}} to the Type III conclusion. Since {t_i} is already at least {2\sigma}, which is at least {\frac{1}{5}}, one can safely take {K=5}, so {n} can be taken to be an even number of size at most {10}, which in principle makes the problem of optimising Lemma 5 a fixed linear programming problem. (Zhang takes {K=10}, but this appears to be overkill. On the other hand, {K} does not appear to be a parameter that overly influences the final numerical bounds.)

Below the fold I give the number-theoretic details of the combinatorial aspects of Zhang’s argument that correspond to the combinatorial problem described above.

Read the rest of this entry »

Among the basic objects of study in combinatorics are finite strings {(a_n)_{n=0}^N} or infinite strings {(a_n)_{n=0}^\infty} of symbols {a_n} from some given alphabet {{\mathcal A}}, which could be either finite or infinite (but which we shall usually take to be compact). For instance, a set {A} of natural numbers can be identified with the infinite string {(1_A(n))_{n=0}^\infty} of {0}s and {1}s formed by the indicator of {A}; e.g. the even numbers can be identified with the string {1010101\ldots} from the alphabet {\{0,1\}}, the multiples of three can be identified with the string {100100100\ldots}, and so forth. One can also consider doubly infinite strings {(a_n)_{n \in {\bf Z}}}, which among other things can be used to describe arbitrary subsets of integers.

On the other hand, the basic object of study in dynamics (and in related fields, such as ergodic theory) is that of a dynamical system {(X,T)}, that is to say a space {X} together with a shift map {T: X \rightarrow X} (which is often assumed to be invertible, although one can certainly study non-invertible dynamical systems as well). One often adds additional structure to this dynamical system, such as topological structure (giving rise to topological dynamics), measure-theoretic structure (giving rise to ergodic theory), complex structure (giving rise to complex dynamics), and so forth. A dynamical system gives rise to an action of the natural numbers {{\bf N}} on the space {X} by using the iterates {T^n: X \rightarrow X} of {T} for {n=0,1,2,\ldots}; if {T} is invertible, we can extend this action to an action of the integers {{\bf Z}} on the same space. One can certainly also consider dynamical systems whose underlying group (or semi-group) is something other than {{\bf N}} or {{\bf Z}} (e.g. one can consider continuous dynamical systems in which the evolution group is {{\bf R}}), but we will restrict attention to the classical situation of {{\bf N}} or {{\bf Z}} actions here.

There is a fundamental correspondence principle connecting the study of strings (or subsets of natural numbers or integers) with the study of dynamical systems. In one direction, given a dynamical system {(X,T)}, an observable {c: X \rightarrow {\mathcal A}} taking values in some alphabet {{\mathcal A}}, and some initial datum {x_0 \in X}, we can first form the forward orbit {(T^n x_0)_{n=0}^\infty} of {x_0}, and then observe this orbit using {c} to obtain an infinite string {(c(T^n x_0))_{n=0}^\infty}. If the shift {T} in this system is invertible, one can extend this infinite string into a doubly infinite string {(c(T^n x_0))_{n \in {\bf Z}}}. Thus we see that every quadruplet {(X,T,c,x_0)} consisting of a dynamical system {(X,T)}, an observable {c}, and an initial datum {x_0} creates an infinite string.

Example 1 If {X} is the three-element set {X = {\bf Z}/3{\bf Z}} with the shift map {Tx := x+1}, {c: {\bf Z}/3{\bf Z} \rightarrow \{0,1\}} is the observable that takes the value {1} at the residue class {0 \hbox{ mod } 3} and zero at the other two classes, and one starts with the initial datum {x_0 = 0 \hbox{ mod } 3}, then the observed string {(c(T^n x_0))_{n=0}^\infty} becomes the indicator {100100100\ldots} of the multiples of three.

In the converse direction, every infinite string {(a_n)_{n=0}^\infty} in some alphabet {{\mathcal A}} arises (in a decidedly non-unique fashion) from a quadruple {(X,T,c,x_0)} in the above fashion. This can be easily seen by the following “universal” construction: take {X} to be the set {X:= {\mathcal A}^{\bf N}} of infinite strings {(b_n)_{n=0}^\infty} in the alphabet {{\mathcal A}}, let {T: X \rightarrow X} be the shift map

\displaystyle  T(b_n)_{n=0}^\infty := (b_{n+1})_{n=0}^\infty,

let {c: X \rightarrow {\mathcal A}} be the observable

\displaystyle  c((b_n)_{n=0}^\infty) := b_0,

and let {x_0 \in X} be the initial point

\displaystyle  x_0 := (a_n)_{n=0}^\infty.

Then one easily sees that the observed string {(c(T^n x_0))_{n=0}^\infty} is nothing more than the original string {(a_n)_{n=0}^\infty}. Note also that this construction can easily be adapted to doubly infinite strings by using {{\mathcal A}^{\bf Z}} instead of {{\mathcal A}^{\bf N}}, at which point the shift map {T} now becomes invertible. An important variant of this construction also attaches an invariant probability measure to {X} that is associated to the limiting density of various sets associated to the string {(a_n)_{n=0}^\infty}, and leads to the Furstenberg correspondence principle, discussed for instance in these previous blog posts. Such principles allow one to rigorously pass back and forth between the combinatorics of strings and the dynamics of systems; for instance, Furstenberg famously used his correspondence principle to demonstrate the equivalence of Szemerédi’s theorem on arithmetic progressions with what is now known as the Furstenberg multiple recurrence theorem in ergodic theory.
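The universal construction can also be sketched in code. The modelling choice below (representing a point of {{\mathcal A}^{\bf N}} lazily as a function {n \mapsto b_n}) is ours, but the shift, observable, and initial datum are exactly those of the construction above:

```python
# Universal construction sketch: a point of X = A^N is modelled lazily as a
# function b: N -> alphabet (our representation choice), so that
def T(b):
    return lambda n: b(n + 1)   # shift map: (T b)_n = b_{n+1}

def c(b):
    return b(0)                 # observable: read the zeroth letter

# Initial datum: the string we wish to generate (here 100100100...,
# the indicator of the multiples of three).
a = lambda n: 1 if n % 3 == 0 else 0

x, observed = a, []
for _ in range(9):
    observed.append(c(x))
    x = T(x)
print(observed)  # recovers the original string 1,0,0,1,0,0,...
```

As claimed, observing the forward orbit of {x_0} simply reads off the original string.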

In the case when the alphabet {{\mathcal A}} is the binary alphabet {\{0,1\}}, and (for technical reasons related to the infamous non-injectivity {0.999\ldots = 1.00\ldots} of the decimal representation system) the string {(a_n)_{n=0}^\infty} does not end with an infinite string of {1}s, then one can reformulate the above universal construction by taking {X} to be the interval {[0,1)}, {T} to be the doubling map {Tx := 2x \hbox{ mod } 1}, {c: X \rightarrow \{0,1\}} to be the observable that takes the value {1} on {[1/2,1)} and {0} on {[0,1/2)} (that is, {c(x)} is the first binary digit of {x}), and {x_0} is the real number {x_0 := \sum_{n=0}^\infty a_n 2^{-n-1}} (that is, {x_0 = 0.a_0a_1\ldots} in binary).

The above universal construction is very easy to describe, and is well suited for “generic” strings {(a_n)_{n=0}^\infty} that have no further obvious structure to them, but it often leads to dynamical systems that are much larger and more complicated than is actually needed to produce the desired string {(a_n)_{n=0}^\infty}, and also often obscures some of the key dynamical features associated to that sequence. For instance, to generate the indicator {100100100\ldots} of the multiples of three that was mentioned previously, the above universal construction requires an uncountable space {X} and a dynamics which does not obviously reflect key features of the sequence such as its periodicity. (Using the unit interval model, the dynamics arise from the orbit of {4/7} (which is {0.100100\ldots} in binary) under the doubling map, which is a rather artificial way to describe the indicator function of the multiples of three.)
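One can check the unit-interval model directly with exact rational arithmetic. The string {100100100\ldots} corresponds to the binary expansion {0.100100\ldots}, which sums to {4/7} (a point on the doubling-map cycle {2/7 \rightarrow 4/7 \rightarrow 1/7 \rightarrow 2/7}); a quick sketch:

```python
from fractions import Fraction

# Unit-interval model: doubling map and first-binary-digit observable.
def T(x):
    return (2 * x) % 1                       # doubling map on [0,1)

def c(x):
    return 1 if x >= Fraction(1, 2) else 0   # first binary digit of x

# x0 = 0.100100100... in binary = 1/2 + 1/16 + 1/128 + ... = 4/7
x, digits = Fraction(4, 7), []
for _ in range(9):
    digits.append(c(x))
    x = T(x)
print(digits)  # -> [1, 0, 0, 1, 0, 0, 1, 0, 0]
```

The orbit cycles through only three rationals, which is the dynamical shadow of the periodicity of the string, but the description via binary digits of {4/7} is, as noted, rather artificial.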

A related aesthetic objection to the universal construction is that of the four components {X,T,c,x_0} of the quadruplet {(X,T,c,x_0)} used to generate the sequence {(a_n)_{n=0}^\infty}, three of the components {X,T,c} are completely universal (in that they do not depend at all on the sequence {(a_n)_{n=0}^\infty}), leaving only the initial datum {x_0} to carry all the distinctive features of the original sequence. While there is nothing wrong with this mathematically, from a conceptual point of view it would make sense to make all four components of the quadruplet adapted to the sequence, in order to take advantage of the accumulated intuition about various special dynamical systems (and special observables), not just special initial data.

One step in this direction can be made by restricting {X} to the orbit {\{ T^n x_0: n \in {\bf N} \}} of the initial datum {x_0} (actually for technical reasons it is better to restrict to the topological closure {\overline{\{ T^n x_0: n \in {\bf N} \}}} of this orbit, in order to keep {X} compact). For instance, starting with the sequence {100100100\ldots}, the orbit now consists of just three points {100100100\ldots}, {010010010\ldots}, {001001001\ldots}, bringing the system more in line with the example in Example 1. Technically, this is the “optimal” representation of the sequence by a quadruplet {(X,T,c,x_0)}, because any other such representation {(X',T',c',x'_0)} is a factor of this representation (in the sense that there is a unique map {\pi: X \rightarrow X'} with {T' \circ \pi = \pi \circ T}, {c' \circ \pi = c}, and {x'_0 = \pi(x_0)}). However, from a conceptual point of view this representation is still somewhat unsatisfactory, given that the elements of the system {X} are interpreted as infinite strings rather than elements of a more geometrically or algebraically rich object (e.g. points in a circle, torus, or other homogeneous space).

For general sequences {(a_n)_{n=0}^\infty}, locating relevant geometric or algebraic structure in a dynamical system generating that sequence is an important but very difficult task (see e.g. this paper of Host and Kra, which is more or less devoted to precisely this task in the context of working out what component of a dynamical system controls the multiple recurrence behaviour of that system). However, for specific examples of sequences {(a_n)_{n=0}^\infty}, one can use an informal procedure of educated guesswork in order to produce a more natural-looking quadruple {(X,T,c,x_0)} that generates that sequence. This is not a particularly difficult or deep operation, but I found it very helpful in internalising the intuition behind the correspondence principle. Being non-rigorous, this procedure does not seem to be emphasised in most presentations of the correspondence principle, so I thought I would describe it here.


The rectification principle in arithmetic combinatorics asserts, roughly speaking, that very small subsets (or, alternatively, small structured subsets) of an additive group or a field of large characteristic can be modeled (for the purposes of arithmetic combinatorics) by subsets of a group or field of zero characteristic, such as the integers {{\bf Z}} or the complex numbers {{\bf C}}. The additive form of this principle is known as the Freiman rectification principle; it has several formulations, going back of course to the original work of Freiman. Here is one formulation as given by Bilu, Lev, and Ruzsa:

Proposition 1 (Additive rectification) Let {A} be a subset of the additive group {{\bf Z}/p{\bf Z}} for some prime {p}, and let {s \geq 1} be an integer. Suppose that {|A| \leq \log_{2s} p}. Then there exists a map {\phi: A \rightarrow A'} into a subset {A'} of the integers which is a Freiman isomorphism of order {s} in the sense that for any {x_1,\ldots,x_s,y_1,\ldots,y_s \in A}, one has

\displaystyle  x_1+\ldots+x_s = y_1+\ldots+y_s

if and only if

\displaystyle  \phi(x_1)+\ldots+\phi(x_s) = \phi(y_1)+\ldots+\phi(y_s).

Furthermore {\phi} is a right-inverse of the obvious projection homomorphism from {{\bf Z}} to {{\bf Z}/p{\bf Z}}.

The original version of the rectification principle allowed the sets involved to be substantially larger in size (cardinality up to a small constant multiple of {p}), but with the additional hypothesis of bounded doubling involved; see the above-mentioned papers, as well as this later paper of Green and Ruzsa, for further discussion.

The proof of Proposition 1 is quite short (see Theorem 3.1 of Bilu-Lev-Ruzsa); the main idea is to use Minkowski’s theorem to find a non-trivial dilate {aA} of {A} that is contained in a small neighbourhood of the origin in {{\bf Z}/p{\bf Z}}, at which point the rectification map {\phi} can be constructed by hand.
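The order-{s} Freiman isomorphism condition itself is easy to verify by brute force. The sketch below (the checker name is ours, and it sidesteps the Minkowski dilation step by starting from a set whose elements are already small, so that the identity lift works):

```python
from itertools import product

def is_freiman_iso(A, phi, s, p):
    """Brute-force check that phi: A (in Z/pZ) -> Z is a Freiman
    isomorphism of order s: s-fold sums agree mod p iff they agree in Z."""
    for xs in product(A, repeat=s):
        for ys in product(A, repeat=s):
            mod_eq = sum(xs) % p == sum(ys) % p
            int_eq = sum(phi[x] for x in xs) == sum(phi[y] for y in ys)
            if mod_eq != int_eq:
                return False
    return True

# A small set in Z/101Z: since all 2-fold sums stay below p = 101, the
# identity section of the projection Z -> Z/pZ is already a Freiman
# isomorphism of order 2 (no dilation needed for this toy example).
p, s = 101, 2
A = [1, 2, 5, 11]
phi = {a: a for a in A}
print(is_freiman_iso(A, phi, s, p))  # -> True
```

By contrast, the identity lift fails for sets with wrap-around sums, e.g. {0, 34, 67} in {{\bf Z}/101{\bf Z}}, where {34+67 \equiv 0+0} modulo {101} but {101 \neq 0} in the integers.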

Very recently, Codrut Grosu obtained an arithmetic analogue of the above theorem, in which the rectification map {\phi} preserves both additive and multiplicative structure:

Theorem 2 (Arithmetic rectification) Let {A} be a subset of the finite field {{\bf F}_p} for some prime {p \geq 3}, and let {s \geq 1} be an integer. Suppose that {|A| < \log_2 \log_{2s} \log_{2s^2} p - 1}. Then there exists a map {\phi: A \rightarrow A'} into a subset {A'} of the complex numbers which is a Freiman field isomorphism of order {s} in the sense that for any {x_1,\ldots,x_n \in A} and any polynomial {P(x_1,\ldots,x_n)} of degree at most {s} and integer coefficients of magnitude summing to at most {s}, one has

\displaystyle  P(x_1,\ldots,x_n)=0

if and only if

\displaystyle  P(\phi(x_1),\ldots,\phi(x_n))=0.

Note that it is necessary to use an algebraically closed field such as {{\bf C}} for this theorem, in contrast to the integers used in Proposition 1, as {{\bf F}_p} can contain objects such as square roots of {-1} which can only map to {\pm i} in the complex numbers (once {s} is at least {2}).
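For instance (a quick computation, not from Grosu's paper): {{\bf F}_{13}} contains square roots of {-1}, and since {P(x) = x^2+1} has degree {2} and coefficient magnitudes summing to {2}, any Freiman field isomorphism of order {s \geq 2} must send such an element to a complex root of {x^2+1}, i.e. to {\pm i}:

```python
# Square roots of -1 in F_13: solutions of x^2 + 1 = 0 mod 13.
p = 13
roots = [x for x in range(p) if (x * x + 1) % p == 0]
print(roots)  # -> [5, 8]
```

Here {5^2 = 25 \equiv -1} and {8 = -5} is the other root, so these two elements can only map to {i} and {-i} in some order.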

Using Theorem 2, one can transfer results in arithmetic combinatorics (e.g. sum-product or Szemerédi-Trotter type theorems) regarding finite subsets of {{\bf C}} to analogous results regarding sufficiently small subsets of {{\bf F}_p}; see the paper of Grosu for several examples of this. This should be compared with the paper of Vu, Wood, and Wood, which introduces a converse principle that embeds finite subsets of {{\bf C}} (or more generally, a characteristic zero integral domain) in a Freiman field-isomorphic fashion into finite subsets of {{\bf F}_p} for arbitrarily large primes {p}, allowing one to transfer arithmetic combinatorial facts from the latter setting to the former.

Grosu’s argument uses some quantitative elimination theory, and in particular a quantitative variant of a lemma of Chang that was discussed previously on this blog. In that previous blog post, it was observed that (an ineffective version of) Chang’s theorem could be obtained using only qualitative algebraic geometry (as opposed to quantitative algebraic geometry tools such as elimination theory results with explicit bounds) by means of nonstandard analysis (or, in what amounts to essentially the same thing in this context, the use of ultraproducts). One can then ask whether one can similarly establish an ineffective version of Grosu’s result by nonstandard means. The purpose of this post is to record that this can indeed be done without much difficulty, though the result obtained, being ineffective, is somewhat weaker than that in Theorem 2. More precisely, we obtain

Theorem 3 (Ineffective arithmetic rectification) Let {s, n \geq 1}. If {{\bf F}} is a field of characteristic at least {C_{s,n}} for some {C_{s,n}} depending on {s,n}, and {A} is a subset of {{\bf F}} of cardinality {n}, then there exists a map {\phi: A \rightarrow A'} into a subset {A'} of the complex numbers which is a Freiman field isomorphism of order {s}.

Our arguments will not provide any effective bound on the quantity {C_{s,n}} (though one could in principle eventually extract such a bound by deconstructing the proof of Proposition 4 below), making this result weaker than Theorem 2 (save for the minor generalisation that it can handle fields of prime power order as well as fields of prime order as long as the characteristic remains large).

Following the principle that ultraproducts can be used as a bridge to connect quantitative and qualitative results (as discussed in these previous blog posts), we will deduce Theorem 3 from the following (well-known) qualitative version:

Proposition 4 (Baby Lefschetz principle) Let {k} be a field of characteristic zero that is finitely generated over the rationals. Then there is an isomorphism {\phi: k \rightarrow \phi(k)} from {k} to a subfield {\phi(k)} of {{\bf C}}.

This principle (first laid out in an appendix of Lefschetz’s book), among other things, often allows one to use the methods of complex analysis (e.g. Riemann surface theory) to study many other fields of characteristic zero. There are many variants and extensions of this principle; see for instance this MathOverflow post for some discussion of these. I used this baby version of the Lefschetz principle recently in a paper on expanding polynomial maps.

Proof: We give two proofs of this fact, one using transcendence bases and the other using Hilbert’s nullstellensatz.

We begin with the former proof. As {k} is finitely generated over {{\bf Q}}, it has finite transcendence degree, thus one can find algebraically independent elements {x_1,\ldots,x_m} of {k} over {{\bf Q}} such that {k} is a finite extension of {{\bf Q}(x_1,\ldots,x_m)}, and in particular by the primitive element theorem {k} is generated by {{\bf Q}(x_1,\ldots,x_m)} and an element {\alpha} which is algebraic over {{\bf Q}(x_1,\ldots,x_m)}. (Here we use the fact that characteristic zero fields are separable.) If we then define {\phi} by first mapping {x_1,\ldots,x_m} to generic (and thus algebraically independent) complex numbers {z_1,\ldots,z_m}, and then setting {\phi(\alpha)} to be a complex root of the minimal polynomial for {\alpha} over {{\bf Q}(x_1,\ldots,x_m)} after replacing each {x_i} with the complex number {z_i}, we obtain a field isomorphism {\phi: k \rightarrow \phi(k)} with the required properties.

Now we give the latter proof. Let {x_1,\ldots,x_m} be elements of {k} that generate that field over {{\bf Q}}, but which are not necessarily algebraically independent. Our task is then equivalent to that of finding complex numbers {z_1,\ldots,z_m} with the property that, for any polynomial {P(x_1,\ldots,x_m)} with rational coefficients, one has

\displaystyle  P(x_1,\ldots,x_m) = 0

if and only if

\displaystyle  P(z_1,\ldots,z_m) = 0.

Let {{\mathcal P}} be the collection of all polynomials {P} with rational coefficients with {P(x_1,\ldots,x_m)=0}, and {{\mathcal Q}} be the collection of all polynomials {P} with rational coefficients with {P(x_1,\ldots,x_m) \neq 0}. The set

\displaystyle  S := \{ (z_1,\ldots,z_m) \in {\bf C}^m: P(z_1,\ldots,z_m)=0 \hbox{ for all } P \in {\mathcal P} \}

is the intersection of countably many algebraic sets and is thus also an algebraic set (by the Hilbert basis theorem or the Noetherian property of algebraic sets). If the desired claim failed, then {S} could be covered by the algebraic sets {\{ (z_1,\ldots,z_m) \in {\bf C}^m: Q(z_1,\ldots,z_m) = 0 \}} for {Q \in {\mathcal Q}}. By decomposing into irreducible varieties and observing (e.g. from the Baire category theorem) that a variety of a given dimension over {{\bf C}} cannot be covered by countably many varieties of smaller dimension, we conclude that {S} must in fact be covered by a finite number of such sets, thus

\displaystyle  S \subset \bigcup_{i=1}^n \{ (z_1,\ldots,z_m) \in {\bf C}^m: Q_i(z_1,\ldots,z_m) = 0 \}

for some {Q_1,\ldots,Q_n \in {\mathcal Q}}. By the nullstellensatz, we thus have an identity of the form

\displaystyle  (Q_1 \ldots Q_n)^l = P_1 R_1 + \ldots + P_r R_r

for some natural numbers {l,r \geq 1}, polynomials {P_1,\ldots,P_r \in {\mathcal P}}, and polynomials {R_1,\ldots,R_r} with coefficients in {\overline{{\bf Q}}}. In particular, this identity also holds in the algebraic closure {\overline{k}} of {k}. Evaluating this identity at {(x_1,\ldots,x_m)} we see that the right-hand side is zero but the left-hand side is non-zero, a contradiction, and the claim follows. \Box

From Proposition 4 one can now deduce Theorem 3 by a routine ultraproduct argument (the same one used in these previous blog posts). Suppose for contradiction that Theorem 3 fails. Then there exist natural numbers {s,n \geq 1}, a sequence of finite fields {{\bf F}_i} of characteristic at least {i}, and subsets {A_i=\{a_{i,1},\ldots,a_{i,n}\}} of {{\bf F}_i} of cardinality {n} such that for each {i}, there does not exist a Freiman field isomorphism of order {s} from {A_i} to the complex numbers. Now we select a non-principal ultrafilter {\alpha \in \beta {\bf N} \backslash {\bf N}}, and construct the ultraproduct {{\bf F} := \prod_{i \rightarrow \alpha} {\bf F}_i} of the finite fields {{\bf F}_i}. This is again a field (and is a basic example of what is known as a pseudo-finite field); because the characteristic of {{\bf F}_i} goes to infinity as {i \rightarrow \infty}, it is easy to see (using Los's theorem) that {{\bf F}} has characteristic zero and can thus be viewed as an extension of the rationals {{\bf Q}}.

Now let {a_j := \lim_{i \rightarrow \alpha} a_{i,j}} be the ultralimit of the {a_{i,j}}, so that {A := \{a_1,\ldots,a_n\}} is the ultraproduct of the {A_i}; then {A} is a subset of {{\bf F}} of cardinality {n}. In particular, if {k} is the field generated by {{\bf Q}} and {A}, then {k} is a finitely generated extension of the rationals and thus, by Proposition 4, there is an isomorphism {\phi: k \rightarrow \phi(k)} from {k} to a subfield {\phi(k)} of the complex numbers. In particular, {\phi(a_1),\ldots,\phi(a_n)} are complex numbers, and for any polynomial {P(x_1,\ldots,x_n)} with integer coefficients, one has

\displaystyle  P(a_1,\ldots,a_n) = 0

if and only if

\displaystyle  P(\phi(a_1),\ldots,\phi(a_n)) = 0.

By Los’s theorem, we then conclude that for all {i} sufficiently close to {\alpha}, one has for all polynomials {P(x_1,\ldots,x_n)} of degree at most {s} and whose coefficients are integers whose magnitude sums up to {s}, one has

\displaystyle  P(a_{i,1},\ldots,a_{i,n}) = 0

if and only if

\displaystyle  P(\phi(a_1),\ldots,\phi(a_n)) = 0.

But this gives a Freiman field isomorphism of order {s} between {A_i} and {\phi(A)}, contradicting the construction of {A_i}, and Theorem 3 follows.

The following result is due independently to Furstenberg and to Sarkozy:

Theorem 1 (Furstenberg-Sarkozy theorem) Let {\delta > 0}, and suppose that {N} is sufficiently large depending on {\delta}. Then every subset {A} of {[N] := \{1,\ldots,N\}} of density {|A|/N} at least {\delta} contains a pair {n, n+r^2} for some natural numbers {n, r} with {r \neq 0}.

This theorem is of course similar in spirit to results such as Roth’s theorem or Szemerédi’s theorem, in which the pattern {n,n+r^2} is replaced by {n,n+r,n+2r} or {n,n+r,\ldots,n+(k-1)r} for some fixed {k} respectively. There are by now many proofs of this theorem (see this recent paper of Lyall for a survey), but most proofs involve some form of Fourier analysis (or spectral theory). This may be compared with the standard proof of Roth’s theorem, which combines some Fourier analysis with what is now known as the density increment argument.

A few years ago, Ben Green, Tamar Ziegler, and I observed that it is possible to prove the Furstenberg-Sarkozy theorem by just using the Cauchy-Schwarz inequality (or van der Corput lemma) and the density increment argument, removing all invocations of Fourier analysis, and instead relying on Cauchy-Schwarz to linearise the quadratic shift {r^2}. As such, this theorem can be considered as even more elementary than Roth's theorem (and its proof can be viewed as a toy model for the proof of Roth's theorem). We ended up not doing too much with this observation, so we decided to share it here.
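Before turning to the proof, the statement can be sanity-checked on small intervals. The sketch below is ours (the helper name `has_square_pair` is not from any paper); the second example illustrates why {N} must be taken large depending on {\delta}:

```python
def has_square_pair(A):
    """Check whether A contains a pair n, n + r^2 with r nonzero."""
    S = set(A)
    return any(n + r * r in S for n in S for r in range(1, len(A) + 1))

# A set of density 2/3 in [1,30] already contains such a pair:
assert has_square_pair([n for n in range(1, 31) if n % 3 != 0])

# But N must be large depending on delta: the residues 1, 3 mod 5 have
# density 10/24 in [1,24] yet avoid all square differences, since their
# differences are 0, 2 or 3 mod 5 while squares are always 0, 1 or 4
# mod 5 (and a difference divisible by 25 cannot fit inside [1,24]).
B = [n for n in range(1, 25) if n % 5 in (1, 3)]
print(has_square_pair(B))  # -> False
```

Thus a fixed density like {10/24} only forces the pattern once {N} is sufficiently large, consistent with the dependence of {N} on {\delta} in the theorem.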

The first step is to use the density increment argument that goes back to Roth. For any {\delta > 0}, let {P(\delta)} denote the assertion that for {N} sufficiently large, all sets {A \subset [N]} of density at least {\delta} contain a pair {n,n+r^2} with {r} non-zero. Note that {P(\delta)} is vacuously true for {\delta > 1}. We will show that for any {0 < \delta_0 \leq 1}, one has the implication

\displaystyle  P(\delta_0 + c \delta_0^3) \implies P(\delta_0) \ \ \ \ \ (1)

for some absolute constant {c>0}. This implies that {P(\delta)} is true for any {\delta>0} (as can be seen by considering the infimum of all {\delta>0} for which {P(\delta)} holds), which gives Theorem 1.

It remains to establish the implication (1). Suppose for sake of contradiction that we can find {0 < \delta_0 \leq 1} for which {P(\delta_0+c\delta^3_0)} holds (for some sufficiently small absolute constant {c>0}), but {P(\delta_0)} fails. Thus, we can find arbitrarily large {N}, and subsets {A} of {[N]} of density at least {\delta_0}, such that {A} contains no patterns of the form {n,n+r^2} with {r} non-zero. In particular, we have

\displaystyle  \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h \in [N^{1/100}]} 1_A(n) 1_A(n+(r+h)^2) = 0.

(The exact ranges of {r} and {h} are not too important here, and could be replaced by various other small powers of {N} if desired.)

Let {\delta := |A|/N} be the density of {A}, so that {\delta_0 \leq \delta \leq 1}. Observe that

\displaystyle  \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h \in [N^{1/100}]} 1_A(n) \delta 1_{[N]}(n+(r+h)^2) = \delta^2 + O(N^{-1/3})

\displaystyle  \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h \in [N^{1/100}]} \delta 1_{[N]}(n) \delta 1_{[N]}(n+(r+h)^2) = \delta^2 + O(N^{-1/3})

and

\displaystyle  \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h \in [N^{1/100}]} \delta 1_{[N]}(n) 1_A(n+(r+h)^2) = \delta^2 + O( N^{-1/3} ).

If we thus set {f := 1_A - \delta 1_{[N]}}, then

\displaystyle  \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h \in [N^{1/100}]} f(n) f(n+(r+h)^2) = -\delta^2 + O( N^{-1/3} ).

In particular, for {N} large enough,

\displaystyle  \mathop{\bf E}_{n \in [N]} |f(n)| \mathop{\bf E}_{r \in [N^{1/3}]} |\mathop{\bf E}_{h \in [N^{1/100}]} f(n+(r+h)^2)| \gg \delta^2.

On the other hand, one easily sees that

\displaystyle  \mathop{\bf E}_{n \in [N]} |f(n)|^2 = O(\delta)

and hence by the Cauchy-Schwarz inequality

\displaystyle  \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} |\mathop{\bf E}_{h \in [N^{1/100}]} f(n+(r+h)^2)|^2 \gg \delta^3

which we can rearrange as

\displaystyle  |\mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h,h' \in [N^{1/100}]} \mathop{\bf E}_{n \in [N]} f(n+(r+h)^2) f(n+(r+h')^2)| \gg \delta^3.

Shifting {n} by {(r+h)^2} we obtain (again for {N} large enough)

\displaystyle  |\mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h,h' \in [N^{1/100}]} \mathop{\bf E}_{n \in [N]} f(n) f(n+(h'-h)(2r+h'+h))| \gg \delta^3.

In particular, by the pigeonhole principle (and deleting the diagonal case {h=h'}, which we can do for {N} large enough) we can find distinct {h,h' \in [N^{1/100}]} such that

\displaystyle  |\mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{n \in [N]} f(n) f(n+(h'-h)(2r+h'+h))| \gg \delta^3,

so in particular

\displaystyle  \mathop{\bf E}_{n \in [N]} |\mathop{\bf E}_{r \in [N^{1/3}]} f(n+(h'-h)(2r+h'+h))| \gg \delta^3.

If we set {d := 2(h'-h)} and shift {n} by {(h'-h) (h'+h)}, we can simplify this (again for {N} large enough) as

\displaystyle  \mathop{\bf E}_{n \in [N]} |\mathop{\bf E}_{r \in [N^{1/3}]} f(n+dr)| \gg \delta^3. \ \ \ \ \ (2)

On the other hand, since

\displaystyle  \mathop{\bf E}_{n \in [N]} f(n) = 0

we have

\displaystyle  \mathop{\bf E}_{n \in [N]} f(n+dr) = O( N^{-2/3+1/100})

for any {r \in [N^{1/3}]}, and thus

\displaystyle  \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} f(n+dr) = O( N^{-2/3+1/100}).

Averaging this with (2) we conclude that

\displaystyle  \mathop{\bf E}_{n \in [N]} \max( \mathop{\bf E}_{r \in [N^{1/3}]} f(n+dr), 0 ) \gg \delta^3.

In particular, by the pigeonhole principle we can find {n \in [N]} such that

\displaystyle  \mathop{\bf E}_{r \in [N^{1/3}]} f(n+dr) \gg \delta^3,

or equivalently {A} has density at least {\delta+c'\delta^3} on the arithmetic progression {\{ n+dr: r \in [N^{1/3}]\}}, which has length {\lfloor N^{1/3}\rfloor } and spacing {d}, for some absolute constant {c'>0}. By partitioning this progression into subprogressions of spacing {d^2} and length {\lfloor N^{1/4}\rfloor} (plus an error set of size {O(N^{1/4})}, we see from the pigeonhole principle that we can find a progression {\{ n' + d^2 r': r' \in [N^{1/4}]\}} of length {\lfloor N^{1/4}\rfloor} and spacing {d^2} on which {A} has density at least {\delta + c\delta^3} (and hence at least {\delta_0+c\delta_0^3}) for some absolute constant {c>0}. If we then apply the induction hypothesis to the set

\displaystyle  A' := \{ r' \in [N^{1/4}]: n' + d^2 r' \in A \}

we conclude (for {N} large enough) that {A'} contains a pair {m, m+s^2} for some natural numbers {m,s} with {s} non-zero. This implies that {(n'+d^2 m), (n'+d^2 m) + (|d|s)^2} lie in {A}, a contradiction, establishing the implication (1).
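The two elementary algebraic identities driving the argument above, the difference-of-squares factorisation used to linearise the quadratic shift, and the lifting of the pair {m, m+s^2} from {A'} back to a square-gap pair in {A}, can be sanity-checked numerically (a sketch, with all names ours):

```python
# Identity behind the linearisation step after Cauchy-Schwarz:
# (r+h')^2 - (r+h)^2 = (h'-h)(2r+h'+h)
for r in range(1, 20):
    for h in range(1, 10):
        for h2 in range(1, 10):
            assert (r + h2) ** 2 - (r + h) ** 2 == (h2 - h) * (2 * r + h2 + h)

# Identity behind the final descent: a pair m, m+s^2 in A' lifts to the
# pair n'+d^2 m, n'+d^2 m + (|d| s)^2 in A, since d^2 s^2 = (|d| s)^2.
for d in range(-5, 6):
    if d == 0:
        continue
    for m in range(1, 10):
        for s in range(1, 10):
            assert (m + s * s) * d * d - m * d * d == (abs(d) * s) ** 2

print("identities verified")
```

Both checks pass on the grid above, confirming the bookkeeping in the shift and descent steps.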

A more careful analysis of the above argument reveals a more quantitative version of Theorem 1: for {N \geq 100} (say), any subset of {[N]} of density at least {C/(\log\log N)^{1/2}} for some sufficiently large absolute constant {C} contains a pair {n,n+r^2} with {r} non-zero. This is not the best bound known; a (difficult) result of Pintz, Steiger, and Szemeredi allows the density to be as low as {C / (\log N)^{\frac{1}{4} \log\log\log\log N}}. On the other hand, this already improves on the (simpler) Fourier-analytic argument of Green that works for densities at least {C/(\log\log N)^{1/11}} (although the original argument of Sarkozy, which is a little more intricate, works up to {C (\log\log N)^{2/3}/(\log N)^{1/3}}). In the other direction, a construction of Ruzsa gives a set of density {\frac{1}{65} N^{-0.267}} without any pairs {n,n+r^2}.

Remark 1 A similar argument also applies with {n,n+r^2} replaced by {n,n+r^k} for fixed {k}, because this sort of pattern is preserved by affine dilations {r' \mapsto n'+d^k r'} into arithmetic progressions whose spacing {d^k} is a {k^{th}} power. By re-introducing Fourier analysis, one can also perform an argument of this type for {n,n+d,n+2d} where {d} is the sum of two squares; see the above-mentioned paper of Green for details. However there seems to be some technical difficulty in extending it to patterns of the form {n,n+P(r)} for polynomials {P} that consist of more than a single monomial (and with the normalisation {P(0)=0}, to avoid local obstructions), because one no longer has this preservation property.
