I’ve just uploaded to the arXiv my paper “Cancellation for the multilinear Hilbert transform”, submitted to Collectanea Mathematica. This paper uses methods from additive combinatorics (and more specifically, the arithmetic regularity and counting lemmas from this paper of Ben Green and myself) to obtain a slight amount of progress towards the open problem of obtaining bounds for the trilinear and higher Hilbert transforms (as discussed in this previous blog post). For instance, the trilinear Hilbert transform
is not known to be bounded for any to , although it is conjectured to do so when and . (For well below , one can use additive combinatorics constructions to demonstrate unboundedness; see this paper of Demeter.) One can approach this problem by considering the truncated trilinear Hilbert transforms
for . It is not difficult to show that the boundedness of is equivalent to the boundedness of with bounds that are uniform in and . On the other hand, from Minkowski’s inequality and Hölder’s inequality one can easily obtain the non-uniform bound of for . The main result of this paper is a slight improvement of this trivial bound to as . Roughly speaking, the way this gain is established is as follows. First there are some standard time-frequency type reductions to reduce to the task of obtaining some non-trivial cancellation on a single “tree”. Using a “generalised von Neumann theorem”, we show that such cancellation will happen if (a discretised version of) one or more of the functions (or a dual function that it is convenient to test against) is small in the Gowers norm. However, the arithmetic regularity lemma alluded to earlier allows one to represent an arbitrary function , up to a small error, as the sum of such a “Gowers uniform” function, plus a structured function (or more precisely, an irrational virtual nilsequence). This effectively reduces the problem to that of establishing some cancellation in a single tree in the case when all functions involved are irrational virtual nilsequences. At this point, the contribution of each component of the tree can be estimated using the “counting lemma” from my paper with Ben. The main term in the asymptotics is a certain integral over a nilmanifold, but because the kernel in the trilinear Hilbert transform is odd, it turns out that this integral vanishes, giving the required cancellation.
The same argument works for higher order Hilbert transforms (and one can also replace the coefficients in these transforms with other rational constants). However, because the quantitative bounds in the arithmetic regularity and counting lemmas are so poor, it does not seem likely that one can use these methods to remove the logarithmic growth in entirely, and some additional ideas will be needed to resolve the full conjecture.
Problem 1 (Erdös-Ulam problem) Let S be a set of points in the plane such that the distance between any two points in S is rational. Is it true that S cannot be (topologically) dense in the plane?
The paper of Anning and Erdös settled, in the affirmative, the analogous question in which all the distances between points of the set are required to be integers rather than rationals.
The Erdös-Ulam problem remains open; it was discussed recently over at Gödel’s lost letter. It is in fact likely (as we shall see below) that the set in the above problem is not only forbidden to be topologically dense, but also cannot be Zariski dense either. If so, then the structure of is quite restricted; it was shown by Solymosi and de Zeeuw that if fails to be Zariski dense, then all but finitely many of the points of must lie on a single line, or a single circle. (Conversely, it is easy to construct examples of dense subsets of a line or circle in which all distances are rational, though in the latter case the square of the radius of the circle must also be rational.)
The main tool of the Solymosi-de Zeeuw analysis was Faltings’ celebrated theorem that every algebraic curve of genus at least two contains only finitely many rational points. The purpose of this post is to observe that an affirmative answer to the full Erdös-Ulam problem similarly follows from the conjectured analogue of Faltings’ theorem for surfaces, namely the following conjecture of Bombieri and Lang:
Conjecture 2 (Bombieri-Lang conjecture) Let be a smooth projective irreducible algebraic surface defined over the rationals which is of general type. Then the set of rational points of is not Zariski dense in .
In fact, the Bombieri-Lang conjecture has been made for varieties of arbitrary dimension, and for more general number fields than the rationals, but the above special case of the conjecture is the only one needed for this application. We will review what “general type” means (for smooth projective complex varieties, at least) below the fold.
The Bombieri-Lang conjecture is considered to be extremely difficult, in particular being substantially harder than Faltings’ theorem, which is itself a highly non-trivial result. So this implication should not be viewed as a practical route to resolving the Erdös-Ulam problem unconditionally; rather, it is a demonstration of the power of the Bombieri-Lang conjecture. Still, it was an instructive algebraic geometry exercise for me to carry out the details of this implication, which quickly boils down to verifying that a certain quite explicit algebraic surface is of general type (Theorem 4 below). As I am not an expert in the subject, my computations here will be rather tedious and pedestrian; it is likely that they could be made much slicker by exploiting more of the machinery of modern algebraic geometry, and I would welcome any such streamlining by actual experts in this area. (For similar reasons, there may be more typos and errors than usual in this post; corrections are welcome as always.) My calculations here are based on a similar calculation of van Luijk, who used analogous arguments to show (assuming Bombieri-Lang) that the set of perfect cuboids is not Zariski-dense in its projective parameter space.
We also remark that in a recent paper of Makhul and Shaffaf, the Bombieri-Lang conjecture (or more precisely, a weaker consequence of that conjecture) was used to show that if is a subset of with rational distances which intersects any line in only finitely many points, then there is a uniform bound on the cardinality of the intersection of with any line. I have also recently learned (private communication) that an unpublished work of Shaffaf has obtained a result similar to the one in this post, namely that the Erdös-Ulam conjecture follows from the Bombieri-Lang conjecture, plus an additional conjecture about the rational curves in a specific surface.
Let us now give the elementary reductions to the claim that a certain variety is of general type. For the sake of contradiction, let S be a dense set of points in the plane such that the distance between any two points of S is rational. Then certainly S contains two points that are a rational distance apart. By applying a translation, rotation, and a (rational) dilation, we may assume that these two points are (0,0) and (1,0). As S is dense, there is a third point of S not on the x-axis, which after a reflection we can place in the upper half-plane; we will write it as (x,y) with y>0.
Given any two points P, Q in S, the quantities |P|², |Q|², and |P−Q|² are rational, and so by the cosine rule the dot product P·Q is rational as well. Since S contains (1,0), this implies that the x-component of every point in S is rational; this in turn implies that the product of the y-coordinates of any two points in S is rational as well (since this differs from P·Q by a rational number). In particular, y² is rational, and all of the points in S now lie in the lattice . (This fact appears to have first been observed in the 1988 Habilitationsschrift of Kemnitz.)
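As a sanity check, the cosine-rule step can be carried out in exact rational arithmetic with Python's `fractions` module. The particular distances below are illustrative choices of mine, not taken from the post.

```python
from fractions import Fraction

# With base points P0 = (0,0) and P1 = (1,0), a point P = (x, y) at
# rational distances a = |P - P0| and b = |P - P1| satisfies
#   b^2 = (x - 1)^2 + y^2 = a^2 - 2x + 1,
# so its x-coordinate x = (a^2 - b^2 + 1)/2 is rational.
def x_coordinate(a, b):
    return (a * a - b * b + 1) / 2

a, b = Fraction(5, 4), Fraction(3, 4)  # example rational distances
x = x_coordinate(a, b)                 # exact rational arithmetic
y_squared = a * a - x * x              # y^2 is then rational as well
```

With these choices x comes out to 1 and y² to 9/16, so P = (1, 3/4) is indeed at distances 5/4 and 3/4 from the two base points, and both x and y² are rational as the argument predicts.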
Now take four points , in in general position (so that the octuplet avoids any pre-specified hypersurface in ); this can be done if is dense. (If one wished, one could re-use the three previous points to be three of these four points, although this ultimately makes little difference to the analysis.) If is any point in , then the distances from to are rationals that obey the equations
for , and thus determine a rational point in the affine complex variety defined as
By inspecting the projection from to , we see that is a branched cover of , with the generic cover having points (coming from the different ways to form the square roots ); in particular, is a complex affine algebraic surface, defined over the rationals. By inspecting the monodromy around the four singular base points (which switch the sign of one of the roots , while keeping the other three roots unchanged), we see that the variety is connected away from its singular set, and thus irreducible. As is topologically dense in , it is Zariski-dense in , and so generates a Zariski-dense set of rational points in . To solve the Erdös-Ulam problem, it thus suffices to show that
This is already very close to a claim that can be directly resolved by the Bombieri-Lang conjecture, but is affine rather than projective, and also contains some singularities. The first issue is easy to deal with, by working with the projectivisation
and the projective complex space is the space of all equivalence classes of tuples up to projective equivalence . By identifying the affine point with the projective point , we see that consists of the affine variety together with the set , which is the union of eight curves, each of which lies in the closure of . Thus is the projective closure of , and is thus a complex irreducible projective surface, defined over the rationals. As is cut out by four quadric equations in and has degree sixteen (as can be seen for instance by inspecting the intersection of with a generic perturbation of a fibre over the generically defined projection ), it is also a complete intersection. To show (3), it then suffices to show that the rational points in are not Zariski dense in .
Heuristically, the reason why we expect few rational points in is as follows. First observe from the projective nature of (1) that every rational point is equivalent to an integer point. But for a septuple of integers of size , the quantity is an integer point of of size , and so should only vanish about of the time. Hence the number of integer points of height comparable to should be about
this is a convergent sum if ranges over (say) powers of two, and so from standard probabilistic heuristics (see this previous post) we in fact expect only finitely many solutions, in the absence of any special algebraic structure (e.g. the structure of an abelian variety, or a birational reduction to a simpler variety) that could produce an unusually large number of solutions.
The Bombieri-Lang conjecture, Conjecture 2, can be viewed as a formalisation of the above heuristics (roughly speaking, it is one of the most optimistic natural conjectures one could make that is compatible with these heuristics while also being invariant under birational equivalence).
for are linearly dependent, where the is in the coordinate position associated to . One way in which this can occur is if one of the gradient vectors vanishes identically. This occurs at precisely points, when is equal to for some , and one has for all (so in particular ). Let us refer to these as the obvious singularities; they arise from the geometrically evident fact that the distance function is singular at .
The other way in which could occur is if a non-trivial linear combination of at least two of the gradient vectors vanishes. From (2), this can only occur if for some distinct , which from (1) implies that
If the non-trivial linear combination involved three or more gradient vectors, then by the pigeonhole principle at least two of the signs involved must be equal, and so the only singular points are (5). So the only remaining possibility is when we have two gradient vectors that are parallel but non-zero, with the signs in (3), (4) opposing. But then (as are in general position) the vectors are non-zero and non-parallel to each other, a contradiction. Thus, outside of the obvious singular points mentioned earlier, the only other singular points are the two points (5).
We will shortly show that the obvious singularities are ordinary double points; the surface near any of these points is analytically equivalent to an ordinary cone near the origin, which is a cone over a smooth conic curve . The two non-obvious singularities (5) are slightly more complicated than ordinary double points: they are elliptic singularities, which approximately resemble a cone over an elliptic curve. (As far as I can tell, this resemblance is exact in the category of real smooth manifolds, but not in the category of algebraic varieties.) If one blows up each of the point singularities of separately, no further singularities are created, and one obtains a smooth projective surface (using the Segre embedding as necessary to embed back into projective space, rather than in a product of projective spaces). Away from the singularities, the rational points of lift up to rational points of . Assuming the Bombieri-Lang conjecture, we thus are able to answer the Erdös-Ulam problem in the affirmative once we establish
This will be done below the fold, by the pedestrian device of explicitly constructing global differential forms on ; I will also be working from a complex analysis viewpoint rather than an algebraic geometry viewpoint as I am more comfortable with the former approach. (As mentioned above, though, there may well be a quicker way to establish this result by using more sophisticated machinery.)
I thank Mark Green and David Gieseker for helpful conversations (and a crash course in varieties of general type!).
Remark 5 The above argument shows in fact (assuming Bombieri-Lang) that sets with all distances rational cannot be Zariski-dense, and thus (by Solymosi-de Zeeuw) must lie on a single line or circle with only finitely many exceptions. Assuming a stronger version of Bombieri-Lang involving a general number field , we obtain a similar conclusion with “rational” replaced by “lying in ” (one has to extend the Solymosi-de Zeeuw analysis to more general number fields, but this should be routine, using the analogue of Faltings’ theorem for such number fields).
Van Vu and I have just uploaded to the arXiv our paper “Random matrices have simple spectrum”. Recall that an Hermitian matrix is said to have simple eigenvalues if all of its eigenvalues are distinct. This is a very typical property for a matrix to have: for instance, as discussed in this previous post, in the space of all Hermitian matrices, the set of matrices without simple spectrum has codimension three, and in the real symmetric case this set has codimension two. In particular, given any random matrix ensemble of Hermitian or real symmetric matrices with an absolutely continuous distribution, we conclude that random matrices drawn from this ensemble will almost surely have simple eigenvalues.
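The absolutely continuous case is easy to observe numerically; here is a small NumPy sketch (the Gaussian ensemble and the dimension are arbitrary illustrative choices of mine).

```python
import numpy as np

# Draw a random real symmetric matrix with absolutely continuous
# (Gaussian) entries and check that all its eigenvalues are distinct.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                  # real symmetric matrix
eigs = np.linalg.eigvalsh(H)       # eigenvalues in ascending order
min_gap = float(np.min(np.diff(eigs)))
# min_gap is strictly positive: the spectrum is (almost surely) simple
```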
For discrete random matrix ensembles, though, the above argument breaks down, even though general universality heuristics predict that the statistics of discrete ensembles should behave similarly to those of continuous ensembles. A model case here is the adjacency matrix of an Erdös-Rényi graph – a graph on vertices in which each pair of vertices is joined by an edge independently with some fixed probability. For the purposes of this paper one should view the edge probability as fixed, e.g. equal to 1/2, while the number of vertices is an asymptotic parameter going to infinity. In this context, our main result is the following (answering a question of Babai):
Our argument works for more general Wigner-type matrix ensembles, but for the sake of illustration we will stick with the Erdös-Rényi case. Previous work on local universality for such matrix models (e.g. the work of Erdös, Knowles, Yau, and Yin) was able to show that any individual eigenvalue gap did not vanish with probability (in fact for some absolute constant ), but because there are many different gaps that one has to simultaneously ensure are non-zero, this did not give Theorem 1, as one is forced to apply the union bound.
Our argument in fact gives simplicity of the spectrum with probability for any fixed ; in a subsequent paper we also show that it gives a quantitative lower bound on the eigenvalue gaps (analogous to how many results on the singularity probability of random matrices can be upgraded to a bound on the least singular value).
The basic idea of the argument can be sketched as follows. Suppose that has a repeated eigenvalue . We split
for a random minor and a random sign vector ; crucially, and are independent. If has a repeated eigenvalue , then by the Cauchy interlacing law, also has an eigenvalue . We now write down the eigenvector equation for at :
Extracting the top coefficients, we obtain
If we let be the -eigenvector of , then by taking inner products with we conclude that
we typically expect to be non-zero, in which case we arrive at
In other words, in order for to have a repeated eigenvalue, the top right column of has to be orthogonal to an eigenvector of the minor . Note that and are going to be independent (once we specify which eigenvector of to take as ). On the other hand, thanks to inverse Littlewood-Offord theory (specifically, we use an inverse Littlewood-Offord theorem of Nguyen and Vu), we know that the vector is unlikely to be orthogonal to any given vector independent of , unless the coefficients of are extremely special (specifically, that most of them lie in a generalised arithmetic progression). The main remaining difficulty is then to show that eigenvectors of a random matrix are typically not of this special form, and this relies on a conditioning argument originally used by Komlós to bound the singularity probability of a random sign matrix. (Basically, if an eigenvector has this special form, then one can use a fraction of the rows and columns of the random matrix to determine the eigenvector completely, while still preserving enough randomness in the remaining portion of the matrix so that this vector will in fact not be an eigenvector with high probability.)
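The Cauchy interlacing step invoked above is easy to verify numerically. The following sketch (with an arbitrary small symmetric 0/1 matrix of my own choosing standing in for the adjacency matrix) checks that the eigenvalues of a minor interlace those of the full matrix.

```python
import numpy as np

# If M is an n x n real symmetric matrix and M' is the (n-1) x (n-1)
# minor obtained by deleting the last row and column, Cauchy interlacing
# gives (with eigenvalues in ascending order)
#   lam_1(M) <= mu_1(M') <= lam_2(M) <= ... <= mu_{n-1}(M') <= lam_n(M).
# In particular, a repeated eigenvalue of M must also be an eigenvalue of M'.
rng = np.random.default_rng(1)
n = 8
U = rng.integers(0, 2, size=(n, n))
M = np.triu(U) + np.triu(U, 1).T        # random symmetric 0/1 matrix
lam = np.linalg.eigvalsh(M)             # eigenvalues of M, ascending
mu = np.linalg.eigvalsh(M[:-1, :-1])    # eigenvalues of the minor
tol = 1e-9
interlaces = all(lam[i] - tol <= mu[i] <= lam[i + 1] + tol
                 for i in range(n - 1))
```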
In graph theory, the recently developed theory of graph limits has proven to be a useful tool for analysing large dense graphs, being a convenient reformulation of the Szemerédi regularity lemma. Roughly speaking, the theory asserts that given any sequence of finite graphs, one can extract a subsequence which converges (in a specific sense) to a continuous object known as a “graphon” – a symmetric measurable function from [0,1]² to [0,1]. What “converges” means in this context is that subgraph densities converge to the associated integrals of the graphon . For instance, the edge density
converges to the integral
the triangle density
converges to the integral
the four-cycle density
converges to the integral
and so forth. One can use graph limits to prove many results in graph theory that were traditionally proven using the regularity lemma, such as the triangle removal lemma, and can also reduce many asymptotic graph theory problems to continuous problems involving multilinear integrals (although the latter problems are not necessarily easy to solve!). See this text of Lovász for a detailed study of graph limits and their applications.
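These convergence statements are easy to see in a simulation. Sampling a large graph from a concrete graphon, such as the illustrative choice W(x,y) = xy (my example, not one from the post), gives edge and triangle densities close to the corresponding integrals, here (1/2)² = 1/4 and (1/3)³ = 1/27 respectively.

```python
import numpy as np

# Sample a W-random graph on n vertices from the graphon W(x, y) = x*y:
# pick x_1, ..., x_n uniformly in [0,1] and join i, j with probability
# W(x_i, x_j).
rng = np.random.default_rng(2)
n = 400
x = rng.random(n)
P = np.outer(x, x)                          # edge probabilities W(x_i, x_j)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                 # symmetric adjacency, zero diagonal
# Edge density over ordered pairs -> double integral of W, i.e. 1/4.
edge_density = A.sum() / (n * (n - 1))
# Closed 3-walks / ordered triples -> triple integral of W(x,y)W(y,z)W(z,x),
# i.e. 1/27.
triangle_density = np.trace(A @ A @ A) / (n * (n - 1) * (n - 2))
```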
One can also express graph limits (and more generally hypergraph limits) in the language of nonstandard analysis (or of ultraproducts); see for instance this paper of Elek and Szegedy, Section 6 of this previous blog post, or this paper of Towsner. (In this post we assume some familiarity with nonstandard analysis, as reviewed for instance in the previous blog post.) Here, one starts as before with a sequence of finite graphs, and then takes an ultraproduct (with respect to some arbitrarily chosen non-principal ultrafilter ) to obtain a nonstandard graph , where is the ultraproduct of the , and similarly for the . The set can then be viewed as a symmetric subset of which is measurable with respect to the Loeb σ-algebra of the product (see this previous blog post for the construction of Loeb measure). A crucial point is that this σ-algebra is larger than the product of the Loeb σ-algebra of the individual vertex set .
where the “graphon” is the orthogonal projection of onto , and the “regular error” is orthogonal to all product sets for . The graphon then captures the statistics of the nonstandard graph , in exact analogy with the more traditional graph limits: for instance, the edge density
(or equivalently, the limit of the along the ultrafilter ) is equal to the integral
where denotes Loeb measure on a nonstandard finite set ; the triangle density
(or equivalently, the limit along of the triangle densities of ) is equal to the integral
and so forth. Note that with this construction, the graphon is living on the Cartesian square of an abstract probability space , which is likely to be inseparable; but it is possible to cut down the Loeb σ-algebra on to a minimal countable σ-algebra for which remains measurable (up to null sets), and then one can identify with , bringing this construction of a graphon in line with the traditional notion of a graphon. (See Remark 5 of this previous blog post for more discussion of this point.)
Additive combinatorics, which studies things like the additive structure of finite subsets of an abelian group , has many analogies and connections with asymptotic graph theory; in particular, there is the arithmetic regularity lemma of Green which is analogous to the graph regularity lemma of Szemerédi. (There is also a higher order arithmetic regularity lemma analogous to hypergraph regularity lemmas, but this is not the focus of the discussion here.) Given this, it is natural to suspect that there is a theory of “additive limits” for large additive sets of bounded doubling, analogous to the theory of graph limits for large dense graphs. The purpose of this post is to record a candidate for such an additive limit. This limit can be used as a substitute for the arithmetic regularity lemma in certain results in additive combinatorics, at least if one is willing to settle for qualitative results rather than quantitative ones; I give a few examples of this below the fold.
It seems that to allow for the most flexible and powerful manifestation of this theory, it is convenient to use the nonstandard formulation (among other things, it allows for full use of the transfer principle, whereas a more traditional limit formulation would only allow for a transfer of those quantities continuous with respect to the notion of convergence). Here, the analogue of a nonstandard graph is an ultra approximate group in a nonstandard group , defined as the ultraproduct of finite -approximate groups for some standard . (A -approximate group is a symmetric set containing the origin such that can be covered by or fewer translates of .) We then let be the external subgroup of generated by ; equivalently, is the union of over all standard . This space has a Loeb measure , defined by setting
whenever is an internal subset of for any standard , and extended to a countably additive measure; the arguments in Section 6 of this previous blog post can be easily modified to give a construction of this measure.
The Loeb measure is a translation invariant measure on , normalised so that has Loeb measure one. As such, one should think of as being analogous to a locally compact abelian group equipped with a Haar measure. It should be noted though that is not actually a locally compact group with Haar measure, for two reasons:
- There is not an obvious topology on that makes it simultaneously locally compact, Hausdorff, and -compact. (One can get one or two out of three without difficulty, though.)
- The addition operation is not measurable from the product Loeb σ-algebra to . Instead, it is measurable from the coarser Loeb σ-algebra to (compare with the analogous situation for nonstandard graphs).
Nevertheless, the analogy is a useful guide for the arguments that follow.
Let denote the space of bounded Loeb measurable functions (modulo almost everywhere equivalence) that are supported on for some standard ; this is a complex algebra with respect to pointwise multiplication. There is also a convolution operation , defined by setting
whenever , are bounded nonstandard functions (extended by zero to all of ), and then extending to arbitrary elements of by density. Equivalently, is the pushforward of the -measurable function under the map .
The basic structural theorem is then as follows.
for some standard and some compact abelian group , equipped with a Haar measure and a measurable homomorphism (using the Loeb σ-algebra on and the Baire σ-algebra on ), with the following properties:
- (i) has dense image, and is the pushforward of Loeb measure by .
- (ii) There exists sets with open and compact, such that
- (iii) Whenever with compact and open, there exists a nonstandard finite set such that
- (iv) If , then we have the convolution formula
where are the pushforwards of to , the convolution on the right-hand side is convolution using , and is the pullback map from to . In particular, if , then for all .
One can view the locally compact abelian group as a “model” or “Kronecker factor” for the ultra approximate group (in close analogy with the Kronecker factor from ergodic theory). In the case that is a genuine nonstandard finite group rather than an ultra approximate group, the non-compact components of the Kronecker group are trivial, and this theorem was implicitly established by Szegedy. The compact group is quite large, and in particular is likely to be inseparable; but as with the case of graphons, when one is only studying at most countably many functions , one can cut down the size of this group to be separable (or equivalently, second countable or metrisable) if desired, so one often works with a “reduced Kronecker factor” which is a quotient of the full Kronecker factor . Once one is in the separable case, the Baire sigma algebra is identical with the more familiar Borel sigma algebra.
Given any sequence of uniformly bounded functions for some fixed , we can view the function defined by
as an “additive limit” of the , in much the same way that graphons are limits of the indicator functions . The additive limits capture some of the statistics of the , for instance the normalised means
converge (along the ultrafilter ) to the mean
and for three sequences of functions, the normalised correlation
converges along to the correlation
the normalised Gowers norm
converges along to the Gowers norm
and so forth. We caution however that some correlations that involve evaluating more than one function at the same point will not necessarily be preserved in the additive limit; for instance the normalised norm
does not necessarily converge to the norm
but can converge instead to a larger quantity, due to the presence of the orthogonal projection in the definition (4) of .
An important special case of an additive limit occurs when the functions involved are indicator functions of some subsets of . The additive limit does not necessarily remain an indicator function, but instead takes values in [0,1] (much as a graphon takes values in [0,1] even though the original indicators take values in {0,1}). The convolution is then the ultralimit of the normalised convolutions ; in particular, the measure of the support of provides a lower bound on the limiting normalised cardinality of a sumset. In many situations this lower bound is an equality, but this is not necessarily the case, because the sumset could contain a large number of elements which have very few representations as the sum of two elements of , and in the limit these portions of the sumset fall outside of the support of . (One can think of the support of as describing the “essential” sumset of , discarding those elements that have only very few representations.) Similarly for higher convolutions of . Thus one can use additive limits to partially control the growth of iterated sumsets of subsets of approximate groups , in the regime where stays bounded and goes to infinity.
Theorem 1 can be proven by Fourier-analytic means (combined with Freiman’s theorem from additive combinatorics), and we will do so below the fold. For now, we give some illustrative examples of additive limits.
Example 2 (Bohr sets) We take to be the intervals , where is a sequence going to infinity; these are -approximate groups for all . Let be an irrational real number, let be an interval in , and for each natural number let be the Bohr set
In this case, the (reduced) Kronecker factor can be taken to be the infinite cylinder with the usual Lebesgue measure . The additive limits of and end up being and , where is the finite cylinder
and is the rectangle
Geometrically, one should think of and as being wrapped around the cylinder via the homomorphism , and then one sees that is converging in some normalised weak sense to , and similarly for and . In particular, the additive limit predicts the growth rate of the iterated sumsets to be quadratic in until becomes comparable to , at which point the growth transitions to linear growth, in the regime where is bounded and is large.
If were rational instead of irrational, then one would need to replace by the finite subgroup here.
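One can watch the doubling behaviour described in this example numerically. The sketch below, with illustrative parameters of my own (α = √2, interval [0, 1/10), N = 10000), computes a Bohr set and its sumset and finds |B+B|/|B| close to 4, the k = 2 instance of the initially quadratic growth.

```python
import math

# Bohr set B = { n in [-N, N] : {alpha * n} in [0, 1/10) }, where {t}
# denotes the fractional part and alpha is irrational.
alpha = math.sqrt(2)
N = 10_000
B = [n for n in range(-N, N + 1) if (alpha * n) % 1.0 < 0.1]

# B + B sits inside the wider Bohr set { n in [-2N, 2N] : {alpha*n} in [0, 1/5) }
# and essentially fills it, so |B+B| should be close to 4|B|.
sumset = {a + b for a in B for b in B}
ratio = len(sumset) / len(B)
```

By equidistribution |B| is about 2N/10 here, and the wrapped-cylinder picture from the example predicts the ratio 4: the sumset is twice as long in the n-direction and twice as wide in the arc direction.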
Example 3 (Structured subsets of progressions) We take to be the rank two progression
where is a sequence going to infinity; these are -approximate groups for all . Let be the subset
Then the (reduced) Kronecker factor can be taken to be with Lebesgue measure , and the additive limits of the and are then and , where is the square
and is the circle
Geometrically, the picture is similar to the Bohr set one, except now one uses a Freiman homomorphism for to embed the original sets into the plane . In particular, one now expects the growth rate of the iterated sumsets and to be quadratic in , in the regime where is bounded and is large.
Example 4 (Dissociated sets) Let be a fixed natural number, and take
where are randomly chosen elements of a large cyclic group , where is a sequence of primes going to infinity. These are -approximate groups. The (reduced) Kronecker factor can (almost surely) then be taken to be with counting measure, and the additive limit of is , where and is the standard basis of . In particular, the growth rates of should grow approximately like for bounded and large.
Example 5 (Random subsets of groups) Let be a sequence of finite additive groups whose order is going to infinity. Let be a random subset of of some fixed density . Then (almost surely) the Kronecker factor here can be reduced all the way to the trivial group , and the additive limit of the is the constant function . The convolutions then converge in the ultralimit (modulo almost everywhere equivalence) to the pullback of ; this reflects the fact that of the elements of can be represented as the sum of two elements of in ways. In particular, occupies a proportion of .
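The collapse to the trivial Kronecker factor in this example can be seen concretely: for a random half-density subset of a cyclic group, every group element typically has many representations as a sum of two elements, so the sumset is the whole group. (The prime and the random seed below are arbitrary choices of mine.)

```python
import random

# Random subset A of Z_p of density about 1/2; each element of Z_p then
# has roughly p/4 representations as a + b with a, b in A, so A + A
# should be all of Z_p.
random.seed(3)
p = 1009
A = [n for n in range(p) if random.random() < 0.5]

representations = [0] * p
for a in A:
    for b in A:
        representations[(a + b) % p] += 1

covers = min(representations) > 0          # A + A = Z_p
avg = sum(representations) / p             # exactly |A|^2 / p, about p/4
```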
Example 6 (Trigonometric series) Take for a sequence of primes going to infinity, and for each let be an infinite sequence of frequencies chosen uniformly and independently from . Let denote the random trigonometric series
Then (almost surely) we can take the reduced Kronecker factor to be the infinite torus (with the Haar probability measure ), and the additive limit of the then becomes the function defined by the formula
In fact, the pullback is the ultralimit of the . As such, for any standard exponent , the normalised norm
can be seen to converge to the limit
The reader is invited to consider combinations of the above examples, e.g. random subsets of Bohr sets, to get a sense of the general case of Theorem 1.
It is likely that this theorem can be extended to the noncommutative setting, using the noncommutative Freiman theorem of Emmanuel Breuillard, Ben Green, and myself, but I have not attempted to do so here (see though this recent preprint of Anush Tserunyan for some related explorations); in a separate direction, there should be extensions that can control higher Gowers norms, in the spirit of the work of Szegedy.
Note: the arguments below will presume some familiarity with additive combinatorics and with nonstandard analysis, and will be a little sketchy in places.
In addition to the Fields medallists mentioned in the previous post, the IMU also awarded the Nevanlinna prize to Subhash Khot, the Gauss prize to Stan Osher (my colleague here at UCLA!), and the Chern medal to Phillip Griffiths. Like I did in 2010, I’ll try to briefly discuss one result of each of the prize winners, though the fields of mathematics here are even further from my expertise than those discussed in the previous post (and all the caveats from that post apply here also).
Subhash Khot is best known for his Unique Games Conjecture, a problem in complexity theory that is perhaps second in importance only to the P≠NP problem for the purposes of demarcating the mysterious line between “easy” and “hard” problems (if one follows standard practice and uses “polynomial time” as the definition of “easy”). The P≠NP problem can be viewed as an assertion that it is difficult to find exact solutions to certain standard theoretical computer science problems (such as 3-SAT); thanks to the NP-completeness phenomenon, it turns out that the precise problem posed here is not of critical importance, and 3-SAT may be substituted with one of the many other problems known to be NP-complete. The unique games conjecture is similarly an assertion about the difficulty of finding even approximate solutions to certain standard problems, in particular “unique games” problems in which one needs to colour the vertices of a graph in such a way that the colour of one vertex of an edge is determined uniquely (via a specified matching) by the colour of the other vertex. This is an easy problem to solve if one insists on exact solutions (in which 100% of the edges have a colouring compatible with the specified matching), but becomes extremely difficult if one permits approximate solutions, with no exact solution available. In analogy with the NP-completeness phenomenon, the threshold for approximate satisfiability of many other problems (such as the MAX-CUT problem) is closely connected with the truth of the unique games conjecture; remarkably, the truth of the unique games conjecture would imply asymptotically sharp thresholds for many of these problems. This has implications for many theoretical computer science constructions which rely on hardness of approximation, such as probabilistically checkable proofs. For a more detailed survey of the unique games conjecture and its implications, see this Bulletin article of Trevisan.
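To make the “unique games” formulation concrete, here is a toy instance in Python (the instance and its encoding are my own, meant only to illustrate the definition, not the conjecture itself). Each edge carries a permutation of the colours, so fixing the colour of one endpoint uniquely determines the satisfying colour of the other:

```python
from itertools import product

# A toy unique game on 3 vertices with k = 3 colours.  An edge (u, v, pi)
# is satisfied by a colouring c when c[v] == pi[c[u]]; since pi is a
# permutation, each colour of u determines a *unique* good colour for v.
k = 3
edges = [
    (0, 1, (1, 2, 0)),   # c[1] must be c[0] + 1 (mod 3)
    (1, 2, (1, 2, 0)),   # c[2] must be c[1] + 1 (mod 3)
    (2, 0, (0, 1, 2)),   # c[0] must equal c[2]
]

def satisfied_fraction(c):
    return sum(c[v] == pi[c[u]] for (u, v, pi) in edges) / len(edges)

# Brute force over all colourings: the three constraints are jointly
# inconsistent (following the cycle forces c[0] = c[0] + 2 mod 3), so no
# colouring satisfies 100% of the edges, but 2/3 of them can be satisfied.
best = max(satisfied_fraction(c) for c in product(range(k), repeat=3))
```

The hardness asserted by the conjecture concerns distinguishing instances where almost all edges can be satisfied from instances where only a small fraction can; this tiny brute-forceable example merely shows what an instance looks like.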
My colleague Stan Osher has worked in many areas of applied mathematics, ranging from image processing to modeling fluids for major animation studios such as Pixar or DreamWorks, but today I would like to talk about one of his contributions that is close to an area of my own expertise, namely compressed sensing. One of the basic reconstruction problems in compressed sensing is the basis pursuit problem of finding the vector in an affine space (where and are given, and is typically somewhat smaller than ) which minimises the -norm of the vector . This is a convex optimisation problem, and thus solvable in principle (it is a polynomial time problem, and thus “easy” in the above theoretical computer science sense). However, once and get moderately large (e.g. of the order of ), standard linear optimisation routines begin to become computationally expensive; also, it is difficult for off-the-shelf methods to exploit any additional structure (e.g. sparsity) in the measurement matrix . Much of the problem comes from the fact that the functional is only barely convex. One way to speed up the optimisation problem is to relax it by replacing the constraint with a convex penalty term , thus one is now trying to minimise the unconstrained functional
This functional is more convex, and is over a computationally simpler domain than the affine space , so is easier (though still not entirely trivial) to optimise over. However, the minimiser to this problem need not match the minimiser to the original problem, particularly if the (sub-)gradient of the original functional is large at , and if is not set to be small. (And setting too small will cause other difficulties with numerically solving the optimisation problem, due to the need to divide by very small denominators.) However, if one modifies the objective function by an additional linear term
then some simple convexity considerations reveal that the minimiser to this new problem will match the minimiser to the original problem, provided that is (or more precisely, lies in) the (sub-)gradient of at – even if is not small. But, one does not know in advance what the correct value of should be, because one does not know what the minimiser is.
With Yin, Goldfarb and Darbon, Osher introduced a Bregman iteration method in which one solves for and simultaneously; given an initial guess for both and , one first updates to the minimiser of the convex functional
(note upon taking the first variation of (1) that is indeed in the subgradient). This procedure converges remarkably quickly (both in theory and in practice) to the true minimiser even for non-small values of , and also has some ability to be parallelised, and has led to actual performance improvements of an order of magnitude or more in certain compressed sensing problems (such as reconstructing an MRI image).
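As a concrete (and heavily simplified) illustration of the Bregman iteration idea, here is a small Python/NumPy sketch on a synthetic compressed sensing problem. All parameters are ad hoc choices of mine, and the inner minimisation is performed by plain iterative soft-thresholding rather than anything state of the art; the point is only the outer loop, which “adds back the residual” at each stage:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 60, 200, 5                    # measurements, unknowns, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)
u_true = np.zeros(m)
u_true[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
f = A @ u_true                          # noiseless measurements

def shrink(x, t):
    # soft-thresholding: the proximal map of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def inner_solve(A, b, mu, steps=1500):
    # crude solver for  min_u  mu * ||u||_1 + 0.5 * ||A u - b||^2
    L = np.linalg.norm(A, 2) ** 2       # step-size control
    u = np.zeros(A.shape[1])
    for _ in range(steps):
        u = shrink(u - (A.T @ (A @ u - b)) / L, mu / L)
    return u

mu = 0.1
b = f.copy()
for _ in range(15):                     # outer Bregman loop
    u = inner_solve(A, b, mu)
    b = b + (f - A @ u)                 # add the residual back in

err = np.linalg.norm(u - u_true) / np.linalg.norm(u_true)
```

Even though each inner problem is biased by the penalty term, the outer loop progressively cancels that bias and drives the iterate toward the exact constrained minimiser; in a toy run like this the relative error ends up small.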
Phillip Griffiths has made many contributions to complex, algebraic and differential geometry, and I am not qualified to describe most of these; my primary exposure to his work is through his text on algebraic geometry with Harris, but, excellent though that text is, it is not really representative of his research. But I thought I would mention one cute result of his related to the famous Nash embedding theorem. Suppose that one has a smooth n-dimensional Riemannian manifold that one wants to embed locally into a Euclidean space . The Nash embedding theorem guarantees that one can do this if is large enough depending on , but what is the minimal value of one can get away with? Many years ago, my colleague Robert Greene showed that sufficed (a simplified proof was subsequently given by Gunther). However, this is not believed to be sharp; if one replaces “smooth” with “real analytic” then a standard Cauchy-Kovalevski argument shows that is possible, and no better. So this suggests that is the threshold for the smooth problem also, but this remains open in general. The case n=1 is trivial, and the case n=2 is not too difficult (if the curvature is non-zero) as the codimension is one in this case, and the problem reduces to that of solving a Monge-Ampere equation. With Bryant and Yang, Griffiths settled the n=3 case, under a non-degeneracy condition on the Einstein tensor. This is quite a serious paper – over 100 pages combining differential geometry, PDE methods (e.g. Nash-Moser iteration), and even some harmonic analysis (e.g. they rely at one key juncture on an extension theorem of my advisor, Elias Stein). The main difficulty is that the relevant PDE degenerates along a certain characteristic submanifold of the cotangent bundle, which then requires an extremely delicate analysis to handle.
This is a blog version of a talk I recently gave at the IPAM workshop on “The Kakeya Problem, Restriction Problem, and Sum-product Theory”.
Note: the discussion here will be highly non-rigorous in nature, being extremely loose in particular with asymptotic notation and with the notion of dimension. Caveat emptor.
One of the most infamous unsolved problems at the intersection of geometric measure theory, incidence combinatorics, and real-variable harmonic analysis is the Kakeya set conjecture. We will focus on the following three-dimensional case of the conjecture, stated informally as follows:
Conjecture 1 (Kakeya conjecture) Let E be a subset of ℝ³ that contains a unit line segment in every direction. Then E has dimension 3.
This conjecture is not precisely formulated here, because we have not specified exactly what type of set is (e.g. measurable, Borel, compact, etc.) and what notion of dimension we are using. We will deliberately ignore these technical details in this post. It is slightly more convenient for us here to work with lines instead of unit line segments, so we work with the following slight variant of the conjecture (which is essentially equivalent):
Conjecture 2 (Kakeya conjecture, again) Let be a family of lines in that meet and contain a line in each direction. Let be the union of the restriction to of every line in . Then .
As the space of all directions in is two-dimensional, we thus see that is an (at least) two-dimensional subset of the four-dimensional space of lines in (actually, it lies in a compact subset of this space, since we have constrained the lines to meet ). One could then ask if this is the only property of that is needed to establish the Kakeya conjecture, that is to say if any subset of which contains a two-dimensional family of lines (restricted to , and meeting ) is necessarily three-dimensional. Here we have an easy counterexample, namely a plane in (passing through the origin), which contains a two-dimensional collection of lines. However, we can exclude this case by adding an additional axiom, leading to what one might call a “strong” Kakeya conjecture:
Conjecture 3 (Strong Kakeya conjecture) Let be a two-dimensional family of lines in that meet , and assume the Wolff axiom that no (affine) plane contains more than a one-dimensional family of lines in . Let be the union of the restriction of every line in . Then .
Actually, to make things work out we need a more quantitative version of the Wolff axiom in which we constrain the metric entropy (and not just dimension) of lines that lie close to a plane, rather than exactly on the plane. However, for the informal discussion here we will ignore these technical details. Families of lines that lie in different directions will obey the Wolff axiom, but the converse is not true in general.
In 1995, Wolff established the important lower bound of 5/2 (for various notions of dimension, e.g. Hausdorff dimension) for sets in Conjecture 3 (and hence also for the other forms of the Kakeya problem). However, there is a key obstruction to going beyond the 5/2 barrier, coming from the possible existence of half-dimensional (approximate) subfields of the reals. To explain this problem, it is easiest to first discuss the complex version of the strong Kakeya conjecture, in which all relevant (real) dimensions are doubled:
Conjecture 4 (Strong Kakeya conjecture over ) Let be a four (real) dimensional family of complex lines in that meet the unit ball in , and assume the Wolff axiom that no four (real) dimensional (affine) subspace contains more than a two (real) dimensional family of complex lines in . Let be the union of the restriction of every complex line in . Then has real dimension .
The argument of Wolff can be adapted to the complex case to show that all sets occurring in Conjecture 4 have real dimension at least 5. Unfortunately, this is sharp, due to the following fundamental counterexample:
Proposition 5 (Heisenberg group counterexample) Let be the Heisenberg group
and let be the family of complex lines
with and . Then is a five (real) dimensional subset of that contains every line in the four (real) dimensional set ; however each four real dimensional (affine) subspace contains at most a two (real) dimensional set of lines in . In particular, the strong Kakeya conjecture over the complex numbers is false.
This proposition is proven by a routine computation, which we omit here. The group structure on is given by the group law
giving the structure of a 2-step simply-connected nilpotent Lie group, isomorphic to the usual Heisenberg group over ℝ². Note that while the Heisenberg group is a counterexample to the complex strong Kakeya conjecture, it is not a counterexample to the complex form of the original Kakeya conjecture, because the complex lines in the Heisenberg counterexample do not point in distinct directions, but instead only point in a three (real) dimensional subset of the four (real) dimensional space of available directions for complex lines. For instance, one has the one real-dimensional family of parallel lines
with ; multiplying this family of lines on the right by a group element in gives other families of parallel lines, which in fact sweep out all of .
The Heisenberg counterexample ultimately arises from the “half-dimensional” (and hence degree two) subfield ℝ of ℂ, which induces an involution (complex conjugation) which can then be used to define the Heisenberg group through the formula
Analogous Heisenberg counterexamples can also be constructed if one works over finite fields that contain a “half-dimensional” subfield; we leave the details to the interested reader. Morally speaking, if ℝ in turn contained a subfield of dimension 1/2 (or even a subring or “approximate subring”), then one ought to be able to use this field to generate a counterexample to the strong Kakeya conjecture over the reals. Fortunately, such subfields do not exist; this was a conjecture of Erdős and Volkmann that was proven by Edgar and Miller, and more quantitatively by Bourgain (answering a question of Nets Katz and myself). However, this fact is not entirely trivial to prove, being a key example of the sum-product phenomenon.
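For readers who want to see the finite-field analogue concretely, here is a small Python computation (the modelling choices are mine) over the field of nine elements, whose “half-dimensional” subfield is the field of three elements. It builds the analogous Heisenberg-type surface and counts its points:

```python
from itertools import product

# Model F_9 = F_3[t]/(t^2 + 1) as pairs (a, b) <-> a + b*t.  The Frobenius
# map z -> z^3 acts like complex conjugation, sending a + b*t to a - b*t.
p = 3
F9 = list(product(range(p), repeat=2))

def mul(z, w):
    a, b = z
    c, d = w
    return ((a * c - b * d) % p, (a * d + b * c) % p)   # using t^2 = -1

def conj(z):
    a, b = z
    return (a, (-b) % p)

def im(z):              # "imaginary part": the coefficient of t
    return z[1]

# The Heisenberg-type surface { (z1, z2, z3) : Im(z3) = Im(z1 * conj(z2)) }.
H = [(z1, z2, z3)
     for z1 in F9 for z2 in F9 for z3 in F9
     if im(z3) == im(mul(z1, conj(z2)))]

# F_9^3 is "six-dimensional" over F_3 (3^6 = 729 points), and H cuts out one
# F_3-linear condition, leaving a "five-dimensional" set of 3^5 = 243 points,
# mirroring the five real dimensions of the complex Heisenberg group.
```

The count 3^5 out of 3^6 is the finite-field shadow of a five real dimensional hypersurface in the six real dimensional space ℂ³.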
We thus see that to go beyond the dimension bound of Wolff for the 3D Kakeya problem over the reals, one must do at least one of two things:
- (a) Exploit the distinct directions of the lines in in a way that goes beyond the Wolff axiom; or
- (b) Exploit the fact that ℝ does not contain half-dimensional subfields (or more generally, intermediate-dimensional approximate subrings).
(The situation is more complicated in higher dimensions, as there are more obstructions than the Heisenberg group; for instance, in four dimensions quadric surfaces are an important obstruction, as discussed in this paper of mine.)
Various partial or complete results on the Kakeya problem over various fields have been obtained through route (a) or route (b). For instance, in 2000, Nets Katz, Izabella Laba and myself used route (a) to improve Wolff’s lower bound of 5/2 for Kakeya sets very slightly to 5/2 + ε for a small explicit absolute constant ε > 0 (for a weak notion of dimension, namely upper Minkowski dimension). In 2004, Bourgain, Katz, and myself established a sum-product estimate which (among other things) ruled out approximate intermediate-dimensional subrings of finite fields of prime order, and then pursued route (b) to obtain a corresponding improvement to the Kakeya conjecture over such fields. The analogous (discretised) sum-product estimate over the reals was established by Bourgain in 2003, which in principle would allow one to extend the result of Katz, Laba and myself to the strong Kakeya setting, but this has not been carried out in the literature. Finally, in 2009, Dvir used route (a) and introduced the polynomial method (as discussed previously here) to completely settle the Kakeya conjecture in finite fields.
Below the fold, I present a heuristic argument of Nets Katz and myself, which in principle would use route (b) to establish the full (strong) Kakeya conjecture. In broad terms, the strategy is as follows:
- Assume that the (strong) Kakeya conjecture fails, so that there are sets of the form in Conjecture 3 of dimension for some . Assume that is “optimal”, in the sense that is as large as possible.
- Use the optimality of (and suitable non-isotropic rescalings) to establish strong forms of standard structural properties expected of such sets , namely “stickiness”, “planiness”, “local graininess” and “global graininess” (we will roughly describe these properties below). Heuristically, these properties are constraining to “behave like” a putative Heisenberg group counterexample.
- By playing all these structural properties off of each other, show that can be parameterised locally by a one-dimensional set which generates a counterexample to Bourgain’s sum-product theorem. This contradiction establishes the Kakeya conjecture.
Nets and I have had an informal version of this argument for many years, but were never able to make a satisfactory theorem (or even a partial Kakeya result) out of it, because we could not rigorously establish anywhere near enough of the necessary structural properties (stickiness, planiness, etc.) on the optimal set for a large number of reasons (one of which being that we did not have a good notion of dimension that did everything that we wished to demand of it). However, there is beginning to be movement in these directions (e.g. in this recent result of Guth using the polynomial method obtaining a weak version of local graininess on certain Kakeya sets). In view of this (and given that neither Nets nor I have been actively working in this direction for some time now, due to many other projects), we’ve decided to distribute these ideas more widely than before, and in particular on this blog.
Roth’s theorem on arithmetic progressions asserts that every subset of the integers of positive upper density contains infinitely many arithmetic progressions of length three. There are many versions and variants of this theorem. Here is one of them:
Theorem 1 (Roth’s theorem) Let be a compact abelian group, with Haar probability measure , which is 2-divisible (i.e. the map x ↦ 2x is surjective) and let be a measurable subset of with for some . Then we have
where denotes the bound for some depending only on .
This theorem is usually formulated in the case that is a finite abelian group of odd order (in which case the result is essentially due to Meshulam) or more specifically a cyclic group of odd order (in which case it is essentially due to Varnavides), but is also valid for the more general setting of 2-divisible compact abelian groups, as we shall shortly see. One can be more precise about the dependence of the implied constant on , but to keep the exposition simple we will work at the qualitative level here, without trying at all to get good quantitative bounds. The theorem is also true without the 2-divisibility hypothesis, but the proof we will discuss runs into some technical issues due to the degeneracy of the shift in that case.
We can deduce Theorem 1 from the following more general Khintchine-type statement. Let denote the Pontryagin dual of a compact abelian group , that is to say the set of all continuous homomorphisms from to the (additive) unit circle . Thus is a discrete abelian group, and functions have a Fourier transform defined by
If the group is 2-divisible, then its Pontryagin dual is 2-torsion-free in the sense that the map ξ ↦ 2ξ is injective. For any finite set and any radius , define the Bohr set
where denotes the distance of to the nearest integer. We refer to the cardinality of as the rank of the Bohr set. We record a simple volume bound on Bohr sets:
Proof: We can cover the torus by translates of the cube . Then the sets form a cover of . But all of these sets lie in a translate of , and the claim then follows from the translation invariance of .
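As a sanity check on the flavour of this volume bound, here is a quick Python computation in a large cyclic group (the frequencies, radius, and the precise shape of the bound being tested are my own illustrative choices). The pigeonholing in the proof above gives a lower bound of roughly ρ^d for a rank-d Bohr set of radius ρ:

```python
# In G = Z/NZ with normalised counting measure, the rank-d Bohr set
#   B(S, rho) = { x in G : ||xi * x / N|| < rho for all xi in S }
# (where ||.|| is distance to the nearest integer) has measure at least
# about rho^d, by the covering argument in the proof above.
N = 10007               # a prime, standing in for a large cyclic group

def dist_to_int(t):
    return abs(t - round(t))

def bohr_set(S, rho):
    return [x for x in range(N)
            if all(dist_to_int(xi * x / N) < rho for xi in S)]

S = [1, 57, 2024]       # an arbitrary frequency set of rank 3
rho = 0.2
B = bohr_set(S, rho)
measure = len(B) / N    # compare with the lower bound rho**3 = 0.008
```

For generic frequencies the measure is in fact comparable to (2ρ)^d, so the lower bound from the covering argument is typically far from tight, but it is the bound that matters for the pigeonholing arguments below.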
The function should be viewed as an -normalised “tent function” cutoff to . Note from Lemma 2 that
We then have the following sharper version of Theorem 1:
where denotes the convolution operation
A variant of this result (expressed in the language of ergodic theory) appears in this paper of Bergelson, Host, and Kra; a combinatorial version of the Bergelson-Host-Kra result that is closer to Theorem 3 subsequently appeared in this paper of Ben Green and myself, but this theorem arguably appears implicitly in a much older paper of Bourgain. To see why Theorem 3 implies Theorem 1, we apply the theorem with and equal to a small multiple of to conclude that there is a Bohr set with and such that
Below the fold, we give a short proof of Theorem 3, using an “energy pigeonholing” argument that essentially dates back to the 1986 paper of Bourgain mentioned previously (not to be confused with a later 1999 paper of Bourgain on Roth’s theorem that was highly influential, for instance in emphasising the importance of Bohr sets). The idea is to use the pigeonhole principle to choose the Bohr set to capture all the “large Fourier coefficients” of , but such that a certain “dilate” of does not capture much more Fourier energy of than itself. The bound (3) may then be obtained through elementary Fourier analysis, without much need to explicitly compute things like the Fourier transform of an indicator function of a Bohr set. (However, the bound obtained by this argument is going to be quite poor – of tower-exponential type.) To do this we perform a structural decomposition of into “structured”, “small”, and “highly pseudorandom” components, as is common in the subject (e.g. in this previous blog post), but even though we crucially need to retain non-negativity of one of the components in this decomposition, we can avoid recourse to conditional expectation with respect to a partition (or “factor”) of the space, using instead convolution with one of the considered above to achieve a similar effect.
At the core of the subject now known as arithmetic combinatorics (and particularly the subfield of additive combinatorics) are the elementary sum set estimates (sometimes known as “Ruzsa calculus”) that relate the cardinality of various sum sets
and difference sets
as well as iterated sumsets such as , , and so forth. Here, are finite non-empty subsets of some additive group (classically one took or , but nowadays one usually considers more general additive groups). Some basic estimates in this vein are the following:
Lemma 1 (Ruzsa covering lemma) Let be finite non-empty subsets of . Then may be covered by at most translates of .
Proof: Consider a maximal set of disjoint translates of by elements . These translates have cardinality , are disjoint, and lie in , so there are at most of them. By maximality, for any , must intersect at least one of the selected , thus , and the claim follows.
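The covering lemma and its proof are easy to check by machine. The following Python snippet (the example sets are arbitrary choices of mine, and the statement being verified is the standard form of the lemma: B can be covered by at most |A+B|/|A| translates of A−A) runs the greedy argument from the proof verbatim:

```python
# Check the Ruzsa covering lemma: B is covered by at most |A + B| / |A|
# translates of A - A, with the translates produced as in the proof above.
def sumset(X, Y):
    return {x + y for x in X for y in Y}

def diffset(X, Y):
    return {x - y for x in X for y in Y}

A = {0, 1, 2, 3, 10}
B = {0, 5, 7, 12, 20, 31}

# Greedily select a maximal family of pairwise disjoint translates A + x.
X = []
for b in sorted(B):
    if all({a + b for a in A}.isdisjoint({a + x for a in A}) for x in X):
        X.append(b)

AmA = diffset(A, A)
covered = all(any(b - x in AmA for x in X) for b in B)   # B inside X + (A - A)
bound_ok = len(X) <= len(sumset(A, B)) / len(A)
```

Both checks are guaranteed by the proof: the disjoint translates each have cardinality |A| and sit inside A+B, and maximality forces every element of B into some translate of A−A.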
Lemma 2 (Ruzsa triangle inequality) Let be finite non-empty subsets of . Then .
Proof: Consider the addition map from to . Every element of has a preimage of this map of cardinality at least , thanks to the obvious identity for each . Since has cardinality , the claim follows.
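This too can be spot-checked numerically. The snippet below tests the standard form of the Ruzsa triangle inequality, |A| · |B − C| ≤ |B − A| · |A − C| (this choice of statement matches the proof sketched above; the random ranges are arbitrary), on a couple hundred random triples of small integer sets:

```python
import random

# Verify |A| * |B - C| <= |B - A| * |A - C| on random triples of sets.
rng = random.Random(0)

def diffset(X, Y):
    return {x - y for x in X for y in Y}

ok = True
for _ in range(200):
    A = set(rng.sample(range(30), 6))
    B = set(rng.sample(range(30), 6))
    C = set(rng.sample(range(30), 6))
    lhs = len(A) * len(diffset(B, C))
    rhs = len(diffset(B, A)) * len(diffset(A, C))
    ok = ok and (lhs <= rhs)
```

Of course the inequality is a theorem, so the check cannot fail; running it is merely a way to internalise which differences appear on each side.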
Such estimates (which are covered, incidentally, in Section 2 of my book with Van Vu) are particularly useful for controlling finite sets of small doubling, in the sense that for some bounded . (There are deeper theorems, most notably Freiman’s theorem, which give more control than what elementary Ruzsa calculus does, however the known bounds in the latter theorem are worse than polynomial in (although it is conjectured otherwise), whereas the elementary estimates are almost all polynomial in .)
However, there are some settings in which the standard sum set estimates are not quite applicable. One such setting is the continuous setting, where one is dealing with bounded open sets in an additive Lie group (e.g. or a torus ) rather than a finite setting. Here, one can largely replicate the discrete sum set estimates by working with a Haar measure in place of cardinality; this is the approach taken for instance in this paper of mine. However, there is another setting, which one might dub the “discretised” setting (as opposed to the “discrete” setting or “continuous” setting), in which the sets remain finite (or at least discretisable to be finite), but for which there is a certain amount of “roundoff error” coming from the discretisation. As a typical example (working now in a non-commutative multiplicative setting rather than an additive one), consider the orthogonal group of orthogonal matrices, and let be the matrices obtained by starting with all of the orthogonal matrices in and rounding each coefficient of each matrix in this set to the nearest multiple of , for some small . This forms a finite set (whose cardinality grows as like a certain negative power of ). In the limit , the set is not a set of small doubling in the discrete sense. However, is still close to in a metric sense, being contained in the -neighbourhood of . Another key example comes from graphs of maps from a subset of one additive group to another . If is “approximately additive” in the sense that for all , is close to in some metric, then might not have small doubling in the discrete sense (because could take a large number of values), but could be considered a set of small doubling in a discretised sense.
One would like to have a sum set (or product set) theory that can handle these cases, particularly in “high-dimensional” settings in which the standard methods of passing back and forth between continuous, discrete, or discretised settings behave poorly from a quantitative point of view due to the exponentially large doubling constant of balls. One way to do this is to impose a translation invariant metric on the underlying group (reverting back to additive notation), and replace the notion of cardinality by that of metric entropy. There are a number of almost equivalent ways to define this concept:
Definition 3 Let be a metric space, let be a subset of , and let be a radius.
- The packing number is the largest number of points one can pack inside such that the balls are disjoint.
- The internal covering number is the smallest number of points such that the balls cover .
- The external covering number is the smallest number of points such that the balls cover .
- The metric entropy is the largest number of points one can find in that are -separated, thus for all .
It is an easy exercise to verify the inequalities
for any , and that is non-increasing in and non-decreasing in for the three choices (but monotonicity in can fail for !). It turns out that the external covering number is slightly more convenient than the other notions of metric entropy, so we will abbreviate . The cardinality can be viewed as the limit of the entropies as .
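The easy comparisons between these four quantities can be illustrated in a few lines of Python. Here I take the metric space to be the real line (the conventions, e.g. strict versus non-strict inequalities, are my own choices for this sketch) and use the classical greedy observation that a maximal ε-separated subset is automatically an internal ε-cover:

```python
# Greedily build a maximal eps-separated subset of a finite set E on the
# real line; by maximality it simultaneously witnesses the "separated set"
# entropy from below and the internal covering number from above.
def maximal_separated(E, eps):
    chosen = []
    for x in sorted(E):
        if all(abs(x - y) >= eps for y in chosen):
            chosen.append(x)
    return chosen

E = [i / 1000 for i in range(1001)]     # a fine grid filling [0, 1]
eps = 0.1
S = maximal_separated(E, eps)

# Every point of E that was not chosen was rejected because it lies within
# eps of an already chosen point, so the eps-balls around S cover E.
is_cover = all(any(abs(x - s) < eps for s in S) for x in E)
```

For the unit interval this produces on the order of 1/ε points, consistent with the heuristic that all four entropy notions agree up to bounded factors and a bounded dilation of the radius.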
If we have the bounded doubling property that is covered by translates of for each , and one has a Haar measure on which assigns a positive finite mass to each ball, then any of the above entropies is comparable to , as can be seen by simple volume packing arguments. Thus in the bounded doubling setting one can usually use the measure-theoretic sum set theory to derive entropy-theoretic sumset bounds (see e.g. this paper of mine for an example of this). However, it turns out that even in the absence of bounded doubling, one still has an entropy analogue of most of the elementary sum set theory, except that one has to accept some degradation in the radius parameter by some absolute constant. Such losses can be acceptable in applications in which the underlying sets are largely “transverse” to the balls , so that the -entropy of is largely independent of ; this is a situation which arises in particular in the case of graphs discussed above, if one works with “vertical” metrics whose balls extend primarily in the vertical direction. (I hope to present a specific application of this type here in the near future.)
Henceforth we work in an additive group equipped with a translation-invariant metric . (One can also generalise things slightly by allowing the metric to attain the values or , without changing much of the analysis below.) By the Heine-Borel theorem, any precompact set will have finite entropy for any . We now have analogues of the two basic Ruzsa lemmas above:
Proof: Let be a maximal set of points such that the sets are all disjoint. Then the sets are disjoint in and have entropy , and furthermore any ball of radius can intersect at most one of the . We conclude that , so . If , then must intersect one of the , so , and the claim follows.
Proof: Consider the addition map from to . The domain may be covered by product balls . Every element of has a preimage of this map which projects to a translate of , and thus must meet at least of these product balls. However, if two elements of are separated by a distance of at least , then no product ball can intersect both preimages. We thus see that , and the claim follows.
Below the fold we will record some further metric entropy analogues of sum set estimates (basically redoing much of Chapter 2 of my book with Van Vu). Unfortunately there does not seem to be a direct way to abstractly deduce metric entropy results from their sum set analogues (basically due to the failure of a certain strong version of Freiman’s theorem, as discussed in this previous post); nevertheless, the proofs of the discrete arguments are elementary enough that they can be modified with a small amount of effort to handle the entropy case. (In fact, there should be a very general model-theoretic framework in which both the discrete and entropy arguments can be processed in a unified manner; see this paper of Hrushovski for one such framework.)
It is also likely that many of the arguments here extend to the non-commutative setting, but for simplicity we will not pursue such generalisations here.
(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)
Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.
The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:
| Discrete theory | Continuous theory | Limiting procedure |
| --- | --- | --- |
| Ramsey theory | Topological dynamics | Compactness |
| Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle |
| Graph/hypergraph regularity | Measure theory | Graph limits |
| Polynomial regularity | Linear algebra | Ultralimits |
| Structural decompositions | Hilbert space geometry | Ultralimits |
| Fourier analysis | Spectral theory | Direct and inverse limits |
| Quantitative algebraic geometry | Algebraic geometry | Schemes |
| Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits |
| Approximate group theory | Topological group theory | Model theory |
As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:
- Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects in a common space , which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object , which remains in the same space, and is “close” to many of the original objects with respect to the given metric or topology.
- Categorical limits. These notions of limit are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects $x_n$ in a category $X$, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit $\varinjlim x_n$ or the inverse limit $\varprojlim x_n$ of these objects, which is another object in the same category $X$, and is connected to the original objects by various morphisms.
- Logical limits. These notions of limit are commonly used by model theorists. Here, one starts with a sequence $x_n$ of objects or a sequence $X_n$ of spaces, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, the $X_n$ might be groups and the $x_n$ might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object $x_\infty$ or a new space $X_\infty$, which is still a model of the same language (e.g. if the spaces $X_n$ were all groups, then the limiting space $X_\infty$ will also be a group), and is "close" to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space will also be true for many of the original objects or spaces, and conversely. (For instance, if $X_\infty$ is an abelian group, then the $X_n$ will also be abelian groups for many $n$.)
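To fix ideas, the ultraproduct construction alluded to above can be written down in a few lines; this is the standard textbook definition, with $\alpha$ denoting a nonprincipal ultrafilter on the natural numbers:

```latex
% Given sets X_1, X_2, X_3, \dots and a nonprincipal ultrafilter \alpha on \mathbb{N},
% the ultraproduct is the space of sequences, identified when they agree
% \alpha-almost everywhere:
\prod_{n \to \alpha} X_n \;:=\; \Big( \prod_{n \in \mathbb{N}} X_n \Big) \Big/ \sim,
\qquad
(x_n)_{n \in \mathbb{N}} \sim (y_n)_{n \in \mathbb{N}}
\;\iff\;
\{\, n \in \mathbb{N} : x_n = y_n \,\} \in \alpha.
```

When all the $X_n$ are equal to a single space $X$, this construction specialises to the ultrapower of $X$.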
The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a "universal" limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects $x_n$ to all lie in a common space in order to form an ultralimit $\lim_{n \to \alpha} x_n$; they are permitted to lie in different spaces $X_n$; this is more natural in many discrete contexts, e.g. when considering graphs on $n$ vertices in the limit when $n$ goes to infinity. Also, no convergence properties on the $x_n$ are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces $X_n$ involved are required in order to construct the ultraproduct.
With so few requirements on the objects $x_n$ or spaces $X_n$, the ultraproduct construction is necessarily a very "soft" one. Nevertheless, the construction has two very useful properties which make it particularly suited to the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś's theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the $x_n$ will be exactly obeyed by the limit object $\lim_{n \to \alpha} x_n$; in particular, one can often take a discrete sequence of "partial counterexamples" to some assertion, and produce a continuous "complete counterexample" to that same assertion via an ultraproduct construction; taking contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is closely analogous to compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are "complete" or "compact" in various senses. This is particularly useful for upgrading qualitative (or "pointwise") bounds to quantitative (or "uniform") bounds, more or less "for free", thus significantly reducing the burden of "epsilon management" (although the price one pays is that one needs to pay attention to which mathematical objects of study are "standard" and which are "nonstandard"). To achieve this compactness or completeness, one sometimes has to restrict to the "bounded" portion of the ultraproduct, and it is often also convenient to quotient out the "infinitesimal" portion in order to complement these compactness properties with a matching "Hausdorff" property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.
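In symbols, Łoś's theorem (stated here in its standard model-theoretic form) says that truth in the ultraproduct is governed by $\alpha$-largeness of the set of indices at which the statement holds:

```latex
% For any first-order formula \phi and elements x^{(1)}, \dots, x^{(k)} of the
% ultraproduct, represented by sequences (x^{(1)}_n), \dots, (x^{(k)}_n):
\prod_{n \to \alpha} X_n \models \phi\big( x^{(1)}, \dots, x^{(k)} \big)
\;\iff\;
\big\{\, n \in \mathbb{N} : X_n \models \phi\big( x^{(1)}_n, \dots, x^{(k)}_n \big) \,\big\} \in \alpha.
```

The "asymptotically obeyed implies exactly obeyed" heuristic is the forward direction of this equivalence, applied to an $\alpha$-large set of indices.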
Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.
Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.
Let $F$ be a field. A definable set over $F$ is a set of the form

$\displaystyle \{ x \in F^n: P(x) \hbox{ is true} \} \ \ \ \ \ (1)$

where $n$ is a natural number, and $P(x)$ is a predicate involving the ring operations $+, \times$ of $F$, the equality symbol $=$, an arbitrary number of constants and free variables in $F$, the quantifiers $\forall, \exists$, boolean operators such as $\vee, \wedge, \neg$, and parentheses and colons, where the quantifiers are always understood to be over the field $F$. Thus, for instance, the set of quadratic residues

$\displaystyle \{ x \in F: \exists y: x = y \times y \}$
is definable over $F$, and any algebraic variety over $F$ is also a definable set over $F$. Henceforth we will abbreviate "definable over $F$" simply as "definable".
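As a concrete (if low-tech) illustration of this definable set, one can enumerate the quadratic residues over a small prime field by brute force; the function name below is my own invention for illustration, not from any library:

```python
def quadratic_residues(p):
    """Enumerate the definable set {x in F_p : exists y in F_p, x = y*y}."""
    return {(y * y) % p for y in range(p)}

# Over F_7: 0^2=0, 1^2=1, 2^2=4, 3^2=2, and 4,5,6 repeat these squares.
print(sorted(quadratic_residues(7)))  # [0, 1, 2, 4]
```

Note that the brute-force search over $y$ plays the role of the existential quantifier in the defining predicate.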
If $F$ is a finite field, then every subset of $F^n$ is definable, since finite sets are automatically definable. However, we can obtain a more interesting notion in this case by restricting the complexity of a definable set. We say that $E \subset F^n$ is a definable set of complexity at most $M$ if $n \le M$, and $E$ can be written in the form (1) for some predicate $P$ of length at most $M$ (where all operators, quantifiers, relations, variables, constants, and punctuation symbols are considered to have unit length). Thus, for instance, a hypersurface in $n$ dimensions of degree $d$ would be a definable set of complexity $O_{n,d}(1)$. We will then be interested in the regime where the complexity remains bounded, but the field size (or field characteristic) becomes large.
In a recent paper, I established (in the large characteristic case) the following regularity lemma for dense definable graphs, which significantly strengthens the Szemerédi regularity lemma in this context, by eliminating “bad” pairs, giving a polynomially strong regularity, and also giving definability of the cells:
Lemma 1 (Algebraic regularity lemma) Let $F$ be a finite field, let $V, W$ be definable non-empty sets of complexity at most $M$, and let $E \subset V \times W$ also be definable with complexity at most $M$. Assume that the characteristic of $F$ is sufficiently large depending on $M$. Then we may partition $V = V_1 \cup \dots \cup V_m$ and $W = W_1 \cup \dots \cup W_n$ with $m, n = O_M(1)$, with the following properties:
My original proof of this lemma was quite complicated, based on an explicit calculation of the "square" of $E$ using the Lang-Weil bound and some facts about the étale fundamental group. It was the reliance on the latter which was the main reason why the result was restricted to the large characteristic setting. (I then applied this lemma to classify expanding polynomials over finite fields of large characteristic, but I will not discuss these applications here; see this previous blog post for more discussion.)
Recently, Anand Pillay and Sergei Starchenko (and independently, Udi Hrushovski) have observed that the theory of the étale fundamental group is not necessary in the argument, and the lemma can in fact be deduced from quite general model-theoretic techniques, in particular using (a local version of) the concept of stability. One of the consequences of this new proof of the lemma is that the hypothesis of large characteristic can be omitted; the lemma is now known to be valid for arbitrary finite fields (although its content is trivial if the field is not sufficiently large depending on the complexity $M$).
Inspired by this, I decided to see if I could find yet another proof of the algebraic regularity lemma, again avoiding the theory of the étale fundamental group. It turns out that the spectral proof of the Szemerédi regularity lemma (discussed in this previous blog post) adapts very nicely to this setting. The key fact needed about definable sets over finite fields is that their cardinality takes on an essentially discrete set of values. More precisely, we have the following fundamental result of Chatzidakis, van den Dries, and Macintyre:
- (Discretised cardinality) If $E$ is a non-empty definable set of complexity at most $M$ over a finite field $F$ of cardinality $q$, then one has

$\displaystyle |E| = c q^d + O_M( q^{d - 1/2} ) \ \ \ \ \ (3)$

where $d$ is a natural number, and $c$ is a positive rational number with numerator and denominator $O_M(1)$. In particular, we have $|E| \asymp_M q^d$.
- (Definable cardinality) Assume $q$ is sufficiently large depending on $M$. If $V$, $W$, and $E \subset V \times W$ are definable sets of complexity at most $M$, so that each fibre $E_v := \{ w \in W: (v,w) \in E \}$ can be viewed as a definable subset of $W$ that is definably parameterised by $v \in V$, then for each natural number $d$ and each positive rational $c$ with numerator and denominator $O_M(1)$, the set

$\displaystyle \{ v \in V: |E_v| = c q^d + O_M( q^{d-1/2} ) \} \ \ \ \ \ (4)$

is definable with complexity $O_M(1)$, where the implied constants in the asymptotic notation used to define (4) are the same as those appearing in (3). (Informally: the "dimension" and "measure" of $E_v$ depend definably on $v$.)
We will take this proposition as a black box; a proof can be obtained by combining the description of definable sets over pseudofinite fields (discussed in this previous post) with the Lang-Weil bound (discussed in this previous post). (The former fact is phrased using nonstandard analysis, but one can use standard compactness-and-contradiction arguments to convert such statements to statements in standard analysis, as discussed in this post.)
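As a quick numerical sanity check of the discretised-cardinality shape (an illustration of mine, not from the paper): for odd prime $p$, the quadratic residues in $F_p$ number exactly $(p+1)/2$, matching the form $c q^d + O(q^{d-1/2})$ with $c = 1/2$ and $d = 1$:

```python
def residue_count(p):
    """Count the quadratic residues {y*y mod p : y in F_p}, including 0."""
    return len({(y * y) % p for y in range(p)})

for p in [101, 1009, 10007]:
    c, d = 0.5, 1
    deviation = abs(residue_count(p) - c * p ** d)
    # the deviation is exactly 1/2 here, far inside the O(q^{d-1/2}) window
    assert deviation <= p ** (d - 0.5)
print("cardinality check passed")
```

Of course, for this particular set an exact count is available; the point of the proposition is that the same shape persists for all definable sets of bounded complexity.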
The above proposition places severe restrictions on the cardinality of definable sets; for instance, it shows that one cannot have a definable set of complexity at most $M$ and cardinality $q^{1/2}$, if $q$ is sufficiently large depending on $M$. If $E \subset V$ are definable sets of complexity at most $M$, it shows that $|E| = (c + O_M(q^{-1/2})) |V|$ for some rational $c \geq 0$ with numerator and denominator $O_M(1)$; furthermore, if $c = 0$, we may improve this bound to $|E| = O_M( q^{-1} |V| )$. In particular, we obtain the following "self-improving" properties:
- If $E \subset V$ are definable of complexity at most $M$ and $|E| \leq \epsilon |V|$ for some $\epsilon > 0$, then (if $\epsilon$ is sufficiently small depending on $M$ and $q$ is sufficiently large depending on $M$) this forces $|E| = O_M( q^{-1} |V| )$.
- If $E \subset V$ are definable of complexity at most $M$ and $\big| |E| - c |V| \big| \leq \epsilon |V|$ for some $\epsilon > 0$ and positive rational $c$, then (if $\epsilon$ is sufficiently small depending on $M$ and $c$, and $q$ is sufficiently large depending on $M$ and $c$) this forces $|E| = (c + O_M( q^{-1/2} )) |V|$.
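Roughly speaking, the mechanism behind these self-improvements is that the relative density of one definable set inside another is quantised: by the discretised cardinality property, it must lie within $O_M(q^{-1/2})$ of a rational of bounded height, so a density that is merely small is forced all the way down to the next admissible value:

```latex
% Densities of definable sets are pinned to rationals of bounded height:
\frac{|E|}{|V|} \;=\; c + O_M\big( q^{-1/2} \big),
\qquad
c \in \{0\} \cup \Big\{ \tfrac{a}{b} : a, b = O_M(1) \Big\};
% if |E|/|V| falls below every admissible positive c (minus the error term),
% the only remaining possibility is c = 0.
```

In other words, there is a gap between density zero and the smallest admissible positive density, and small enough sets are pushed into that gap.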
It turns out that these self-improving properties can be applied to the coefficients of various matrices (basically powers of the adjacency matrix associated to $E$) that arise in the spectral proof of the regularity lemma to significantly improve the bounds in that lemma; we describe how this is done below the fold. We also make some connections to the stability-based proofs of Pillay-Starchenko and Hrushovski.