
Let $\overline{\mathbf{Q}}$ be the algebraic closure of $\mathbf{Q}$, that is to say the field of algebraic numbers. We fix an embedding of $\overline{\mathbf{Q}}$ into $\mathbf{C}$, giving rise to a complex absolute value $|\alpha|$ for algebraic numbers $\alpha \in \overline{\mathbf{Q}}$.

Let $\alpha \in \overline{\mathbf{Q}}$ be of degree $d \geq 2$, so that $\alpha$ is irrational. A classical theorem of Liouville gives the quantitative bound

$\displaystyle \left|\alpha - \frac{p}{q}\right| \geq \frac{c}{q^d} \ \ \ \ \ (1)$

for the extent to which $\alpha$ fails to be approximated by rational numbers $p/q$, where $c > 0$ depends on $\alpha$ but not on $p/q$. Indeed, if one lets $\alpha = \alpha_1, \alpha_2, \ldots, \alpha_d$ be the Galois conjugates of $\alpha$, then the quantity $\prod_{i=1}^d |q\alpha_i - p|$ is a non-zero natural number divided by a constant, and so we have the trivial lower bound

$\displaystyle \prod_{i=1}^d |q\alpha_i - p| \gg 1$

from which the bound (1) easily follows. A well known corollary of the bound (1) is that Liouville numbers are automatically transcendental.
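To make the conjugate argument concrete, here is a quick numerical sketch (my own illustration, not part of the original argument) for the degree $d = 2$ number $\sqrt{2}$: the quantity $|2q^2 - p^2|$ is a nonzero integer, hence at least $1$, which forces an effective bound of the shape $|\sqrt{2} - p/q| \geq c/q^2$, with $c = 1/4$ safely admissible.

```python
import math

ALPHA = math.sqrt(2)  # algebraic of degree 2

def liouville_defect(q: int) -> float:
    """Return q^2 |alpha - p/q| for the best numerator p at denominator q."""
    p = round(ALPHA * q)
    # Liouville's conjugate product |2 q^2 - p^2| is a nonzero integer:
    assert abs(2 * q * q - p * p) >= 1
    return q * q * abs(ALPHA - p / q)

# The defect never drops below an effective constant c, consistent with (1):
worst = min(liouville_defect(q) for q in range(1, 301))
print(worst)  # stays bounded away from 0
```

(The true infimum here is $4(3/2 - \sqrt{2}) \approx 0.34$, attained at the convergent $3/2$; any $c$ below this works in the range searched.)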

The famous theorem of Thue, Siegel and Roth improves the bound (1) to

$\displaystyle \left|\alpha - \frac{p}{q}\right| \geq \frac{c}{q^{2+\varepsilon}} \ \ \ \ \ (2)$

for any $\varepsilon > 0$ and rationals $p/q$, where $c > 0$ depends on $\alpha$ and $\varepsilon$ but not on $p/q$. Apart from the $\varepsilon$ in the exponent and the implied constant, this bound is optimal, as can be seen from Dirichlet's theorem. This theorem is a good example of the *ineffectivity phenomenon* that affects a large portion of modern number theory: the constant $c$ is known to be positive, but there is no explicit bound for it in terms of $\varepsilon$ and the coefficients of the polynomial defining $\alpha$ (in contrast to (1), for which an effective bound may be easily established). This is ultimately due to the reliance on the “dueling conspiracy” (or “repulsion phenomenon”) strategy. We do not as yet have a good way to rule out *one* counterexample to (2), in which $p/q$ is far closer to $\alpha$ than (2) permits; however we can rule out *two* such counterexamples, by playing them off of each other.
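The sharpness coming from Dirichlet's theorem can be seen numerically (a hedged sketch, not from the original post): the continued fraction convergents $p_n/q_n$ of $\sqrt{2} = [1; 2, 2, 2, \ldots]$, built by the standard recurrence, all satisfy $|\sqrt{2} - p_n/q_n| < 1/q_n^2$, so the exponent $2$ in (2) cannot be raised.

```python
import math

# Convergents of sqrt(2) = [1; 2, 2, 2, ...] via the standard recurrence
# p_n = 2 p_{n-1} + p_{n-2}, q_n = 2 q_{n-1} + q_{n-2}.
p_prev, q_prev = 1, 0   # formal convergent p_{-1}/q_{-1}
p, q = 1, 1             # first convergent 1/1
convergents = [(p, q)]
for _ in range(14):
    p, p_prev = 2 * p + p_prev, p
    q, q_prev = 2 * q + q_prev, q
    convergents.append((p, q))

# Each convergent beats the Dirichlet quality 1/q^2:
for p, q in convergents:
    assert abs(math.sqrt(2) - p / q) < 1 / (q * q)
```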

A powerful strengthening of the Thue-Siegel-Roth theorem is given by the *subspace theorem*, first proven by Schmidt and then generalised further by several authors. To motivate the theorem, first observe that the Thue-Siegel-Roth theorem may be rephrased as a bound of the form

for any algebraic numbers with and linearly independent (over the algebraic numbers), and any and , with the exception when or are rationally dependent (i.e. one is a rational multiple of the other), in which case one has to remove some lines (i.e. subspaces in ) of rational slope from the space of pairs to which the bound (3) does not apply (namely, those lines for which the left-hand side vanishes). Here can depend on but not on . More generally, we have

Theorem 1 (Schmidt subspace theorem) Let $n$ be a natural number. Let $L_1, \ldots, L_n: \overline{\mathbf{Q}}^n \rightarrow \overline{\mathbf{Q}}$ be linearly independent linear forms. Then for any $\varepsilon > 0$, one has the bound

$\displaystyle \prod_{i=1}^n |L_i(x)| \geq \frac{c}{\|x\|^{\varepsilon}}$

for all $x = (x_1,\ldots,x_n) \in \mathbf{Z}^n$, outside of a finite number of proper subspaces of $\mathbf{Q}^n$, where

$\displaystyle \|x\| := \max(|x_1|, \ldots, |x_n|)$

and $c > 0$ depends on $\varepsilon$, $n$ and the $L_1, \ldots, L_n$, but is independent of $x$.

Being a generalisation of the Thue-Siegel-Roth theorem, it is unsurprising that the known proofs of the subspace theorem are also ineffective with regards to the constant $c$. (However, the number of exceptional subspaces may be bounded effectively; cf. the situation with the Skolem-Mahler-Lech theorem, discussed in this previous blog post.) Once again, the lower bound here is basically sharp except for the $\|x\|^{\varepsilon}$ factor and the implied constant: a simple volume packing argument (the same one used to prove the Dirichlet approximation theorem) shows that for any sufficiently large $N$, one can find integers $x_1,\ldots,x_n$, not all zero, with $\|x\|$ at least $N$, such that

$\displaystyle \prod_{i=1}^n |L_i(x)| \ll 1.$

Thus one can make $\prod_{i=1}^n |L_i(x)|$ comparable to $1$ in many different ways.

There are important generalisations of the subspace theorem to other number fields than the rationals (and to other valuations than the Archimedean valuation ); we will develop one such generalisation below.

The subspace theorem is one of many *finiteness theorems* in Diophantine geometry; in this case, it is the number of exceptional subspaces which is finite. It turns out that finiteness theorems are very compatible with the language of nonstandard analysis. (See this previous blog post for a review of the basics of nonstandard analysis, and in particular for the nonstandard interpretation of asymptotic notation such as and .) The reason for this is that a standard set is finite if and only if it contains no strictly nonstandard elements (that is to say, elements of ). This makes for a clean formulation of finiteness theorems in the nonstandard setting. For instance, the standard form of Bezout’s theorem asserts that if are coprime polynomials over some field, then the curves and intersect in only finitely many points. The nonstandard version of this is then

Theorem 2 (Bezout’s theorem, nonstandard form)Let be standard coprime polynomials. Then there are no strictly nonstandard solutions to .

Now we reformulate Theorem 1 in nonstandard language. We need a definition:

Definition 3 (General position)Let be nested fields. A point in is said to be in-general positionif it is not contained in any hyperplane of definable over , or equivalently if one hasfor any .

Theorem 4 (Schmidt subspace theorem, nonstandard version)Let be a standard natural number. Let be linearly independent standard linear forms. Let be a tuple of nonstandard integers which is in -general position (in particular, this forces to be strictly nonstandard). Then one haswhere we extend from to (and also similarly extend from to ) in the usual fashion.

Observe that (as is usual when translating to nonstandard analysis) some of the epsilons and quantifiers that are present in the standard version become hidden in the nonstandard framework, being moved inside concepts such as “strictly nonstandard” or “general position”. We remark that as is in -general position, it is also in -general position (as an easy Galois-theoretic argument shows), and the requirement that the are linearly independent is thus equivalent to being -linearly independent.

Exercise 1Verify that Theorem 1 and Theorem 4 are equivalent. (Hint:there are only countably many proper subspaces of .)

We will not prove the subspace theorem here, but instead focus on a particular application of the subspace theorem, namely to counting integer points on curves. In this paper of Corvaja and Zannier, the subspace theorem was used to give a new proof of the following basic result of Siegel:

Theorem 5 (Siegel’s theorem on integer points)Let be an irreducible polynomial of two variables, such that the affine plane curve either has genus at least one, or has at least three points on the line at infinity, or both. Then has only finitely many integer points .

This is a finiteness theorem, and as such may be easily converted to a nonstandard form:

Theorem 6 (Siegel’s theorem, nonstandard form)Let be a standard irreducible polynomial of two variables, such that the affine plane curve either has genus at least one, or has at least three points on the line at infinity, or both. Then does not contain any strictly nonstandard integer points .

Note that Siegel’s theorem can fail for genus zero curves that only meet the line at infinity at just one or two points; the key examples here are the graphs for a polynomial , and the Pell equation curves . Siegel’s theorem can be compared with the more difficult theorem of Faltings, which establishes finiteness of rational points (not just integer points), but now needs the stricter requirement that the curve has genus at least two (to avoid the additional counterexample of elliptic curves of positive rank, which have infinitely many rational points).
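As a concrete illustration of the Pell counterexample (a sketch of mine, not from the post): the curve $x^2 - 2y^2 = 1$ has genus zero and only two points at infinity, and indeed carries infinitely many integer points, generated from the fundamental solution $(3,2)$ by multiplication by the fundamental unit $3 + 2\sqrt{2}$.

```python
# The Pell curve x^2 - 2 y^2 = 1 has infinitely many integer points:
# starting from the fundamental solution (3, 2), multiplying x + y*sqrt(2)
# by the unit 3 + 2*sqrt(2) gives the map (x, y) -> (3x + 4y, 2x + 3y),
# which produces a new, larger solution each time.
x, y = 3, 2
points = []
for _ in range(10):
    points.append((x, y))
    assert x * x - 2 * y * y == 1
    x, y = 3 * x + 4 * y, 2 * x + 3 * y
```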

The standard proofs of Siegel’s theorem rely on a combination of the Thue-Siegel-Roth theorem and a number of results on abelian varieties (notably the Mordell-Weil theorem). The Corvaja-Zannier argument rebalances the difficulty of the argument by replacing the Thue-Siegel-Roth theorem by the more powerful subspace theorem (in fact, they need one of the stronger versions of this theorem alluded to earlier), while greatly reducing the reliance on results on abelian varieties. Indeed, for curves with three or more points at infinity, no theory from abelian varieties is needed at all, while for the remaining cases, one mainly needs the existence of the Abel-Jacobi embedding, together with a relatively elementary theorem of Chevalley-Weil which is used in the proof of the Mordell-Weil theorem, but is significantly easier to prove.

The Corvaja-Zannier argument (together with several further applications of the subspace theorem) is presented nicely in this Bourbaki exposé of Bilu. To establish the theorem in full generality requires a certain amount of algebraic number theory machinery, such as the theory of valuations on number fields, or of relative discriminants between such number fields. However, the basic ideas can be presented without much of this machinery by focusing on simple special cases of Siegel’s theorem. For instance, we can handle irreducible cubics that meet the line at infinity at exactly three points:

Theorem 7 (Siegel’s theorem with three points at infinity)Siegel’s theorem holds when the irreducible polynomial takes the formfor some quadratic polynomial and some distinct algebraic numbers .

*Proof:* We use the nonstandard formalism. Suppose for sake of contradiction that we can find a strictly nonstandard integer point on a curve of the indicated form. As this point is infinitesimally close to the line at infinity, must be infinitesimally close to one of ; without loss of generality we may assume that is infinitesimally close to .

We now use a version of the polynomial method, to find some polynomials of controlled degree that vanish to high order on the “arm” of the cubic curve that asymptotes to . More precisely, let be a large integer (actually will already suffice here), and consider the -vector space of polynomials of degree at most , and of degree at most in the variable; this space has dimension . Also, as one traverses the arm of , any polynomial in grows at a rate of at most , that is to say has a pole of order at most at the point at infinity . By performing Laurent expansions around this point (which is a non-singular point of , as the are assumed to be distinct), we may thus find a basis of , with the property that has a pole of order at most at for each .

From the control of the pole at , we have

for all . The exponents here become negative for , and on multiplying them all together we see that

This exponent is negative for large enough (or just take ). If we expand

for some algebraic numbers , then we thus have

for some standard . Note that the -dimensional vectors are linearly independent in , because the are linearly independent in . Applying the Schmidt subspace theorem in the contrapositive, we conclude that the -tuple is not in -general position. That is to say, one has a non-trivial constraint of the form

for some standard rational coefficients , not all zero. But, as is irreducible and cubic in , it has no common factor with the standard polynomial , so by Bezout’s theorem (Theorem 2) the constraint (4) only has standard solutions, contradicting the strictly nonstandard nature of .

Exercise 2Rewrite the above argument so that it makes no reference to nonstandard analysis. (In this case, the rewriting is quite straightforward; however, there will be a subsequent argument in which the standard version is significantly messier than the nonstandard counterpart, which is the reason why I am working with the nonstandard formalism in this blog post.)

A similar argument works for higher degree curves that meet the line at infinity in three or more points, though if the curve has singularities at infinity then it becomes convenient to rely on the Riemann-Roch theorem to control the dimension of the analogue of the space . Note that when there are only two or fewer points at infinity, though, one cannot get the negative exponent of needed to usefully apply the subspace theorem. To deal with this case we require some additional tricks. For simplicity we focus on the case of Mordell curves, although it will be convenient to work with more general number fields than the rationals:

Theorem 8 (Siegel’s theorem for Mordell curves)Let be a non-zero integer. Then there are only finitely many integer solutions to . More generally, for any number field , and any nonzero , there are only finitely many algebraic integer solutions to , where is the ring of algebraic integers in .

Again, we will establish the nonstandard version. We need some additional notation:

Definition 9We define an almost rational integerto be a nonstandard such that for some standard positive integer , and write for the -algebra of almost rational integers.If is a standard number field, we define an almost -integerto be a nonstandard such that for some standard positive integer , and write for the -algebra of almost -integers.We define an almost algebraic integerto be a nonstandard such that is a nonstandard algebraic integer for some standard positive integer , and write for the -algebra of almost algebraic integers.

Theorem 10 (Siegel for Mordell, nonstandard version)Let be a non-zero standard algebraic number. Then the curve does not contain any strictly nonstandard almost algebraic integer point.

Another way of phrasing this theorem is that if are strictly nonstandard almost algebraic integers, then is either strictly nonstandard or zero.

Exercise 3Verify that Theorem 8 and Theorem 10 are equivalent.

Due to all the ineffectivity, our proof does not supply any bound on the solutions $x, y$ in terms of $k$, even if one removes all references to nonstandard analysis. It is a conjecture of Hall (a special case of the notorious ABC conjecture) that one has the bound $|x| \ll |k|^{2+o(1)}$ for all solutions (or equivalently $|y| \ll |k|^{3+o(1)}$), but even the weaker conjecture that $x, y$ are of polynomial size in $k$ is open. (The best known bounds are of exponential nature, and are proven using a version of Baker’s method: see for instance this text of Sprindzuk.)
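For a single Mordell curve one can at least verify the finiteness empirically; the following brute-force sketch (mine, not part of the argument) searches the case $k = -2$ of $y^2 = x^3 + k$, where the only integer points are Fermat's classical pair $(3, \pm 5)$.

```python
import math

# Brute-force search for integer points on the Mordell curve y^2 = x^3 - 2.
# Siegel's theorem guarantees finiteness; the only integer solutions are
# (3, 5) and (3, -5), a fact going back to Fermat.
solutions = []
for x in range(2, 10_000):       # need x^3 - 2 >= 0, hence x >= 2
    rhs = x ** 3 - 2
    y = math.isqrt(rhs)          # integer square root (exact for big ints)
    if y * y == rhs:
        solutions.extend([(x, y), (x, -y)])
```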

A direct repetition of the arguments used to prove Theorem 7 will not work here, because the Mordell curve only hits the line at infinity at one point, . To get around this we will exploit the fact that the Mordell curve is an elliptic curve and thus has a group law on it. We will then divide all the integer points on this curve by two; as elliptic curves have four 2-torsion points, this will end up placing us in a situation like Theorem 7, with four points at infinity. However, there is an obstruction: it is not obvious that dividing an integer point on the Mordell curve by two will produce another integer point. However, this is essentially true (after enlarging the ring of integers slightly) thanks to a general principle of Chevalley and Weil, which can be worked out explicitly in the case of division by two on Mordell curves by relatively elementary means (relying mostly on unique factorisation of ideals of algebraic integers). We give the details below the fold.

As laid out in the foundational work of Kolmogorov, a *classical probability space* (or probability space for short) is a triplet , where is a set, is a -algebra of subsets of , and is a countably additive probability measure on . Given such a space, one can form a number of interesting function spaces, including

- the (real) Hilbert space of square-integrable functions , modulo -almost everywhere equivalence, and with the positive definite inner product ; and
- the unital commutative Banach algebra of essentially bounded functions , modulo -almost everywhere equivalence, with defined as the essential supremum of .

There is also a trace $\tau: L^\infty \rightarrow \mathbf{R}$ on $L^\infty$ defined by integration: $\tau(f) := \int_\Omega f\, d\mu$.
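As a toy illustration of these function spaces (a sketch with a hypothetical three-point probability space, not from the post): on a finite $\Omega$ every function lies in both $L^2$ and $L^\infty$, the trace is just the expectation, and the $L^2$ norm is controlled by the essential supremum.

```python
# Toy classical probability space: Omega = {0, 1, 2} with measure mu.
mu = {0: 0.5, 1: 0.3, 2: 0.2}
assert abs(sum(mu.values()) - 1.0) < 1e-12  # total mass 1

def trace(f):
    """tau(f) = integral of f with respect to mu, i.e. the expectation."""
    return sum(f[w] * mu[w] for w in mu)

def inner(f, g):
    """The L^2 inner product <f, g> = tau(f g)."""
    return trace({w: f[w] * g[w] for w in mu})

f = {0: 1.0, 1: -2.0, 2: 3.0}
one = {0: 1.0, 1: 1.0, 2: 1.0}

assert abs(trace(one) - 1.0) < 1e-12      # tau(1) = 1
assert inner(f, f) >= 0                   # positive semi-definiteness
sup_norm = max(abs(f[w]) for w in mu)     # the L^infinity norm
assert inner(f, f) <= sup_norm ** 2       # L^2 norm bounded by L^infinity
```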

One can form the category of classical probability spaces, by defining a morphism between probability spaces to be a function which is measurable (thus for all ) and measure-preserving (thus for all ).

Let us now abstract the algebraic features of these spaces as follows; for want of a better name, I will refer to this abstraction as an *algebraic probability space*; it is very similar to the non-commutative probability spaces studied in this previous post, except that these spaces are now commutative (and real).

Definition 1 An *algebraic probability space* is a pair $(\mathcal{A}, \tau)$ where

- $\mathcal{A}$ is a unital commutative real algebra;
- $\tau: \mathcal{A} \rightarrow \mathbf{R}$ is a linear map such that $\tau(1) = 1$ and $\tau(a^2) \geq 0$ for all $a \in \mathcal{A}$;
- every element $a$ of $\mathcal{A}$ is *bounded* in the sense that $\sup_k \tau(a^{2k})^{1/2k} < \infty$. (Technically, this isn’t an algebraic property, but I need it for technical reasons.)

A morphism is a homomorphism $\phi$ between the underlying algebras which is trace-preserving, in the sense that $\tau(\phi(a)) = \tau'(a)$ for all $a$ in the domain of $\phi$.

For want of a better name, I’ll denote the category of algebraic probability spaces as . One can view this category as the opposite category to that of (a subcategory of) the category of tracial commutative real algebras. One could emphasise this opposite nature by denoting the algebraic probability space as rather than ; another suggestive (but slightly inaccurate) notation, inspired by the language of schemes, would be rather than . However, we will not adopt these conventions here, and refer to algebraic probability spaces just by the pair .

By the previous discussion, we have a covariant functor that takes a classical probability space to its algebraic counterpart , with a morphism of classical probability spaces mapping to a morphism of the corresponding algebraic probability spaces by the formula

for . One easily verifies that this is a functor.

In this post I would like to describe a functor which partially inverts (up to natural isomorphism), that is to say a recipe for starting with an algebraic probability space and producing a classical probability space . This recipe is not new – it is basically the (commutative) Gelfand-Naimark-Segal construction (discussed in this previous post) combined with the Loomis-Sikorski theorem (discussed in this previous post). However, I wanted to put the construction in a single location for sake of reference. I also wanted to make the point that and are not complete inverses; there is a bit of information in the algebraic probability space (e.g. topological information) which is lost when passing back to the classical probability space. In some future posts, I would like to develop some ergodic theory using the algebraic foundations of probability theory rather than the classical foundations; this turns out to be convenient in the ergodic theory arising from nonstandard analysis (such as that described in this previous post), in which the groups involved are uncountable and the underlying spaces are not standard Borel spaces.

Let us describe how to construct the functor , with details postponed to below the fold.

- Starting with an algebraic probability space , form an inner product on by the formula , and also form the spectral radius .
- The inner product is clearly positive semi-definite. Quotienting out the null vectors and taking completions, we arrive at a real Hilbert space , to which the trace may be extended.
- Somewhat less obviously, the spectral radius is well-defined and gives a norm on . Taking limits of sequences in of bounded spectral radius gives us a subspace of that has the structure of a real commutative Banach algebra.
- The idempotents of the Banach algebra may be indexed by elements of an abstract -algebra .
- The Boolean algebra homomorphisms (or equivalently, the real algebra homomorphisms ) may be indexed by elements of a space .
- Let denote the -algebra on generated by the basic sets for every .
- Let be the -ideal of generated by the sets , where is a sequence with .
- One verifies that is isomorphic to . Using this isomorphism, the trace on can be used to construct a countably additive measure on . The classical probability space is then , and the abstract spaces may now be identified with their concrete counterparts , .
- Every algebraic probability space morphism generates a classical probability morphism via the formula
using a pullback operation on the abstract -algebras that can be defined by density.

Remark 1The classical probability space constructed by the functor has some additional structure; namely is a -Stone space (a Stone space with the property that the closure of any countable union of clopen sets is clopen), is the Baire -algebra (generated by the clopen sets), and the null sets are the meager sets. However, we will not use this additional structure here.

The partial inversion relationship between the functors and is given by the following assertion:

- There is a natural transformation from to the identity functor .

More informally: if one starts with an algebraic probability space and converts it back into a classical probability space , then there is a trace-preserving algebra homomorphism of to , which respects morphisms of the algebraic probability space. While this relationship is far weaker than an equivalence of categories (which would require that and are both natural isomorphisms), it is still good enough to allow many ergodic theory problems formulated using classical probability spaces to be reformulated instead as an equivalent problem in algebraic probability spaces.

Remark 2 The opposite composition is a little odd: it takes an arbitrary probability space and returns a more complicated probability space, with being the space of homomorphisms. While there is “morally” an embedding of into using the evaluation map, this map does not exist in general because points in may well have zero measure. However, if one takes a “pointless” approach and focuses just on the measure algebras, then these algebras become naturally isomorphic after quotienting out by null sets.

Remark 3An algebraic probability space captures a bit more structure than a classical probability space, because may be identified with a proper subset of that describes the “regular” functions (or random variables) of the space. For instance, starting with the unit circle (with the usual Haar measure and the usual trace ), any unital subalgebra of that is dense in will generate the same classical probability space on applying the functor , namely one will get the space of homomorphisms from to (with the measure induced from ). Thus for instance could be the continuous functions , the Wiener algebra or the full space , but the classical space will be unable to distinguish these spaces from each other. In particular, the functor loses information (roughly speaking, this functor takes an algebraic probability space and completes it to a von Neumann algebra, but then forgets exactly what algebra was initially used to create this completion). In ergodic theory, this sort of “extra structure” is traditionally encoded in topological terms, by assuming that the underlying probability space has a nice topological structure (e.g. a standard Borel space); however, with the algebraic perspective one has the freedom to have non-topological notions of extra structure, by choosing to be something other than an algebra of continuous functions on a topological space. I hope to discuss one such example of extra structure (coming from the Gowers-Host-Kra theory of uniformity seminorms) in a later blog post (this generalises the example of the Wiener algebra given previously, which is encoding “Fourier structure”).

A small example of how one could use the functors is as follows. Suppose one has a classical probability space with a measure-preserving action of an uncountable group , which is only defined (and an action) up to almost everywhere equivalence; thus for instance for any set and any , and might not be exactly equal, but only equal up to a null set. For similar reasons, an element of the invariant factor might not be exactly invariant with respect to , but instead one only has and equal up to null sets for each . One might like to “clean up” the action of to make it defined everywhere, and a genuine action everywhere, but this is not immediately achievable if is uncountable, since the union of all the null sets where something bad occurs may cease to be a null set. However, by applying the functor , each shift defines a morphism on the associated algebraic probability space (i.e. the Koopman operator), and then applying , we obtain a shift on a new classical probability space which now gives a genuine measure-preserving action of , and which is equivalent to the original action from a measure algebra standpoint. The invariant factor now consists of those sets in which are genuinely -invariant, not just up to null sets. (Basically, the classical probability space contains a Boolean algebra with the property that every measurable set is equivalent up to null sets to precisely one set in , allowing for a canonical “retraction” onto that eliminates all null set issues.)

More indirectly, the functors suggest that one should be able to develop a “pointless” form of ergodic theory, in which the underlying probability spaces are given algebraically rather than classically. I hope to give some more specific examples of this in later posts.

There are a number of ways to construct the real numbers , for instance

- as the metric completion of (thus, is defined as the set of Cauchy sequences of rationals, modulo Cauchy equivalence);
- as the space of Dedekind cuts on the rationals ;
- as the space of quasimorphisms on the integers, quotiented by bounded functions. (I believe this construction first appears in this paper of Street, who credits the idea to Schanuel, though the germ of this construction arguably goes all the way back to Eudoxus.)

There is also a fourth family of constructions that proceeds via nonstandard analysis, as a special case of what is known as the *nonstandard hull construction*. (Here I will assume some basic familiarity with nonstandard analysis and ultraproducts, as covered for instance in this previous blog post.) Given an unbounded nonstandard natural number , one can define two external additive subgroups of the nonstandard integers :

- The group of all nonstandard integers of magnitude less than or comparable to ; and
- The group of nonstandard integers of magnitude infinitesimally smaller than .

The group is a subgroup of , so we may form the quotient group . This space is isomorphic to the reals , and can in fact be used to construct the reals:

Proposition 1For any coset of , there is a unique real number with the property that . The map is then an isomorphism between the additive groups and .

*Proof:* Uniqueness is clear. For existence, observe that the set is a Dedekind cut, and its supremum can be verified to have the required properties for .

In a similar vein, we can view the unit interval in the reals as the quotient

where is the nonstandard (i.e. internal) set ; of course, is not a group, so one should interpret as the image of under the quotient map (or , if one prefers). Or to put it another way, (1) asserts that is the image of with respect to the map .

In this post I would like to record a nice measure-theoretic version of the equivalence (1), which essentially appears already in standard texts on Loeb measure (see e.g. this text of Cutland). To describe the results, we must first quickly recall the construction of *Loeb measure* on $[N]$. Given an internal subset $A$ of $[N]$, we may define the elementary measure $\mu(A)$ of $A$ by the formula

$\displaystyle \mu(A) := \hbox{st}\left( \frac{|A|}{N} \right).$

This is a finitely additive probability measure on the Boolean algebra of internal subsets of $[N]$. We can then construct the *Loeb outer measure* $\mu^*(E)$ of any subset $E$ of $[N]$ in complete analogy with Lebesgue outer measure by the formula

$\displaystyle \mu^*(E) := \inf \sum_{m=1}^\infty \mu(A_m)$

where $(A_m)_{m=1}^\infty$ ranges over all sequences of internal subsets of $[N]$ that cover $E$. We say that a subset $E$ of $[N]$ is *Loeb measurable* if, for any (standard) $\varepsilon > 0$, one can find an internal subset $A$ of $[N]$ which differs from $E$ by a set of Loeb outer measure at most $\varepsilon$, and in that case we define the *Loeb measure* $\mu(E)$ of $E$ to be $\mu^*(E)$. It is a routine matter to show (e.g. using the Carathéodory extension theorem) that the space $\mathcal{L}$ of Loeb measurable sets is a $\sigma$-algebra, and that $\mu$ is a countably additive probability measure on this space that extends the elementary measure. Thus $[N]$ now has the structure of a probability space $([N], \mathcal{L}, \mu)$.
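One can preview this construction at a finite scale (a hedged analogy only: genuine Loeb measure requires an unbounded nonstandard $N$, together with countable saturation and the standard part map): for a subset $A$ of $\{0,\ldots,N-1\}$ playing the role of an internal set, the elementary measure is the normalised count $|A|/N$, and pullbacks of intervals under $x \mapsto x/N$ recover their length.

```python
# Finite-N caricature of the elementary measure mu(A) = st(|A| / N):
# the pullback of an interval [a, b] under x -> x/N has measure ~ b - a.
N = 10 ** 6

def elementary_measure(A):
    return len(A) / N

a, b = 0.25, 0.7
A = [x for x in range(N) if a <= x / N <= b]   # "internal" pullback of [a, b]
assert abs(elementary_measure(A) - (b - a)) < 1e-5
```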

Now, the group acts (Loeb-almost everywhere) on the probability space by the addition map, thus for and (excluding a set of Loeb measure zero where exits ). This action is clearly seen to be measure-preserving. As such, we can form the *invariant factor* , defined by restricting attention to those Loeb measurable sets with the property that is equal -almost everywhere to for each .

The claim is then that this invariant factor is equivalent (up to almost everywhere equivalence) to the unit interval with Lebesgue measure (and the trivial action of ), by the same factor map used in (1). More precisely:

Theorem 2Given a set , there exists a Lebesgue measurable set , unique up to -a.e. equivalence, such that is -a.e. equivalent to the set . Conversely, if is Lebesgue measurable, then is in , and .

More informally, we have the measure-theoretic version

of (1).

*Proof:* We first prove the converse. It is clear that is -invariant, so it suffices to show that is Loeb measurable with Loeb measure . This is easily verified when is an elementary set (a finite union of intervals). By countable subadditivity of outer measure, this implies that Loeb outer measure of is bounded by the Lebesgue outer measure of for any set ; since every Lebesgue measurable set differs from an elementary set by a set of arbitrarily small Lebesgue outer measure, the claim follows.

Now we establish the forward claim. Uniqueness is clear from the converse claim, so it suffices to show existence. Let . Let be an arbitrary standard real number, then we can find an internal set which differs from by a set of Loeb measure at most . As is -invariant, we conclude that for every , and differ by a set of Loeb measure (and hence elementary measure) at most . By the (contrapositive of the) underspill principle, there must exist a standard such that and differ by a set of elementary measure at most for all . If we then define the nonstandard function by the formula

then from the (nonstandard) triangle inequality we have

(say). On the other hand, has the Lipschitz continuity property

and so in particular we see that

for some Lipschitz continuous function . If we then let be the set where , one can check that differs from by a set of Loeb outer measure , and hence does so also. Sending to zero, we see (from the converse claim) that is a Cauchy sequence in and thus converges in for some Lebesgue measurable . The sets then converge in Loeb outer measure to , giving the claim.

Thanks to the Lebesgue differentiation theorem, the conditional expectation of a bounded Loeb-measurable function can be expressed (as a function on , defined -a.e.) as

By the abstract ergodic theorem from the previous post, one can also view this conditional expectation as the element in the closed convex hull of the shifts , of minimal norm. In particular, we obtain a form of the von Neumann ergodic theorem in this context: the averages for converge (as a net, rather than a sequence) in to .

If is (the standard part of) an internal function, that is to say the ultralimit of a sequence of finitary bounded functions, one can view the measurable function as a limit of the that is analogous to the “graphons” that emerge as limits of graphs (see e.g. the recent text of Lovasz on graph limits). Indeed, the measurable function is related to the discrete functions by the formula

for all , where is the nonprincipal ultrafilter used to define the nonstandard universe. In particular, from the Arzela-Ascoli diagonalisation argument there is a subsequence such that

thus is the asymptotic density function of the . For instance, if is the indicator function of a randomly chosen subset of , then the asymptotic density function would equal (almost everywhere, at least).
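The closing example can be simulated at a finite scale (a sketch of mine, with the arbitrary choice of density $1/2$ for the random set): the local densities of a random subset of $\{0,\ldots,N-1\}$ over windows of length $N/100$ all concentrate near $1/2$, which is the finitary shadow of the asymptotic density function being constant almost everywhere.

```python
import random

# A random subset of {0, ..., N-1}, each element kept with probability 1/2.
random.seed(0)
N = 1_000_000
f = [1 if random.random() < 0.5 else 0 for _ in range(N)]

# Local densities over windows of length N/100: each is the sample mean of
# 10^4 fair coin flips, so all of them concentrate near 1/2.
window = N // 100
densities = [sum(f[i:i + window]) / window for i in range(0, N, window)]
assert max(abs(d - 0.5) for d in densities) < 0.05
```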

I’m continuing to look into understanding the ergodic theory of actions, as I believe this may allow one to apply ergodic theory methods to the “single-scale” or “non-asymptotic” setting (in which one averages only over scales comparable to a large parameter , rather than the traditional asymptotic approach of letting the scale go to infinity). I’m planning some further posts in this direction, though this is still a work in progress.

The von Neumann ergodic theorem (the Hilbert space version of the mean ergodic theorem) asserts that if is a unitary operator on a Hilbert space , and is a vector in that Hilbert space, then one has

in the strong topology, where is the -invariant subspace of , and is the orthogonal projection to . (See e.g. these previous lecture notes for a proof.) The same proof extends to more general amenable groups: if is a countable amenable group acting on a Hilbert space by unitary transformations , and is a vector in that Hilbert space, then one has

for any Folner sequence of , where is the -invariant subspace. Thus one can interpret as a certain average of elements of the orbit of .
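A minimal finite-dimensional illustration (my own toy example, not from the post): take the amenable group of integers acting on the plane by an irrational rotation. The invariant subspace is trivial, so the Cesàro averages over the Følner sets {0, …, N−1} converge to zero, the projection of any vector onto that subspace.

```python
import numpy as np

theta = 1.0   # one radian: an irrational multiple of 2*pi
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.array([1.0, 0.0])

# Cesaro averages of the orbit over the Folner sets {0, ..., N-1} of Z
N = 10_000
total = np.zeros(2)
xn = x.copy()
for _ in range(N):
    total += xn
    xn = U @ xn
avg = total / N

# the invariant subspace is {0} here, so the averages converge to 0
assert np.linalg.norm(avg) < 0.01
```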

I recently discovered that there is a simple variant of this ergodic theorem that holds even when the group is not amenable (or not discrete), using a more abstract notion of averaging:

Theorem 1 (Abstract ergodic theorem) Let be an arbitrary group acting unitarily on a Hilbert space , and let be a vector in . Then is the element in the closed convex hull of of minimal norm, and is also the unique element of in this closed convex hull.

*Proof:* As the closed convex hull of is closed, convex, and non-empty in a Hilbert space, it is a classical fact (see e.g. Proposition 1 of this previous post) that it has a unique element of minimal norm. If for some , then the midpoint of and would be in the closed convex hull and be of smaller norm, a contradiction; thus is -invariant. To finish the first claim, it suffices to show that is orthogonal to every element of . But if this were not the case for some such , we would have for all , and thus on taking convex hulls , a contradiction.

Finally, since is orthogonal to , the same is true for for any in the closed convex hull of , and this gives the second claim.

This result is due to Alaoglu and Birkhoff. It implies the amenable ergodic theorem (1); indeed, given any , Theorem 1 implies that there is a finite convex combination of shifts of which lies within (in the norm) of . By the triangle inequality, all the averages also lie within of , but by the Folner property this implies that the averages are eventually within (say) of , giving the claim.
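Theorem 1 can be seen concretely in the simplest (finite, hence amenable) case — a toy check of my own, not part of the original argument: let Z/4 act on R⁴ by cyclic coordinate shifts. The orbit average of a vector is invariant, equals the orthogonal projection onto the invariant subspace (the constant vectors), and has minimal norm among convex combinations of the orbit.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(4)

# orbit of v under Z/4 acting on R^4 by cyclic coordinate shifts
orbit = [np.roll(v, k) for k in range(4)]

# the orbit average is invariant, lies in the convex hull, and equals the
# orthogonal projection onto the invariant subspace (the constant vectors)
pi_v = np.mean(orbit, axis=0)
assert np.allclose(np.roll(pi_v, 1), pi_v)
assert np.allclose(pi_v, np.full(4, v.mean()))

# it also has minimal norm among convex combinations of the orbit, since the
# norm of any combination is at least the norm of its projection to constants
for _ in range(200):
    w = rng.random(4)
    w /= w.sum()
    combo = sum(wi * o for wi, o in zip(w, orbit))
    assert np.linalg.norm(combo) >= np.linalg.norm(pi_v) - 1e-12
```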

It turns out to be possible to use Theorem 1 as a substitute for the mean ergodic theorem in a number of contexts, thus removing the need for an amenability hypothesis. Here is a basic application:

Corollary 2 (Relative orthogonality) Let be a group acting unitarily on a Hilbert space , and let be a -invariant subspace of . Then and are relatively orthogonal over their common subspace , that is to say the restrictions of and to the orthogonal complement of are orthogonal to each other.

*Proof:* By Theorem 1, we have for all , and the claim follows. (Thanks to Gergely Harcos for this short argument.)

Now we give a more advanced application of Theorem 1, to establish some “Mackey theory” over arbitrary groups . Define a *-system* to be a probability space together with a measure-preserving action of on ; this gives an action of on , which by abuse of notation we also call :

(In this post we follow the usual convention of defining the spaces by quotienting out by almost everywhere equivalence.) We say that a -system is *ergodic* if consists only of the constants.

(A technical point: the theory becomes slightly cleaner if we interpret our measure spaces abstractly (or “pointlessly”), removing the underlying space and quotienting by the -ideal of null sets, and considering maps such as only on this quotient -algebra (or on the associated von Neumann algebra or Hilbert space ). However, we will stick with the more traditional setting of classical probability spaces here to keep the notation familiar, but with the understanding that many of the statements below should be understood modulo null sets.)

A *factor* of a -system is another -system together with a *factor map* which commutes with the -action (thus for all ) and respects the measure in the sense that for all . For instance, the *-invariant factor* , formed by restricting to the invariant algebra , is a factor of . (This factor is the first factor in an important hierarchy, the next element of which is the *Kronecker factor* , but we will not discuss higher elements of this hierarchy further here.) If is a factor of , we refer to as an *extension* of .

From Corollary 2 we have

Corollary 3 (Relative independence) Let be a -system for a group , and let be a factor of . Then and are relatively independent over their common factor , in the sense that the spaces and are relatively orthogonal over when all these spaces are embedded into .

This has a simple consequence regarding the product of two -systems and , in the case when the action is trivial:

Lemma 4 If are two -systems, with the action of on trivial, then is isomorphic to in the obvious fashion.

This lemma is immediate for countable , since for a -invariant function , one can ensure that holds simultaneously for all outside of a null set, but is a little trickier for uncountable .

*Proof:* It is clear that is a factor of . To obtain the reverse inclusion, suppose that it fails, thus there is a non-zero which is orthogonal to . In particular, we have orthogonal to for any . Since lies in , we conclude from Corollary 3 (viewing as a factor of ) that is also orthogonal to . Since is an arbitrary element of , we conclude that is orthogonal to and in particular is orthogonal to itself, a contradiction. (Thanks to Gergely Harcos for this argument.)

Now we discuss the notion of a group extension.

Definition 5 (Group extension) Let be an arbitrary group, let be a -system, and let be a compact metrisable group. A -extension of is an extension whose underlying space is (with the product of and the Borel -algebra on ), the factor map is , and the shift maps are given by where for each , is a measurable map (known as the *cocycle* associated to the -extension ).

An important special case of a -extension arises when the measure is the product of with the Haar measure on . In this case, also has a -action that commutes with the -action, making a -system. More generally, could be the product of with the Haar measure of some closed subgroup of , with taking values in ; then is now a system. In this latter case we will call *-uniform*.

If is a -extension of and is a measurable map, we can define the *gauge transform* of to be the -extension of whose measure is the pushforward of under the map , and whose cocycles are given by the formula

It is easy to see that is a -extension that is isomorphic to as a -extension of ; we will refer to and as *equivalent* systems, and as *cohomologous* to . We then have the following fundamental result of Mackey and of Zimmer:

Theorem 6 (Mackey-Zimmer theorem) Let be an arbitrary group, let be an ergodic -system, and let be a compact metrisable group. Then every ergodic -extension of is equivalent to an -uniform extension of for some closed subgroup of .

This theorem is usually stated for amenable groups , but by using Theorem 1 (or more precisely, Corollary 3) the result is in fact also valid for arbitrary groups; we give the proof below the fold. (In the usual formulations of the theorem, and are also required to be Lebesgue spaces, or at least standard Borel, but again with our abstract approach here, such hypotheses will be unnecessary.) Among other things, this theorem plays an important role in the Furstenberg-Zimmer structural theory of measure-preserving systems (as well as subsequent refinements of this theory by Host and Kra); see this previous blog post for some relevant discussion. One can obtain similar descriptions of non-ergodic extensions via the ergodic decomposition, but the result becomes more complicated to state, and we will not do so here.

This should be the final thread (for now, at least) for the Polymath8 project (encompassing the original Polymath8a paper, the nearly finished Polymath8b paper, and the retrospective paper), superseding the previous Polymath8b thread (which was quite full) and the Polymath8a/retrospective thread (which was more or less inactive).

On Polymath8a: I talked briefly with Andrew Granville, who is handling the paper for Algebra & Number Theory, and he said that a referee report should be coming in soon. Apparently the length of the paper is a bit of an issue (not surprising, as it is 163 pages long) and there will be some suggestions to trim the size down a bit.

In view of the length issue at A&NT, I’m now leaning towards taking up Ken Ono’s offer to submit the Polymath8b paper to the new open access journal “Research in the Mathematical Sciences“. I think the paper is almost ready to be submitted (after the current participants sign off on it, of course), but it might be worth waiting on the Polymath8a referee report in case the changes suggested impact the 8b paper.

Finally, it is perhaps time to start working on the retrospective article, and collect some impressions from participants. I wrote up a quick draft of my own experiences, and also pasted in Pace Nielsen’s thoughts, as well as a contribution from an undergraduate following the project (Andrew Gibson). Hopefully we can collect a few more (either through comments on this blog, through email, or through Dropbox), and then start working on editing them together and finding some suitable concluding points to make about the Polymath8 project, and what lessons we can take from it for future projects of this type.

Given two unit vectors in a real inner product space, one can define the *correlation* between these vectors to be their inner product , or in more geometric terms, the cosine of the angle subtended by and . By the Cauchy-Schwarz inequality, this is a quantity between and , with the extreme positive correlation occurring when are identical, the extreme negative correlation occurring when are diametrically opposite, and the zero correlation occurring when are orthogonal. This notion is closely related to the notion of correlation between two non-constant square-integrable real-valued random variables , which is the same as the correlation between two unit vectors lying in the Hilbert space of square-integrable random variables, with being the normalisation of defined by subtracting off the mean and then dividing by the standard deviation of , and similarly for and .

One can also define correlation for complex (Hermitian) inner product spaces by taking the real part of the complex inner product to recover a real inner product.

While reading the (highly recommended) recent popular maths book “How not to be wrong”, by my friend and co-author Jordan Ellenberg, I came across the (important) point that correlation is not necessarily transitive: if correlates with , and correlates with , then this does not imply that correlates with . A simple geometric example is provided by the three unit vectors

in the Euclidean plane : and have a positive correlation of , as do and , but and are not correlated with each other. Or: for a typical undergraduate course, it is generally true that good exam scores are correlated with a deep understanding of the course material, and memorising from flash cards is correlated with good exam scores, but this does not imply that memorising flash cards is correlated with deep understanding of the course material.
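One natural concrete choice for these three vectors (the stripped display presumably listed these or similar ones) is u₁ = (1, 0), u₂ = (1, 1)/√2, u₃ = (0, 1), which can be checked directly:

```python
import numpy as np

u1 = np.array([1.0, 0.0])
u2 = np.array([1.0, 1.0]) / np.sqrt(2)
u3 = np.array([0.0, 1.0])

# pairwise correlations (inner products of unit vectors)
assert np.isclose(u1 @ u2, 1/np.sqrt(2))   # strong positive correlation
assert np.isclose(u2 @ u3, 1/np.sqrt(2))   # strong positive correlation
assert np.isclose(u1 @ u3, 0.0)            # yet u1 and u3 are uncorrelated
```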

However, there are at least two situations in which some partial version of transitivity of correlation can be recovered. The first is in the “99%” regime in which the correlations are *very* close to : if are unit vectors such that is *very* highly correlated with , and is *very* highly correlated with , then this *does* imply that is very highly correlated with . Indeed, from the identity

(and similarly for and ) and the triangle inequality

Thus, for instance, if and , then . This is of course closely related to (though slightly weaker than) the triangle inequality for angles:

Remark 1 (Thanks to Andrew Granville for conversations leading to this observation.) The inequality (1) also holds for sub-unit vectors, i.e. vectors with . This comes by extending in directions orthogonal to all three original vectors and to each other in order to make them unit vectors, enlarging the ambient Hilbert space if necessary. More concretely, one can apply (1) to the unit vectors in .
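The mechanism in the "99%" regime can be sanity-checked numerically (a sketch of my own using random vectors): for unit vectors the identity ‖u − v‖² = 2 − 2⟨u, v⟩ converts the triangle inequality for distances into the stated bound relating the three correlations.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    u, v, w = rng.standard_normal((3, 5))
    u, v, w = u/np.linalg.norm(u), v/np.linalg.norm(v), w/np.linalg.norm(w)
    # ||a - b||^2 = 2 - 2<a,b> for unit vectors, so the triangle inequality
    # ||u - w|| <= ||u - v|| + ||v - w|| bounds <u,w> from below in terms of
    # <u,v> and <v,w>
    d = lambda a, b: np.sqrt(max(2 - 2*(a @ b), 0.0))
    assert d(u, w) <= d(u, v) + d(v, w) + 1e-9
```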

But even in the “” regime in which correlations are very weak, there is still a version of transitivity of correlation, known as the *van der Corput lemma*, which basically asserts that if a unit vector is correlated with *many* unit vectors , then many of the pairs will then be correlated with each other. Indeed, from the Cauchy-Schwarz inequality

Thus, for instance, if for at least values of , then must be at least , which implies that for at least pairs . Or as another example: if a random variable exhibits at least positive correlation with other random variables , then if , at least two distinct must have positive correlation with each other (although this argument does not tell you *which* pair are so correlated). Thus one can view this inequality as a sort of “pigeonhole principle” for correlation.
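The underlying Cauchy–Schwarz step is easy to verify directly (an illustrative sketch, not the post's notation): for a unit vector v and unit vectors u₁, …, uₙ one has (Σᵢ ⟨v, uᵢ⟩)² ≤ ‖Σᵢ uᵢ‖² = Σᵢ,ⱼ ⟨uᵢ, uⱼ⟩, so strong correlation of v with many uᵢ forces many of the pairwise inner products to be large.

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.standard_normal(8)
v /= np.linalg.norm(v)
U = rng.standard_normal((20, 8))
U /= np.linalg.norm(U, axis=1, keepdims=True)   # twenty unit vectors u_i

# Cauchy-Schwarz: (sum_i <v, u_i>)^2 <= ||sum_i u_i||^2 = sum_{i,j} <u_i, u_j>
lhs = float((U @ v).sum()) ** 2
rhs = float((U @ U.T).sum())
assert lhs <= rhs + 1e-9
```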

A similar argument (multiplying each by an appropriate sign ) shows the related van der Corput inequality

and this inequality is also true for complex inner product spaces. (Also, the do not need to be unit vectors for this inequality to hold.)

Geometrically, the picture is this: if positively correlates with all of the , then the are all squashed into a somewhat narrow cone centred at . The cone is still wide enough to allow a few pairs to be orthogonal (or even negatively correlated) with each other, but (when is large enough) it is not wide enough to allow *all* of the to be so widely separated. Remarkably, the bound here does not depend on the dimension of the ambient inner product space; while increasing the number of dimensions should in principle add more “room” to the cone, this effect is counteracted by the fact that in high dimensions, almost all pairs of vectors are close to orthogonal, and the exceptional pairs that are even weakly correlated to each other become exponentially rare. (See this previous blog post for some related discussion; in particular, Lemma 2 from that post is closely related to the van der Corput inequality presented here.)

A particularly common special case of the van der Corput inequality arises when is a unit vector fixed by some unitary operator , and the are shifts of a single unit vector . In this case, the inner products are all equal, and we arrive at the useful van der Corput inequality

(In fact, one can even remove the absolute values from the right-hand side, by using (2) instead of (4).) Thus, to show that has negligible correlation with , it suffices to show that the shifts of have negligible correlation with each other.

Here is a basic application of the van der Corput inequality:

Proposition 1 (Weyl equidistribution estimate) Let be a polynomial with at least one non-constant coefficient irrational. Then one has where .

Note that this assertion implies the more general assertion

for any non-zero integer (simply by replacing by ), which by the Weyl equidistribution criterion is equivalent to the sequence being asymptotically equidistributed in .

*Proof:* We induct on the degree of the polynomial , which must be at least one. If is equal to one, the claim is easily established from the geometric series formula, so suppose that and that the claim has already been proven for . If the top coefficient of is rational, say , then by partitioning the natural numbers into residue classes modulo , we see that the claim follows from the induction hypothesis; so we may assume that the top coefficient is irrational.

In order to use the van der Corput inequality as stated above (i.e. in the formalism of inner product spaces) we will need a non-principal ultrafilter (see e.g. this previous blog post for basic theory of ultrafilters); we leave it as an exercise to the reader to figure out how to present the argument below without the use of ultrafilters (or similar devices, such as Banach limits). The ultrafilter defines an inner product on bounded complex sequences by setting

Strictly speaking, this inner product is only positive semi-definite rather than positive definite, but one can quotient out by the null vectors to obtain a positive-definite inner product. To establish the claim, it will suffice to show that

for every non-principal ultrafilter .

Note that the space of bounded sequences (modulo null vectors) admits a shift , defined by

This shift becomes unitary once we quotient out by null vectors, and the constant sequence is clearly a unit vector that is invariant with respect to the shift. So by the van der Corput inequality, we have

for any . But we may rewrite . Then observe that if , is a polynomial of degree whose coefficient is irrational, so by induction hypothesis we have for . For we of course have , and so

for any . Letting , we obtain the claim.
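The conclusion of Proposition 1 is easy to observe numerically for a sample quadratic with irrational leading coefficient (an illustration, not a proof; the choice of √2 and the cutoff below are mine):

```python
import numpy as np

alpha = np.sqrt(2)                  # irrational leading coefficient
N = 100_000
n = np.arange(1, N + 1, dtype=np.float64)
# average of e(alpha * n^2) over n = 1, ..., N
S = np.exp(2j * np.pi * alpha * n * n).mean()
# the averaged exponential sum is already small and tends to 0 as N grows
assert abs(S) < 0.05
```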

This is the eleventh thread for the Polymath8b project to obtain new bounds for the quantity

;

the previous thread may be found here.

The main focus is now on writing up the results, with a draft paper close to completion here (with the directory of source files here). Most of the sections are now written up more or less completely, with the exception of the appendix on narrow admissible tuples, which was awaiting the bounds on such tuples to stabilise. There is now also an acknowledgments section (linking to the corresponding page on the wiki, which participants should check to see if their affiliations etc. are posted correctly), and in the final remarks section there is now also some discussion about potential improvements to the bounds. I’ve also added some mention of a recent paper of Banks, Freiberg and Maynard which makes use of some of our results (in particular, that ). On the other hand, the portions of the writeup relating to potential improvements to the MPZ estimates have been commented out, as it appears that one cannot easily obtain the exponential sum estimates required to make those go through. (Perhaps, if there are significant new developments, one could incorporate them into a putative Polymath8c project, although at present I think there’s not much urgency to start over once again.)

Regarding the numerics in Section 7 of the paper, one thing which is missing at present is some links to code in case future readers wish to verify the results; alternatively one could include such code and data into the arXiv submission.

It’s about time to discuss possible journals to submit the paper to. Ken Ono has invited us to submit to his new journal, “Research in the Mathematical Sciences“. Another option would be to submit to the same journal “Algebra & Number Theory” that is currently handling our Polymath8a paper (no news on the submission there, but it is a very long paper), although I think the papers are independent enough that it is not absolutely necessary to place them in the same journal. A third natural choice is “Mathematics of Computation“, though I should note that when the Polymath4 paper was submitted there, the editors required us to use our real names instead of the D.H.J. Polymath pseudonym as it would have messed up their metadata system otherwise. (But I can check with the editor there before submitting to see if there is some workaround now, perhaps their policies have changed.) At present I have no strong preferences regarding journal selection, and would welcome further thoughts and proposals. (It is perhaps best to avoid the journals that I am editor or associate editor of, namely Amer. J. Math, Forum of Mathematics, Analysis & PDE, and Dynamics and PDE, due to conflict of interest (and in the latter two cases, specialisation to a different area of mathematics)).

Many fluid equations are expected to exhibit turbulence in their solutions, in which a significant portion of their energy ends up in high frequency modes. A typical example arises from the three-dimensional periodic Navier-Stokes equations

where is the velocity field, is a forcing term, is a pressure field, and is the viscosity. To study the dynamics of energy for this system, we first pass to the Fourier transform

so that the system becomes

We may normalise (and ) to have mean zero, so that . Then we introduce the dyadic energies

where ranges over the powers of two, and is shorthand for . Taking the inner product of (1) with , we obtain the energy flow equation

where range over powers of two, is the energy flow rate

is the energy dissipation rate

and is the energy injection rate

The Navier-Stokes equations are notoriously difficult to solve in general. Despite this, Kolmogorov in 1941 was able to give a convincing heuristic argument for what the distribution of the dyadic energies should become over long times, assuming that some sort of distributional steady state is reached. It is common to present this argument in the form of dimensional analysis, but one can also give a more “first principles” form of Kolmogorov’s argument, which I will do here. Heuristically, one can divide the frequency scales into three regimes:

- The *injection regime* in which the energy injection rate dominates the right-hand side of (2);
- The *energy flow regime* in which the flow rates dominate the right-hand side of (2); and
- The *dissipation regime* in which the dissipation dominates the right-hand side of (2).

If we assume a fairly steady and smooth forcing term , then will be supported on the low frequency modes , and so we heuristically expect the injection regime to consist of the low scales . Conversely, if we take the viscosity to be small, we expect the dissipation regime to only occur for very large frequencies , with the energy flow regime occupying the intermediate frequencies.

We can heuristically predict the dividing line between the energy flow regime and the dissipation regime. Of all the flow rates , it turns out in practice that the terms in which (i.e., interactions between comparable scales, rather than widely separated scales) will dominate the other flow rates, so we will focus just on these terms. It is convenient to return to physical space, decomposing the velocity field into Littlewood-Paley components

of the velocity field at frequency . By Plancherel’s theorem, this field will have an norm of , and as a naive model of turbulence we expect this field to be spread out more or less uniformly on the torus, so we have the heuristic

and a similar heuristic applied to gives

(One can consider modifications of the Kolmogorov model in which is concentrated on a lower-dimensional subset of the three-dimensional torus, leading to some changes in the numerology below, but we will not consider such variants here.) Since

we thus arrive at the heuristic

Of course, there is the possibility that due to significant cancellation, the energy flow is significantly less than , but we will assume that cancellation effects are not that significant, so that we typically have

or (assuming that does not oscillate too much in , and are close to )

On the other hand, we clearly have

We thus expect to be in the dissipation regime when

and in the energy flow regime when

Now we study the energy flow regime further. We assume a “statistically scale-invariant” dynamics in this regime, in particular assuming a power law

for some . From (3), we then expect an average asymptotic of the form

for some structure constants that depend on the exact nature of the turbulence; here we have replaced the factor by the comparable term to make things more symmetric. In order to attain a steady state in the energy flow regime, we thus need a cancellation in the structure constants:

On the other hand, if one is assuming statistical scale invariance, we expect the structure constants to be scale-invariant (in the energy flow regime), in that

for dyadic . Also, since the Euler equations conserve energy, the energy flows symmetrise to zero,

which from (7) suggests a similar cancellation among the structure constants

Combining this with the scale-invariance (9), we see that for fixed , we may organise the structure constants for dyadic into sextuples which sum to zero (including some degenerate tuples of order less than six). This will *automatically* guarantee the cancellation (8) required for a steady state energy distribution, provided that

or in other words

for any other value of , there is no particular reason to expect this cancellation (8) to hold. Thus we are led to the heuristic conclusion that the most stable power law distribution for the energies is the law

or in terms of shell energies, we have the famous Kolmogorov 5/3 law

Given that frequency interactions tend to cascade from low frequencies to high (if only because there are so many more high frequencies than low ones), the above analysis predicts a stabilising effect around this power law: scales at which a law (6) holds for some are likely to lose energy in the near-term, while scales at which a law (6) holds for some are conversely expected to gain energy, thus nudging the exponent of the power law towards .
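The exponent arithmetic behind the 5/3 law can be recorded mechanically. The following sketch is my rephrasing of the heuristic in terms of exponents, using the standard constant-flux ansatz (velocity ~ ε^{1/3} k^{−1/3} at frequency k) rather than the stripped formulas above:

```python
from fractions import Fraction as F

# constant energy flux: eps ~ k * u_k^3, so u_k ~ eps^(1/3) * k^(-1/3)
u_k_exp = F(-1, 3)             # exponent of k in the velocity u_k
E_shell_exp = 2 * u_k_exp      # shell energy E_k ~ u_k^2 ~ k^(-2/3)
E_spec_exp = E_shell_exp - 1   # spectral density E(k) ~ E_k / k ~ k^(-5/3)
assert E_spec_exp == F(-5, 3)  # the Kolmogorov 5/3 exponent
```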

We can solve for in terms of energy dissipation as follows. If we let be the frequency scale demarcating the transition from the energy flow regime (5) to the dissipation regime (4), we have

and hence by (10)

On the other hand, if we let be the energy dissipation at this scale (which we expect to be the dominant scale of energy dissipation), we have

Some simple algebra then lets us solve for and as

and

Thus, we have the Kolmogorov prediction

for

with energy dissipation occurring at the high end of this scale, which is counterbalanced by the energy injection at the low end of the scale.
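The same exponent bookkeeping locates the transition to the dissipation regime (this is the classical Kolmogorov microscale computation, sketched under the standard heuristics rather than taken from the stripped formulas above): balancing the dissipation rate ν k² against the cascade rate ε^{1/3} k^{2/3} gives a cutoff frequency ~ ε^{1/4} ν^{−3/4}.

```python
from fractions import Fraction as F

# balance: nu * k^2 ~ eps^(1/3) * k^(2/3)  =>  k^(4/3) ~ eps^(1/3) * nu^(-1)
net_k = F(2) - F(2, 3)         # net exponent of k in the balance: 4/3
eps_exp = F(1, 3) / net_k      # exponent of eps in the cutoff frequency
nu_exp = F(-1) / net_k         # exponent of nu in the cutoff frequency
assert (eps_exp, nu_exp) == (F(1, 4), F(-3, 4))
```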

Let be a quasiprojective variety defined over a finite field , thus for instance could be an affine variety

where is -dimensional affine space and are a finite collection of polynomials with coefficients in . Then one can define the set of -rational points, and more generally the set of -rational points for any , since can be viewed as a field extension of . Thus for instance in the affine case (1) we have

The Weil conjectures are concerned with understanding the number

of -rational points over a variety . The first of these conjectures was proven by Dwork, and can be phrased as follows.

Theorem 1 (Rationality of the zeta function) Let be a quasiprojective variety defined over a finite field , and let be given by (2). Then there exist a finite number of algebraic integers (known as characteristic values of ), such that for all .

After cancelling, we may of course assume that for any and , and then it is easy to see (as we will see below) that the become uniquely determined up to permutations of the and . These values are known as the *characteristic values* of . Since is a rational integer (i.e. an element of ) rather than merely an algebraic integer (i.e. an element of the ring of integers of the algebraic closure of ), we conclude from the above-mentioned uniqueness that the set of characteristic values are invariant with respect to the Galois group . To emphasise this Galois invariance, we will not fix a specific embedding of the algebraic numbers into the complex field , but work with all such embeddings simultaneously. (Thus, for instance, contains three cube roots of , but which of these is assigned to the complex numbers , , will depend on the choice of embedding .)

An equivalent way of phrasing Dwork’s theorem is that the (-form of the) zeta function

associated to (which is well defined as a formal power series in , at least) is equal to a rational function of (with the and being the poles and zeroes of respectively). Here, we use the formal exponential

Equivalently, the (-form of the) zeta-function is a meromorphic function on the complex numbers which is also periodic with period , and which has only finitely many poles and zeroes up to this periodicity.
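As a small sanity check of this counting phenomenon (a worked example of my own, not from the post): for the affine curve y² = x³ − x over F₅, brute-force point counts over F₅ and F₂₅ match the formula Nₙ = 5ⁿ − αⁿ − βⁿ, where the characteristic values satisfy α + β = −2 and αβ = 5 (so α + β and α² + β² = (−2)² − 2·5 = −6 determine N₁ and N₂).

```python
p = 5  # count affine points of y^2 = x^3 - x over F_5 and F_25

def count_F_p():
    return sum((y*y - (x**3 - x)) % p == 0 for x in range(p) for y in range(p))

# model F_25 as F_5[t]/(t^2 - 2); 2 is a non-square mod 5, elements are (a, b)
def mul(u, v):
    a, b = u
    c, d = v
    return ((a*c + 2*b*d) % p, (a*d + b*c) % p)

def count_F_p2():
    elems = [(a, b) for a in range(p) for b in range(p)]
    cnt = 0
    for x in elems:
        x3 = mul(mul(x, x), x)
        rhs = ((x3[0] - x[0]) % p, (x3[1] - x[1]) % p)
        cnt += sum(mul(y, y) == rhs for y in elems)
    return cnt

N1, N2 = count_F_p(), count_F_p2()
# alpha + beta = -2 and alpha^2 + beta^2 = -6, so:
assert N1 == p - (-2) == 7
assert N2 == p**2 - (-6) == 31
```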

Dwork’s argument relies primarily on -adic analysis – an analogue of complex analysis, but over an algebraically complete (and metrically complete) extension of the -adic field , rather than over the Archimedean complex numbers . The argument is quite effective, and in particular gives explicit upper bounds for the number of characteristic values in terms of the complexity of the variety ; for instance, in the affine case (1) with of degree , Bombieri used Dwork’s methods (in combination with Deligne’s theorem below) to obtain the bound , and a subsequent paper of Hooley established the slightly weaker bound purely from Dwork’s methods (a similar bound had also been pointed out in unpublished work of Dwork). In particular, one has bounds that are uniform in the field , which is an important fact for many analytic number theory applications.

These -adic arguments stand in contrast with Deligne’s resolution of the last (and deepest) of the Weil conjectures:

Theorem 2 (Riemann hypothesis) Let be a quasiprojective variety defined over a finite field , and let be a characteristic value of . Then there exists a natural number such that for every embedding , where denotes the usual absolute value on the complex numbers . (Informally: and all of its Galois conjugates have complex magnitude .)

To put it another way that closely resembles the classical Riemann hypothesis, all the zeroes and poles of the -form lie on the critical lines for . (See this previous blog post for further comparison of various instantiations of the Riemann hypothesis.) Whereas Dwork uses -adic analysis, Deligne uses the essentially orthogonal technique of ell-adic cohomology to establish his theorem. However, ell-adic methods can be used (via the Grothendieck-Lefschetz trace formula) to establish rationality, and conversely, in this paper of Kedlaya p-adic methods are used to establish the Riemann hypothesis. As pointed out by Kedlaya, the ell-adic methods are tied to the intrinsic geometry of (such as the structure of sheaves and covers over ), while the -adic methods are more tied to the *extrinsic* geometry of (how sits inside its ambient affine or projective space).

In this post, I would like to record my notes on Dwork’s proof of Theorem 1, drawing heavily on the expositions of Serre, Hooley, Koblitz, and others.

The basic strategy is to control the rational integers both in an “Archimedean” sense (embedding the rational integers inside the complex numbers with the usual norm ) as well as in the “-adic” sense, with the characteristic of (embedding the integers now in the “complexification” of the -adic numbers , which is equipped with a norm that we will recall later). (This is in contrast to the methods of ell-adic cohomology, in which one primarily works over an -adic field with .) The Archimedean control is trivial:

Proposition 3 (Archimedean control of ) With as above, and any embedding , we have for all and some independent of .

*Proof:* Since is a rational integer, is just . By decomposing into affine pieces, we may assume that is of the affine form (1), then we trivially have , and the claim follows.

Another way of thinking about this Archimedean control is that it guarantees that the zeta function can be defined holomorphically on the open disk in of radius centred at the origin.

The -adic control is significantly more difficult, and is the main component of Dwork’s argument:

Proposition 4 (-adic control of ) With as above, and using an embedding (defined later) with the characteristic of , we can find for any real a finite number of elements such that for all .

Another way of thinking about this -adic control is that it guarantees that the zeta function can be defined *meromorphically* on the entire -adic complex field .

Proposition 4 is ostensibly much weaker than Theorem 1 because of (a) the error term of -adic magnitude at most ; (b) the fact that the number of potential characteristic values here may go to infinity as ; and (c) the potential characteristic values only exist inside the complexified -adics , rather than in the algebraic integers . However, it turns out that by combining -adic control on in Proposition 4 with the trivial control on in Proposition 3, one can obtain Theorem 1 by an elementary argument that does not use any further properties of (other than the obvious fact that the are rational integers), with the in Proposition 4 chosen to exceed the in Proposition 3. We give this argument (essentially due to Borel) below the fold.

The proof of Proposition 4 can be split into two pieces. The first piece, which can be viewed as the number-theoretic component of the proof, uses external descriptions of such as (1) to obtain the following decomposition of :

Proposition 5 (Decomposition of ) With and as above, we can decompose as a finite linear combination (over the integers) of sequences , such that for each such sequence , the zeta functions are entire in , by which we mean that

as .

This proposition will ultimately be a consequence of the properties of the Teichmüller lifting .
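The Teichmüller lift can be computed quite concretely. As a hedged sketch (working in Z/p^K as a finite-precision stand-in for the p-adic integers, with p and K chosen arbitrarily here), iterating the p-th power map converges p-adically to the unique root of X^p = X congruent to a given residue, i.e. to a (p−1)-st root of unity (or zero):

```python
# Teichmuller lift of a residue a mod p, computed in Z/p^K by iterating
# x -> x^p: each iteration gains at least one p-adic digit of accuracy,
# converging to the unique fixed point of x -> x^p congruent to a mod p.
p, K = 7, 10
mod = p ** K

def teichmuller(a):
    x = a % mod
    for _ in range(K):
        x = pow(x, p, mod)
    return x

for a in range(1, p):
    t = teichmuller(a)
    assert t % p == a % p       # reduces to a mod p
    assert pow(t, p, mod) == t  # fixed by x -> x^p, i.e. a (p-1)-st root of unity
```

The convergence rests on the standard fact that x ≡ y mod p^k implies x^p ≡ y^p mod p^{k+1}, so the iterates form a p-adic Cauchy sequence.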

The second piece, which can be viewed as the “p-adic complex analytic” component of the proof, relates the p-adic entire nature of a zeta function to control on the associated sequence , and can be interpreted (after some manipulation) as a p-adic version of the Weierstrass preparation theorem:

Proposition 6 (p-adic Weierstrass preparation theorem) Let be a sequence in , such that the zeta function is entire in . Then for any real , there exist a finite number of elements such that

for all and some .

Clearly, the combination of Proposition 5 and Proposition 6 (and the non-Archimedean nature of the norm) implies Proposition 4.

This is a blog version of a talk I recently gave at the IPAM workshop on “The Kakeya Problem, Restriction Problem, and Sum-product Theory”.

Note: the discussion here will be highly non-rigorous in nature, being extremely loose in particular with asymptotic notation and with the notion of dimension. Caveat emptor.

One of the most infamous unsolved problems at the intersection of geometric measure theory, incidence combinatorics, and real-variable harmonic analysis is the Kakeya set conjecture. We will focus on the following three-dimensional case of the conjecture, stated informally as follows:

Conjecture 1 (Kakeya conjecture) Let be a subset of that contains a unit line segment in every direction. Then .

This conjecture is not precisely formulated here, because we have not specified exactly what type of set is (e.g. measurable, Borel, compact, etc.) and what notion of dimension we are using. We will deliberately ignore these technical details in this post. It is slightly more convenient for us here to work with lines instead of unit line segments, so we work with the following slight variant of the conjecture (which is essentially equivalent):

Conjecture 2 (Kakeya conjecture, again) Let be a family of lines in that meet and contain a line in each direction. Let be the union of the restriction to of every line in . Then .

As the space of all directions in is two-dimensional, we thus see that is an (at least) two-dimensional subset of the four-dimensional space of lines in (actually, it lies in a compact subset of this space, since we have constrained the lines to meet ). One could then ask if this is the only property of that is needed to establish the Kakeya conjecture, that is to say if any subset of which contains a two-dimensional family of lines (restricted to , and meeting ) is necessarily three-dimensional. Here we have an easy counterexample, namely a plane in (passing through the origin), which contains a two-dimensional collection of lines. However, we can exclude this case by adding an additional axiom, leading to what one might call a “strong” Kakeya conjecture:

Conjecture 3 (Strong Kakeya conjecture) Let be a two-dimensional family of lines in that meet , and assume the Wolff axiom that no (affine) plane contains more than a one-dimensional family of lines in . Let be the union of the restriction of every line in . Then .

Actually, to make things work out we need a more quantitative version of the Wolff axiom in which we constrain the metric entropy (and not just dimension) of lines that lie *close* to a plane, rather than exactly *on* the plane. However, for the informal discussion here we will ignore these technical details. Families of lines that point in distinct directions will obey the Wolff axiom, but the converse is not true in general.

In 1995, Wolff established the important lower bound of 5/2 (for various notions of dimension, e.g. Hausdorff dimension) for sets in Conjecture 3 (and hence also for the other forms of the Kakeya problem). However, there is a key obstruction to going beyond the 5/2 barrier, coming from the possible existence of *half-dimensional (approximate) subfields* of the reals . To explain this problem, it is easiest to first discuss the complex version of the strong Kakeya conjecture, in which all relevant (real) dimensions are doubled:

Conjecture 4 (Strong Kakeya conjecture over ) Let be a four (real) dimensional family of complex lines in that meet the unit ball in , and assume the Wolff axiom that no four (real) dimensional (affine) subspace contains more than a two (real) dimensional family of complex lines in . Let be the union of the restriction of every complex line in . Then has real dimension .

The argument of Wolff can be adapted to the complex case to show that all sets occurring in Conjecture 4 have real dimension at least 5. Unfortunately, this is sharp, due to the following fundamental counterexample:

Proposition 5 (Heisenberg group counterexample) Let be the Heisenberg group and let be the family of complex lines

with and . Then is a five (real) dimensional subset of that contains every line in the four (real) dimensional set ; however each four real dimensional (affine) subspace contains at most a two (real) dimensional set of lines in . In particular, the strong Kakeya conjecture over the complex numbers is false.

This proposition is proven by a routine computation, which we omit here. The group structure on is given by the group law

giving the structure of a 2-step simply-connected nilpotent Lie group, isomorphic to the usual Heisenberg group over . Note that while the Heisenberg group is a counterexample to the complex strong Kakeya conjecture, it is not a counterexample to the complex form of the original Kakeya conjecture, because the complex lines in the Heisenberg counterexample do not point in distinct directions, but instead only point in a three (real) dimensional subset of the four (real) dimensional space of available directions for complex lines. For instance, one has the one real-dimensional family of parallel lines

with ; multiplying this family of lines on the right by a group element in gives other families of parallel lines, which in fact sweep out all of .

The Heisenberg counterexample ultimately arises from the “half-dimensional” (and hence degree two) subfield of , which induces an involution which can then be used to define the Heisenberg group through the formula

Analogous Heisenberg counterexamples can also be constructed if one works over finite fields that contain a “half-dimensional” subfield ; we leave the details to the interested reader. Morally speaking, if in turn contained a subfield of dimension (or even a subring or “approximate subring”), then one ought to be able to use this field to generate a counterexample to the strong Kakeya conjecture over the reals. Fortunately, such subfields do not exist; this was a conjecture of Erdős and Volkmann that was proven by Edgar and Miller, and more quantitatively by Bourgain (answering a question of Nets Katz and myself). However, this fact is not entirely trivial to prove, being a key example of the sum-product phenomenon.
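To illustrate the sum-product phenomenon numerically, here is a toy integer version in the spirit of Erdős–Szemerédi (not Bourgain’s discretised statement, which concerns sets of reals at a fixed scale): a set whose sumset is as small as possible has a large product set, and vice versa.

```python
def sum_product_stats(A):
    """Return (|A+A|, |A*A|) for a finite set A of integers."""
    sums = {a + b for a in A for b in A}
    prods = {a * b for a in A for b in A}
    return len(sums), len(prods)

AP = set(range(1, 51))            # arithmetic progression: tiny sumset
GP = {2 ** k for k in range(50)}  # geometric progression: tiny product set

ap_s, ap_p = sum_product_stats(AP)
gp_s, gp_p = sum_product_stats(GP)
assert ap_s == 2 * len(AP) - 1    # |A+A| is minimal for an AP
assert gp_p == 2 * len(GP) - 1    # |A*A| is minimal for a GP
# ...but in each case the *other* set is large: no set of integers is
# simultaneously almost closed under addition and multiplication.
assert ap_p > 10 * len(AP) and gp_s > 10 * len(GP)
```

It is the scale-localised analogue of this incompatibility, for δ-discretised subsets of the reals, that rules out half-dimensional approximate subrings.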

We thus see that to go beyond the 5/2 dimension bound of Wolff for the 3D Kakeya problem over the reals, one must do at least one of two things:

- (a) Exploit the distinct directions of the lines in in a way that goes beyond the Wolff axiom; or
- (b) Exploit the fact that does not contain half-dimensional subfields (or more generally, intermediate-dimensional approximate subrings).

(The situation is more complicated in higher dimensions, as there are more obstructions than the Heisenberg group; for instance, in four dimensions quadric surfaces are an important obstruction, as discussed in this paper of mine.)

Various partial or complete results on the Kakeya problem over various fields have been obtained through route (a) or route (b). For instance, in 2000, Nets Katz, Izabella Laba and myself used route (a) to improve Wolff’s lower bound of 5/2 for Kakeya sets very slightly to 5/2 + 10^{-10} (for a weak notion of dimension, namely upper Minkowski dimension). In 2004, Bourgain, Katz, and myself established a sum-product estimate which (among other things) ruled out approximate intermediate-dimensional subrings of , and then pursued route (b) to obtain a corresponding improvement to the Kakeya conjecture over finite fields of prime order. The analogous (discretised) sum-product estimate over the reals was established by Bourgain in 2003, which in principle would allow one to extend the result of Katz, Laba and myself to the strong Kakeya setting, but this has not been carried out in the literature. Finally, in 2009, Dvir used route (a) and introduced the polynomial method (as discussed previously here) to completely settle the Kakeya conjecture in finite fields.
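As a small illustration of the finite-field setting that Dvir resolved (a hypothetical toy computation with parameters of my choosing): over a prime field one can build a planar Kakeya set out of one line per direction, using the classical “parabola” trick to make it much smaller than the whole plane, and check that it nonetheless meets the polynomial-method lower bound of q(q+1)/2 points.

```python
q = 13                # a small odd prime
inv4 = pow(4, -1, q)  # inverse of 4 mod q (three-argument pow, Python 3.8+)

E = set()
for m in range(q):    # a line of slope m, with offset m^2/4 (parabola trick)
    for t in range(q):
        E.add((t, (m * t + m * m * inv4) % q))
for t in range(q):    # plus one vertical line, so every direction is covered
    E.add((0, t))

# E contains a full line in each of the q+1 directions, yet is much smaller
# than the whole plane, while still obeying Dvir's bound |E| >= q(q+1)/2.
assert q * (q + 1) // 2 <= len(E) < q * q
```

The offset m²/4 is what makes distinct slope lines overlap heavily: a point (x, y) lies on such a line exactly when y + x² is a square mod q, which covers only about half the plane.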

Below the fold, I present a heuristic argument of Nets Katz and myself, which in principle would use route (b) to establish the full (strong) Kakeya conjecture. In broad terms, the strategy is as follows:

- Assume that the (strong) Kakeya conjecture fails, so that there are sets of the form in Conjecture 3 of dimension for some . Assume that is “optimal”, in the sense that is as large as possible.
- Use the optimality of (and suitable non-isotropic rescalings) to establish strong forms of standard structural properties expected of such sets , namely “stickiness”, “planiness”, “local graininess” and “global graininess” (we will roughly describe these properties below). Heuristically, these properties constrain to “behave like” a putative Heisenberg group counterexample.
- By playing all these structural properties off of each other, show that can be parameterised locally by a one-dimensional set which generates a counterexample to Bourgain’s sum-product theorem. This contradiction establishes the Kakeya conjecture.

Nets and I have had an informal version of this argument for many years, but were never able to make a satisfactory theorem (or even a partial Kakeya result) out of it, because we could not rigorously establish anywhere near enough of the necessary structural properties (stickiness, planiness, etc.) on the optimal set, for a large number of reasons (one of which being that we did not have a good notion of dimension that did everything we wished to demand of it). However, there is beginning to be movement in these directions (e.g. in this recent result of Guth, which uses the polynomial method to obtain a weak version of local graininess on certain Kakeya sets). In view of this (and given that neither Nets nor I have been actively working in this direction for some time now, due to many other projects), we have decided to distribute these ideas more widely than before, and in particular on this blog.
