A (smooth) Riemannian manifold is a smooth manifold $M$ without boundary, equipped with a Riemannian metric ${\rm g}$, which assigns a length $|v|_{{\rm g}(x)} \in {\bf R}^+$ to every tangent vector $v \in T_x M$ at a point $x \in M$, and more generally assigns an inner product

$$\langle v, w \rangle_{{\rm g}(x)} \in {\bf R}$$

to every pair of tangent vectors $v, w \in T_x M$ at a point $x \in M$. (We use Roman font for ${\rm g}$ here, as we will need to use $g$ to denote group elements later in this post.) This inner product is assumed to be symmetric, positive definite, and smoothly varying in $x$, and the length is then given in terms of the inner product by the formula

$$|v|_{{\rm g}(x)}^2 := \langle v, v \rangle_{{\rm g}(x)}.$$
In coordinates (and also using abstract index notation), the metric ${\rm g}$ can be viewed as an invertible symmetric rank $(0,2)$ tensor ${\rm g}_{ij}(x)$, with

$$\langle v, w \rangle_{{\rm g}(x)} = {\rm g}_{ij}(x) v^i w^j.$$
One can also view the Riemannian metric as providing a (self-adjoint) identification between the tangent bundle $TM$ of the manifold and the cotangent bundle $T^* M$; indeed, every tangent vector $v \in T_x M$ is then identified with the cotangent vector $v^\flat \in T_x^* M$, defined by the formula

$$v^\flat(w) := \langle v, w \rangle_{{\rm g}(x)}.$$

In coordinates, $v^\flat_i = {\rm g}_{ij} v^j$.
A fundamental dynamical system on the tangent bundle (or equivalently, the cotangent bundle, using the above identification) of a Riemannian manifold is that of geodesic flow. Recall that geodesics are smooth curves $\gamma: [a,b] \to M$ that minimise the length

$$|\gamma| := \int_a^b |\gamma'(t)|_{{\rm g}(\gamma(t))}\ dt.$$

There is some degeneracy in this definition, because one can reparameterise the curve $\gamma$ without affecting the length. In order to fix this degeneracy (and also because the square of the speed is a more tractable quantity analytically than the speed itself), it is better if one replaces the length with the energy

$$E(\gamma) := \frac{1}{2} \int_a^b |\gamma'(t)|_{{\rm g}(\gamma(t))}^2\ dt.$$

Minimising the energy of a parameterised curve $\gamma$ turns out to be the same as minimising the length, together with an additional requirement that the speed $|\gamma'(t)|_{{\rm g}(\gamma(t))}$ stay constant in time. Minimisers (and more generally, critical points) of the energy functional (holding the endpoints fixed) are known as geodesics. From a physical perspective, geodesic flow governs the motion of a particle that is subject to no external forces and thus moves freely, save for the constraint that it must always lie on the manifold $M$.
One can also view geodesic flow as a dynamical system on the tangent bundle (with the state at any time $t$ given by the position $x(t) \in M$ and the velocity $\dot x(t) \in T_{x(t)} M$) or on the cotangent bundle (with the state then given by the position $x(t)$ and the momentum $p(t) := \dot x(t)^\flat \in T_{x(t)}^* M$). With the latter perspective (sometimes referred to as cogeodesic flow), geodesic flow becomes a Hamiltonian flow, with Hamiltonian given as

$$H(x,p) := \frac{1}{2} \langle p, p \rangle_{{\rm g}^{-1}(x)},$$

where $\langle, \rangle_{{\rm g}^{-1}(x)}$ is the inverse inner product to $\langle, \rangle_{{\rm g}(x)}$, which can be defined for instance by the formula

$$\langle p, q \rangle_{{\rm g}^{-1}(x)} := {\rm g}^{ij}(x) p_i q_j,$$

where ${\rm g}^{ij}$ is the inverse of the matrix ${\rm g}_{ij}$. In coordinates, geodesic flow is given by Hamilton's equations of motion

$$\frac{d}{dt} x^i = {\rm g}^{ij} p_j, \qquad \frac{d}{dt} p_i = -\frac{1}{2} (\partial_i {\rm g}^{jk}) p_j p_k.$$

In terms of the velocity $\dot x^i = {\rm g}^{ij} p_j$, we can rewrite these equations as the geodesic equation

$$\frac{d^2}{dt^2} x^i + \Gamma^i_{jk} \dot x^j \dot x^k = 0,$$

where $\Gamma^i_{jk} := \frac{1}{2} {\rm g}^{il} ( \partial_j {\rm g}_{lk} + \partial_k {\rm g}_{jl} - \partial_l {\rm g}_{jk} )$ are the Christoffel symbols.
If the manifold $M$ is an embedded submanifold of a larger Euclidean space ${\bf R}^n$, with the metric ${\rm g}$ on $M$ being induced from the standard metric on ${\bf R}^n$, then the geodesic flow equation can be rewritten in the equivalent form

$$\ddot \gamma(t) \perp T_{\gamma(t)} M,$$

where $\gamma$ is now viewed as taking values in ${\bf R}^n$, and $T_{\gamma(t)} M$ is similarly viewed as a subspace of ${\bf R}^n$. This is intuitively obvious from the geometric interpretation of geodesics: if the curvature of a curve contains components that are tangential to the manifold rather than normal to it, then it is geometrically clear that one should be able to shorten the curve by shifting it along the indicated tangential direction. It is an instructive exercise to rigorously formulate the above intuitive argument. This fact also conforms well with one's physical intuition of geodesic flow as the motion of a free particle constrained to lie in $M$; the normal acceleration $\ddot \gamma(t)$ then corresponds to the centripetal force necessary to keep the particle on $M$ (otherwise it would fly off along a tangent line to $M$, as per Newton's first law). The precise value of the normal vector $\ddot \gamma(t)$ can be computed via the second fundamental form $\Pi$ as $\ddot \gamma(t) = \Pi(\gamma'(t), \gamma'(t))$, but we will not need this formula here.
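The embedded form of the geodesic equation is easy to experiment with numerically. The sketch below (my own illustration, not from the post) integrates geodesic flow on the unit sphere $S^2 \subset {\bf R}^3$, where the condition that the acceleration be normal to the sphere becomes $\ddot\gamma = -|\dot\gamma|^2 \gamma$, and the geodesics are great circles traversed at constant speed.

```python
import numpy as np

# Geodesic flow on the unit sphere S^2 embedded in R^3.  For this embedding
# the condition gamma'' ⟂ T_gamma M becomes gamma'' = -|gamma'|^2 gamma
# (a purely centripetal acceleration), and geodesics are great circles.

def geodesic_sphere(x0, v0, t_max, dt=1e-4):
    """Integrate gamma'' = -|gamma'|^2 gamma with a simple leapfrog-type scheme."""
    x, v = np.array(x0, float), np.array(v0, float)
    for _ in range(int(t_max / dt)):
        a = -np.dot(v, v) * x              # acceleration, normal to the sphere
        v_half = v + 0.5 * dt * a
        x = x + dt * v_half
        a_new = -np.dot(v_half, v_half) * x
        v = v_half + 0.5 * dt * a_new
    return x, v

# Start at the north pole with unit speed; after time pi the great circle
# should (approximately) reach the antipodal point (0, 0, -1).
x, v = geodesic_sphere([0.0, 0.0, 1.0], [1.0, 0.0, 0.0], np.pi)
print(np.linalg.norm(x))   # stays close to the sphere
print(x)
```

The integrator is only a rough sketch; any standard ODE solver applied to the same right-hand side would do equally well.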
In a beautiful paper from 1966, Vladimir Arnold (who, sadly, passed away last week) observed that many basic equations in physics, including the Euler equations of motion of a rigid body, and also (which is, a priori, a remarkable coincidence) the Euler equations of fluid dynamics of an inviscid incompressible fluid, can be viewed (formally, at least) as geodesic flows on a (finite or infinite dimensional) Riemannian manifold. And not just any Riemannian manifold: the manifold is a Lie group (or, to be truly pedantic, a torsor of that group), equipped with a right-invariant (or left-invariant, depending on one's conventions) metric. In the context of rigid bodies, the Lie group is the group of rigid motions; in the context of incompressible fluids, it is the group ${\rm Sdiff}(M)$ of measure-preserving diffeomorphisms of the fluid domain $M$. The right-invariance makes the Hamiltonian mechanics of geodesic flow in this context (where it is sometimes known as the Euler-Arnold equation or the Euler-Poisson equation) quite special; it becomes (formally, at least) completely integrable, and also indicates (in principle, at least) a way to reformulate these equations in a Lax pair formulation. And indeed, many further completely integrable equations, such as the Korteweg-de Vries equation, have since been reinterpreted as Euler-Arnold flows.
From a physical perspective, this all fits well with the interpretation of geodesic flow as the free motion of a system subject only to a physical constraint, such as rigidity or incompressibility. (I do not know, though, of a similarly intuitive explanation as to why the Korteweg de Vries equation is a geodesic flow.)
One consequence of being a completely integrable system is that one has a large number of conserved quantities. In the case of the Euler equations of motion of a rigid body, the conserved quantities are the linear and angular momentum (as observed in an external reference frame, rather than the frame of the object). In the case of the two-dimensional Euler equations, the conserved quantities are the pointwise values of the vorticity (as viewed in Lagrangian coordinates, rather than Eulerian coordinates). In higher dimensions, the conserved quantity is now the (Hodge star of) the vorticity, again viewed in Lagrangian coordinates. The vorticity itself then evolves by the vorticity equation, and is subject to vortex stretching as the diffeomorphism between the initial and final state becomes increasingly sheared.
The elegant Euler-Arnold formalism is reasonably well-known in some circles (particularly in Lagrangian and symplectic dynamics, where it can be viewed as a special case of the Euler-Poincaré formalism or Lie-Poisson formalism respectively), but not in others; I for instance was only vaguely aware of it until recently, and I think that even in fluid mechanics this perspective on the subject is not always emphasised. Given the circumstances, I thought it would therefore be appropriate to present Arnold's original 1966 paper here. (For a more modern treatment of these topics, see the books of Arnold-Khesin and Marsden-Ratiu.)
In order to avoid technical issues, I will work formally, ignoring questions of regularity or integrability, and pretending that infinite-dimensional manifolds behave in exactly the same way as their finite-dimensional counterparts. In the finite-dimensional setting, it is not difficult to make all of the formal discussion below rigorous; but the situation in infinite dimensions is substantially more delicate. (Indeed, it is a notorious open problem whether the Euler equations for incompressible fluids even form a global continuous flow in a reasonable topology in the first place!) However, I do not want to discuss these analytic issues here; see this paper of Ebin and Marsden for a treatment of these topics.
Ben Green, Tamar Ziegler, and I have just uploaded to the arXiv the note “An inverse theorem for the Gowers $U^{s+1}[N]$-norm (announcement)“, not intended for publication. This is an announcement of our forthcoming solution of the inverse conjecture for the Gowers norm, which roughly speaking asserts that the $U^{s+1}[N]$ norm of a bounded function is large if and only if that function correlates with an $s$-step nilsequence of bounded complexity.
The full argument is quite lengthy (our most recent draft is about 90 pages long), but this is in large part due to the presence of various technical details which are necessary in order to make the argument fully rigorous. In this 20-page announcement, we instead sketch a heuristic proof of the conjecture, relying on a number of “cheats” to avoid the above-mentioned technical details. In particular:
- In the announcement, we rely on somewhat vaguely defined terms such as “bounded complexity” or “linearly independent with respect to bounded linear combinations” or “equivalent modulo lower step errors” without specifying them rigorously. In the full paper we will use the machinery of nonstandard analysis to rigorously and precisely define these concepts.
- In the announcement, we deal with the traditional linear nilsequences rather than the polynomial nilsequences that turn out to be better suited for finitary equidistribution theory, but require more notation and machinery in order to use.
- In a similar vein, we restrict attention to scalar-valued nilsequences in the announcement, though due to topological obstructions arising from the twisted nature of the torus bundles used to build nilmanifolds, we will have to deal instead with vector-valued nilsequences in the main paper.
- In the announcement, we pretend that nilsequences can be described by bracket polynomial phases, at least for the sake of making examples, although strictly speaking bracket polynomial phases only give examples of piecewise Lipschitz nilsequences rather than genuinely Lipschitz nilsequences.
With these cheats, it becomes possible to shorten the length of the argument substantially. Also, it becomes clearer that the main task is a cohomological one; in order to inductively deduce the inverse conjecture for a given step $s$ from the conjecture for the preceding step $s-1$, the basic problem is to show that a certain (quasi-)cocycle is necessarily a (quasi-)coboundary. This in turn requires a detailed analysis of the top order and second-to-top order terms in the cocycle, which requires a certain amount of nilsequence equidistribution theory and additive combinatorics, as well as a “sunflower decomposition” to arrange the various nilsequences one encounters into a usable “normal form”.
It is often the case in modern mathematics that the informal heuristic way to explain an argument looks quite different (and is significantly shorter) than the way one would formally present the argument with all the details. This seems to be particularly true in this case; at a superficial level, the full paper has a very different set of notation than the announcement, and a lot of space is invested in setting up additional machinery that one can quickly gloss over in the announcement. We hope though that the announcement can provide a “road map” to help navigate the much longer paper to come.
In Notes 5, we saw that the Gowers uniformity norms $U^{s+1}(V)$ on finite-dimensional vector spaces $V$ over a finite field of high characteristic were controlled by classical polynomial phases of degree at most $s$.
Now we study the analogous situation on cyclic groups ${\bf Z}/N{\bf Z}$. Here, there is an unexpected surprise: the polynomial phases (classical or otherwise) are no longer sufficient to control the Gowers norms $U^{s+1}({\bf Z}/N{\bf Z})$ once $s$ exceeds $1$. To resolve this problem, one must enlarge the space of polynomials to a larger class. It turns out that there are at least three closely related options for this class: the local polynomials, the bracket polynomials, and the nilsequences. Each of the three classes has its own strengths and weaknesses, but in my opinion the nilsequences seem to be the most natural class, due to the rich algebraic and dynamical structure coming from the nilpotent Lie group undergirding such sequences. For reasons of space we shall focus primarily on the nilsequence viewpoint here.
Traditionally, nilsequences have been defined in terms of linear orbits $n \mapsto g^n x$ on nilmanifolds $G/\Gamma$; however, in recent years it has been realised that it is convenient for technical reasons (particularly for the quantitative “single-scale” theory) to generalise this setup to that of polynomial orbits $n \mapsto g(n) \Gamma$, and this is the perspective we will take here.
A polynomial phase on a finite abelian group $H$ is formed by starting with a polynomial $\phi: H \to {\bf R}/{\bf Z}$ to the unit circle, and then composing it with the exponential function $e(x) := e^{2\pi i x}$. To create a nilsequence, we generalise this construction by starting with a polynomial orbit into a nilmanifold $G/\Gamma$, and then composing this with a Lipschitz function $F: G/\Gamma \to {\bf C}$. (The Lipschitz regularity class is convenient for minor technical reasons, but one could also use other regularity classes here if desired.) These classes of sequences certainly include the polynomial phases, but are somewhat more general; for instance, they almost include bracket polynomial phases such as $n \mapsto e(\lfloor \alpha n \rfloor \beta n)$. (The “almost” here is because the relevant functions $F$ involved are only piecewise Lipschitz rather than Lipschitz, but this is primarily a technical issue and one should view bracket polynomial phases as “morally” being nilsequences.)
In these notes we set out the basic theory for these nilsequences, including their equidistribution theory (which generalises the equidistribution theory of polynomial flows on tori from Notes 1) and show that they are indeed obstructions to the Gowers norm being small. This leads to the inverse conjecture for the Gowers norms, which asserts that the Gowers norms on cyclic groups are indeed controlled by these sequences.
A (complex, semi-definite) inner product space is a complex vector space $V$ equipped with a sesquilinear form $\langle, \rangle: V \times V \to {\bf C}$ which is conjugate symmetric, in the sense that $\langle w, v \rangle = \overline{\langle v, w \rangle}$ for all $v, w \in V$, and non-negative in the sense that $\langle v, v \rangle \geq 0$ for all $v \in V$. By inspecting the non-negativity of $\langle v + \lambda w, v + \lambda w \rangle$ for complex numbers $\lambda$, one obtains the Cauchy-Schwarz inequality

$$|\langle v, w \rangle| \leq |\langle v, v \rangle|^{1/2} |\langle w, w \rangle|^{1/2};$$

if one then defines $\|v\| := |\langle v, v \rangle|^{1/2}$, one then quickly concludes the triangle inequality

$$\|v + w\| \leq \|v\| + \|w\|,$$

which then soon implies that $\| \cdot \|$ is a semi-norm on $V$. If we make the additional assumption that the inner product is positive definite, i.e. that $\|v\| > 0$ whenever $v$ is non-zero, then this semi-norm becomes a norm. If $V$ is complete with respect to the metric induced by this norm, then $V$ is called a Hilbert space.
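As a quick numerical sanity check (my own illustration, not part of the post), one can build a semi-definite sesquilinear form from any matrix $B$ via $A = B^* B$ and verify the Cauchy-Schwarz and triangle inequalities on random vectors:

```python
import numpy as np

# A positive semi-definite sesquilinear form <v, w> = w* A v, with A = B* B
# Hermitian PSD, gives a semi-norm; check Cauchy-Schwarz and the triangle
# inequality numerically.

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = B.conj().T @ B          # Hermitian and positive semi-definite

def ip(v, w):
    return w.conj() @ A @ v

def norm(v):
    return abs(ip(v, v)) ** 0.5

v = rng.standard_normal(5) + 1j * rng.standard_normal(5)
w = rng.standard_normal(5) + 1j * rng.standard_normal(5)

print(abs(ip(v, w)) <= norm(v) * norm(w) + 1e-9)   # Cauchy-Schwarz
print(norm(v + w) <= norm(v) + norm(w) + 1e-9)     # triangle inequality
print(abs(ip(v, w) - np.conj(ip(w, v))) < 1e-9)    # conjugate symmetry
```

Of course this verifies nothing that the abstract argument does not already prove; it is merely a concrete instance to keep in mind.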
The above material is extremely standard, and can be found in any graduate real analysis course; I myself covered it here. But what is perhaps less well known (except inside the fields of additive combinatorics and ergodic theory) is that the above theory of classical Hilbert spaces is just the first case of a hierarchy of higher order Hilbert spaces, in which the binary inner product is replaced with a $2^d$-ary inner product that obeys an appropriate generalisation of the conjugate symmetry, sesquilinearity, and positive semi-definiteness axioms. Such inner products then obey a higher order Cauchy-Schwarz inequality, known as the Cauchy-Schwarz-Gowers inequality, and then also obey a triangle inequality and become semi-norms (or norms, if the inner product was non-degenerate). Examples of such norms and spaces include the Gowers uniformity norms, the Gowers box norms, and the Gowers-Host-Kra seminorms; a more elementary example is the family of Lebesgue spaces $L^{2^d}$ when the exponent is a power of two. They play a central role in modern additive combinatorics and in certain aspects of ergodic theory, particularly those relating to Szemerédi’s theorem (or its ergodic counterpart, the Furstenberg multiple recurrence theorem); they also arise in the regularity theory of hypergraphs (which is not unrelated to the other two topics).
A simple example to keep in mind here is the order two Hilbert space $L^4(X)$ on a measure space $X = (X, {\mathcal B}, \mu)$, where the inner product takes the form

$$\langle f_{00}, f_{01}, f_{10}, f_{11} \rangle := \int_X f_{00}(x) \overline{f_{01}(x)}\ \overline{f_{10}(x)} f_{11}(x)\ d\mu(x).$$
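A closely related $2^2$-ary inner product (the one underlying the $U^2$ norm) can be checked numerically. The sketch below (my own illustration) evaluates, on ${\bf Z}/N{\bf Z}$, the inner product $\langle f_{00}, f_{01}, f_{10}, f_{11} \rangle := {\bf E}_{x,h_1,h_2} f_{00}(x) \overline{f_{01}(x+h_1)}\ \overline{f_{10}(x+h_2)} f_{11}(x+h_1+h_2)$ and tests the Cauchy-Schwarz-Gowers inequality on random functions:

```python
import itertools
import numpy as np

# Order-2 Gowers inner product on Z/NZ and the Cauchy-Schwarz-Gowers
# inequality |<f00,f01,f10,f11>| <= prod ||f_w||_{U^2}.

N = 16
rng = np.random.default_rng(0)

def gowers_inner(f00, f01, f10, f11):
    total = 0j
    for x, h1, h2 in itertools.product(range(N), repeat=3):
        total += (f00[x] * np.conj(f01[(x + h1) % N])
                  * np.conj(f10[(x + h2) % N]) * f11[(x + h1 + h2) % N])
    return total / N**3

def u2_norm(f):
    # <f,f,f,f> is real and non-negative; its fourth root is the U^2 norm
    return abs(gowers_inner(f, f, f, f)) ** 0.25

fs = [rng.standard_normal(N) + 1j * rng.standard_normal(N) for _ in range(4)]
lhs = abs(gowers_inner(*fs))
rhs = np.prod([u2_norm(f) for f in fs])
print(lhs <= rhs + 1e-12)   # Cauchy-Schwarz-Gowers inequality
```

The brute-force triple loop is only meant for tiny $N$; in practice one computes $U^2$ norms via the Fourier transform.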
In this brief note I would like to set out the abstract theory of such higher order Hilbert spaces. This is not new material, being already implicit in the breakthrough papers of Gowers and Host-Kra, but I just wanted to emphasise the fact that the material is abstract, and is not particularly tied to any explicit choice of norm so long as a certain axiom is satisfied. (Also, I wanted to write things down so that I would not have to reconstruct this formalism again in the future.) Unfortunately, the notation is quite heavy and the abstract axiom is a little strange; it may be that there is a better way to formulate things. In this particular case it does seem that a concrete approach is significantly clearer, but abstraction is at least possible.
Note: the discussion below is likely to be comprehensible only to readers who already have some exposure to the Gowers norms.
(Linear) Fourier analysis can be viewed as a tool to study an arbitrary function $f$ on (say) the integers ${\bf Z}$, by looking at how such a function correlates with linear phases such as $n \mapsto e(\xi n)$, where $e(x) := e^{2\pi i x}$ is the fundamental character, and $\xi \in {\bf R}$ is a frequency. These correlations control a number of expressions relating to $f$, such as the expected behaviour of $f$ on arithmetic progressions of length three.
In this course we will be studying higher-order correlations, such as the correlation of $f$ with quadratic phases such as $n \mapsto e(\alpha n^2)$, as these will control the expected behaviour of $f$ on more complex patterns, such as arithmetic progressions of length four. In order to do this, we must first understand the behaviour of exponential sums such as

$$\sum_{n=1}^N e(\alpha n^2).$$
Such sums are closely related to the distribution of expressions such as $\alpha n^2 \hbox{ mod } 1$ in the unit circle ${\bf R}/{\bf Z}$, as $n$ varies from $1$ to $N$. More generally, one is interested in the distribution of polynomials of one or more variables taking values in a torus; for instance, one might be interested in the distribution of the quadruplet $(\alpha n^2, \alpha (n+r)^2, \alpha(n+2r)^2, \alpha(n+3r)^2)$ as both $n, r$ vary from $1$ to $N$. Roughly speaking, once we understand these types of distributions, then the general machinery of quadratic Fourier analysis will then allow us to understand the distribution of the quadruplet $(f(n), f(n+r), f(n+2r), f(n+3r))$ for more general classes of functions $f$; this can lead for instance to an understanding of the distribution of arithmetic progressions of length $4$ in the primes, if $f$ is somehow related to the primes.
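The dichotomy between "equidistributed" and "structured" values of $\alpha$ is easy to see numerically. The sketch below (my own illustration; the specific parameter choices are mine) computes the quadratic Weyl sum for a badly approximable $\alpha$ (square-root cancellation) and for a rational $\alpha$ with small denominator (size comparable to $N$):

```python
import cmath
import math

# Quadratic Weyl sum S(alpha, N) = sum_{n=1}^N e(alpha n^2), e(x) = exp(2 pi i x).
# For generic alpha one expects square-root cancellation, |S| ~ sqrt(N);
# for rational alpha with small denominator the terms align and |S| ~ N.

def weyl_sum(alpha, N):
    return sum(cmath.exp(2j * math.pi * alpha * n * n) for n in range(1, N + 1))

N = 10000
print(abs(weyl_sum(math.sqrt(2), N)))   # much smaller than N
print(abs(weyl_sum(1.0 / 3.0, N)))      # comparable to N
```

For $\alpha = 1/3$ the summand is periodic of period $3$, with each period contributing $1 + 2e(1/3)$, so the sum grows linearly in $N$.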
More generally, to find arithmetic progressions such as $n, n+r, n+2r, n+3r$ in a set $A$, it would suffice to understand the equidistribution of the quadruplet $(1_A(n), 1_A(n+r), 1_A(n+2r), 1_A(n+3r))$ in $\{0,1\}^4$ as $n$ and $r$ vary. This is the starting point for the fundamental connection between combinatorics (and more specifically, the task of finding patterns inside sets) and dynamics (and more specifically, the theory of equidistribution and recurrence in measure-preserving dynamical systems, which is a subfield of ergodic theory). This connection was explored in one of my previous classes; it will also be important in this course (particularly as a source of motivation), but the primary focus will be on finitary, and Fourier-based, methods.
The theory of equidistribution of polynomial orbits was developed in the linear case by Dirichlet and Kronecker, and in the polynomial case by Weyl. There are two regimes of interest; the (qualitative) asymptotic regime in which the scale parameter $N$ is sent to infinity, and the (quantitative) single-scale regime in which $N$ is kept fixed (but large). Traditionally, it is the asymptotic regime which is studied, which connects the subject to other asymptotic fields of mathematics, such as dynamical systems and ergodic theory. However, for many applications (such as the study of the primes), it is the single-scale regime which is of greater importance. The two regimes are not directly equivalent, but are closely related: the single-scale theory can be usually used to derive analogous results in the asymptotic regime, and conversely the arguments in the asymptotic regime can serve as a simplified model to show the way to proceed in the single-scale regime. The analogy between the two can be made tighter by introducing the (qualitative) ultralimit regime, which is formally equivalent to the single-scale regime (except for the fact that explicitly quantitative bounds are abandoned in the ultralimit), but resembles the asymptotic regime quite closely.
We will view the equidistribution theory of polynomial orbits as a special case of Ratner’s theorem, which we will study in more generality later in this course.
For the finitary portion of the course, we will be using asymptotic notation: $X \ll Y$, $Y \gg X$, or $X = O(Y)$ denotes the bound $|X| \leq CY$ for some absolute constant $C$, and if we need $C$ to depend on additional parameters then we will indicate this by subscripts, e.g. $X \ll_k Y$ means that $|X| \leq C_k Y$ for some $C_k$ depending only on $k$. In the ultralimit theory we will use an analogue of asymptotic notation, which we will review later in these notes.
Ben Green and I have just uploaded to the arXiv our paper “An arithmetic regularity lemma, an associated counting lemma, and applications“, submitted (a little behind schedule) to the 70th birthday conference proceedings for Endre Szemerédi. In this paper we describe the general-degree version of the arithmetic regularity lemma, which can be viewed as the counterpart of the Szemerédi regularity lemma, in which the object being regularised is a function $f: [N] \to [0,1]$ on a discrete interval $[N] = \{1,\ldots,N\}$ rather than a graph, and the type of patterns one wishes to count are additive patterns (such as arithmetic progressions $n, n+d, \ldots, n+(k-1)d$) rather than subgraphs. Very roughly speaking, this regularity lemma asserts that all such functions can be decomposed as a degree $\leq s$ nilsequence (or more precisely, a variant of a nilsequence that we call a virtual irrational nilsequence), plus a small error, plus a third error which is extremely tiny in the Gowers uniformity norm $U^{s+1}[N]$. In principle, at least, the latter two errors can be readily discarded in applications, so that the regularity lemma reduces many questions in additive combinatorics to questions concerning (virtual irrational) nilsequences. To work with these nilsequences, we also establish an arithmetic counting lemma that gives an integral formula for counting additive patterns weighted by such nilsequences.
The regularity lemma is a manifestation of the “dichotomy between structure and randomness”, as discussed for instance in my ICM article or FOCS article. In the degree $1$ case, this result is essentially due to Green. It is powered by the inverse conjecture for the Gowers norms, which we and Tamar Ziegler have recently established (paper to be forthcoming shortly; the $U^4$ case of our argument is discussed here). The counting lemma is established through the quantitative equidistribution theory of nilmanifolds, which Ben and I set out in this paper.
The regularity and counting lemmas are designed to be used together, and in the paper we give three applications of this combination. Firstly, we give a new proof of Szemerédi’s theorem, which proceeds via an energy increment argument rather than a density increment one. Secondly, we establish a conjecture of Bergelson, Host, and Kra, namely that if $A \subset [N]$ has density $\alpha$, and $\epsilon > 0$, then there exist $\gg_{\alpha,\epsilon} N$ shifts $r$ for which $A$ contains at least $(\alpha^k - \epsilon) N$ arithmetic progressions of length $k \leq 4$ of spacing $r$. (The $k \leq 3$ case of this conjecture was established earlier by Green; the $k \geq 5$ case is false, as was shown by Ruzsa in an appendix to the Bergelson-Host-Kra paper.) Thirdly, we establish a variant of a recent result of Gowers-Wolf, showing that the true complexity of a system of linear forms indeed matches the conjectured value predicted in their first paper.
In all three applications, the scheme of proof can be described as follows:
- Apply the arithmetic regularity lemma, and decompose a relevant function $f$ into three pieces, $f_{nil} + f_{sml} + f_{unf}$.
- The uniform part $f_{unf}$ is so tiny in the Gowers uniformity norm that its contribution can be easily dealt with by an appropriate “generalised von Neumann theorem”.
- The contribution of the (virtual, irrational) nilsequence $f_{nil}$ can be controlled using the arithmetic counting lemma.
- Finally, one needs to check that the contribution of the small error $f_{sml}$ does not overwhelm the main term $f_{nil}$. This is the trickiest bit; one often needs to use the counting lemma again to show that one can find a set of arithmetic patterns for $f_{nil}$ that is sufficiently “equidistributed” that it is not impacted by the small error.
To illustrate the last point, let us give the following example. Suppose we have a set $A \subset [N]$ of some positive density (say $|A| \geq \delta N$) and we have managed to prove that $A$ contains a reasonable number of arithmetic progressions $n, n+r, \ldots, n+(k-1)r$ of some fixed length $k$, e.g. it contains at least $c N^2$ such progressions for some $c > 0$. Now we perturb $A$ by deleting a small number, say $\epsilon N$, of elements from $A$ to create a new set $A'$. Can we still conclude that the new set $A'$ contains any arithmetic progressions of length $k$?

Unfortunately, the answer could be no; conceivably, all of the arithmetic progressions in $A$ could be wiped out by the $\epsilon N$ elements removed from $A$, since each such element of $A$ could be associated with up to $kN$ or so arithmetic progressions in $A$, and $\epsilon N \times kN$ can exceed $cN^2$ when $\epsilon$ is large compared with $c/k$.

But suppose we knew that the arithmetic progressions in $A$ were equidistributed, in the sense that each element in $A$ belonged to roughly the same number of such arithmetic progressions, namely about $k c N^2 / |A| = O(cN/\delta)$ of them. Then each element deleted from $A$ only removes $O(cN/\delta)$ progressions, and so one can safely remove $\epsilon N$ elements from $A$ and still retain some arithmetic progressions, provided that $\epsilon$ is small compared with $\delta$. The same argument works if the arithmetic progressions are only approximately equidistributed, in the sense that the number of progressions that a given element belongs to concentrates sharply around its mean (for instance, by having a small variance), provided that the equidistribution is sufficiently strong. Fortunately, the arithmetic regularity and counting lemmas are designed to give precisely such a strong equidistribution result.
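A toy version of this deletion argument (my own illustration, with progressions of length three for simplicity) can be checked directly: the number of progressions destroyed by deleting a few elements is at most the number of deletions times the maximum number of progressions through any single element.

```python
# Count 3-term arithmetic progressions (x, x+r, x+2r), r >= 1, in
# A = {0, ..., N-1}, then verify the union bound: deleting m elements
# destroys at most m * (max #progressions through one element) of them.

N = 60

def count_3aps(A):
    s = set(A)
    return sum(1 for x in A for r in range(1, N)
               if (x + r) in s and (x + 2 * r) in s)

A = set(range(N))
total = count_3aps(A)

# how many progressions pass through each element
through = {a: 0 for a in A}
for x in A:
    for r in range(1, N):
        if (x + r) in A and (x + 2 * r) in A:
            for y in (x, x + r, x + 2 * r):
                through[y] += 1
max_through = max(through.values())

deleted = {0, 7, 31}                     # remove m = 3 elements
remaining = count_3aps(A - deleted)
print(total, remaining, max_through)
print(total - remaining <= len(deleted) * max_through)   # union bound: True
```

When the counts `through[a]` are all close to their mean (the equidistributed case), this bound shows that a small number of deletions cannot destroy all progressions.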
A succinct (but slightly inaccurate) summary of the regularity+counting lemma strategy would be that in order to solve a problem in additive combinatorics, it “suffices to check it for nilsequences”. But this should come with a caveat, due to the issue of the small error above; in addition to checking the problem for nilsequences, the answer in the nilsequence case must be sufficiently “dispersed” in a suitable sense, so that it can survive the addition of a small (but not completely negligible) perturbation.
One last “production note”. Like our previous paper with Emmanuel Breuillard, we used Subversion to write this paper, which turned out to be a significant efficiency boost as we could work on different parts of the paper simultaneously (this was particularly important this time round as the paper was somewhat lengthy and complicated, and there was a submission deadline). When doing so, we found it convenient to split the paper into a dozen or so pieces (one for each section of the paper, basically) in order to avoid conflicts, and to help coordinate the writing process. I’m also looking into git (a more advanced version control system), and am planning to use it for another of my joint projects; I hope to be able to comment on the relative strengths of these systems (as compared with plain old email) in the future.
Tim Austin, Tanja Eisner, and I have just uploaded to the arXiv our joint paper Nonconventional ergodic averages and multiple recurrence for von Neumann dynamical systems, submitted to Pacific Journal of Mathematics. This project started with the observation that the multiple recurrence theorem of Furstenberg (and the related multiple convergence theorem of Host and Kra) could be interpreted in the language of dynamical systems of commutative finite von Neumann algebras, which naturally raised the question of the extent to which the results hold in the noncommutative setting. The short answer is “yes for small averages, but not for long ones”.
The Furstenberg multiple recurrence theorem can be phrased as follows: if $(X, \mu)$ is a probability space with a measure-preserving shift $T: X \to X$ (which naturally induces an isomorphism $T: L^\infty(X) \to L^\infty(X)$ by setting $Tf := f \circ T^{-1}$), $f \in L^\infty(X)$ is non-negative with positive trace $\int_X f\ d\mu > 0$, and $k \geq 1$ is an integer, then one has

$$\liminf_{N \to \infty} \frac{1}{N} \sum_{n=1}^N \int_X f (T^n f) \cdots (T^{(k-1)n} f)\ d\mu > 0.$$

In particular, $\int_X f (T^n f) \cdots (T^{(k-1)n} f)\ d\mu > 0$ for all $n$ in a set of positive upper density. This result is famously equivalent to Szemerédi’s theorem on arithmetic progressions.
The Host-Kra multiple convergence theorem makes the related assertion that if $f_1, \ldots, f_{k-1} \in L^\infty(X)$, then the scalar averages

$$\frac{1}{N} \sum_{n=1}^N \int_X (T^n f_1) \cdots (T^{(k-1)n} f_{k-1})\ d\mu$$

converge to a limit as $N \to \infty$; a fortiori, the function averages

$$\frac{1}{N} \sum_{n=1}^N (T^n f_1) \cdots (T^{(k-1)n} f_{k-1})$$

converge in (say) $L^2(X)$ norm.
The space $L^\infty(X)$ is a commutative example of a von Neumann algebra: an algebra of bounded linear operators on a complex Hilbert space which is closed under the weak operator topology, and under taking adjoints. Indeed, one can take the Hilbert space to be $L^2(X)$, and identify each element $m$ of $L^\infty(X)$ with the multiplier operator $f \mapsto mf$. The operation $f \mapsto \int_X f\ d\mu$ is then a finite trace $\tau$ for this algebra, i.e. a linear map from the algebra to the scalars such that $\tau(fg) = \tau(gf)$, $\tau(1) = 1$, and $\tau(f^* f) \geq 0$, with equality iff $f = 0$. The shift $T$ is then an automorphism of this algebra (preserving the trace and conjugation).
We can generalise this situation to the noncommutative setting. Define a von Neumann dynamical system $(M, \tau, \alpha)$ to be a von Neumann algebra $M$ with a finite trace $\tau$ and an automorphism $\alpha: M \to M$. In addition to the commutative examples generated by measure-preserving systems, we give three other examples here:
- (Matrices) $M = M_n({\bf C})$ is the algebra of $n \times n$ complex matrices, with trace $\tau(x) := \frac{1}{n} \hbox{tr}(x)$ and shift $\alpha(x) := u x u^{-1}$, where $u$ is a fixed unitary matrix.
- (Group algebras) $M$ is the closure of the group algebra ${\bf C} G$ of a discrete group $G$ (i.e. the algebra of finite formal complex combinations of group elements), which acts on the Hilbert space $\ell^2(G)$ by convolution (identifying each group element with its Kronecker delta function). A trace is given by $\tau(x) := \langle x \delta_1, \delta_1 \rangle_{\ell^2(G)}$, where $\delta_1$ is the Kronecker delta at the identity. Any automorphism of the group $G$ induces a shift $\alpha$ of the algebra.
- (Noncommutative torus) $M$ is the von Neumann algebra acting on $L^2({\bf R}/{\bf Z})$ generated by the multiplier operator $f(x) \mapsto e^{2\pi i x} f(x)$ and the shifted multiplier operator $f(x) \mapsto e^{2\pi i x} f(x+\alpha)$, where $\alpha \in {\bf R}/{\bf Z}$ is fixed. A trace is given by $\tau(x) := \langle x 1, 1 \rangle_{L^2({\bf R}/{\bf Z})}$, where $1$ is the constant function.
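The matrix example is finite-dimensional, so its axioms can be verified directly. The sketch below (my own construction, for illustration) checks that the normalised trace is tracial and positive, and that conjugation by a unitary is a trace-preserving $*$-automorphism:

```python
import numpy as np

# The matrix von Neumann dynamical system: M = M_n(C) with normalised trace
# tau(x) = tr(x)/n and shift alpha(x) = u x u^{-1} for a fixed unitary u.

n = 4
rng = np.random.default_rng(1)

# a random unitary from the QR decomposition of a complex Gaussian matrix
u, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

def tau(x):
    return np.trace(x) / n

def alpha(x):
    return u @ x @ u.conj().T      # u is unitary, so u* = u^{-1}

a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

print(np.isclose(tau(a @ b), tau(b @ a)))              # tracial property
print(np.isclose(tau(alpha(a)), tau(a)))               # alpha preserves tau
print(np.allclose(alpha(a @ b), alpha(a) @ alpha(b)))  # multiplicativity
print(tau(a.conj().T @ a).real >= 0)                   # positivity of tau(x* x)
```

This system is noncommutative as soon as $n \geq 2$, which is what makes the questions below nontrivial even in finite dimensions.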
Inspired by noncommutative generalisations of other results in commutative analysis, one can then ask the following questions, for a fixed $k \geq 1$ and for a fixed von Neumann dynamical system $(M, \tau, \alpha)$:
- (Recurrence on average) Whenever $a \in M$ is non-negative with positive trace, is it true that $\liminf_{N \to \infty} \frac{1}{N} \sum_{n=1}^N \tau( a \alpha^n(a) \cdots \alpha^{(k-1)n}(a) ) > 0$?
- (Recurrence on a dense set) Whenever $a \in M$ is non-negative with positive trace, is it true that $\tau( a \alpha^n(a) \cdots \alpha^{(k-1)n}(a) ) > 0$ for all $n$ in a set of positive upper density?
- (Weak convergence) With $a \in M$, is it true that $\frac{1}{N} \sum_{n=1}^N \tau( a \alpha^n(a) \cdots \alpha^{(k-1)n}(a) )$ converges?
- (Strong convergence) With $a \in M$, is it true that $\frac{1}{N} \sum_{n=1}^N \alpha^n(a) \cdots \alpha^{(k-1)n}(a)$ converges in $L^2(\tau)$, using the Hilbert-Schmidt norm $\|x\|_{L^2(\tau)} := \tau(x^* x)^{1/2}$?
Note that strong convergence automatically implies weak convergence, and recurrence on average automatically implies recurrence on a dense set.
For $k = 1$, all four questions can trivially be answered “yes”. For $k = 2$, the answer to the above four questions is also “yes”, thanks to the von Neumann ergodic theorem for unitary operators. For $k = 3$, we were able to establish a positive answer to the “recurrence on a dense set”, “weak convergence”, and “strong convergence” results assuming that the system is ergodic. For general $k$, we have a positive answer to all four questions under the assumption that the system is asymptotically abelian, which roughly speaking means that the commutators $[a, \alpha^n(b)]$ converge to zero (in an appropriate weak sense) as $n \to \infty$. Both of these proofs adapt the usual ergodic theory arguments; the latter result generalises some earlier work of Niculescu-Stroh-Zsido, Duvenhage, and Beyers-Duvenhage-Stroh. For the $k = 3$ result, a key observation is that the van der Corput lemma can be used to control triple averages without requiring any commutativity; the “generalised von Neumann” trick of using multiple applications of the van der Corput trick to control higher averages, however, relies much more strongly on commutativity.
In most other situations we have counterexamples to all of these questions. In particular:
- For , recurrence on average can fail on an ergodic system; indeed, one can even make the average negative. This example is ultimately based on a Behrend example construction and a von Neumann algebra construction known as the crossed product.
- For , recurrence on a dense set can also fail if the ergodicity hypothesis is dropped. This also uses the Behrend example and the crossed product construction.
- For , weak and strong convergence can fail even assuming ergodicity. This uses a group theoretic construction, which amusingly was inspired by Grothendieck’s interpretation of a group as a sheaf of flat connections, which I blogged about recently, and which I will discuss below the fold.
- For k=5, recurrence on a dense set fails even with the ergodicity hypothesis. This uses a fancier version of the Behrend example, due to Ruzsa, from this paper of Bergelson, Host, and Kra. This example only applies for k at least 5; we do not know whether recurrence on a dense set holds for ergodic systems when k=4.
Ben Green, Tamar Ziegler and I have just uploaded to the arXiv our paper “An inverse theorem for the Gowers U^4 norm“. This paper establishes the next case of the inverse conjecture for the Gowers norms for the integers (after the U^3 case, which was done by Ben and myself a few years ago). This conjecture has a number of combinatorial and number-theoretic consequences; for instance, by combining this new inverse theorem with previous results, one can now get the correct asymptotic for the number of arithmetic progressions of primes of length five in any large interval {1,…,N}.
To state the inverse conjecture properly requires a certain amount of notation. Given a function $f: {\bf Z} \to {\bf C}$ and a shift $h \in {\bf Z}$, define the multiplicative derivative

$\displaystyle \Delta_h f(n) := f(n+h) \overline{f(n)}$

and then define the Gowers norm $\| f \|_{U^{s+1}[N]}$ of a function $f: [N] \to {\bf C}$ to (essentially) be the quantity

$\displaystyle \| f \|_{U^{s+1}[N]} := \left({\bf E}_{n, h_1,\ldots,h_{s+1} \in [N]} \Delta_{h_1} \cdots \Delta_{h_{s+1}} f(n)\right)^{1/2^{s+1}},$
where we extend f by zero outside of $[N] := \{1,\ldots,N\}$. (Actually, we use a slightly different normalisation to ensure that the function 1 has a norm of 1, but never mind this for now.)
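As an aside, the s = 1 case of this definition is small enough to check numerically. The following sketch (my own, using unnormalised averages on Z/NZ rather than the normalisation used in the paper) computes the U^2 norm directly from the derivative definition and confirms the standard identity that its fourth power equals the sum of fourth powers of the Fourier coefficients.

```python
import numpy as np

N = 32
rng = np.random.default_rng(3)
f = np.exp(2j * np.pi * rng.random(N))  # a random phase function on Z/NZ

# ||f||_{U^2}^4 = E_{n,h1,h2} Delta_{h1} Delta_{h2} f(n)
#              = E_{n,h1,h2} f(n) conj(f(n+h1)) conj(f(n+h2)) f(n+h1+h2).
n, h1, h2 = np.ix_(np.arange(N), np.arange(N), np.arange(N))
prod = (f[n] * np.conj(f[(n + h1) % N]) *
        np.conj(f[(n + h2) % N]) * f[(n + h1 + h2) % N])
u2 = prod.mean().real ** 0.25

# The same quantity via Fourier coefficients: ||f||_{U^2}^4 = sum |fhat|^4.
fhat = np.fft.fft(f) / N
print(np.isclose(u2, (np.abs(fhat) ** 4).sum() ** 0.25))  # True
```

This Fourier identity is what makes the s = 1 case of the inverse conjecture a short consequence of Fourier analysis; no such clean formula exists for higher s.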
Informally, the Gowers norm $\|f\|_{U^{s+1}[N]}$ measures the amount of bias present in the $(s+1)^{th}$ multiplicative derivatives of $f$. In particular, if $f(n) = e(P(n))$ for some polynomial $P$ of degree at most $s$, where $e(x) := e^{2\pi i x}$, then the $(s+1)^{th}$ derivative of $f$ is identically 1, and so is the Gowers norm.
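This flattening effect is easy to verify numerically. A minimal sketch (standard definitions, not code from the paper), working on Z/NZ for convenience:

```python
import numpy as np

N = 97

def e(t):
    return np.exp(2j * np.pi * t)

def mult_derivative(f, h):
    # Delta_h f(n) = f(n+h) * conj(f(n)), with n + h taken mod N
    return np.roll(f, -h) * np.conj(f)

n = np.arange(N)
f = e(3 * n**2 / N)  # a quadratic phase (degree s = 2) on Z/NZ

g = f
for h in (5, 11, 23):  # take any three multiplicative derivatives
    g = mult_derivative(g, h)

print(np.allclose(g, 1))  # True: the 3rd derivative of a quadratic phase is 1
```

Each derivative lowers the degree of the phase by one, so s + 1 derivatives of a degree-s phase leave the constant function 1.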
However, polynomial phases are not the only functions with large Gowers norm. For instance, consider the function $f(n) := e(\lfloor \alpha n \rfloor \beta n)$, which is what we call a quadratic bracket polynomial phase. This function isn’t quite quadratic, but it is close enough to being quadratic (because one has the approximate linearity relationship $\lfloor \alpha(n+m) \rfloor = \lfloor \alpha n \rfloor + \lfloor \alpha m \rfloor$ holding a good fraction of the time) that it turns out that the third derivative is trivial fairly often, and the $U^3$ Gowers norm is comparable to 1. This bracket polynomial phase can be modeled as a nilsequence $n \mapsto F(g(n)\Gamma)$, where $n \mapsto g(n)\Gamma$ is a polynomial orbit on a nilmanifold $G/\Gamma$, which in this case has step 2. (The function $F$ is only piecewise smooth, due to the discontinuity in the floor function $x \mapsto \lfloor x \rfloor$, so strictly speaking we would classify this as an almost nilsequence rather than a nilsequence, but let us ignore this technical issue here.) In fact, there is a very close relationship between nilsequences and bracket polynomial phases, but I will detail this in a later post.
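The approximate linearity of the floor function mentioned above can be seen concretely: for any reals a and b one has floor(a+b) - floor(a) - floor(b) in {0, 1}, with the value 0 occurring a positive fraction of the time. A quick numerical check (my own illustration, with the slope chosen arbitrarily):

```python
import numpy as np

alpha = np.sqrt(2)  # an arbitrary irrational slope
n = np.arange(1, 2000)
m = np.arange(1, 2000)[:, None]

# Discrepancy of the floor map n -> floor(alpha * n) from exact linearity.
disc = np.floor(alpha * (n + m)) - np.floor(alpha * n) - np.floor(alpha * m)

print(np.unique(disc))     # [0. 1.] -- the discrepancy is always 0 or 1
print((disc == 0).mean())  # fraction of exactly linear pairs (about 1/2 here)
```

The discrepancy is 0 precisely when the fractional parts of alpha*n and alpha*m sum to less than 1, which by equidistribution happens about half the time for irrational alpha.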
The inverse conjecture for the Gowers norm, GI(s), asserts that such nilsequences are the only obstruction to the Gowers norm being small. Roughly speaking, it goes like this:
Inverse conjecture, GI(s). (Informal statement) Suppose that $f: [N] \to {\bf C}$ is bounded but has large $U^{s+1}[N]$ norm. Then there is an s-step nilsequence of “bounded complexity” that correlates with f.
This conjecture is trivial for s=0, is a short consequence of Fourier analysis when s=1, and was proven for s=2 by Ben and myself. In this paper we establish the s=3 case. An equivalent formulation in this case is that any bounded function of large $U^4$ norm must correlate with a “bracket cubic phase”, which is the product of a bounded number of phases from the following list

$\displaystyle e(\alpha n^3),\ e(\lfloor \alpha n \rfloor \beta n^2),\ e(\lfloor \alpha n \rfloor \lfloor \beta n \rfloor \gamma n),\ e(\lfloor \lfloor \alpha n \rfloor \beta n \rfloor \gamma n)$

for various real numbers $\alpha, \beta, \gamma$.
It appears that our methods also work in higher step, though for technical reasons it is convenient to make a number of adjustments to our arguments to do so, most notably a switch from standard analysis to non-standard analysis, about which I hope to say more later. But there are a number of simplifications available in the s=3 case which make the argument significantly shorter, and so we will be writing the higher s argument in a separate paper.
The arguments largely follow those for the s=2 case (which in turn are based on this paper of Gowers). Two major new ingredients are a deployment of a normal form and equidistribution theory for bracket quadratic phases, and a combinatorial decomposition of frequency space which we call the sunflower decomposition. I will sketch these ideas below the fold.
In a previous post, we discussed the Szemerédi regularity lemma, and how a given graph could be regularised by partitioning the vertex set into random neighbourhoods. More precisely, we gave a proof of
Lemma 1 (Regularity lemma via random neighbourhoods) Let $\epsilon > 0$. Then there exist integers $M_1, \ldots, M_m \geq 1$ with the following property: whenever $G = (V,E)$ is a graph on finitely many vertices, if one selects one of the integers $M_r$ at random from $M_1, \ldots, M_m$, then selects $M_r$ vertices $v_1, \ldots, v_{M_r}$ uniformly from $V$ at random, then the $2^{M_r}$ vertex cells $V_1, \ldots, V_{2^{M_r}}$ (some of which can be empty) generated by the vertex neighbourhoods $A_t := \{ v \in V: (v,v_t) \in E \}$ for $1 \leq t \leq M_r$, will obey the regularity property

$\displaystyle \sum_{(V_i,V_j) \hbox{ not } \epsilon\hbox{-regular}} |V_i| |V_j| \leq \epsilon |V|^2 \ \ \ \ \ (1)$

with probability at least $1-\epsilon$, where the sum is over all pairs $(i,j)$ for which $G$ is not $\epsilon$-regular between $V_i$ and $V_j$. [Recall that a pair $(V_i, V_j)$ is $\epsilon$-regular for $G$ if one has

$\displaystyle |d(A,B) - d(V_i,V_j)| \leq \epsilon$

for any $A \subset V_i$ and $B \subset V_j$ with $|A| \geq \epsilon |V_i|$, $|B| \geq \epsilon |V_j|$, where $d(A,B) := |\{ (a,b) \in A \times B: (a,b) \in E \}| / |A| |B|$ is the density of edges between $A$ and $B$.]
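To make the epsilon-regularity definition concrete, here is a toy transcription in code (entirely my own; the parameters and the random-sampling shortcut are made up for illustration, since checking all large subsets is exponentially expensive):

```python
import numpy as np

rng = np.random.default_rng(1)

def density(adj, rows, cols):
    # edge density d(A, B) between the row set A and the column set B
    return adj[np.ix_(rows, cols)].mean()

def looks_regular(adj, eps, trials=200):
    # Sample random large subsets A', B' and test |d(A',B') - d(A,B)| <= eps.
    # (The true definition quantifies over *all* large subsets.)
    nA, nB = adj.shape
    d = adj.mean()
    kA, kB = int(np.ceil(eps * nA)), int(np.ceil(eps * nB))
    for _ in range(trials):
        rows = rng.choice(nA, size=rng.integers(kA, nA + 1), replace=False)
        cols = rng.choice(nB, size=rng.integers(kB, nB + 1), replace=False)
        if abs(density(adj, rows, cols) - d) > eps:
            return False
    return True

# A random bipartite graph is regular with overwhelming probability...
adj = rng.random((60, 60)) < 0.3
print(looks_regular(adj, eps=0.25))  # True

# ...while two disjoint complete halves have overall density 1/2 but admit
# large subsets of density 1, witnessing a failure of 0.25-regularity.
half = np.zeros((60, 60), dtype=bool)
half[:30, :30] = True
half[30:, 30:] = True
print(abs(density(half, np.arange(30), np.arange(30)) - half.mean()))  # 0.5
```

The "half graph" example is the standard picture of irregularity: its global density hides two internally very different blocks.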
The proof was a combinatorial one, based on the standard energy increment argument.
In this post I would like to discuss an alternate approach to the regularity lemma, which is an infinitary approach passing through a graph-theoretic version of the Furstenberg correspondence principle (mentioned briefly in this earlier post of mine). While this approach superficially looks quite different from the combinatorial approach, it in fact uses many of the same ingredients, most notably a reliance on random neighbourhoods to regularise the graph. This approach was introduced by myself back in 2006, and used by Austin and by Austin and myself to establish some property testing results for hypergraphs; more recently, a closely related infinitary hypergraph removal lemma developed in the 2006 paper was also used by Austin to give new proofs of the multidimensional Szemerédi theorem and of the density Hales-Jewett theorem (the latter being a spinoff of the polymath1 project).
For various technical reasons we will not be able to use the correspondence principle to recover Lemma 1 in its full strength; instead, we will establish the following slightly weaker variant.
Lemma 2 (Regularity lemma via random neighbourhoods, weak version) Let $\epsilon > 0$. Then there exist integers $M_1, \ldots, M_m \geq 1$ with the following property: whenever $G = (V,E)$ is a graph on finitely many vertices, there exists $1 \leq r \leq m$ such that if one selects $M_r$ vertices $v_1, \ldots, v_{M_r}$ uniformly from $V$ at random, then the $2^{M_r}$ vertex cells generated by the vertex neighbourhoods $A_t := \{ v \in V: (v,v_t) \in E \}$ for $1 \leq t \leq M_r$, will obey the regularity property (1) with probability at least $1-\epsilon$.
Roughly speaking, Lemma 1 asserts that one can regularise a large graph with high probability by using $M_r$ random neighbourhoods, where $M_r$ is chosen at random from one of a number of choices $M_1, \ldots, M_m$; in contrast, the weaker Lemma 2 asserts that one can regularise a large graph with high probability by using some integer $M_r$ from $M_1, \ldots, M_m$, but the exact choice of $r$ depends on the graph $G$, and it is not guaranteed that a randomly chosen $r$ will be likely to work. While Lemma 2 is strictly weaker than Lemma 1, it still implies the (weighted) Szemerédi regularity lemma (Lemma 2 from the previous post).
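The random-neighbourhood partition at the heart of both lemmas is simple to simulate. A sketch (my own toy code, with arbitrary parameters): pick M random vertices and partition V according to adjacency patterns, yielding at most 2^M cells.

```python
import numpy as np

rng = np.random.default_rng(2)

n, M = 500, 4
adj = np.triu(rng.random((n, n)) < 0.4, k=1)
adj = adj | adj.T  # a random symmetric graph with no self-loops

pivots = rng.choice(n, size=M, replace=False)  # the M random vertices v_1..v_M

# Each vertex gets an M-bit signature: bit t records adjacency to pivot t.
# Vertices sharing a signature lie in the same cell.
bits = adj[:, pivots].astype(int)
signature = bits @ (1 << np.arange(M))

cells = {s: np.flatnonzero(signature == s) for s in np.unique(signature)}
print(len(cells), sum(len(c) for c in cells.values()))  # <= 2^M cells covering all n vertices
```

The content of the regularity lemmas is that, for a suitably chosen number of pivots, most pairs of these cells are epsilon-regular with high probability.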
As part of the polymath1 project, I would like to set up a reading seminar on this blog for the following three papers and notes:
- H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241.
- R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished.
- H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119.
As I understand it, paper #1 begins the proof of DHJ(3) (the k=3 version of density Hales-Jewett), but the proof is not quite complete, and the notes in #2 completes the proof using ideas from both paper #1 and paper #3. Paper #3, of course, does DHJ(k) for all k. For the purposes of the polymath1 project, though, I think it would be best if we focus exclusively on k=3.
While this seminar is of course related in content to the main discussion threads in the polymath1 project, I envision this to be a more sedate affair, in which we go slowly through various sections of various papers, asking questions of each other along the way, and presenting various bits and pieces of the proof. The papers require a certain technical background in ergodic theory to understand, but my hope is that if enough other people (in particular, combinatorialists) ask questions here (and “naive” or “silly” questions are strongly encouraged) then we should be able to make a fair amount of the arguments here accessible. I also hope that some ergodic theorists who have been intending to read these papers already, but didn’t get around to it, will join me in reading them.
This is the first time I am trying something like this, and so we shall be using the carefully thought out protocol known as “making things up as we go along”. My initial plan is to start understanding the “big picture” (in particular, to outline the general strategy of proof), while also slowly going through the key stages of that proof in something resembling a linear order. But I imagine that the focus may change as the seminar progresses.
I’ll start the ball rolling with some initial impressions of paper #1 in the comments below. As with other threads in this project, I would like all comments to come with a number and title, starting with 600 and then incrementing (the numbers 1-599 being reserved by other threads in this project).