
Kaisa Matomäki, Xuancheng Shao, Joni Teräväinen, and myself have just uploaded to the arXiv our preprint “Higher uniformity of arithmetic functions in short intervals I. All intervals“. This paper investigates the higher order (Gowers) uniformity of standard arithmetic functions in analytic number theory (and specifically, the Möbius function {\mu}, the von Mangoldt function {\Lambda}, and the generalised divisor functions {d_k}) in short intervals {(X,X+H]}, where {X} is large and {H} lies in the range {X^{\theta+\varepsilon} \leq H \leq X^{1-\varepsilon}} for a fixed constant {0 < \theta < 1} (that one would like to be as small as possible). If we let {f} denote one of the functions {\mu, \Lambda, d_k}, then there is extensive literature on the estimation of short sums

\displaystyle  \sum_{X < n \leq X+H} f(n)

and some literature also on the estimation of exponential sums such as

\displaystyle  \sum_{X < n \leq X+H} f(n) e(-\alpha n)

for a real frequency {\alpha}, where {e(\theta) := e^{2\pi i \theta}}. For applications in the additive combinatorics of such functions {f}, it is also necessary to consider more general correlations, such as polynomial correlations

\displaystyle  \sum_{X < n \leq X+H} f(n) e(-P(n))

where {P: {\bf Z} \rightarrow {\bf R}} is a polynomial of some fixed degree, or more generally

\displaystyle  \sum_{X < n \leq X+H} f(n) \overline{F}(g(n) \Gamma)

where {G/\Gamma} is a nilmanifold of fixed degree and dimension (and with some control on structure constants), {g: {\bf Z} \rightarrow G} is a polynomial map, and {F: G/\Gamma \rightarrow {\bf C}} is a Lipschitz function (with some bound on the Lipschitz constant). Indeed, thanks to the inverse theorem for the Gowers uniformity norm, such correlations let one control the Gowers uniformity norm of {f} (possibly after subtracting off some renormalising factor) on such short intervals {(X,X+H]}, which can in turn be used to control other multilinear correlations involving such functions.

Traditionally, asymptotics for such sums are expressed in terms of a “main term” of some arithmetic nature, plus an error term that is estimated in magnitude. For instance, a sum such as {\sum_{X < n \leq X+H} \Lambda(n) e(-\alpha n)} would be approximated by a main term that vanishes (or is negligible) if {\alpha} is “minor arc”, but is expressible in terms of something like a Ramanujan sum if {\alpha} is “major arc”, together with an error term. We found it convenient to cancel off such main terms by subtracting an approximant {f^\sharp} from each of the arithmetic functions {f} and then getting upper bounds on remainder correlations such as

\displaystyle  |\sum_{X < n \leq X+H} (f(n)-f^\sharp(n)) \overline{F}(g(n) \Gamma)| \ \ \ \ \ (1)

(actually for technical reasons we also allow the {n} variable to be restricted further to a subprogression of {(X,X+H]}, but let us ignore this minor extension for this discussion). There is some flexibility in how to choose these approximants, but we eventually found it convenient to use the following choices.

  • For the Möbius function {\mu}, we simply set {\mu^\sharp = 0}, as per the Möbius pseudorandomness conjecture. (One could choose a more sophisticated approximant in the presence of a Siegel zero, as I did with Joni in this recent paper, but we do not do so here.)
  • For the von Mangoldt function {\Lambda}, we eventually went with the Cramér-Granville approximant {\Lambda^\sharp(n) = \frac{W}{\phi(W)} 1_{(n,W)=1}}, where {W = \prod_{p < R} p} and {R = \exp(\log^{1/10} X)}. (A toy numerical illustration of this approximant is sketched just after this list.)
  • For the divisor functions {d_k}, we used a somewhat complicated-looking approximant {d_k^\sharp(n) = \sum_{m \leq X^{\frac{k-1}{5k}}} P_m(\log n)} for some explicit polynomials {P_m}, chosen so that {d_k^\sharp} and {d_k} have almost exactly the same sums along arithmetic progressions (see the paper for details).
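As a toy illustration of the approximant in the second item above, here is a minimal Python sketch (with artificially small parameters, in particular a hypothetical cutoff {R=20} in place of the {R = \exp(\log^{1/10} X)} used in the paper) comparing the sum of {\Lambda} over a short interval with the corresponding sum of the Cramér-Granville approximant {\Lambda^\sharp}; at this scale one should only expect rough agreement.

```python
from math import gcd, log

X, H, R = 100_000, 5_000, 20       # toy parameters; the paper takes R = exp(log^{1/10} X)

# smallest-prime-factor sieve up to X + H
N = X + H
spf = list(range(N + 1))
for p in range(2, int(N ** 0.5) + 1):
    if spf[p] == p:
        for q in range(p * p, N + 1, p):
            if spf[q] == q:
                spf[q] = p

def von_mangoldt(n):
    # Lambda(n) = log p if n = p^k is a prime power, and 0 otherwise
    p = spf[n]
    while n % p == 0:
        n //= p
    return log(p) if n == 1 else 0.0

# Cramer-Granville approximant: Lambda^sharp(n) = (W/phi(W)) * 1_{(n,W)=1}
primes = [p for p in range(2, R) if spf[p] == p]
W, phi_W = 1, 1
for p in primes:
    W *= p
    phi_W *= p - 1

def von_mangoldt_sharp(n):
    return W / phi_W if gcd(n, W) == 1 else 0.0

s_true = sum(von_mangoldt(n) for n in range(X + 1, X + H + 1))
s_approx = sum(von_mangoldt_sharp(n) for n in range(X + 1, X + H + 1))
print(s_true, s_approx)            # both are roughly H = 5000 at this toy scale
```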

The objective is then to obtain bounds on sums such as (1) that improve upon the “trivial bound” that one can get with the triangle inequality and standard number theory bounds such as the Brun-Titchmarsh inequality. For {\mu} and {\Lambda}, the Siegel-Walfisz theorem suggests that it is reasonable to expect error terms that have “strongly logarithmic savings” in the sense that they gain a factor of {O_A(\log^{-A} X)} over the trivial bound for any {A>0}; for {d_k}, the Dirichlet hyperbola method suggests instead that one has “power savings” in that one should gain a factor of {X^{-c_k}} over the trivial bound for some {c_k>0}. In the case of the Möbius function {\mu}, there is an additional trick (introduced by Matomäki and Teräväinen) that allows one to lower the exponent {\theta} somewhat at the cost of only obtaining “weakly logarithmic savings” of shape {\log^{-c} X} for some small {c>0}.

Our main estimates on sums of the form (1) work in the following ranges:

  • For {\theta=5/8}, one can obtain strongly logarithmic savings on (1) for {f=\mu,\Lambda}, and power savings for {f=d_k}.
  • For {\theta=3/5}, one can obtain weakly logarithmic savings for {f = \mu, d_k}.
  • For {\theta=5/9}, one can obtain power savings for {f=d_3}.
  • For {\theta=1/3}, one can obtain power savings for {f=d_2}.

Conjecturally, one should be able to obtain power savings in all cases, and lower {\theta} down to zero, but the ranges of exponents and savings given here seem to be the limit of current methods unless one assumes additional hypotheses, such as GRH. The {\theta=5/8} result for correlation against Fourier phases {e(\alpha n)} was established previously by Zhan, and the {\theta=3/5} result for such phases and {f=\mu} was established previously by Matomäki and Teräväinen.

By combining these results with tools from additive combinatorics, one can obtain a number of applications:

  • Direct insertion of our bounds in the recent work of Kanigowski, Lemanczyk, and Radziwill on the prime number theorem for dynamical systems that are analytic skew products gives some improvements in the exponents there.
  • We can obtain a “short interval” version of a multiple ergodic theorem along primes established by Frantzikinakis-Host-Kra and Wooley-Ziegler, in which we average over intervals of the form {(X,X+H]} rather than {[1,X]}.
  • We can obtain a “short interval” version of the “linear equations in primes” asymptotics obtained by Ben Green, Tamar Ziegler, and myself in this sequence of papers, where the variables in these equations lie in short intervals {(X,X+H]} rather than long intervals such as {[1,X]}.

We now briefly discuss some of the ingredients of proof of our main results. The first step is standard, using combinatorial decompositions (based on the Heath-Brown identity and (for the {\theta=3/5} result) the Ramaré identity) to decompose {\mu(n), \Lambda(n), d_k(n)} into more tractable sums of the following types:

  • Type {I} sums, which are basically of the form {\sum_{m \leq A:m|n} \alpha(m)} for some weights {\alpha(m)} of controlled size and some cutoff {A} that is not too large;
  • Type {II} sums, which are basically of the form {\sum_{A_- \leq m \leq A_+:m|n} \alpha(m)\beta(n/m)} for some weights {\alpha(m)}, {\beta(n)} of controlled size and some cutoffs {A_-, A_+} that are not too close to {1} or to {X};
  • Type {I_2} sums, which are basically of the form {\sum_{m \leq A:m|n} \alpha(m) d_2(n/m)} for some weights {\alpha(m)} of controlled size and some cutoff {A} that is not too large.

The precise ranges of the cutoffs {A, A_-, A_+} depend on the choice of {\theta}; our methods fail once these cutoffs pass a certain threshold, and this is the reason for the exponents {\theta} being what they are in our main results.

The Type {I} sums involving nilsequences can be treated by methods similar to those in this previous paper of Ben Green and myself; the main innovations are in the treatment of the Type {II} and Type {I_2} sums.

For the Type {II} sums, one can split into the “abelian” case in which (after some Fourier decomposition) the nilsequence {F(g(n)\Gamma)} is basically of the form {e(P(n))}, and the “non-abelian” case in which {G} is non-abelian and {F} exhibits non-trivial oscillation in a central direction. In the abelian case we can adapt arguments of Matomäki and Shao, which use Cauchy-Schwarz and the equidistribution properties of polynomials to obtain good bounds unless {e(P(n))} is “major arc” in the sense that it resembles (or “pretends to be”) {\chi(n) n^{it}} for some Dirichlet character {\chi} and some frequency {t}, but in this case one can use classical multiplicative methods to control the correlation. It turns out that the non-abelian case can be treated similarly. After applying Cauchy-Schwarz, one ends up analyzing the equidistribution of the four-variable polynomial sequence

\displaystyle  (n,m,n',m') \mapsto (g(nm)\Gamma, g(n'm)\Gamma, g(nm') \Gamma, g(n'm')\Gamma)

as {n,m,n',m'} range in various dyadic intervals. Using the known multidimensional equidistribution theory of polynomial maps in nilmanifolds, one can eventually show in the non-abelian case that this sequence either has enough equidistribution to give cancellation, or else the nilsequence involved can be replaced with one from a lower dimensional nilmanifold, in which case one can apply an induction hypothesis.

For the Type {I_2} sum, a model sum to study is

\displaystyle  \sum_{X < n \leq X+H} d_2(n) e(\alpha n)

which one can expand as

\displaystyle  \sum_{n,m: X < nm \leq X+H} e(\alpha nm).

We experimented with a number of ways to treat this type of sum (including automorphic form methods, or methods based on the Voronoi formula or van der Corput’s inequality), but somewhat to our surprise, the most efficient approach was an elementary one, in which one uses the Dirichlet approximation theorem to decompose the hyperbolic region {\{ (n,m) \in {\bf N}^2: X < nm \leq X+H \}} into a number of arithmetic progressions, and then uses equidistribution theory to establish cancellation of sequences such as {e(\alpha nm)} on the majority of these progressions. As it turns out, this strategy works well in the regime {H > X^{1/3+\varepsilon}} unless the nilsequence involved is “major arc”, but the latter case is treatable by existing methods as discussed previously; this is why the {\theta} exponent for our {d_2} result can be as low as {1/3}.
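As a quick sanity check of this expansion (and nothing more), here is a short Python sketch with toy values of {X,H} and an arbitrary sample frequency {\alpha}, evaluating the model sum both directly and over the hyperbolic region:

```python
import numpy as np

def e(t):                            # e(t) = exp(2*pi*i*t)
    return np.exp(2j * np.pi * t)

X, H = 10_000, 500                   # toy parameters
alpha = np.sqrt(2)                   # an arbitrary sample frequency

# divisor function d_2(n) on [1, X+H] via a simple sieve
d2 = np.zeros(X + H + 1, dtype=int)
for m in range(1, X + H + 1):
    d2[m::m] += 1

direct = sum(d2[n] * e(alpha * n) for n in range(X + 1, X + H + 1))

# the same sum, expanded over the hyperbolic region {(n, m): X < n*m <= X+H}
hyperbolic = 0
for n in range(1, X + H + 1):
    for m in range(X // n + 1, (X + H) // n + 1):
        hyperbolic += e(alpha * (n * m))

print(abs(direct - hyperbolic))      # prints a value near zero: the two expressions agree
```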

In a sequel to this paper (currently in preparation), we will obtain analogous results for almost all intervals {(x,x+H]} with {x} in the range {[X,2X]}, in which we will be able to lower {\theta} all the way to {0}.

Kaisa Matomäki, Maksym Radziwill, Joni Teräväinen, Tamar Ziegler and I have uploaded to the arXiv our paper Higher uniformity of bounded multiplicative functions in short intervals on average. This paper (which originated from a working group at an AIM workshop on Sarnak’s conjecture) focuses on the local Fourier uniformity conjecture for bounded multiplicative functions such as the Liouville function {\lambda}. One form of this conjecture is the assertion that

\displaystyle  \int_0^X \| \lambda \|_{U^k([x,x+H])}\ dx = o(X) \ \ \ \ \ (1)

as {X \rightarrow \infty} for any fixed {k \geq 0} and any {H = H(X) \leq X} that goes to infinity as {X \rightarrow \infty}, where {U^k([x,x+H])} is the (normalized) Gowers uniformity norm. Among other things this conjecture implies (the logarithmically averaged versions of) the Chowla and Sarnak conjectures for the Liouville function (or the Möbius function); see this previous blog post.

The conjecture gets more difficult as {k} increases, and also becomes more difficult the more slowly {H} grows with {X}. The {k=0} conjecture is equivalent to the assertion

\displaystyle  \int_0^X |\sum_{x \leq n \leq x+H} \lambda(n)| \ dx = o(HX)

which was proven (for arbitrarily slowly growing {H}) in a landmark paper of Matomäki and Radziwill, discussed for instance in this blog post.
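To get a feel for this statement at a (very) small scale, here is a brief Python sketch, with toy parameters chosen purely for illustration, which computes the average of the short interval sums {|\sum_{x < n \leq x+H} \lambda(n)|} and observes that it is far smaller than {H}:

```python
import numpy as np

N, H = 200_000, 100                          # toy parameters

# Liouville function lambda(n) = (-1)^Omega(n) via a simple sieve for Omega
omega = np.zeros(N + 1, dtype=np.int64)
mark = np.arange(N + 1)
for p in range(2, N + 1):
    if mark[p] == p:                         # p has no smaller prime factor, so p is prime
        pk = p
        while pk <= N:
            omega[pk::pk] += 1               # each prime power p^k dividing n contributes 1 to Omega(n)
            pk *= p
        mark[p::p] = 1
lam = (-1) ** (omega % 2)

# |sum over the window (x, x+H]| for all x, via partial sums
csum = np.zeros(N + 1, dtype=np.int64)
csum[1:] = np.cumsum(lam[1:])
window = np.abs(csum[H:] - csum[:N - H + 1])
print(H, window.mean())                      # the average is far smaller than H (roughly sqrt(H))
```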

For {k=1}, the conjecture is equivalent to the assertion

\displaystyle  \int_0^X \sup_\alpha |\sum_{x \leq n \leq x+H} \lambda(n) e(-\alpha n)| \ dx = o(HX). \ \ \ \ \ (2)

This remains open for sufficiently slowly growing {H} (and it would be a major breakthrough in particular if one could obtain this bound for {H} as small as {\log^\varepsilon X} for any fixed {\varepsilon>0}, particularly if applicable to more general bounded multiplicative functions than {\lambda}, as this would have new implications for a generalization of the Chowla conjecture known as the Elliott conjecture). Recently, Kaisa, Maks and myself were able to establish this conjecture in the range {H \geq X^\varepsilon} (in fact we have since worked out in the current paper that we can get {H} as small as {\exp(\log^{5/8+\varepsilon} X)}). In our current paper we establish the Fourier uniformity conjecture for higher {k} for the same range of {H}. This in particular implies local orthogonality to polynomial phases,

\displaystyle  \int_0^X \sup_{P \in \mathrm{Poly}_{\leq k-1}({\bf R} \rightarrow {\bf R})} |\sum_{x \leq n \leq x+H} \lambda(n) e(-P(n))| \ dx = o(HX) \ \ \ \ \ (3)

where {\mathrm{Poly}_{\leq k-1}({\bf R} \rightarrow {\bf R})} denotes the polynomials of degree at most {k-1}, but the full conjecture is a bit stronger than this, establishing the more general statement

\displaystyle  \int_0^X \sup_{g \in \mathrm{Poly}({\bf R} \rightarrow G)} |\sum_{x \leq n \leq x+H} \lambda(n) \overline{F}(g(n) \Gamma)| \ dx = o(HX) \ \ \ \ \ (4)

for any degree {k} filtered nilmanifold {G/\Gamma} and Lipschitz function {F: G/\Gamma \rightarrow {\bf C}}, where {g} now ranges over polynomial maps from {{\bf R}} to {G}. The method of proof follows the same general strategy as in the previous paper with Kaisa and Maks. (The equivalence of (4) and (1) follows from the inverse conjecture for the Gowers norms, proven in this paper.) We quickly sketch first the proof of (3), using very informal language to avoid many technicalities regarding the precise quantitative form of various estimates. If the estimate (3) fails, then we have the correlation estimate

\displaystyle  |\sum_{x \leq n \leq x+H} \lambda(n) e(-P_x(n))| \gg H

for many {x \sim X} and some polynomial {P_x} depending on {x}. The difficulty here is to understand how {P_x} can depend on {x}. We write the above correlation estimate more suggestively as

\displaystyle  \lambda(n) \sim_{[x,x+H]} e(P_x(n)).

Because of the multiplicativity {\lambda(np) = -\lambda(n)} at small primes {p}, one expects to have a relation of the form

\displaystyle  e(P_{x'}(p'n)) \sim_{[x/p,x/p+H/p]} e(P_x(pn)) \ \ \ \ \ (5)

for many {x,x'} for which {x/p \approx x'/p'} for some small primes {p,p'}. (This can be formalised using an inequality of Elliott related to the Turan-Kubilius theorem.) This gives a relationship between {P_x} and {P_{x'}} for “edges” {x,x'} in a rather sparse “graph” connecting the elements of say {[X/2,X]}. Using some graph theory one can locate some non-trivial “cycles” in this graph that eventually lead (in conjunction with a certain technical but important “Chinese remainder theorem” step to modify the {P_x} to eliminate a rather serious “aliasing” issue that was already discussed in this previous post) to functional equations of the form

\displaystyle  P_x(a_x \cdot) \approx P_x(b_x \cdot)

for some large and close (but not identical) integers {a_x,b_x}, where {\approx} should be viewed as a first approximation (ignoring a certain “profinite” or “major arc” term for simplicity) as “differing by a slowly varying polynomial” and the polynomials {P_x} should now be viewed as taking values on the reals rather than the integers. This functional equation can be solved to obtain a relation of the form

\displaystyle  P_x(t) \approx T_x \log t

for some real number {T_x} of polynomial size, and with further analysis of the relation (5) one can make {T_x} basically independent of {x}. This simplifies (3) to something like

\displaystyle  \int_0^X |\sum_{x \leq n \leq x+H} \lambda(n) n^{-iT}| \ dx = o(HX)

and this is now of a form that can be treated by the theorem of Matomäki and Radziwill (because {n \mapsto \lambda(n) n^{-iT}} is a bounded multiplicative function). (Actually because of the profinite term mentioned previously, one also has to insert a Dirichlet character of bounded conductor into this latter conclusion, but we will ignore this technicality.)

Now we apply the same strategy to (4). For abelian {G} the claim follows easily from (3), so we focus on the non-abelian case. One now has a polynomial sequence {g_x \in \mathrm{Poly}({\bf R} \rightarrow G)} attached to many {x \sim X}, and after a somewhat complicated adaptation of the above arguments one again ends up with an approximate functional equation

\displaystyle  g_x(a_x \cdot) \Gamma \approx g_x(b_x \cdot) \Gamma \ \ \ \ \ (6)

where the relation {\approx} is rather technical and will not be detailed here. A new difficulty arises in that there are some unwanted solutions to this equation, such as

\displaystyle  g_x(t) = \gamma^{\frac{\log(a_x t)}{\log(a_x/b_x)}}

for some {\gamma \in \Gamma}, which do not necessarily lead to multiplicative characters like {n^{-iT}} as in the polynomial case, but instead to some unfriendly looking “generalized multiplicative characters” (think of {e(\lfloor \alpha \log n \rfloor \beta \log n)} as a rough caricature). To avoid this problem, we rework the graph theory portion of the argument to produce not just one functional equation of the form (6) for each {x}, but many, leading to dilation invariances

\displaystyle  g_x((1+\theta) t) \Gamma \approx g_x(t) \Gamma

for a “dense” set of {\theta}. From a certain amount of Lie algebra theory (ultimately arising from an understanding of the behaviour of the exponential map on nilpotent matrices, and exploiting the hypothesis that {G} is non-abelian) one can conclude that (after some initial preparations to avoid degenerate cases) {g_x(t)} must behave like {\gamma_x^{\log t}} for some central element {\gamma_x} of {G}. This eventually brings one back to the multiplicative characters {n^{-iT}} that arose in the polynomial case, and the arguments now proceed as before.

We give two applications of this higher order Fourier uniformity. One regards the growth of the number

\displaystyle  s(k) := |\{ (\lambda(n+1),\dots,\lambda(n+k)): n \in {\bf N} \}|

of length {k} sign patterns in the Liouville function. The Chowla conjecture implies that {s(k) = 2^k}, but even the weaker conjecture of Sarnak that {s(k) \gg (1+\varepsilon)^k} for some {\varepsilon>0} remains open. Until recently, the best asymptotic lower bound on {s(k)} was {s(k) \gg k^2}, due to McNamara; with our result, we can now show {s(k) \gg_A k^A} for any {A} (in fact we can get {s(k) \gg_\varepsilon \exp(\log^{8/5-\varepsilon} k)} for any {\varepsilon>0}). The idea is to repeat the now-standard argument to exploit multiplicativity at small primes to deduce Chowla-type conjectures from Fourier uniformity conjectures, noting that the Chowla conjecture would give all the sign patterns one could hope for. The usual argument here uses the “entropy decrement argument” to eliminate a certain error term (involving the large but mean zero factor {p 1_{p|n}-1}). However the observation is that if there are extremely few sign patterns of length {k}, then the entropy decrement argument is unnecessary (there isn’t much entropy to begin with), and a more low-tech moment method argument (similar to the derivation of Chowla’s conjecture from Sarnak’s conjecture, as discussed for instance in this post) gives enough of Chowla’s conjecture to produce plenty of length {k} sign patterns. If there are not extremely few sign patterns of length {k} then we are done anyway. One quirk of this argument is that the sign patterns it produces may only appear exactly once; in contrast with preceding arguments, we were not able to produce a large number of sign patterns that each occur infinitely often.
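As an aside, one can easily probe {s(k)} empirically at small scales; the following Python sketch (with a small cutoff chosen purely for illustration, and of course saying nothing about the asymptotic behaviour) counts the distinct length {k} sign patterns of {\lambda} that occur up to that cutoff:

```python
import numpy as np

N = 100_000                                  # toy cutoff

# Liouville function lambda(n) = (-1)^Omega(n) via a simple sieve for Omega
omega = np.zeros(N + 1, dtype=np.int64)
mark = np.arange(N + 1)
for p in range(2, N + 1):
    if mark[p] == p:                         # p is prime
        pk = p
        while pk <= N:
            omega[pk::pk] += 1
            pk *= p
        mark[p::p] = 1
lam = (-1) ** (omega % 2)

for k in range(1, 8):
    patterns = {tuple(lam[n + 1:n + k + 1]) for n in range(N - k)}
    print(k, len(patterns), 2 ** k)          # observed count versus the trivial bound 2^k
```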

The second application is to obtain cancellation for various polynomial averages involving the Liouville function {\lambda} or von Mangoldt function {\Lambda}, such as

\displaystyle  {\bf E}_{n \leq X} {\bf E}_{m \leq X^{1/d}} \lambda(n+P_1(m)) \lambda(n+P_2(m)) \dots \lambda(n+P_k(m))

or

\displaystyle  {\bf E}_{n \leq X} {\bf E}_{m \leq X^{1/d}} \lambda(n+P_1(m)) \Lambda(n+P_2(m)) \dots \Lambda(n+P_k(m))

where {P_1,\dots,P_k} are polynomials of degree at most {d}, no two of which differ by a constant (the latter is essential to avoid having to establish the Chowla or Hardy-Littlewood conjectures, which of course remain open). Results of this type were previously obtained by Tamar Ziegler and myself in the “true complexity zero” case when the polynomials {P} had distinct degrees, in which one could use the {k=0} theory of Matomäki and Radziwill; now that higher {k} is available at the scale {H=X^{1/d}}, we can remove this restriction.

A sequence {a: {\bf Z} \rightarrow {\bf C}} of complex numbers is said to be quasiperiodic if it is of the form

\displaystyle a(n) = F( \alpha_1 n \hbox{ mod } 1, \dots, \alpha_k n \hbox{ mod } 1)

for some real numbers {\alpha_1,\dots,\alpha_k} and continuous function {F: ({\bf R}/{\bf Z})^k \rightarrow {\bf C}}. For instance, linear phases such as {n \mapsto e(\alpha n + \beta)} (where {e(\theta) := e^{2\pi i \theta}}) are examples of quasiperiodic sequences; the top order coefficient {\alpha} (modulo {1}) can be viewed as a “frequency” of the integers, and an element of the Pontryagin dual {\hat {\bf Z} \equiv {\bf R}/{\bf Z}} of the integers. Any periodic sequence is also quasiperiodic (taking {k=1} and {\alpha_1} to be the reciprocal of the period). A sequence is said to be almost periodic if it is the uniform limit of quasiperiodic sequences. For instance any Fourier series of the form

\displaystyle a(n) = \sum_{j=1}^\infty c_j e(\alpha_j n)

with {\alpha_1,\alpha_2,\dots} real numbers and {c_1,c_2,\dots} an absolutely summable sequence of complex coefficients, will be almost periodic.
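To make these definitions concrete, here is a small Python sketch (with arbitrarily chosen sample frequencies) of a quasiperiodic sequence written both in the form {F(\alpha_1 n \hbox{ mod } 1, \alpha_2 n \hbox{ mod } 1)} and as a (finite) Fourier series of the above type:

```python
import numpy as np

def e(t):                                     # e(t) = exp(2*pi*i*t)
    return np.exp(2j * np.pi * t)

alpha1, alpha2 = np.sqrt(2), np.sqrt(3)       # sample frequencies (arbitrary choices)

def F(x, y):                                  # continuous and Z^2-periodic
    return np.cos(2 * np.pi * x) * e(y)

n = np.arange(50)
quasi = F(alpha1 * n % 1, alpha2 * n % 1)

# the same sequence as a two-term Fourier series sum_j c_j e(alpha_j n),
# using cos(2*pi*x) e(y) = (e(x + y) + e(-x + y)) / 2
fourier = 0.5 * (e((alpha1 + alpha2) * n) + e((alpha2 - alpha1) * n))
print(np.max(np.abs(quasi - fourier)))        # negligibly small: the two expressions agree
```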

These sequences arise in various “complexity one” problems in arithmetic combinatorics and ergodic theory. For instance, if {(X, \mu, T)} is a measure-preserving system – a probability space {(X,\mu)} equipped with a measure-preserving shift, and {f_1,f_2 \in L^\infty(X)} are bounded measurable functions, then the correlation sequence

\displaystyle a(n) := \int_X f_1(x) f_2(T^n x)\ d\mu(x)

can be shown to be an almost periodic sequence, plus an error term {b_n} which is “null” in the sense that it has vanishing uniform density:

\displaystyle \sup_N \frac{1}{M} \sum_{n=N+1}^{N+M} |b_n| \rightarrow 0 \hbox{ as } M \rightarrow \infty.

This can be established in a number of ways, for instance by writing {a(n)} as the Fourier coefficients of the spectral measure of the shift {T} with respect to the functions {f_1,f_2}, and then decomposing that measure into pure point and continuous components.

In the last two decades or so, it has become clear that there are natural higher order versions of these concepts, in which linear polynomials such as {\alpha n + \beta} are replaced with higher degree counterparts. The most obvious candidates for these counterparts would be the polynomials {\alpha_d n^d + \dots + \alpha_0}, but this turns out to not be a complete set of higher degree objects needed for the theory. Instead, the higher order versions of quasiperiodic and almost periodic sequences are now known as basic nilsequences and nilsequences respectively, while the higher order version of a linear phase is a nilcharacter; each nilcharacter then has a symbol that is a higher order generalisation of the concept of a frequency (and the collection of all symbols forms a group that can be viewed as a higher order version of the Pontryagin dual of {{\bf Z}}). The theory of these objects is spread out in the literature across a number of papers; in particular, the theory of nilcharacters is mostly developed in Appendix E of this 116-page paper of Ben Green, Tamar Ziegler, and myself, and is furthermore written using nonstandard analysis and treating the more general setting of higher dimensional sequences. I therefore decided to rewrite some of that material in this blog post, in the simpler context of the qualitative asymptotic theory of one-dimensional nilsequences and nilcharacters rather than the quantitative single-scale theory that is needed for combinatorial applications (and which necessitated the use of nonstandard analysis in the previous paper).

For technical reasons (having to do with the non-trivial topological structure on nilmanifolds), it will be convenient to work with vector-valued sequences, that take values in a finite-dimensional complex vector space {{\bf C}^m} rather than in {{\bf C}}. By doing so, the space of sequences is now, technically, no longer a ring, as the operations of addition and multiplication on vector-valued sequences become ill-defined. However, we can still take complex conjugates of a sequence, and add sequences taking values in the same vector space {{\bf C}^m}, and for sequences taking values in different vector spaces {{\bf C}^m}, {{\bf C}^{m'}}, we may utilise the tensor product {\otimes: {\bf C}^m \times {\bf C}^{m'} \rightarrow {\bf C}^{mm'}}, which we will normalise by defining

\displaystyle (z_1,\dots,z_m) \otimes (w_1,\dots,w_{m'}) = (z_1 w_1, \dots, z_1 w_{m'}, \dots, z_m w_1, \dots, z_m w_{m'} ).

This product is associative and bilinear, and also commutative up to permutation of the indices. It also interacts well with the Hermitian norm

\displaystyle \| (z_1,\dots,z_m) \| := \sqrt{|z_1|^2 + \dots + |z_m|^2}

since we have {\|z \otimes w\| = \|z\| \|w\|}.
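For instance, here is a throwaway numerical check of this normalisation and of the identity {\|z \otimes w\| = \|z\| \|w\|} (random test vectors, purely illustrative):

```python
import numpy as np

def tensor(z, w):
    # (z_1,...,z_m) tensor (w_1,...,w_{m'}) = (z_1 w_1, ..., z_1 w_{m'}, ..., z_m w_{m'})
    return np.outer(z, w).reshape(-1)

rng = np.random.default_rng(0)
z = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)

lhs = np.linalg.norm(tensor(z, w))
rhs = np.linalg.norm(z) * np.linalg.norm(w)
print(abs(lhs - rhs))   # essentially zero, consistent with ||z tensor w|| = ||z|| ||w||
```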

The traditional definition of a basic nilsequence (as defined for instance by Bergelson, Host, and Kra) is as follows:

Definition 1 (Basic nilsequence, first definition) A nilmanifold of step at most {d} is a quotient {G/\Gamma}, where {G} is a connected, simply connected nilpotent Lie group of step at most {d} (thus, all {d+1}-fold commutators vanish) and {\Gamma} is a discrete cocompact lattice in {G}. A basic nilsequence of degree at most {d} is a sequence of the form {n \mapsto F(g^n g_0 \Gamma)}, where {g_0 \Gamma \in G/\Gamma}, {g \in G}, and {F: G/\Gamma \rightarrow {\bf C}^m} is a continuous function.

For instance, it is not difficult using this definition to show that a sequence is a basic nilsequence of degree at most {1} if and only if it is quasiperiodic. The requirement that {G} be simply connected can be easily removed if desired by passing to a universal cover, but it is technically convenient to assume it (among other things, it allows for a well-defined logarithm map that obeys the Baker-Campbell-Hausdorff formula). When one wishes to perform a more quantitative analysis of nilsequences (particularly when working on a “single scale”, such as on a single long interval {\{ N+1, \dots, N+M\}}), it is common to impose additional regularity conditions on the function {F}, such as Lipschitz continuity or smoothness, but ordinary continuity will suffice for the qualitative discussion in this blog post.

Nowadays, and particularly when one needs to understand the “single-scale” equidistribution properties of nilsequences, it is more convenient (as is for instance done in this ICM paper of Green) to use an alternate definition of a nilsequence as follows.

Definition 2 Let {d \geq 0}. A filtered group of degree at most {d} is a group {G} together with a sequence {G_\bullet = (G_0,G_1,G_2,\dots)} of subgroups {G \geq G_0 \geq G_1 \geq \dots} with {G_{d+1}=\{\hbox{id}\}} and {[G_i,G_j] \subset G_{i+j}} for {i,j \geq 0}. A polynomial sequence {g: {\bf Z} \rightarrow G} into a filtered group is a function such that {\partial_{h_i} \dots \partial_{h_1} g(n) \in G_i} for all {i \geq 0} and {n,h_1,\dots,h_i \in{\bf Z}}, where {\partial_h g(n) := g(n+h) g(n)^{-1}} is the difference operator. A filtered nilmanifold of degree at most {d} is a quotient {G/\Gamma}, where {G} is a filtered group of degree at most {d} such that {G} and all of the subgroups {G_i} are connected, simply connected nilpotent filtered Lie groups, and {\Gamma} is a discrete cocompact subgroup of {G} such that {\Gamma_i := \Gamma \cap G_i} is a discrete cocompact subgroup of {G_i}. A basic nilsequence of degree at most {d} is a sequence of the form {n \mapsto F(g(n))}, where {g: {\bf Z} \rightarrow G} is a polynomial sequence, {G/\Gamma} is a filtered nilmanifold of degree at most {d}, and {F: G \rightarrow {\bf C}^m} is a continuous function which is {\Gamma}-automorphic, in the sense that {F(g \gamma) = F(g)} for all {g \in G} and {\gamma \in \Gamma}.

One can easily identify a {\Gamma}-automorphic function on {G} with a function on {G/\Gamma}, but there are some (very minor) advantages to working on the group {G} instead of the quotient {G/\Gamma}, as it becomes slightly easier to modify the automorphy group {\Gamma} when needed. (But because the action of {\Gamma} on {G} is free, one can pass from {\Gamma}-automorphic functions on {G} to functions on {G/\Gamma} with very little difficulty.) The main reason to work with polynomial sequences {n \mapsto g(n)} rather than geometric progressions {n \mapsto g^n g_0 \Gamma} is that they form a group, a fact essentially established by Lazard and Leibman; see Corollary B.4 of this paper of Green, Ziegler, and myself for a proof in the filtered group setting.

It is easy to see that any sequence that is a basic nilsequence of degree at most {d} in the sense of the first definition, is also a basic nilsequence of degree at most {d} in the second definition, since a nilmanifold of degree at most {d} can be filtered using the lower central series, and any linear sequence {n \mapsto g^n g_0} will be a polynomial sequence with respect to that filtration. The converse implication is a little trickier, but still not too hard to show: see Appendix C of this paper of Ben Green, Tamar Ziegler, and myself. There are two key examples of basic nilsequences to keep in mind. The first are the polynomially quasiperiodic sequences

\displaystyle a(n) = F( P_1(n), \dots, P_k(n) ),

where {P_1,\dots,P_k: {\bf Z} \rightarrow {\bf R}} are polynomials of degree at most {d}, and {F: {\bf R}^k \rightarrow {\bf C}^m} is a {{\bf Z}^k}-automorphic (i.e., {{\bf Z}^k}-periodic) continuous function. The map {P: {\bf Z} \rightarrow {\bf R}^k} defined by {P(n) := (P_1(n),\dots,P_k(n))} is a polynomial map of degree at most {d}, if one filters {{\bf R}^k} by defining {({\bf R}^k)_i} to equal {{\bf R}^k} when {i \leq d}, and {\{0\}} for {i > d}. The torus {{\bf R}^k/{\bf Z}^k} then becomes a filtered nilmanifold of degree at most {d}, and {a(n)} is thus a basic nilsequence of degree at most {d} as per the second definition. It is also possible to explicitly describe {a(n)} as a basic nilsequence of degree at most {d} as per the first definition, for instance (in the {k=1} case) by taking {G} to be the space of upper triangular unipotent {d+1 \times d+1} real matrices, and {\Gamma} the subgroup with integer coefficients; we leave the details to the interested reader.
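As a quick illustration of the polynomial sequence condition of Definition 2 in this abelian setting (where the difference operator {\partial_h} reduces to {g(n+h) - g(n)}), here is a throwaway Python check that {d+1} successive difference operators annihilate a degree {d} monomial; the shifts and coefficient below are arbitrary choices:

```python
from fractions import Fraction

def diff(f, h):
    # additive difference operator on sequences into (R, +): (d_h f)(n) = f(n + h) - f(n)
    return lambda n: f(n + h) - f(n)

d = 4
alpha = Fraction(3, 7)                     # arbitrary coefficient

def f(n):                                  # a degree-d polynomial map into R
    return alpha * n ** d

g = f
for h in [1, 2, 3, 5, 8]:                  # apply d + 1 = 5 difference operators
    g = diff(g, h)

print([g(n) for n in range(5)])            # all zero, as the filtration (R)_i = {0} for i > d requires
```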

The other key example is a sequence of the form

\displaystyle a(n) = F( \alpha n, \{ \alpha n \} \beta n )

where {\alpha,\beta} are real numbers, {\{ \alpha n \} = \alpha n - \lfloor \alpha n \rfloor} denotes the fractional part of {\alpha n}, and {F: {\bf R}^2 \rightarrow {\bf C}^m} is a {{\bf Z}^2}-automorphic continuous function that vanishes in a neighbourhood of {{\bf Z} \times {\bf R}}. To describe this as a nilsequence, we use the connected, simply connected degree {2} nilpotent Heisenberg group

\displaystyle G := \begin{pmatrix} 1 & {\bf R} & {\bf R} \\ 0 & 1 & {\bf R} \\ 0 & 0 & 1 \end{pmatrix}

with the lower central series filtration {G_0=G_1=G}, {G_2= [G,G] = \begin{pmatrix} 1 &0 & {\bf R} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}}, and {G_i = \{ \mathrm{id} \}} for {i > 2}, {\Gamma} to be the discrete cocompact subgroup

\displaystyle \Gamma := \begin{pmatrix} 1 & {\bf Z} & {\bf Z} \\ 0 & 1 & {\bf Z} \\ 0 & 0 & 1 \end{pmatrix},

{g: {\bf Z} \rightarrow G} to be the polynomial sequence

\displaystyle g(n) := \begin{pmatrix} 1 & \beta n & \alpha \beta n^2 \\ 0 & 1 & \alpha n \\ 0 & 0 & 1 \end{pmatrix}

and {\tilde F: G \rightarrow {\bf C}^m} to be the {\Gamma}-automorphic function

\displaystyle \tilde F( \begin{pmatrix} 1 & x & y \\ 0 & 1 & z \\ 0 & 0 & 1 \end{pmatrix} ) = F( \{ z \}, y - \lfloor z \rfloor x );

one easily verifies that this function is indeed {\Gamma}-automorphic, and it is continuous thanks to the vanishing properties of {F}. Also we have {a(n) = \tilde F(g(n))}, so {a} is a basic nilsequence of degree at most {2}. One can concoct similar examples with {\{ \alpha n \} \beta n} replaced by other “bracket polynomials” of {n}; for instance

\displaystyle a(n) = F( \alpha n, \{ \alpha n - \frac{1}{2} \} \beta n )

will be a basic nilsequence if {F} now vanishes in a neighbourhood of {(\frac{1}{2}+{\bf Z}) \times {\bf R}} rather than {{\bf Z} \times {\bf R}}. See this paper of Bergelson and Leibman for more discussion of bracket polynomials (also known as generalised polynomials) and their relationship to nilsequences.
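To see the Heisenberg construction above in action, here is a short Python sketch (with sample frequencies {\alpha = \sqrt{2}}, {\beta = \sqrt{3}} and a simple choice of {F} that vanishes only on {{\bf Z} \times {\bf R}} rather than on a neighbourhood, all chosen purely for illustration) confirming numerically that {\tilde F(g(n))} agrees with the bracket expression {F(\alpha n, \{\alpha n\} \beta n)}:

```python
import numpy as np

alpha, beta = np.sqrt(2), np.sqrt(3)          # sample frequencies (arbitrary)

def e(t):                                      # e(t) = exp(2*pi*i*t)
    return np.exp(2j * np.pi * t)

def frac(t):                                   # fractional part {t}
    return t - np.floor(t)

def F(x, y):
    # Z^2-automorphic in (x, y); the bump in x vanishes on Z, standing in for a
    # function vanishing on a neighbourhood of Z x R
    return np.sin(np.pi * frac(x)) ** 2 * e(y)

def a_bracket(n):                              # the bracket form F(alpha n, {alpha n} beta n)
    return F(alpha * n, frac(alpha * n) * beta * n)

def a_nil(n):                                  # the same sequence via the Heisenberg orbit g(n)
    x, z, y = beta * n, alpha * n, alpha * beta * n * n
    return F(frac(z), y - np.floor(z) * x)     # F~ evaluated at the matrix g(n)

ns = np.arange(1, 2000)
print(np.max(np.abs(a_bracket(ns) - a_nil(ns))))   # small (floating-point rounding only)
```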

A nilsequence of degree at most {d} is defined to be a sequence that is the uniform limit of basic nilsequences of degree at most {d}. Thus for instance a sequence is a nilsequence of degree at most {1} if and only if it is almost periodic, while a sequence is a nilsequence of degree at most {0} if and only if it is constant. Such objects arise in higher order recurrence: for instance, if {h_0,\dots,h_d} are integers, {(X,\mu,T)} is a measure-preserving system, and {f_0,\dots,f_d \in L^\infty(X)}, then it was shown by Leibman that the sequence

\displaystyle n \mapsto \int_X f_0(T^{h_0 n} x) \dots f_d(T^{h_d n} x)\ d\mu(x)

is equal to a nilsequence of degree at most {d}, plus a null sequence. (The special case when the measure-preserving system was ergodic and {h_i = i} for {i=0,\dots,d} was previously established by Bergelson, Host, and Kra.) Nilsequences also arise in the inverse theory of the Gowers uniformity norms, as discussed for instance in this previous post.

It is easy to see that a sequence {a: {\bf Z} \rightarrow {\bf C}^m} is a basic nilsequence of degree at most {d} if and only if each of its {m} components are. The scalar basic nilsequences {a: {\bf Z} \rightarrow {\bf C}} of degree {d} are easily seen to form a {*}-algebra (that is to say, they are a complex vector space closed under pointwise multiplication and complex conjugation), which implies similarly that vector-valued basic nilsequences {a: {\bf Z} \rightarrow {\bf C}^m} of degree at most {d} form a complex vector space closed under complex conjugation for each {m}, and that the tensor product of any two basic nilsequences of degree at most {d} is another basic nilsequence of degree at most {d}. Similarly with “basic nilsequence” replaced by “nilsequence” throughout.

Now we turn to the notion of a nilcharacter, as defined in this paper of Ben Green, Tamar Ziegler, and myself:

Definition 3 (Nilcharacters) Let {d \geq 1}. A sub-nilcharacter of degree {d} is a basic nilsequence {\chi: n \mapsto F(g(n))} of degree at most {d}, such that {F} obeys the additional modulation property

\displaystyle F( g_d g ) = e( \xi \cdot g_d ) F(g) \ \ \ \ \ (1)

for all {g \in G} and {g_d \in G_d}, where {\xi: G_d \rightarrow {\bf R}} is a continuous homomorphism {g_d \mapsto \xi \cdot g_d}. (Note from (1) and {\Gamma}-automorphy that unless {F} vanishes identically, {\xi} must map {\Gamma_d} to {{\bf Z}}, thus without loss of generality one can view {\xi} as an element of the Pontryagin dual of the torus {G_d / \Gamma_d}.) If in addition one has {\|F(g)\|=1} for all {g \in G}, we call {\chi} a nilcharacter of degree {d \geq 1}.

In the degree one case {d=1}, the only sub-nilcharacters are of the form {\chi(n) = c e(\alpha n)} for some vector {c \in {\bf C}^m} and {\alpha \in {\bf R}}, and this is a nilcharacter if {c} is a unit vector. Similarly, in higher degree, any sequence of the form {\chi(n) = c e(P(n))}, where {c \in {\bf C}^m} is a vector and {P: {\bf Z} \rightarrow {\bf R}} is a polynomial of degree at most {d}, is a sub-nilcharacter of degree {d}, and a nilcharacter if {c} is a unit vector. A nilsequence of degree at most {d-1} is automatically a sub-nilcharacter of degree {d}, and a nilcharacter if it is of magnitude {1}. A further example of a nilcharacter is provided by the two-dimensional sequence {\chi: {\bf Z} \rightarrow {\bf C}^2} defined by

\displaystyle \chi(n) := ( F_0( \alpha n ) e( \{ \alpha n \} \beta n ), F_{1/2}( \alpha n ) e( \{ \alpha n - \frac{1}{2} \} \beta n ) ) \ \ \ \ \ (2)

where {F_0, F_{1/2}: {\bf R} \rightarrow {\bf C}} are continuous, {{\bf Z}}-automorphic functions that vanish on a neighbourhood of {{\bf Z}} and {\frac{1}{2}+{\bf Z}} respectively, and which form a partition of unity in the sense that

\displaystyle |F_0(x)|^2 + |F_{1/2}(x)|^2 = 1

for all {x \in {\bf R}}. Note that one needs both {F_0} and {F_{1/2}} to be not identically zero in order for all these conditions to be satisfied; it turns out (for topological reasons) that there is no scalar nilcharacter that is “equivalent” to this nilcharacter in a sense to be defined shortly. In some literature, one works exclusively with sub-nilcharacters rather than nilcharacters; however, the former space contains zero-divisors, which is a little annoying technically. Nevertheless, both nilcharacters and sub-nilcharacters generate the same set of “symbols”, as we shall see later.

We claim that every degree {d} sub-nilcharacter {f: {\bf Z} \rightarrow {\bf C}^m} can be expressed in the form {f = c \chi}, where {\chi: {\bf Z} \rightarrow {\bf C}^{m'}} is a degree {d} nilcharacter, and {c: {\bf C}^{m'} \rightarrow {\bf C}^m} is a linear transformation. Indeed, by scaling we may assume {f(n) = F(g(n))} where {|F| < 1} uniformly. Using partitions of unity, one can find further functions {F_1,\dots,F_m} also obeying (1) for the same character {\xi} such that {|F_1|^2 + \dots + |F_m|^2} is non-zero; by dividing out the {F_1,\dots,F_m} by the square root of this quantity, and then multiplying by {\sqrt{1-|F|^2}}, we may assume that

\displaystyle |F|^2 + |F_1|^2 + \dots + |F_m|^2 = 1,

and then

\displaystyle \chi(n) := (F(g(n)), F_1(g(n)), \dots, F_m(g(n)))

becomes a degree {d} nilcharacter that contains {f(n)} amongst its components, giving the claim.

As we shall show below, nilsequences can be approximated uniformly by linear combinations of nilcharacters, in much the same way that quasiperiodic or almost periodic sequences can be approximated uniformly by linear combinations of linear phases. In particular, nilcharacters can be used as “obstructions to uniformity” in the sense of the inverse theory of the Gowers uniformity norms.

The space of degree {d} nilcharacters forms a semigroup under tensor product, with the constant sequence {1} as the identity. One can upgrade this semigroup to an abelian group by quotienting nilcharacters out by equivalence:

Definition 4 Let {d \geq 1}. We say that two degree {d} nilcharacters {\chi: {\bf Z} \rightarrow {\bf C}^m}, {\chi': {\bf Z} \rightarrow {\bf C}^{m'}} are equivalent if {\chi \otimes \overline{\chi'}: {\bf Z} \rightarrow {\bf C}^{mm'}} is equal (as a sequence) to a basic nilsequence of degree at most {d-1}. (We will later show that this is indeed an equivalence relation.) The equivalence class {[\chi]_{\mathrm{Symb}^d({\bf Z})}} of such a nilcharacter will be called the symbol of that nilcharacter (in analogy to the symbol of a differential or pseudodifferential operator), and the collection of such symbols will be denoted {\mathrm{Symb}^d({\bf Z})}.

As we shall see below the fold, {\mathrm{Symb}^d({\bf Z})} has the structure of an abelian group, and enjoys some nice “symbol calculus” properties; also, one can view symbols as precisely describing the obstruction to equidistribution for nilsequences. For {d=1}, the group is isomorphic to the Pontryagin dual {\hat {\bf Z} = {\bf R}/{\bf Z}} of the integers, and {\mathrm{Symb}^d({\bf Z})} for {d > 1} should be viewed as higher order generalisations of this Pontryagin dual. In principle, this group can be explicitly described for all {d}, but the theory rapidly gets complicated as {d} increases (much as the classification of nilpotent Lie groups or Lie algebras of step {d} rapidly gets complicated even for medium-sized {d} such as {d=3} or {d=4}). We will give an explicit description of the {d=2} case here. There is however one nice (and non-trivial) feature of {\mathrm{Symb}^d({\bf Z})} for {d \geq 2} – it is not just an abelian group, but is in fact a vector space over the rationals {{\bf Q}}!


Note: this post is of a particularly technical nature, in particular presuming familiarity with nilsequences, nilsystems, characteristic factors, etc., and is primarily intended for experts.

As mentioned in the previous post, Ben Green, Tamar Ziegler, and myself proved the following inverse theorem for the Gowers norms:

Theorem 1 (Inverse theorem for Gowers norms) Let {N \geq 1} and {s \geq 1} be integers, and let {\delta > 0}. Suppose that {f: {\bf Z} \rightarrow [-1,1]} is a function supported on {[N] := \{1,\dots,N\}} such that

\displaystyle \frac{1}{N^{s+2}} \sum_{n,h_1,\dots,h_{s+1}} \prod_{\omega \in \{0,1\}^{s+1}} f(n+\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}) \geq \delta.

Then there exists a filtered nilmanifold {G/\Gamma} of degree {\leq s} and complexity {O_{s,\delta}(1)}, a polynomial sequence {g: {\bf Z} \rightarrow G}, and a Lipschitz function {F: G/\Gamma \rightarrow {\bf R}} of Lipschitz constant {O_{s,\delta}(1)} such that

\displaystyle \frac{1}{N} \sum_n f(n) F(g(n) \Gamma) \gg_{s,\delta} 1.

This result was conjectured earlier by Ben Green and myself; this conjecture was strongly motivated by an analogous inverse theorem in ergodic theory by Host and Kra, which we formulate here in a form designed to resemble Theorem 1 as closely as possible:

Theorem 2 (Inverse theorem for Gowers-Host-Kra seminorms) Let {s \geq 1} be an integer, and let {(X, T)} be an ergodic, countably generated measure-preserving system. Suppose that one has

\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N^{s+1}} \sum_{h_1,\dots,h_{s+1} \in [N]} \int_X \prod_{\omega \in \{0,1\}^{s+1}} f(T^{\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}}x)\ d\mu(x) > 0

for all non-zero {f \in L^\infty(X)} (all {L^p} spaces are real-valued in this post). Then {(X,T)} is an inverse limit (in the category of measure-preserving systems, up to almost everywhere equivalence) of ergodic degree {\leq s} nilsystems, that is to say systems of the form {(G/\Gamma, x \mapsto gx)} for some degree {\leq s} filtered nilmanifold {G/\Gamma} and a group element {g \in G} that acts ergodically on {G/\Gamma}.

It is a natural question to ask if there is any logical relationship between the two theorems. In the finite field category, one can deduce the combinatorial inverse theorem from the ergodic inverse theorem by a variant of the Furstenberg correspondence principle, as worked out by Tamar Ziegler and myself; however, in the current context of {{\bf Z}}-actions, the connection is less clear.

One can split Theorem 2 into two components:

Theorem 3 (Weak inverse theorem for Gowers-Host-Kra seminorms) Let {s \geq 1} be an integer, and let {(X, T)} be an ergodic, countably generated measure-preserving system. Suppose that one has

\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N^{s+1}} \sum_{h_1,\dots,h_{s+1} \in [N]} \int_X \prod_{\omega \in \{0,1\}^{s+1}} T^{\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}} f\ d\mu > 0

for all non-zero {f \in L^\infty(X)}, where {T^h f := f \circ T^h}. Then {(X,T)} is a factor of an inverse limit of ergodic degree {\leq s} nilsystems.

Theorem 4 (Pro-nilsystems closed under factors) Let {s \geq 1} be an integer. Then any factor of an inverse limit of ergodic degree {\leq s} nilsystems, is again an inverse limit of ergodic degree {\leq s} nilsystems.

Indeed, it is clear that Theorem 2 implies both Theorem 3 and Theorem 4, and conversely that the two latter theorems jointly imply the former. Theorem 4 is, in principle, purely a fact about nilsystems, and should have an independent proof, but this is not known; the only known proofs go through the full machinery needed to prove Theorem 2 (or the closely related theorem of Ziegler). (However, the fact that a factor of a nilsystem is again a nilsystem was established previously by Parry.)

The purpose of this post is to record a partial implication in reverse direction to the correspondence principle:

Proposition 5 Theorem 1 implies Theorem 3.

As mentioned at the start of the post, a fair amount of familiarity with the area is presumed here, and some routine steps will be presented with only a fairly brief explanation.


Among the basic objects of study in combinatorics are finite strings {(a_n)_{n=0}^N} and infinite strings {(a_n)_{n=0}^\infty} of symbols {a_n} from some given alphabet {{\mathcal A}}, which could be either finite or infinite (but which we shall usually take to be compact). For instance, a set {A} of natural numbers can be identified with the infinite string {(1_A(n))_{n=0}^\infty} of {0}s and {1}s formed by the indicator of {A}, e.g. the even numbers can be identified with the string {1010101\ldots} from the alphabet {\{0,1\}}, the multiples of three can be identified with the string {100100100\ldots}, and so forth. One can also consider doubly infinite strings {(a_n)_{n \in {\bf Z}}}, which among other things can be used to describe arbitrary subsets of integers.

On the other hand, the basic object of study in dynamics (and in related fields, such as ergodic theory) is that of a dynamical system {(X,T)}, that is to say a space {X} together with a shift map {T: X \rightarrow X} (which is often assumed to be invertible, although one can certainly study non-invertible dynamical systems as well). One often adds additional structure to this dynamical system, such as topological structure (giving rise to topological dynamics), measure-theoretic structure (giving rise to ergodic theory), complex structure (giving rise to complex dynamics), and so forth. A dynamical system gives rise to an action of the natural numbers {{\bf N}} on the space {X} by using the iterates {T^n: X \rightarrow X} of {T} for {n=0,1,2,\ldots}; if {T} is invertible, we can extend this action to an action of the integers {{\bf Z}} on the same space. One can certainly also consider dynamical systems whose underlying group (or semi-group) is something other than {{\bf N}} or {{\bf Z}} (e.g. one can consider continuous dynamical systems in which the evolution group is {{\bf R}}), but we will restrict attention to the classical situation of {{\bf N}} or {{\bf Z}} actions here.

There is a fundamental correspondence principle connecting the study of strings (or subsets of natural numbers or integers) with the study of dynamical systems. In one direction, given a dynamical system {(X,T)}, an observable {c: X \rightarrow {\mathcal A}} taking values in some alphabet {{\mathcal A}}, and some initial datum {x_0 \in X}, we can first form the forward orbit {(T^n x_0)_{n=0}^\infty} of {x_0}, and then observe this orbit using {c} to obtain an infinite string {(c(T^n x_0))_{n=0}^\infty}. If the shift {T} in this system is invertible, one can extend this infinite string into a doubly infinite string {(c(T^n x_0))_{n \in {\bf Z}}}. Thus we see that every quadruplet {(X,T,c,x_0)} consisting of a dynamical system {(X,T)}, an observable {c}, and an initial datum {x_0} creates an infinite string.

Example 1 If {X} is the three-element set {X = {\bf Z}/3{\bf Z}} with the shift map {Tx := x+1}, {c: {\bf Z}/3{\bf Z} \rightarrow \{0,1\}} is the observable that takes the value {1} at the residue class {0 \hbox{ mod } 3} and zero at the other two classes, and one starts with the initial datum {x_0 = 0 \hbox{ mod } 3}, then the observed string {(c(T^n x_0))_{n=0}^\infty} becomes the indicator {100100100\ldots} of the multiples of three.
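Spelled out as a few lines of illustrative Python, Example 1 reads as follows:

```python
def T(x):                 # the shift map on X = Z/3Z
    return (x + 1) % 3

def c(x):                 # the observable: 1 at the residue class 0 mod 3, 0 elsewhere
    return 1 if x == 0 else 0

x0 = 0                    # the initial datum
x, digits = x0, []
for _ in range(12):
    digits.append(c(x))
    x = T(x)
print("".join(map(str, digits)))   # 100100100100, the indicator of the multiples of three
```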

In the converse direction, every infinite string {(a_n)_{n=0}^\infty} in some alphabet {{\mathcal A}} arises (in a decidedly non-unique fashion) from a quadruple {(X,T,c,x_0)} in the above fashion. This can be easily seen by the following “universal” construction: take {X} to be the set {X:= {\mathcal A}^{\bf N}} of infinite strings {(b_n)_{n=0}^\infty} in the alphabet {{\mathcal A}}, let {T: X \rightarrow X} be the shift map

\displaystyle  T(b_n)_{n=0}^\infty := (b_{n+1})_{n=0}^\infty,

let {c: X \rightarrow {\mathcal A}} be the observable

\displaystyle  c((b_n)_{n=0}^\infty) := b_0,

and let {x_0 \in X} be the initial point

\displaystyle  x_0 := (a_n)_{n=0}^\infty.

Then one easily sees that the observed string {(c(T^n x_0))_{n=0}^\infty} is nothing more than the original string {(a_n)_{n=0}^\infty}. Note also that this construction can easily be adapted to doubly infinite strings by using {{\mathcal A}^{\bf Z}} instead of {{\mathcal A}^{\bf N}}, at which point the shift map {T} now becomes invertible. An important variant of this construction also attaches an invariant probability measure to {X} that is associated to the limiting density of various sets associated to the string {(a_n)_{n=0}^\infty}, and leads to the Furstenberg correspondence principle, discussed for instance in these previous blog posts. Such principles allow one to rigorously pass back and forth between the combinatorics of strings and the dynamics of systems; for instance, Furstenberg famously used his correspondence principle to demonstrate the equivalence of Szemerédi’s theorem on arithmetic progressions with what is now known as the Furstenberg multiple recurrence theorem in ergodic theory.

In the case when the alphabet {{\mathcal A}} is the binary alphabet {\{0,1\}}, and (for technical reasons related to the infamous non-injectivity {0.999\ldots = 1.00\ldots} of the decimal representation system) the string {(a_n)_{n=0}^\infty} does not end with an infinite string of {1}s, then one can reformulate the above universal construction by taking {X} to be the interval {[0,1)}, {T} to be the doubling map {Tx := 2x \hbox{ mod } 1}, {c: X \rightarrow \{0,1\}} to be the observable that takes the value {1} on {[1/2,1)} and {0} on {[0,1/2)} (that is, {c(x)} is the first binary digit of {x}), and {x_0} is the real number {x_0 := \sum_{n=0}^\infty a_n 2^{-n-1}} (that is, {x_0 = 0.a_0a_1\ldots} in binary).

The above universal construction is very easy to describe, and is well suited for “generic” strings {(a_n)_{n=0}^\infty} that have no further obvious structure to them, but it often leads to dynamical systems that are much larger and more complicated than is actually needed to produce the desired string {(a_n)_{n=0}^\infty}, and also often obscures some of the key dynamical features associated to that sequence. For instance, to generate the indicator {100100100\ldots} of the multiples of three that were mentioned previously, the above universal construction requires an uncountable space {X} and a dynamics which does not obviously reflect the key features of the sequence such as its periodicity. (Using the unit interval model, the dynamics arise from the orbit of {4/7} under the doubling map, which is a rather artificial way to describe the indicator function of the multiples of three.)
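One can check the unit interval model directly; the following few lines of Python (using exact rational arithmetic, and purely illustrative) iterate the doubling map starting at {4/7 = 0.100100100\ldots} (binary) and read off the first binary digit at each step:

```python
from fractions import Fraction

def T(x):                                # the doubling map Tx := 2x mod 1
    return (2 * x) % 1

def c(x):                                # the first binary digit of x
    return 1 if x >= Fraction(1, 2) else 0

x = Fraction(4, 7)                       # 4/7 = 0.100100100... in binary
digits = []
for _ in range(12):
    digits.append(c(x))
    x = T(x)
print("".join(map(str, digits)))         # 100100100100, the indicator of the multiples of three
```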

A related aesthetic objection to the universal construction is that of the four components {X,T,c,x_0} of the quadruplet {(X,T,c,x_0)} used to generate the sequence {(a_n)_{n=0}^\infty}, three of the components {X,T,c} are completely universal (in that they do not depend at all on the sequence {(a_n)_{n=0}^\infty}), leaving only the initial datum {x_0} to carry all the distinctive features of the original sequence. While there is nothing wrong with this mathematically, from a conceptual point of view it would make sense to make all four components of the quadruplet to be adapted to the sequence, in order to take advantage of the accumulated intuition about various special dynamical systems (and special observables), not just special initial data.

One step in this direction can be made by restricting {X} to the orbit {\{ T^n x_0: n \in {\bf N} \}} of the initial datum {x_0} (actually for technical reasons it is better to restrict to the topological closure {\overline{\{ T^n x_0: n \in {\bf N} \}}} of this orbit, in order to keep {X} compact). For instance, starting with the sequence {100100100\ldots}, the orbit now consists of just three points {100100100\ldots}, {010010010\ldots}, {001001001\ldots}, bringing the system more in line with the example in Example 1. Technically, this is the “optimal” representation of the sequence by a quadruplet {(X,T,c,x_0)}, because any other such representation {(X',T',c',x'_0)} is a factor of this representation (in the sense that there is a unique map {\pi: X \rightarrow X'} with {T' \circ \pi = \pi \circ T}, {c' \circ \pi = c}, and {x'_0 = \pi(x_0)}). However, from a conceptual point of view this representation is still somewhat unsatisfactory, given that the elements of the system {X} are interpreted as infinite strings rather than elements of a more geometrically or algebraically rich object (e.g. points in a circle, torus, or other homogeneous space).

For general sequences {(a_n)_{n=0}^\infty}, locating relevant geometric or algebraic structure in a dynamical system generating that sequence is an important but very difficult task (see e.g. this paper of Host and Kra, which is more or less devoted to precisely this task in the context of working out what component of a dynamical system controls the multiple recurrence behaviour of that system). However, for specific examples of sequences {(a_n)_{n=0}^\infty}, one can use an informal procedure of educated guesswork in order to produce a more natural-looking quadruple {(X,T,c,x_0)} that generates that sequence. This is not a particularly difficult or deep operation, but I found it very helpful in internalising the intuition behind the correspondence principle. Being non-rigorous, this procedure does not seem to be emphasised in most presentations of the correspondence principle, so I thought I would describe it here.


In Notes 5, we saw that the Gowers uniformity norms on vector spaces {{\bf F}^n} in high characteristic were controlled by classical polynomial phases {e(\phi)}.

Now we study the analogous situation on cyclic groups {{\bf Z}/N{\bf Z}}. Here, there is a surprise: the polynomial phases (classical or otherwise) are no longer sufficient to control the Gowers norms {U^{s+1}({\bf Z}/N{\bf Z})} once {s} exceeds {1}. To resolve this problem, one must enlarge the space of polynomials to a larger class. It turns out that there are at least three closely related options for this class: the local polynomials, the bracket polynomials, and the nilsequences. Each of the three classes has its own strengths and weaknesses, but in my opinion the nilsequences seem to be the most natural class, due to the rich algebraic and dynamical structure coming from the nilpotent Lie group undergirding such sequences. For reasons of space we shall focus primarily on the nilsequence viewpoint here.

Traditionally, nilsequences have been defined in terms of linear orbits {n \mapsto g^n x} on nilmanifolds {G/\Gamma}; however, in recent years it has been realised that it is convenient for technical reasons (particularly for the quantitative “single-scale” theory) to generalise this setup to that of polynomial orbits {n \mapsto g(n) \Gamma}, and this is the perspective we will take here.

A polynomial phase {n \mapsto e(\phi(n))} on a finite abelian group {H} is formed by starting with a polynomial {\phi: H \rightarrow {\bf R}/{\bf Z}} to the unit circle, and then composing it with the exponential function {e: {\bf R}/{\bf Z} \rightarrow {\bf C}}. To create a nilsequence {n \mapsto F(g(n) \Gamma)}, we generalise this construction by starting with a polynomial {g \Gamma: H \rightarrow G/\Gamma} into a nilmanifold {G/\Gamma}, and then composing this with a Lipschitz function {F: G/\Gamma \rightarrow {\bf C}}. (The Lipschitz regularity class is convenient for minor technical reasons, but one could also use other regularity classes here if desired.) These classes of sequences certainly include the polynomial phases, but are somewhat more general; for instance, they almost include bracket polynomial phases such as {n \mapsto e( \lfloor \alpha n \rfloor \beta n )}. (The “almost” here is because the relevant functions {F: G/\Gamma \rightarrow {\bf C}} involved are only piecewise Lipschitz rather than Lipschitz, but this is primarily a technical issue and one should view bracket polynomial phases as “morally” being nilsequences.)

In these notes we set out the basic theory of these nilsequences, including their equidistribution theory (which generalises the equidistribution theory of polynomial flows on tori from Notes 1), and show that they are indeed obstructions to the Gowers norm being small. This leads to the inverse conjecture for the Gowers norms, which asserts that the Gowers norms on cyclic groups are indeed controlled by these sequences.
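
Roughly speaking (and suppressing the precise quantitative dependencies, so this should be read as a paraphrase rather than the formal statement), the inverse conjecture asserts that if {f: {\bf Z}/N{\bf Z} \rightarrow {\bf C}} is bounded in magnitude by {1} and obeys

\displaystyle  \|f\|_{U^{s+1}({\bf Z}/N{\bf Z})} \geq \delta

for some {\delta > 0}, then there is a nilsequence {n \mapsto F(g(n) \Gamma)} of degree {\leq s}, with the complexity of the nilmanifold {G/\Gamma} and the Lipschitz norm of {F} both {O_{s,\delta}(1)}, such that

\displaystyle  |{\bf E}_{n \in {\bf Z}/N{\bf Z}} f(n) \overline{F(g(n) \Gamma)}| \gg_{s,\delta} 1.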

Read the rest of this entry »

Ben Green and I have just uploaded to the arXiv our paper “An arithmetic regularity lemma, an associated counting lemma, and applications“, submitted (a little behind schedule) to the 70th birthday conference proceedings for Endre Szemerédi. In this paper we describe the general-degree version of the arithmetic regularity lemma, which can be viewed as the counterpart of the Szemerédi regularity lemma, in which the object being regularised is a function {f: [N] \rightarrow [0,1]} on a discrete interval {[N] = \{1,\ldots,N\}} rather than a graph, and the patterns one wishes to count are additive patterns (such as arithmetic progressions {n,n+d,\ldots,n+(k-1)d}) rather than subgraphs. Very roughly speaking, this regularity lemma asserts that all such functions can be decomposed as a degree {\leq s} nilsequence (or more precisely, a variant of a nilsequence that we call a virtual irrational nilsequence), plus a small error, plus a third error which is extremely tiny in the Gowers uniformity norm {U^{s+1}[N]}. In principle, at least, the latter two errors can be readily discarded in applications, so that the regularity lemma reduces many questions in additive combinatorics to questions concerning (virtual irrational) nilsequences. To work with these nilsequences, we also establish an arithmetic counting lemma that gives an integral formula for counting additive patterns weighted by such nilsequences.

The regularity lemma is a manifestation of the “dichotomy between structure and randomness”, as discussed for instance in my ICM article or FOCS article. In the degree {1} case {s=1}, this result is essentially due to Green. It is powered by the inverse conjecture for the Gowers norms, which we have recently established with Tamar Ziegler (paper forthcoming shortly; the {k=4} case of our argument is discussed here). The counting lemma is established through the quantitative equidistribution theory of nilmanifolds, which Ben and I set out in this paper.

The regularity and counting lemmas are designed to be used together, and in the paper we give three applications of this combination. Firstly, we give a new proof of Szemerédi’s theorem, which proceeds via an energy increment argument rather than a density increment one. Secondly, we establish a conjecture of Bergelson, Host, and Kra, namely that if {A \subset [N]} has density {\alpha}, and {\epsilon > 0}, then there exist {\gg_{\alpha,\epsilon} N} shifts {h} for which {A} contains at least {(\alpha^4 - \epsilon)N} arithmetic progressions of length {k=4} of spacing {h}. (The {k=3} case of this conjecture was established earlier by Green; the {k=5} case is false, as was shown by Ruzsa in an appendix to the Bergelson-Host-Kra paper.) Thirdly, we establish a variant of a recent result of Gowers-Wolf, showing that the true complexity of a system of linear forms over {[N]} indeed matches the value conjectured in their first paper.

In all three applications, the scheme of proof can be described as follows:

  • Apply the arithmetic regularity lemma, and decompose a relevant function {f} into three pieces, {f_{nil}, f_{sml}, f_{unf}}.
  • The uniform part {f_{unf}} is so tiny in the Gowers uniformity norm that its contribution can be easily dealt with by an appropriate “generalised von Neumann theorem”.
  • The contribution of the (virtual, irrational) nilsequence {f_{nil}} can be controlled using the arithmetic counting lemma.
  • Finally, one needs to check that the contribution of the small error {f_{sml}} does not overwhelm the main term {f_{nil}}. This is the trickiest bit; one often needs to use the counting lemma again to show that one can find a set of arithmetic patterns for {f_{nil}} that is sufficiently “equidistributed” that it is not significantly impacted by the small error.

To illustrate the last point, let us give the following example. Suppose we have a set {A \subset [N]} of some positive density (say {|A| = 10^{-1} N}) and we have managed to prove that {A} contains a reasonable number of arithmetic progressions of length {5} (say), e.g. it contains at least {10^{-10} N^2} such progressions. Now we perturb {A} by deleting a small number, say {10^{-2} N}, of elements from {A} to create a new set {A'}. Can we still conclude that the new set {A'} contains any arithmetic progressions of length {5}?

Unfortunately, the answer could be no; conceivably, all of the {10^{-10} N^2} arithmetic progressions in {A} could be wiped out by the {10^{-2} N} elements removed from {A}, since each such element of {A} could be associated with up to {|A|} (or even {5|A|}) arithmetic progressions in {A}.

But suppose we knew that the {10^{-10} N^2} arithmetic progressions in {A} were equidistributed, in the sense that each element in {A} belonged to the same number of such arithmetic progressions, namely {5 \times 10^{-9} N}. Then each element deleted from {A} only removes at most {5 \times 10^{-9} N} progressions, and so one can safely remove {10^{-2} N} elements from {A} and still retain some arithmetic progressions. The same argument works if the arithmetic progressions are only approximately equidistributed, in the sense that the number of progressions that a given element {a \in A} belongs to concentrates sharply around its mean (for instance, by having a small variance), provided that the equidistribution is sufficiently strong. Fortunately, the arithmetic regularity and counting lemmas are designed to give precisely such a strong equidistribution result.
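
To spell out the arithmetic behind this claim: if there are exactly {10^{-10} N^2} such progressions, they generate {5 \times 10^{-10} N^2} incidences between elements of {A} and progressions, so under the exact equidistribution hypothesis each element of {A} lies in

\displaystyle  \frac{5 \times 10^{-10} N^2}{|A|} = \frac{5 \times 10^{-10} N^2}{10^{-1} N} = 5 \times 10^{-9} N

progressions. Deleting {10^{-2} N} elements therefore destroys at most

\displaystyle  10^{-2} N \times 5 \times 10^{-9} N = 5 \times 10^{-11} N^2 < 10^{-10} N^2

progressions, so at least half of the original {10^{-10} N^2} progressions survive in {A'}.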

A succinct (but slightly inaccurate) summary of the regularity+counting lemma strategy would be that, in order to solve a problem in additive combinatorics, it “suffices to check it for nilsequences”. But this should come with a caveat, due to the issue of the small error discussed above: in addition to checking the claim for nilsequences, the answer in the nilsequence case must be sufficiently “dispersed” in a suitable sense, so that it can survive the addition of a small (but not completely negligible) perturbation.

One last “production note”. As with our previous paper with Emmanuel Breuillard, we used Subversion to write this paper, which turned out to be a significant efficiency boost as we could work on different parts of the paper simultaneously (this was particularly important this time round, as the paper was somewhat lengthy and complicated, and there was a submission deadline). When doing so, we found it convenient to split the paper into a dozen or so pieces (one for each section of the paper, basically) in order to avoid conflicts and to help coordinate the writing process. I’m also looking into git (a more advanced version control system), and am planning to use it for another of my joint projects; I hope to be able to comment on the relative strengths of these systems (and of plain old email) in the future.

Ben Green and I have just uploaded to the arXiv our paper, “The Möbius function is asymptotically orthogonal to nilsequences“, which is a sequel to our earlier paper “The quantitative behaviour of polynomial orbits on nilmanifolds“, which I talked about in this post.  In this paper, we apply our previous results on the quantitative equidistribution of polynomial orbits in nilmanifolds to settle the Möbius and nilsequences conjecture from our earlier paper, as part of our program to detect and count solutions to linear equations in primes.  (The other major plank of that program, namely the inverse conjecture for the Gowers norm, remains partially unresolved at present.)  Roughly speaking, this conjecture asserts the asymptotic orthogonality

\displaystyle \left|\frac{1}{N} \sum_{n=1}^N \mu(n) f(n)\right| \ll_A \log^{-A} N \ \ \ \ \ (1)

between the Möbius function \mu(n) and any Lipschitz nilsequence f(n), by which we mean a sequence of the form f(n) = F(g^n x) for some orbit g^n x in a nilmanifold G/\Gamma, and some Lipschitz function F: G/\Gamma \to {\Bbb C} on that nilmanifold.  (The implied constant can depend on the nilmanifold and on the Lipschitz constant of F, but it is important that it be independent of the generator g of the orbit or the base point x.)  The case when f is constant is essentially the prime number theorem; the case when f is periodic is essentially the prime number theorem in arithmetic progressions.  The case when f is almost periodic (e.g. f(n) = e^{2\pi i \alpha n} for some irrational \alpha) was established by Davenport, using the method of Vinogradov.  The case when f is a 2-step nilsequence (such as the quadratic phase f(n) = e^{2\pi i \alpha n^2}; bracket quadratic phases such as f(n) = e^{2\pi i \lfloor \alpha n \rfloor \beta n} can also be covered by an approximation argument, though the logarithmic decay in (1) is weakened as a consequence) was done by Ben and myself a few years ago, by a rather ad hoc adaptation of Vinogradov’s method.  By using the equidistribution theory of nilmanifolds, we were able to apply Vinogradov’s method more systematically, and in fact the proof is relatively short (20 pages), although it relies on the 64-page predecessor paper on equidistribution.
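
As a purely numerical illustration (not from the paper; the cutoff N and the frequency \alpha below are arbitrary choices), one can sieve the Möbius function and evaluate the left-hand side of (1) for the quadratic phase f(n) = e^{2\pi i \alpha n^2} in a few lines of Python; the computed correlation is typically far below the trivial bound of 1, though of course no finite computation substitutes for the theorem.

    import cmath

    def mobius_sieve(N):
        # Compute mu(1..N) with a standard sieve: flip the sign once for each prime
        # factor, then zero out the multiples of squares of primes.
        mu = [1] * (N + 1)
        is_prime = [True] * (N + 1)
        for p in range(2, N + 1):
            if is_prime[p]:
                for m in range(p, N + 1, p):
                    if m > p:
                        is_prime[m] = False
                    mu[m] *= -1
                for m in range(p * p, N + 1, p * p):
                    mu[m] = 0
        return mu

    N = 10 ** 5
    alpha = 2 ** 0.5          # an arbitrary irrational frequency
    mu = mobius_sieve(N)

    # normalised correlation (1/N) * sum_{n <= N} mu(n) e(alpha n^2)
    S = sum(mu[n] * cmath.exp(2j * cmath.pi * alpha * n * n) for n in range(1, N + 1)) / N
    print(abs(S))             # typically much smaller than the trivial bound 1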

There is an amusing way to interpret the conjecture (using the close relationship between nilsequences and bracket polynomials) as an assertion of the pseudorandomness of the Liouville function from a computational complexity perspective.  Suppose you possess a calculator with the wonderful property of having infinite precision: it can accept arbitrarily large real numbers as input, manipulate them precisely, and also store them in memory.  However, this calculator has two limitations.  Firstly, the only operations available are addition, subtraction, multiplication, integer part x \mapsto [x], fractional part x \mapsto \{x\}, memory store (into one of O(1) registers), and memory recall (from one of these O(1) registers).  In particular, there is no ability to perform division.  Secondly, the calculator only has a finite display screen, and when it shows a real number, it only shows O(1) digits before and after the decimal point.  (Thus, for instance, the real number 1234.56789 might be displayed only as \ldots 34.56\ldots.)

Now suppose you play the following game with an opponent.

  1. The opponent specifies a large integer d.
  2. You get to enter in O(1) real constants of your choice into your calculator.  These can be absolute constants such as \sqrt{2} and \pi, or they can depend on d (e.g. you can enter in 10^{-d}).
  3. The opponent randomly selects a d-digit integer n, and enters n into one of the registers of your calculator.
  4. You are allowed to perform O(1) operations on your calculator and record what is displayed on the calculator’s viewscreen.
  5. After this, you have to guess whether the opponent’s number n had an odd or even number of prime factors (i.e. you guess \lambda(n).)
  6. If you guess correctly, you win $1; otherwise, you lose $1.

For instance, using your calculator you can work out the first few digits of \{ \sqrt{2}n \lfloor \sqrt{3} n \rfloor \}, provided of course that you entered the constants \sqrt{2} and \sqrt{3} in advance.  You can also work out the leading digits of n by storing 10^{-d} in advance, and computing the first few digits of 10^{-d} n.
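
For concreteness, here is a short Python sketch of these two computations (purely illustrative; the working precision, the digit length d, and the sample n are arbitrary choices), using the decimal module as a stand-in for the idealised calculator and using only the allowed operations of addition, subtraction, multiplication, integer part and fractional part:

    from decimal import Decimal, getcontext, ROUND_FLOOR

    getcontext().prec = 200              # stand-in for the "infinite precision" of the calculator

    def ipart(x):                        # the calculator's integer-part key [x]
        return x.to_integral_value(rounding=ROUND_FLOOR)

    def fpart(x):                        # the calculator's fractional-part key {x}
        return x - ipart(x)

    d = 30                               # the opponent's chosen digit length
    sqrt2 = Decimal(2).sqrt()            # constants entered in advance (step 2; they may depend on d)
    sqrt3 = Decimal(3).sqrt()
    ten_to_minus_d = Decimal(10) ** (-d)

    n = Decimal("123456789012345678901234567890")   # a sample 30-digit integer n (step 3)

    # first few digits of { sqrt(2) * n * [sqrt(3) * n] }, as in the text
    screen1 = fpart(sqrt2 * n * ipart(sqrt3 * n))
    # leading digits of n, read off from 10^(-d) * n
    screen2 = ten_to_minus_d * n

    print(str(screen1)[:10])             # what the O(1)-digit screen would show
    print(str(screen2)[:10])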

Our theorem is equivalent to the assertion that as d goes to infinity (keeping the O(1) constants fixed), your probability of winning this game converges to 1/2; in other words, your calculator becomes asymptotically useless to you for the purposes of guessing whether n has an odd or even number of prime factors, and you may as well just guess randomly.

[I should mention a recent result in a similar spirit by Mauduit and Rivat; in this language, their result asserts that knowing the last few digits of the digit-sum of n does not increase your odds of guessing \lambda(n) correctly.]

Read the rest of this entry »
