The twin prime conjecture is one of the oldest unsolved problems in analytic number theory. There are several reasons why this conjecture remains out of reach of current techniques, but the most important obstacle is the parity problem which prevents purely sieve-theoretic methods (or many other popular methods in analytic number theory, such as the circle method) from detecting pairs of prime twins in a way that can distinguish them from other twins of almost primes. The parity problem is discussed in these previous blog posts; this obstruction is ultimately powered by the Möbius pseudorandomness principle that asserts that the Möbius function {\mu} is asymptotically orthogonal to all “structured” functions (and in particular, to the weight functions constructed from sieve theory methods).

However, there is an intriguing “alternate universe” in which the Möbius function is strongly correlated with some structured functions, and specifically with some Dirichlet characters, leading to the existence of the infamous “Siegel zero“. In this scenario, the parity problem obstruction disappears, and it becomes possible, in principle, to attack problems such as the twin prime conjecture. In particular, we have the following result of Heath-Brown:

Theorem 1 At least one of the following two statements is true:

  • (Twin prime conjecture) There are infinitely many primes {p} such that {p+2} is also prime.
  • (No Siegel zeroes) There exists a constant {c>0} such that for every real Dirichlet character {\chi} of conductor {q > 1}, the associated Dirichlet {L}-function {s \mapsto L(s,\chi)} has no zeroes in the interval {[1-\frac{c}{\log q}, 1]}.

Informally, this result asserts that if one had an infinite sequence of Siegel zeroes, one could use this to generate infinitely many twin primes. See this survey of Friedlander and Iwaniec for more on this “illusory” or “ghostly” parallel universe in analytic number theory that should not actually exist, but is surprisingly self-consistent and has, to date, proven impossible to banish from the realm of possibility.

The strategy of Heath-Brown’s proof is fairly straightforward to describe. The usual starting point is to try to lower bound

\displaystyle  \sum_{x \leq n \leq 2x} \Lambda(n) \Lambda(n+2) \ \ \ \ \ (1)

for some large value of {x}, where {\Lambda} is the von Mangoldt function. Actually, in this post we will work with the slight variant

\displaystyle  \sum_{x \leq n \leq 2x} \Lambda_2(n(n+2)) \nu(n(n+2))

where

\displaystyle  \Lambda_2(n) = (\mu * L^2)(n) = \sum_{d|n} \mu(d) \log^2 \frac{n}{d}

is the second von Mangoldt function, and {*} denotes Dirichlet convolution, and {\nu} is an (unsquared) Selberg sieve that damps out small prime factors. This sum also detects twin primes, but will lead to slightly simpler computations. For technical reasons we will also smooth out the interval {x \leq n \leq 2x} and remove very small primes from {n}, but we will skip over these steps for the purpose of this informal discussion. (In Heath-Brown’s original paper, the Selberg sieve {\nu} is essentially replaced by the more combinatorial restriction {1_{(n(n+2),q^{1/C}\#)=1}} for some large {C}, where {q^{1/C}\#} is the primorial of {q^{1/C}}, but I found the computations to be slightly easier if one works with a Selberg sieve, particularly if the sieve is not squared to make it nonnegative.)
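
As a quick sanity check on these definitions, here is a small Python sketch (an illustration only, not part of Heath-Brown's argument or of the computations below) that evaluates {\Lambda_2} directly from the Dirichlet convolution {\mu * L^2} and confirms that, for odd {n}, the quantity {\Lambda_2(n(n+2))} is nonzero exactly when {n} and {n+2} are both prime powers, and in particular at twin primes.

```python
from math import log
from sympy import divisors, factorint

def mu(n):
    # Moebius function, computed from the factorisation of n
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def Lambda2(n):
    # second von Mangoldt function: Lambda_2 = mu * log^2 (Dirichlet convolution)
    return sum(mu(d) * log(n / d) ** 2 for d in divisors(n))

# Lambda_2 is supported on integers with at most two distinct prime factors;
# for odd n, n and n+2 are coprime, so Lambda_2(n(n+2)) is nonzero exactly
# when n and n+2 are both prime powers.
for n in range(3, 40, 2):
    val = Lambda2(n * (n + 2))
    if abs(val) > 1e-9:
        print(n, n + 2, round(val, 3), factorint(n * (n + 2)))
```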

If there is a Siegel zero {L(\beta,\chi)=0} with {\beta} close to {1} and {\chi} a Dirichlet character of conductor {q}, then multiplicative number theory methods can be used to show that the Möbius function {\mu} “pretends” to be like the character {\chi} in the sense that {\mu(p) \approx \chi(p)} for “most” primes {p} near {q} (e.g. in the range {q^\varepsilon \leq p \leq q^C} for some small {\varepsilon>0} and large {C>0}). Traditionally, one uses complex-analytic methods to demonstrate this, but one can also use elementary multiplicative number theory methods to establish these results (qualitatively at least), as will be shown below the fold.

The fact that {\mu} pretends to be like {\chi} can be used to construct a tractable approximation (after inserting the sieve weight {\nu}) in the range {[x,2x]} (where {x = q^C} for some large {C}) for the second von Mangoldt function {\Lambda_2}, namely the function

\displaystyle  \tilde \Lambda_2(n) := (\chi * L^2)(n) = \sum_{d|n} \chi(d) \log^2 \frac{n}{d}.

Roughly speaking, we think of the periodic function {\chi} and the slowly varying function {\log^2} as being of about the same “complexity” as the constant function {1}, so that {\tilde \Lambda_2} is roughly of the same “complexity” as the divisor function

\displaystyle  \tau(n) := (1*1)(n) = \sum_{d|n} 1,

which is considerably simpler to obtain asymptotics for than the von Mangoldt function as the Möbius function is no longer present. (For instance, note from the Dirichlet hyperbola method that one can estimate {\sum_{x \leq n \leq 2x} \tau(n)} to accuracy {O(\sqrt{x})} with little difficulty, whereas to obtain a comparable level of accuracy for {\sum_{x \leq n \leq 2x} \Lambda(n)} or {\sum_{x \leq n \leq 2x} \Lambda_2(n)} is essentially the Riemann hypothesis.)
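
To illustrate the parenthetical remark, the following short Python sketch (not needed for the argument) implements the Dirichlet hyperbola identity {\sum_{n \leq x} \tau(n) = 2\sum_{d \leq \sqrt{x}} \lfloor x/d \rfloor - \lfloor \sqrt{x}\rfloor^2} and compares the result against the classical approximation {x\log x + (2\gamma-1)x}; the error is visibly of size {O(\sqrt{x})}.

```python
from math import isqrt, log

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def divisor_sum(x):
    # sum_{n <= x} tau(n) via the hyperbola method
    r = isqrt(x)
    return 2 * sum(x // d for d in range(1, r + 1)) - r * r

for x in (10**4, 10**5, 10**6, 10**7):
    err = divisor_sum(x) - (x * log(x) + (2 * EULER_GAMMA - 1) * x)
    print(x, divisor_sum(x), round(err, 1), round(err / x**0.5, 3))
```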

One expects {\tilde \Lambda_2(n)} to be a good approximant to {\Lambda_2(n)} if {n} is of size {O(x)} and has no prime factors less than {q^{1/C}} for some large constant {C}. The Selberg sieve {\nu} will be mostly supported on numbers with no prime factor less than {q^{1/C}}. As such, one can hope to approximate (1) by the expression

\displaystyle  \sum_{x \leq n \leq 2x} \tilde \Lambda_2(n(n+2)) \nu(n(n+2)); \ \ \ \ \ (2)

as it turns out, the error between this expression and (1) is easily controlled by sieve-theoretic techniques. Let us ignore the Selberg sieve for now and focus on the slightly simpler sum

\displaystyle  \sum_{x \leq n \leq 2x} \tilde \Lambda_2(n(n+2)).

As discussed above, this sum should be thought of as a slightly more complicated version of the sum

\displaystyle \sum_{x \leq n \leq 2x} \tau(n(n+2)). \ \ \ \ \ (3)

Accordingly, let us look (somewhat informally) at the task of estimating the model sum (3). One can think of this problem as basically that of counting solutions to the equation {ab+2=cd} with {a,b,c,d} in various ranges; this is clearly related to understanding the equidistribution of the hyperbola {\{ (a,b) \in ({\bf Z}/d{\bf Z})^2: ab + 2 = 0 \hbox{ mod } d \}} in {({\bf Z}/d{\bf Z})^2}. Taking Fourier transforms, the latter problem is closely related to estimation of the Kloosterman sums

\displaystyle  \sum_{m \in ({\bf Z}/r{\bf Z})^\times} e( \frac{a_1 m + a_2 \overline{m}}{r} )

where {\overline{m}} denotes the inverse of {m} in {({\bf Z}/r{\bf Z})^\times}. One can then use the Weil bound

\displaystyle  \sum_{m \in ({\bf Z}/r{\bf Z})^\times} e( \frac{am+b\overline{m}}{r} ) \ll r^{1/2 + o(1)} (a,b,r)^{1/2} \ \ \ \ \ (4)

where {(a,b,r)} is the greatest common divisor of {a,b,r} (with the convention that this is equal to {r} if {a,b} vanish), and the {o(1)} decays to zero as {r \rightarrow \infty}. The Weil bound yields good enough control on error terms to estimate (3), and as it turns out the same method also works to estimate (2) (provided that {x=q^C} with {C} large enough).
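
For concreteness, here is a small numerical sketch (illustration only, requiring Python 3.8+ for the modular inverse) of the Kloosterman sum and the Weil bound; for a prime modulus {p} not dividing {ab}, the bound (4) takes the sharp form {|S(a,b;p)| \leq 2\sqrt{p}}.

```python
import cmath
from math import sqrt

def kloosterman(a, b, p):
    # S(a,b;p) = sum over m in (Z/pZ)^* of e((a m + b m^{-1})/p); the value is real
    s = sum(cmath.exp(2j * cmath.pi * (a * m + b * pow(m, -1, p)) / p)
            for m in range(1, p))
    return s.real

p = 1009  # a prime modulus
worst = max(abs(kloosterman(a, 1, p)) for a in range(1, 60))
print(worst, "<=", 2 * sqrt(p))  # Weil bound for prime modulus
```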

Actually one does not need the full strength of the Weil bound here; any power savings over the trivial bound of {r} will do. In particular, it will suffice to use the weaker, but easier to prove, bounds of Kloosterman:

Lemma 2 (Kloosterman bound) One has

\displaystyle  \sum_{m \in ({\bf Z}/r{\bf Z})^\times} e( \frac{am+b\overline{m}}{r} ) \ll r^{3/4 + o(1)} (a,b,r)^{1/4} \ \ \ \ \ (5)

whenever {r \geq 1} and {a,b} are integers, where the {o(1)} is with respect to the limit {r \rightarrow \infty} (and is uniform in {a,b}).

Proof: Observe from a change of variables that the Kloosterman sum {\sum_{m \in ({\bf Z}/r{\bf Z})^\times} e( \frac{am+b\overline{m}}{r} )} is unchanged if one replaces {(a,b)} with {(\lambda a, \lambda^{-1} b)} for {\lambda \in ({\bf Z}/r{\bf Z})^\times}. For fixed {a,b}, the number of such pairs {(\lambda a, \lambda^{-1} b)} is at least {r^{1-o(1)} / (a,b,r)}, thanks to the divisor bound. Thus it will suffice to establish the fourth moment bound

\displaystyle  \sum_{a,b \in {\bf Z}/r{\bf Z}} |\sum_{m \in ({\bf Z}/r{\bf Z})^\times} e\left( \frac{am+b\overline{m}}{r} \right)|^4 \ll r^{4+o(1)}.

The left-hand side can be rearranged as

\displaystyle  \sum_{m_1,m_2,m_3,m_4 \in ({\bf Z}/r{\bf Z})^\times} \sum_{a,b \in {\bf Z}/r{\bf Z}}

\displaystyle  e\left( \frac{a(m_1+m_2-m_3-m_4) + b(\overline{m_1}+\overline{m_2}-\overline{m_3}-\overline{m_4})}{r} \right)

which by Fourier summation is equal to

\displaystyle  r^2 \# \{ (m_1,m_2,m_3,m_4) \in (({\bf Z}/r{\bf Z})^\times)^4:

\displaystyle  m_1+m_2-m_3-m_4 = \frac{1}{m_1} + \frac{1}{m_2} - \frac{1}{m_3} - \frac{1}{m_4} = 0 \hbox{ mod } r \}.

Observe from the quadratic formula and the divisor bound that each pair {(x,y)\in ({\bf Z}/r{\bf Z})^2} other than {(0,0)} has at most {O(r^{o(1)})} solutions {(m_1,m_2)} to the system of equations {m_1+m_2=x; \frac{1}{m_1} + \frac{1}{m_2} = y}, while the exceptional pair {(x,y)=(0,0)} has at most {r} solutions (of the form {(m_1,-m_1)}) and hence contributes at most {r^2} quadruples. Hence the number of quadruples {(m_1,m_2,m_3,m_4)} of the desired form is {r^{2+o(1)}}, and the claim follows. \Box
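
For a small prime modulus the fourth moment in this proof can be computed exactly by brute force; a short computation along the lines of the proof suggests that for prime {r} the quadruple count is exactly {3(r-1)(r-2)}, and the sketch below (illustration only, not part of the argument) confirms this numerically, consistent with the {r^{4+o(1)}} bound.

```python
import cmath

def kloosterman(a, b, r):
    # prime modulus r, so every m in 1..r-1 is invertible
    return sum(cmath.exp(2j * cmath.pi * (a * m + b * pow(m, -1, r)) / r)
               for m in range(1, r)).real

r = 61  # small prime, so the double loop over (a, b) below is cheap
moment = sum(abs(kloosterman(a, b, r)) ** 4 for a in range(r) for b in range(r))
print(moment, r * r * 3 * (r - 1) * (r - 2))  # should agree up to rounding error
```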

We will also need another easy case of the Weil bound to handle some other portions of (2):

Lemma 3 (Easy Weil bound) Let {\chi} be a primitive real Dirichlet character of conductor {q}, and let {a,b,c,d \in{\bf Z}/q{\bf Z}}. Then

\displaystyle  \sum_{n \in {\bf Z}/q{\bf Z}} \chi(an+b) \chi(cn+d) \ll q^{o(1)} (ad-bc, q).

Proof: As {q} is the conductor of a primitive real Dirichlet character, {q} is equal to {2^j} times a squarefree odd number for some {j \leq 3}. By the Chinese remainder theorem, it thus suffices to establish the claim when {q} is an odd prime {p} (the contribution of the factor {2^j} being trivially {O(1)}). We may assume that {ad-bc} is not divisible by {p}, as the claim is trivial otherwise. If {a} vanishes then {c} does not vanish, and the claim follows from the mean zero nature of {\chi}; similarly if {c} vanishes. Hence we may assume that {a,c} do not vanish, and then we can normalise them to equal {1}. By completing the square it now suffices to show that

\displaystyle  \sum_{n \in {\bf Z}/p{\bf Z}} \chi( n^2 - b ) \ll 1

whenever {b \neq 0 \hbox{ mod } p}. As {\chi} is {+1} on the quadratic residues and {-1} on the non-residues, it now suffices to show that

\displaystyle  \# \{ (m,n) \in ({\bf Z}/p{\bf Z})^2: n^2 - b = m^2 \} = p + O(1).

But by making the change of variables {(x,y) = (n+m,n-m)}, the left-hand side becomes {\# \{ (x,y) \in ({\bf Z}/p{\bf Z})^2: xy=b\}}, and the claim follows. \Box
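
The complete character sum in this lemma is easy to test numerically; the following sketch (illustration only) takes {\chi} to be the quadratic character mod a prime {p}, computed via Euler's criterion, and confirms that {\sum_{n} \chi(n+b)\chi(n+d)} equals {-1} when {b \neq d} and {p-1} when {b = d}.

```python
p = 1013  # an odd prime

def chi(n):
    # quadratic (Legendre) character mod p via Euler's criterion, with chi(0) = 0
    n %= p
    return 0 if n == 0 else (1 if pow(n, (p - 1) // 2, p) == 1 else -1)

# the complete sum  sum_{n mod p} chi(n+b) chi(n+d)
for b, d in [(0, 1), (3, 7), (5, 5)]:
    print(b, d, sum(chi(n + b) * chi(n + d) for n in range(p)))  # -1, -1, p-1
```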

While the basic strategy of Heath-Brown’s argument is relatively straightforward, implementing it requires a large amount of computation to control both main terms and error terms. I experimented for a while with rearranging the argument to try to reduce the amount of computation; I did not fully succeed in eliminating all of the superfluous calculation, but I was able to at least reduce it a bit, mostly by replacing a combinatorial sieve with a Selberg-type sieve (which did not need to be non-negative, so I dispensed with the squaring aspect of the Selberg sieve to simplify the calculations a little further; also for minor reasons it was convenient to retain a tiny portion of the combinatorial sieve to eliminate extremely small primes). Also some modest reductions in complexity can be obtained by using the second von Mangoldt function {\Lambda_2(n(n+2))} in place of {\Lambda(n) \Lambda(n+2)}. These exercises were primarily for my own benefit, but I am placing them here in case they are of interest to some other readers.


The Poincaré upper half-plane {{\mathbf H} := \{ z: \hbox{Im}(z) > 0 \}} (with a boundary consisting of the real line {{\bf R}} together with the point at infinity {\infty}) carries an action of the projective special linear group

\displaystyle  \hbox{PSL}_2({\bf R}) := \{ \begin{pmatrix} a & b \\ c & d \end{pmatrix}: a,b,c,d \in {\bf R}: ad-bc = 1 \} / \{\pm 1\}

via fractional linear transformations:

\displaystyle  \begin{pmatrix} a & b \\ c & d \end{pmatrix} z := \frac{az+b}{cz+d}. \ \ \ \ \ (1)

Here and in the rest of the post we will abuse notation by identifying elements {\begin{pmatrix} a & b \\ c & d \end{pmatrix}} of the special linear group {\hbox{SL}_2({\bf R})} with their equivalence class {\{ \pm \begin{pmatrix} a & b \\ c & d \end{pmatrix} \}} in {\hbox{PSL}_2({\bf R})}; this will occasionally create or remove a factor of two in our formulae, but otherwise has very little effect, though one has to check that various definitions and expressions (such as (1)) are unaffected if one replaces a matrix {\begin{pmatrix} a & b \\ c & d \end{pmatrix}} by its negation {\begin{pmatrix} -a & -b \\ -c & -d \end{pmatrix}}. In particular, we recommend that the reader ignore the signs {\pm} that appear from time to time in the discussion below.

As the action of {\hbox{PSL}_2({\bf R})} on {{\mathbf H}} is transitive, and any given point in {{\mathbf H}} (e.g. {i}) has a stabiliser isomorphic to the projective rotation group {\hbox{PSO}_2({\bf R})}, we can view the Poincaré upper half-plane {{\mathbf H}} as a homogeneous space for {\hbox{PSL}_2({\bf R})}, and more specifically as the quotient space of {\hbox{PSL}_2({\bf R})} by a maximal compact subgroup {\hbox{PSO}_2({\bf R})}. In fact, we can make the half-plane a symmetric space for {\hbox{PSL}_2({\bf R})}, by endowing {{\mathbf H}} with the Riemannian metric

\displaystyle  dg^2 := \frac{dx^2 + dy^2}{y^2}

(using Cartesian coordinates {z=x+iy}), which is invariant with respect to the {\hbox{PSL}_2({\bf R})} action. Like any other Riemannian metric, the metric on {{\mathbf H}} generates a number of other important geometric objects on {{\mathbf H}}, such as the distance function {d(z,w)} which can be computed to be given by the formula

\displaystyle  2(\cosh(d(z_1,z_2))-1) = \frac{|z_1-z_2|^2}{\hbox{Im}(z_1) \hbox{Im}(z_2)}, \ \ \ \ \ (2)

the volume measure {\mu = \mu_{\mathbf H}}, which can be computed to be

\displaystyle  d\mu = \frac{dx dy}{y^2},

and the Laplace-Beltrami operator, which can be computed to be {\Delta = y^2 (\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2})} (here we use the negative definite sign convention for {\Delta}). As the metric {dg} was {\hbox{PSL}_2({\bf R})}-invariant, all of these quantities arising from the metric are similarly {\hbox{PSL}_2({\bf R})}-invariant in the appropriate sense.
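
The {\hbox{PSL}_2({\bf R})}-invariance of these objects is easy to test numerically; for instance, the following Python sketch (illustrative only) checks that the right-hand side of the distance formula (2) is unchanged when both points are moved by a random determinant-one matrix acting by fractional linear transformations.

```python
import random

def moebius_act(g, z):
    a, b, c, d = g
    return (a * z + b) / (c * z + d)

def dist_expr(z1, z2):
    # right-hand side of (2): |z1 - z2|^2 / (Im z1 * Im z2) = 2(cosh d(z1,z2) - 1)
    return abs(z1 - z2) ** 2 / (z1.imag * z2.imag)

def random_psl2r():
    # random real 2x2 matrix, rescaled to have determinant 1
    while True:
        a, b, c, d = (random.uniform(-2, 2) for _ in range(4))
        det = a * d - b * c
        if det > 0.1:
            s = det ** 0.5
            return (a / s, b / s, c / s, d / s)

z1, z2 = 0.3 + 1.2j, -0.7 + 0.4j
g = random_psl2r()
print(dist_expr(z1, z2), dist_expr(moebius_act(g, z1), moebius_act(g, z2)))
```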

The Gauss curvature of the Poincaré half-plane can be computed to be the constant {-1}, thus {{\mathbf H}} is a model for two-dimensional hyperbolic geometry, in much the same way that the unit sphere {S^2} in {{\bf R}^3} is a model for two-dimensional spherical geometry (or {{\bf R}^2} is a model for two-dimensional Euclidean geometry). (Indeed, {{\mathbf H}} is isometric (via projection to a null hyperplane) to the upper unit hyperboloid {\{ (x,t) \in {\bf R}^{2+1}: t = \sqrt{1+|x|^2}\}} in the Minkowski spacetime {{\bf R}^{2+1}}, which is the direct analogue of the unit sphere in Euclidean space {{\bf R}^3} or the plane {{\bf R}^2} in Galilean spacetime {{\bf R}^2 \times {\bf R}}.)

One can inject arithmetic into this geometric structure by passing from the Lie group {\hbox{PSL}_2({\bf R})} to the full modular group

\displaystyle  \hbox{PSL}_2({\bf Z}) := \{ \begin{pmatrix} a & b \\ c & d \end{pmatrix}: a,b,c,d \in {\bf Z}: ad-bc = 1 \} / \{\pm 1\}

or congruence subgroups such as

\displaystyle  \Gamma_0(q) := \{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \hbox{PSL}_2({\bf Z}): c = 0\ (q) \} / \{ \pm 1 \} \ \ \ \ \ (3)

for natural number {q}, or to the discrete stabiliser {\Gamma_\infty} of the point at infinity:

\displaystyle  \Gamma_\infty := \{ \pm \begin{pmatrix} 1 & b \\ 0 & 1 \end{pmatrix}: b \in {\bf Z} \} / \{\pm 1\}. \ \ \ \ \ (4)

These are discrete subgroups of {\hbox{PSL}_2({\bf R})}, nested by the subgroup inclusions

\displaystyle  \Gamma_\infty \leq \Gamma_0(q) \leq \Gamma_0(1)=\hbox{PSL}_2({\bf Z}) \leq \hbox{PSL}_2({\bf R}).

There are many further discrete subgroups of {\hbox{PSL}_2({\bf R})} (known collectively as Fuchsian groups) that one could consider, but we will focus attention on these three groups in this post.

Any discrete subgroup {\Gamma} of {\hbox{PSL}_2({\bf R})} generates a quotient space {\Gamma \backslash {\mathbf H}}, which in general will be a non-compact two-dimensional orbifold. One can understand such a quotient space by working with a fundamental domain {\hbox{Fund}( \Gamma \backslash {\mathbf H})} – a set consisting of a single representative of each of the orbits {\Gamma z} of {\Gamma} in {{\mathbf H}}. This fundamental domain is by no means uniquely defined, but if the fundamental domain is chosen with some reasonable amount of regularity, one can view {\Gamma \backslash {\mathbf H}} as the fundamental domain with the boundaries glued together in an appropriate sense. Among other things, fundamental domains can be used to induce a volume measure {\mu = \mu_{\Gamma \backslash {\mathbf H}}} on {\Gamma \backslash {\mathbf H}} from the volume measure {\mu = \mu_{\mathbf H}} on {{\mathbf H}} (restricted to a fundamental domain). By abuse of notation we will refer to both measures simply as {\mu} when there is no chance of confusion.

For instance, a fundamental domain for {\Gamma_\infty \backslash {\mathbf H}} is given (up to null sets) by the strip {\{ z \in {\mathbf H}: |\hbox{Re}(z)| < \frac{1}{2} \}}, with {\Gamma_\infty \backslash {\mathbf H}} identifiable with the cylinder formed by gluing together the two sides of the strip. A fundamental domain for {\hbox{PSL}_2({\bf Z}) \backslash {\mathbf H}} is famously given (again up to null sets) by an upper portion {\{ z \in {\mathbf H}: |\hbox{Re}(z)| < \frac{1}{2}; |z| > 1 \}}, with the left and right sides again glued to each other, and the left and right halves of the circular boundary glued to each other. A fundamental domain for {\Gamma_0(q) \backslash {\mathbf H}} can be formed by gluing together

\displaystyle  [\hbox{PSL}_2({\bf Z}) : \Gamma_0(q)] = q \prod_{p|q} (1 + \frac{1}{p}) = q^{1+o(1)}

copies of a fundamental domain for {\hbox{PSL}_2({\bf Z}) \backslash {\mathbf H}} in a rather complicated but interesting fashion.
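
The index formula displayed above can be checked directly. It is a classical fact (not proven in this post) that the cosets of {\Gamma_0(q)} in {\hbox{PSL}_2({\bf Z})} are parameterised by the projective line {{\bf P}^1({\bf Z}/q{\bf Z})} via the bottom row of a matrix; the sketch below (illustration only) confirms that {|{\bf P}^1({\bf Z}/q{\bf Z})|} matches {q \prod_{p|q}(1+\frac{1}{p})} for a few values of {q}.

```python
from math import gcd
from sympy import primefactors, totient

def psi(q):
    # q * prod_{p | q} (1 + 1/p), computed as an exact integer
    n = q
    for p in primefactors(q):
        n = n * (p + 1) // p
    return n

def p1_size(q):
    # |P^1(Z/qZ)|: pairs (c, d) with gcd(c, d, q) = 1, modulo the free action of units
    pairs = sum(1 for c in range(q) for d in range(q) if gcd(gcd(c, d), q) == 1)
    return pairs // int(totient(q))

for q in (2, 6, 12, 30, 36):
    print(q, psi(q), p1_size(q))
```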

While fundamental domains can be a convenient choice of coordinates to work with for some computations (as well as for drawing appropriate pictures), it is geometrically more natural to avoid working explicitly on such domains, and instead work directly on the quotient spaces {\Gamma \backslash {\mathbf H}}. In order to analyse functions {f: \Gamma \backslash {\mathbf H} \rightarrow {\bf C}} on such orbifolds, it is convenient to lift such functions back up to {{\mathbf H}} and identify them with functions {f: {\mathbf H} \rightarrow {\bf C}} which are {\Gamma}-automorphic in the sense that {f( \gamma z ) = f(z)} for all {z \in {\mathbf H}} and {\gamma \in \Gamma}. Such functions will be referred to as {\Gamma}-automorphic forms, or automorphic forms for short (we always implicitly assume all such functions to be measurable). (Strictly speaking, these are the automorphic forms with trivial factor of automorphy; one can certainly consider other factors of automorphy, particularly when working with holomorphic modular forms, which corresponds to sections of a more non-trivial line bundle over {\Gamma \backslash {\mathbf H}} than the trivial bundle {(\Gamma \backslash {\mathbf H}) \times {\bf C}} that is implicitly present when analysing scalar functions {f: {\mathbf H} \rightarrow {\bf C}}. However, we will not discuss this (important) more general situation here.)

An important way to create a {\Gamma}-automorphic form is to start with a non-automorphic function {f: {\mathbf H} \rightarrow {\bf C}} obeying suitable decay conditions (e.g. bounded with compact support will suffice) and form the Poincaré series {P_\Gamma[f]: {\mathbf H} \rightarrow {\bf C}} defined by

\displaystyle  P_{\Gamma}[f](z) = \sum_{\gamma \in \Gamma} f(\gamma z),

which is clearly {\Gamma}-automorphic. (One could equivalently write {f(\gamma^{-1} z)} in place of {f(\gamma z)} here; there are good arguments for both conventions, but I have ultimately decided to use the {f(\gamma z)} convention, which makes explicit computations a little neater at the cost of making the group actions work in the opposite order.) Thus we naturally see sums over {\Gamma} associated with {\Gamma}-automorphic forms. A little more generally, given a subgroup {\Gamma_\infty} of {\Gamma} and a {\Gamma_\infty}-automorphic function {f: {\mathbf H} \rightarrow {\bf C}} of suitable decay, we can form a relative Poincaré series {P_{\Gamma_\infty \backslash \Gamma}[f]: {\mathbf H} \rightarrow {\bf C}} by

\displaystyle  P_{\Gamma_\infty \backslash \Gamma}[f](z) = \sum_{\gamma \in \hbox{Fund}(\Gamma_\infty \backslash \Gamma)} f(\gamma z)

where {\hbox{Fund}(\Gamma_\infty \backslash \Gamma)} is any fundamental domain for {\Gamma_\infty \backslash \Gamma}, that is to say a subset of {\Gamma} consisting of exactly one representative for each right coset of {\Gamma_\infty}. As {f} is {\Gamma_\infty}-automorphic, we see (if {f} has suitable decay) that {P_{\Gamma_\infty \backslash \Gamma}[f]} does not depend on the precise choice of fundamental domain, and is {\Gamma}-automorphic. These operations are all compatible with each other, for instance {P_\Gamma = P_{\Gamma_\infty \backslash \Gamma} \circ P_{\Gamma_\infty}}. Key examples of Poincaré series are the Eisenstein series, although there are of course many other Poincaré series one can consider by varying the test function {f}.

For future reference we record the basic but fundamental unfolding identities

\displaystyle  \int_{\Gamma \backslash {\mathbf H}} P_\Gamma[f] g\ d\mu_{\Gamma \backslash {\mathbf H}} = \int_{\mathbf H} f g\ d\mu_{\mathbf H} \ \ \ \ \ (5)

for any function {f: {\mathbf H} \rightarrow {\bf C}} with sufficient decay, and any {\Gamma}-automorphic function {g} of reasonable growth (e.g. {f} bounded with compact support, and {g} bounded, will suffice). Note that {g} is viewed as a function on {\Gamma \backslash {\mathbf H}} on the left-hand side, and as a {\Gamma}-automorphic function on {{\mathbf H}} on the right-hand side. More generally, one has

\displaystyle  \int_{\Gamma \backslash {\mathbf H}} P_{\Gamma_\infty \backslash \Gamma}[f] g\ d\mu_{\Gamma \backslash {\mathbf H}} = \int_{\Gamma_\infty \backslash {\mathbf H}} f g\ d\mu_{\Gamma_\infty \backslash {\mathbf H}} \ \ \ \ \ (6)

whenever {\Gamma_\infty \leq \Gamma} are discrete subgroups of {\hbox{PSL}_2({\bf R})}, {f} is a {\Gamma_\infty}-automorphic function with sufficient decay on {\Gamma_\infty \backslash {\mathbf H}}, and {g} is a {\Gamma}-automorphic (and thus also {\Gamma_\infty}-automorphic) function of reasonable growth. These identities will allow us to move fairly freely between the three domains {{\mathbf H}}, {\Gamma_\infty \backslash {\mathbf H}}, and {\Gamma \backslash {\mathbf H}} in our analysis.

When computing various statistics of a Poincaré series {P_\Gamma[f]}, such as its values {P_\Gamma[f](z)} at special points {z}, or the {L^2} quantity {\int_{\Gamma \backslash {\mathbf H}} |P_\Gamma[f]|^2\ d\mu}, expressions of interest to analytic number theory naturally emerge. We list three basic examples of this below, discussed somewhat informally in order to highlight the main ideas rather than the technical details.

The first example we will give concerns the problem of estimating the sum

\displaystyle  \sum_{n \leq x} \tau(n) \tau(n+1), \ \ \ \ \ (7)

where {\tau(n) := \sum_{d|n} 1} is the divisor function. This can be rewritten (by factoring {n=bc} and {n+1=ad}) as

\displaystyle  \sum_{ a,b,c,d \in {\bf N}: ad-bc = 1} 1_{bc \leq x} \ \ \ \ \ (8)

which is basically a sum over the full modular group {\hbox{PSL}_2({\bf Z})}. At this point we will “cheat” a little by moving to the related, but different, sum

\displaystyle  \sum_{a,b,c,d \in {\bf Z}: ad-bc = 1} 1_{a^2+b^2+c^2+d^2 \leq x}. \ \ \ \ \ (9)

This sum is not exactly the same as (8), but will be a little easier to handle, and it is plausible that the methods used to handle this sum can be modified to handle (8). Observe from (2) and some calculation that the distance between {i} and {\begin{pmatrix} a & b \\ c & d \end{pmatrix} i = \frac{ai+b}{ci+d}} is given by the formula

\displaystyle  2(\cosh(d(i,\begin{pmatrix} a & b \\ c & d \end{pmatrix} i))-1) = a^2+b^2+c^2+d^2 - 2

and so one can express the above sum as

\displaystyle  2 \sum_{\gamma \in \hbox{PSL}_2({\bf Z})} 1_{d(i,\gamma i) \leq \hbox{cosh}^{-1}(x/2)}

(the factor of {2} coming from the quotient by {\{\pm 1\}} in the projective special linear group); one can express this as {P_\Gamma[f](i)}, where {\Gamma = \hbox{PSL}_2({\bf Z})} and {f} is the indicator function of the ball {B(i, \hbox{cosh}^{-1}(x/2))}. Thus we see that expressions such as (7) are related to evaluations of Poincaré series. (In practice, it is much better to use smoothed out versions of indicator functions in order to obtain good control on sums such as (7) or (9), but we gloss over this technical detail here.)
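
The identity used here, {2(\cosh(d(i,\gamma i))-1) = a^2+b^2+c^2+d^2-2}, can be verified directly from (2) for a few explicit matrices (a purely illustrative sketch):

```python
# check 2(cosh d(i, gamma i) - 1) = a^2 + b^2 + c^2 + d^2 - 2 via formula (2)
samples = [(1, 0, 0, 1), (1, 1, 0, 1), (2, 1, 1, 1), (3, 2, 4, 3), (5, 2, 2, 1)]
for a, b, c, d in samples:
    assert a * d - b * c == 1
    w = (a * 1j + b) / (c * 1j + d)          # the point gamma i
    lhs = abs(w - 1j) ** 2 / w.imag          # |i - gamma i|^2 / (Im(i) Im(gamma i))
    rhs = a * a + b * b + c * c + d * d - 2
    print((a, b, c, d), round(lhs, 10), rhs)
```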

The second example concerns the relative

\displaystyle  \sum_{n \leq x} \tau(n^2+1) \ \ \ \ \ (10)

of the sum (7). Note from multiplicativity that (7) can be written as {\sum_{n \leq x} \tau(n^2+n)}, which is superficially very similar to (10), but with the key difference that the polynomial {n^2+1} is irreducible over the integers.

As with (7), we may expand (10) as

\displaystyle  \sum_{A,B,C \in {\bf N}: B^2 - AC = -1} 1_{B \leq x}.

At first glance this does not look like a sum over a modular group, but one can manipulate this expression into such a form in one of two (closely related) ways. First, observe that any factorisation {B + i = (a-bi) (c+di)} of {B+i} into Gaussian integers {a-bi, c+di} gives rise (upon taking norms) to an identity of the form {B^2 - AC = -1}, where {A = a^2+b^2} and {C = c^2+d^2}. Conversely, by using the unique factorisation of the Gaussian integers, every identity of the form {B^2-AC=-1} gives rise to a factorisation of the form {B+i = (a-bi) (c+di)}, essentially uniquely up to units. Now note that {(a-bi)(c+di)} is of the form {B+i} if and only if {ad-bc=1}, in which case {B = ac+bd}. Thus we can essentially write the above sum as something like

\displaystyle  \sum_{a,b,c,d: ad-bc = 1} 1_{|ac+bd| \leq x} \ \ \ \ \ (11)

and the modular group {\hbox{PSL}_2({\bf Z})} is now manifest. An equivalent way to see these manipulations is as follows. A triple {A,B,C} of natural numbers with {B^2-AC=-1} gives rise to a positive quadratic form {Ax^2+2Bxy+Cy^2} of normalised discriminant {B^2-AC} equal to {-1} with integer coefficients (it is natural here to allow {B} to take integer values rather than just natural number values by essentially doubling the sum). The group {\hbox{PSL}_2({\bf Z})} acts on the space of such quadratic forms in a natural fashion (by composing the quadratic form with the inverse {\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}} of an element {\begin{pmatrix} a & b \\ c & d \end{pmatrix}} of {\hbox{SL}_2({\bf Z})}). Because the discriminant {-1} has class number one (this fact is equivalent to the unique factorisation of the Gaussian integers, as discussed in this previous post), every form {Ax^2 + 2Bxy + Cy^2} in this space is equivalent (under the action of some element of {\hbox{PSL}_2({\bf Z})}) to the standard quadratic form {x^2+y^2}. In other words, one has

\displaystyle  Ax^2 + 2Bxy + Cy^2 = (dx-by)^2 + (-cx+ay)^2

which (up to a harmless sign, and after interchanging the roles of {A} and {C}) is exactly the representation {B = ac+bd}, {A = a^2+b^2}, {C = c^2+d^2} introduced earlier, and leads to the same reformulation of the sum (10) in terms of expressions like (11). Similar considerations also apply if the quadratic polynomial {n^2+1} is replaced by another quadratic, although one has to account for the fact that the class number may now exceed one (so that unique factorisation in the associated quadratic ring of integers breaks down), and in the positive discriminant case the fact that the group of units might be infinite presents another significant technical problem.

Note that {\begin{pmatrix} a & b \\ c & d \end{pmatrix} i = \frac{ai+b}{ci+d}} has real part {\frac{ac+bd}{c^2+d^2}} and imaginary part {\frac{1}{c^2+d^2}}. Thus (11) is (up to a factor of two) the Poincaré series {P_\Gamma[f](i)} as in the preceding example, except that {f} is now the indicator of the sector {\{ z: |\hbox{Re} z| \leq x |\hbox{Im} z| \}}.
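
The Gaussian-integer dictionary used above is also easy to confirm by brute force; the following sketch (illustration only) runs over integer matrices of determinant one and checks that {(a-bi)(c+di) = (ac+bd) + i}, and that {A = a^2+b^2}, {B = ac+bd}, {C = c^2+d^2} obey {B^2-AC=-1}.

```python
import itertools

checked = 0
for a, b, c, d in itertools.product(range(-5, 6), repeat=4):
    if a * d - b * c != 1:
        continue
    w = complex(a, -b) * complex(c, d)                 # (a - bi)(c + di)
    A, B, C = a * a + b * b, a * c + b * d, c * c + d * d
    assert w == complex(B, 1)                          # equals B + i, since ad - bc = 1
    assert B * B - A * C == -1
    checked += 1
print("verified for", checked, "matrices with ad - bc = 1")
```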

Sums involving subgroups of the full modular group, such as {\Gamma_0(q)}, often arise when imposing congruence conditions on sums such as (10), for instance when trying to estimate the expression {\sum_{n \leq x: q|n} \tau(n^2+1)} when {q} and {x} are large. As before, one then soon arrives at the problem of evaluating a Poincaré series at one or more special points, where the series is now over {\Gamma_0(q)} rather than {\hbox{PSL}_2({\bf Z})}.

The third and final example concerns averages of Kloosterman sums

\displaystyle  S(m,n;c) := \sum_{x \in ({\bf Z}/c{\bf Z})^\times} e( \frac{mx + n\overline{x}}{c} ) \ \ \ \ \ (12)

where {e(\theta) := e^{2\pi i\theta}} and {\overline{x}} is the inverse of {x} in the multiplicative group {({\bf Z}/c{\bf Z})^\times}. It turns out that the {L^2} norms of Poincaré series {P_\Gamma[f]} or {P_{\Gamma_\infty \backslash \Gamma}[f]} are closely tied to such averages. Consider for instance the quantity

\displaystyle  \int_{\Gamma_0(q) \backslash {\mathbf H}} |P_{\Gamma_\infty \backslash \Gamma_0(q)}[f]|^2\ d\mu_{\Gamma_0(q) \backslash {\mathbf H}} \ \ \ \ \ (13)

where {q} is a natural number and {f} is a {\Gamma_\infty}-automorphic form that is of the form

\displaystyle  f(x+iy) = F(my) e(m x)

for some non-zero integer {m} and some test function {F: (0,+\infty) \rightarrow {\bf C}}, which for the sake of discussion we will take to be smooth and compactly supported. Using the unfolding formula (6), we may rewrite (13) as

\displaystyle  \int_{\Gamma_\infty \backslash {\mathbf H}} \overline{f} P_{\Gamma_\infty \backslash \Gamma_0(q)}[f]\ d\mu_{\Gamma_\infty \backslash {\mathbf H}}.

To compute this, we use the double coset decomposition

\displaystyle  \Gamma_0(q) = \Gamma_\infty \cup \bigcup_{c \in {\mathbf N}: q|c} \bigcup_{1 \leq d \leq c: (d,c)=1} \Gamma_\infty \begin{pmatrix} a & b \\ c & d \end{pmatrix} \Gamma_\infty,

where for each {c,d}, {a,b} are arbitrarily chosen integers such that {ad-bc=1}. To see this decomposition, observe that every element {\begin{pmatrix} a & b \\ c & d \end{pmatrix}} in {\Gamma_0(q)} outside of {\Gamma_\infty} can be assumed to have {c>0} by applying a sign {\pm}, and then using the row and column operations coming from left and right multiplication by {\Gamma_\infty} (that is, shifting the top row by an integer multiple of the bottom row, and shifting the right column by an integer multiple of the left column) one can place {d} in the interval {[1,c]} and {(a,b)} to be any specified integer pair with {ad-bc=1}. From this we see that

\displaystyle  P_{\Gamma_\infty \backslash \Gamma_0(q)}[f] = f + \sum_{c \in {\mathbf N}: q|c} \sum_{1 \leq d \leq c: (d,c)=1} P_{\Gamma_\infty}[ f( \begin{pmatrix} a & b \\ c & d \end{pmatrix} \cdot ) ]

and so from further use of the unfolding formula (5) we may expand (13) as

\displaystyle  \int_{\Gamma_\infty \backslash {\mathbf H}} |f|^2\ d\mu_{\Gamma_\infty \backslash {\mathbf H}}

\displaystyle  + \sum_{c \in {\mathbf N}: q|c} \sum_{1 \leq d \leq c: (d,c)=1} \int_{\mathbf H} \overline{f}(z) f( \begin{pmatrix} a & b \\ c & d \end{pmatrix} z)\ d\mu_{\mathbf H}.

The first integral is just {m \int_0^\infty |F(y)|^2 \frac{dy}{y^2}}. The second expression is more interesting. We have

\displaystyle  \begin{pmatrix} a & b \\ c & d \end{pmatrix} z = \frac{az+b}{cz+d} = \frac{a}{c} - \frac{1}{c(cz+d)}

\displaystyle  = \frac{a}{c} - \frac{cx+d}{c((cx+d)^2+c^2y^2)} + \frac{iy}{(cx+d)^2 + c^2y^2}

so we can write

\displaystyle  \int_{\mathbf H} \overline{f}(z) f( \begin{pmatrix} a & b \\ c & d \end{pmatrix} z)\ d\mu_{\mathbf H}

as

\displaystyle  \int_0^\infty \int_{\bf R} \overline{F}(my) F(\frac{my}{(cx+d)^2 + c^2y^2}) e( -mx + \frac{ma}{c} - m \frac{cx+d}{c((cx+d)^2+c^2y^2)} )

\displaystyle \frac{dx dy}{y^2}

which on shifting {x} by {d/c} simplifies a little to

\displaystyle  e( \frac{ma}{c} + \frac{md}{c} ) \int_0^\infty \int_{\bf R} \overline{F}(my) F(\frac{my}{c^2(x^2 + y^2)}) e(- mx - m \frac{x}{c^2(x^2+y^2)} )

\displaystyle  \frac{dx dy}{y^2}

and then on scaling {x,y} by {m} simplifies a little further to

\displaystyle  e( \frac{ma}{c} + \frac{md}{c} ) \int_0^\infty \int_{\bf R} \overline{F}(y) F(\frac{m^2}{c^2} \frac{y}{x^2 + y^2}) e(- x - \frac{m^2}{c^2} \frac{x}{x^2+y^2} )\ \frac{dx dy}{y^2}.

Note that as {ad-bc=1}, we have {a = \overline{d}} modulo {c}. Comparing the above calculations with (12), we can thus write (13) as

\displaystyle  m (\int_0^\infty |F(y)|^2 \frac{dy}{y^2} + \sum_{q|c} \frac{S(m,m;c)}{c} V(\frac{m}{c})) \ \ \ \ \ (14)

where

\displaystyle  V(u) := \frac{1}{u} \int_0^\infty \int_{\bf R} \overline{F}(y) F(u^2 \frac{y}{x^2 + y^2}) e(- x - u^2 \frac{x}{x^2+y^2} )\ \frac{dx dy}{y^2}

is a certain integral involving {F} and a parameter {u}, but which does not depend explicitly on parameters such as {m,c,d}. Thus we have indeed expressed the {L^2} expression (13) in terms of Kloosterman sums. It is possible to invert this analysis and express various weighted sums of Kloosterman sums in terms of {L^2} expressions (possibly involving inner products instead of norms) of Poincaré series, but we will not do so here; see Chapter 16 of Iwaniec and Kowalski for further details.

Traditionally, automorphic forms have been analysed using the spectral theory of the Laplace-Beltrami operator {-\Delta} on spaces such as {\Gamma\backslash {\mathbf H}} or {\Gamma_\infty \backslash {\mathbf H}}, so that a Poincaré series such as {P_\Gamma[f]} might be expanded out using inner products of {P_\Gamma[f]} (or, by the unfolding identities, {f}) with various generalised eigenfunctions of {-\Delta} (such as cuspidal eigenforms, or Eisenstein series). With this approach, special functions, and specifically the modified Bessel functions {K_{it}} of the second kind, play a prominent role, basically because the {\Gamma_\infty}-automorphic functions

\displaystyle  x+iy \mapsto y^{1/2} K_{it}(2\pi |m| y) e(mx)

for {t \in {\bf R}} and {m \in {\bf Z}} non-zero are generalised eigenfunctions of {-\Delta} (with eigenvalue {\frac{1}{4}+t^2}), and are almost square-integrable on {\Gamma_\infty \backslash {\mathbf H}} (the {L^2} norm diverges only logarithmically at one end {y \rightarrow 0^+} of the cylinder {\Gamma_\infty \backslash {\mathbf H}}, while decaying exponentially fast at the other end {y \rightarrow +\infty}).

However, as discussed in this previous post, the spectral theory of an essentially self-adjoint operator such as {-\Delta} is basically equivalent to the theory of various solution operators associated to partial differential equations involving that operator, such as the Helmholtz equation {(-\Delta + k^2) u = f}, the heat equation {\partial_t u = \Delta u}, the Schrödinger equation {i\partial_t u + \Delta u = 0}, or the wave equation {\partial_{tt} u = \Delta u}. Thus, one can hope to rephrase many arguments that involve spectral data of {-\Delta} into arguments that instead involve resolvents {(-\Delta + k^2)^{-1}}, heat kernels {e^{t\Delta}}, Schrödinger propagators {e^{it\Delta}}, or wave propagators {e^{\pm it\sqrt{-\Delta}}}, or involve the PDE more directly (e.g. applying integration by parts and energy methods to solutions of such PDE). This is certainly done to some extent in the existing literature; resolvents and heat kernels, for instance, are often utilised. In this post, I would like to explore the possibility of reformulating spectral arguments instead using the inhomogeneous wave equation

\displaystyle  \partial_{tt} u - \Delta u = F.

Actually it will be a bit more convenient to normalise the Laplacian by {\frac{1}{4}}, and look instead at the automorphic wave equation

\displaystyle  \partial_{tt} u + (-\Delta - \frac{1}{4}) u = F. \ \ \ \ \ (15)

This equation somewhat resembles a “Klein-Gordon” type equation, except that the mass is imaginary! This would lead to pathological behaviour were it not for the negative curvature, which in principle creates a spectral gap of {\frac{1}{4}} that cancels out this factor.

The point is that the wave equation approach gives access to some nice PDE techniques, such as energy methods, Sobolev inequalities and finite speed of propagation, which are somewhat submerged in the spectral framework. The wave equation also interacts well with Poincaré series; if for instance {u} and {F} are {\Gamma_\infty}-automorphic solutions to (15) obeying suitable decay conditions, then their Poincaré series {P_{\Gamma_\infty \backslash \Gamma}[u]} and {P_{\Gamma_\infty \backslash \Gamma}[F]} will be {\Gamma}-automorphic solutions to the same equation (15), basically because the Laplace-Beltrami operator commutes with translations. Because of these facts, it is possible to replicate several standard spectral theory arguments in the wave equation framework, without having to deal directly with things like the asymptotics of modified Bessel functions. The wave equation approach to automorphic theory was introduced by Faddeev and Pavlov (using the Lax-Phillips scattering theory), and developed further by Lax and Phillips, to recover many spectral facts about the Laplacian on modular curves, such as the Weyl law and the Selberg trace formula. Here, I will illustrate this by deriving three basic applications of automorphic methods in a wave equation framework, namely

  • Using the Weil bound on Kloosterman sums to derive Selberg’s 3/16 theorem on the least non-trivial eigenvalue for {-\Delta} on {\Gamma_0(q) \backslash {\mathbf H}} (discussed previously here);
  • Conversely, showing that Selberg’s eigenvalue conjecture (improving Selberg’s {3/16} bound to the optimal {1/4}) implies an optimal bound on (smoothed) sums of Kloosterman sums; and
  • Using the same bound to obtain pointwise bounds on Poincaré series similar to the ones discussed above. (Actually, the argument here does not use the wave equation, instead it just uses the Sobolev inequality.)

This post originated from an attempt to finally learn this part of analytic number theory properly, and to see if I could use a PDE-based perspective to understand it better. Ultimately, this is not that dramatic a departure from the standard approach to this subject, but I found it useful to think of things in this fashion, probably due to my existing background in PDE.

I thank Bill Duke and Ben Green for helpful discussions. My primary reference for this theory was Chapters 15, 16, and 21 of Iwaniec and Kowalski.


The equidistribution theorem asserts that if {\alpha \in {\bf R}/{\bf Z}} is an irrational phase, then the sequence {(n\alpha)_{n=1}^\infty} is equidistributed on the unit circle, or equivalently that

\displaystyle \frac{1}{N} \sum_{n=1}^N F(n\alpha) \rightarrow \int_{{\bf R}/{\bf Z}} F(x)\ dx

for any continuous (or equivalently, for any smooth) function {F: {\bf R}/{\bf Z} \rightarrow {\bf C}}. By approximating {F} uniformly by a Fourier series, this claim is equivalent to that of showing that

\displaystyle \frac{1}{N} \sum_{n=1}^N e(hn\alpha) \rightarrow 0

for any non-zero integer {h} (where {e(x) := e^{2\pi i x}}), which is easily verified from the irrationality of {\alpha} and the geometric series formula. Conversely, if {\alpha} is rational, then clearly {\frac{1}{N} \sum_{n=1}^N e(hn\alpha)} fails to go to zero when {h} is a multiple of the denominator of {\alpha}.
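
Numerically, this dichotomy is easy to see; the sketch below (illustration only) computes {|\frac{1}{N}\sum_{n=1}^N e(hn\alpha)|} for an irrational {\alpha} and a rational one, and one sees decay for every {h} in the first case but not at {h=7} in the second.

```python
import cmath

def avg_exp(h, alpha, N):
    # | (1/N) * sum_{n=1}^{N} e(h n alpha) |
    return abs(sum(cmath.exp(2j * cmath.pi * h * n * alpha)
                   for n in range(1, N + 1))) / N

N = 10**4
for h in (1, 2, 7):
    print(h, round(avg_exp(h, 2 ** 0.5, N), 5), round(avg_exp(h, 3 / 7, N), 5))
# every frequency decays for alpha = sqrt(2), but h = 7 gives ~1 for alpha = 3/7
```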

One can then ask for more quantitative information about the decay of the exponential sum {\frac{1}{N} \sum_{n=1}^N e(n \alpha)}, or more generally of exponential sums of the form {\frac{1}{|Q|} \sum_{n \in Q} e(P(n))} for an arithmetic progression {Q} (in this post all progressions are understood to be finite) and a polynomial {P: Q \rightarrow {\bf R}/{\bf Z}}. It will be convenient to phrase such information in the form of an inverse theorem, describing those phases for which the exponential sum is large. Indeed, we have

Lemma 1 (Geometric series formula, inverse form) Let {Q \subset {\bf Z}} be an arithmetic progression of length at most {N} for some {N \geq 1}, and let {P(n) = n \alpha + \beta} be a linear polynomial for some {\alpha,\beta \in {\bf R}/{\bf Z}}. If

\displaystyle \frac{1}{N} |\sum_{n \in Q} e(P(n))| \geq \delta

for some {\delta > 0}, then there exists a subprogression {Q'} of {Q} of size {|Q'| \gg \delta^2 N} such that {P(n)} varies by at most {\delta} on {Q'} (that is to say, {P(n)} lies in a subinterval of {{\bf R}/{\bf Z}} of length at most {\delta}).

Proof: By a linear change of variable we may assume that {Q} is of the form {\{0,\dots,N'-1\}} for some {N' \geq 1}. We may of course assume that {\alpha} is non-zero in {{\bf R}/{\bf Z}}, so that {\|\alpha\|_{{\bf R}/{\bf Z}} > 0} ({\|x\|_{{\bf R}/{\bf Z}}} denotes the distance from {x} to the nearest integer). From the geometric series formula we see that

\displaystyle |\sum_{n \in Q} e(P(n))| \leq \frac{2}{|e(\alpha) - 1|} \ll \frac{1}{\|\alpha\|_{{\bf R}/{\bf Z}}},

and so {\|\alpha\|_{{\bf R}/{\bf Z}} \ll \frac{1}{\delta N}}. Setting {Q' := \{ n \in Q: n \leq c \delta^2 N \}} for some sufficiently small absolute constant {c}, we obtain the claim. \Box
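
The inequality {|\sum_{n \in Q} e(P(n))| \ll 1/\|\alpha\|_{{\bf R}/{\bf Z}}} driving this proof can be checked numerically as follows (an illustrative sketch only):

```python
import cmath

def norm_RZ(x):
    # distance from x to the nearest integer
    return abs(x - round(x))

def linear_sum(alpha, N):
    # |sum_{n=0}^{N-1} e(n alpha)|
    return abs(sum(cmath.exp(2j * cmath.pi * n * alpha) for n in range(N)))

N = 12345
for alpha in (0.01, 0.31, 0.49, 0.7):
    # geometric series bound: |sum| <= 2/|e(alpha) - 1| <= 1/(2 ||alpha||_{R/Z})
    print(alpha, round(linear_sum(alpha, N), 2), round(1 / (2 * norm_RZ(alpha)), 2))
```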

Thus, in order for a linear phase {P(n)} to fail to be equidistributed on some long progression {Q}, {P} must in fact be almost constant on a large piece of {Q}.

As is well known, this phenomenon generalises to higher order polynomials. To achieve this, we need two elementary additional lemmas. The first relates the exponential sums of {P} to the exponential sums of its “first derivatives” {n \mapsto P(n+h)-P(n)}.

Lemma 2 (Van der Corput lemma, inverse form) Let {Q \subset {\bf Z}} be an arithmetic progression of length at most {N}, and let {P: Q \rightarrow {\bf R}/{\bf Z}} be an arbitrary function such that

\displaystyle \frac{1}{N} |\sum_{n \in Q} e(P(n))| \geq \delta \ \ \ \ \ (1)

 

for some {\delta > 0}. Then, for {\gg \delta^2 N} integers {h \in Q-Q}, there exists a subprogression {Q_h} of {Q}, of the same spacing as {Q}, such that

\displaystyle \frac{1}{N} |\sum_{n \in Q_h} e(P(n+h)-P(n))| \gg \delta^2. \ \ \ \ \ (2)

 

Proof: Squaring (1), we see that

\displaystyle \sum_{n,n' \in Q} e(P(n') - P(n)) \geq \delta^2 N^2.

We write {n' = n+h} and conclude that

\displaystyle \sum_{h \in Q-Q} \sum_{n \in Q_h} e( P(n+h)-P(n) ) \geq \delta^2 N^2

where {Q_h := Q \cap (Q-h)} is a subprogression of {Q} of the same spacing. Since {\sum_{n \in Q_h} e( P(n+h)-P(n) ) = O(N)}, we conclude that

\displaystyle |\sum_{n \in Q_h} e( P(n+h)-P(n) )| \gg \delta^2 N

for {\gg \delta^2 N} values of {h} (this can be seen, much like the pigeonhole principle, by arguing via contradiction for a suitable choice of implied constants). The claim follows. \Box
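
Here is a small numerical illustration (not part of the proof) of this lemma for the quadratic phase {P(n) = \alpha n^2} with {\alpha} close to {1/3}: the original sum has {\frac{1}{N}|\sum_n e(P(n))| = \delta} roughly {1/\sqrt{3} \approx 0.58}, and roughly a {\delta^2 \approx 1/3} proportion of the shifts {h} (namely the multiples of {3}) produce a large differenced sum.

```python
import cmath

def exp_avg(phases, N):
    return abs(sum(cmath.exp(2j * cmath.pi * t) for t in phases)) / N

alpha, N = 1 / 3 + 1e-8, 3000   # a phase near a rational, so the quadratic sum is large
P = lambda n: alpha * n * n

delta = exp_avg((P(n) for n in range(N)), N)
print("delta =", round(delta, 3), " delta^2 =", round(delta ** 2, 3))

# shifts h whose differenced (now linear) sum exceeds delta^2
large_h = [h for h in range(1, 300)
           if exp_avg((P(n + h) - P(n) for n in range(N)), N) > delta ** 2]
print(len(large_h), "large shifts out of 299; the first few are", large_h[:6])
```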

The second lemma (which we recycle from this previous blog post) is a variant of the equidistribution theorem.

Lemma 3 (Vinogradov lemma) Let {I \subset [-N,N] \cap {\bf Z}} be an interval for some {N \geq 1}, and let {\theta \in{\bf R}/{\bf Z}} be such that {\|n\theta\|_{{\bf R}/{\bf Z}} \leq \varepsilon} for at least {\delta N} values of {n \in I}, for some {0 < \varepsilon, \delta < 1}. Then either

\displaystyle N < \frac{2}{\delta}

or

\displaystyle \varepsilon > 10^{-2} \delta

or else there is a natural number {q \leq 2/\delta} such that

\displaystyle \| q \theta \|_{{\bf R}/{\bf Z}} \ll \frac{\varepsilon}{\delta N}.

Proof: We may assume that {N \geq \frac{2}{\delta}} and {\varepsilon \leq 10^{-2} \delta}, since we are done otherwise. Then there are at least two {n \in I} with {\|n \theta \|_{{\bf R}/{\bf Z}} \leq \varepsilon}, and by the pigeonhole principle we can find {n_1 < n_2} in {I} with {\|n_1 \theta \|_{{\bf R}/{\bf Z}}, \|n_2 \theta \|_{{\bf R}/{\bf Z}} \leq \varepsilon} and {n_2-n_1 \leq \frac{2}{\delta}}. By the triangle inequality, we conclude that there exists at least one natural number {q \leq \frac{2}{\delta}} for which

\displaystyle \| q \theta \|_{{\bf R}/{\bf Z}} \leq 2\varepsilon.

We take {q} to be minimal amongst all such natural numbers, then we see that there exists {a} coprime to {q} and {|\kappa| \leq 2\varepsilon} such that

\displaystyle \theta = \frac{a}{q} + \frac{\kappa}{q}. \ \ \ \ \ (3)

 

If {\kappa=0} then we are done, so suppose that {\kappa \neq 0}. Suppose that {n < m} are elements of {I} such that {\|n\theta \|_{{\bf R}/{\bf Z}}, \|m\theta \|_{{\bf R}/{\bf Z}} \leq \varepsilon} and {m-n \leq \frac{1}{10 \kappa}}. Writing {m-n = qk + r} for some {0 \leq r < q}, we have

\displaystyle \| (m-n) \theta \|_{{\bf R}/{\bf Z}} = \| \frac{ra}{q} + (m-n) \frac{\kappa}{q} \|_{{\bf R}/{\bf Z}} \leq 2\varepsilon.

By hypothesis, {(m-n) \frac{\kappa}{q} \leq \frac{1}{10 q}}; note that as {q \leq 2/\delta} and {\varepsilon \leq 10^{-2} \delta} we also have {\varepsilon \leq \frac{1}{10q}}. This implies that {\| \frac{ra}{q} \|_{{\bf R}/{\bf Z}} < \frac{1}{q}} and thus {r=0}. We then have

\displaystyle |k \kappa| \leq 2 \varepsilon.

We conclude that for fixed {n \in I} with {\|n\theta \|_{{\bf R}/{\bf Z}} \leq \varepsilon}, there are at most {\frac{2\varepsilon}{|\kappa|}} elements {m} of {[n, n + \frac{1}{10 |\kappa|}]} such that {\|m\theta \|_{{\bf R}/{\bf Z}} \leq \varepsilon}. Iterating this with a greedy algorithm, we see that the number of {n \in I} with {\|n\theta \|_{{\bf R}/{\bf Z}} \leq \varepsilon} is at most {(\frac{N}{1/10|\kappa|} + 1) 2\varepsilon/|\kappa|}; since {\varepsilon < 10^{-2} \delta}, this implies that

\displaystyle \delta N \ll 2 \varepsilon / |\kappa|

and the claim follows. \Box
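
The conclusion of the lemma is easy to see in a concrete example (sketch for illustration only): if {\theta} is close to {3/7} then {\|n\theta\|_{{\bf R}/{\bf Z}}} is small for about one in seven values of {n}, and a brute-force search over {q \leq 2/\delta} locates the denominator {q=7}, with {\|q\theta\|_{{\bf R}/{\bf Z}}} far smaller than {\varepsilon/(\delta N)}.

```python
def norm_RZ(x):
    return abs(x - round(x))

N, eps = 10**4, 1e-3
theta = 3 / 7 + 1e-9

good = [n for n in range(1, N + 1) if norm_RZ(n * theta) <= eps]
delta = len(good) / N
print("delta =", delta)

# the lemma predicts some q <= 2/delta with ||q theta|| << eps/(delta N)
q = min(range(1, int(2 / delta) + 1), key=lambda m: norm_RZ(m * theta))
print("q =", q, " ||q theta|| =", norm_RZ(q * theta), " eps/(delta N) =", eps / (delta * N))
```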

Now we can quickly obtain a higher degree version of Lemma 1:

Proposition 4 (Weyl exponential sum estimate, inverse form) Let {Q \subset {\bf Z}} be an arithmetic progression of length at most {N} for some {N \geq 1}, and let {P: {\bf Z} \rightarrow {\bf R}/{\bf Z}} be a polynomial of some degree at most {d \geq 0}. If

\displaystyle \frac{1}{N} |\sum_{n \in Q} e(P(n))| \geq \delta

for some {\delta > 0}, then there exists a subprogression {Q'} of {Q} with {|Q'| \gg_d \delta^{O_d(1)} N} such that {P} varies by at most {\delta} on {Q'}.

Proof: We induct on {d}. The cases {d=0,1} are immediate from Lemma 1. Now suppose that {d \geq 2}, and that the claim had already been proven for {d-1}. To simplify the notation we allow implied constants to depend on {d}. Let the hypotheses be as in the proposition. Clearly {\delta} cannot exceed {1}. By shrinking {\delta} as necessary we may assume that {\delta \leq c} for some sufficiently small constant {c} depending on {d}.

By rescaling we may assume {Q \subset [0,N] \cap {\bf Z}}. By Lemma 2, we see that for {\gg \delta^2 N} choices of {h \in [-N,N] \cap {\bf Z}}, one has

\displaystyle \frac{1}{N} |\sum_{n \in I_h} e(P(n+h) - P(n))| \gg \delta^2

for some interval {I_h \subset [0,N] \cap {\bf Z}}. We write {P(n) = \sum_{i \leq d} \alpha_i n^i}; then {P(n+h)-P(n)} is a polynomial of degree at most {d-1} with leading term {d h \alpha_d n^{d-1}}. We conclude from the induction hypothesis (arguing as in the proof of Corollary 5 below, but in the degree {d-1} case) that for each such {h}, there exists a natural number {q_h \ll \delta^{-O(1)}} such that {\|q_h h \alpha_d \|_{{\bf R}/{\bf Z}} \ll \delta^{-O(1)} / N^{d-1}}; by double-counting, this implies that there are {\gg \delta^{O(1)} N} integers {n} in the interval {[-\delta^{-O(1)} N, \delta^{-O(1)} N] \cap {\bf Z}} such that {\|n \alpha_d \|_{{\bf R}/{\bf Z}} \ll \delta^{-O(1)} / N^{d-1}}. Applying Lemma 3, we conclude that either {N \ll \delta^{-O(1)}}, or that

\displaystyle \| q \alpha_d \|_{{\bf R}/{\bf Z}} \ll \delta^{-O(1)} / N^d. \ \ \ \ \ (4)

 

In the former case the claim is trivial (just take {Q'} to be a point), so we may assume that we are in the latter case.

We partition {Q} into arithmetic progressions {Q'} of spacing {q} and length comparable to {\delta^{C} N} for some large {C} depending on {d} to be chosen later. By hypothesis, we have

\displaystyle \frac{1}{|Q|} |\sum_{n \in Q} e(P(n))| \geq \delta

so by the pigeonhole principle, we have

\displaystyle \frac{1}{|Q'|} |\sum_{n \in Q'} e(P(n))| \geq \delta

for at least one such progression {Q'}. On this progression, we may use the binomial theorem and (4) to write {\alpha_d n^d} as a polynomial in {n} of degree at most {d-1}, plus an error of size {O(\delta^{C - O(1)})}. We thus can write {P(n) = P'(n) + O(\delta^{C-O(1)})} for {n \in Q'} for some polynomial {P'} of degree at most {d-1}. By the triangle inequality, we thus have (for {C} large enough) that

\displaystyle \frac{1}{|Q'|} |\sum_{n \in Q'} e(P'(n))| \gg \delta

and hence by the induction hypothesis we may find a subprogression {Q''} of {Q'} of size {|Q''| \gg \delta^{O(1)} N} such that {P'} varies by at most {\delta/2} on {Q''}, and thus (for {C} large enough again) that {P} varies by at most {\delta} on {Q''}, and the claim follows. \Box

This gives the following corollary (also given as Exercise 16 in this previous blog post):

Corollary 5 (Weyl exponential sum estimate, inverse form II) Let {I \subset [-N,N] \cap {\bf Z}} be a discrete interval for some {N \geq 1}, and let {P(n) = \sum_{i \leq d} \alpha_i n^i} be a polynomial of degree at most {d \geq 0} for some {\alpha_0,\dots,\alpha_d \in {\bf R}/{\bf Z}}. If

\displaystyle \frac{1}{N} |\sum_{n \in I} e(P(n))| \geq \delta

for some {\delta > 0}, then there is a natural number {q \ll_d \delta^{-O_d(1)}} such that {\| q\alpha_i \|_{{\bf R}/{\bf Z}} \ll_d \delta^{-O_d(1)} N^{-i}} for all {i=0,\dots,d}.

One can obtain much better exponents here using Vinogradov’s mean value theorem; see Theorem 1.6 of this paper of Wooley. (Thanks to Mariusz Mirek for this reference.) However, this weaker result already suffices for many applications, and does not need any result as deep as the mean value theorem.

Proof: To simplify notation we allow implied constants to depend on {d}. As before, we may assume that {\delta \leq c} for some small constant {c>0} depending only on {d}. We may also assume that {N \geq \delta^{-C}} for some large {C}, as the claim is trivial otherwise (set {q=1}).

Applying Proposition 4, we can find a natural number {q \ll \delta^{-O(1)}} and an arithmetic subprogression {Q} of {I} such that {|Q| \gg \delta^{O(1)} N} and such that {P} varies by at most {\delta} on {Q}. Writing {Q = \{ qn+r: n \in I'\}} for some interval {I' \subset [0,N] \cap {\bf Z}} of length {\gg \delta^{O(1)} N} and some {0 \leq r < q}, we conclude that the polynomial {n \mapsto P(qn+r)} varies by at most {\delta} on {I'}. Taking {d^{th}} order differences, we conclude that the {d^{th}} coefficient of this polynomial is {O(\delta^{-O(1)} / N^d)}; by the binomial theorem, this implies that {n \mapsto P(qn+r)} differs by at most {O(\delta)} on {I'} from a polynomial of degree at most {d-1}. Iterating this, we conclude that the {i^{th}} coefficient of {n \mapsto P(qn+r)} is {O(\delta^{-O(1)} N^{-i})} for {i=0,\dots,d}, and the claim then follows by inverting the change of variables {n \mapsto qn+r} (and replacing {q} with a larger quantity such as {q^d} as necessary). \Box

For future reference we also record a higher degree version of the Vinogradov lemma.

Lemma 6 (Polynomial Vinogradov lemma) Let {I \subset [-N,N] \cap {\bf Z}} be a discrete interval for some {N \geq 1}, and let {P: {\bf Z} \rightarrow {\bf R}/{\bf Z}} be a polynomial {P(n) = \sum_{i \leq d} \alpha_i n^i} of degree at most {d} for some {d \geq 1} such that {\|P(n)\|_{{\bf R}/{\bf Z}} \leq \varepsilon} for at least {\delta N} values of {n \in I}, for some {0 < \varepsilon, \delta < 1}. Then either

\displaystyle N \ll_d \delta^{-O_d(1)} \ \ \ \ \ (5)

 

or

\displaystyle \varepsilon \gg_d \delta^{O_d(1)} \ \ \ \ \ (6)

 

or else there is a natural number {q \ll_d \delta^{-O_d(1)}} such that

\displaystyle \| q \alpha_i \|_{{\bf R}/{\bf Z}} \ll_d \frac{\delta^{-O_d(1)} \varepsilon}{N^i}

for all {i=0,\dots,d}.

Proof: We induct on {d}. For {d=1} this follows from Lemma 3 (noting that if {\|P(n)\|_{{\bf R}/{\bf Z}}, \|P(n_0)\|_{{\bf R}/{\bf Z}} \leq \varepsilon} then {\|P(n)-P(n_0)\|_{{\bf R}/{\bf Z}} \leq 2\varepsilon}), so suppose that {d \geq 2} and that the claim is already proven for {d-1}. We now allow all implied constants to depend on {d}.

For each {h \in [-2N,2N] \cap {\bf Z}}, let {N_h} denote the number of {n \in [-N,N] \cap {\bf Z}} such that {\| P(n+h)\|_{{\bf R}/{\bf Z}}, \|P(n)\|_{{\bf R}/{\bf Z}} \leq \varepsilon}. By hypothesis, {\sum_{h \in [-2N,2N] \cap {\bf Z}} N_h \gg \delta^2 N^2}, and clearly {N_h = O(N)}, so we must have {N_h \gg \delta^2 N} for {\gg \delta^2 N} choices of {h}. For each such {h}, we then have {\|P(n+h)-P(n)\|_{{\bf R}/{\bf Z}} \leq 2\varepsilon} for {\gg \delta^2 N} choices of {n \in [-N,N] \cap {\bf Z}}, so by induction hypothesis, either (5) or (6) holds, or else for {\gg \delta^{O(1)} N} choices of {h \in [-2N,2N] \cap {\bf Z}}, there is a natural number {q_h \ll \delta^{-O(1)}} such that

\displaystyle \| q_h \alpha_{i,h} \|_{{\bf R}/{\bf Z}} \ll \frac{\delta^{-O(1)} \varepsilon}{N^i}

for {i=1,\dots,d-1}, where {\alpha_{i,h}} are the coefficients of the degree {d-1} polynomial {n \mapsto P(n+h)-P(n)}. We may of course assume it is the latter which holds. By the pigeonhole principle we may take {q_h= q} to be independent of {h}.

Since {\alpha_{d-1,h} = dh \alpha_d}, we have

\displaystyle \| qd h \alpha_d \|_{{\bf R}/{\bf Z}} \ll \frac{\delta^{-O(1)} \varepsilon}{N^{d-1}}

for {\gg \delta^{O(1)} N} choices of {h}, so by Lemma 3, either (5) or (6) holds, or else (after increasing {q} as necessary) we have

\displaystyle \| q \alpha_d \|_{{\bf R}/{\bf Z}} \ll \frac{\delta^{-O(1)} \varepsilon}{N^d}.

We can again assume it is the latter that holds. This implies that {q \alpha_{d-2,h} = (d-1) h q \alpha_{d-1} + O( \delta^{-O(1)} \varepsilon / N^{d-2} )} modulo {1}, so that

\displaystyle \| q(d-1) h \alpha_{d-1} \|_{{\bf R}/{\bf Z}} \ll \frac{\delta^{-O(1)} \varepsilon}{N^{d-2}}

for {\gg \delta^{O(1)} N} choices of {h}. Arguing as before and iterating, we obtain the claim. \Box

The above results also extend to higher dimensions. Here is the higher dimensional version of Proposition 4:

Proposition 7 (Multidimensional Weyl exponential sum estimate, inverse form) Let {k \geq 1} and {N_1,\dots,N_k \geq 1}, and let {Q_i \subset {\bf Z}} be arithmetic progressions of length at most {N_i} for each {i=1,\dots,k}. Let {P: {\bf Z}^k \rightarrow {\bf R}/{\bf Z}} be a polynomial of degrees at most {d_1,\dots,d_k} in each of the {k} variables {n_1,\dots,n_k} separately. If

\displaystyle \frac{1}{N_1 \dots N_k} |\sum_{n \in Q_1 \times \dots \times Q_k} e(P(n))| \geq \delta

for some {\delta > 0}, then there exists a subprogression {Q'_i} of {Q_i} with {|Q'_i| \gg_{k,d_1,\dots,d_k} \delta^{O_{k,d_1,\dots,d_k}(1)} N_i} for each {i=1,\dots,k} such that {P} varies by at most {\delta} on {Q'_1 \times \dots \times Q'_k}.

A much more general statement, in which the polynomial phase {n \mapsto e(P(n))} is replaced by a nilsequence, and in which one does not necessarily assume the exponential sum is small, is given in Theorem 8.6 of this paper of Ben Green and myself, but it involves far more notation to even state properly.

Proof: We induct on {k}. The case {k=1} was established in Proposition 4, so we assume that {k \geq 2} and that the claim has already been proven for {k-1}. To simplify notation we allow all implied constants to depend on {k,d_1,\dots,d_k}. We may assume that {\delta \leq c} for some small {c>0} depending only on {k,d_1,\dots,d_k}.

By a linear change of variables, we may assume that {Q_i \subset [0,N_i] \cap {\bf Z}} for all {i=1,\dots,k}.

We write {n' := (n_1,\dots,n_{k-1})}. First suppose that {N_k = O(\delta^{-O(1)})}. Then by the pigeonhole principle we can find {n_k \in Q_k} such that

\displaystyle \frac{1}{N_1 \dots N_{k-1}} |\sum_{n' \in Q_1 \times \dots \times Q_{k-1}} e(P(n',n_k))| \geq \delta

and the claim then follows from the induction hypothesis. Thus we may assume that {N_k \geq \delta^{-C}} for some large {C} depending only on {k,d_1,\dots,d_k}. Similarly we may assume that {N_i \geq \delta^{-C}} for all {i=1,\dots,k}.

By the triangle inequality, we have

\displaystyle \frac{1}{N_1 \dots N_k} \sum_{n_k \in Q_k} |\sum_{n' \in Q_1 \times \dots \times Q_{k-1}} e(P(n',n_k))| \geq \delta.

The inner sum is {O(N_k)}, and the outer sum has {O(N_1 \dots N_{k-1})} terms. Thus, for {\gg \delta N_1 \dots N_{k-1}} choices of {n' \in Q_1 \times \dots \times Q_{k-1}}, one has

\displaystyle \frac{1}{N_k} |\sum_{n_k \in Q_k} e(P(n',n_k))| \gg \delta. \ \ \ \ \ (7)

 

We write

\displaystyle P(n',n_k) = \sum_{i_k \leq d_k} P_{i_k}(n') n_k^{i_k}

for some polynomials {P_{i_k}: {\bf Z}^{k-1} \rightarrow {\bf R}/{\bf Z}} of degrees at most {d_1,\dots,d_{k-1}} in the variables {n_1,\dots,n_{k-1}}. For each {n'} obeying (7), we apply Corollary 5 to conclude that there exists a natural number {q_{n'} \ll \delta^{-O(1)}} such that

\displaystyle \| q_{n'} P_{i_k}(n') \|_{{\bf R}/{\bf Z}} \ll \delta^{-O(1)} / N_k^{i_k}

for {i_k=1,\dots,d_k} (the claim also holds for {i_k=0} but we discard it as being trivial). By the pigeonhole principle, there thus exists a natural number {q \ll \delta^{-O(1)}} such that

\displaystyle \| q P_{i_k}(n') \|_{{\bf R}/{\bf Z}} \ll \delta^{-O(1)} / N_k^{i_k}

for all {i_k=1,\dots,d_k} and for {\gg \delta^{O(1)} N_1 \dots N_{k-1}} choices of {n' \in Q_1 \times \dots \times Q_{k-1}}. If we write

\displaystyle P_{i_k}(n') = \sum_{i_{k-1} \leq d_{k-1}} P_{i_{k-1},i_k}(n_1,\dots,n_{k-2}) n_{k-1}^{i_{k-1}},

where {P_{i_{k-1},i_k}: {\bf Z}^{k-2} \rightarrow {\bf R}/{\bf Z}} is a polynomial of degrees at most {d_1,\dots,d_{k-2}}, then by a further application of the pigeonhole principle, for {\gg \delta^{O(1)} N_1 \dots N_{k-2}} choices of {(n_1,\dots,n_{k-2}) \in Q_1 \times \dots \times Q_{k-2}} and {\gg \delta^{O(1)} N_{k-1}} choices of {n_{k-1} \in Q_{k-1}} we have

\displaystyle \| \sum_{i_{k-1} \leq d_{k-1}} q P_{i_{k-1},i_k}(n_1,\dots,n_{k-2}) n_{k-1}^{i_{k-1}} \|_{{\bf R}/{\bf Z}} \ll \delta^{-O(1)} / N_k^{i_k}.

Applying Lemma 6 in the {n_{k-1}} variable, together with the largeness hypotheses on the {N_i} (and also the assumption that {i_k \geq 1}), we conclude (after enlarging {q} as necessary, and pigeonholing to keep {q} independent of {n_1,\dots,n_{k-2}}) that

\displaystyle \| q P_{i_{k-1},i_k}(n_1,\dots,n_{k-2}) \|_{{\bf R}/{\bf Z}} \ll \frac{\delta^{-O(1)}}{N_{k-1}^{i_{k-1}} N_k^{i_k}}

for all {i_{k-1}=0,\dots,d_{k-1}} (note that we now include the {i_{k-1}=0} case, which is no longer trivial) and for {\gg \delta^{O(1)} N_1 \dots N_{k-2}} choices of {(n_1,\dots,n_{k-2}) \in Q_1 \times \dots \times Q_{k-2}}. Iterating this, we eventually conclude (after enlarging {q} as necessary) that

\displaystyle \| q \alpha_{i_1,\dots,i_k} \|_{{\bf R}/{\bf Z}} \ll \frac{\delta^{-O(1)}}{N_1^{i_1} \dots N_k^{i_k}} \ \ \ \ \ (8)

 

whenever {i_j \in \{0,\dots,d_j\}} for {j=1,\dots,k}, with {i_k} nonzero. Permuting the indices, and observing that the claim is trivial for {(i_1,\dots,i_k) = (0,\dots,0)}, we in fact obtain (8) for all {(i_1,\dots,i_k) \in \{0,\dots,d_1\} \times \dots \times \{0,\dots,d_k\}}, at which point the claim easily follows by taking {Q'_j := \{ qn_j: n_j \leq \delta^C N_j\}} for each {j=1,\dots,k}. \Box

An inspection of the proof of the above result (or alternatively, by combining the above result again with many applications of Lemma 6) reveals the following general form of Proposition 4, which was posed as Exercise 17 in this previous blog post, but had a slight misprint in it (it did not properly treat the possibility that some of the {N_j} could be small) and was a bit trickier to prove than anticipated (in fact, the reason for this post was that I was asked to supply a more detailed solution for this exercise):

Proposition 8 (Multidimensional Weyl exponential sum estimate, inverse form, II) Let {k \geq 1} be a natural number, and for each {j=1,\dots,k}, let {I_j \subset [0,N_j]_{\bf Z}} be a discrete interval for some {N_j \geq 1}. Let

\displaystyle P(n_1,\dots,n_k) = \sum_{i_1 \leq d_1, \dots, i_k \leq d_k} \alpha_{i_1,\dots,i_k} n_1^{i_1} \dots n_k^{i_k}

be a polynomial in {k} variables of multidegrees {d_1,\dots,d_k \geq 0} for some {\alpha_{i_1,\dots,i_k} \in {\bf R}/{\bf Z}}. If

\displaystyle \frac{1}{N_1 \dots N_k} |\sum_{n \in I_1 \times \dots \times I_k} e(P(n))| \geq \delta \ \ \ \ \ (9)

 

for some {\delta > 0}, then either

\displaystyle N_j \ll_{k,d_1,\dots,d_k} \delta^{-O_{k,d_1,\dots,d_k}(1)} \ \ \ \ \ (10)

 

for some {1 \leq j \leq k}, or else there is a natural number {q \ll_{k,d_1,\dots,d_k} \delta^{-O_{k,d_1,\dots,d_k}(1)}} such that

\displaystyle \| q\alpha_{i_1,\dots,i_k} \|_{{\bf R}/{\bf Z}} \ll_{k,d_1,\dots,d_k} \delta^{-O_{k,d_1,\dots,d_k}(1)} N_1^{-i_1} \dots N_k^{-i_k} \ \ \ \ \ (11)

 

whenever {i_j \leq d_j} for {j=1,\dots,k}.

Again, the factor of {N_1^{-i_1} \dots N_k^{-i_k}} is natural in this bound. In the {k=1} case, the option (10) may be deleted, since (11) holds trivially (with {q=1}, say) whenever (10) does; but this simplification is no longer available for {k>1}, since one needs (10) to hold for all {j} (not just one {j}) to make (11) completely trivial. Indeed, the above proposition fails for {k \geq 2} if one removes (10) completely, as can be seen for instance by inspecting the exponential sum {\sum_{n_1 \in \{0,1\}} \sum_{n_2 \in [1,N] \cap {\bf Z}} e( \alpha n_1 n_2)}, which has size comparable to {N} regardless of how irrational {\alpha} is.
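To see this failure concretely, here is a quick numerical sketch (not from the original discussion; the parameters are arbitrary) of the exponential sum just mentioned: even for a badly approximable {\alpha} such as the golden ratio, the {n_1=0} terms alone make the sum of size comparable to {N}, so no conclusion of the form (11) can be extracted without an option such as (10).

    import cmath, math

    def e(x):
        return cmath.exp(2j * math.pi * x)

    alpha = (1 + math.sqrt(5)) / 2   # golden ratio: "very irrational"
    N = 10**5
    total = sum(e(alpha * n1 * n2) for n1 in (0, 1) for n2 in range(1, N + 1))
    # the n1 = 0 terms contribute exactly N, and the n1 = 1 terms contribute O(1),
    # so the normalised sum stays bounded away from zero no matter what alpha is
    print(abs(total) / (2 * N))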

Chantal David, Andrew Granville, Emmanuel Kowalski, Philippe Michel, Kannan Soundararajan, and I are running a program at MSRI in the Spring of 2017 (more precisely, from Jan 17, 2017 to May 26, 2017) in the area of analytic number theory, with the intention of bringing together many of the leading experts in all aspects of the subject and of presenting recent work on the many active areas of the subject (the discussion in previous blog posts here has mostly focused on advances in the study of the distribution of the prime numbers, but there have been many other notable recent developments too, such as refinements of the circle method, a deeper understanding of the asymptotics of bounded multiplicative functions and of the “pretentious” approach to analytic number theory, more “analysis-friendly” formulations of the theorems of Deligne and others involving trace functions over finite fields, and new subconvexity theorems for automorphic forms, to name a few).  Like any other semester MSRI program, there will be a number of workshops, seminars, and similar activities taking place while the members are in residence.  I’m personally looking forward to the program, which should be occurring in the midst of a particularly productive time for the subject.  Needless to say, I (and the rest of the organising committee) plan to be present for most of the program.

Applications for Postdoctoral Fellowships, Research Memberships, and Research Professorships for this program (and for other MSRI programs in this time period, namely the companion program in Harmonic Analysis and the Fall program in Geometric Group Theory, as well as the complementary program in all other areas of mathematics) have just opened up today.  Applications are open to everyone (until they close on Dec 1), but require supporting documentation, such as a CV, statement of purpose, and letters of recommendation from other mathematicians; see the application page for more details.

This week I have been at a Banff workshop “Combinatorics meets Ergodic theory“, focused on the combinatorics surrounding Szemerédi’s theorem and the Gowers uniformity norms on one hand, and the ergodic theory surrounding Furstenberg’s multiple recurrence theorem and the Host-Kra structure theory on the other. This was quite a fruitful workshop, and directly inspired the various posts this week on this blog. Incidentally, BIRS being as efficient as it is, videos for this week’s talks are already online.

As mentioned in the previous two posts, Ben Green, Tamar Ziegler, and myself proved the following inverse theorem for the Gowers norms:

Theorem 1 (Inverse theorem for Gowers norms) Let {N \geq 1} and {s \geq 1} be integers, and let {\delta > 0}. Suppose that {f: {\bf Z} \rightarrow [-1,1]} is a function supported on {[N] := \{1,\dots,N\}} such that

\displaystyle \frac{1}{N^{s+2}} \sum_{n,h_1,\dots,h_{s+1} \in {\bf Z}} \prod_{\omega \in \{0,1\}^{s+1}} f(n+\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}) \geq \delta.

Then there exists a filtered nilmanifold {G/\Gamma} of degree {\leq s} and complexity {O_{s,\delta}(1)}, a polynomial sequence {g: {\bf Z} \rightarrow G}, and a Lipschitz function {F: G/\Gamma \rightarrow {\bf R}} of Lipschitz constant {O_{s,\delta}(1)} such that

\displaystyle \frac{1}{N} \sum_n f(n) F(g(n) \Gamma) \gg_{s,\delta} 1.

There is a higher dimensional generalisation, which first appeared explicitly (in a more general form) in this preprint of Szegedy (which used a slightly different argument than the one of Ben, Tammy, and myself; see also this previous preprint of Szegedy with related results):

Theorem 2 (Inverse theorem for multidimensional Gowers norms) Let {N \geq 1} and {s,d \geq 1} be integers, and let {\delta > 0}. Suppose that {f: {\bf Z}^d \rightarrow [-1,1]} is a function supported on {[N]^d} such that

\displaystyle \frac{1}{N^{d(s+2)}} \sum_{n,h_1,\dots,h_{s+1} \in {\bf Z}^d} \prod_{\omega \in \{0,1\}^{s+1}} f(n+\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}) \geq \delta. \ \ \ \ \ (1)

Then there exists a filtered nilmanifold {G/\Gamma} of degree {\leq s} and complexity {O_{s,\delta,d}(1)}, a polynomial sequence {g: {\bf Z}^d \rightarrow G}, and a Lipschitz function {F: G/\Gamma \rightarrow {\bf R}} of Lipschitz constant {O_{s,\delta,d}(1)} such that

\displaystyle \frac{1}{N^d} \sum_{n \in {\bf Z}^d} f(n) F(g(n) \Gamma) \gg_{s,\delta,d} 1.

The {d=2} case of this theorem was recently used by Wenbo Sun. One can replace the polynomial sequence with a linear sequence if desired by using a lifting trick (essentially due to Furstenberg, but which appears explicitly in Appendix C of my paper with Ben and Tammy).

In this post I would like to record a very neat and simple observation of Ben Green and Nikos Frantzikinakis, that uses the tool of Freiman isomorphisms to derive Theorem 2 as a corollary of the one-dimensional theorem. Namely, consider the linear map {\phi: {\bf Z}^d \rightarrow {\bf Z}} defined by

\displaystyle \phi( n_1,\dots,n_d ) := \sum_{i=1}^d (10 N)^{i-1} n_i,

that is to say, {\phi(n_1,\dots,n_d)} is the integer whose base {10N} digit string is {n_d \dots n_1}. This is a linear map sending {[N]^d} injectively to a subset of {[d 10^d N^d]} of density {1/(d10^d)}. Furthermore it has the following “Freiman isomorphism” property: if {n, h_1,\dots,h_{s+1}} lie in {{\bf Z}} with {n + \omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}} in the image set {\phi( [N]^d )} of {[N]^d} for all {\omega}, then there exist (unique) lifts {\tilde n \in {\bf Z}^d, \tilde h_1,\dots,\tilde h_{s+1} \in {\bf Z}^d} such that

\displaystyle \tilde n + \omega_1 \tilde h_1 + \dots + \omega_{s+1} \tilde h_{s+1} \in [N]^d

and

\displaystyle \phi( \tilde n + \omega_1 \tilde h_1 + \dots + \omega_{s+1} \tilde h_{s+1} ) = n + \omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}

for all {\omega}. Indeed, the injectivity of {\phi} on {[N]^d} uniquely determines the sum {\tilde n + \omega_1 \tilde h_1 + \dots + \omega_{s+1} \tilde h_{s+1}} for each {\omega}, and one can use base {10N} arithmetic to verify that the alternating sum of these sums on any {2}-facet of the cube {\{0,1\}^{s+1}} vanishes, which gives the claim. (In the language of additive combinatorics, the point is that {\phi} is a Freiman isomorphism of order (say) {8} on {[N]^d}.)
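As a quick sanity check of the “no carrying” mechanism underlying this Freiman isomorphism property, here is a small Python sketch (the helper names here are hypothetical and purely illustrative, not part of the argument): differences of points of {[N]^d} have coordinates of size less than {N}, far smaller than the base {10N}, so they can be read off digit-by-digit from the differences of the encodings.

    import random

    def phi(n_vec, N):
        """phi(n_1,...,n_d) = sum_i n_i * (10N)^{i-1}."""
        return sum(n * (10 * N) ** i for i, n in enumerate(n_vec))

    def decode_difference(m, d, N):
        """Decode an integer as d balanced base-(10N) digits, each in (-5N, 5N]."""
        digits = []
        for _ in range(d):
            r = m % (10 * N)
            if r > 5 * N:
                r -= 10 * N
            digits.append(r)
            m = (m - r) // (10 * N)
        return tuple(digits)

    d, N = 3, 50
    random.seed(0)
    for _ in range(10000):
        x = tuple(random.randint(1, N) for _ in range(d))
        y = tuple(random.randint(1, N) for _ in range(d))
        # coordinates of x - y lie in (-N, N), so no carrying occurs and the difference of
        # encodings determines x - y; this is what lets additive relations among encoded
        # points be lifted back to [N]^d
        assert decode_difference(phi(x, N) - phi(y, N), d, N) == tuple(a - b for a, b in zip(x, y))
    print("all sampled differences decoded correctly")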

Now let {\tilde f: {\bf Z} \rightarrow [-1,1]} be the function defined by setting {\tilde f( \phi(n) ) := f(n)} whenever {n \in [N]^d}, with {\tilde f} vanishing outside of {\phi([N]^d)}. If {f} obeys (1), then from the above Freiman isomorphism property we have

\displaystyle \frac{1}{N^{d(s+2)}} \sum_{n, h_1,\dots,h_{s+1} \in {\bf Z}} \prod_{\omega \in \{0,1\}^{s+1}} \tilde f(n+\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}) \geq \delta.

Applying the one-dimensional inverse theorem (Theorem 1), with {\delta} reduced by a factor of {(d 10^d)^{s+2}} and {N} replaced by {d 10^d N^d}, this implies the existence of a filtered nilmanifold {G/\Gamma} of degree {\leq s} and complexity {O_{s,\delta,d}(1)}, a polynomial sequence {g: {\bf Z} \rightarrow G}, and a Lipschitz function {F: G/\Gamma \rightarrow {\bf R}} of Lipschitz constant {O_{s,\delta,d}(1)} such that

\displaystyle \frac{1}{N^d} \sum_{n \in {\bf Z}} \tilde f(n) F(g(n) \Gamma) \gg_{s,\delta,d} 1

which by the Freiman isomorphism property again implies that

\displaystyle \frac{1}{N^d} \sum_{n \in {\bf Z}^d} f(n) F(g(\phi(n)) \Gamma) \gg_{s,\delta,d} 1.

But the map {n \mapsto g(\phi(n))} is clearly a polynomial map from {{\bf Z}^d} to {G} (the composition of two polynomial maps is polynomial, see e.g. Appendix B of my paper with Ben and Tammy), and the claim follows.

Remark 3 This trick appears to be largely restricted to the case of boundedly generated groups such as {{\bf Z}^d}; I do not see any easy way to deduce an inverse theorem for, say, {\bigcup_{n=1}^\infty {\mathbb F}_p^n} from the {{\bf Z}}-inverse theorem by this method.

Remark 4 By combining this argument with the one in the previous post, one can obtain a weak ergodic inverse theorem for {{\bf Z}^d}-actions. Interestingly, the Freiman isomorphism argument appears to be difficult to implement directly in the ergodic category; in particular, there does not appear to be an obvious direct way to derive the Host-Kra inverse theorem for {{\bf Z}^d} actions (a result first obtained in the PhD thesis of Griesmer) from the counterpart for {{\bf Z}} actions.

Note: this post is of a particularly technical nature, in particular presuming familiarity with nilsequences, nilsystems, characteristic factors, etc., and is primarily intended for experts.

As mentioned in the previous post, Ben Green, Tamar Ziegler, and myself proved the following inverse theorem for the Gowers norms:

Theorem 1 (Inverse theorem for Gowers norms) Let {N \geq 1} and {s \geq 1} be integers, and let {\delta > 0}. Suppose that {f: {\bf Z} \rightarrow [-1,1]} is a function supported on {[N] := \{1,\dots,N\}} such that

\displaystyle \frac{1}{N^{s+2}} \sum_{n,h_1,\dots,h_{s+1}} \prod_{\omega \in \{0,1\}^{s+1}} f(n+\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}) \geq \delta.

Then there exists a filtered nilmanifold {G/\Gamma} of degree {\leq s} and complexity {O_{s,\delta}(1)}, a polynomial sequence {g: {\bf Z} \rightarrow G}, and a Lipschitz function {F: G/\Gamma \rightarrow {\bf R}} of Lipschitz constant {O_{s,\delta}(1)} such that

\displaystyle \frac{1}{N} \sum_n f(n) F(g(n) \Gamma) \gg_{s,\delta} 1.

This result was conjectured earlier by Ben Green and myself; this conjecture was strongly motivated by an analogous inverse theorem in ergodic theory by Host and Kra, which we formulate here in a form designed to resemble Theorem 1 as closely as possible:

Theorem 2 (Inverse theorem for Gowers-Host-Kra seminorms) Let {s \geq 1} be an integer, and let {(X, T)} be an ergodic, countably generated measure-preserving system. Suppose that one has

\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N^{s+1}} \sum_{h_1,\dots,h_{s+1} \in [N]} \int_X \prod_{\omega \in \{0,1\}^{s+1}} f(T^{\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}}x)\ d\mu(x)

\displaystyle > 0

for all non-zero {f \in L^\infty(X)} (all {L^p} spaces are real-valued in this post). Then {(X,T)} is an inverse limit (in the category of measure-preserving systems, up to almost everywhere equivalence) of ergodic degree {\leq s} nilsystems, that is to say systems of the form {(G/\Gamma, x \mapsto gx)} for some degree {\leq s} filtered nilmanifold {G/\Gamma} and a group element {g \in G} that acts ergodically on {G/\Gamma}.

It is a natural question to ask if there is any logical relationship between the two theorems. In the finite field category, one can deduce the combinatorial inverse theorem from the ergodic inverse theorem by a variant of the Furstenberg correspondence principle, as worked out by Tamar Ziegler and myself; however, in the current context of {{\bf Z}}-actions, the connection is less clear.

One can split Theorem 2 into two components:

Theorem 3 (Weak inverse theorem for Gowers-Host-Kra seminorms) Let {s \geq 1} be an integer, and let {(X, T)} be an ergodic, countably generated measure-preserving system. Suppose that one has

\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N^{s+1}} \sum_{h_1,\dots,h_{s+1} \in [N]} \int_X \prod_{\omega \in \{0,1\}^{s+1}} T^{\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}} f\ d\mu

\displaystyle > 0

for all non-zero {f \in L^\infty(X)}, where {T^h f := f \circ T^h}. Then {(X,T)} is a factor of an inverse limit of ergodic degree {\leq s} nilsystems.

Theorem 4 (Pro-nilsystems closed under factors) Let {s \geq 1} be an integer. Then any factor of an inverse limit of ergodic degree {\leq s} nilsystems, is again an inverse limit of ergodic degree {\leq s} nilsystems.

Indeed, it is clear that Theorem 2 implies both Theorem 3 and Theorem 4, and conversely that the two latter theorems jointly imply the former. Theorem 4 is, in principle, purely a fact about nilsystems, and should have an independent proof, but this is not known; the only known proofs go through the full machinery needed to prove Theorem 2 (or the closely related theorem of Ziegler). (However, the fact that a factor of a nilsystem is again a nilsystem was established previously by Parry.)

The purpose of this post is to record a partial implication in the reverse direction to the correspondence principle:

Proposition 5 Theorem 1 implies Theorem 3.

As mentioned at the start of the post, a fair amount of familiarity with the area is presumed here, and some routine steps will be presented with only a fairly brief explanation.


A few years ago, Ben Green, Tamar Ziegler, and myself proved the following (rather technical-looking) inverse theorem for the Gowers norms:

Theorem 1 (Discrete inverse theorem for Gowers norms) Let {N \geq 1} and {s \geq 1} be integers, and let {\delta > 0}. Suppose that {f: {\bf Z} \rightarrow [-1,1]} is a function supported on {[N] := \{1,\dots,N\}} such that

\displaystyle  \frac{1}{N^{s+2}} \sum_{n,h_1,\dots,h_{s+1}} \prod_{\omega \in \{0,1\}^{s+1}} f(n+\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}) \geq \delta.

Then there exists a filtered nilmanifold {G/\Gamma} of degree {\leq s} and complexity {O_{s,\delta}(1)}, a polynomial sequence {g: {\bf Z} \rightarrow G}, and a Lipschitz function {F: G/\Gamma \rightarrow {\bf R}} of Lipschitz constant {O_{s,\delta}(1)} such that

\displaystyle  \frac{1}{N} \sum_n f(n) F(g(n) \Gamma) \gg_{s,\delta} 1.

For the definitions of “filtered nilmanifold”, “degree”, “complexity”, and “polynomial sequence”, see the paper of Ben, Tammy, and myself. (I should caution the reader that this blog post will presume a fair amount of familiarity with this subfield of additive combinatorics.) This result has a number of applications, for instance to establishing asymptotics for linear equations in the primes, but this will not be the focus of discussion here.

The purpose of this post is to record the observation that this “discrete” inverse theorem, together with an equidistribution theorem for nilsequences that Ben and I worked out in a separate paper, implies a continuous version:

Theorem 2 (Continuous inverse theorem for Gowers norms) Let {s \geq 1} be an integer, and let {\delta>0}. Suppose that {f: {\bf R} \rightarrow [-1,1]} is a measurable function supported on {[0,1]} such that

\displaystyle  \int_{{\bf R}^{s+2}} \prod_{\omega \in \{0,1\}^{s+1}} f(t+\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1})\ dt dh_1 \dots dh_{s+1} \geq \delta. \ \ \ \ \ (1)

Then there exists a filtered nilmanifold {G/\Gamma} of degree {\leq s} and complexity {O_{s,\delta}(1)}, a (smooth) polynomial sequence {g: {\bf R} \rightarrow G}, and a Lipschitz function {F: G/\Gamma \rightarrow {\bf R}} of Lipschitz constant {O_{s,\delta}(1)} such that

\displaystyle  \int_{\bf R} f(t) F(g(t) \Gamma)\ dt \gg_{s,\delta} 1.

The interval {[0,1]} can be easily replaced with any other fixed interval by a change of variables. A key point here is that the bounds are completely uniform in the choice of {f}. Note though that the coefficients of {g} can be arbitrarily large (and this is necessary, as can be seen just by considering functions of the form {f(t) = \cos( \xi t)} for some arbitrarily large frequency {\xi}).

It is likely that one could prove Theorem 2 by carefully going through the proof of Theorem 1 and replacing all instances of {{\bf Z}} with {{\bf R}} (and making appropriate modifications to the argument to accommodate this). However, the proof of Theorem 1 is quite lengthy. Here, we shall proceed by the usual limiting process of viewing the continuous interval {[0,1]} as a limit of the discrete interval {\frac{1}{N} \cdot [N]} as {N \rightarrow \infty}. However, there will be some problems taking the limit due to a failure of compactness, specifically with regard to the coefficients of the polynomial sequence {g: {\bf Z} \rightarrow G} produced by Theorem 1, after normalising these coefficients by {N}. Fortunately, a factorisation theorem from a paper of Ben Green and myself resolves this problem by splitting {g} into a “smooth” part which does enjoy good compactness properties, as well as “totally equidistributed” and “periodic” parts which can be eliminated using the measurability (and thus approximate smoothness) of {f}.


Szemerédi’s theorem asserts that any subset of the integers of positive upper density contains arbitrarily large arithmetic progressions. Here is an equivalent quantitative form of this theorem:

Theorem 1 (Szemerédi’s theorem) Let {N} be a positive integer, and let {f: {\bf Z}/N{\bf Z} \rightarrow [0,1]} be a function with {{\bf E}_{x \in {\bf Z}/N{\bf Z}} f(x) \geq \delta} for some {\delta>0}, where we use the averaging notation {{\bf E}_{x \in A} f(x) := \frac{1}{|A|} \sum_{x \in A} f(x)}, {{\bf E}_{x,r \in A} f(x) := \frac{1}{|A|^2} \sum_{x, r \in A} f(x)}, etc. Then for {k \geq 3} we have

\displaystyle  {\bf E}_{x,r \in {\bf Z}/N{\bf Z}} f(x) f(x+r) \dots f(x+(k-1)r) \geq c(k,\delta)

for some {c(k,\delta)>0} depending only on {k,\delta}.

The equivalence is basically thanks to an averaging argument of Varnavides; see for instance Chapter 11 of my book with Van Vu or this previous blog post for a discussion. We have removed the cases {k=1,2} as they are trivial and somewhat degenerate.
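To make the quantity on the left-hand side concrete, here is a small brute-force computation (purely illustrative, with arbitrary parameters, and not part of any proof) of the {k=3} average for the indicator function of a random subset of {{\bf Z}/N{\bf Z}} of density roughly {\delta}; for such a set the average is typically close to {\delta^3}, whereas Theorem 1 guarantees a lower bound {c(3,\delta)} for every set of this density.

    import random

    def three_ap_average(f, N):
        """(1/N^2) * sum over x, r in Z/NZ of f(x) f(x+r) f(x+2r)."""
        total = 0.0
        for x in range(N):
            for r in range(N):
                total += f[x] * f[(x + r) % N] * f[(x + 2 * r) % N]
        return total / N**2

    N, delta = 301, 0.3
    random.seed(0)
    f = [1.0 if random.random() < delta else 0.0 for _ in range(N)]
    print(three_ap_average(f, N), delta**3)   # the two values are typically close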

There are now many proofs of this theorem. Some time ago, I took an ergodic-theoretic proof of Furstenberg and converted it to a purely finitary proof of the theorem. The argument used some simplifying innovations that had been developed since the original work of Furstenberg (in particular, deployment of the Gowers uniformity norms, as well as a “dual” norm that I called the uniformly almost periodic norm, and an emphasis on van der Waerden’s theorem for handling the “compact extension” component of the argument). But the proof was still quite messy. However, as discussed in this previous blog post, messy finitary proofs can often be cleaned up using nonstandard analysis. Thus, there should be a nonstandard version of the Furstenberg ergodic theory argument that is relatively clean. I decided (after some encouragement from Ben Green and Isaac Goldbring) to write down most of the details of this argument in this blog post, though for sake of brevity I will skim rather quickly over arguments that were already discussed at length in other blog posts. In particular, I will presume familiarity with nonstandard analysis (in particular, the notion of a standard part of a bounded real number, and the Loeb measure construction), see for instance this previous blog post for a discussion.


In analytic number theory, there is a well known analogy between the prime factorisation of a large integer, and the cycle decomposition of a large permutation; this analogy is central to the topic of “anatomy of the integers”, as discussed for instance in this survey article of Granville. Consider for instance the following two parallel lists of facts (stated somewhat informally). Firstly, some facts about the prime factorisation of large integers:

  • Every positive integer {m} has a prime factorisation

    \displaystyle  m = p_1 p_2 \dots p_r

    into (not necessarily distinct) primes {p_1,\dots,p_r}, which is unique up to rearrangement. Taking logarithms, we obtain a partition

    \displaystyle  \log m = \log p_1 + \log p_2 + \dots + \log p_r

    of {\log m}.

  • (Prime number theorem) A randomly selected integer {m} of size {m \sim N} will be prime with probability {\approx \frac{1}{\log N}} when {N} is large.
  • If {m \sim N} is a randomly selected large integer of size {N}, and {p = p_i} is a randomly selected prime factor of {m = p_1 \dots p_r} (with each index {i} being chosen with probability {\frac{\log p_i}{\log m}}), then {\log p_i} is approximately uniformly distributed between {0} and {\log N}. (See Proposition 9 of this previous blog post; a quick numerical check of this claim is sketched just after this list.)
  • The set of real numbers {\{ \frac{\log p_i}{\log m}: i=1,\dots,r \}} arising from the prime factorisation {m = p_1 \dots p_r} of a large random number {m \sim N} converges (away from the origin, and in a suitable weak sense) to the Poisson-Dirichlet process in the limit {N \rightarrow \infty}. (See the previously mentioned blog post for a definition of the Poisson-Dirichlet process, and a proof of this claim.)
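Here is the numerical check of the third item promised above: a small Python sketch (with arbitrary parameter choices and naive trial division, purely as illustration) that samples integers {m} of size about {N}, selects a prime factor {p_i} with probability {\frac{\log p_i}{\log m}}, and tabulates {\log p_i / \log N}; the resulting histogram is roughly uniform on {[0,1]}.

    import math, random

    def factorise(m):
        """Prime factors of m with multiplicity, by naive trial division."""
        factors, d = [], 2
        while d * d <= m:
            while m % d == 0:
                factors.append(d)
                m //= d
            d += 1
        if m > 1:
            factors.append(m)
        return factors

    random.seed(0)
    N, samples = 10**6, 10000
    bins = [0] * 5
    for _ in range(samples):
        m = random.randint(N, 2 * N)
        primes = factorise(m)
        # choose the index i with probability log p_i / log m
        p = random.choices(primes, weights=[math.log(q) for q in primes])[0]
        v = math.log(p) / math.log(N)
        bins[min(int(v * 5), 4)] += 1
    # each of the five bins should capture roughly 20% of the samples
    print([round(b / samples, 3) for b in bins])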

Now for the facts about the cycle decomposition of large permutations:

  • Every permutation {\sigma \in S_n} has a cycle decomposition

    \displaystyle  \sigma = C_1 \dots C_r

    into disjoint cycles {C_1,\dots,C_r}, which is unique up to rearrangement, and where we count each fixed point of {\sigma} as a cycle of length {1}. If {|C_i|} is the length of the cycle {C_i}, we obtain a partition

    \displaystyle  n = |C_1| + \dots + |C_r|

    of {n}.

  • (Prime number theorem for permutations) A randomly selected permutation of {S_n} will be an {n}-cycle with probability exactly {1/n}. (This was noted in this previous blog post.)
  • If {\sigma} is a random permutation in {S_n}, and {C_i} is a randomly selected cycle of {\sigma} (with each {i} being selected with probability {|C_i|/n}), then {|C_i|} is exactly uniformly distributed on {\{1,\dots,n\}}. (See Proposition 8 of this blog post.)
  • The set of real numbers {\{ \frac{|C_i|}{n} \}} arising from the cycle decomposition {\sigma = C_1 \dots C_r} of a random permutation {\sigma \in S_n} converges (in a suitable sense) to the Poisson-Dirichlet process in the limit {n \rightarrow \infty}. (Again, see this previous blog post for details.)

See this previous blog post (or the aforementioned article of Granville, or the Notices article of Arratia, Barbour, and Tavaré) for further exploration of the analogy between prime factorisation of integers and cycle decomposition of permutations.

There is however something unsatisfying about the analogy, in that it is not clear why there should be such a kinship between integer prime factorisation and permutation cycle decomposition. It turns out that the situation is clarified if one uses another fundamental analogy in number theory, namely the analogy between integers and polynomials {P \in {\mathbf F}_q[T]} over a finite field {{\mathbf F}_q}, discussed for instance in this previous post; this is the simplest case of the more general function field analogy between number fields and function fields. Just as we restrict attention to positive integers when talking about prime factorisation, it will be reasonable to restrict attention to monic polynomials {P}. We then have another analogous list of facts, proven very similarly to the corresponding list of facts for the integers:

  • Every monic polynomial {f \in {\mathbf F}_q[T]} has a factorisation

    \displaystyle  f = P_1 \dots P_r

    into irreducible monic polynomials {P_1,\dots,P_r \in {\mathbf F}_q[T]}, which is unique up to rearrangement. Taking degrees, we obtain a partition

    \displaystyle  \hbox{deg} f = \hbox{deg} P_1 + \dots + \hbox{deg} P_r

    of {\hbox{deg} f}.

  • (Prime number theorem for polynomials) A randomly selected monic polynomial {f \in {\mathbf F}_q[T]} of degree {n} will be irreducible with probability {\approx \frac{1}{n}} when {q} is fixed and {n} is large.
  • If {f \in {\mathbf F}_q[T]} is a random monic polynomial of degree {n}, and {P_i} is a random irreducible factor of {f = P_1 \dots P_r} (with each {i} selected with probability {\hbox{deg} P_i / n}), then {\hbox{deg} P_i} is approximately uniformly distributed in {\{1,\dots,n\}} when {q} is fixed and {n} is large.
  • The set of real numbers {\{ \hbox{deg} P_i / n \}} arising from the factorisation {f = P_1 \dots P_r} of a randomly selected polynomial {f \in {\mathbf F}_q[T]} of degree {n} converges (in a suitable sense) to the Poisson-Dirichlet process when {q} is fixed and {n} is large.

The above list of facts addressed the large {n} limit of the polynomial ring {{\mathbf F}_q[T]}, where the order {q} of the field is held fixed, but the degrees of the polynomials go to infinity. This is the limit that is most closely analogous to the integers {{\bf Z}}. However, there is another interesting asymptotic limit of polynomial rings to consider, namely the large {q} limit where it is now the degree {n} that is held fixed, but the order {q} of the field goes to infinity. Actually to simplify the exposition we will use the slightly more restrictive limit where the characteristic {p} of the field goes to infinity (again keeping the degree {n} fixed), although all of the results proven below for the large {p} limit turn out to be true as well in the large {q} limit.

The large {q} (or large {p}) limit is technically a different limit than the large {n} limit, but in practice the asymptotic statistics of the two limits often agree quite closely. For instance, here is the prime number theorem in the large {q} limit:

Theorem 1 (Prime number theorem) The probability that a random monic polynomial {f \in {\mathbf F}_q[T]} of degree {n} is irreducible is {\frac{1}{n}+o(1)} in the limit where {n} is fixed and the characteristic {p} goes to infinity.

Proof: There are {q^n} monic polynomials {f \in {\mathbf F}_q[T]} of degree {n}. If {f} is irreducible, then the {n} zeroes of {f} are distinct and lie in the finite field {{\mathbf F}_{q^n}}, but do not lie in any proper subfield of that field. Conversely, every element {\alpha} of {{\mathbf F}_{q^n}} that does not lie in a proper subfield is the root of a unique monic polynomial in {{\mathbf F}_q[T]} of degree {n} (the minimal polynomial of {\alpha}). Since the union of all the proper subfields of {{\mathbf F}_{q^n}} has size {o(q^n)}, the total number of irreducible polynomials of degree {n} is thus {\frac{q^n - o(q^n)}{n}}, and the claim follows. \Box

Remark 2 The above argument and inclusion-exclusion in fact gives the well known exact formula {\frac{1}{n} \sum_{d|n} \mu(\frac{n}{d}) q^d} for the number of irreducible monic polynomials of degree {n}.
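As a quick illustration of Remark 2 (a brute-force sanity check with hypothetical helper routines, not anything from the argument above), one can verify the formula {\frac{1}{n} \sum_{d|n} \mu(\frac{n}{d}) q^d} for small {q} and {n} by exhaustively testing irreducibility via trial division in {{\mathbf F}_q[T]}:

    from itertools import product

    def poly_rem(num, den, p):
        """Remainder of num modulo den in F_p[T]; coefficient lists, lowest degree first."""
        num = [c % p for c in num]
        while True:
            while num and num[-1] == 0:
                num.pop()
            if len(num) < len(den):
                return num
            shift = len(num) - len(den)
            factor = (num[-1] * pow(den[-1], -1, p)) % p
            for i, c in enumerate(den):
                num[i + shift] = (num[i + shift] - factor * c) % p

    def is_irreducible(f, p):
        """Trial division of a monic polynomial f by all lower-degree monic polynomials."""
        n = len(f) - 1
        for d in range(1, n // 2 + 1):
            for tail in product(range(p), repeat=d):
                g = list(tail) + [1]          # monic candidate divisor of degree d
                if not any(poly_rem(f, g, p)):
                    return False
        return True

    def mobius(n):
        result, d = 1, 2
        while d * d <= n:
            if n % d == 0:
                n //= d
                if n % d == 0:
                    return 0
                result = -result
            d += 1
        return -result if n > 1 else result

    def exact_count(q, n):
        return sum(mobius(n // d) * q**d for d in range(1, n + 1) if n % d == 0) // n

    p, n = 3, 4
    brute = sum(is_irreducible(list(tail) + [1], p) for tail in product(range(p), repeat=n))
    print(brute, exact_count(p, n))   # both should equal (3^4 - 3^2)/4 = 18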

Now we can give a precise connection between the cycle distribution of a random permutation, and (the large {p} limit of) the irreducible factorisation of a polynomial, giving a (somewhat indirect, but still connected) link between permutation cycle decomposition and integer factorisation:

Theorem 3 The partition {\{ \hbox{deg}(P_1), \dots, \hbox{deg}(P_r) \}} of a random monic polynomial {f= P_1 \dots P_r\in {\mathbf F}_q[T]} of degree {n} converges in distribution to the partition {\{ |C_1|, \dots, |C_r|\}} of {n} associated to a random permutation {\sigma = C_1 \dots C_r \in S_n}, in the limit where {n} is fixed and the characteristic {p} goes to infinity.

We can quickly prove this theorem as follows. We first need a basic fact:

Lemma 4 (Most polynomials square-free in large {q} limit) A random monic polynomial {f \in {\mathbf F}_q[T]} of degree {n} will be square-free with probability {1-o(1)} when {n} is fixed and {q} (or {p}) goes to infinity. In a similar spirit, two randomly selected monic polynomials {f,g} of degree {n,m} will be coprime with probability {1-o(1)} if {n,m} are fixed and {q} or {p} goes to infinity.

Proof: For any polynomial {g} of degree {m}, the probability that {f} is divisible by {g^2} is at most {1/q^{2m}}. Summing over all polynomials of degree {1 \leq m \leq n/2}, and using the union bound, we see that the probability that {f} is not squarefree is at most {\sum_{1 \leq m \leq n/2} \frac{q^m}{q^{2m}} = o(1)}, giving the first claim. For the second, observe from the first claim (and the fact that {fg} has only a bounded number of factors) that {fg} is squarefree with probability {1-o(1)}, giving the claim. \Box

Now we can prove the theorem. Elementary combinatorics tells us that the probability of a random permutation {\sigma \in S_n} consisting of {c_k} cycles of length {k} for {k=1,\dots,r}, where {c_k} are nonnegative integers with {\sum_{k=1}^r k c_k = n}, is precisely

\displaystyle  \frac{1}{\prod_{k=1}^r c_k! k^{c_k}},

since there are {\prod_{k=1}^r c_k! k^{c_k}} ways to write a given tuple of cycles {C_1,\dots,C_r} in cycle notation in nondecreasing order of length, and {n!} ways to select the labels for the cycle notation. On the other hand, by Theorem 1 (and using Lemma 4 to isolate the small number of cases involving repeated factors) the number of monic polynomials of degree {n} that are the product of {c_k} irreducible polynomials of degree {k} is

\displaystyle  \frac{1}{\prod_{k=1}^r c_k!} \prod_{k=1}^r ( (\frac{1}{k}+o(1)) q^k )^{c_k} + o( q^n )

which simplifies to

\displaystyle  \frac{1+o(1)}{\prod_{k=1}^r c_k! k^{c_k}} q^n,

and the claim follows.
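As a quick numerical sanity check of the cycle-type formula used above (purely illustrative, and not part of the argument), one can enumerate all of {S_n} for a small {n} and compare the empirical frequency of each cycle type with {\frac{1}{\prod_{k} c_k! k^{c_k}}}:

    from itertools import permutations
    from collections import Counter
    from math import factorial

    def cycle_type(perm):
        """Sorted tuple of cycle lengths of a permutation of {0,...,n-1} given as a tuple."""
        seen, lengths = set(), []
        for start in range(len(perm)):
            if start in seen:
                continue
            length, x = 0, start
            while x not in seen:
                seen.add(x)
                x = perm[x]
                length += 1
            lengths.append(length)
        return tuple(sorted(lengths))

    n = 5
    counts = Counter(cycle_type(p) for p in permutations(range(n)))
    for ctype, count in sorted(counts.items()):
        c = Counter(ctype)                      # c[k] = number of cycles of length k
        predicted = 1.0
        for k, ck in c.items():
            predicted /= factorial(ck) * k**ck
        print(ctype, count / factorial(n), predicted)   # the two probabilities agree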

This was a fairly short calculation, but it still doesn’t quite explain why there is such a link between the cycle decomposition {\sigma = C_1 \dots C_r} of permutations and the factorisation {f = P_1 \dots P_r} of a polynomial. One immediate thought might be to try to link the multiplication structure of permutations in {S_n} with the multiplication structure of polynomials; however, these structures are too dissimilar to set up a convincing analogy. For instance, the multiplication law on polynomials is abelian and non-invertible, whilst the multiplication law on {S_n} is (extremely) non-abelian but invertible. Also, the multiplication of a degree {n} and a degree {m} polynomial is a degree {n+m} polynomial, whereas the group multiplication law on permutations does not take a permutation in {S_n} and a permutation in {S_m} and return a permutation in {S_{n+m}}.

I recently found (after some discussions with Ben Green) what I feel to be a satisfying conceptual (as opposed to computational) explanation of this link, which I will place below the fold.


I’ve just uploaded to the arXiv my paper “Inverse theorems for sets and measures of polynomial growth“. This paper was motivated by two related questions. The first question was to obtain a qualitatively precise description of the sets of polynomial growth that arise in Gromov’s theorem, in much the same way that Freiman’s theorem (and its generalisations) provide a qualitatively precise description of sets of small doubling. The other question was to obtain a non-abelian analogue of inverse Littlewood-Offord theory.

Let me discuss the former question first. Gromov’s theorem tells us that if a finite subset {A} of a group {G} exhibits polynomial growth in the sense that {|A^n|} grows polynomially in {n}, then the group generated by {A} is virtually nilpotent (the converse direction is also true, and is relatively easy to establish). This theorem has been strengthened a number of times over the years. For instance, a few years ago, I proved with Shalom that the condition that {|A^n|} grew polynomially in {n} could be replaced by {|A^n| \leq C n^d} for a single {n}, as long as {n} was sufficiently large depending on {C,d} (in fact we gave a fairly explicit quantitative bound on how large {n} needed to be). A little more recently, with Breuillard and Green, the condition {|A^n| \leq C n^d} was weakened to {|A^n| \leq n^d |A|}, that is to say it sufficed to have polynomial relative growth at a finite scale. In fact, the latter paper gave more information on {A} in this case; roughly speaking, it showed (at least in the case when {A} was a symmetric neighbourhood of the identity) that {A^n} was “commensurate” with a very structured object known as a coset nilprogression. This can then be used to establish further control on {A}. For instance, it was recently shown by Breuillard and Tointon (again in the symmetric case) that if {|A^n| \leq n^d |A|} for a single {n} that was sufficiently large depending on {d}, then all the {A^{n'}} for {n' \geq n} have a doubling constant bounded by a constant {C_d} depending only on {d}, thus {|A^{2n'}| \leq C_d |A^{n'}|} for all {n' \geq n}.

In this paper we are able to refine this analysis a bit further; under the same hypotheses, we can show an estimate of the form

\displaystyle  \log |A^{n'}| = \log |A^n| + f( \log n' - \log n ) + O_d(1)

for all {n' \geq n} and some piecewise linear, continuous, non-decreasing function {f: [0,+\infty) \rightarrow [0,+\infty)} with {f(0)=0}, where the error {O_d(1)} is bounded by a constant depending only on {d}, and where {f} has at most {O_d(1)} pieces, each of which has a slope that is a natural number of size {O_d(1)}. To put it another way, the function {n' \mapsto |A^{n'}|} for {n' \geq n} behaves (up to multiplicative constants) like a piecewise polynomial function, where the degree of the function and number of pieces is bounded by a constant depending on {d}.

One could ask whether the function {f} has any convexity or concavity properties. It turns out that it can exhibit either convex or concave behaviour (or a combination of both). For instance, if {A} is contained in a large finite group, then {n \mapsto |A^n|} will eventually plateau to a constant, exhibiting concave behaviour. On the other hand, in nilpotent groups one can see convex behaviour; for instance, in the Heisenberg group {\begin{pmatrix} 1 & {\mathbf Z} & {\mathbf Z} \\ 0 & 1 & {\mathbf Z} \\ 0 & 0 & 1 \end{pmatrix}}, if one sets {A} to be a set of matrices of the form {\begin{pmatrix} 1 & O(N) & O(N^3) \\ 0 & 1 & O(N) \\ 0 & 0 & 1 \end{pmatrix}} for some large {N} (abusing the {O()} notation somewhat), then {n \mapsto |A^n|} grows cubically for {n \leq N} but then grows quartically for {n > N}.
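As a toy numerical illustration of polynomial growth in this setting (not taken from the paper; the generating set below is just the standard one), one can compute ball sizes in the discrete Heisenberg group directly, using coordinates {(a,b,c)} for the matrix with upper-triangular entries {a,b,c}; the sizes grow like {n^4}, in accordance with the Bass-Guivarc'h degree {1 \cdot 2 + 2 \cdot 1 = 4}:

    # coordinates (a, b, c) for the upper-triangular matrix [[1, a, c], [0, 1, b], [0, 0, 1]]
    def mult(x, y):
        return (x[0] + y[0], x[1] + y[1], x[2] + y[2] + x[0] * y[1])

    gens = [(0, 0, 0), (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]   # identity plus x, y and inverses

    ball = set(gens)                    # the ball A^1 for A = gens
    for n in range(2, 13):
        ball = {mult(g, h) for g in ball for h in gens}
        print(n, len(ball), round(len(ball) / n**4, 3))   # the last ratio settles towards a constant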

To prove this proposition, it turns out (after using a somewhat difficult inverse theorem proven previously by Breuillard, Green, and myself) that one has to analyse the volume growth {n \mapsto |P^n|} of nilprogressions {P}. In the “infinitely proper” case where there are no unexpected relations between the generators of the nilprogression, one can lift everything to a simply connected Lie group (where one can take logarithms and exploit the Baker-Campbell-Hausdorff formula heavily), eventually describing {P^n} with fair accuracy by a certain convex polytope with vertices depending polynomially on {n}, which implies that {|P^n|} depends polynomially on {n} up to constants. If one is not in the “infinitely proper” case, then at some point {n_0} the nilprogression {P^{n_0}} develops a “collision”, but then one can use this collision to show (after some work) that the dimension of the “Lie model” of {P^{n_0}} has dropped by at least one from the dimension of {P} (the notion of a Lie model being developed in the previously mentioned paper of Breuillard, Green, and myself), so that this sort of collision can only occur a bounded number of times, with essentially polynomial volume growth behaviour between these collisions.

The arguments also give a precise description of the location of a set {A} for which {A^n} grows polynomially in {n}. In the symmetric case, what ends up happening is that {A^n} becomes commensurate to a “coset nilprogression” {HP} of bounded rank and nilpotency class, whilst {A} is “virtually” contained in a scaled down version {HP^{1/n}} of that nilprogression. What “virtually” means is a little complicated; roughly speaking, it means that there is a set {X} of bounded cardinality such that {aXHP^{1/n} \approx XHP^{1/n}} for all {a \in A}. Conversely, if {A} is virtually contained in {HP^{1/n}}, then {A^n} is commensurate to {HP} (and more generally, {A^{mn}} is commensurate to {HP^m} for any natural number {m}), giving quite a (qualitatively) precise description of {A} in terms of coset nilprogressions.

The main tool used to prove these results is the structure theorem for approximate groups established by Breuillard, Green, and myself, which roughly speaking asserts that approximate groups are always commensurate with coset nilprogressions. A key additional trick is a pigeonholing argument of Sanders, which in this context is the assertion that if {A^n} is comparable to {A^{2n}}, then there is an {n'} between {n} and {2n} such that {A \cdot A^{n'}} is very close in size to {A^{n'}} (up to a relative error of {1/n}). It is this fact, together with the comparability of {A^{n'}} to a coset nilprogression {HP}, that allows us (after some combinatorial argument) to virtually place {A} inside {HP^{1/n}}.

Similar arguments apply when discussing iterated convolutions {\mu^{*n}} of (symmetric) probability measures on a (discrete) group {G}, rather than combinatorial powers {A^n} of a finite set. Here, the analogue of the volume {|A^n|} is given by the negative power {\| \mu^{*n} \|_{\ell^2}^{-2}} of the {\ell^2} norm of {\mu^{*n}} (thought of as a non-negative function on {G} of total mass 1). One can also work with other norms here than {\ell^2}, but this norm has some minor technical conveniences (and other measures of the “spread” of {\mu^{*n}} end up being more or less equivalent for our purposes). There is an analogous structure theorem that asserts that if {\mu^{*n}} spreads at most polynomially in {n}, then {\mu^{*n}} is “commensurate” with the uniform probability distribution on a coset nilprogression {HP}, and {\mu} itself is largely concentrated near {HP^{1/\sqrt{n}}}. The factor of {\sqrt{n}} here is the familiar scaling factor in random walks that arises for instance in the central limit theorem. The proof of (the precise version of) this statement proceeds similarly to the combinatorial case, using pigeonholing to locate a scale {n'} where {\mu *\mu^{*n'}} has almost the same {\ell^2} norm as {\mu^{*n'}}.
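As a simple sanity check of this {\ell^2} notion of spread (an illustrative sketch only, with arbitrary parameters), one can iterate convolutions of a lazy random walk measure on {{\bf Z}} and watch {\| \mu^{*n} \|_{\ell^2}^{-2}} grow like {\sqrt{n}}: the walk at time {n} spreads over roughly {\sqrt{n}} points, reflecting the {\sqrt{n}} scaling mentioned above.

    import math
    from collections import defaultdict

    mu = {-1: 0.25, 0: 0.5, 1: 0.25}   # a symmetric lazy step distribution on Z

    def convolve(f, g):
        """Convolution of two finitely supported probability measures on Z."""
        h = defaultdict(float)
        for x, fx in f.items():
            for y, gy in g.items():
                h[x + y] += fx * gy
        return dict(h)

    dist = dict(mu)
    for n in range(2, 101):
        dist = convolve(dist, mu)
        if n % 20 == 0:
            spread = 1.0 / sum(p * p for p in dist.values())
            # the ratio spread / sqrt(n) stays roughly constant (approximately sqrt(2*pi) here)
            print(n, round(spread, 2), round(spread / math.sqrt(n), 3))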

A special case of this theory occurs when {\mu} is the uniform probability measure on {n} elements {v_1,\dots,v_n} of {G} and their inverses. The probability measure {\mu^{*n}} is then the distribution of a random product {w_1 \dots w_n}, where each {w_i} is equal to one of {v_{j_i}} or its inverse {v_{j_i}^{-1}}, selected at random with {j_i} drawn uniformly from {\{1,\dots,n\}} with replacement. This is very close to the Littlewood-Offord situation of random products {u_1 \dots u_n} where each {u_i} is equal to {v_i} or {v_i^{-1}} selected independently at random (thus {j_i} is now fixed to equal {i} rather than being randomly drawn from {\{1,\dots,n\}}). In the case when {G} is abelian, it turns out that a little bit of Fourier analysis shows that these two random walks have “comparable” distributions in a certain {\ell^2} sense. As a consequence, the results in this paper can be used to recover an essentially optimal abelian inverse Littlewood-Offord theorem of Nguyen and Vu. In the nonabelian case, the only Littlewood-Offord theorem I am aware of is a recent result of Tiep and Vu for matrix groups, but in this case I do not know how to relate the above two random walks to each other, and so we can only obtain an analogue of the Tiep-Vu results for the symmetrised random walk {w_1 \dots w_n} instead of the ordered random walk {u_1 \dots u_n}.
