Suppose {F: X \rightarrow Y} is a continuous (but nonlinear) map from one normed vector space {X} to another {Y}. The continuity means, roughly speaking, that if {x_0, x \in X} are such that {\|x-x_0\|_X} is small, then {\|F(x)-F(x_0)\|_Y} is also small (though the precise notion of “smallness” may depend on {x} or {x_0}, particularly if {F} is not known to be uniformly continuous). If {F} is known to be differentiable (in, say, the Fréchet sense), then we in fact have a linear bound of the form

\displaystyle  \|F(x)-F(x_0)\|_Y \leq C(x_0) \|x-x_0\|_X

for some {C(x_0)} depending on {x_0}, if {\|x-x_0\|_X} is small enough; one can of course make {C(x_0)} independent of {x_0} (and drop the smallness condition) if {F} is known instead to be Lipschitz continuous.

In many applications in analysis, one would like more explicit and quantitative bounds that estimate quantities like {\|F(x)-F(x_0)\|_Y} in terms of quantities like {\|x-x_0\|_X}. There are a number of ways to do this. First of all, there is of course the trivial estimate arising from the triangle inequality:

\displaystyle  \|F(x)-F(x_0)\|_Y \leq \|F(x)\|_Y + \|F(x_0)\|_Y. \ \ \ \ \ (1)

This estimate is usually not very good when {x} and {x_0} are close together. However, when {x} and {x_0} are far apart, this estimate can be more or less sharp. For instance, if the magnitude of {F} varies so much from {x_0} to {x} that {\|F(x)\|_Y} is more than (say) twice that of {\|F(x_0)\|_Y}, or vice versa, then (1) is sharp up to a multiplicative constant. Also, if {F} is oscillatory in nature, and the distance between {x} and {x_0} exceeds the “wavelength” of the oscillation of {F} at {x_0} (or at {x}), then one also typically expects (1) to be close to sharp. Conversely, if {F} does not vary much in magnitude from {x_0} to {x}, and the distance between {x} and {x_0} is less than the wavelength of any oscillation present in {F}, one expects to be able to improve upon (1).

When {F} is relatively simple in form, one can sometimes proceed simply by substituting {x = x_0 + h}. For instance, if {F: R \rightarrow R} is the squaring function {F(x) = x^2} in a commutative ring {R}, one has

\displaystyle  F(x_0+h) = (x_0+h)^2 = x_0^2 + 2x_0 h+ h^2

and thus

\displaystyle  F(x_0+h) - F(x_0) = 2x_0 h + h^2

or in terms of the original variables {x, x_0} one has

\displaystyle  F(x) - F(x_0) = 2 x_0 (x-x_0) + (x-x_0)^2.

If the ring {R} is not commutative, one has to modify this to

\displaystyle  F(x) - F(x_0) = x_0 (x-x_0) + (x-x_0) x_0 + (x-x_0)^2.

Thus, for instance, if {A, B} are {n \times n} matrices and {\| \cdot \|_{op}} denotes the operator norm, one sees from the triangle inequality and the sub-multiplicativity {\| AB\|_{op} \leq \| A \|_{op} \|B\|_{op}} of the operator norm that

\displaystyle  \| A^2 - B^2 \|_{op} \leq \| A - B \|_{op} ( 2 \|B\|_{op} + \|A - B \|_{op} ). \ \ \ \ \ (2)

If {F(x)} involves {x} (or various components of {x}) in several places, one can sometimes get a good estimate by “swapping” {x} with {x_0} at each of the places in turn, using a telescoping series. For instance, if we again use the squaring function {F(x) = x^2 = x x} in a non-commutative ring, we have

\displaystyle  F(x) - F(x_0) = x x - x_0 x_0

\displaystyle  = (x x - x_0 x) + (x_0 x - x_0 x_0)

\displaystyle  = (x-x_0) x + x_0 (x-x_0)

which for instance leads to a slight improvement of (2):

\displaystyle  \| A^2 - B^2 \|_{op} \leq \| A - B \|_{op} ( \| A\|_{op} + \|B\|_{op} ).
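
These identities and estimates are easy to sanity-check numerically. Here is a minimal numpy sketch (the matrices {A, B} are arbitrary random samples, and the operator norm is computed as the largest singular value):

    import numpy as np

    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))
    op = lambda M: np.linalg.norm(M, 2)  # operator norm = largest singular value

    # telescoped identity: A^2 - B^2 = (A - B) A + B (A - B)
    assert np.allclose(A @ A - B @ B, (A - B) @ A + B @ (A - B))

    lhs = op(A @ A - B @ B)
    assert lhs <= op(A - B) * (2 * op(B) + op(A - B))   # estimate (2)
    assert lhs <= op(A - B) * (op(A) + op(B))           # improved estimate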

More generally, for any natural number {n}, one has the identity

\displaystyle  x^n - x_0^n = (x-x_0) (x^{n-1} + x^{n-2} x_0 + \dots + x x_0^{n-2} + x_0^{n-1}) \ \ \ \ \ (3)

in a commutative ring, while in a non-commutative ring one must modify this to

\displaystyle  x^n - x_0^n = \sum_{i=0}^{n-1} x_0^i (x-x_0) x^{n-1-i},

and for matrices one has

\displaystyle  \| A^n - B^n \|_{op} \leq \| A-B\|_{op} ( \|A\|_{op}^{n-1} + \| A\|_{op}^{n-2} \| B\|_{op} + \dots + \|B\|_{op}^{n-1} ).
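
The non-commutative identity and the resulting operator norm bound can be checked the same way; a short sketch along the same lines:

    import numpy as np
    from numpy.linalg import matrix_power as mpow

    rng = np.random.default_rng(1)
    X, X0 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
    n = 5

    # non-commutative telescoping: x^n - x0^n = sum_i x0^i (x - x0) x^(n-1-i)
    telescoped = sum(mpow(X0, i) @ (X - X0) @ mpow(X, n - 1 - i) for i in range(n))
    assert np.allclose(mpow(X, n) - mpow(X0, n), telescoped)

    # the corresponding operator norm bound
    op = lambda M: np.linalg.norm(M, 2)
    bound = op(X - X0) * sum(op(X) ** (n - 1 - i) * op(X0) ** i for i in range(n))
    assert op(mpow(X, n) - mpow(X0, n)) <= bound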

Exercise 1 If {U} and {V} are unitary {n \times n} matrices, show that the commutator {[U,V] := U V U^{-1} V^{-1}} obeys the inequality

\displaystyle  \| [U,V] - I \|_{op} \leq 2 \| U - I \|_{op} \| V - I \|_{op}.

(Hint: first control {\| UV - VU \|_{op}}.)

Now suppose (for simplicity) that {F: {\bf R}^d \rightarrow {\bf R}^{d'}} is a map between Euclidean spaces. If {F} is continuously differentiable, then one can use the fundamental theorem of calculus to write

\displaystyle  F(x) - F(x_0) = \int_0^1 \frac{d}{dt} F( \gamma(t) )\ dt

where {\gamma: [0,1] \rightarrow {\bf R}^d} is any continuously differentiable path from {x_0} to {x}. For instance, if one uses the straight line path {\gamma(t) := (1-t) x_0 + tx}, one has

\displaystyle  F(x) - F(x_0) = \int_0^1 ((x-x_0) \cdot \nabla F)( (1-t) x_0 + t x )\ dt.

In the one-dimensional case {d=1}, this simplifies to

\displaystyle  F(x) - F(x_0) = (x-x_0) \int_0^1 F'( (1-t) x_0 + t x )\ dt. \ \ \ \ \ (4)

Among other things, this immediately implies the factor theorem for {C^k} functions: if {F} is a {C^k({\bf R})} function for some {k \geq 1} that vanishes at some point {x_0}, then {F(x)} factors as the product of {x-x_0} and some {C^{k-1}} function {G}. Another basic consequence is that if {\nabla F} is uniformly bounded in magnitude by some constant {C}, then {F} is Lipschitz continuous with the same constant {C}.
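
A one-line numerical check of (4), with the arbitrary test case {F = \sin}:

    import numpy as np
    from scipy.integrate import quad

    F, Fprime = np.sin, np.cos
    x0, x = 0.3, 1.7

    # identity (4): F(x) - F(x0) = (x - x0) * int_0^1 F'((1-t) x0 + t x) dt
    integral, _ = quad(lambda t: Fprime((1 - t) * x0 + t * x), 0, 1)
    assert np.isclose(F(x) - F(x0), (x - x0) * integral)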

Applying (4) to the power function {x \mapsto x^n}, we obtain the identity

\displaystyle  x^n - x_0^n = n (x-x_0) \int_0^1 ((1-t) x_0 + t x)^{n-1}\ dt \ \ \ \ \ (5)

which can be compared with (3). Indeed, for {x_0} and {x} close to {1}, one can use logarithms and Taylor expansion to arrive at the approximation {((1-t) x_0 + t x)^{n-1} \approx x_0^{(1-t) (n-1)} x^{t(n-1)}}, so (3) behaves a little like a Riemann sum approximation to (5).

Exercise 2 For each {i=1,\dots,n}, let {X^{(1)}_i} and {X^{(0)}_i} be random variables taking values in a measurable space {R_i}, and let {F: R_1 \times \dots \times R_n \rightarrow {\bf R}^m} be a bounded measurable function.

  • (i) (Lindeberg exchange identity) Show that

    \displaystyle  \mathop{\bf E} F(X^{(1)}_1,\dots,X^{(1)}_n) - \mathop{\bf E} F(X^{(0)}_1,\dots,X^{(0)}_n)

    \displaystyle = \sum_{i=1}^n \mathop{\bf E} F( X^{(1)}_1,\dots, X^{(1)}_{i-1}, X^{(1)}_i, X^{(0)}_{i+1}, \dots, X^{(0)}_n)

    \displaystyle - \mathop{\bf E} F( X^{(1)}_1,\dots, X^{(1)}_{i-1}, X^{(0)}_i, X^{(0)}_{i+1}, \dots, X^{(0)}_n).

  • (ii) (Knowles-Yin exchange identity) Show that

    \displaystyle  \mathop{\bf E} F(X^{(1)}_1,\dots,X^{(1)}_n) - \mathop{\bf E} F(X^{(0)}_1,\dots,X^{(0)}_n)

    \displaystyle = \int_0^1 \sum_{i=1}^n \mathop{\bf E} F( X^{(t)}_1,\dots, X^{(t)}_{i-1}, X^{(1)}_i, X^{(t)}_{i+1}, \dots, X^{(t)}_n)

    \displaystyle - \mathop{\bf E} F( X^{(t)}_1,\dots, X^{(t)}_{i-1}, X^{(0)}_i, X^{(t)}_{i+1}, \dots, X^{(t)}_n)\ dt,

where {X^{(t)}_i = 1_{I_i \leq t} X^{(0)}_i + 1_{I_i > t} X^{(1)}_i} is a mixture of {X^{(0)}_i} and {X^{(1)}_i}, with {I_1,\dots,I_n} uniformly drawn from {[0,1]} independently of each other and of the {X^{(0)}_1,\dots,X^{(0)}_n, X^{(1)}_1,\dots,X^{(1)}_n}.

  • (iii) Discuss the relationship between the identities in parts (i), (ii) with the identities (3), (5).

(The identity in (i) is the starting point for the Lindeberg exchange method in probability theory, discussed for instance in this previous post. The identity in (ii) can also be used in the Lindeberg exchange method; the terms in the right-hand side are slightly more symmetric in the indices {1,\dots,n}, which can be a technical advantage in some applications; see this paper of Knowles and Yin for an instance of this.)
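
Incidentally, the identity in (i) is a pure telescoping identity and thus holds pointwise, even before taking expectations; here is a minimal numpy check of that pointwise version, with an arbitrary test function and arbitrary sample vectors:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 4
    F = lambda xs: np.sin(xs @ np.arange(1.0, n + 1))   # an arbitrary test function
    X1, X0 = rng.standard_normal(n), rng.standard_normal(n)

    # swap the i-th coordinate from X0 to X1, one index at a time
    telescoped = sum(
        F(np.concatenate([X1[:i + 1], X0[i + 1:]]))
        - F(np.concatenate([X1[:i], X0[i:]]))
        for i in range(n)
    )
    assert np.isclose(F(X1) - F(X0), telescoped)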

Exercise 3 If {F: {\bf R}^d \rightarrow {\bf R}^{d'}} is {k} times continuously differentiable, establish Taylor’s theorem with remainder

\displaystyle  F(x) = \sum_{j=0}^{k-1} \frac{1}{j!} (((x-x_0) \cdot \nabla)^j F)( x_0 )

\displaystyle + \int_0^1 \frac{(1-t)^{k-1}}{(k-1)!} (((x-x_0) \cdot \nabla)^k F)((1-t) x_0 + t x)\ dt.

If {\nabla^k F} is bounded, conclude that

\displaystyle  |F(x) - \sum_{j=0}^{k-1} \frac{1}{j!} (((x-x_0) \cdot \nabla)^j F)( x_0 )|

\displaystyle \leq \frac{|x-x_0|^k}{k!} \sup_{y \in {\bf R}^d} |\nabla^k F(y)|.

For real scalar functions {F: {\bf R}^d \rightarrow {\bf R}}, the average value of the continuous real-valued function {t \mapsto ((x - x_0) \cdot \nabla F)((1-t) x_0 + t x)} must, by the intermediate value theorem, be attained at some point {t} in the interval {[0,1]}. We thus conclude the mean-value theorem

\displaystyle  F(x) - F(x_0) = ((x - x_0) \cdot \nabla F)((1-t) x_0 + t x)

for some {t \in [0,1]} (that can depend on {x}, {x_0}, and {F}). This can for instance give a second proof of the fact that continuously differentiable functions {F} with bounded derivative are Lipschitz continuous. However, it is worth stressing that the mean-value theorem is only available for real scalar functions; it is false for instance for complex scalar functions. A basic counterexample is given by the function {e(x) := e^{2\pi i x}}; there is no {t \in [0,1]} for which {e(1) - e(0) = e'(t)}. On the other hand, as {e'} has magnitude {2\pi}, we still know from (4) that {e} is Lipschitz of constant {2\pi}, and when combined with (1) we obtain the basic bounds

\displaystyle  |e(x) - e(y)| \leq \min( 2, 2\pi |x-y| )

which are already very useful for many applications.

Exercise 4 Let {H_0, V} be {n \times n} matrices, and let {t} be a non-negative real.

  • (i) Establish the Duhamel formula

    \displaystyle  e^{t(H_0+V)} = e^{tH_0} + \int_0^t e^{(t-s) H_0} V e^{s (H_0+V)}\ ds

    \displaystyle  = e^{tH_0} + \int_0^t e^{(t-s) (H_0+V)} V e^{s H_0}\ ds

    where {e^A} denotes the matrix exponential of {A}. (Hint: Differentiate {e^{(t-s) H_0} e^{s (H_0+V)}} or {e^{(t-s) (H_0+V)} e^{s H_0}} in {s}.)

  • (ii) Establish the iterated Duhamel formula

    \displaystyle  e^{t(H_0+V)} = e^{tH_0} + \sum_{j=1}^k \int_{0 \leq t_1 \leq \dots \leq t_j \leq t}

    \displaystyle e^{(t-t_j) H_0} V e^{(t_j-t_{j-1}) H_0} V \dots e^{(t_2-t_1) H_0} V e^{t_1 H_0}\ dt_1 \dots dt_j

    \displaystyle  + \int_{0 \leq t_1 \leq \dots \leq t_{k+1} \leq t}

    \displaystyle  e^{(t-t_{k+1}) (H_0+V)} V e^{(t_{k+1}-t_k) H_0} V \dots e^{(t_2-t_1) H_0} V e^{t_1 H_0}\ dt_1 \dots dt_{k+1}

    for any {k \geq 0}.

  • (iii) Establish the infinitely iterated Duhamel formula

    \displaystyle  e^{t(H_0+V)} = e^{tH_0} + \sum_{j=1}^\infty \int_{0 \leq t_1 \leq \dots \leq t_j \leq t}

    \displaystyle e^{(t-t_j) H_0} V e^{(t_j-t_{j-1}) H_0} V \dots e^{(t_2-t_1) H_0} V e^{t_1 H_0}\ dt_1 \dots dt_j.

  • (iv) If {H(t)} is an {n \times n} matrix depending in a continuously differentiable fashion on {t}, establish the variation formula

    \displaystyle  \frac{d}{dt} e^{H(t)} = (F(\mathrm{ad}(H(t))) H'(t)) e^{H(t)}

    where {\mathrm{ad}(H)} is the adjoint representation {\mathrm{ad}(H)(V) = HV - VH} applied to {H}, and {F} is the function

    \displaystyle  F(z) := \int_0^1 e^{sz}\ ds

    (thus {F(z) = \frac{e^z-1}{z}} for non-zero {z}), with {F(\mathrm{ad}(H(t)))} defined using functional calculus.

We remark that further manipulation of (iv) of the above exercise using the fundamental theorem of calculus eventually leads to the Baker-Campbell-Hausdorff-Dynkin formula, as discussed in this previous blog post.
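
Part (i) of Exercise 4 is also easy to verify numerically. A rough sketch, using scipy's matrix exponential and a midpoint-rule discretisation of the Duhamel integral (the matrices are arbitrary random choices, scaled down to keep the quadrature error small):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)
    H0 = 0.3 * rng.standard_normal((4, 4))
    V = 0.3 * rng.standard_normal((4, 4))
    t, N = 1.0, 1000

    # midpoint-rule discretisation of int_0^t exp((t-s) H0) V exp(s (H0+V)) ds
    s = (np.arange(N) + 0.5) * (t / N)
    integral = sum(expm((t - si) * H0) @ V @ expm(si * (H0 + V)) for si in s) * (t / N)

    assert np.allclose(expm(t * (H0 + V)), expm(t * H0) + integral, atol=1e-5)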

Exercise 5 Let {A, B} be positive definite {n \times n} matrices, and let {Y} be an {n \times n} matrix. Show that there is a unique solution {X} to the Sylvester equation

\displaystyle  AX + X B = Y

which is given by the formula

\displaystyle  X = \int_0^\infty e^{-tA} Y e^{-tB}\ dt.
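
A rough numerical sketch of this formula (the positive definite {A, B} below are built to have eigenvalues at least {1}, so truncating the integral at {T = 20} incurs only an {O(e^{-40})} error); scipy's solve_sylvester gives an independent check:

    import numpy as np
    from scipy.linalg import expm, solve_sylvester

    rng = np.random.default_rng(4)
    M1, M2 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
    A = M1 @ M1.T / 4 + np.eye(4)   # positive definite, eigenvalues >= 1
    B = M2 @ M2.T / 4 + np.eye(4)
    Y = rng.standard_normal((4, 4))

    # truncated midpoint rule for X = int_0^infty exp(-tA) Y exp(-tB) dt
    T, N = 20.0, 4000
    ts = (np.arange(N) + 0.5) * (T / N)
    X = sum(expm(-t * A) @ Y @ expm(-t * B) for t in ts) * (T / N)

    assert np.allclose(A @ X + X @ B, Y, atol=1e-3)
    assert np.allclose(X, solve_sylvester(A, B, Y), atol=1e-3)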

In the above examples we had applied the fundamental theorem of calculus along linear curves {\gamma(t) = (1-t) x_0 + t x}. However, it is sometimes better to use other curves. For instance, the circular arc {\gamma(t) = \cos(\pi t/2) x_0 + \sin(\pi t/2) x} can be useful, particularly if {x_0} and {x} are “orthogonal” or “independent” in some sense; a good example of this is the proof by Maurey and Pisier of the gaussian concentration inequality, given in Theorem 8 of this previous blog post. In a similar vein, if one wishes to compare a scalar random variable {X} of mean zero and variance one with a Gaussian random variable {G} of mean zero and variance one, it can be useful to introduce the intermediate random variables {\gamma(t) := (1-t)^{1/2} X + t^{1/2} G} (where {X} and {G} are independent); note that these variables have mean zero and variance one, and after coupling them together appropriately they evolve by the Ornstein-Uhlenbeck process, which has many useful properties. For instance, one can use these ideas to establish monotonicity formulae for entropy; see e.g. this paper of Courtade for an example of this and further references. More generally, one can exploit curves {\gamma} that flow according to some geometrically natural ODE or PDE; several examples of this occur famously in Perelman’s proof of the Poincaré conjecture via Ricci flow, discussed for instance in this previous set of lecture notes.

In some cases, it is difficult to compute {F(x)-F(x_0)} or the derivative {\nabla F} directly, but one can instead proceed by implicit differentiation, or some variant thereof. Consider for instance the matrix inversion map {F(A) := A^{-1}} (defined on the open dense subset of {n \times n} matrices consisting of invertible matrices). If one wants to compute {F(B)-F(A)} for {B} close to {A}, one can temporarily write {F(B) - F(A) = E}, thus

\displaystyle  B^{-1} - A^{-1} = E.

Multiplying both sides on the left by {B} to eliminate the {B^{-1}} term, and on the right by {A} to eliminate the {A^{-1}} term, one obtains

\displaystyle  A - B = B E A

and thus on reversing these steps we arrive at the basic identity

\displaystyle  B^{-1} - A^{-1} = B^{-1} (A - B) A^{-1}. \ \ \ \ \ (6)

For instance, if {H_0, V} are {n \times n} matrices, and we consider the resolvents

\displaystyle  R_0(z) := (H_0 - z I)^{-1}; \quad R_V(z) := (H_0 + V - zI)^{-1}

then we have the resolvent identity

\displaystyle  R_V(z) - R_0(z) = - R_V(z) V R_0(z) \ \ \ \ \ (7)

as long as {z} does not lie in the spectrum of {H_0} or {H_0+V} (for instance, if {H_0}, {V} are self-adjoint then one can take {z} to be any complex number with non-zero imaginary part). One can iterate this identity to obtain

\displaystyle  R_V(z) = \sum_{j=0}^k (-R_0(z) V)^j R_0(z) + (-R_V(z) V) (-R_0(z) V)^k R_0(z)

for any natural number {k}; in particular, if {R_0(z) V} has operator norm less than one, one has the Neumann series

\displaystyle  R_V(z) = \sum_{j=0}^\infty (-R_0(z) V)^j R_0(z).
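
Both the resolvent identity (7) and the Neumann series can be sanity-checked numerically; a minimal sketch, with {H_0} self-adjoint and {V} small enough that {R_0(z) V} has operator norm well below one:

    import numpy as np

    rng = np.random.default_rng(5)
    H0 = rng.standard_normal((4, 4)); H0 = (H0 + H0.T) / 2   # self-adjoint
    V = 0.1 * rng.standard_normal((4, 4)); V = (V + V.T) / 2
    z, I = 2j, np.eye(4)

    R0 = np.linalg.inv(H0 - z * I)
    RV = np.linalg.inv(H0 + V - z * I)

    # resolvent identity (7)
    assert np.allclose(RV - R0, -RV @ V @ R0)

    # Neumann series (convergent here since ||R0 V|| < 1)
    series = sum(np.linalg.matrix_power(-R0 @ V, j) @ R0 for j in range(60))
    assert np.allclose(RV, series)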

Similarly, if {A(t)} is a family of invertible matrices that depends in a continuously differentiable fashion on a time variable {t}, then by implicitly differentiating the identity

\displaystyle  A(t) A(t)^{-1} = I

in {t} using the product rule, we obtain

\displaystyle  (\frac{d}{dt} A(t)) A(t)^{-1} + A(t) \frac{d}{dt} A(t)^{-1} = 0

and hence

\displaystyle  \frac{d}{dt} A(t)^{-1} = - A(t)^{-1} (\frac{d}{dt} A(t)) A(t)^{-1}

(this identity may also be easily derived from (6)). One can then use the fundamental theorem of calculus to obtain variants of (6), for instance by using the curve {\gamma(t) = (1-t) A + tB} we arrive at

\displaystyle  B^{-1} - A^{-1} = \int_0^1 ((1-t)A + tB)^{-1} (A-B) ((1-t)A + tB)^{-1}\ dt

assuming that the curve stays entirely within the set of invertible matrices. While this identity may seem more complicated than (6), it is more symmetric, which conveys some advantages. For instance, using this identity it is easy to see that if {A, B} are positive definite with {A>B} in the sense of positive definite matrices (that is, {A-B} is positive definite), then {B^{-1} > A^{-1}}. (Try to prove this using (6) instead!)

Exercise 6 If {A} is an invertible {n \times n} matrix and {u, v} are {n \times 1} vectors, establish the Sherman-Morrison formula

\displaystyle  (A + t uv^T)^{-1} = A^{-1} - \frac{t}{1 + t v^T A^{-1} u} A^{-1} uv^T A^{-1}

whenever {t} is a scalar such that {1 + t v^T A^{-1} u} is non-zero. (See also this previous blog post for more discussion of these sorts of identities.)
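
A quick numerical check of the Sherman-Morrison formula (the shift by {5I} is just to keep the randomly chosen {A} comfortably invertible):

    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # comfortably invertible
    u, v = rng.standard_normal((5, 1)), rng.standard_normal((5, 1))
    t = 0.7

    Ainv = np.linalg.inv(A)
    denom = 1 + t * (v.T @ Ainv @ u).item()
    rhs = Ainv - (t / denom) * (Ainv @ u @ v.T @ Ainv)
    assert np.allclose(np.linalg.inv(A + t * u @ v.T), rhs)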

One can use the Cauchy integral formula to extend these identities to other functions of matrices. For instance, if {F: {\bf C} \rightarrow {\bf C}} is an entire function, and {\gamma} is a counterclockwise contour that goes around the spectrum of both {H_0} and {H_0+V}, then we have

\displaystyle  F(H_0+V) = \frac{-1}{2\pi i} \int_\gamma F(z) R_V(z)\ dz

and similarly

\displaystyle  F(H_0) = \frac{-1}{2\pi i} \int_\gamma F(z) R_0(z)\ dz

and hence by (7) one has

\displaystyle  F(H_0+V) - F(H_0) = \frac{1}{2\pi i} \int_\gamma F(z) R_V(z) V R_0(z)\ dz;

similarly, if {H(t)} depends on {t} in a continuously differentiable fashion, then

\displaystyle  \frac{d}{dt} F(H(t)) = \frac{1}{2\pi i} \int_\gamma F(z) (H(t) - zI)^{-1} H'(t) (H(t)-zI)^{-1}\ dz

as long as {\gamma} goes around the spectrum of {H(t)}.

Exercise 7 If {H(t)} is an {n \times n} matrix depending continuously differentiably on {t}, and {F: {\bf C} \rightarrow {\bf C}} is an entire function, establish the tracial chain rule

\displaystyle  \frac{d}{dt} \hbox{tr} F(H(t)) = \hbox{tr}(F'(H(t)) H'(t)).

In a similar vein, given that the logarithm function is the antiderivative of the reciprocal, one can express the matrix logarithm {\log A} of a positive definite matrix by the fundamental theorem of calculus identity

\displaystyle  \log A = \int_0^\infty (I + sI)^{-1} - (A + sI)^{-1}\ ds

(with the term {(I+sI)^{-1}} needed to prevent a logarithmic divergence in the integral). Differentiating, we see that if {A(t)} is a family of positive definite matrices depending continuously differentiably on {t}, then

\displaystyle  \frac{d}{dt} \log A(t) = \int_0^\infty (A(t) + sI)^{-1} A'(t) (A(t)+sI)^{-1}\ ds.

This can be used for instance to show that {\log} is a monotone increasing function, in the sense that {\log A> \log B} whenever {A > B > 0} in the sense of positive definite matrices. One can of course integrate this formula to obtain some formulae for the difference {\log A - \log B} of the logarithms of two positive definite matrices {A,B}.
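
The integral formula for {\log A} is also easy to test numerically; here is a rough sketch, using the substitution {s = u/(1-u)} to map the half-line {[0,\infty)} to {[0,1)} before applying a midpoint rule, and comparing against scipy's logm:

    import numpy as np
    from scipy.linalg import logm

    rng = np.random.default_rng(7)
    M = rng.standard_normal((4, 4))
    A = M @ M.T / 4 + np.eye(4)   # positive definite
    I = np.eye(4)

    # substitute s = u / (1 - u), mapping [0, infinity) to [0, 1), then midpoint rule
    N = 20000
    u = (np.arange(N) + 0.5) / N
    L = np.zeros((4, 4))
    for ui in u:
        s = ui / (1 - ui)
        L += (I / (1 + s) - np.linalg.inv(A + s * I)) / (1 - ui) ** 2
    L /= N

    assert np.allclose(L, np.real(logm(A)), atol=1e-4)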

Comparing the square roots {A^{1/2}, B^{1/2}} of two positive definite matrices {A,B}, that is to say estimating the difference {A^{1/2} - B^{1/2}}, is trickier; there are multiple ways to proceed. One approach is to use contour integration as before (but one has to take some care to avoid branch cuts of the square root). Another is to express the square root in terms of exponentials via the formula

\displaystyle  A^{1/2} = \frac{1}{\Gamma(-1/2)} \int_0^\infty (e^{-tA} - I) t^{-1/2} \frac{dt}{t}

where {\Gamma} is the gamma function; this formula can be verified by first diagonalising {A} to reduce to the scalar case and using the definition of the gamma function. Then one has

\displaystyle  A^{1/2} - B^{1/2} = \frac{1}{\Gamma(-1/2)} \int_0^\infty (e^{-tA} - e^{-tB}) t^{-1/2} \frac{dt}{t}

and one can use some of the previous identities to control {e^{-tA} - e^{-tB}}. This is pretty messy though. A third way to proceed is via implicit differentiation. If for instance {A(t)} is a family of positive definite matrices depending continuously differentiably on {t}, we can differentiate the identity

\displaystyle  A(t)^{1/2} A(t)^{1/2} = A(t)

to obtain

\displaystyle  A(t)^{1/2} \frac{d}{dt} A(t)^{1/2} + (\frac{d}{dt} A(t)^{1/2}) A(t)^{1/2} = \frac{d}{dt} A(t).

This can for instance be solved using Exercise 5 to obtain

\displaystyle  \frac{d}{dt} A(t)^{1/2} = \int_0^\infty e^{-sA(t)^{1/2}} A'(t) e^{-sA(t)^{1/2}}\ ds

and this can in turn be integrated to obtain a formula for {A^{1/2} - B^{1/2}}. This is again a rather messy formula, but it does at least demonstrate that the square root is a monotone increasing function on positive definite matrices: {A > B > 0} implies {A^{1/2} > B^{1/2} > 0}.
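
The gamma function representation of {A^{1/2}} from earlier can also be checked numerically; here is a sketch, substituting {t = u^2} to remove the singularity of the integrand at the origin, and replacing the tail {\int_T^\infty} by its explicit limiting value {-2T^{-1/2} I} (valid up to {O(e^{-T})} errors since the eigenvalues of {A} below are at least {1}; recall also {\Gamma(-1/2) = -2\sqrt{\pi}}):

    import numpy as np
    from scipy.linalg import expm, sqrtm
    from math import gamma, sqrt

    rng = np.random.default_rng(8)
    M = rng.standard_normal((4, 4))
    A = M @ M.T / 4 + np.eye(4)   # positive definite, eigenvalues >= 1
    I = np.eye(4)

    # substitute t = u^2 on [0, T]: the integrand becomes 2 (exp(-u^2 A) - I) / u^2
    T, N = 40.0, 4000
    u = (np.arange(N) + 0.5) * (sqrt(T) / N)
    integral = sum(2 * (expm(-ui**2 * A) - I) / ui**2 for ui in u) * (sqrt(T) / N)
    # exact tail: -int_T^inf t^(-3/2) dt = -2 T^(-1/2), as exp(-tA) is negligible there
    integral += -2 / sqrt(T) * I

    assert np.allclose(integral / gamma(-0.5), np.real(sqrtm(A)), atol=1e-3)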

Several of the above identities for matrices can be (carefully) extended to operators on Hilbert spaces provided that they are sufficiently well behaved (in particular, if they have a good functional calculus, and if various spectral hypotheses are obeyed). We will not attempt to do so here, however.

Suppose one has a bounded sequence {(a_n)_{n=1}^\infty = (a_1, a_2, \dots)} of real numbers. What kinds of limits can one form from this sequence?

Of course, we have the usual notion of limit {\lim_{n \rightarrow \infty} a_n}, which in this post I will refer to as the classical limit to distinguish from the other limits discussed in this post. The classical limit, if it exists, is the unique real number {L} such that for every {\varepsilon>0}, one has {|a_n-L| \leq \varepsilon} for all sufficiently large {n}. We say that a sequence is (classically) convergent if its classical limit exists. The classical limit obeys many useful limit laws when applied to classically convergent sequences. Firstly, it is linear: if {(a_n)_{n=1}^\infty} and {(b_n)_{n=1}^\infty} are classically convergent sequences, then {(a_n+b_n)_{n=1}^\infty} is also classically convergent with

\displaystyle  \lim_{n \rightarrow \infty} (a_n + b_n) = (\lim_{n \rightarrow \infty} a_n) + (\lim_{n \rightarrow \infty} b_n) \ \ \ \ \ (1)

and similarly for any scalar {c}, {(ca_n)_{n=1}^\infty} is classically convergent with

\displaystyle  \lim_{n \rightarrow \infty} (ca_n) = c \lim_{n \rightarrow \infty} a_n. \ \ \ \ \ (2)

It is also an algebra homomorphism: {(a_n b_n)_{n=1}^\infty} is also classically convergent with

\displaystyle  \lim_{n \rightarrow \infty} (a_n b_n) = (\lim_{n \rightarrow \infty} a_n) (\lim_{n \rightarrow \infty} b_n). \ \ \ \ \ (3)

We also have shift invariance: if {(a_n)_{n=1}^\infty} is classically convergent, then so is {(a_{n+1})_{n=1}^\infty} with

\displaystyle  \lim_{n \rightarrow \infty} a_{n+1} = \lim_{n \rightarrow \infty} a_n \ \ \ \ \ (4)

and more generally in fact for any injection {\phi: {\bf N} \rightarrow {\bf N}}, {(a_{\phi(n)})_{n=1}^\infty} is classically convergent with

\displaystyle  \lim_{n \rightarrow \infty} a_{\phi(n)} = \lim_{n \rightarrow \infty} a_n. \ \ \ \ \ (5)

The classical limit of a sequence is unchanged if one modifies any finite number of elements of the sequence. Finally, we have boundedness: for any classically convergent sequence {(a_n)_{n=1}^\infty}, one has

\displaystyle  \inf_n a_n \leq \lim_{n \rightarrow \infty} a_n \leq \sup_n a_n. \ \ \ \ \ (6)

One can in fact show without much difficulty that these laws uniquely determine the classical limit functional on convergent sequences.

One would like to extend the classical limit notion to more general bounded sequences; however, when doing so one must give up one or more of the desirable limit laws that were listed above. Consider for instance the sequence {a_n = (-1)^n}. On the one hand, one has {a_n^2 = 1} for all {n}, so if one wishes to retain the homomorphism property (3), any “limit” of this sequence {a_n} would have to necessarily square to {1}, that is to say it must equal {+1} or {-1}. On the other hand, if one wished to retain the shift invariance property (4) as well as the homogeneity property (2), any “limit” of this sequence would have to equal its own negation and thus be zero.

Nevertheless there are a number of useful generalisations and variants of the classical limit concept for non-convergent sequences that obey a significant portion of the above limit laws. For instance, we have the limit superior

\displaystyle  \limsup_{n \rightarrow \infty} a_n := \inf_N \sup_{n \geq N} a_n

and limit inferior

\displaystyle  \liminf_{n \rightarrow \infty} a_n := \sup_N \inf_{n \geq N} a_n

which are well-defined real numbers for any bounded sequence {(a_n)_{n=1}^\infty}; they agree with the classical limit when the sequence is convergent, but disagree with each other otherwise. They enjoy the shift-invariance property (4), and the boundedness property (6), but do not in general obey the homomorphism property (3) or the linearity property (1); indeed, we only have the subadditivity property

\displaystyle  \limsup_{n \rightarrow \infty} (a_n + b_n) \leq (\limsup_{n \rightarrow \infty} a_n) + (\limsup_{n \rightarrow \infty} b_n)

for the limit superior, and the superadditivity property

\displaystyle  \liminf_{n \rightarrow \infty} (a_n + b_n) \geq (\liminf_{n \rightarrow \infty} a_n) + (\liminf_{n \rightarrow \infty} b_n)

for the limit inferior. The homogeneity property (2) is only obeyed by the limits superior and inferior for non-negative {c}; for negative {c}, one must have the limit inferior on one side of (2) and the limit superior on the other, thus for instance

\displaystyle  \limsup_{n \rightarrow \infty} (-a_n) = - \liminf_{n \rightarrow \infty} a_n.

The limit superior and limit inferior are examples of limit points of the sequence, which can for instance be defined as points that are limits of at least one subsequence of the original sequence. Indeed, the limit superior is always the largest limit point of the sequence, and the limit inferior is always the smallest limit point. However, limit points can be highly non-unique (indeed they are unique if and only if the sequence is classically convergent), and so it is difficult to sensibly interpret most of the usual limit laws in this setting, with the exception of the homogeneity property (2) and the boundedness property (6) that are easy to state for limit points.

Another notion of limit is the Cesàro limit

\displaystyle  \mathrm{C}\!-\!\lim_{n \rightarrow \infty} a_n := \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N a_n;

if this limit exists, we say that the sequence is Cesàro convergent. If the sequence {(a_n)_{n=1}^\infty} already has a classical limit, then it also has a Cesàro limit that agrees with the classical limit; but there are additional sequences that have a Cesàro limit but not a classical one. For instance, the non-classically convergent sequence {a_n= (-1)^n} discussed above is Cesàro convergent, with a Cesàro limit of {0}. However, there are still bounded sequences that do not have a Cesàro limit, such as {a_n := \sin( \log n )} (exercise!). The Cesàro limit is linear, bounded, and shift invariant, but not an algebra homomorphism and also does not obey the rearrangement property (5).
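
A quick numerical illustration of the last two claims (a numpy sketch):

    import numpy as np

    n = np.arange(1, 10**6 + 1)

    # (-1)^n: no classical limit, but the Cesaro averages tend to 0
    avg_alt = np.cumsum((-1.0) ** n) / n
    print(avg_alt[[99, 9999, 999999]])   # -> 0

    # sin(log n): the Cesaro averages keep oscillating and never settle down
    avg_sin = np.cumsum(np.sin(np.log(n))) / n
    print(avg_sin[[99, 999, 9999, 99999, 999999]])   # -> oscillates in [-0.7, 0.7]

The first set of averages decays like {O(1/N)}, while the second set oscillates on the scale of {\log N}; indeed, comparing the sum with the integral {\frac{1}{N} \int_1^N \sin(\log t)\ dt} shows that the Cesàro averages of {\sin(\log n)} track {\frac{\sqrt{2}}{2} \sin(\log N - \frac{\pi}{4})} to leading order, which is essentially the content of the exercise.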

Using the Hahn-Banach theorem, one can extend the classical limit functional to generalised limit functionals {\mathop{\widetilde \lim}_{n \rightarrow \infty} a_n}, defined to be bounded linear functionals from the space {\ell^\infty({\bf N})} of bounded real sequences to the real numbers {{\bf R}} that extend the classical limit functional (defined on the space {c_0({\bf N}) + {\bf R}} of convergent sequences) without any increase in the operator norm. (In some of my past writings I made the slight error of referring to these generalised limit functionals as Banach limits, though as discussed below, the latter actually refers to a subclass of generalised limit functionals.) It is not difficult to see that such generalised limit functionals will range between the limit inferior and limit superior. In fact, for any specific sequence {(a_n)_{n=1}^\infty} and any number {L} lying in the closed interval {[\liminf_{n \rightarrow \infty} a_n, \limsup_{n \rightarrow \infty} a_n]}, there exists at least one generalised limit functional {\mathop{\widetilde \lim}_{n \rightarrow \infty}} that takes the value {L} when applied to {a_n}; for instance, for any number {\theta} in {[-1,1]}, there exists a generalised limit functional that assigns that number {\theta} as the “limit” of the sequence {a_n = (-1)^n}. This claim can be seen by first designing such a limit functional on the vector space spanned by the convergent sequences and by {(a_n)_{n=1}^\infty}, and then appealing to the Hahn-Banach theorem to extend to all sequences. This observation also gives a necessary and sufficient criterion for a bounded sequence {(a_n)_{n=1}^\infty} to classically converge to a limit {L}, namely that all generalised limits of this sequence must equal {L}.

Because of the reliance on the Hahn-Banach theorem, the existence of generalised limits requires the axiom of choice (or some weakened version thereof); there are presumably models of set theory without the axiom of choice in which no generalised limits exist, but I do not know of an explicit reference for this.

Generalised limits can obey the shift-invariance property (4) or the algebra homomorphism property (3), but as the above analysis of the sequence {a_n = (-1)^n} shows, they cannot do both. Generalised limits that obey the shift-invariance property (4) are known as Banach limits; one can for instance construct them by applying the Hahn-Banach theorem to the Cesàro limit functional; alternatively, if {\mathop{\widetilde \lim}} is any generalised limit, then the Cesàro-type functional {(a_n)_{n=1}^\infty \mapsto \mathop{\widetilde \lim}_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N a_n} will be a Banach limit. The existence of Banach limits can be viewed as a demonstration of the amenability of the natural numbers (or integers); see this previous blog post for further discussion.

Generalised limits that obey the algebra homomorphism property (3) are known as ultrafilter limits. If one is given a generalised limit functional {p\!-\!\lim_{n \rightarrow \infty}} that obeys (3), then for any subset {A} of the natural numbers {{\bf N}}, the generalised limit {p\!-\!\lim_{n \rightarrow \infty} 1_A(n)} must equal its own square (since {1_A(n)^2 = 1_A(n)}) and is thus either {0} or {1}. If one defines {p \subset 2^{\bf N}} to be the collection of all subsets {A} of {{\bf N}} for which {p\!-\!\lim_{n \rightarrow \infty} 1_A(n) = 1}, one can verify that {p} obeys the axioms of a non-principal ultrafilter. Conversely, if {p} is a non-principal ultrafilter, one can define the associated generalised limit {p\!-\!\lim_{n \rightarrow \infty} a_n} of any bounded sequence {(a_n)_{n=1}^\infty} to be the unique real number {L} such that the sets {\{ n \in {\bf N}: |a_n - L| \leq \varepsilon \}} lie in {p} for all {\varepsilon>0}; one can check that this does indeed give a well-defined generalised limit that obeys (3). Non-principal ultrafilters can be constructed using Zorn’s lemma. In fact, they do not quite need the full strength of the axiom of choice; see the Wikipedia article on the ultrafilter lemma for examples.

We have previously noted that generalised limits of a sequence can converge to any point between the limit inferior and limit superior. The same is not true if one restricts to Banach limits or ultrafilter limits. For instance, by the arguments already given, the only possible Banach limit for the sequence {a_n = (-1)^n} is zero. Meanwhile, an ultrafilter limit must converge to a limit point of the original sequence, but conversely every limit point can be attained by at least one ultrafilter limit; we leave these assertions as an exercise to the interested reader. In particular, a bounded sequence converges classically to a limit {L} if and only if all ultrafilter limits converge to {L}.

There is no generalisation of the classical limit functional to any space that includes non-classically convergent sequences that obeys the subsequence property (5), since any non-classically-convergent sequence will have one subsequence that converges to the limit superior, and another subsequence that converges to the limit inferior, and one of these will have to violate (5) since the limit superior and limit inferior are distinct. So the above limit notions come close to the best generalisations of limit that one can use in practice.

We summarise the above discussion in the following table:

Limit        | Always defined | Linear | Shift-invariant | Homomorphism | Constructive
Classical    | No             | Yes    | Yes             | Yes          | Yes
Superior     | Yes            | No     | Yes             | No           | Yes
Inferior     | Yes            | No     | Yes             | No           | Yes
Cesàro       | No             | Yes    | Yes             | No           | Yes
Generalised  | Yes            | Yes    | Depends         | Depends      | No
Banach       | Yes            | Yes    | Yes             | No           | No
Ultrafilter  | Yes            | Yes    | No              | Yes          | No

A sequence {a: {\bf Z} \rightarrow {\bf C}} of complex numbers is said to be quasiperiodic if it is of the form

\displaystyle a(n) = F( \alpha_1 n \hbox{ mod } 1, \dots, \alpha_k n \hbox{ mod } 1)

for some real numbers {\alpha_1,\dots,\alpha_k} and continuous function {F: ({\bf R}/{\bf Z})^k \rightarrow {\bf C}}. For instance, linear phases such as {n \mapsto e(\alpha n + \beta)} (where {e(\theta) := e^{2\pi i \theta}}) are examples of quasiperiodic sequences; the top order coefficient {\alpha} (modulo {1}) can be viewed as a “frequency” of the integers, and an element of the Pontryagin dual {\hat {\bf Z} \equiv {\bf R}/{\bf Z}} of the integers. Any periodic sequence is also quasiperiodic (taking {k=1} and {\alpha_1} to be the reciprocal of the period). A sequence is said to be almost periodic if it is the uniform limit of quasiperiodic sequences. For instance any Fourier series of the form

\displaystyle a(n) = \sum_{j=1}^\infty c_j e(\alpha_j n)

with {\alpha_1,\alpha_2,\dots} real numbers and {c_1,c_2,\dots} an absolutely summable sequence of complex coefficients, will be almost periodic.

These sequences arise in various “complexity one” problems in arithmetic combinatorics and ergodic theory. For instance, if {(X, \mu, T)} is a measure-preserving system (a probability space {(X,\mu)} equipped with a measure-preserving shift {T: X \rightarrow X}), and {f_1,f_2 \in L^\infty(X)} are bounded measurable functions, then the correlation sequence

\displaystyle a(n) := \int_X f_1(x) f_2(T^n x)\ d\mu(x)

can be shown to be an almost periodic sequence, plus an error term {b_n} which is “null” in the sense that it has vanishing uniform density:

\displaystyle \sup_N \frac{1}{M} \sum_{n=N+1}^{N+M} |b_n| \rightarrow 0 \hbox{ as } M \rightarrow \infty.

This can be established in a number of ways, for instance by writing {a(n)} as the Fourier coefficients of the spectral measure of the shift {T} with respect to the functions {f_1,f_2}, and then decomposing that measure into pure point and continuous components.

In the last two decades or so, it has become clear that there are natural higher order versions of these concepts, in which linear polynomials such as {\alpha n + \beta} are replaced with higher degree counterparts. The most obvious candidates for these counterparts would be the polynomials {\alpha_d n^d + \dots + \alpha_0}, but this turns out to not be a complete set of higher degree objects needed for the theory. Instead, the higher order versions of quasiperiodic and almost periodic sequences are now known as basic nilsequences and nilsequences respectively, while the higher order version of a linear phase is a nilcharacter; each nilcharacter then has a symbol that is a higher order generalisation of the concept of a frequency (and the collection of all symbols forms a group that can be viewed as a higher order version of the Pontryagin dual of {{\bf Z}}). The theory of these objects is spread out in the literature across a number of papers; in particular, the theory of nilcharacters is mostly developed in Appendix E of this 116-page paper of Ben Green, Tamar Ziegler, and myself, and is furthermore written using nonstandard analysis and treating the more general setting of higher dimensional sequences. I therefore decided to rewrite some of that material in this blog post, in the simpler context of the qualitative asymptotic theory of one-dimensional nilsequences and nilcharacters rather than the quantitative single-scale theory that is needed for combinatorial applications (and which necessitated the use of nonstandard analysis in the previous paper).

For technical reasons (having to do with the non-trivial topological structure on nilmanifolds), it will be convenient to work with vector-valued sequences, that take values in a finite-dimensional complex vector space {{\bf C}^m} rather than in {{\bf C}}. By doing so, the space of sequences is now, technically, no longer a ring, as the operations of addition and multiplication on vector-valued sequences become ill-defined. However, we can still take complex conjugates of a sequence, and add sequences taking values in the same vector space {{\bf C}^m}, and for sequences taking values in different vector spaces {{\bf C}^m}, {{\bf C}^{m'}}, we may utilise the tensor product {\otimes: {\bf C}^m \times {\bf C}^{m'} \rightarrow {\bf C}^{mm'}}, which we will normalise by defining

\displaystyle (z_1,\dots,z_m) \otimes (w_1,\dots,w_{m'}) = (z_1 w_1, \dots, z_1 w_{m'}, \dots, z_m w_1, \dots, z_m w_{m'} ).

This product is associative and bilinear, and also commutative up to permutation of the indices. It also interacts well with the Hermitian norm

\displaystyle \| (z_1,\dots,z_m) \| := \sqrt{|z_1|^2 + \dots + |z_m|^2}

since we have {\|z \otimes w\| = \|z\| \|w\|}.

The traditional definition of a basic nilsequence (as defined for instance by Bergelson, Host, and Kra) is as follows:

Definition 1 (Basic nilsequence, first definition) A nilmanifold of step at most {d} is a quotient {G/\Gamma}, where {G} is a connected, simply connected nilpotent Lie group of step at most {d} (thus, all {d+1}-fold commutators vanish) and {\Gamma} is a discrete cocompact lattice in {G}. A basic nilsequence of degree at most {d} is a sequence of the form {n \mapsto F(g^n g_0 \Gamma)}, where {g_0 \Gamma \in G/\Gamma}, {g \in G}, and {F: G/\Gamma \rightarrow {\bf C}^m} is a continuous function.

For instance, it is not difficult using this definition to show that a sequence is a basic nilsequence of degree at most {1} if and only if it is quasiperiodic. The requirement that {G} be simply connected can be easily removed if desired by passing to a universal cover, but it is technically convenient to assume it (among other things, it allows for a well-defined logarithm map that obeys the Baker-Campbell-Hausdorff formula). When one wishes to perform a more quantitative analysis of nilsequences (particularly when working on a “single scale”, such as on a single long interval {\{ N+1, \dots, N+M\}}), it is common to impose additional regularity conditions on the function {F}, such as Lipschitz continuity or smoothness, but ordinary continuity will suffice for the qualitative discussion in this blog post.

Nowadays, and particularly when one needs to understand the “single-scale” equidistribution properties of nilsequences, it is more convenient (as is for instance done in this ICM paper of Green) to use an alternate definition of a nilsequence as follows.

Definition 2 Let {d \geq 0}. A filtered group of degree at most {d} is a group {G} together with a sequence {G_\bullet = (G_0,G_1,G_2,\dots)} of subgroups {G \geq G_0 \geq G_1 \geq \dots} with {G_{d+1}=\{\hbox{id}\}} and {[G_i,G_j] \subset G_{i+j}} for {i,j \geq 0}. A polynomial sequence {g: {\bf Z} \rightarrow G} into a filtered group is a function such that {\partial_{h_i} \dots \partial_{h_1} g(n) \in G_i} for all {i \geq 0} and {n,h_1,\dots,h_i \in{\bf Z}}, where {\partial_h g(n) := g(n+h) g(n)^{-1}} is the difference operator. A filtered nilmanifold of degree at most {d} is a quotient {G/\Gamma}, where {G} is a filtered group of degree at most {d} such that {G} and all of the subgroups {G_i} are connected, simply connected nilpotent Lie groups, and {\Gamma} is a discrete cocompact subgroup of {G} such that {\Gamma_i := \Gamma \cap G_i} is a discrete cocompact subgroup of {G_i}. A basic nilsequence of degree at most {d} is a sequence of the form {n \mapsto F(g(n))}, where {g: {\bf Z} \rightarrow G} is a polynomial sequence, {G/\Gamma} is a filtered nilmanifold of degree at most {d}, and {F: G \rightarrow {\bf C}^m} is a continuous function which is {\Gamma}-automorphic, in the sense that {F(g \gamma) = F(g)} for all {g \in G} and {\gamma \in \Gamma}.

One can easily identify a {\Gamma}-automorphic function on {G} with a function on {G/\Gamma}, but there are some (very minor) advantages to working on the group {G} instead of the quotient {G/\Gamma}, as it becomes slightly easier to modify the automorphy group {\Gamma} when needed. (But because the action of {\Gamma} on {G} is free, one can pass from {\Gamma}-automorphic functions on {G} to functions on {G/\Gamma} with very little difficulty.) The main reason to work with polynomial sequences {n \mapsto g(n)} rather than geometric progressions {n \mapsto g^n g_0 \Gamma} is that they form a group, a fact essentially established by Lazard and Leibman; see Corollary B.4 of this paper of Green, Ziegler, and myself for a proof in the filtered group setting.

It is easy to see that any sequence that is a basic nilsequence of degree at most {d} in the sense of the first definition, is also a basic nilsequence of degree at most {d} in the second definition, since a nilmanifold of degree at most {d} can be filtered using the lower central series, and any linear sequence {n \mapsto g^n g_0} will be a polynomial sequence with respect to that filtration. The converse implication is a little trickier, but still not too hard to show: see Appendix C of this paper of Ben Green, Tamar Ziegler, and myself. There are two key examples of basic nilsequences to keep in mind. The first are the polynomially quasiperiodic sequences

\displaystyle a(n) = F( P_1(n), \dots, P_k(n) ),

where {P_1,\dots,P_k: {\bf Z} \rightarrow {\bf R}} are polynomials of degree at most {d}, and {F: {\bf R}^k \rightarrow {\bf C}^m} is a {{\bf Z}^k}-automorphic (i.e., {{\bf Z}^k}-periodic) continuous function. The map {P: {\bf Z} \rightarrow {\bf R}^k} defined by {P(n) := (P_1(n),\dots,P_k(n))} is a polynomial map of degree at most {d}, if one filters {{\bf R}^k} by defining {({\bf R}^k)_i} to equal {{\bf R}^k} when {i \leq d}, and {\{0\}} for {i > d}. The torus {{\bf R}^k/{\bf Z}^k} then becomes a filtered nilmanifold of degree at most {d}, and {a(n)} is thus a basic nilsequence of degree at most {d} as per the second definition. It is also possible to explicitly describe {a(n)} as a basic nilsequence of degree at most {d} as per the first definition, for instance (in the {k=1} case) by taking {G} to be the space of upper triangular unipotent {(d+1) \times (d+1)} real matrices, and {\Gamma} the subgroup with integer coefficients; we leave the details to the interested reader.

The other key example is a sequence of the form

\displaystyle a(n) = F( \alpha n, \{ \alpha n \} \beta n )

where {\alpha,\beta} are real numbers, {\{ \alpha n \} = \alpha n - \lfloor \alpha n \rfloor} denotes the fractional part of {\alpha n}, and {F: {\bf R}^2 \rightarrow {\bf C}^m} is a {{\bf Z}^2}-automorphic continuous function that vanishes in a neighbourhood of {{\bf Z} \times {\bf R}}. To describe this as a nilsequence, we use the connected, simply connected, degree {2} nilpotent Heisenberg group

\displaystyle G := \begin{pmatrix} 1 & {\bf R} & {\bf R} \\ 0 & 1 & {\bf R} \\ 0 & 0 & 1 \end{pmatrix}

with the lower central series filtration {G_0=G_1=G}, {G_2= [G,G] = \begin{pmatrix} 1 &0 & {\bf R} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}}, and {G_i = \{ \mathrm{id} \}} for {i > 2}, {\Gamma} to be the discrete cocompact subgroup

\displaystyle \Gamma := \begin{pmatrix} 1 & {\bf Z} & {\bf Z} \\ 0 & 1 & {\bf Z} \\ 0 & 0 & 1 \end{pmatrix},

{g: {\bf Z} \rightarrow G} to be the polynomial sequence

\displaystyle g(n) := \begin{pmatrix} 1 & \beta n & \alpha \beta n^2 \\ 0 & 1 & \alpha n \\ 0 & 0 & 1 \end{pmatrix}

and {\tilde F: G \rightarrow {\bf C}^m} to be the {\Gamma}-automorphic function

\displaystyle \tilde F( \begin{pmatrix} 1 & x & y \\ 0 & 1 & z \\ 0 & 0 & 1 \end{pmatrix} ) = F( \{ z \}, y - \lfloor z \rfloor x );

one easily verifies that this function is indeed {\Gamma}-automorphic, and it is continuous thanks to the vanishing properties of {F}. Also we have {a(n) = \tilde F(g(n))}, so {a} is a basic nilsequence of degree at most {2}. One can concoct similar examples with {\{ \alpha n \} \beta n} replaced by other “bracket polynomials” of {n}; for instance

\displaystyle a(n) = F( \alpha n, \{ \alpha n - \frac{1}{2} \} \beta n )

will be a basic nilsequence if {F} now vanishes in a neighbourhood of {(\frac{1}{2}+{\bf Z}) \times {\bf R}} rather than {{\bf Z} \times {\bf R}}. See this paper of Bergelson and Leibman for more discussion of bracket polynomials (also known as generalised polynomials) and their relationship to nilsequences.
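
For concreteness, here is a small numpy sketch verifying the identity {a(n) = \tilde F(g(n))} in the Heisenberg example above; the cutoff {\phi} below is just one arbitrary choice of a continuous {{\bf Z}}-periodic function vanishing in a neighbourhood of the integers:

    import numpy as np

    alpha, beta = np.sqrt(2), np.sqrt(3)
    e = lambda theta: np.exp(2j * np.pi * theta)
    frac = lambda x: x - np.floor(x)

    # an arbitrary continuous Z^2-automorphic F vanishing near Z x R
    phi = lambda x: np.maximum(np.sin(np.pi * x) ** 2 - 0.25, 0.0)
    F = lambda x, y: phi(x) * e(y)

    def F_tilde_of_g(n):
        # entries of g(n): x = beta n, y = alpha beta n^2, z = alpha n
        x, y, z = beta * n, alpha * beta * n**2, alpha * n
        return F(frac(z), y - np.floor(z) * x)   # F-tilde applied to g(n)

    for n in range(1, 50):
        a_n = F(frac(alpha * n), frac(alpha * n) * beta * n)
        assert np.isclose(a_n, F_tilde_of_g(n))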

A nilsequence of degree at most {d} is defined to be a sequence that is the uniform limit of basic nilsequences of degree at most {d}. Thus for instance a sequence is a nilsequence of degree at most {1} if and only if it is almost periodic, while a sequence is a nilsequence of degree at most {0} if and only if it is constant. Such objects arise in higher order recurrence: for instance, if {h_0,\dots,h_d} are integers, {(X,\mu,T)} is a measure-preserving system, and {f_0,\dots,f_d \in L^\infty(X)}, then it was shown by Leibman that the sequence

\displaystyle n \mapsto \int_X f_0(T^{h_0 n} x) \dots f_d(T^{h_d n} x)\ d\mu(x)

is equal to a nilsequence of degree at most {d}, plus a null sequence. (The special case when the measure-preserving system was ergodic and {h_i = i} for {i=0,\dots,d} was previously established by Bergelson, Host, and Kra.) Nilsequences also arise in the inverse theory of the Gowers uniformity norms, as discussed for instance in this previous post.

It is easy to see that a sequence {a: {\bf Z} \rightarrow {\bf C}^m} is a basic nilsequence of degree at most {d} if and only if each of its {m} components are. The scalar basic nilsequences {a: {\bf Z} \rightarrow {\bf C}} of degree {d} are easily seen to form a {*}-algebra (that is to say, they are a complex vector space closed under pointwise multiplication and complex conjugation), which implies similarly that vector-valued basic nilsequences {a: {\bf Z} \rightarrow {\bf C}^m} of degree at most {d} form a complex vector space closed under complex conjugation for each {m}, and that the tensor product of any two basic nilsequences of degree at most {d} is another basic nilsequence of degree at most {d}. Similarly with “basic nilsequence” replaced by “nilsequence” throughout.

Now we turn to the notion of a nilcharacter, as defined in this paper of Ben Green, Tamar Ziegler, and myself:

Definition 3 (Nilcharacters) Let {d \geq 1}. A sub-nilcharacter of degree {d} is a basic nilsequence {\chi: n \mapsto F(g(n))} of degree at most {d}, such that {F} obeys the additional modulation property

\displaystyle F( g_d g ) = e( \xi \cdot g_d ) F(g) \ \ \ \ \ (1)

for all {g \in G} and {g_d \in G_d}, where {\xi: G_d \rightarrow {\bf R}} is a continuous homomorphism {g_d \mapsto \xi \cdot g_d}. (Note from (1) and {\Gamma}-automorphy that unless {F} vanishes identically, {\xi} must map {\Gamma_d} to {{\bf Z}}, thus without loss of generality one can view {\xi} as an element of the Pontryagin dual of the torus {G_d / \Gamma_d}.) If in addition one has {\|F(g)\|=1} for all {g \in G}, we call {\chi} a nilcharacter of degree {d}.

In the degree one case {d=1}, the only sub-nilcharacters are of the form {\chi(n) = c e(\alpha n)} for some vector {c \in {\bf C}^m} and {\alpha \in {\bf R}}, and this is a nilcharacter if {c} is a unit vector. Similarly, in higher degree, any sequence of the form {\chi(n) = c e(P(n))}, where {c \in {\bf C}^m} is a vector and {P: {\bf Z} \rightarrow {\bf R}} is a polynomial of degree at most {d}, is a sub-nilcharacter of degree {d}, and a nilcharacter if {c} is a unit vector. A nilsequence of degree at most {d-1} is automatically a sub-nilcharacter of degree {d}, and a nilcharacter if it is of magnitude {1}. A further example of a nilcharacter is provided by the two-dimensional sequence {\chi: {\bf Z} \rightarrow {\bf C}^2} defined by

\displaystyle \chi(n) := ( F_0( \alpha n ) e( \{ \alpha n \} \beta n ), F_{1/2}( \alpha n ) e( \{ \alpha n - \frac{1}{2} \} \beta n ) ) \ \ \ \ \ (2)

where {F_0, F_{1/2}: {\bf R} \rightarrow {\bf C}} are continuous, {{\bf Z}}-automorphic functions that vanish on a neighbourhood of {{\bf Z}} and {\frac{1}{2}+{\bf Z}} respectively, and which form a partition of unity in the sense that

\displaystyle |F_0(x)|^2 + |F_{1/2}(x)|^2 = 1

for all {x \in {\bf R}}. Note that one needs both {F_0} and {F_{1/2}} to be not identically zero in order for all these conditions to be satisfied; it turns out (for topological reasons) that there is no scalar nilcharacter that is “equivalent” to this nilcharacter in a sense to be defined shortly. In some literature, one works exclusively with sub-nilcharacters rather than nilcharacters; however, the former space contains zero-divisors, which is a little annoying technically. Nevertheless, both nilcharacters and sub-nilcharacters generate the same set of “symbols” as we shall see later.

We claim that every degree {d} sub-nilcharacter {f: {\bf Z} \rightarrow {\bf C}^m} can be expressed in the form {f = c \chi}, where {\chi: {\bf Z} \rightarrow {\bf C}^{m'}} is a degree {d} nilcharacter, and {c: {\bf C}^{m'} \rightarrow {\bf C}^m} is a linear transformation. Indeed, by scaling we may assume {f(n) = F(g(n))} where {|F| < 1} uniformly. Using partitions of unity, one can find further functions {F_1,\dots,F_m} also obeying (1) for the same character {\xi} such that {|F_1|^2 + \dots + |F_m|^2} is everywhere non-zero; by dividing out the {F_1,\dots,F_m} by the square root of this quantity, and then multiplying by {\sqrt{1-|F|^2}}, we may assume that

\displaystyle |F|^2 + |F_1|^2 + \dots + |F_m|^2 = 1,

and then

\displaystyle \chi(n) := (F(g(n)), F_1(g(n)), \dots, F_m(g(n)))

becomes a degree {d} nilcharacter that contains {f(n)} amongst its components, giving the claim.

As we shall show below, nilsequences can be approximated uniformly by linear combinations of nilcharacters, in much the same way that quasiperiodic or almost periodic sequences can be approximated uniformly by linear combinations of linear phases. In particular, nilcharacters can be used as “obstructions to uniformity” in the sense of the inverse theory of the Gowers uniformity norms.

The space of degree {d} nilcharacters forms a semigroup under tensor product, with the constant sequence {1} as the identity. One can upgrade this semigroup to an abelian group by quotienting nilcharacters out by equivalence:

Definition 4 Let {d \geq 1}. We say that two degree {d} nilcharacters {\chi: {\bf Z} \rightarrow {\bf C}^m}, {\chi': {\bf Z} \rightarrow {\bf C}^{m'}} are equivalent if {\chi \otimes \overline{\chi'}: {\bf Z} \rightarrow {\bf C}^{mm'}} is equal (as a sequence) to a basic nilsequence of degree at most {d-1}. (We will later show that this is indeed an equivalence relation.) The equivalence class {[\chi]_{\mathrm{Symb}^d({\bf Z})}} of such a nilcharacter will be called the symbol of that nilcharacter (in analogy to the symbol of a differential or pseudodifferential operator), and the collection of such symbols will be denoted {\mathrm{Symb}^d({\bf Z})}.

As we shall see below the fold, {\mathrm{Symb}^d({\bf Z})} has the structure of an abelian group, and enjoys some nice “symbol calculus” properties; also, one can view symbols as precisely describing the obstruction to equidistribution for nilsequences. For {d=1}, the group is isomorphic to the Pontryagin dual {\hat {\bf Z} = {\bf R}/{\bf Z}} of the integers, and {\mathrm{Symb}^d({\bf Z})} for {d > 1} should be viewed as higher order generalisations of this Pontryagin dual. In principle, this group can be explicitly described for all {d}, but the theory rapidly gets complicated as {d} increases (much as the classification of nilpotent Lie groups or Lie algebras of step {d} rapidly gets complicated even for medium-sized {d} such as {d=3} or {d=4}). We will give an explicit description of the {d=2} case here. There is however one nice (and non-trivial) feature of {\mathrm{Symb}^d({\bf Z})} for {d \geq 2} – it is not just an abelian group, but is in fact a vector space over the rationals {{\bf Q}}!


How many groups of order four are there? Technically, there are an enormous number, so much so, in fact, that the class of groups of order four is not even a set, but merely a proper class. This is because any four objects {a,b,c,d} can be turned into a group {\{a,b,c,d\}} by designating one of the four objects, say {a}, to be the group identity, and imposing a suitable multiplication table (and inversion law) on the four elements in a manner that obeys the usual group axioms. Since all sets are themselves objects, the class of four-element groups is thus at least as large as the class of all sets, which by Russell’s paradox is known not to itself be a set (assuming the usual ZFC axioms of set theory).

A much better question is to ask how many groups of order four there are up to isomorphism, counting each isomorphism class of groups exactly once. Now, as one learns in undergraduate group theory classes, the answer is just “two”: the cyclic group {C_4} and the Klein four-group {C_2 \times C_2}.

More generally, given a class of objects {X} and some equivalence relation {\sim} on {X} (which one should interpret as describing the property of two objects in {X} being “isomorphic”), one can consider the number {|X / \sim|} of objects in {X} “up to isomorphism”, which is simply the cardinality of the collection {X/\sim} of equivalence classes {[x]:=\{y\in X: x \sim y \}} of {X}. In the case where {X} is finite, one can express this cardinality by the formula

\displaystyle |X/\sim| = \sum_{x \in X} \frac{1}{|[x]|}, \ \ \ \ \ (1)

thus one counts elements in {X}, weighted by the reciprocal of the number of objects they are isomorphic to.

Example 1 Let {X} be the five-element set {\{-2,-1,0,1,2\}} of integers between {-2} and {2}. Let us say that two elements {x, y} of {X} are isomorphic if they have the same magnitude: {x \sim y \iff |x| = |y|}. Then the quotient space {X/\sim} consists of just three equivalence classes: {\{-2,2\} = [2] = [-2]}, {\{-1,1\} = [-1] = [1]}, and {\{0\} = [0]}. Thus there are three objects in {X} up to isomorphism, and the identity (1) is then just

\displaystyle 3 = \frac{1}{2} + \frac{1}{2} + 1 + \frac{1}{2} + \frac{1}{2}.

Thus, to count elements in {X} up to equivalence, the elements {-2,-1,1,2} are given a weight of {1/2} because they are each isomorphic to two elements in {X}, while the element {0} is given a weight of {1} because it is isomorphic to just one element in {X} (namely, itself).
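
The weighted count (1) is easy to evaluate mechanically; a tiny sketch in Python, using exact rational arithmetic:

    from fractions import Fraction

    X = [-2, -1, 0, 1, 2]
    isomorphic = lambda x, y: abs(x) == abs(y)

    # formula (1): weight each element by 1 / (size of its equivalence class)
    count = sum(Fraction(1, sum(1 for y in X if isomorphic(x, y))) for x in X)
    print(count)   # 3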

Given a finite non-empty set {X}, there is also a natural probability distribution on {X}, namely the uniform distribution, according to which a random variable {\mathbf{x} \in X} is set equal to any given element {x} of {X} with probability {\frac{1}{|X|}}:

\displaystyle {\bf P}( \mathbf{x} = x ) = \frac{1}{|X|}.

Given a notion {\sim} of isomorphism on {X}, one can then define the random equivalence class {[\mathbf{x}] \in X/\sim} that the random element {\mathbf{x}} belongs to. But if the isomorphism classes are unequal in size, we now encounter a biasing effect: even if {\mathbf{x}} was drawn uniformly from {X}, the equivalence class {[\mathbf{x}]} need not be uniformly distributed in {X/\sim}. For instance, in the above example, if {\mathbf{x}} was drawn uniformly from {\{-2,-1,0,1,2\}}, then the equivalence class {[\mathbf{x}]} will not be uniformly distributed in the three-element space {X/\sim}, because it has a {2/5} probability of being equal to the class {\{-2,2\}} or to the class {\{-1,1\}}, and only a {1/5} probability of being equal to the class {\{0\}}.

However, it is possible to remove this bias by changing the weighting in (1), and thus changing the notion of what cardinality means. To do this, we generalise the previous situation. Instead of considering sets {X} with an equivalence relation {\sim} to capture the notion of isomorphism, we instead consider groupoids, which are sets {X} in which there are allowed to be multiple isomorphisms between elements in {X} (and in particular, there are allowed to be multiple automorphisms from an element to itself). More precisely:

Definition 2 A groupoid is a set (or proper class) {X}, together with a (possibly empty) collection {\mathrm{Iso}(x \rightarrow y)} of “isomorphisms” between any pair {x,y} of elements of {X}, and a composition map {f, g \mapsto g \circ f} from isomorphisms {f \in \mathrm{Iso}(x \rightarrow y)}, {g \in \mathrm{Iso}(y \rightarrow z)} to isomorphisms in {\mathrm{Iso}(x \rightarrow z)} for every {x,y,z \in X}, obeying the following group-like axioms:

  • (Identity) For every {x \in X}, there is an identity isomorphism {\mathrm{id}_x \in \mathrm{Iso}(x \rightarrow x)}, such that {f \circ \mathrm{id}_x = \mathrm{id}_y \circ f = f} for all {f \in \mathrm{Iso}(x \rightarrow y)} and {x,y \in X}.
  • (Associativity) If {f \in \mathrm{Iso}(x \rightarrow y)}, {g \in \mathrm{Iso}(y \rightarrow z)}, and {h \in \mathrm{Iso}(z \rightarrow w)} for some {x,y,z,w \in X}, then {h \circ (g \circ f) = (h \circ g) \circ f}.
  • (Inverse) If {f \in \mathrm{Iso}(x \rightarrow y)} for some {x,y \in X}, then there exists an inverse isomorphism {f^{-1} \in \mathrm{Iso}(y \rightarrow x)} such that {f \circ f^{-1} = \mathrm{id}_y} and {f^{-1} \circ f = \mathrm{id}_x}.

We say that two elements {x,y} of a groupoid are isomorphic, and write {x \sim y}, if there is at least one isomorphism from {x} to {y}.

Example 3 Any category gives a groupoid by taking {X} to be the set (or class) of objects, and {\mathrm{Iso}(x \rightarrow y)} to be the collection of invertible morphisms from {x} to {y}. For instance, in the category {\mathbf{Set}} of sets, {\mathrm{Iso}(x \rightarrow y)} would be the collection of bijections from {x} to {y}; in the category {\mathbf{Vec}/k} of linear vector spaces over some given base field {k}, {\mathrm{Iso}(x \rightarrow y)} would be the collection of invertible linear transformations from {x} to {y}; and so forth.

Every set {X} equipped with an equivalence relation {\sim} can be turned into a groupoid by assigning precisely one isomorphism {\iota_{x \rightarrow y}} from {x} to {y} for any pair {x,y \in X} with {x \sim y}, and no isomorphisms from {x} to {y} when {x \not \sim y}, with the groupoid operations of identity, composition, and inverse defined in the only way possible consistent with the axioms. We will call this the simply connected groupoid associated with this equivalence relation. For instance, with {X = \{-2,-1,0,1,2\}} as above, if we turn {X} into a simply connected groupoid, there will be precisely one isomorphism from {2} to {-2}, and also precisely one isomorphism from {2} to {2}, but no isomorphisms from {2} to {-1}, {0}, or {1}.

However, one can also form multiply-connected groupoids in which there can be multiple isomorphisms from one element of {X} to another. For instance, one can view {X = \{-2,-1,0,1,2\}} as a space that is acted on by multiplication by the two-element group {\{-1,+1\}}. This gives rise to two types of isomorphisms, an identity isomorphism {(+1)_x} from {x} to {x} for each {x \in X}, and a negation isomorphism {(-1)_x} from {x} to {-x} for each {x \in X}; in particular, there are two automorphisms of {0} (i.e., isomorphisms from {0} to itself), namely {(+1)_0} and {(-1)_0}, whereas the other four elements of {X} only have a single automorphism (the identity isomorphism). One defines composition, identity, and inverse in this groupoid in the obvious fashion (using the group law of the two-element group {\{-1,+1\}}); for instance, we have {(-1)_{-2} \circ (-1)_2 = (+1)_2}.

For a finite multiply-connected groupoid, it turns out that the natural notion of “cardinality” (or as I prefer to call it, “cardinality up to isomorphism”) is given by the variant

\displaystyle \sum_{x \in X} \frac{1}{|\{ f: f \in \mathrm{Iso}(x \rightarrow y) \hbox{ for some } y\}|}

of (1). That is to say, in the multiply connected case, the denominator is no longer the number of objects {y} isomorphic to {x}, but rather the number of isomorphisms from {x} to other objects {y}. Grouping together all summands coming from a single equivalence class {[x]} in {X/\sim}, we can also write this expression as

\displaystyle \sum_{[x] \in X/\sim} \frac{1}{|\mathrm{Aut}(x)|} \ \ \ \ \ (2)

where {\mathrm{Aut}(x) := \mathrm{Iso}(x \rightarrow x)} is the automorphism group of {x}, that is to say the group of isomorphisms from {x} to itself. (Note that if {x,x'} belong to the same equivalence class {[x]}, then the two groups {\mathrm{Aut}(x)} and {\mathrm{Aut}(x')} will be isomorphic and thus have the same cardinality, and so the above expression is well-defined.)

For instance, if we take {X} to be the simply connected groupoid on {\{-2,-1,0,1,2\}}, then the number of elements of {X} up to isomorphism is

\displaystyle \frac{1}{2} + \frac{1}{2} + 1 + \frac{1}{2} + \frac{1}{2} = 1 + 1 + 1 = 3

exactly as before. If however we take the multiply connected groupoid on {\{-2,-1,0,1,2\}}, in which {0} has two automorphisms, the number of elements of {X} up to isomorphism is now the smaller quantity

\displaystyle \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} = 1 + \frac{1}{2} + 1 = \frac{5}{2};

the equivalence class {[0]} is now counted with weight {1/2} rather than {1} due to the two automorphisms on {0}. Geometrically, one can think of this groupoid as being formed by taking the five-element set {\{-2,-1,0,1,2\}}, and “folding it in half” around the fixed point {0}, giving rise to two “full” quotient points {[1], [2]} and one “half” point {[0]}. More generally, given a finite group {G} acting on a finite set {X}, and forming the associated multiply connected groupoid, the cardinality up to isomorphism of this groupoid will be {|X|/|G|}, since each element {x} of {X} will have {|G|} isomorphisms on it (whether they be to the same element {x}, or to other elements of {X}).
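This computation is easy to mechanise. The following Python sketch (the function name and interface are my own invention, purely for illustration) evaluates (2) for the action groupoid of a finite group {G} on a finite set {X} by summing {1/|\mathrm{Aut}(x)|} over orbit representatives, using the fact that {\mathrm{Aut}(x)} is just the stabiliser of {x}:

```python
# A minimal sketch (hypothetical helper, not from the post): the cardinality
# up to isomorphism of the action groupoid of a finite group G on a finite
# set X, via formula (2): sum 1/|Aut(x)| over one representative per orbit.
from fractions import Fraction

def groupoid_cardinality(X, G, act):
    total, seen = Fraction(0), set()
    for x in X:
        if x in seen:
            continue
        orbit = {act(g, x) for g in G}   # the equivalence class [x]
        seen |= orbit
        aut = sum(1 for g in G if act(g, x) == x)   # |Aut(x)| = |Stab(x)|
        total += Fraction(1, aut)
    return total

X, G = [-2, -1, 0, 1, 2], [1, -1]
print(groupoid_cardinality(X, G, lambda g, x: g * x))   # 5/2, i.e. |X|/|G|
```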

The definition (2) can also make sense for some infinite groupoids; to my knowledge this was first explicitly done in this paper of Baez and Dolan. Consider for instance the category {\mathbf{FinSet}} of finite sets, with isomorphisms given by bijections as in Example 3. Every finite set is isomorphic to {\{1,\dots,n\}} for some natural number {n}, so the equivalence classes of {\mathbf{FinSet}} may be indexed by the natural numbers. The automorphism group {S_n} of {\{1,\dots,n\}} has order {n!}, so the cardinality of {\mathbf{FinSet}} up to isomorphism is

\displaystyle \sum_{n=0}^\infty \frac{1}{n!} = e.

(This fact is sometimes loosely stated as “the number of finite sets is {e}”, but I view this statement as somewhat misleading if the qualifier “up to isomorphism” is not added.) Similarly, when one allows for multiple isomorphisms from a group to itself, the number of groups of order four up to isomorphism is now

\displaystyle \frac{1}{2} + \frac{1}{6} = \frac{2}{3}

because the cyclic group {C_4} has two automorphisms, whereas the Klein four-group {C_2 \times C_2} has six.

In the case that the cardinality of a groupoid {X} up to isomorphism is finite and non-zero, one can now define the notion of a random isomorphism class {[\mathbf{x}]} in {X/\sim} drawn “uniformly up to isomorphism”, by requiring the probability of attaining any given isomorphism class {[x]} to be

\displaystyle {\mathbf P}([\mathbf{x}] = [x]) = \frac{1 / |\mathrm{Aut}(x)|}{\sum_{[y] \in X/\sim} 1/|\mathrm{Aut}(y)|},

thus the probability of being isomorphic to a given element {x} will be inversely proportional to the number of automorphisms that {x} has. For instance, if we take {X} to be the set {\{-2,-1,0,1,2\}} with the simply connected groupoid, {[\mathbf{x}]} will be drawn uniformly from the three available equivalence classes {[0], [1], [2]}, with a {1/3} probability of attaining each; but if instead one uses the multiply connected groupoid coming from the action of {\{-1,+1\}}, and draws {[\mathbf{x}]} uniformly up to isomorphism, then {[1]} and {[2]} will now be selected with probability {2/5} each, and {[0]} will be selected with probability {1/5}. Thus this distribution has accounted for the bias mentioned previously: if a finite group {G} acts on a finite space {X}, and {\mathbf{x}} is drawn uniformly from {X}, then {[\mathbf{x}]} will still be drawn uniformly up to isomorphism from {X/G}, if we use the multiply connected groupoid coming from the {G} action, rather than the simply connected groupoid coming from just the {G}-orbit structure on {X}.
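As a quick sketch of this distribution in the running example (the automorphism counts below are read off from the discussion above):

```python
# Sketch: the "uniform up to isomorphism" distribution on X/~ for the
# multiply connected groupoid on {-2,...,2}; each class is weighted by
# the reciprocal of the automorphism count of a representative.
from fractions import Fraction

aut = {'[0]': 2, '[1]': 1, '[2]': 1}   # |Aut(x)| for a representative of each class
weights = {c: Fraction(1, a) for c, a in aut.items()}
total = sum(weights.values())           # the cardinality 5/2 up to isomorphism
print({c: w / total for c, w in weights.items()})
# {'[0]': 1/5, '[1]': 2/5, '[2]': 2/5}, matching uniform draws from X itself
```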

Using the groupoid of finite sets, we see that a finite set chosen uniformly up to isomorphism will have a cardinality that is distributed according to the Poisson distribution of parameter {1}, that is to say it will be of cardinality {n} with probability {\frac{e^{-1}}{n!}}.

One important source of groupoids is the fundamental groupoid {\pi_1(M)} of a manifold {M} (one can also consider more general topological spaces than manifolds, but for simplicity we will restrict this discussion to the manifold case), in which the underlying space is simply {M}, and the isomorphisms from {x} to {y} are the equivalence classes of paths from {x} to {y} up to homotopy; in particular, the automorphism group of any given point is just the fundamental group of {M} at that base point. The equivalence class {[x]} of a point in {M} is then the connected component of {x} in {M}. The cardinality up to isomorphism of the fundamental groupoid is then

\displaystyle \sum_{M' \in \pi_0(M)} \frac{1}{|\pi_1(M')|}

where {\pi_0(M)} is the collection of connected components {M'} of {M}, and {|\pi_1(M')|} is the order of the fundamental group of {M'}. Thus, simply connected components of {M} count for a full unit of cardinality, whereas multiply connected components (which can be viewed as quotients of their simply connected cover by their fundamental group) will count for a fractional unit of cardinality, inversely to the order of their fundamental group.

This notion of cardinality up to isomorphism of a groupoid behaves well with respect to various basic notions. For instance, suppose one has an {n}-fold covering map {\pi: X \rightarrow Y} of one finite groupoid {Y} by another {X}. This means that {\pi} is a functor that is surjective, with all preimages of cardinality {n}, with the property that given any pair {y,y'} in the base space {Y} and any {x} in the preimage {\pi^{-1}(\{y\})} of {y}, every isomorphism {f \in \mathrm{Iso}(y \rightarrow y')} has a unique lift {\tilde f \in \mathrm{Iso}(x \rightarrow x')} from the given initial point {x} (for some {x'} in the preimage of {y'}). Then one can check that the cardinality up to isomorphism of {X} is {n} times the cardinality up to isomorphism of {Y}, which fits well with the geometric picture of {X} as the {n}-fold cover of {Y}. (For instance, if one covers a manifold {M} with finite fundamental group by its universal cover, this is a {|\pi_1(M)|}-fold cover, the base has cardinality {1/|\pi_1(M)|} up to isomorphism, and the universal cover has cardinality one up to isomorphism.) Related to this, if one draws an equivalence class {[\mathbf{x}]} of {X} uniformly up to isomorphism, then {\pi([\mathbf{x}])} will be an equivalence class of {Y} drawn uniformly up to isomorphism also.

Indeed, one can show that this notion of cardinality up to isomorphism for groupoids is uniquely determined by a small number of axioms such as these (similar to the axioms that determine Euler characteristic); see this blog post of Qiaochu Yuan for details.

The probability distributions on isomorphism classes described by the above recipe seem to arise naturally in many applications. For instance, if one draws a profinite abelian group up to isomorphism at random in this fashion (so that each isomorphism class {[G]} of a profinite abelian group {G} occurs with probability inversely proportional to the number of automorphisms of this group), then the resulting distribution is known as the Cohen-Lenstra distribution, and seems to emerge as the natural asymptotic distribution of many randomly generated profinite abelian groups in number theory and combinatorics, such as the class groups of random quadratic fields; see this previous blog post for more discussion. For a simple combinatorial example, the set of fixed points of a random permutation on {n} elements will have a cardinality that converges in distribution to the Poisson distribution of rate {1} (as discussed in this previous post), thus we see that the fixed points of a large random permutation asymptotically are distributed uniformly up to isomorphism. I’ve been told that this notion of cardinality up to isomorphism is also particularly compatible with stacks (which are a good framework to describe such objects as moduli spaces of algebraic varieties up to isomorphism), though I am not sufficiently acquainted with this theory to say much more than this.

Given a function {f: {\bf N} \rightarrow \{-1,+1\}} on the natural numbers taking values in {+1, -1}, one can invoke the Furstenberg correspondence principle to locate a measure preserving system {T \circlearrowright (X, \mu)} – a probability space {(X,\mu)} together with a measure-preserving shift {T: X \rightarrow X} (or equivalently, a measure-preserving {{\bf Z}}-action on {(X,\mu)}) – together with a measurable function (or “observable”) {F: X \rightarrow \{-1,+1\}} that has essentially the same statistics as {f} in the sense that

\displaystyle \liminf_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N f(n+h_1) \dots f(n+h_k)

\displaystyle \leq \int_X F(T^{h_1} x) \dots F(T^{h_k} x)\ d\mu(x)

\displaystyle \leq \limsup_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N f(n+h_1) \dots f(n+h_k)

for any integers {h_1,\dots,h_k}. In particular, one has

\displaystyle \int_X F(T^{h_1} x) \dots F(T^{h_k} x)\ d\mu(x) = \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N f(n+h_1) \dots f(n+h_k) \ \ \ \ \ (1)

 

whenever the limit on the right-hand side exists. We will refer to the system {T \circlearrowright (X,\mu)} together with the designated function {F} as a Furstenberg limit of the sequence {f}. These Furstenberg limits capture some, but not all, of the asymptotic behaviour of {f}; roughly speaking, they control the typical “local” behaviour of {f}, involving correlations such as {\frac{1}{N} \sum_{n=1}^N f(n+h_1) \dots f(n+h_k)} in the regime where {h_1,\dots,h_k} are much smaller than {N}. However, the control on error terms here is usually only qualitative at best, and one usually does not obtain non-trivial control on correlations in which the {h_1,\dots,h_k} are allowed to grow at some significant rate with {N} (e.g. like some power {N^\theta} of {N}).

The correspondence principle is discussed in these previous blog posts. One way to establish the principle is by introducing a Banach limit {p\!-\!\lim: \ell^\infty({\bf N}) \rightarrow {\bf R}} that extends the usual limit functional on the subspace of {\ell^\infty({\bf N})} consisting of convergent sequences while still having operator norm one. Such functionals cannot be constructed explicitly, but can be proven to exist (non-constructively and non-uniquely) using the Hahn-Banach theorem; one can also use a non-principal ultrafilter here if desired. One can then seek to construct a system {T \circlearrowright (X,\mu)} and a measurable function {F: X \rightarrow \{-1,+1\}} for which one has the statistics

\displaystyle \int_X F(T^{h_1} x) \dots F(T^{h_k} x)\ d\mu(x) = p\!-\!\lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N f(n+h_1) \dots f(n+h_k) \ \ \ \ \ (2)

 

for all {h_1,\dots,h_k}. One can explicitly construct such a system as follows. One can take {X} to be the Cantor space {\{-1,+1\}^{\bf Z}} with the product {\sigma}-algebra and the shift

\displaystyle T ( (x_n)_{n \in {\bf Z}} ) := (x_{n+1})_{n \in {\bf Z}}

with the function {F: X \rightarrow \{-1,+1\}} being the coordinate function at zero:

\displaystyle F( (x_n)_{n \in {\bf Z}} ) := x_0

(so in particular {F( T^h (x_n)_{n \in {\bf Z}} ) = x_h} for any {h \in {\bf Z}}). The only thing remaining is to construct the invariant measure {\mu}. In order to be consistent with (2), one must have

\displaystyle \mu( \{ (x_n)_{n \in {\bf Z}}: x_{h_j} = \epsilon_j \forall 1 \leq j \leq k \} )

\displaystyle = p\!-\!\lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N 1_{f(n+h_1)=\epsilon_1} \dots 1_{f(n+h_k)=\epsilon_k}

for any distinct integers {h_1,\dots,h_k} and signs {\epsilon_1,\dots,\epsilon_k}. One can check that this defines a premeasure on the Boolean algebra of {\{-1,+1\}^{\bf Z}} defined by cylinder sets, and the existence of {\mu} then follows from the Hahn-Kolmogorov extension theorem (or the closely related Kolmogorov extension theorem). One can then check that the correspondence (2) holds, and that {\mu} is translation-invariant; the latter comes from the translation invariance of the (Banach-)Cesàro averaging operation {f \mapsto p\!-\!\lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N f(n)}. A variant of this construction shows that the Furstenberg limit is unique up to equivalence if and only if all the limits appearing in (1) actually exist.
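Since Banach limits cannot be exhibited explicitly, one cannot literally compute {\mu}; but a finite-{N} caricature of the construction is easy to set up. The following sketch (my own, and purely illustrative; it assumes sympy is available for factorisation) estimates the measure of a cylinder set by the corresponding Cesàro frequency, taking the Liouville function as the sequence {f}:

```python
# Finite-N caricature of the cylinder premeasure (a sketch: the Banach limit
# is replaced by a plain Cesaro average at a large but finite N).
from sympy import factorint

def lam(n):
    # Liouville function: (-1)^(number of prime factors with multiplicity)
    return (-1) ** sum(factorint(n).values())

def cylinder_frequency(f, constraints, N):
    """(1/N) #{ n <= N : f(n+h) = eps for every (h, eps) in constraints }."""
    return sum(all(f(n + h) == eps for h, eps in constraints)
               for n in range(1, N + 1)) / N

print(cylinder_frequency(lam, [(0, +1), (1, -1)], 10000))  # ~ mu(x_0=+1, x_1=-1)
```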

One can obtain a slightly tighter correspondence by using a smoother average than the Cesàro average. For instance, one can use the logarithmic Cesàro averages {\lim_{N \rightarrow \infty} \frac{1}{\log N}\sum_{n=1}^N \frac{f(n)}{n}} in place of the Cesàro average {\lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N f(n)}, thus one replaces (2) by

\displaystyle \int_X F(T^{h_1} x) \dots F(T^{h_k} x)\ d\mu(x)

\displaystyle = p\!-\!\lim_{N \rightarrow \infty} \frac{1}{\log N} \sum_{n=1}^N \frac{f(n+h_1) \dots f(n+h_k)}{n}.

Whenever the Cesàro average of a bounded sequence {f: {\bf N} \rightarrow {\bf R}} exists, the logarithmic Cesàro average also exists and is equal to it. Thus, a Furstenberg limit constructed using logarithmic Banach-Cesàro averaging still obeys (1) for all {h_1,\dots,h_k} when the right-hand side limit exists, but also obeys the more general assertion

\displaystyle \int_X F(T^{h_1} x) \dots F(T^{h_k} x)\ d\mu(x)

\displaystyle = \lim_{N \rightarrow \infty} \frac{1}{\log N} \sum_{n=1}^N \frac{f(n+h_1) \dots f(n+h_k)}{n}

whenever the limit of the right-hand side exists.
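For a concrete feel for the difference between the two averages, here is a small numerical comparison (a sketch only; at these heights neither average is close to its limit, and the code again assumes sympy):

```python
# Cesaro vs logarithmic Cesaro averages of the Liouville function (sketch).
from math import log
from sympy import factorint

def lam(n):
    return (-1) ** sum(factorint(n).values())

N = 20000
vals = [lam(n) for n in range(1, N + 1)]
cesaro = sum(vals) / N
log_avg = sum(v / n for n, v in enumerate(vals, start=1)) / log(N)
print(cesaro, log_avg)   # both small, illustrating the cancellation in lambda
```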

In a recent paper of Frantzikinakis, the Furstenberg limits of the Liouville function {\lambda} (with logarithmic averaging) were studied. Some (but not all) of the known facts and conjectures about the Liouville function can be interpreted in the Furstenberg limit. For instance, in a recent breakthrough result of Matomaki and Radziwill (discussed previously here), it was shown that the Liouville function exhibited cancellation on short intervals in the sense that

\displaystyle \lim_{H \rightarrow \infty} \limsup_{X \rightarrow \infty} \frac{1}{X} \int_X^{2X} \frac{1}{H} |\sum_{x \leq n \leq x+H} \lambda(n)|\ dx = 0.

In terms of Furstenberg limits of the Liouville function, this assertion is equivalent to the assertion that

\displaystyle \lim_{H \rightarrow \infty} \int_X |\frac{1}{H} \sum_{h=1}^H F(T^h x)|\ d\mu(x) = 0

for all Furstenberg limits {T \circlearrowright (X,\mu), F} of Liouville (including those without logarithmic averaging). Invoking the mean ergodic theorem (discussed in this previous post), this assertion is in turn equivalent to the observable {F} that corresponds to the Liouville function being orthogonal to the invariant factor {L^\infty(X,\mu)^{\bf Z} = \{ g \in L^\infty(X,\mu): g \circ T = g \}} of {X}; equivalently, the first Gowers-Host-Kra seminorm {\|F\|_{U^1(X)}} of {F} (as defined for instance in this previous post) vanishes. The Chowla conjecture, which asserts that

\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \lambda(n+h_1) \dots \lambda(n+h_k) = 0

for all distinct integers {h_1,\dots,h_k}, is equivalent to the assertion that all the Furstenberg limits of Liouville are equivalent to the Bernoulli system ({\{-1,+1\}^{\bf Z}} with the product measure arising from the uniform distribution on {\{-1,+1\}}, with the shift {T} and observable {F} as before). Similarly, the logarithmically averaged Chowla conjecture

\displaystyle \lim_{N \rightarrow \infty} \frac{1}{\log N} \sum_{n=1}^N \frac{\lambda(n+h_1) \dots \lambda(n+h_k)}{n} = 0

is equivalent to the assertion that all the Furstenberg limits of Liouville with logarithmic averaging are equivalent to the Bernoulli system. Recently, I was able to prove the two-point version

\displaystyle \lim_{N \rightarrow \infty} \frac{1}{\log N} \sum_{n=1}^N \frac{\lambda(n) \lambda(n+h)}{n} = 0 \ \ \ \ \ (3)

 

of the logarithmically averaged Chowla conjecture, for any non-zero integer {h}; this is equivalent to the perfect strong mixing property

\displaystyle \int_X F(x) F(T^h x)\ d\mu(x) = 0

for any Furstenberg limit of Liouville with logarithmic averaging, and any {h \neq 0}.
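One can probe (3) numerically; the sketch below (again leaning on sympy, and far too short-ranged to constitute evidence for anything) computes the logarithmically averaged two-point correlation at a modest height:

```python
# The logarithmically averaged two-point Liouville correlation from (3) (sketch).
from math import log
from sympy import factorint

def lam(n):
    return (-1) ** sum(factorint(n).values())

N, h = 20000, 1
avg = sum(lam(n) * lam(n + h) / n for n in range(1, N + 1)) / log(N)
print(avg)   # tends to 0 as N -> infinity, by (3)
```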

The situation is more delicate with regards to the Sarnak conjecture, which is equivalent to the assertion that

\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \lambda(n) f(n) = 0

for any zero-entropy sequence {f: {\bf N} \rightarrow {\bf R}} (see this previous blog post for more discussion). Morally speaking, this conjecture should be equivalent to the assertion that any Furstenberg limit of Liouville is disjoint from any zero entropy system, but I was not able to formally establish an implication in either direction due to some technical issues regarding the fact that the Furstenberg limit does not directly control long-range correlations, only short-range ones. (There are however ergodic theoretic interpretations of the Sarnak conjecture that involve the notion of generic points; see this paper of El Abdalaoui, Lemańczyk, and de la Rue.) But the situation is currently better with the logarithmically averaged Sarnak conjecture

\displaystyle \lim_{N \rightarrow \infty} \frac{1}{\log N} \sum_{n=1}^N \frac{\lambda(n) f(n)}{n} = 0,

as I was able to show that this conjecture was equivalent to the logarithmically averaged Chowla conjecture, and hence to all Furstenberg limits of Liouville with logarithmic averaging being Bernoulli; I also showed the conjecture was equivalent to local Gowers uniformity of the Liouville function, which is in turn equivalent to the function {F} having all Gowers-Host-Kra seminorms vanishing in every Furstenberg limit with logarithmic averaging. In this recent paper of Frantzikinakis, this analysis was taken further, showing that the logarithmically averaged Chowla and Sarnak conjectures were in fact equivalent to the much milder seeming assertion that all Furstenberg limits with logarithmic averaging were ergodic.

Actually, the logarithmically averaged Furstenberg limits have more structure than just a {{\bf Z}}-action on a measure preserving system {(X,\mu)} with a single observable {F}. Let {Aff_+({\bf Z})} denote the semigroup of affine maps {n \mapsto an+b} on the integers with {a,b \in {\bf Z}} and {a} positive. Also, let {\hat {\bf Z}} denote the profinite integers (the inverse limit of the cyclic groups {{\bf Z}/q{\bf Z}}). Observe that {Aff_+({\bf Z})} acts on {\hat {\bf Z}} by taking the inverse limit of the obvious actions of {Aff_+({\bf Z})} on {{\bf Z}/q{\bf Z}}.

Proposition 1 (Enriched logarithmically averaged Furstenberg limit of Liouville) Let {p\!-\!\lim} be a Banach limit. Then there exists a probability space {(X,\mu)} with an action {\phi \mapsto T^\phi} of the affine semigroup {Aff_+({\bf Z})}, as well as measurable functions {F: X \rightarrow \{-1,+1\}} and {M: X \rightarrow \hat {\bf Z}}, with the following properties:

  • (i) (Affine Furstenberg limit) For any {\phi_1,\dots,\phi_k \in Aff_+({\bf Z})}, and any congruence class {a\ (q)}, one has

    \displaystyle p\!-\!\lim_{N \rightarrow \infty} \frac{1}{\log N} \sum_{n=1}^N \frac{\lambda(\phi_1(n)) \dots \lambda(\phi_k(n)) 1_{n = a\ (q)}}{n}

    \displaystyle = \int_X F( T^{\phi_1}(x) ) \dots F( T^{\phi_k}(x) ) 1_{M(x) = a\ (q)}\ d\mu(x).

  • (ii) (Equivariance of {M}) For any {\phi \in Aff_+({\bf Z})}, one has

    \displaystyle M( T^\phi(x) ) = \phi( M(x) )

    for {\mu}-almost every {x \in X}.

  • (iii) (Multiplicativity at fixed primes) For any prime {p}, one has

    \displaystyle F( T^{p\cdot} x ) = - F(x)

    for {\mu}-almost every {x \in X}, where {p \cdot \in Aff_+({\bf Z})} is the dilation map {n \mapsto pn}.

  • (iv) (Measure pushforward) If {\phi \in Aff_+({\bf Z})} is of the form {\phi(n) = an+b} and {S_\phi \subset X} is the set {S_\phi = \{ x \in X: M(x) \in \phi(\hat {\bf Z}) \}}, then the pushforward {T^\phi_* \mu} of {\mu} by {T^\phi} is equal to {a \mu\downharpoonright_{S_\phi}}, that is to say one has

    \displaystyle \mu( (T^\phi)^{-1}(E) ) = a \mu( E \cap S_\phi )

    for every measurable {E \subset X}.

Note that {{\bf Z}} can be viewed as the subgroup of {Aff_+({\bf Z})} consisting of the translations {n \mapsto n + b}. If one only keeps the {{\bf Z}}-portion of the {Aff_+({\bf Z})} action and forgets the rest (as well as the function {M}) then the action becomes measure-preserving, and we recover an ordinary Furstenberg limit with logarithmic averaging. However, the additional structure here can be quite useful; for instance, one can transfer the proof of (3) to this setting, which we sketch below the fold, after proving the proposition.

The observable {M}, roughly speaking, means that points {x} in the Furstenberg limit {X} constructed by this proposition are still “virtual integers” in the sense that one can meaningfully compute the residue class of {x} modulo any natural number modulus {q}, by first applying {M} and then reducing mod {q}. The action of {Aff_+({\bf Z})} means that one can also meaningfully multiply {x} by any natural number, and translate it by any integer. As with other applications of the correspondence principle, the main advantage of moving to this more “virtual” setting is that one now acquires a probability measure {\mu}, so that the tools of ergodic theory can be readily applied.


Given a random variable {X} that takes on only finitely many values, we can define its Shannon entropy by the formula

\displaystyle  H(X) := \sum_x \mathbf{P}(X=x) \log \frac{1}{\mathbf{P}(X=x)}

with the convention that {0 \log \frac{1}{0} = 0}. (In some texts, one uses the logarithm to base {2} rather than the natural logarithm, but the choice of base will not be relevant for this discussion.) This is clearly a nonnegative quantity. Given two random variables {X,Y} taking on finitely many values, the joint variable {(X,Y)} is also a random variable taking on finitely many values, and also has an entropy {H(X,Y)}. It obeys the Shannon inequalities

\displaystyle  H(X), H(Y) \leq H(X,Y) \leq H(X) + H(Y)

so we can define some further nonnegative quantities, the mutual information

\displaystyle  I(X:Y) := H(X) + H(Y) - H(X,Y)

and the conditional entropies

\displaystyle  H(X|Y) := H(X,Y) - H(Y); \quad H(Y|X) := H(X,Y) - H(X).

More generally, given three random variables {X,Y,Z}, one can define the conditional mutual information

\displaystyle  I(X:Y|Z) := H(X|Z) + H(Y|Z) - H(X,Y|Z)

and the last of the Shannon entropy inequalities asserts that this quantity is also non-negative.

The mutual information {I(X:Y)} is a measure of the extent to which {X} and {Y} fail to be independent; indeed, it is not difficult to show that {I(X:Y)} vanishes if and only if {X} and {Y} are independent. Similarly, {I(X:Y|Z)} vanishes if and only if {X} and {Y} are conditionally independent relative to {Z}. At the other extreme, {H(X|Y)} is a measure of the extent to which {X} fails to be determined by {Y}; indeed, it is not difficult to show that {H(X|Y)=0} if and only if {X} is determined by {Y} in the sense that there is a deterministic function {f} such that {X = f(Y)}. In a related vein, if {X} and {X'} are equivalent in the sense that there are deterministic functional relationships {X = f(X')}, {X' = g(X)} between the two variables, then {X} is interchangeable with {X'} for the purposes of computing the above quantities, thus for instance {H(X) = H(X')}, {H(X,Y) = H(X',Y)}, {I(X:Y) = I(X':Y)}, {I(X:Y|Z) = I(X':Y|Z)}, etc.
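These quantities are straightforward to compute mechanically. The following self-contained Python sketch (the function names are mine) evaluates them for random variables presented as a joint probability table, i.e. a dictionary from outcome tuples to probabilities:

```python
# Sketch: Shannon entropy, mutual information, and conditional variants for
# finitely-valued random variables given by a joint probability table.
from math import log

def H(joint, coords):
    """Entropy of the marginal on the coordinate positions in coords."""
    marg = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in coords)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log(p) for p in marg.values() if p > 0)

def I(joint, a, b):        # mutual information I(A:B)
    return H(joint, a) + H(joint, b) - H(joint, a + b)

def Hc(joint, a, b):       # conditional entropy H(A|B)
    return H(joint, a + b) - H(joint, b)

def Ic(joint, a, b, c):    # conditional mutual information I(A:B|C)
    return Hc(joint, a, c) + Hc(joint, b, c) - Hc(joint, a + b, c)

# Example: outcomes (X, Y) with X a fair bit and Y = X.
joint = {(0, 0): 0.5, (1, 1): 0.5}
print(H(joint, (0,)), I(joint, (0,), (1,)), Hc(joint, (0,), (1,)))
# log 2, log 2, 0: Y determines X, so H(X|Y) = 0 and I(X:Y) = H(X)
```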

One can get some initial intuition for these information-theoretic quantities by specialising to a simple situation in which all the random variables {X} being considered come from restricting a single random (and uniformly distributed) boolean function {F: \Omega \rightarrow \{0,1\}} on a given finite domain {\Omega} to some subset {A} of {\Omega}:

\displaystyle  X = F \downharpoonright_A.

In this case, {X} has the law of a random uniformly distributed boolean function from {A} to {\{0,1\}}, and the entropy here can be easily computed to be {|A| \log 2}, where {|A|} denotes the cardinality of {A}. If {X} is the restriction of {F} to {A}, and {Y} is the restriction of {F} to {B}, then the joint variable {(X,Y)} is equivalent to the restriction of {F} to {A \cup B}. If one discards the normalisation factor {\log 2}, one then obtains the following dictionary between entropy and the combinatorics of finite sets:

Random variables {X,Y,Z} ↔ Finite sets {A,B,C}
Entropy {H(X)} ↔ Cardinality {|A|}
Joint variable {(X,Y)} ↔ Union {A \cup B}
Mutual information {I(X:Y)} ↔ Intersection cardinality {|A \cap B|}
Conditional entropy {H(X|Y)} ↔ Set difference cardinality {|A \backslash B|}
Conditional mutual information {I(X:Y|Z)} ↔ {|(A \cap B) \backslash C|}
{X, Y} independent ↔ {A, B} disjoint
{X} determined by {Y} ↔ {A} a subset of {B}
{X,Y} conditionally independent relative to {Z} ↔ {A \cap B \subset C}

Every (linear) inequality or identity about entropy (and related quantities, such as mutual information) then specialises to a combinatorial inequality or identity about finite sets that is easily verified. For instance, the Shannon inequality {H(X,Y) \leq H(X)+H(Y)} becomes the union bound {|A \cup B| \leq |A| + |B|}, and the definition of mutual information becomes the inclusion-exclusion formula

\displaystyle  |A \cap B| = |A| + |B| - |A \cup B|.

For a more advanced example, consider the data processing inequality that asserts that if {X, Z} are conditionally independent relative to {Y}, then {I(X:Z) \leq I(X:Y)}. Specialising to sets, this now says that if {A, C} are disjoint outside of {B}, then {|A \cap C| \leq |A \cap B|}; this can be made apparent by considering the corresponding Venn diagram. This dictionary also suggests how to prove the data processing inequality using the existing Shannon inequalities. Firstly, if {A} and {C} are not necessarily disjoint outside of {B}, then a consideration of Venn diagrams gives the more general inequality

\displaystyle  |A \cap C| \leq |A \cap B| + |(A \cap C) \backslash B|

and a further inspection of the diagram then reveals the more precise identity

\displaystyle  |A \cap C| + |(A \cap B) \backslash C| = |A \cap B| + |(A \cap C) \backslash B|.

Using the dictionary in the reverse direction, one is then led to conjecture the identity

\displaystyle  I( X : Z ) + I( X : Y | Z ) = I( X : Y ) + I( X : Z | Y )

which (together with non-negativity of conditional mutual information) implies the data processing inequality, and this identity is in turn easily established from the definition of mutual information.
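This identity can also be confirmed mechanically: the sketch below (self-contained, with the same joint-table representation as the previous snippet) draws a random joint distribution on three bits and checks the identity to numerical precision:

```python
# Randomised check (sketch) of I(X:Z) + I(X:Y|Z) = I(X:Y) + I(X:Z|Y).
import random
from math import log

def ent(joint, coords):
    marg = {}
    for o, p in joint.items():
        key = tuple(o[i] for i in coords)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log(p) for p in marg.values() if p > 0)

def mi(j, a, b):
    return ent(j, a) + ent(j, b) - ent(j, a + b)

def cmi(j, a, b, c):
    return ent(j, a + c) + ent(j, b + c) - ent(j, a + b + c) - ent(j, c)

outcomes = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
w = [random.random() for _ in outcomes]
joint = {o: wi / sum(w) for o, wi in zip(outcomes, w)}
lhs = mi(joint, (0,), (2,)) + cmi(joint, (0,), (1,), (2,))
rhs = mi(joint, (0,), (1,)) + cmi(joint, (0,), (2,), (1,))
print(abs(lhs - rhs) < 1e-9)   # True for every joint distribution
```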

On the other hand, not every assertion about cardinalities of sets generalises to entropies of random variables that are not arising from restricting random boolean functions to sets. For instance, a basic property of sets is that disjointness from a given set {C} is preserved by unions:

\displaystyle  A \cap C = B \cap C = \emptyset \implies (A \cup B) \cap C = \emptyset.

Indeed, one has the union bound

\displaystyle  |(A \cup B) \cap C| \leq |A \cap C| + |B \cap C|. \ \ \ \ \ (1)

Applying the dictionary in the reverse direction, one might now conjecture that if {X} was independent of {Z} and {Y} was independent of {Z}, then {(X,Y)} should also be independent of {Z}, and furthermore that

\displaystyle  I(X,Y:Z) \leq I(X:Z) + I(Y:Z)

but these statements are well known to be false (for reasons related to pairwise independence of random variables being strictly weaker than joint independence). For a concrete counterexample, one can take {X, Y \in {\bf F}_2} to be independent, uniformly distributed random elements of the finite field {{\bf F}_2} of two elements, and take {Z := X+Y} to be the sum of these two field elements. One can easily check that each of {X} and {Y} is separately independent of {Z}, but the joint variable {(X,Y)} determines {Z} and thus is not independent of {Z}.
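Here is the counterexample checked numerically (a sketch using the same representation as above):

```python
# The xor counterexample (sketch): X, Y independent fair bits, Z = X + Y in F_2.
from math import log

def ent(joint, coords):
    marg = {}
    for o, p in joint.items():
        key = tuple(o[i] for i in coords)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log(p) for p in marg.values() if p > 0)

def mi(j, a, b):
    return ent(j, a) + ent(j, b) - ent(j, a + b)

joint = {(x, y, (x + y) % 2): 0.25 for x in (0, 1) for y in (0, 1)}
print(mi(joint, (0,), (2,)))     # ~0: X is independent of Z
print(mi(joint, (1,), (2,)))     # ~0: Y is independent of Z
print(mi(joint, (0, 1), (2,)))   # log 2: (X,Y) determines Z
```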

From the inclusion-exclusion identities

\displaystyle  |A \cap C| = |A| + |C| - |A \cup C|

\displaystyle  |B \cap C| = |B| + |C| - |B \cup C|

\displaystyle  |(A \cup B) \cap C| = |A \cup B| + |C| - |A \cup B \cup C|

\displaystyle  |A \cap B \cap C| = |A| + |B| + |C| - |A \cup B| - |B \cup C| - |A \cup C|

\displaystyle + |A \cup B \cup C|

one can check that (1) is equivalent to the trivial lower bound {|A \cap B \cap C| \geq 0}. The basic issue here is that in the dictionary between entropy and combinatorics, there is no satisfactory entropy analogue of the notion of a triple intersection {A \cap B \cap C}. (Even the double intersection {A \cap B} only exists information theoretically in a “virtual” sense; the mutual information {I(X:Y)} allows one to “compute the entropy” of this “intersection”, but does not actually describe this intersection itself as a random variable.)

However, this issue only arises with three or more variables; it is not too difficult to show that the only linear equalities and inequalities that are necessarily obeyed by the information-theoretic quantities {H(X), H(Y), H(X,Y), I(X:Y), H(X|Y), H(Y|X)} associated to just two variables {X,Y} are those that are also necessarily obeyed by their combinatorial analogues {|A|, |B|, |A \cup B|, |A \cap B|, |A \backslash B|, |B \backslash A|}. (See for instance the Venn diagram at the Wikipedia page for mutual information for a pictorial summation of this statement.)

One can work with a larger class of special cases of Shannon entropy by working with random linear functions rather than random boolean functions. Namely, let {S} be some finite-dimensional vector space over a finite field {{\mathbf F}}, and let {f: S \rightarrow {\mathbf F}} be a random linear functional on {S}, selected uniformly among all such functions. Every subspace {U} of {S} then gives rise to a random variable {X = X_U: U \rightarrow {\mathbf F}} formed by restricting {f} to {U}. This random variable is also distributed uniformly amongst all linear functions on {U}, and its entropy can be easily computed to be {\mathrm{dim}(U) \log |\mathbf{F}|}. Given two random variables {X, Y} formed by restricting {f} to {U, V} respectively, the joint random variable {(X,Y)} determines the random linear function {f} on the union {U \cup V} of the two spaces, and thus by linearity on the Minkowski sum {U+V} as well; thus {(X,Y)} is equivalent to the restriction of {f} to {U+V}. In particular, {H(X,Y) = \mathrm{dim}(U+V) \log |\mathbf{F}|}. This implies that {I(X:Y) = \mathrm{dim}(U \cap V) \log |\mathbf{F}|} and also {H(X|Y) = \mathrm{dim}(\pi_V(U)) \log |\mathbf{F}|}, where {\pi_V: S \rightarrow S/V} is the quotient map. After discarding the normalising constant {\log |\mathbf{F}|}, this leads to the following dictionary between information theoretic quantities and linear algebra quantities, analogous to the previous dictionary:

Random variables {X,Y,Z} ↔ Subspaces {U,V,W}
Entropy {H(X)} ↔ Dimension {\mathrm{dim}(U)}
Joint variable {(X,Y)} ↔ Sum {U+V}
Mutual information {I(X:Y)} ↔ Dimension of intersection {\mathrm{dim}(U \cap V)}
Conditional entropy {H(X|Y)} ↔ Dimension of projection {\mathrm{dim}(\pi_V(U))}
Conditional mutual information {I(X:Y|Z)} ↔ {\mathrm{dim}(\pi_W(U) \cap \pi_W(V))}
{X, Y} independent ↔ {U, V} transverse ({U \cap V = \{0\}})
{X} determined by {Y} ↔ {U} a subspace of {V}
{X,Y} conditionally independent relative to {Z} ↔ {\pi_W(U)}, {\pi_W(V)} transverse.

The combinatorial dictionary can be regarded as a specialisation of the linear algebra dictionary, by taking {S} to be the vector space {\mathbf{F}_2^\Omega} over the finite field {\mathbf{F}_2} of two elements, and only considering those subspaces {U} that are coordinate subspaces {U = {\bf F}_2^A} associated to various subsets {A} of {\Omega}.

As before, every linear inequality or equality that is valid for the information-theoretic quantities discussed above, is automatically valid for the linear algebra counterparts for subspaces of a vector space over a finite field by applying the above specialisation (and dividing out by the normalising factor of {\log |\mathbf{F}|}). In fact, the requirement that the field be finite can be removed by applying the compactness theorem from logic (or one of its relatives, such as Los’s theorem on ultraproducts, as done in this previous blog post).

The linear algebra model captures more of the features of Shannon entropy than the combinatorial model. For instance, in contrast to the combinatorial case, it is possible in the linear algebra setting to have subspaces {U,V,W} such that {U} and {V} are separately transverse to {W}, but their sum {U+V} is not; for instance, in a two-dimensional vector space {{\bf F}^2}, one can take {U,V,W} to be the one-dimensional subspaces spanned by {(0,1)}, {(1,0)}, and {(1,1)} respectively. Note that this is essentially the same counterexample from before (which took {{\bf F}} to be the field of two elements). Indeed, one can show that any necessarily true linear inequality or equality involving the dimensions of three subspaces {U,V,W} (as well as the various other quantities on the above table) will also be necessarily true when applied to the entropies of three discrete random variables {X,Y,Z} (as well as the corresponding quantities on the above table).

However, the linear algebra model does not completely capture the subtleties of Shannon entropy once one works with four or more variables (or subspaces). This was first observed by Ingleton, who established the dimensional inequality

\displaystyle  \mathrm{dim}(U \cap V) \leq \mathrm{dim}(\pi_W(U) \cap \pi_W(V)) + \mathrm{dim}(\pi_X(U) \cap \pi_X(V)) + \mathrm{dim}(W \cap X) \ \ \ \ \ (2)

for any subspaces {U,V,W,X}. This is easiest to see when the three terms on the right-hand side vanish; then {\pi_W(U), \pi_W(V)} are transverse, which implies that {U\cap V \subset W}; similarly {U \cap V \subset X}. But {W} and {X} are transverse, and this clearly implies that {U} and {V} are themselves transverse. To prove the general case of Ingleton’s inequality, one can define {Y := U \cap V} and use {\mathrm{dim}(\pi_W(Y)) \leq \mathrm{dim}(\pi_W(U) \cap \pi_W(V))} (and similarly for {X} instead of {W}) to reduce to establishing the inequality

\displaystyle  \mathrm{dim}(Y) \leq \mathrm{dim}(\pi_W(Y)) + \mathrm{dim}(\pi_X(Y)) + \mathrm{dim}(W \cap X) \ \ \ \ \ (3)

which can be rearranged using {\mathrm{dim}(\pi_W(Y)) = \mathrm{dim}(Y) - \mathrm{dim}(W) + \mathrm{dim}(\pi_Y(W))} (and similarly for {X} instead of {W}) and {\mathrm{dim}(W \cap X) = \mathrm{dim}(W) + \mathrm{dim}(X) - \mathrm{dim}(W + X)} as

\displaystyle  \mathrm{dim}(W + X ) \leq \mathrm{dim}(\pi_Y(W)) + \mathrm{dim}(\pi_Y(X)) + \mathrm{dim}(Y)

but this is clear since {\mathrm{dim}(W + X ) \leq \mathrm{dim}(\pi_Y(W) + \pi_Y(X)) + \mathrm{dim}(Y)}.

Returning to the entropy setting, the analogue

\displaystyle  H( V ) \leq H( V | Z ) + H(V | W ) + I(Z:W)

of (3) is true (exercise!), but the analogue

\displaystyle  I(X:Y) \leq I(X:Y|Z) + I(X:Y|W) + I(Z:W) \ \ \ \ \ (4)

of Ingleton’s inequality is false in general. Again, this is easiest to see when all the terms on the right-hand side vanish; then {X,Y} are conditionally independent relative to {Z}, and relative to {W}, and {Z} and {W} are independent, and the claim (4) would then be asserting that {X} and {Y} are independent. While there is no linear counterexample to this statement, there are simple non-linear ones: for instance, one can take {Z,W} to be independent uniform variables from {\mathbf{F}_2}, and take {X} and {Y} to be (say) {ZW} and {(1-Z)(1-W)} respectively (thus {X, Y} are the indicators of the events {Z=W=1} and {Z=W=0} respectively). Once one conditions on either {Z} or {W}, one of {X,Y} has positive conditional entropy and the other has zero conditional entropy, and so {X, Y} are conditionally independent relative to either {Z} or {W}; also, {Z} and {W} are independent of each other. But {X} and {Y} are not independent of each other (they cannot be simultaneously equal to {1}). Somehow, the feature of the linear algebra model that is not present in general is that in the linear algebra setting, every pair of subspaces {U, V} has a well-defined intersection {U \cap V} that is also a subspace, whereas for arbitrary random variables {X, Y}, there does not necessarily exist the analogue of an intersection, namely a “common information” random variable {V} that has the entropy of {I(X:Y)} and is determined both by {X} and by {Y}.
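The failure of (4) on this example is easy to confirm numerically (a sketch, reusing the same helper functions in self-contained form):

```python
# Checking that (4) fails (sketch): Z, W independent fair bits, X = ZW,
# Y = (1-Z)(1-W); the right-hand side of (4) vanishes while I(X:Y) > 0.
from math import log

def ent(joint, coords):
    marg = {}
    for o, p in joint.items():
        key = tuple(o[i] for i in coords)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log(p) for p in marg.values() if p > 0)

def mi(j, a, b):
    return ent(j, a) + ent(j, b) - ent(j, a + b)

def cmi(j, a, b, c):
    return ent(j, a + c) + ent(j, b + c) - ent(j, a + b + c) - ent(j, c)

# outcome tuples ordered (X, Y, Z, W)
joint = {(z * w, (1 - z) * (1 - w), z, w): 0.25 for z in (0, 1) for w in (0, 1)}
lhs = mi(joint, (0,), (1,))
rhs = (cmi(joint, (0,), (1,), (2,)) + cmi(joint, (0,), (1,), (3,))
       + mi(joint, (2,), (3,)))
print(lhs, rhs)   # lhs ~ 0.085 > 0 = rhs: Ingleton fails for entropies
```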

I do not know if there is any simpler model of Shannon entropy that captures all the inequalities available for four variables. One significant complication is that there exist some information inequalities in this setting that are not of Shannon type, such as the Zhang-Yeung inequality

\displaystyle  I(X:Y) \leq 2 I(X:Y|Z) + I(X:Z|Y) + I(Y:Z|X)

\displaystyle + I(X:Y|W) + I(Z:W).

One can however still use these simpler models of Shannon entropy to be able to guess arguments that would work for general random variables. An example of this comes from my paper on the logarithmically averaged Chowla conjecture, in which I showed among other things that

\displaystyle  |\sum_{n \leq x} \frac{\lambda(n) \lambda(n+1)}{n}| \leq \varepsilon \log x \ \ \ \ \ (5)

whenever {x} was sufficiently large depending on {\varepsilon>0}, where {\lambda} is the Liouville function. The information-theoretic part of the proof was as follows. Given some intermediate scale {H} between {1} and {x}, one can form certain random variables {X_H, Y_H}. The random variable {X_H} is a sign pattern of the form {(\lambda(n+1),\dots,\lambda(n+H))} where {n} is a random number chosen from {1} to {x} (with logarithmic weighting). The random variable {Y_H} is the tuple {(n \hbox{ mod } p)_{p \sim \varepsilon^2 H}} of reductions of {n} to primes {p} comparable to {\varepsilon^2 H}. Roughly speaking, what was implicitly shown in the paper (after using the multiplicativity of {\lambda}, the circle method, and the Matomaki-Radziwill theorem on short averages of multiplicative functions) is that if the inequality (5) fails, then there is a lower bound

\displaystyle  I( X_H : Y_H ) \gg \varepsilon^7 \frac{H}{\log H}

on the mutual information between {X_H} and {Y_H}. From translation invariance, this also gives the more general lower bound

\displaystyle  I( X_{H_0,H} : Y_H ) \gg \varepsilon^7 \frac{H}{\log H} \ \ \ \ \ (6)

for any {H_0}, where {X_{H_0,H}} denotes the shifted sign pattern {(\lambda(n+H_0+1),\dots,\lambda(n+H_0+H))}. On the other hand, one had the entropy bounds

\displaystyle  H( X_{H_0,H} ), H(Y_H) \ll H

and from concatenating sign patterns one could see that {X_{H_0,H+H'}} is equivalent to the joint random variable {(X_{H_0,H}, X_{H_0+H,H'})} for any {H_0,H,H'}. Applying these facts and using an “entropy decrement” argument, I was able to obtain a contradiction once {H} was allowed to become sufficiently large compared to {\varepsilon}, but the bound was quite weak (coming ultimately from the unboundedness of {\sum_{\log H_- \leq j \leq \log H_+} \frac{1}{j \log j}} as the interval {[H_-,H_+]} of values of {H} under consideration becomes large), something of the order of {H \sim \exp\exp\exp(\varepsilon^{-7})}; the quantity {H} needs at various junctures to be less than a small power of {\log x}, so the relationship between {x} and {\varepsilon} becomes essentially quadruple exponential in nature, {x \sim \exp\exp\exp\exp(\varepsilon^{-7})}. The basic strategy was to observe that the lower bound (6) causes some slowdown in the growth rate {H(X_{kH})/kH} of the mean entropy, in that this quantity decreased by {\gg \frac{\varepsilon^7}{\log H}} as {k} increased from {1} to {\log H}, basically by dividing {X_{kH}} into {k} components {X_{jH, H}}, {j=0,\dots,k-1} and observing from (6) each of these shares a bit of common information with the same variable {Y_H}. This is relatively clear when one works in a set model, in which {Y_H} is modeled by a set {B_H} of size {O(H)}, and {X_{H_0,H}} is modeled by a set of the form

\displaystyle  X_{H_0,H} = \bigcup_{H_0 < h \leq H_0+H} A_h

for various sets {A_h} of size {O(1)} (also there is some translation symmetry that maps {A_h} to a shift {A_{h+1}} while preserving all of the {B_H}).

However, on considering the set model recently, I realised that one can be a little more efficient by exploiting the fact (basically the Chinese remainder theorem) that the random variables {Y_H} are basically jointly independent as {H} ranges over dyadic values that are much smaller than {\log x}, which in the set model corresponds to the {B_H} all being disjoint. One can then establish a variant

\displaystyle  I( X_{H_0,H} : Y_H | (Y_{H'})_{H' < H}) \gg \varepsilon^7 \frac{H}{\log H} \ \ \ \ \ (7)

of (6), which in the set model roughly speaking asserts that each {B_H} claims a portion of the {\bigcup_{H_0 < h \leq H_0+H} A_h} of cardinality {\gg \varepsilon^7 \frac{H}{\log H}} that is not claimed by previous choices of {B_H}. This leads to a more efficient contradiction (relying on the unboundedness of {\sum_{\log H_- \leq j \leq \log H_+} \frac{1}{j}} rather than {\sum_{\log H_- \leq j \leq \log H_+} \frac{1}{j \log j}}) that looks like it removes one order of exponential growth, thus the relationship between {x} and {\varepsilon} is now {x \sim \exp\exp\exp(\varepsilon^{-7})}. Returning to the entropy model, one can use (7) and Shannon inequalities to establish an inequality of the form

\displaystyle  \frac{1}{2H} H(X_{2H} | (Y_{H'})_{H' \leq 2H}) \leq \frac{1}{H} H(X_{H} | (Y_{H'})_{H' \leq H}) - \frac{c \varepsilon^7}{\log H}

for a small constant {c>0}, which on iterating and using the boundedness of {\frac{1}{H} H(X_{H} | (Y_{H'})_{H' \leq H})} gives the claim. (A modification of this analysis, at least at the level of a back-of-the-envelope calculation, suggests that the Matomaki-Radziwill theorem is needed only for ranges {H} greater than {\exp( (\log\log x)^{\varepsilon^{7}} )} or so, although at this range the theorem is not significantly simpler than the general case).

This is a postscript to the previous blog post which was concerned with obtaining heuristic asymptotic predictions for the correlation

\displaystyle \sum_{n \leq x} \tau(n) \tau(n+h), \ \ \ \ \ (1)

 

for the divisor function {\tau(n) := \sum_{d|n} 1}, in particular recovering the calculation of Ingham that obtained the asymptotic

\displaystyle \sum_{n \leq x} \tau(n) \tau(n+h) \sim \frac{6}{\pi^2} \sigma_{-1}(h) x \log^2 x \ \ \ \ \ (2)

 

when {h} was fixed and non-zero and {x} went to infinity. It is natural to consider the more general correlations

\displaystyle \sum_{n \leq x} \tau_k(n) \tau_l(n+h)

for fixed {k,l \geq 1} and non-zero {h}, where

\displaystyle \tau_k(n) := \sum_{d_1 \dots d_k = n} 1

is the order {k} divisor function. The sum (1) then corresponds to the case {k=l=2}. For {l=1}, {\tau_1(n) = 1}, and a routine application of the Dirichlet hyperbola method (or Perron’s formula) gives the asymptotic

\displaystyle \sum_{n \leq x} \tau_k(n) \sim \frac{\log^{k-1} x}{(k-1)!} x,

or more accurately

\displaystyle \sum_{n \leq x} \tau_k(n) \sim P_k(\log x) x

where {P_k(t)} is a certain explicit polynomial of degree {k-1} with leading coefficient {\frac{1}{(k-1)!}}; see e.g. Exercise 31 of this previous post for a discussion of the {k=3} case (which is already typical). Similarly if {k=1}. For more general {k,l \geq 1}, there is a conjecture of Conrey and Gonek which predicts that

\displaystyle \sum_{n \leq x} \tau_k(n) \tau_l(n+h) \sim P_{k,l,h}(\log x) x

for some polynomial {P_{k,l,h}(t)} of degree {k+l-2} which is explicit but whose form is rather complicated (one has to compute residues of various complicated products of zeta functions and local factors). This conjecture has been verified when {k \leq 2} or {l \leq 2}, by the work of Linnik, Motohashi, Fouvry-Tenenbaum, and others, but all the remaining cases when {k,l \geq 3} are currently open.
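As a numerical sanity check on the {l=1} asymptotic above (a sketch of my own; only rough agreement with the leading term should be expected, since the lower-order coefficients of {P_k} are substantial at accessible heights), one can sieve {\tau_k} by repeated Dirichlet convolution:

```python
# Sketch: sum_{n <= x} tau_k(n) by sieving the k-fold Dirichlet convolution
# 1 * ... * 1, compared with the leading term x log^{k-1} x / (k-1)!.
from math import log, factorial

def sum_tau_k(x, k):
    t = [1] * (x + 1)              # tau_1(n) = 1
    for _ in range(k - 1):         # convolve with 1, (k-1) times
        s = [0] * (x + 1)
        for d in range(1, x + 1):
            for m in range(d, x + 1, d):
                s[m] += t[m // d]
        t = s
    return sum(t[1:])

x, k = 10**5, 3
print(sum_tau_k(x, k), x * log(x) ** (k - 1) / factorial(k - 1))
# rough agreement only: the omitted lower-order terms of P_k are sizable here
```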

In principle, the calculations of the previous post should recover the predictions of Conrey and Gonek. In this post I would like to record this for the top order term:

Conjecture 1 If {k,l \geq 2} and {h \neq 0} are fixed, then

\displaystyle \sum_{n \leq x} \tau_k(n) \tau_l(n+h) \sim \frac{\log^{k-1} x}{(k-1)!} \frac{\log^{l-1} x}{(l-1)!} x \prod_p {\mathfrak S}_{k,l,p}(h)

as {x \rightarrow \infty}, where the product is over all primes {p}, and the local factors {{\mathfrak S}_{k,l,p}(h)} are given by the formula

\displaystyle {\mathfrak S}_{k,l,p}(h) := (\frac{p-1}{p})^{k+l-2} \sum_{j \geq 0: p^j|h} \frac{1}{p^j} P_{k,l,p}(j) \ \ \ \ \ (3)

 

where {P_{k,l,p}} is the degree {k+l-4} polynomial

\displaystyle P_{k,l,p}(j) := \sum_{k'=2}^k \sum_{l'=2}^l \binom{k-k'+j-1}{k-k'} \binom{l-l'+j-1}{l-l'} \alpha_{k',l',p}

where

\displaystyle \alpha_{k',l',p} := (\frac{p}{p-1})^{k'-1} + (\frac{p}{p-1})^{l'-1} - 1

and one adopts the conventions that {\binom{-1}{0} = 1} and {\binom{m-1}{m} = 0} for {m \geq 1}.

For instance, if {k=l=2} then

\displaystyle P_{2,2,p}(j) = \frac{p}{p-1} + \frac{p}{p-1} - 1 = \frac{p+1}{p-1}

and hence

\displaystyle {\mathfrak S}_{2,2,p}(h) = (1 - \frac{1}{p^2}) \sum_{j \geq 0: p^j|h} \frac{1}{p^j}

and the above conjecture recovers the Ingham formula (2). For {k=2, l=3}, we have

\displaystyle P_{2,3,p}(j) =

\displaystyle (\frac{p}{p-1} + (\frac{p}{p-1})^2 - 1) + (\frac{p}{p-1} + \frac{p}{p-1} - 1) j

\displaystyle = \frac{p^2+p-1}{(p-1)^2} + \frac{p+1}{p-1} j

and so we predict

\displaystyle \sum_{n \leq x} \tau(n) \tau_3(n+h) \sim \frac{x \log^3 x}{2} \prod_p {\mathfrak S}_{2,3,p}(h)

where

\displaystyle {\mathfrak S}_{2,3,p}(h) = \sum_{j \geq 0: p^j|h} \frac{\frac{p^3 - 2p + 1}{p^3} + \frac{(p+1)(p-1)^2}{p^3} j}{p^j}.

Similarly, if {k=l=3} we have

\displaystyle P_{3,3,p}(j) = ((\frac{p}{p-1})^2 + (\frac{p}{p-1})^2 - 1) + 2 (\frac{p}{p-1} + (\frac{p}{p-1})^2 - 1) j

\displaystyle + (\frac{p}{p-1} + \frac{p}{p-1} - 1) j^2

\displaystyle = \frac{p^2+2p-1}{(p-1)^2} + 2 \frac{p^2+p-1}{(p-1)^2} j + \frac{p+1}{p-1} j^2

and so we predict

\displaystyle \sum_{n \leq x} \tau_3(n) \tau_3(n+h) \sim \frac{x \log^4 x}{4} \prod_p {\mathfrak S}_{3,3,p}(h)

where

\displaystyle {\mathfrak S}_{3,3,p}(h) = \sum_{j \geq 0: p^j|h} \frac{\frac{p^4 - 4p^2 + 4p - 1}{p^4} + 2 \frac{(p^2+p-1)(p-1)^2}{p^4} j + \frac{(p+1)(p-1)^3}{p^4} j^2}{p^j}.

and so forth.
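To guard against algebra slips in specialisations like these, one can evaluate {P_{k,l,p}} symbolically; the sketch below uses sympy, whose binomial with a symbolic first argument and integer second argument expands to the falling-factorial polynomial, matching the stated conventions at {j=0}:

```python
# Symbolic evaluation of the polynomial P_{k,l,p} from Conjecture 1 (sketch).
from sympy import symbols, binomial, simplify

p, j = symbols('p j', positive=True)

def alpha(kp, lp):
    return (p / (p - 1)) ** (kp - 1) + (p / (p - 1)) ** (lp - 1) - 1

def P(k, l):
    return sum(binomial(k - kp + j - 1, k - kp) * binomial(l - lp + j - 1, l - lp)
               * alpha(kp, lp)
               for kp in range(2, k + 1) for lp in range(2, l + 1))

print(simplify(P(2, 2)))   # (p + 1)/(p - 1), as above
print(simplify(P(2, 3)))   # matches the displayed P_{2,3,p}(j)
print(simplify(P(3, 3)))   # matches the displayed P_{3,3,p}(j)
```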

As in the previous blog, the idea is to factorise

\displaystyle \tau_k(n) = \prod_p \tau_{k,p}(n)

where the local factors {\tau_{k,p}(n)} are given by

\displaystyle \tau_{k,p}(n) := \sum_{j_1,\dots,j_k \geq 0: p^{j_1+\dots+j_k} || n} 1

(where {p^j || n} means that {p} divides {n} precisely {j} times), or in terms of the valuation {v_p(n)} of {n} at {p},

\displaystyle \tau_{k,p}(n) = \binom{k-1+v_p(n)}{k-1}. \ \ \ \ \ (4)
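The identity (4) is just the stars-and-bars count; here is a brute-force confirmation for small parameters (a sketch):

```python
# Brute-force check of (4) (sketch): the number of ways to write v as an
# ordered sum of k nonnegative integers equals C(k-1+v, k-1).
from math import comb
from itertools import product

def compositions(v, k):
    return sum(1 for js in product(range(v + 1), repeat=k) if sum(js) == v)

assert all(compositions(v, k) == comb(k - 1 + v, k - 1)
           for k in range(1, 5) for v in range(6))
print("ok")
```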

 

We then have the following exact local asymptotics:

Proposition 2 (Local correlations) Let {{\bf n}} be a profinite integer chosen uniformly at random, let {h} be a profinite integer, and let {k,l \geq 2}. Then

\displaystyle {\bf E} \tau_{k,p}({\bf n}) = (\frac{p}{p-1})^{k-1} \ \ \ \ \ (5)

 

and

\displaystyle {\bf E} \tau_{k,p}({\bf n}) \tau_{l,p}({\bf n}+h) = (\frac{p}{p-1})^{k+l-2} {\mathfrak S}_{k,l,p}(h). \ \ \ \ \ (6)

 

(For profinite integers it is possible that {v_p({\bf n})} and hence {\tau_{k,p}({\bf n})} are infinite, but this is a probability zero event and so can be ignored.)

Conjecture 1 can then be heuristically justified from the local calculations (5) and (6) by various pseudorandomness heuristics, as discussed in the previous post.

I’ll give a short proof of the above proposition below, basically using the recursive methods of the previous post. This short proof actually took me quite a while to find; I spent several hours and a fair bit of scratch paper working out the cases {k,l = 2,3} laboriously by hand (with some assistance and cross-checking from Maple). Here is an unorganised sample of some of this scratch, just to show how the sausage is actually made:

[Photo: a page of handwritten scratch calculations.]

It was only after expending all this effort that I realised that it would be much more efficient to compute the correlations for all values of {k,l} simultaneously by using generating functions. After performing this computation, it then became apparent that there would be a direct combinatorial proof of (6) that was shorter than even the generating function proof. (I will not supply the full generating function calculations here, but will at least show them for the easier correlation (5).)

I am confident that Conjecture 1 is consistent with the explicit asymptotic in the Conrey-Gonek conjecture, but have not yet rigorously established that the leading order term in the latter is indeed identical to the expression provided above.


Let {\tau(n) := \sum_{d|n} 1} be the divisor function. A classical application of the Dirichlet hyperbola method gives the asymptotic

\displaystyle \sum_{n \leq x} \tau(n) \sim x \log x

where {X \sim Y} denotes the estimate {X = (1+o(1))Y} as {x \rightarrow \infty}. Much better error estimates are possible here, but we will not focus on the lower order terms in this discussion. For somewhat idiosyncratic reasons I will interpret this estimate (and the other analytic number theory estimates discussed here) through the probabilistic lens. Namely, if {{\bf n} = {\bf n}_x} is a random number selected uniformly between {1} and {x}, then the above estimate can be written as

\displaystyle {\bf E} \tau( {\bf n} ) \sim \log x, \ \ \ \ \ (1)

 

that is to say the random variable {\tau({\bf n})} has mean approximately {\log x}. (But, somewhat paradoxically, this is not the median or mode behaviour of this random variable, which instead concentrates near {\log^{\log 2} x}, basically thanks to the Hardy-Ramanujan theorem.)

Now we turn to the pair correlations {\sum_{n \leq x} \tau(n) \tau(n+h)} for a fixed positive integer {h}. There is a classical computation of Ingham that shows that

\displaystyle \sum_{n \leq x} \tau(n) \tau(n+h) \sim \frac{6}{\pi^2} \sigma_{-1}(h) x \log^2 x, \ \ \ \ \ (2)

 

where

\displaystyle \sigma_{-1}(h) := \sum_{d|h} \frac{1}{d}.

The error term in (2) has been refined by many subsequent authors, as has the uniformity of the estimates in the {h} aspect, as these topics are related to other questions in analytic number theory, such as fourth moment estimates for the Riemann zeta function; but we will not consider these more subtle features of the estimate here. However, we will look at the next term in the asymptotic expansion for (2) below the fold.

Using our probabilistic lens, the estimate (2) can be written as

\displaystyle {\bf E} \tau( {\bf n} ) \tau( {\bf n} + h ) \sim \frac{6}{\pi^2} \sigma_{-1}(h) \log^2 x. \ \ \ \ \ (3)

 

From (1) (and the asymptotic negligibility of the shift by {h}) we see that the random variables {\tau({\bf n})} and {\tau({\bf n}+h)} both have a mean of {\sim \log x}, so the additional factor of {\frac{6}{\pi^2} \sigma_{-1}(h)} represents some arithmetic coupling between the two random variables.
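Before discussing proofs, it is instructive to see (2) numerically; in the sketch below (my own; convergence is only in {\log x}, so the ratio approaches {1} quite slowly) we sieve divisor counts and compare against the predicted main term:

```python
# Numeric look at Ingham's asymptotic (sketch): the empirical correlation sum
# versus the prediction (6/pi^2) sigma_{-1}(h) x log^2 x.
from math import log, pi

x, h = 10**5, 2
tau = [0] * (x + h + 1)
for d in range(1, x + h + 1):          # divisor-count sieve
    for m in range(d, x + h + 1, d):
        tau[m] += 1
empirical = sum(tau[n] * tau[n + h] for n in range(1, x + 1))
sigma = sum(1 / d for d in range(1, h + 1) if h % d == 0)
predicted = 6 / pi**2 * sigma * x * log(x) ** 2
print(empirical / predicted)           # slowly approaching 1 as x grows
```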

Ingham’s formula can be established in a number of ways. Firstly, one can expand out {\tau(n) = \sum_{d|n} 1} and use the hyperbola method (splitting into the cases {d \leq \sqrt{x}} and {n/d \leq \sqrt{x}} and removing the overlap). If one does so, one soon arrives at the task of having to estimate sums of the form

\displaystyle \sum_{n \leq x: d|n} \tau(n+h)

for various {d \leq \sqrt{x}}. For {d} much less than {\sqrt{x}} this can be achieved using a further application of the hyperbola method, but for {d} comparable to {\sqrt{x}} things get a bit more complicated, necessitating the use of non-trivial estimates on Kloosterman sums in order to obtain satisfactory control on error terms. A more modern approach proceeds using automorphic form methods, as discussed in this previous post. A third approach, which unfortunately is only heuristic at the current level of technology, is to apply the Hardy-Littlewood circle method (discussed in this previous post) to express (2) in terms of exponential sums {\sum_{n \leq x} \tau(n) e(\alpha n)} for various frequencies {\alpha}. The contribution of “major arc” {\alpha} can be computed after a moderately lengthy calculation which yields the right-hand side of (2) (as well as the correct lower order terms that are currently being suppressed), but there does not appear to be an easy way to show directly that the “minor arc” contributions are of lower order, although the methods discussed previously do indirectly show that this is ultimately the case.

Each of the methods outlined above requires a fair amount of calculation, and it is not obvious while performing them that the factor {\frac{6}{\pi^2} \sigma_{-1}(h)} will emerge at the end. One can at least explain the {\frac{6}{\pi^2}} as a normalisation constant needed to balance the {\sigma_{-1}(h)} factor (at a heuristic level, at least). To see this through our probabilistic lens, introduce an independent copy {{\bf n}'} of {{\bf n}}, then

\displaystyle {\bf E} \tau( {\bf n} ) \tau( {\bf n}' ) = ({\bf E} \tau ({\bf n}))^2 \sim \log^2 x; \ \ \ \ \ (4)

 

using symmetry to order {{\bf n}' > {\bf n}} (discarding the diagonal case {{\bf n} = {\bf n}'}) and making the change of variables {{\bf n}' = {\bf n}+h}, we see that (4) is heuristically consistent with (3) as long as the asymptotic mean of {\frac{6}{\pi^2} \sigma_{-1}(h)} in {h} is equal to {1}. (This argument is not rigorous because there was an implicit interchange of limits present, but still gives a good heuristic “sanity check” of Ingham’s formula.) Indeed, if {{\bf E}_h} denotes the asymptotic mean in {h}, then we have (heuristically at least)

\displaystyle {\bf E}_h \sigma_{-1}(h) = \sum_d {\bf E}_h \frac{1}{d} 1_{d|h}

\displaystyle = \sum_d \frac{1}{d^2}

\displaystyle = \frac{\pi^2}{6}

and we obtain the desired consistency after multiplying by {\frac{6}{\pi^2}}.
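One can verify this average numerically; interchanging the sums as above gives {\sum_{h \leq H} \sigma_{-1}(h) = \sum_{d \leq H} \frac{1}{d} \lfloor H/d \rfloor}, which leads to a one-line check:

```python
# The mean of sigma_{-1}(h) for h up to H, computed by summing over the
# divisor d first, approaches pi^2/6, so (6/pi^2) sigma_{-1}(h) has mean ~ 1.
import math

H = 10**6
mean = sum((H // d) / d for d in range(1, H + 1)) / H
print(mean, math.pi**2 / 6)
```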

This still however does not explain the presence of the {\sigma_{-1}(h)} factor. Intuitively it is reasonable that if {h} has many prime factors, and {{\bf n}} has a lot of factors, then {{\bf n}+h} will have slightly more factors than average, because any common factor to {h} and {{\bf n}} will automatically be acquired by {{\bf n}+h}. But how to quantify this effect?

One heuristic way to proceed is through analysis of local factors. Observe from the fundamental theorem of arithmetic that we can factor

\displaystyle \tau(n) = \prod_p \tau_p(n)

where the product is over all primes {p}, and {\tau_p(n) := \sum_{p^j|n} 1} is the local version of {\tau(n)} at {p} (which in this case is just one plus the {p}-adic valuation {v_p(n)} of {n}: {\tau_p = 1 + v_p}). Note that all but finitely many of the terms in this product will equal {1}, so the infinite product is well-defined. In a similar fashion, we can factor

\displaystyle \sigma_{-1}(h) = \prod_p \sigma_{-1,p}(h)

where

\displaystyle \sigma_{-1,p}(h) := \sum_{p^j|h} \frac{1}{p^j}

(or in terms of valuations, {\sigma_{-1,p}(h) = (1 - p^{-v_p(h)-1})/(1-p^{-1})}). Heuristically, the Chinese remainder theorem suggests that the various factors {\tau_p({\bf n})} behave like independent random variables, and so the correlation between {\tau({\bf n})} and {\tau({\bf n}+h)} should approximately decouple into the product of correlations between the local factors {\tau_p({\bf n})} and {\tau_p({\bf n}+h)}. And indeed we do have the following local version of Ingham’s asymptotics:

Proposition 1 (Local Ingham asymptotics) For fixed {p} and integer {h}, we have

\displaystyle {\bf E} \tau_p({\bf n}) \sim \frac{p}{p-1}

and

\displaystyle {\bf E} \tau_p({\bf n}) \tau_p({\bf n}+h) \sim (1-\frac{1}{p^2}) \sigma_{-1,p}(h) (\frac{p}{p-1})^2

\displaystyle = \frac{p+1}{p-1} \sigma_{-1,p}(h)

From the Euler formula

\displaystyle \prod_p (1-\frac{1}{p^2}) = \frac{1}{\zeta(2)} = \frac{6}{\pi^2}

we see that

\displaystyle \frac{6}{\pi^2} \sigma_{-1}(h) = \prod_p (1-\frac{1}{p^2}) \sigma_{-1,p}(h)

and so one can “explain” the arithmetic factor {\frac{6}{\pi^2} \sigma_{-1}(h)} in Ingham’s asymptotic as the product of the arithmetic factors {(1-\frac{1}{p^2}) \sigma_{-1,p}(h)} in the (much easier) local Ingham asymptotics. Unfortunately we have the usual “local-global” problem in that we do not know how to rigorously derive the global asymptotic from the local ones; this problem is essentially the same issue as the problem of controlling the minor arc contributions in the circle method, but phrased in “physical space” language rather than “frequency space”.

Remark 2 The relation between the local means {\sim \frac{p}{p-1}} and the global mean {\sim \log x} can also be seen heuristically through the application

\displaystyle \prod_{p \leq x^{1/e^\gamma}} \frac{p}{p-1} \sim \log x

of Mertens’ theorem, where {1/e^\gamma} is Pólya’s magic exponent, which serves as a useful heuristic limiting threshold in situations where the product of local factors is divergent.
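As a quick numerical illustration of this cutoff (a sketch only, with the numerical value of {\gamma} hard-coded):

```python
# Multiplying the local means p/(p-1) over primes up to the magic cutoff
# x^{1/e^gamma} tracks log x, in accordance with Mertens' theorem.
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def primes_up_to(N):
    sieve = bytearray([1]) * (N + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(N) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(N + 1) if sieve[p]]

for x in (10**4, 10**6, 10**8):
    cutoff = int(x ** (1 / math.exp(GAMMA)))  # Polya's magic cutoff
    prod = 1.0
    for p in primes_up_to(cutoff):
        prod *= p / (p - 1)
    print(x, prod, math.log(x))  # the product tracks log x
```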

Let us now prove this proposition. One could brute-force the computations by observing that for any fixed {j}, the valuation {v_p({\bf n})} is equal to {j} with probability {\sim \frac{p-1}{p} \frac{1}{p^j}}, and with a little more effort one can also compute the joint distribution of {v_p({\bf n})} and {v_p({\bf n}+h)}, at which point the proposition reduces to the calculation of various variants of the geometric series. I however find it cleaner to proceed in a more recursive fashion (similar to how one can prove the geometric series formula by induction); this will also make visible the vague intuition mentioned previously about how common factors of {{\bf n}} and {h} force {{\bf n}+h} to have a factor also.

It is first convenient to get rid of error terms by observing that in the limit {x \rightarrow \infty}, the random variable {{\bf n} = {\bf n}_x} converges vaguely to a uniform random variable {{\bf n}_\infty} on the profinite integers {\hat {\bf Z}}, or more precisely that the pair {(v_p({\bf n}_x), v_p({\bf n}_x+h))} converges vaguely to {(v_p({\bf n}_\infty), v_p({\bf n}_\infty+h))}. Because of this (and because of the easily verified uniform integrability properties of {\tau_p({\bf n})} and their powers), it suffices to establish the exact formulae

\displaystyle {\bf E} \tau_p({\bf n}_\infty) = \frac{p}{p-1} \ \ \ \ \ (5)

 

and

\displaystyle {\bf E} \tau_p({\bf n}_\infty) \tau_p({\bf n}_\infty+h) = (1-\frac{1}{p^2}) \sigma_{-1,p}(h) (\frac{p}{p-1})^2 = \frac{p+1}{p-1} \sigma_{-1,p}(h) \ \ \ \ \ (6)

 

in the profinite setting (this setting will make it easier to set up the recursion).

We begin with (5). Observe that {{\bf n}_\infty} is coprime to {p} with probability {\frac{p-1}{p}}, in which case {\tau_p({\bf n}_\infty)} is equal to {1}. Conditioning to the complementary probability {\frac{1}{p}} event that {{\bf n}_\infty} is divisible by {p}, we can factor {{\bf n}_\infty = p {\bf n}'_\infty} where {{\bf n}'_\infty} is also uniformly distributed over the profinite integers, in which event we have {\tau_p( {\bf n}_\infty ) = 1 + \tau_p( {\bf n}'_\infty )}. We arrive at the identity

\displaystyle {\bf E} \tau_p({\bf n}_\infty) = \frac{p-1}{p} + \frac{1}{p} ( 1 + {\bf E} \tau_p( {\bf n}'_\infty ) ).

As {{\bf n}_\infty} and {{\bf n}'_\infty} have the same distribution, the quantities {{\bf E} \tau_p({\bf n}_\infty)} and {{\bf E} \tau_p({\bf n}'_\infty)} are equal, and (5) follows by a brief amount of high-school algebra.

We use a similar method to treat (6). First treat the case when {h} is coprime to {p}. Then we see that with probability {\frac{p-2}{p}}, {{\bf n}_\infty} and {{\bf n}_\infty+h} are simultaneously coprime to {p}, in which case {\tau_p({\bf n}_\infty) = \tau_p({\bf n}_\infty+h) = 1}. Furthermore, with probability {\frac{1}{p}}, {{\bf n}_\infty} is divisible by {p} and {{\bf n}_\infty+h} is not; in which case we can write {{\bf n}_\infty = p {\bf n}'_\infty} as before, with {\tau_p({\bf n}_\infty) = 1 + \tau_p({\bf n}'_\infty)} and {\tau_p({\bf n}_\infty+h)=1}. Finally, in the remaining event with probability {\frac{1}{p}}, {{\bf n}_\infty+h} is divisible by {p} and {{\bf n}_\infty} is not; we can then write {{\bf n}_\infty+h = p {\bf n}'_\infty}, so that {\tau_p({\bf n}_\infty+h) = 1 + \tau_p({\bf n}'_\infty)} and {\tau_p({\bf n}_\infty) = 1}. Putting all this together, we obtain

\displaystyle {\bf E} \tau_p({\bf n}_\infty) \tau_p({\bf n}_\infty+h) = \frac{p-2}{p} + 2 \frac{1}{p} (1 + {\bf E} \tau_p({\bf n}'_\infty))

and the claim (6) in this case follows from (5) and a brief computation (noting that {\sigma_{-1,p}(h)=1} in this case).

Now suppose that {h} is divisible by {p}, thus {h=ph'} for some integer {h'}. Then with probability {\frac{p-1}{p}}, {{\bf n}_\infty} and {{\bf n}_\infty+h} are simultaneously coprime to {p}, in which case {\tau_p({\bf n}_\infty) = \tau_p({\bf n}_\infty+h) = 1}. In the remaining {\frac{1}{p}} event, we can write {{\bf n}_\infty = p {\bf n}'_\infty}, and then {\tau_p({\bf n}_\infty) = 1 + \tau_p({\bf n}'_\infty)} and {\tau_p({\bf n}_\infty+h) = 1 + \tau_p({\bf n}'_\infty+h')}. Putting all this together we have

\displaystyle {\bf E} \tau_p({\bf n}_\infty) \tau_p({\bf n}_\infty+h) = \frac{p-1}{p} + \frac{1}{p} {\bf E} (1+\tau_p({\bf n}'_\infty))(1+\tau_p({\bf n}'_\infty+h'))

which by (5) (and replacing {{\bf n}'_\infty} by {{\bf n}_\infty}) leads to the recursive relation

\displaystyle {\bf E} \tau_p({\bf n}_\infty) \tau_p({\bf n}_\infty+h) = \frac{p+1}{p-1} + \frac{1}{p} {\bf E} \tau_p({\bf n}_\infty) \tau_p({\bf n}_\infty+h')

and (6) then follows by induction on the number of powers of {p} dividing {h} (i.e., on the valuation {v_p(h)}).
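The exact formulae (5) and (6) are also easy to test empirically by averaging over {1 \leq n \leq N} for a large finite {N}; a minimal sketch:

```python
# Empirical check of the local Ingham asymptotics (5) and (6): average
# tau_p(n) and tau_p(n) tau_p(n+h) over n <= N and compare with the exact
# profinite expectations p/(p-1) and (p+1)/(p-1) * sigma_{-1,p}(h).
def tau_p(n, p):
    t = 1
    while n % p == 0:
        n //= p
        t += 1
    return t

def sigma_p(h, p):
    # sigma_{-1,p}(h) = 1 + 1/p + ... + 1/p^{v_p(h)}
    s, q = 1.0, p
    while h % q == 0:
        s += 1.0 / q
        q *= p
    return s

N, p, h = 10**6, 3, 9
mean1 = sum(tau_p(n, p) for n in range(1, N + 1)) / N
mean2 = sum(tau_p(n, p) * tau_p(n + h, p) for n in range(1, N + 1)) / N
print(mean1, p / (p - 1))                        # check of (5)
print(mean2, (p + 1) / (p - 1) * sigma_p(h, p))  # check of (6)
```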

The estimate (2) of Ingham was refined by Estermann, who obtained the more accurate expansion

\displaystyle \sum_{n \leq x} \tau(n) \tau(n+h) = \frac{6}{\pi^2} \sigma_{-1}(h) x \log^2 x + a_1(h) x \log x + a_2(h) x \ \ \ \ \ (7)

 

\displaystyle + O( x^{11/12+o(1)} )

for certain complicated but explicit coefficients {a_1(h), a_2(h)}. For instance, {a_1(h)} is given by the formula

\displaystyle a_1(h) = (\frac{12}{\pi^2} (2\gamma-1) + 4 a') \sigma_{-1}(h) - \frac{24}{\pi^2} \sigma'_{-1}(h)

where {\gamma} is the Euler-Mascheroni constant,

\displaystyle a' := - \sum_{r=1}^\infty \frac{\mu(r)}{r^2} \log r, \ \ \ \ \ (8)

 

and

\displaystyle \sigma'_{-1}(h) := \sum_{d|h} \frac{\log d}{d}.

The formula for {a_2(h)} is similar but even more complicated. The error term {O( x^{11/12+o(1)})} was improved by Heath-Brown to {O( x^{5/6+o(1)})}; it is conjectured (for instance by Conrey and Gonek) that one in fact has square root cancellation {O( x^{1/2+o(1)})} here, but this is well out of reach of current methods.
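The constant {a'} converges slowly as written, but differentiating the identity {\sum_r \mu(r) r^{-s} = 1/\zeta(s)} at {s=2} gives the equivalent expression {a' = -\zeta'(2)/\zeta(2)^2 \approx 0.3465}, which can be used as a cross-check; a small numerical sketch:

```python
# Two ways to compute the constant a' from (8): directly from the Mobius sum,
# and via the identity a' = -zeta'(2)/zeta(2)^2 obtained by differentiating
# sum_r mu(r) r^{-s} = 1/zeta(s) at s = 2.
import math

N = 10**6

def mobius_table(N):
    mu = [1] * (N + 1)
    prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if prime[p]:
            for m in range(2 * p, N + 1, p):
                prime[m] = False
            for m in range(p, N + 1, p):
                mu[m] = -mu[m]
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

mu = mobius_table(N)
a_direct = -sum(mu[r] * math.log(r) / r**2 for r in range(2, N + 1))
zeta2 = math.pi**2 / 6
zeta2_prime = -sum(math.log(n) / n**2 for n in range(2, N + 1))
print(a_direct, -zeta2_prime / zeta2**2)  # both approximately 0.3465
```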

These lower order terms are traditionally computed either from a Dirichlet series approach (using Perron’s formula) or a circle method approach. It turns out that a refinement of the above heuristics can also predict these lower order terms, thus keeping the calculation purely in physical space as opposed to the “multiplicative frequency space” of the Dirichlet series approach, or the “additive frequency space” of the circle method, although the computations are arguably as messy as the latter computations for the purposes of working out the lower order terms. We illustrate this just for the {a_1(h) x \log x} term below the fold.


[This blog post was written jointly by Terry Tao and Will Sawin.]

In the previous blog post, one of us (Terry) implicitly introduced a notion of rank for tensors which is a little different from the usual notion of tensor rank, and which (following BCCGNSU) we will call “slice rank”. This notion of rank could then be used to encode the Croot-Lev-Pach-Ellenberg-Gijswijt argument that uses the polynomial method to control capsets.

Since then, several papers have applied the slice rank method to further problems – to control tri-colored sum-free sets in abelian groups (BCCGNSU, KSS) and from there to the triangle removal lemma in vector spaces over finite fields (FL), to control sunflowers (NS), and to bound progression-free sets in {p}-groups (P).

In this post we investigate the notion of slice rank more systematically. In particular, we show how to give lower bounds for the slice rank. In many cases, we can show that the upper bounds on slice rank given in the aforementioned papers are sharp to within a subexponential factor. This still leaves open the possibility of getting a better bound for the original combinatorial problem using the slice rank of some other tensor, but for very long arithmetic progressions (at least eight terms), we show that the slice rank method cannot improve over the trivial bound using any tensor.

It will be convenient to work in a “basis independent” formalism, namely working in the category of abstract finite-dimensional vector spaces over a fixed field {{\bf F}}. (In the applications to the capset problem one takes {{\bf F}={\bf F}_3} to be the finite field of three elements, but most of the discussion here applies to arbitrary fields.) Given {k} such vector spaces {V_1,\dots,V_k}, we can form the tensor product {\bigotimes_{i=1}^k V_i}, generated by the tensor products {v_1 \otimes \dots \otimes v_k} with {v_i \in V_i} for {i=1,\dots,k}, subject to the constraint that the tensor product operation {(v_1,\dots,v_k) \mapsto v_1 \otimes \dots \otimes v_k} is multilinear. For each {1 \leq j \leq k}, we have the smaller tensor products {\bigotimes_{1 \leq i \leq k: i \neq j} V_i}, as well as the {j^{th}} tensor product

\displaystyle \otimes_j: V_j \times \bigotimes_{1 \leq i \leq k: i \neq j} V_i \rightarrow \bigotimes_{i=1}^k V_i

defined in the obvious fashion. Elements of {\bigotimes_{i=1}^k V_i} of the form {v_j \otimes_j v_{\hat j}} for some {v_j \in V_j} and {v_{\hat j} \in \bigotimes_{1 \leq i \leq k: i \neq j} V_i} will be called rank one functions, and the slice rank (or rank for short) {\hbox{rank}(v)} of an element {v} of {\bigotimes_{i=1}^k V_i} is defined to be the least nonnegative integer {r} such that {v} is a linear combination of {r} rank one functions. If {V_1,\dots,V_k} are finite-dimensional, then the rank is always well defined as a non-negative integer (in fact it cannot exceed {\min( \hbox{dim}(V_1), \dots, \hbox{dim}(V_k))}). It is also clearly subadditive:

\displaystyle \hbox{rank}(v+w) \leq \hbox{rank}(v) + \hbox{rank}(w). \ \ \ \ \ (1)

 

For {k=1}, {\hbox{rank}(v)} is {0} when {v} is zero, and {1} otherwise. For {k=2}, {\hbox{rank}(v)} is the usual rank of the {2}-tensor {v \in V_1 \otimes V_2} (which can for instance be identified with a linear map from {V_1} to the dual space {V_2^*}). The usual notion of tensor rank for higher order tensors uses complete tensor products {v_1 \otimes \dots \otimes v_k}, {v_i \in V_i} as the rank one objects, rather than {v_j \otimes_j v_{\hat j}}, giving a rank that is greater than or equal to the slice rank studied here.

From basic linear algebra we have the following equivalences:

Lemma 1 Let {V_1,\dots,V_k} be finite-dimensional vector spaces over a field {{\bf F}}, let {v} be an element of {V_1 \otimes \dots \otimes V_k}, and let {r} be a non-negative integer. Then the following are equivalent:

  • (i) One has {\hbox{rank}(v) \leq r}.
  • (ii) One has a representation of the form

    \displaystyle v = \sum_{j=1}^k \sum_{s \in S_j} v_{j,s} \otimes_j v_{\hat j,s}

    where {S_1,\dots,S_k} are finite sets of total cardinality {|S_1|+\dots+|S_k|} at most {r}, and for each {1 \leq j \leq k} and {s \in S_j}, {v_{j,s} \in V_j} and {v_{\hat j,s} \in \bigotimes_{1 \leq i \leq k: i \neq j} V_i}.

  • (iii) One has

    \displaystyle v \in \sum_{j=1}^k U_j \otimes_j \bigotimes_{1 \leq i \leq k: i \neq j} V_i

    where for each {j=1,\dots,k}, {U_j} is a subspace of {V_j} of total dimension {\hbox{dim}(U_1)+\dots+\hbox{dim}(U_k)} at most {r}, and we view {U_j \otimes_j \bigotimes_{1 \leq i \leq k: i \neq j} V_i} as a subspace of {\bigotimes_{i=1}^k V_i} in the obvious fashion.

  • (iv) (Dual formulation) There exist subspaces {W_j} of the dual space {V_j^*} for {j=1,\dots,k}, of total dimension at least {\hbox{dim}(V_1)+\dots+\hbox{dim}(V_k) - r}, such that {v} is orthogonal to {\bigotimes_{j=1}^k W_j}, in the sense that one has the vanishing

    \displaystyle \langle \bigotimes_{j=1}^k w_j, v \rangle = 0

    for all {w_j \in W_j}, where {\langle, \rangle: \bigotimes_{j=1}^k V_j^* \times \bigotimes_{j=1}^k V_j \rightarrow {\bf F}} is the obvious pairing.

Proof: The equivalence of (i) and (ii) is clear from definition. To get from (ii) to (iii) one simply takes {U_j} to be the span of the {v_{j,s}}, and conversely to get from (iii) to (ii) one takes the {v_{j,s}} to be a basis of the {U_j} and computes {v_{\hat j,s}} by using a basis for the tensor product {\bigotimes_{j=1}^k U_j \otimes_j \bigotimes_{1 \leq i \leq k: i \neq j} V_i} consisting entirely of functions of the form {v_{j,s} \otimes_j e} for various {e}. To pass from (iii) to (iv) one takes {W_j} to be the annihilator {\{ w_j \in V_j^*: \langle w_j, v_j \rangle = 0 \forall v_j \in U_j \}} of {U_j}, and conversely to pass from (iv) to (iii) one takes {U_j} to be the annihilator of {W_j}. \Box

One corollary of the formulation (iv) is that the set of tensors of slice rank at most {r} is Zariski closed (if the field {{\bf F}} is algebraically closed), and so the slice rank itself is a lower semi-continuous function. This is in contrast to the usual tensor rank, which is not necessarily semicontinuous.

Corollary 2 Let {V_1,\dots, V_k} be finite-dimensional vector spaces over an algebraically closed field {{\bf F}}. Let {r} be a nonnegative integer. The set of elements of {V_1 \otimes \dots \otimes V_k} of slice rank at most {r} is closed in the Zariski topology.

Proof: In view of Lemma 1(i and iv), this set is the union over tuples of integers {d_1,\dots,d_k} with {d_1 + \dots + d_k \geq \hbox{dim}(V_1)+\dots+\hbox{dim}(V_k) - r} of the projection from {\hbox{Gr}(d_1, V_1) \times \dots \times \hbox{Gr}(d_k, V_k) \times ( V_1 \otimes \dots \otimes V_k)} of the set of tuples {(W_1,\dots,W_k, v)} with {v} orthogonal to {W_1 \otimes \dots \otimes W_k}, where {\hbox{Gr}(d,V)} is the Grassmannian parameterizing {d}-dimensional subspaces of {V}.

One can check directly that the set of tuples {(W_1,\dots,W_k, v)} with {v} orthogonal to {W_1 \otimes \dots \otimes W_k} is Zariski closed in {\hbox{Gr}(d_1, V_1) \times \dots \times \hbox{Gr}(d_k, V_k) \times V_1 \otimes \dots \otimes V_k} using a set of equations of the form {\langle \bigotimes_{j=1}^k w_j, v \rangle = 0} locally on {\hbox{Gr}(d_1, V_1) \times \dots \times \hbox{Gr}(d_k, V_k) }. Hence because the Grassmannian is a complete variety, the projection of this set to {V_1 \otimes \dots \otimes V_k} is also Zariski closed. So the finite union over tuples {d_1,\dots,d_k} of these projections is also Zariski closed.

\Box

We also have good behaviour with respect to linear transformations:

Lemma 3 Let {V_1,\dots,V_k, W_1,\dots,W_k} be finite-dimensional vector spaces over a field {{\bf F}}, let {v} be an element of {V_1 \otimes \dots \otimes V_k}, and for each {1 \leq j \leq k}, let {\phi_j: V_j \rightarrow W_j} be a linear transformation, with {\bigotimes_{j=1}^k \phi_j: \bigotimes_{j=1}^k V_j \rightarrow \bigotimes_{j=1}^k W_j} the tensor product of these maps. Then

\displaystyle \hbox{rank}( (\bigotimes_{j=1}^k \phi_j)(v) ) \leq \hbox{rank}(v). \ \ \ \ \ (2)

 

Furthermore, if the {\phi_j} are all injective, then one has equality in (2).

Thus, for instance, the rank of a tensor {v \in \bigotimes_{j=1}^k V_j} is intrinsic in the sense that it is unaffected by any enlargements of the spaces {V_1,\dots,V_k}.

Proof: The bound (2) is clear from the formulation (ii) of rank in Lemma 1. For equality, apply (2) to the injective {\phi_j}, as well as to some arbitrarily chosen left inverses {\phi_j^{-1}: W_j \rightarrow V_j} of the {\phi_j}. \Box

Computing the rank of a tensor is difficult in general; however, the problem becomes a combinatorial one if one has a suitably sparse representation of that tensor in some basis, where we will measure sparsity by the property of being an antichain.

Proposition 4 Let {V_1,\dots,V_k} be finite-dimensional vector spaces over a field {{\bf F}}. For each {1 \leq j \leq k}, let {(v_{j,s})_{s \in S_j}} be a linearly independent set in {V_j} indexed by some finite set {S_j}. Let {\Gamma} be a subset of {S_1 \times \dots \times S_k}.

Let {v \in \bigotimes_{j=1}^k V_j} be a tensor of the form

\displaystyle v = \sum_{(s_1,\dots,s_k) \in \Gamma} c_{s_1,\dots,s_k} v_{1,s_1} \otimes \dots \otimes v_{k,s_k} \ \ \ \ \ (3)

 

where for each {(s_1,\dots,s_k)}, {c_{s_1,\dots,s_k}} is a coefficient in {{\bf F}}. Then one has

\displaystyle \hbox{rank}(v) \leq \min_{\Gamma = \Gamma_1 \cup \dots \cup \Gamma_k} |\pi_1(\Gamma_1)| + \dots + |\pi_k(\Gamma_k)| \ \ \ \ \ (4)

 

where the minimum ranges over all coverings of {\Gamma} by sets {\Gamma_1,\dots,\Gamma_k}, and {\pi_j: S_1 \times \dots \times S_k \rightarrow S_j} for {j=1,\dots,k} are the projection maps.

Now suppose that the coefficients {c_{s_1,\dots,s_k}} are all non-zero, that each of the {S_j} are equipped with a total ordering {\leq_j}, and {\Gamma'} is the set of maximal elements of {\Gamma}, thus there do not exist distinct {(s_1,\dots,s_k) \in \Gamma'}, {(t_1,\dots,t_k) \in \Gamma} such that {s_j \leq t_j} for all {j=1,\dots,k}. Then one has

\displaystyle \hbox{rank}(v) \geq \min_{\Gamma' = \Gamma_1 \cup \dots \cup \Gamma_k} |\pi_1(\Gamma_1)| + \dots + |\pi_k(\Gamma_k)|. \ \ \ \ \ (5)

 

In particular, if {\Gamma} is an antichain (i.e. every element is maximal), then equality holds in (4).

Proof: By Lemma 3 (or by enlarging the bases {v_{j,s_j}}), we may assume without loss of generality that each of the {V_j} is spanned by the {v_{j,s_j}}. By relabeling, we can also assume that each {S_j} is of the form

\displaystyle S_j = \{1,\dots,|S_j|\}

with the usual ordering, and by Lemma 3 we may take each {V_j} to be {{\bf F}^{|S_j|}}, with {v_{j,s_j} = e_{s_j}} the standard basis.

Let {r} denote the rank of {v}. To show (4), it suffices to show the inequality

\displaystyle r \leq |\pi_1(\Gamma_1)| + \dots + |\pi_k(\Gamma_k)| \ \ \ \ \ (6)

 

for any covering of {\Gamma} by {\Gamma_1,\dots,\Gamma_k}. By removing repeated elements we may assume that the {\Gamma_i} are disjoint. For each {1 \leq j \leq k}, the tensor

\displaystyle \sum_{(s_1,\dots,s_k) \in \Gamma_j} c_{s_1,\dots,s_k} e_{s_1} \otimes \dots \otimes e_{s_k}

can (after collecting terms) be written as

\displaystyle \sum_{s_j \in \pi_j(\Gamma_j)} e_{s_j} \otimes_j v_{\hat j,s_j}

for some {v_{\hat j, s_j} \in \bigotimes_{1 \leq i \leq k: i \neq j} {\bf F}^{|S_i|}}. Summing and using (1), we conclude the inequality (6).

Now assume that the {c_{s_1,\dots,s_k}} are all non-zero and that {\Gamma'} is the set of maximal elements of {\Gamma}. To conclude the proposition, it suffices to show that the reverse inequality

\displaystyle r \geq |\pi_1(\Gamma_1)| + \dots + |\pi_k(\Gamma_k)| \ \ \ \ \ (7)

 

 holds for some {\Gamma_1,\dots,\Gamma_k} covering {\Gamma'}. By Lemma 1(iv), there exist subspaces {W_j} of {({\bf F}^{|S_j|})^*} whose dimension {d_j := \hbox{dim}(W_j)} sums to

\displaystyle \sum_{j=1}^k d_j = \sum_{j=1}^k |S_j| - r \ \ \ \ \ (8)

 

such that {v} is orthogonal to {\bigotimes_{j=1}^k W_j}.

Let {1 \leq j \leq k}. Using Gaussian elimination, one can find a basis {w_{j,1},\dots,w_{j,d_j}} of {W_j} whose representation in the standard dual basis {e^*_{1},\dots,e^*_{|S_j|}} of {({\bf F}^{|S_j|})^*} is in row-echelon form. That is to say, there exist natural numbers

\displaystyle 1 \leq s_{j,1} < \dots < s_{j,d_j} \leq |S_j|

such that for all {1 \leq t \leq d_j}, {w_{j,t}} is a linear combination of the dual vectors {e^*_{s_{j,t}},\dots,e^*_{|S_j|}}, with the {e^*_{s_{j,t}}} coefficient equal to one.

We now claim that {\prod_{j=1}^k \{ s_{j,t}: 1 \leq t \leq d_j \}} is disjoint from {\Gamma'}. Suppose for contradiction that this were not the case, thus there exists {1 \leq t_j \leq d_j} for each {1 \leq j \leq k} such that

\displaystyle (s_{1,t_1}, \dots, s_{k,t_k}) \in \Gamma'.

As {\Gamma'} is the set of maximal elements of {\Gamma}, this implies that

\displaystyle (s'_1,\dots,s'_k) \not \in \Gamma

for any tuple {(s'_1,\dots,s'_k) \in \prod_{j=1}^k \{ s_{j,t_j}, \dots, |S_j|\}} other than {(s_{1,t_1}, \dots, s_{k,t_k})}. On the other hand, we know that {w_{j,t_j}} is a linear combination of {e^*_{s_{j,t_j}},\dots,e^*_{|S_j|}}, with the {e^*_{s_{j,t_j}}} coefficient one. We conclude that the tensor product {\bigotimes_{j=1}^k w_{j,t_j}} is equal to

\displaystyle \bigotimes_{j=1}^k e^*_{s_{j,t_j}}

plus a linear combination of other tensor products {\bigotimes_{j=1}^k e^*_{s'_j}} with {(s'_1,\dots,s'_k)} not in {\Gamma}. Taking inner products with (3), we conclude that {\langle v, \bigotimes_{j=1}^k w_{j,t_j}\rangle = c_{s_{1,t_1},\dots,s_{k,t_k}} \neq 0}, contradicting the fact that {v} is orthogonal to {\bigotimes_{j=1}^k W_j}. Thus we have {\prod_{j=1}^k \{ s_{j,t}: 1 \leq t \leq d_j \}} disjoint from {\Gamma'}.

For each {1 \leq j \leq k}, let {\Gamma_j} denote the set of tuples {(s_1,\dots,s_k)} in {\Gamma'} with {s_j} not in the set {\{ s_{j,t}: 1 \leq t \leq d_j \}}. From the previous discussion we see that the {\Gamma_j} cover {\Gamma'}, and we clearly have {|\pi_j(\Gamma_j)| \leq |S_j| - d_j}, and hence from (8) we have (7) as claimed. \Box
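Although computing slice rank is difficult in general, the covering minimum appearing in (4), (5) can be brute-forced for very small {\Gamma}; the following sketch (with ad hoc naming) does so by trying every assignment of the points of {\Gamma} to classes, and we will reuse it for the examples below:

```python
# Brute-force evaluation of the covering minimum in (4): assign each point of
# Gamma to one of the classes Gamma_1,...,Gamma_k and total the sizes of the
# coordinate projections.  Exponential in |Gamma|, so only for tiny examples.
from itertools import product

def covering_min(gamma, k):
    gamma = list(gamma)
    best = float("inf")
    for assignment in product(range(k), repeat=len(gamma)):
        projections = [set() for _ in range(k)]
        for point, j in zip(gamma, assignment):
            projections[j].add(point[j])
        best = min(best, sum(len(s) for s in projections))
    return best

# The antichain Gamma' from the capset computation later in this post:
gamma = [(2,0,0), (0,2,0), (0,0,2), (1,1,0), (0,1,1), (1,0,1)]
print(covering_min(gamma, 3))  # prints 3; the n=1 tensor has slice rank 3
```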

As an instance of this proposition, we recover the computation of diagonal rank from the previous blog post:

Example 5 Let {V_1,\dots,V_k} be finite-dimensional vector spaces over a field {{\bf F}} for some {k \geq 2}. Let {d} be a natural number, and for {1 \leq j \leq k}, let {e_{j,1},\dots,e_{j,d}} be a linearly independent set in {V_j}. Let {c_1,\dots,c_d} be non-zero coefficients in {{\bf F}}. Then

\displaystyle \sum_{t=1}^d c_t e_{1,t} \otimes \dots \otimes e_{k,t}

has rank {d}. Indeed, one applies the proposition with {S_1,\dots,S_k} all equal to {\{1,\dots,d\}}, with {\Gamma} the diagonal in {S_1 \times \dots \times S_k}; this is an antichain if we give one of the {S_i} the standard ordering, and another of the {S_i} the opposite ordering (and ordering the remaining {S_i} arbitrarily). In this case, the {\pi_j} are all bijective, and so it is clear that the minimum in (4) is simply {d}.

The combinatorial minimisation problem in the above proposition can be solved asymptotically when working with tensor powers, using the notion of the Shannon entropy {h(X)} of a discrete random variable {X}.

Proposition 6 Let {V_1,\dots,V_k} be finite-dimensional vector spaces over a field {{\bf F}}. For each {1 \leq j \leq k}, let {(v_{j,s})_{s \in S_j}} be a linearly independent set in {V_j} indexed by some finite set {S_j}. Let {\Gamma} be a non-empty subset of {S_1 \times \dots \times S_k}.

Let {v \in \bigotimes_{j=1}^k V_j} be a tensor of the form (3) for some coefficients {c_{s_1,\dots,s_k}}. For each natural number {n}, let {v^{\otimes n}} be the tensor power of {n} copies of {v}, viewed as an element of {\bigotimes_{j=1}^k V_j^{\otimes n}}. Then

\displaystyle \hbox{rank}(v^{\otimes n}) \leq \exp( (H + o(1)) n ) \ \ \ \ \ (9)

 

as {n \rightarrow \infty}, where {H} is the quantity

\displaystyle H = \hbox{sup}_{(X_1,\dots,X_k)} \hbox{min}( h(X_1), \dots, h(X_k) ) \ \ \ \ \ (10)

 

and {(X_1,\dots,X_k)} ranges over all random variables taking values in {\Gamma}.

Now suppose that the coefficients {c_{s_1,\dots,s_k}} are all non-zero and that each of the {S_j} are equipped with a total ordering {\leq_j}. Let {\Gamma'} be the set of maximal elements of {\Gamma} in the product ordering, and let {H' = \hbox{sup}_{(X_1,\dots,X_k)} \hbox{min}( h(X_1), \dots, h(X_k) ) } where {(X_1,\dots,X_k)} range over random variables taking values in {\Gamma'}. Then

\displaystyle \hbox{rank}(v^{\otimes n}) \geq \exp( (H' + o(1)) n ) \ \ \ \ \ (11)

 

as {n \rightarrow \infty}. In particular, if the maximizer in (10) is supported on the maximal elements of {\Gamma} (which always holds if {\Gamma} is an antichain in the product ordering), then equality holds in (9).

Proof:

It will suffice to show that

\displaystyle \min_{\Gamma^n = \Gamma_{n,1} \cup \dots \cup \Gamma_{n,k}} |\pi_{n,1}(\Gamma_{n,1})| + \dots + |\pi_{n,k}(\Gamma_{n,k})| = \exp( (H + o(1)) n ) \ \ \ \ \ (12)

 

as {n \rightarrow \infty}, where {\pi_{n,j}: \prod_{i=1}^k S_i^n \rightarrow S_j^n} is the projection map; the same claim then applies to {\Gamma'} and {H'}. Applying Proposition 4, using the lexicographical ordering on {S_j^n} and noting that, if {\Gamma'} are the maximal elements of {\Gamma}, then {\Gamma'^n} are the maximal elements of {\Gamma^n}, we obtain both (9) and (11).

We first prove the lower bound. By compactness (and the continuity properties of entropy), we can find a random variable {(X_1,\dots,X_k)} taking values in {\Gamma} such that

\displaystyle H = \hbox{min}( h(X_1), \dots, h(X_k) ). \ \ \ \ \ (13)

 

Let {\varepsilon = o(1)} be a small positive quantity that goes to zero sufficiently slowly with {n}. Let {\Sigma = \Sigma_{X_1,\dots,X_k} \subset \Gamma^n} denote the set of all tuples {(a_1, \dots, a_n)} in {\Gamma^n} that are within {\varepsilon} of being distributed according to the law of {(X_1,\dots,X_k)}, in the sense that for all {a \in \Gamma}, one has

\displaystyle |\frac{|\{ 1 \leq l \leq n: a_l = a \}|}{n} - {\bf P}( (X_1,\dots,X_k) = a )| \leq \varepsilon.

By the asymptotic equipartition property, the cardinality of {\Sigma} can be computed to be

\displaystyle |\Sigma| = \exp( (h( X_1,\dots,X_k)+o(1)) n ) \ \ \ \ \ (14)

 

if {\varepsilon} goes to zero slowly enough. Similarly one has

\displaystyle |\pi_{n,j}(\Sigma)| = \exp( (h( X_j)+o(1)) n ),

and for each {s_{n,j} \in \pi_{n,j}(\Sigma)}, one has

\displaystyle |\{ \sigma \in \Sigma: \pi_{n,j}(\sigma) = s_{n,j} \}| \leq \exp( (h( X_1,\dots,X_k)-h(X_j)+o(1)) n ). \ \ \ \ \ (15)

 

Now let {\Gamma^n = \Gamma_{n,1} \cup \dots \cup \Gamma_{n,k}} be an arbitrary covering of {\Gamma^n}. By the pigeonhole principle, there exists {1 \leq j \leq k} such that

\displaystyle |\Gamma_{n,j} \cap \Sigma| \geq \frac{1}{k} |\Sigma|

and hence by (14), (15)

\displaystyle |\pi_{n,j}( \Gamma_{n,j} \cap \Sigma)| \geq \frac{1}{k} \exp( (h( X_j)+o(1)) n )

which by (13) implies that

\displaystyle |\pi_{n,1}(\Gamma_{n,1})| + \dots + |\pi_{n,k}(\Gamma_{n,k})| \geq \exp( (H + o(1)) n )

(noting that the {\frac{1}{k}} factor can be absorbed into the {o(1)} error). This gives the lower bound in (12).

Now we prove the upper bound. We can cover {\Gamma^n} by {O(\exp(o(n)))} sets of the form {\Sigma_{X_1,\dots,X_k}} for various choices of random variables {(X_1,\dots,X_k)} taking values in {\Gamma}. For each such random variable {(X_1,\dots,X_k)}, we can find {1 \leq j \leq k} such that {h(X_j) \leq H}; we then place all of {\Sigma_{X_1,\dots,X_k}} in {\Gamma_{n,j}}. It is then clear that the {\Gamma_{n,j}} cover {\Gamma^n} and that

\displaystyle |\pi_{n,j}(\Gamma_{n,j})| \leq \exp( (H+o(1)) n )

for all {j=1,\dots,k}, giving the required upper bound. \Box

It is of interest to compute the quantity {H} in (10). We have the following criterion for when a maximiser occurs:

Proposition 7 Let {S_1,\dots,S_k} be finite sets, and {\Gamma \subset S_1 \times \dots \times S_k} be non-empty. Let {H} be the quantity in (10). Let {(X_1,\dots,X_k)} be a random variable taking values in {\Gamma}, and let {\Gamma^* \subset \Gamma} denote the essential range of {(X_1,\dots,X_k)}, that is to say the set of tuples {(t_1,\dots,t_k)\in \Gamma} such that {{\bf P}( X_1=t_1, \dots, X_k = t_k)} is non-zero. Then the following are equivalent:

  • (i) {(X_1,\dots,X_k)} attains the maximum in (10).
  • (ii) There exist weights {w_1,\dots,w_k \geq 0} and a finite quantity {D \geq 0}, such that {w_j=0} whenever {h(X_j) > \min(h(X_1),\dots,h(X_k))}, and such that

    \displaystyle \sum_{j=1}^k w_j \log \frac{1}{{\bf P}(X_j = t_j)} \leq D \ \ \ \ \ (16)

    for all {(t_1,\dots,t_k) \in \Gamma}, with equality if {(t_1,\dots,t_k) \in \Gamma^*}. (In particular, {w_j} must vanish if there exists a {t_j \in \pi_j(\Gamma)} with {{\bf P}(X_j=t_j)=0}.)

Furthermore, when (i) and (ii) holds, one has

\displaystyle D = H \sum_{j=1}^k w_j. \ \ \ \ \ (17)

 

Proof: We first show that (i) implies (ii). The function {p \mapsto p \log \frac{1}{p}} is concave on {[0,1]}. As a consequence, if we define {C} to be the set of tuples {(h_1,\dots,h_k) \in [0,+\infty)^k} such that there exists a random variable {(Y_1,\dots,Y_k)} taking values in {\Gamma} with {h(Y_j) \geq h_j}, then {C} is convex. On the other hand, by (10), {C} is disjoint from the orthant {(H,+\infty)^k}. Thus, by the hyperplane separation theorem, we conclude that there exists a half-space

\displaystyle \{ (h_1,\dots,h_k) \in {\bf R}^k: w_1 h_1 + \dots + w_k h_k \geq c \},

where {w_1,\dots,w_k} are reals that are not all zero, and {c} is another real, which contains {(h(X_1),\dots,h(X_k))} on its boundary and {(H,+\infty)^k} in its interior, such that {C} avoids the interior of the half-space. Since {(h(X_1),\dots,h(X_k))} is also on the boundary of {(H,+\infty)^k}, we see that the {w_j} are non-negative, and that {w_j = 0} whenever {h(X_j) \neq H}.

By construction, the quantity

\displaystyle w_1 h(Y_1) + \dots + w_k h(Y_k)

is maximised when {(Y_1,\dots,Y_k) = (X_1,\dots,X_k)}. At this point we could use the method of Lagrange multipliers to obtain the required constraints, but because we have some boundary conditions on the {(Y_1,\dots,Y_k)} (namely, that the probability that they attain a given element of {\Gamma} has to be non-negative) we will work things out by hand. Let {t = (t_1,\dots,t_k)} be an element of {\Gamma}, and {s = (s_1,\dots,s_k)} an element of {\Gamma^*}. For {\varepsilon>0} small enough, we can form a random variable {(Y_1,\dots,Y_k)} taking values in {\Gamma}, whose probability distribution is the same as that for {(X_1,\dots,X_k)} except that the probability of attaining {(t_1,\dots,t_k)} is increased by {\varepsilon}, and the probability of attaining {(s_1,\dots,s_k)} is decreased by {\varepsilon}. If there is any {j} for which {{\bf P}(X_j = t_j)=0} and {w_j \neq 0}, then one can check that

\displaystyle w_1 h(Y_1) + \dots + w_k h(Y_k) - (w_1 h(X_1) + \dots + w_k h(X_k)) \gg \varepsilon \log \frac{1}{\varepsilon}

for sufficiently small {\varepsilon}, contradicting the maximality of {(X_1,\dots,X_k)}; thus we have {{\bf P}(X_j = t_j) > 0} whenever {w_j \neq 0}. Taylor expansion then gives

\displaystyle w_1 h(Y_1) + \dots + w_k h(Y_k) - (w_1 h(X_1) + \dots + w_k h(X_k)) = (A_t - A_s) \varepsilon + O(\varepsilon^2)

for small {\varepsilon}, where

\displaystyle A_t := \sum_{j=1}^k w_j \log \frac{1}{{\bf P}(X_j = t_j)}

and similarly for {A_s}. We conclude that {A_t \leq A_s} for all {s \in \Gamma^*} and {t \in \Gamma}, thus there exists a quantity {D} such that {A_s = D} for all {s \in \Gamma^*}, and {A_t \leq D} for all {t \in \Gamma}. By construction {D} must be nonnegative. Sampling {(t_1,\dots,t_k)} using the distribution of {(X_1,\dots,X_k)}, one has

\displaystyle \sum_{j=1}^k w_j \log \frac{1}{{\bf P}(X_j = t_j)} = D

almost surely; taking expectations we conclude that

\displaystyle \sum_{j=1}^k w_j \sum_{t_j \in S_j} {\bf P}( X_j = t_j) \log \frac{1}{{\bf P}(X_j = t_j)} = D.

The inner sum is {h(X_j)}, which equals {H} when {w_j} is non-zero, giving (17).

Now we show conversely that (ii) implies (i). As noted previously, the function {p \mapsto p \log \frac{1}{p}} is concave on {[0,1]}, with derivative {\log \frac{1}{p} - 1}. This gives the inequality

\displaystyle q \log \frac{1}{q} \leq p \log \frac{1}{p} + (q-p) ( \log \frac{1}{p} - 1 ) \ \ \ \ \ (18)

 

for any {0 \leq p,q \leq 1} (note the right-hand side may be infinite when {p=0} and {q>0}). Let {(Y_1,\dots,Y_k)} be any random variable taking values in {\Gamma}, then on applying the above inequality with {p = {\bf P}(X_j = t_j)} and {q = {\bf P}( Y_j = t_j )}, multiplying by {w_j}, and summing over {j=1,\dots,k} and {t_j \in S_j} gives

\displaystyle \sum_{j=1}^k w_j h(Y_j) \leq \sum_{j=1}^k w_j h(X_j)

\displaystyle + \sum_{j=1}^k \sum_{t_j \in S_j} w_j ({\bf P}(Y_j = t_j) - {\bf P}(X_j = t_j)) ( \log \frac{1}{{\bf P}(X_j=t_j)} - 1 ).

By construction, one has

\displaystyle \sum_{j=1}^k w_j h(X_j) = \min(h(X_1),\dots,h(X_k)) \sum_{j=1}^k w_j

and

\displaystyle \sum_{j=1}^k w_j h(Y_j) \geq \min(h(Y_1),\dots,h(Y_k)) \sum_{j=1}^k w_j

so to prove that {\min(h(Y_1),\dots,h(Y_k)) \leq \min(h(X_1),\dots,h(X_k))} (which would give (i)), it suffices to show that

\displaystyle \sum_{j=1}^k \sum_{t_j \in S_j} w_j ({\bf P}(Y_j = t_j) - {\bf P}(X_j = t_j)) ( \log \frac{1}{{\bf P}(X_j=t_j)} - 1 ) \leq 0,

or equivalently that the quantity

\displaystyle \sum_{j=1}^k \sum_{t_j \in S_j} w_j {\bf P}(Y_j = t_j) ( \log \frac{1}{{\bf P}(X_j=t_j)} - 1 )

is maximised when {(Y_1,\dots,Y_k) = (X_1,\dots,X_k)}. Since

\displaystyle \sum_{j=1}^k \sum_{t_j \in S_j} w_j {\bf P}(Y_j = t_j) = \sum_{j=1}^k w_j

it suffices to show this claim for the quantity

\displaystyle \sum_{j=1}^k \sum_{t_j \in S_j} w_j {\bf P}(Y_j = t_j) \log \frac{1}{{\bf P}(X_j=t_j)}.

One can view this quantity as

\displaystyle {\bf E}_{(Y_1,\dots,Y_k)} \sum_{j=1}^k w_j \log \frac{1}{{\bf P}_{X_j}(X_j=Y_j)}.

By (ii), this quantity is bounded by {D}, with equality if {(Y_1,\dots,Y_k)} is equal to {(X_1,\dots,X_k)} (and is in particular ranging in {\Gamma^*}), giving the claim. \Box

The second half of the proof of Proposition 7 only uses the marginal distributions {{\bf P}(X_j=t_j)} and the equation (16), not the actual distribution of {(X_1,\dots,X_k)}, so it can also be used to prove an upper bound on {H} when the exact maximizing distribution is not known, given suitable probability distributions in each variable. The logarithm of the probability distribution here plays the role that the weight functions do in BCCGNSU.

Remark 8 Suppose one is in the situation of (i) and (ii) above; assume the nondegeneracy condition that {H} is positive (or equivalently that {D} is positive). We can assign a “degree” {d_j(t_j)} to each element {t_j \in S_j} by the formula

\displaystyle d_j(t_j) := w_j \log \frac{1}{{\bf P}(X_j = t_j)}, \ \ \ \ \ (19)

 

then every tuple {(t_1,\dots,t_k)} in {\Gamma} has total degree at most {D}, and those tuples in {\Gamma^*} have degree exactly {D}. In particular, every tuple in {\Gamma^n} has degree at most {nD}, and hence by (17), each such tuple has a {j}-component of degree less than or equal to {nHw_j} for some {j} with {w_j>0}. On the other hand, we can compute from (19) and the fact that {h(X_j) = H} for {w_j > 0} that {Hw_j = {\bf E} d_j(X_j)}. Thus, by asymptotic equipartition, and assuming {w_j \neq 0}, the number of “monomials” in {S_j^n} of total degree at most {nHw_j} is at most {\exp( (h(X_j)+o(1)) n )}; one can in fact use (19) and (18) to show that this is an equality. This gives a direct way to cover {\Gamma^n} by sets {\Gamma_{n,1},\dots,\Gamma_{n,k}} with {|\pi_j(\Gamma_{n,j})| \leq \exp( (H+o(1)) n)}, which is in the spirit of the Croot-Lev-Pach-Ellenberg-Gijswijt arguments from the previous post.

We can now show that the rank computation for the capset problem is sharp:

Proposition 9 Let {V_1^{\otimes n} = V_2^{\otimes n} = V_3^{\otimes n}} denote the space of functions from {{\bf F}_3^n} to {{\bf F}_3}. Then the function {(x,y,z) \mapsto \delta_{0^n}(x+y+z)} from {{\bf F}_3^n \times {\bf F}_3^n \times {\bf F}_3^n} to {{\bf F}_3}, viewed as an element of {V_1^{\otimes n} \otimes V_2^{\otimes n} \otimes V_3^{\otimes n}}, has rank {\exp( (H^*+o(1)) n )} as {n \rightarrow \infty}, where {H^* \approx 1.013445} is given by the formula

\displaystyle H^* = \alpha \log \frac{1}{\alpha} + \beta \log \frac{1}{\beta} + \gamma \log \frac{1}{\gamma} \ \ \ \ \ (20)

 

with

\displaystyle \alpha = \frac{32}{3(15 + \sqrt{33})} \approx 0.51419

\displaystyle \beta = \frac{4(\sqrt{33}-1)}{3(15+\sqrt{33})} \approx 0.30495

\displaystyle \gamma = \frac{(\sqrt{33}-1)^2}{6(15+\sqrt{33})} \approx 0.18086.

Proof: In {{\bf F}_3 \times {\bf F}_3 \times {\bf F}_3}, we have

\displaystyle \delta_0(x+y+z) = 1 - (x+y+z)^2

\displaystyle = (1-x^2) - y^2 - z^2 + xy + yz + zx.

Thus, if we let {V_1=V_2=V_3} be the space of functions from {{\bf F}_3} to {{\bf F}_3} (with domain variable denoted {x,y,z} respectively), and define the basis functions

\displaystyle v_{1,0} := 1; v_{1,1} := x; v_{1,2} := x^2

\displaystyle v_{2,0} := 1; v_{2,1} := y; v_{2,2} := y^2

\displaystyle v_{3,0} := 1; v_{3,1} := z; v_{3,2} := z^2

of {V_1,V_2,V_3} indexed by {S_1=S_2=S_3 := \{ 0,1,2\}} (with the usual ordering), respectively, and set {\Gamma \subset S_1 \times S_2 \times S_3} to be the set

\displaystyle \{ (2,0,0), (0,2,0), (0,0,2), (1,1,0), (0,1,1), (1,0,1),(0,0,0) \}

then {\delta_0(x+y+z)} is a linear combination of the {v_{1,t_1} \otimes v_{2,t_2} \otimes v_{3,t_3}} with {(t_1,t_2,t_3) \in \Gamma}, and all coefficients non-zero. Then we have {\Gamma'= \{ (2,0,0), (0,2,0), (0,0,2), (1,1,0), (0,1,1), (1,0,1) \}}. We will show that the quantity {H} of (10) agrees with the quantity {H^*} of (20), and that the optimizing distribution is supported on {\Gamma'}, so that by Proposition 6 the rank of {\delta_{0^n}(x+y+z)} is {\exp( (H+o(1)) n)}.

To compute the quantity at (10), we use the criterion in Proposition 7. We take {(X_1,X_2,X_3)} to be the random variable taking values in {\Gamma} that attains each of the values {(2,0,0), (0,2,0), (0,0,2)} with a probability of {\gamma \approx 0.18086}, and each of {(1,1,0), (0,1,1), (1,0,1)} with a probability of {\alpha - 2\gamma = \beta/2 \approx 0.15247}; then each of the {X_j} attains the values {0,1,2} with probabilities {\alpha,\beta,\gamma} respectively, so in particular {h(X_1)=h(X_2)=h(X_3)} is equal to the quantity {H^*} in (20). If we now set {w_1 = w_2 = w_3 := 1} and

\displaystyle D := 2\log \frac{1}{\alpha} + \log \frac{1}{\gamma} = \log \frac{1}{\alpha} + 2 \log \frac{1}{\beta} = 3H^* \approx 3.04036

we can verify the condition (16) with equality for all {(t_1,t_2,t_3) \in \Gamma'} (and with strict inequality at the remaining point {(0,0,0)} of {\Gamma}), which from (17) gives {H=H'=H^*} as desired. \Box
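The algebraic identities used in this computation are easy to confirm numerically; in the following sketch, the vanishing of {\alpha\gamma - \beta^2} is what makes the two expressions for {D} agree:

```python
# Numerical verification of the capset entropy computation: alpha, beta, gamma
# sum to 1, alpha*gamma = beta^2 (so the two formulas for D in (16) agree),
# H* matches (20), and D = 3 H* as predicted by (17).
import math

s = math.sqrt(33)
alpha = 32 / (3 * (15 + s))
beta = 4 * (s - 1) / (3 * (15 + s))
gamma = (s - 1)**2 / (6 * (15 + s))

print(alpha + beta + gamma)            # 1.0
print(alpha * gamma - beta**2)         # ~0
H_star = sum(t * math.log(1 / t) for t in (alpha, beta, gamma))
print(H_star)                          # ~1.013445
print(2 * math.log(1 / alpha) + math.log(1 / gamma) - 3 * H_star)  # ~0
```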

This statement already follows from the result of Kleinberg-Sawin-Speyer, which gives a “tri-colored sum-free set” in {\mathbb F_3^n} of size {\exp((H'+o(1))n)}, as the slice rank of this tensor is an upper bound for the size of a tri-colored sum-free set. If one were to go over the proofs more carefully to evaluate the subexponential factors, this argument would give a stronger lower bound than KSS, as it does not deal with the substantial loss that comes from Behrend’s construction. However, because it actually constructs a set, the KSS result rules out more possible approaches to give an exponential improvement of the upper bound for capsets. The lower bound on slice rank shows that the bound cannot be improved using only the slice rank of this particular tensor, whereas KSS shows that the bound cannot be improved using any method that does not take advantage of the “single-colored” nature of the problem.

We can also show that the slice rank upper bound in a result of Naslund-Sawin is similarly sharp:

Proposition 10 Let {V_1^{\otimes n} = V_2^{\otimes n} = V_3^{\otimes n}} denote the space of functions from {\{0,1\}^n} to {\mathbb C}. Then the function {(x,y,z) \mapsto \prod_{i=1}^n (x_i+y_i+z_i)-1} from {\{0,1\}^n \times \{0,1\}^n \times \{0,1\}^n \rightarrow \mathbb C}, viewed as an element of {V_1^{\otimes n} \otimes V_2^{\otimes n} \otimes V_3^{\otimes n}}, has slice rank {(3/2^{2/3})^n e^{o(n)}}.

Proof: Let {v_{1,0}=1} and {v_{1,1}=x} be a basis for the space {V_1} of functions on {\{0,1\}}, itself indexed by {S_1=\{0,1\}}. Choose similar bases for {V_2} and {V_3}, with {v_{2,0}=1, v_{2,1}=y} and {v_{3,0}=1,v_{3,1}=z-1}.

Set {\Gamma = \{(1,0,0),(0,1,0),(0,0,1)\}}. Then {x+y+z-1} is a linear combination of the {v_{1,t_1} \otimes v_{2,t_2} \otimes v_{3,t_3}} with {(t_1,t_2,t_3) \in \Gamma}, and all coefficients non-zero. Order {S_1,S_2,S_3} the usual way so that {\Gamma} is an antichain. We will show that the quantity {H} of (10) is {\log(3/2^{2/3})}, so that applying the last statement of Proposition 6, we conclude that the slice rank of our tensor is {\exp( (\log(3/2^{2/3})+o(1)) n)= (3/2^{2/3})^n e^{o(n)}}.

Let {(X_1,X_2,X_3)} be the random variable taking values in {\Gamma} that attains each of the values {(1,0,0),(0,1,0),(0,0,1)} with a probability of {1/3}. Then each of the {X_i} attains the value {1} with probability {1/3} and {0} with probability {2/3}, so

\displaystyle h(X_1)=h(X_2)=h(X_3) = (1/3) \log (3) + (2/3) \log(3/2) = \log 3 - (2/3) \log 2= \log (3/2^{2/3})

Setting {w_1=w_2=w_3=1} and {D=3 \log(3/2^{2/3})=3 \log 3 - 2 \log 2}, we can verify the condition (16) with equality for all {(t_1,t_2,t_3) \in \Gamma'}, which from (17) gives {H=\log (3/2^{2/3})} as desired. \Box
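The entropy computation here can be checked in one line (and one can also feed this {\Gamma} into the covering_min sketch from earlier in the post to confirm that the {n=1} slice rank is {2}):

```python
# Entropy of the uniform distribution on {(1,0,0),(0,1,0),(0,0,1)}: each
# marginal X_i equals 1 with probability 1/3 and 0 with probability 2/3.
import math

h = (1/3) * math.log(3) + (2/3) * math.log(3/2)
print(h, math.log(3 / 2**(2/3)))  # the two values agree
print(math.exp(h))                # growth rate 3/2^{2/3} ~ 1.88988
```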

We used a slightly different method in each of the last two results. In the first one, we use the most natural bases for all three vector spaces, and distinguish {\Gamma} from its set of maximal elements {\Gamma'}. In the second one we modify one basis element slightly, with {v_{3,1}=z-1} instead of the more obvious choice {z}, which allows us to work with {\Gamma = \{(1,0,0),(0,1,0),(0,0,1)\}} instead of {\Gamma=\{(1,0,0),(0,1,0),(0,0,1),(0,0,0)\}}. Because {\Gamma} is an antichain, we do not need to distinguish {\Gamma} and {\Gamma'}. Both methods in fact work with either problem, and they are both about equally difficult, but we include both as either might turn out to be substantially more convenient in future work.

Proposition 11 Let {k \geq 8} be a natural number and let {G} be a finite abelian group. Let {{\bf F}} be any field. Let {V_1 = \dots = V_k} denote the space of functions from {G} to {{\bf F}}.

Let {F} be any {{\bf F}}-valued function on {G^k} that is nonzero only when its {k} arguments form a {k}-term arithmetic progression in {G}, and is nonzero on every {k}-term constant progression.

Then the slice rank of {F} is {|G|}.

Proof: We apply Proposition 4, using the standard bases of {V_1,\dots,V_k}. Let {\Gamma} be the support of {F}. Suppose that we have {k} orderings on {G} such that the constant progressions are maximal elements of {\Gamma} and thus all constant progressions lie in {\Gamma'}. Then for any partition {\Gamma_1,\dots, \Gamma_k} of {\Gamma'}, {\Gamma_j} can contain at most {|\pi_j(\Gamma_j)|} constant progressions, and as all {|G|} constant progressions must lie in one of the {\Gamma_j}, we must have {\sum_{j=1}^k |\pi_j(\Gamma_j)| \geq |G|}. By Proposition 4, this implies that the slice rank of {F} is at least {|G|}. Since {F} is a {|G| \times \dots \times |G|} tensor, the slice rank is at most {|G|}, hence exactly {|G|}.

So it is sufficient to find {k} orderings on {G} such that the constant progressions are maximal elements of {\Gamma}. We make several simplifying reductions: We may as well assume that {\Gamma} consists of all the {k}-term arithmetic progressions, because if the constant progressions are maximal among the set of all progressions then they are maximal among its subset {\Gamma}. So we are looking for an ordering in which the constant progressions are maximal among all {k}-term arithmetic progressions. We may as well assume that {G} is cyclic, because if for each cyclic group we have an ordering where constant progressions are maximal, on an arbitrary finite abelian group the lexicographic product of these orderings is an ordering for which the constant progressions are maximal. We may assume {k=8}, as if we have an {8}-tuple of orderings where constant progressions are maximal, we may add arbitrary orderings and the constant progressions will remain maximal.

So it is sufficient to find {8} orderings on the cyclic group {\mathbb Z/n} such that the constant progressions are maximal elements of the set of {8}-term progressions in {\mathbb Z/n} in the {8}-fold product ordering. To do that, let the first, second, third, and fifth orderings be the reverse of the usual order on {\{0,\dots,n-1\}} and let the fourth, sixth, seventh, and eighth orderings be the usual order on {\{0,\dots,n-1\}}.

Then let {(c,c,c,c,c,c,c,c)} be a constant progression and for contradiction assume that {(a,a+b,a+2b,a+3b,a+4b,a+5b,a+6b,a+7b)} is a progression greater than {(c,c,c,c,c,c,c,c)} in this ordering. We may assume that {c \in [0, (n-1)/2]}, because otherwise we may reverse the order of the progression, which has the effect of reversing all eight orderings, and then apply the transformation {x \rightarrow n-1-x}, which again reverses the eight orderings, bringing us back to the original problem but with {c \in [0,(n-1)/2]}.

Take a representative of the residue class {b} in the interval {[-n/2,n/2]}. We will abuse notation and call this {b}. Observe that {a}, {a+b}, {a+2b}, and {a+4b} are all contained in the interval {[0,c]} modulo {n}. Take a representative of the residue class {a} in the interval {[0,c]}. Then {a+b} is in the interval {[mn,mn+c]} for some {m}. The distance between any distinct pair of intervals of this type is greater than {n/2}, but the distance between {a} and {a+b} is at most {n/2}, so {a+b} is in the interval {[0,c]}. By the same reasoning, {a+2b} is in the interval {[0,c]}. Therefore {|b| \leq c/2< n/4}. But then the distance between {a+2b} and {a+4b} is at most {n/2}, so by the same reasoning {a+4b} is in the interval {[0,c]}. Because {a+3b} is between {a+2b} and {a+4b}, it also lies in the interval {[0,c]}. Because {a+3b} is in the interval {[0,c]}, and by assumption it is congruent mod {n} to a number in the set {\{0,\dots,n-1\}} greater than or equal to {c}, it must be exactly {c}. Then, remembering that {a+2b} and {a+4b} lie in {[0,c]}, we have {c-b \leq c} and {c+b \leq c}, so {b=0}, hence {a=c}, thus {(a,\dots,a+7b)=(c,\dots,c)}, which contradicts the assumption that {(a,\dots,a+7b)>(c,\dots,c)}. \Box

In fact, given a {k}-term progression mod {n} and a constant, we can form a {k}-term binary sequence with a {1} for each step of the progression that is greater than the constant and a {0} for each step that is less. Because a rotation map, viewed as a dynamical system, has zero topological entropy, the number of {k}-term binary sequences that appear grows subexponentially in {k}. Hence there must be, for large enough {k}, at least one sequence that does not appear. In this proof we exploit a sequence that does not appear for {k=8}.
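One can glimpse this numerically: enumerating, for a modest modulus, the {8}-bit patterns recording which steps of a progression exceed a given constant, one finds fewer than {2^8 = 256} of them, and in particular the mixed pattern ruled out by the argument above is absent. A small sketch:

```python
# Count the 8-bit patterns (1 if the j-th step exceeds c, else 0) realized by
# progressions a, a+b, ..., a+7b modulo n.  Fewer than 2^8 occur, and the
# mixed pattern excluded by the argument above never appears.
n = 61
patterns = set()
for a in range(n):
    for b in range(n):
        for c in range(n):
            patterns.add(tuple(1 if (a + j * b) % n > c else 0 for j in range(8)))
print(len(patterns), "of", 2**8)
print((0, 0, 0, 1, 0, 1, 1, 1) in patterns)  # False
```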

The twin prime conjecture, still unsolved, asserts that there are infinitely many primes {p} such that {p+2} is also prime. A more precise form of this conjecture is given by (a special case of) the Hardy-Littlewood prime tuples conjecture, which asserts that

\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+2) = (2\Pi_2+o(1)) x \ \ \ \ \ (1)

 

as {x \rightarrow \infty}, where {\Lambda} is the von Mangoldt function and {\Pi_2 = 0.6601\dots} is the twin prime constant

\displaystyle \prod_{p>2} (1 - \frac{1}{(p-1)^2}).

Because {\Lambda} is almost entirely supported on the primes, it is not difficult to see that (1) implies the twin prime conjecture.
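The Euler product for {\Pi_2} converges quickly, so the constant is easy to evaluate numerically; a short sketch:

```python
# Numerical evaluation of the twin prime constant Pi_2 as the Euler product
# over odd primes; primes up to 10^6 already give roughly 7 decimal places.
import math

N = 10**6
sieve = bytearray([1]) * (N + 1)
sieve[0:2] = b"\x00\x00"
for p in range(2, math.isqrt(N) + 1):
    if sieve[p]:
        sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))

Pi2 = 1.0
for p in range(3, N + 1):
    if sieve[p]:
        Pi2 *= 1 - 1 / (p - 1)**2
print(Pi2)  # ~0.6601618
```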

One can give a heuristic justification of the asymptotic (1) (and hence the twin prime conjecture) via sieve theoretic methods. Recall that the von Mangoldt function can be decomposed as a Dirichlet convolution

\displaystyle \Lambda(n) = \sum_{d|n} \mu(d) \log \frac{n}{d}

where {\mu} is the Möbius function. Because of this, we can rewrite the left-hand side of (1) as

\displaystyle \sum_{d \leq x} \mu(d) \sum_{n \leq x: d|n} \log\frac{n}{d} \Lambda(n+2). \ \ \ \ \ (2)

 

To compute this double sum, it is thus natural to consider sums such as

\displaystyle \sum_{n \leq x: d|n} \log \frac{n}{d} \Lambda(n+2)

or (to simplify things by removing the logarithm)

\displaystyle \sum_{n \leq x: d|n} \Lambda(n+2).

The prime number theorem in arithmetic progressions suggests that one has an asymptotic of the form

\displaystyle \sum_{n \leq x: d|n} \Lambda(n+2) \approx \frac{g(d)}{d} x \ \ \ \ \ (3)

 

where {g} is the multiplicative function with {g(d)=0} for {d} even and

\displaystyle g(d) := \frac{d}{\phi(d)} = \prod_{p|d} (1-\frac{1}{p})^{-1}

for {d} odd. Summing by parts, one then expects

\displaystyle \sum_{n \leq x: d|n} \Lambda(n+2)\log \frac{n}{d}  \approx \frac{g(d)}{d} x \log \frac{x}{d}

and so we heuristically have

\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+2) \approx x \sum_{d \leq x} \frac{\mu(d) g(d)}{d} \log \frac{x}{d}.

The Dirichlet series

\displaystyle \sum_n \frac{\mu(n) g(n)}{n^s}

has an Euler product factorisation

\displaystyle \sum_n \frac{\mu(n) g(n)}{n^s} = \prod_p (1 - \frac{g(p)}{p^s})

for {\hbox{Re} s > 1}; comparing this with the Euler product factorisation

\displaystyle \zeta(s) = \prod_p (1 - \frac{1}{p^s})^{-1}

for the Riemann zeta function, and recalling that {\zeta} has a simple pole of residue {1} at {s=1}, we see that

\displaystyle \sum_n \frac{\mu(n) g(n)}{n^s} = \frac{1}{\zeta(s)} \prod_p \frac{1-g(p)/p^s}{1-p^{-s}}

has a simple zero at {s=1} with first derivative

\displaystyle \prod_p \frac{1 - g(p)/p}{1-1/p} = 2 \Pi_2.

From this and standard multiplicative number theory manipulations, one can calculate the asymptotic

\displaystyle \sum_{d \leq x} \frac{\mu(d) g(d)}{d} \log \frac{x}{d} = 2 \Pi_2 + o(1)

which concludes the heuristic justification of (1).
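This last asymptotic can be tested numerically as well, though convergence is slow and depends on Möbius cancellation; a sketch with sieved values of {\mu} and {\phi}, and with the classical value of {\Pi_2} hard-coded for comparison:

```python
# Numerical test of sum_{d<=x} mu(d) g(d)/d log(x/d) against 2 Pi_2, where
# g(d) = d/phi(d) for odd squarefree d and g(d) = 0 for even d.
import math

X = 10**6
mu = [1] * (X + 1)
phi = list(range(X + 1))
for p in range(2, X + 1):
    if phi[p] == p:  # p is prime (phi[p] still untouched)
        for m in range(p, X + 1, p):
            phi[m] -= phi[m] // p
            mu[m] = -mu[m]
        for m in range(p * p, X + 1, p * p):
            mu[m] = 0

total = sum(mu[d] * (d / phi[d]) / d * math.log(X / d)
            for d in range(1, X + 1, 2) if mu[d])
print(total, 2 * 0.6601618158)  # the partial sums slowly approach 2 Pi_2
```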

What prevents us from making the above heuristic argument rigorous, and thus proving (1) and the twin prime conjecture? Note that the variable {d} in (2) ranges to be as large as {x}. On the other hand, the prime number theorem in arithmetic progressions (3) is not expected to hold for {d} anywhere that large (for instance, the left-hand side of (3) vanishes as soon as {d} exceeds {x}). The best unconditional result known of the type (3) is the Siegel-Walfisz theorem, which allows {d} to be as large as {\log^{O(1)} x}. Even the powerful generalised Riemann hypothesis (GRH) only lets one prove an estimate of the form (3) for {d} up to about {x^{1/2-o(1)}}.

However, because of the averaging effect of the summation in {d} in (2), we don’t need the asymptotic (3) to be true for all {d} in a particular range; having it true for almost all {d} in that range would suffice. Here the situation is much better; the celebrated Bombieri-Vinogradov theorem (sometimes known as “GRH on the average”) implies, roughly speaking, that the approximation (3) is valid for almost all {d \leq x^{1/2-\varepsilon}} for any fixed {\varepsilon>0}. While this is not enough to control (2) or (1), the Bombieri-Vinogradov theorem can at least be used to control variants of (1) such as

\displaystyle \sum_{n \leq x} (\sum_{d|n} \lambda_d) \Lambda(n+2)

for various sieve weights {\lambda_d} whose associated divisor function {\sum_{d|n} \lambda_d} is supposed to approximate the von Mangoldt function {\Lambda}, although that theorem only lets one do this when the weights {\lambda_d} are supported on the range {d \leq x^{1/2-\varepsilon}}. This is still enough to obtain some partial results towards (1); for instance, by selecting weights according to the Selberg sieve, one can use the Bombieri-Vinogradov theorem to establish the upper bound

\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+2) \leq (4+o(1)) 2 \Pi_2 x, \ \ \ \ \ (4)

 

which is off from (1) by a factor of about {4}. See for instance this blog post for details.

It has been difficult to improve upon the Bombieri-Vinogradov theorem in its full generality, although there are various improvements to certain restricted versions of the Bombieri-Vinogradov theorem, for instance in the famous work of Zhang on bounded gaps between primes. Nevertheless, it is believed that the Elliott-Halberstam conjecture (EH) holds, which roughly speaking would mean that (3) now holds for almost all {d \leq x^{1-\varepsilon}} for any fixed {\varepsilon>0}. (Unfortunately, the {\varepsilon} factor cannot be removed, as investigated in a series of papers by Friedlander and Granville, and also by Hildebrand and Maier.) This comes tantalisingly close to having enough distribution to control all of (1). Unfortunately, it still falls short. Using this conjecture in place of the Bombieri-Vinogradov theorem leads to various improvements to sieve theoretic bounds; for instance, the factor of {4+o(1)} in (4) can now be improved to {2+o(1)}.

In two papers from the 1970s (which can be found online here and here respectively, the latter starting on page 255 of the pdf), Bombieri developed what is now known as the Bombieri asymptotic sieve to clarify the situation more precisely. First, he showed that on the Elliott-Halberstam conjecture, while one still could not establish the asymptotic (1), one could prove the generalised asymptotic

\displaystyle \sum_{n \leq x} \Lambda_k(n) \Lambda(n+2) = (2\Pi_2+o(1)) k x \log^{k-1} x \ \ \ \ \ (5)

 

for all natural numbers {k \geq 2}, where the generalised von Mangoldt functions {\Lambda_k} are defined by the formula

\displaystyle \Lambda_k(n) := \sum_{d|n} \mu(d) \log^k \frac{n}{d}.

These functions behave like the von Mangoldt function, but are concentrated on {k}-almost primes (numbers with at most {k} prime factors) rather than primes. The right-hand side of (5) corresponds to what one would expect if one ran the same heuristics used to justify (1). Sadly, the {k=1} case of (5), which is precisely (1), just barely falls outside the reach of Bombieri's analysis.
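The support property just mentioned can be checked directly from the definition for small {n}; a minimal sketch:

```python
# Direct check that Lambda_2(n) = sum_{d|n} mu(d) log^2(n/d) vanishes when n
# has three or more distinct prime factors (e.g. n = 30 or 210), while
# 2-almost primes such as 15 and prime powers such as 8 give nonzero values.
import math

def mobius(d):
    cnt, m, p = 0, d, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # square factor
            cnt += 1
        p += 1
    if m > 1:
        cnt += 1
    return -1 if cnt % 2 else 1

def Lambda_k(n, k):
    return sum(mobius(d) * math.log(n / d)**k
               for d in range(1, n + 1) if n % d == 0)

for n in (8, 15, 30, 210):
    print(n, round(Lambda_k(n, 2), 9))  # 30 and 210 give 0
```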

More generally, on the assumption of EH, the Bombieri asymptotic sieve provides the asymptotic

\displaystyle \sum_{n \leq x} \Lambda_{(k_1,\dots,k_r)}(n) \Lambda(n+2) \ \ \ \ \ (6)

 

\displaystyle = (2\Pi_2+o(1)) \frac{\prod_{i=1}^r k_i!}{(k_1+\dots+k_r-1)!} x \log^{k_1+\dots+k_r-1} x

for any fixed {r \geq 1} and any tuple {(k_1,\dots,k_r)} of natural numbers other than {(1,\dots,1)}, where

\displaystyle \Lambda_{(k_1,\dots,k_r)} := \Lambda_{k_1} * \dots * \Lambda_{k_r}

is a further generalisation of the von Mangoldt function (now concentrated on {k_1+\dots+k_r}-almost primes). By combining these asymptotics with some elementary identities involving the {\Lambda_{(k_1,\dots,k_r)}}, together with the Weierstrass approximation theorem, Bombieri was able to control a wide family of sums including (1), except for one undetermined scalar {\delta_x \in [0,2]}. Namely, he was able to show (again on EH) that for any fixed {r \geq 1} and any continuous function {g_r} on the simplex {\Delta_r := \{ (t_1,\dots,t_r) \in {\bf R}^r: t_1+\dots+t_r = 1; 0 \leq t_1 \leq \dots \leq t_r\}} that had suitable vanishing at the boundary, the sum

\displaystyle \sum_{n \leq x: n=p_1 \dots p_r} g_r( \frac{\log p_1}{\log n}, \dots, \frac{\log p_r}{\log n} ) \Lambda(n+2)

was equal to

\displaystyle (\delta_x+o(1)) \int_{\Delta_r} g_r \frac{x}{\log x} \ \ \ \ \ (7)

 

when {r} was odd and

\displaystyle (2-\delta_x+o(1)) \int_{\Delta_r} g_r \frac{x}{\log x} \ \ \ \ \ (8)

 

when {r} was even, where the integral on {\Delta_r} is with respect to the measure {\frac{dt_1 \dots dt_{r-1}}{t_1 \dots t_r}} (this is Dirac measure in the case {r=1}). In particular, we have

\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+2) = (\delta_x + o(1)) 2 \Pi_2 x

and the twin prime conjecture would be proved if one could show that {\delta_x} is bounded away from zero, while (1) is equivalent to the assertion that {\delta_x} is equal to {1+o(1)}. Unfortunately, no bound beyond the inequalities {0 \leq \delta_x \leq 2} provided by the Bombieri asymptotic sieve is known, even if one assumes all major conjectures in number theory other than the prime tuples conjecture and its variants (e.g. GRH, GEH, GUE, abc, Chowla, …).

To put it another way, the Bombieri asymptotic sieve is able (on EH) to compute asymptotics for sums

\displaystyle \sum_{n \leq x} f(n) \Lambda(n+2) \ \ \ \ \ (9)

 

without needing to know the unknown scalar {\delta_x}, when {f} is a function supported on almost primes of the form

\displaystyle f(p_1 \dots p_r) = g_r( \frac{\log p_1}{\log n}, \dots, \frac{\log p_r}{\log n} )

for {1 \leq r \leq r_*}, some fixed {r_*}, and some continuous (symmetric) functions {g_r: \Delta_r \rightarrow {\bf C}} obeying suitable vanishing at the boundary, with {f} vanishing elsewhere, so long as the parity condition

\displaystyle \sum_{r \hbox{ odd}} \int_{\Delta_r} g_r = \sum_{r \hbox{ even}} \int_{\Delta_r} g_r

is obeyed (informally: {f} gives the same weight to products of an odd number of primes as to products of an even number of primes, or to put it another way, {f} is asymptotically orthogonal to the Möbius function {\mu}). But when {f} violates the parity condition, the asymptotic involves the unknown {\delta_x}. This scalar {\delta_x} thus embodies the “parity problem” for the twin prime conjecture (discussed in these previous blog posts).

Because the obstruction to the parity problem is only one-dimensional (on EH), one can replace any parity-violating weight (such as {\Lambda}) with any other parity-violating weight and obtain a logically equivalent estimate. For instance, to prove the twin prime conjecture on EH, it would suffice to show that

\displaystyle \sum_{p_1 p_2 p_3 \leq x: p_1,p_2,p_3 \geq x^\alpha} \Lambda(p_1 p_2 p_3 + 2) \gg \frac{x}{\log x}

for some fixed {\alpha>0}, or equivalently that there are {\gg \frac{x}{\log^2 x}} solutions to the equation {p - p_1 p_2 p_3 = 2} in primes with {p \leq x} and {p_1,p_2,p_3 \geq x^\alpha}. (In some cases, this sort of reduction can also be made using sieves other than the Bombieri asymptotic sieve, as was observed by Ng.) As another example, the Bombieri asymptotic sieve can be used to show that the asymptotic (1) is equivalent to the asymptotic

\displaystyle \sum_{n \leq x} \mu(n) 1_R(n) \Lambda(n+2) = o( \frac{x}{\log x})

where {R} is the set of numbers that are rough in the sense that they have no prime factors less than {x^\alpha} for some fixed {\alpha>0} (the function {\mu 1_R} clearly correlates with {\mu} and so must violate the parity condition). One can replace {1_R} with similar sieve weights (e.g. a Selberg sieve) that concentrate on almost primes if desired.

As it turns out, if one is willing to strengthen the assumption of the Elliott-Halberstam (EH) conjecture to the assumption of the generalised Elliott-Halberstam (GEH) conjecture (as formulated for instance in Claim 2.6 of the Polymath8b paper), one can also swap the {\Lambda(n+2)} factor in the above asymptotics with other parity-violating weights and obtain a logically equivalent estimate, as the Bombieri asymptotic sieve also applies to weights such as {\mu 1_R} under the assumption of GEH. For instance, on GEH one can use two such applications of the Bombieri asymptotic sieve to show that the twin prime conjecture would follow if one could show that there are {\gg \frac{x}{\log^2 x}} solutions to the equation

\displaystyle p_1 p_2 - p_3 p_4 = 2

in primes with {p_1,p_2,p_3,p_4 \geq x^\alpha} and {p_1 p_2 \leq x}, for some {\alpha > 0}. Similarly, on GEH the asymptotic (1) is equivalent to the asymptotic

\displaystyle \sum_{n \leq x} \mu(n) 1_R(n) \mu(n+2) 1_R(n+2) = o( \frac{x}{\log^2 x})

for some fixed {\alpha>0}, and similarly with {1_R} replaced by other sieves. This form of the quantitative twin prime conjecture is appealingly similar to the special case

\displaystyle \sum_{n \leq x} \mu(n) \mu(n+2) = o(x)

of the Chowla conjecture, for which there has been some recent progress (discussed for instance in these recent posts). Informally, the Bombieri asymptotic sieve lets us (on GEH) view the twin prime conjecture as a sort of Chowla conjecture restricted to almost primes. Unfortunately, the recent progress on the Chowla conjecture relies heavily on the multiplicativity of {\mu} at small primes, which is completely destroyed by inserting a weight such as {1_R}, so this does not yet yield a viable path towards the twin prime conjecture even assuming GEH. Still, the similarity is striking, and one can hope that further ways to attack the Chowla conjecture may emerge that could impact the twin prime conjecture. (Alternatively, if one assumes a sufficiently optimistic version of the GEH, one could perhaps relax the notion of “almost prime” to the extent that one could start usefully using multiplicativity at smallish primes, though this seems rather wishful at present, particularly since the most optimistic versions of GEH are known to be false.)
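
Incidentally, the Chowla sum above is easy to explore numerically; here is a quick sketch (mine; needless to say, cancellation at small {x} is not evidence of cancellation in the limit):

    def mobius_table(N):
        """mu(n) for 0 <= n <= N by a sieve."""
        mu, sieve = [1] * (N + 1), [True] * (N + 1)
        for p in range(2, N + 1):
            if sieve[p]:
                for m in range(p, N + 1, p):
                    if m > p:
                        sieve[m] = False
                    mu[m] *= -1
                for m in range(p * p, N + 1, p * p):
                    mu[m] = 0
        return mu

    x = 10 ** 6
    mu = mobius_table(x + 2)
    S = sum(mu[n] * mu[n + 2] for n in range(1, x + 1))
    print(S, S / x)  # the normalised sum S/x should be small; Chowla predicts o(1)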

The Bombieri asymptotic sieve is already well explained in the original two papers of Bombieri; there is also a slightly different treatment of the sieve by Friedlander and Iwaniec, as well as a simplified version in the book of Friedlander and Iwaniec (in which the distribution hypothesis is strengthened in order to shorten the arguments). I’ve decided though to write up my own notes on the sieve below the fold; this is primarily for my own benefit, but may be useful to some readers also. I largely follow the treatment of Bombieri, with the one idiosyncratic twist of replacing the usual “elementary” Selberg sieve with the “analytic” Selberg sieve used in particular in many of the breakthrough works on small gaps between primes; I prefer working with the latter due to its Fourier-analytic flavour.

— 1. Controlling generalised von Mangoldt sums —

To prove (5), we shall first generalise it, by replacing the sequence {\Lambda(n+2)} by a more general sequence {a_n} obeying the following axioms:

  • (i) (Non-negativity) One has {a_n \geq 0} for all {n}.
  • (ii) (Crude size bound) One has {a_n \ll \tau(n)^{O(1)} \log^{O(1)} n} for all {n}, where {\tau} is the divisor function.
  • (iii) (Size) We have {\sum_{n \leq x} a_n = (C+o(1)) x} for some constant {C>0}.
  • (iv) (Elliott-Halberstam type conjecture) For any {\varepsilon,A>0}, one has

    \displaystyle \sum_{d \leq x^{1-\varepsilon}} |\sum_{n \leq x: d|n} a_n - C x \frac{g(d)}{d}| \ll_{\varepsilon,A} x \log^{-A} x

    where {g} is a multiplicative function with {g(p^j) = 1 + O(1/p)} for all primes {p} and {j \geq 1}.

These axioms are a little bit stronger than what is actually needed to make the Bombieri asymptotic sieve work, but we will not attempt to work with the weakest possible axioms here.

We introduce the function

\displaystyle G(s) := \prod_p \frac{1-g(p)/p^s}{1-1/p^s}

which is analytic for {\hbox{Re}(s) > 0}; in particular it can be evaluated at {s=1} to yield

\displaystyle G(1) = \prod_p \frac{1-g(p)/p}{1-1/p}.

There are two model examples of data {a_n, C, g} to keep in mind. The first, discussed in the introduction, is when {a_n =\Lambda(n+2)}; in this case {C = 1} (by the prime number theorem) and {g} is as in the introduction, so that {G(1) = 2\Pi_2}; one of course needs EH to justify axiom (iv) in this case. The other is when {a_n=1}, in which case {C=1} and {g(n)=1} for all {n}. We will later take advantage of the second example to avoid doing some (routine, but messy) main term computations.
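
One can also watch the Euler product for {G(1)} converge numerically. In the sketch below (mine), the choice {g(2^j) = 0} and {g(p) = p/(p-1)} for odd {p} is an assumption on my part, made to match the expected density {x/\phi(d)} of the sums {\sum_{n \leq x: d|n} \Lambda(n+2)} for odd {d}; with this choice the truncated product approaches {2\Pi_2 \approx 1.3203}:

    def primes_up_to(N):
        sieve = [True] * (N + 1)
        sieve[0:2] = [False, False]
        for p in range(2, int(N ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p:: p] = [False] * len(sieve[p * p:: p])
        return [p for p in range(2, N + 1) if sieve[p]]

    def G1(g, P):
        """Truncated Euler product over p <= P of (1 - g(p)/p) / (1 - 1/p)."""
        out = 1.0
        for p in primes_up_to(P):
            out *= (1 - g(p) / p) / (1 - 1 / p)
        return out

    g_twin = lambda p: 0.0 if p == 2 else p / (p - 1)  # assumed g for a_n = Lambda(n+2)
    print(G1(g_twin, 10 ** 6))         # ~ 1.3203... = 2 * Pi_2
    print(G1(lambda p: 1.0, 10 ** 6))  # the a_n = 1 example: identically 1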

The main result of this section is then

Theorem 1 Let {a_n, g, C, G} be as above. Let {\vec k = (k_1,\dots,k_r)} be a tuple of natural numbers (independent of {x}) that is not equal to {(1,\dots,1)}. Then one has the asymptotic

\displaystyle \sum_{n \leq x} \Lambda_{\vec k}(n) a_n = (G(1)+o(1)) \frac{\prod_{i=1}^r k_i!}{(|\vec k|-1)!} C x \log^{|\vec k|-1} x

as {x \rightarrow \infty}, where {|\vec k| := k_1 + \dots + k_r}.

Note that this recovers (5) (on EH) as a special case.

We now begin the proof of this theorem. Henceforth we allow implied constants in the {O()} or {\ll} notation to depend on {r, \vec k} and {g,G}.

It will be convenient to replace the range {n \leq x} by a shorter range by the following standard localisation trick. Let {B} be a large quantity depending on {r, \vec k} to be chosen later, and let {I} denote the interval {\{ n: x - x \log^{-B} x \leq n \leq x \}}. We will show the estimate

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = (G(1)+o(1)) \frac{\prod_{i=1}^r k_i!}{(|\vec k|-1)!} C |I| \log^{|\vec k|-1} x \ \ \ \ \ (10)

 

from which the original claim follows by a routine summation argument. Observe from axiom (iv) and the triangle inequality that

\displaystyle \sum_{d \leq x^{1-\varepsilon}: \mu^2(d)=1} |\sum_{n \in I: d|n} a_n - C |I| \frac{g(d)}{d}| \ll_{\varepsilon,A} x \log^{-A} x

for any {\varepsilon,A > 0}.

Write {L} for the logarithm function {L(n) := \log n}, thus {\Lambda_k = \mu * L^k} for any {k}. Without loss of generality we may assume that {k_r > 1}; we then factor {\Lambda_{\vec k} = \mu_{\vec k} * L^{k_r}}, where

\displaystyle \mu_{\vec k} := \Lambda_{k_1} * \dots * \Lambda_{k_{r-1}} * \mu.

This function is just {\mu} when {r=1}. When {r>1} the function is more complicated, but we at least have the following crude bound:

Lemma 2 One has the pointwise bound {|\mu_{\vec k}| \leq L^{|\vec k|-k_r}}.

Proof: We induct on {r}. The case {r=1} is obvious, so suppose {r>1} and the claim has already been proven for {r-1}. Since {\mu_{\vec k} = \Lambda_{k_1} * \mu_{(k_2,\dots,k_r)}}, we see from the induction hypothesis and the triangle inequality that

\displaystyle |\mu_{\vec k}| \leq \Lambda_{k_1} * L^{|\vec k| - k_r - k_1} \leq L^{|\vec k| - k_r - k_1} (\Lambda_{k_1} * 1).

Since {\Lambda_{k_1}*1 = L^{k_1}} by Möbius inversion, the claim follows. \Box
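
Lemma 2 is easy to test on a finite range. The following sketch (mine) builds {\mu_{\vec k}} by Dirichlet convolution of tables on {[1,N]} and checks the pointwise bound in the case {\vec k = (2,3)}, where the bound reads {|\mu_{(2,3)}| \leq L^2}:

    from math import log

    N = 3000

    def mobius_table(N):
        mu, sieve = [1] * (N + 1), [True] * (N + 1)
        for p in range(2, N + 1):
            if sieve[p]:
                for m in range(p, N + 1, p):
                    if m > p:
                        sieve[m] = False
                    mu[m] *= -1
                for m in range(p * p, N + 1, p * p):
                    mu[m] = 0
        return mu

    def dirichlet(f, g):
        """Dirichlet convolution of two tables indexed by [0, N]."""
        h = [0.0] * (N + 1)
        for d in range(1, N + 1):
            if f[d]:
                for m in range(d, N + 1, d):
                    h[m] += f[d] * g[m // d]
        return h

    mu = [float(v) for v in mobius_table(N)]
    L = [0.0] + [log(n) for n in range(1, N + 1)]
    Lambda2 = dirichlet(mu, [l ** 2 for l in L])  # Lambda_2 = mu * L^2

    # vec k = (2,3): mu_{vec k} = Lambda_2 * mu, bounded by L^(|k| - k_r) = L^2
    mu_k = dirichlet(Lambda2, mu)
    assert all(abs(mu_k[n]) <= log(n) ** 2 + 1e-6 for n in range(2, N + 1))
    print("Lemma 2 bound verified up to", N)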

We can write

\displaystyle \Lambda_{\vec k}(n) = \sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{n}{d}.

In the region {n \in I}, we have {\log^{k_r} \frac{n}{d} = \log^{k_r} \frac{x}{d} + O( \log^{-B+O(1)} x )}. Thus

\displaystyle \Lambda_{\vec k}(n) = \sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} + O( \tau(n) \log^{-B+O(1)} x )

for {n \in I}. The contribution of the error term {O( \tau(n) \log^{-B+O(1)} x )} to (10) is easily seen to be negligible if {B} is large enough, so we may freely replace {\Lambda_{\vec k}(n)} with {\sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d}} with little difficulty.

If we insert this replacement directly into the left-hand side of (10) and rearrange, we get

\displaystyle \sum_{d \leq x} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} \sum_{n \in I: d|n} a_n.

We can’t quite control this using axiom (iv) because the range of {d} is a bit too big, as explained in the introduction. So let us introduce a truncated function

\displaystyle \Lambda_{\vec k,\varepsilon}(n) := \sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} \eta_\varepsilon( \frac{\log d}{\log x} ) \ \ \ \ \ (11)

 

where {\varepsilon>0} is a small quantity to be chosen later, and {\eta_\varepsilon: {\bf R} \rightarrow [0,1]} is a smooth function that equals {1} on {(-\infty,1-4\varepsilon)} and equals {0} on {(1-3\varepsilon,+\infty)}. Suppose one could establish the following two estimates for any fixed {\varepsilon>0}:

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = \sum_{n \in I} \Lambda_{\vec k,\varepsilon}(n) a_n + O( (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x ) \ \ \ \ \ (12)

 

and

\displaystyle \sum_{n \in I} \Lambda_{\vec k,\varepsilon}(n) a_n = C Q_{\varepsilon,x} G(1) + o( |I| \log^{|\vec k|-1} x ) \ \ \ \ \ (13)

 

where {Q_{\varepsilon,x}} is a quantity that depends on {\varepsilon, \eta_\varepsilon, \vec k, B, x} but not on {C, g,G}. Then on combining the two estimates we would have

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = C Q_{\varepsilon,x} G(1) + (O(\varepsilon) + o(1)) C |I| \log^{|\vec k|-1} x. \ \ \ \ \ (14)

 

One could in principle compute {Q_{\varepsilon,x}} explicitly from the proof of (13), but one can avoid doing so by the following comparison trick. In the special case {a_n=1}, standard multiplicative number theory (noting that the Dirichlet series {\sum_n \frac{\Lambda_{\vec k}(n)}{n^s}} has a pole of order {|\vec k|} at {s=1}, with top Laurent coefficient {\prod_{j=1}^r k_j!}) gives the asymptotic

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = (\frac{\prod_{i=1}^r k_i!}{(|\vec k|-1)!} + o(1)) |I| \log^{|\vec k|-1} x

which when compared with (14) for {a_n=1} (recalling that {G(1)=C=1} in this case) gives the formula

\displaystyle Q_{\varepsilon,x} = (\frac{\prod_{j=1}^r k_j!}{(|\vec k|-1)!} + O(\varepsilon)) |I| \log^{|\vec k|-1} x.

Inserting this back into (14) and recalling that {\varepsilon>0} can be made arbitrarily small, we obtain (10).

As it turns out, the estimate (13) is easy to establish, but the estimate (12) is not, roughly speaking because the typical number {n} in {I} has too many divisors {d} in the range {[x^{1-4\varepsilon},x]}, each of which gives a contribution to the error term. (In the book of Friedlander and Iwaniec, the estimate (12) is established anyway, but only after assuming a stronger version of (iv), roughly speaking in which {d} is allowed to be as large as {x \exp( -\log^{1/4} x)}.) To resolve this issue, we will insert a preliminary sieve {\nu_\varepsilon} that will remove most of the potential divisors {d} in the range {[x^{1-4\varepsilon},x]} (leaving only {O(1)} such divisors on average for a typical {n}), making the analogue of (12) easier to prove (at the cost of making the analogue of (13) more difficult). Namely, if one can find a function {\nu_\varepsilon: {\bf N} \rightarrow {\bf R}} for which one has the estimates

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = \sum_{n \in I} \Lambda_{\vec k}(n) \nu_\varepsilon(n) a_n + O( (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x ), \ \ \ \ \ (15)

 

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) \nu_\varepsilon(n) a_n

\displaystyle = \sum_{n \in I} \Lambda_{\vec k,\varepsilon}(n) \nu_\varepsilon(n) a_n + O( (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x ) \ \ \ \ \ (16)

 

and

\displaystyle \sum_{n \in I} \Lambda_{\vec k,\varepsilon}(n) \nu_\varepsilon(n) a_n = C Q'_{\varepsilon,x} G(1) + o( |I| \log^{|\vec k|-1} x ) \ \ \ \ \ (17)

 

for some quantity {Q'_{\varepsilon,x}} that depends on {\varepsilon, \eta_\varepsilon, \vec k, B, x} but not on {C, g, G}, then by repeating the previous arguments we will again be able to establish (10).

The key estimate is (16). As we shall see, when comparing {\Lambda_{\vec k}(n) \nu_\varepsilon(n)} with {\Lambda_{\vec k,\varepsilon}(n) \nu_\varepsilon(n)}, the weight {\nu_\varepsilon} will cost us a factor of {1/\varepsilon}, but the {\log^{k_r} \frac{x}{d}} term in the definitions of {\Lambda_{\vec k}} and {\Lambda_{\vec k,\varepsilon}} will recover a factor of {\varepsilon^{k_r}}, which will give the desired bound since we are assuming {k_r > 1}.

One has some flexibility in how to select the weight {\nu_\varepsilon}: basically any standard sieve that uses divisors of size at most {x^{2\varepsilon}} to localise (at least approximately) to numbers that are rough in the sense that they have no (or at least very few) prime factors less than {x^\varepsilon} will do. We will use the analytic Selberg sieve choice

\displaystyle \nu_\varepsilon(n) := (\sum_{d|n} \mu(d) \psi( \frac{\log d}{\varepsilon \log x} ))^2 \ \ \ \ \ (18)

 

where {\psi: {\bf R} \rightarrow [0,1]} is a smooth function supported on {[-1,1]} that equals {1} on {[-1/2,1/2]}.
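
To make (18) concrete, here is a sketch (mine) with one admissible choice of smooth {\psi}; it illustrates how {\nu_\varepsilon} concentrates on rough numbers, with a large prime receiving weight {1} while a single small prime factor kills the weight:

    from functools import lru_cache
    from math import exp, log

    def psi(u):
        """A smooth [0,1]-valued psi: 1 on [-1/2,1/2], supported in [-1,1]."""
        def f(t):
            return exp(-1.0 / t) if t > 0 else 0.0
        a = abs(u)
        if a <= 0.5:
            return 1.0
        if a >= 1.0:
            return 0.0
        s = 2 * (a - 0.5)  # rescale [1/2, 1] to [0, 1]
        return f(1 - s) / (f(1 - s) + f(s))

    @lru_cache(maxsize=None)
    def mobius(m):
        val, q = 1, 2
        while q * q <= m:
            if m % q == 0:
                m //= q
                if m % q == 0:
                    return 0
                val = -val
            q += 1
        return -val if m > 1 else val

    def nu(n, eps, x):
        """The analytic Selberg sieve weight (18)."""
        s = sum(mobius(d) * psi(log(d) / (eps * log(x)))
                for d in range(1, n + 1) if n % d == 0)
        return s * s

    x = 10 ** 6
    print(nu(999983, 0.25, x))      # a large prime: weight 1
    print(nu(2 * 999983, 0.25, x))  # small factor 2: weight 0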

It remains to establish the bounds (15), (16), (17). To warm up and introduce the various methods needed, we begin with the standard bound

\displaystyle \sum_{n \in I} \nu_\varepsilon(n) a_n = \frac{C|I|}{\varepsilon \log x} ( (\int_0^1 \psi'(u)^2\ du) G(1) + o(1) ), \ \ \ \ \ (19)

 

where {\psi'} denotes the derivative of {\psi}. Note the loss of {1/\varepsilon} that had previously been pointed out. In the arguments that follow I will be a little brief with the details, as they are standard (see e.g. this previous post).

We now prove (19). The left-hand side can be expanded as

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \sum_{n \in I: [d_1,d_2]|n} a_n

where {[d_1,d_2]} denotes the least common multiple of {d_1} and {d_2}. From the support of {\psi} we see that the summand is only non-vanishing when {[d_1,d_2] \leq x^{2\varepsilon}}. We now use axiom (iv) and split the left-hand side into a main term

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \frac{g([d_1,d_2])}{[d_1,d_2]} C |I|

and an error term that is at most

\displaystyle O_\varepsilon( \sum_{d \leq x^{2\varepsilon}} \tau(d)^{O(1)} | \sum_{n \in I: d|n} a_n - \frac{g(d)}{d} C |I|| ). \ \ \ \ \ (20)

 

From axiom (ii) and elementary multiplicative number theory, we have the bound

\displaystyle \sum_{d \leq x} \tau(d)^{O(1)} | \sum_{n \in I: d|n} a_n - \frac{g(d)}{d} C |I| | \ll C |I| \log^{O(1)} x

so from axiom (iv) and Cauchy-Schwarz we see that the error term (20) is acceptable. Thus it will suffice to establish the bound

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \frac{g([d_1,d_2])}{[d_1,d_2]}

\displaystyle = \frac{1}{\varepsilon \log x} (\int_0^1 \psi'(u)^2\ du) G(1) + o(\frac{1}{\log x}). \ \ \ \ \ (21)

 

The summand here is almost, but not quite, multiplicative in {d_1,d_2}. To make it genuinely multiplicative, we perform a (shifted) Fourier expansion

\displaystyle \psi(u) = \int_{\bf R} e^{-(1+it)u} \Psi(t)\ dt \ \ \ \ \ (22)

 

for some rapidly decreasing function {\Psi} (essentially the Fourier transform of {e^u \psi(u)}). Thus

\displaystyle \psi( \frac{\log d}{\varepsilon \log x} ) = \int_{\bf R} \frac{1}{d^{\frac{1+it}{\varepsilon \log x}}} \Psi(t)\ dt,

and so the left-hand side of (21) can be rearranged using Fubini’s theorem as

\displaystyle \int_{\bf R} \int_{\bf R} E(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x})\ \Psi(t_1) \Psi(t_2) dt_1 dt_2 \ \ \ \ \ (23)

 

where

\displaystyle E(s_1,s_2) := \sum_{d_1,d_2} \frac{\mu(d_1) \mu(d_2)}{d_1^{s_1}d_2^{s_2}} \frac{g([d_1,d_2])}{[d_1,d_2]}.

We can factorise {E(s_1,s_2)} as an Euler product:

\displaystyle E(s_1,s_2) = \prod_p (1 - \frac{g(p)}{p^{1+s_1}} - \frac{g(p)}{p^{1+s_2}} + \frac{g(p)}{p^{1+s_1+s_2}}).

Taking absolute values and using Mertens’ theorem leads to the crude bound

\displaystyle E(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x}) \ll_\varepsilon \log^{O(1)} x

which when combined with the rapid decrease of {\Psi}, allows us to restrict the region of integration in (23) to the square {\{ |t_1|, |t_2| \leq \sqrt{\log x} \}} (say) with negligible error. Next, we use the Euler product

\displaystyle \zeta(s) = \prod_p (1-\frac{1}{p^s})^{-1}

for {\hbox{Re} s > 1} to factorise

\displaystyle E(s_1,s_2) = \frac{\zeta(1+s_1+s_2)}{\zeta(1+s_1) \zeta(1+s_2)} \prod_p E_p(s_1,s_2)

where

\displaystyle E_p(s_1,s_2) := \frac{(1 - \frac{g(p)}{p^{1+s_1}} - \frac{g(p)}{p^{1+s_2}} + \frac{g(p)}{p^{1+s_1+s_2}})(1 - \frac{1}{p^{1+s_1+s_2}})}{(1-\frac{1}{p^{1+s_1}})(1-\frac{1}{p^{1+s_2}})}.

For {s_1,s_2=o(1)} with nonnegative real part, one has

\displaystyle E_p(s_1,s_2) = 1 + O(1/p^2)

and so by the Weierstrass {M}-test, {\prod_p E_p(s_1,s_2)} is continuous at {s_1=s_2=0}. Since

\displaystyle \prod_p E_p(0,0) = G(1)

we thus have

\displaystyle \prod_p E_p(s_1,s_2) = G(1) + o(1).

Also, since {\zeta} has a pole of order {1} at {s=1} with residue {1}, we have

\displaystyle \frac{\zeta(1+s_1+s_2)}{\zeta(1+s_1) \zeta(1+s_2)} = (1+o(1)) \frac{s_1 s_2}{s_1+s_2}

and thus

\displaystyle E(s_1,s_2) = (G(1)+o(1)) \frac{s_1s_2}{s_1+s_2}.

The quantity (23) can thus be written, up to errors of {o(\frac{1}{\log x})}, as

\displaystyle \frac{G(1)}{\varepsilon \log x} \int_{|t_1|, |t_2| \leq \sqrt{\log x}} \frac{(1+it_1)(1+it_2)}{1+it_1+1+it_2} \Psi(t_1) \Psi(t_2)\ dt_1 dt_2.

Using the rapid decrease of {\Psi}, we may remove the restriction on {t_1,t_2}, and it will now suffice to prove the identity

\displaystyle \int_{\bf R} \int_{\bf R} \frac{(1+it_1)(1+it_2)}{1+it_1+1+it_2} \Psi(t_1) \Psi(t_2)\ dt_1 dt_2 = \int_0^1 \psi'(u)^2\ du.

But on differentiating and then squaring (22) we have

\displaystyle \psi'(u)^2 = \int_{\bf R} \int_{\bf R} (1+it_1)(1+it_2) e^{-(1+it_1+1+it_2)u}\Psi(t_1) \Psi(t_2)\ dt_1 dt_2

and the claim follows by integrating in {u} from zero to infinity (noting that {\psi'} vanishes for {u>1}).
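
This identity can also be confirmed numerically. In the sketch below (mine, using numpy; the grids and truncations are crude, so one should only expect agreement to two or three decimal places) {\Psi} is obtained by inverting (22), and the double integral is compared with {\int_0^1 \psi'(u)^2\ du}:

    import numpy as np

    u = np.linspace(-1.0, 1.0, 2001)
    du = u[1] - u[0]

    def bump(t):
        out = np.zeros_like(t)
        pos = t > 0
        out[pos] = np.exp(-1.0 / t[pos])
        return out

    def smooth_step(s):  # equals 1 for s <= 0 and 0 for s >= 1
        s = np.clip(s, 0.0, 1.0)
        a, b = bump(1 - s), bump(s)
        return a / (a + b + 1e-300)

    psi = smooth_step(2 * (np.abs(u) - 0.5))  # 1 on [-1/2,1/2], 0 outside [-1,1]

    t = np.linspace(-30.0, 30.0, 1201)
    dt = t[1] - t[0]
    # Psi(t) = (1/2 pi) * integral of e^{(1+it)u} psi(u) du inverts (22)
    Psi = (np.exp(1j * np.outer(t, u)) * (np.exp(u) * psi)).sum(axis=1) * du / (2 * np.pi)

    T1, T2 = np.meshgrid(t, t)
    K = (1 + 1j * T1) * (1 + 1j * T2) / (2 + 1j * (T1 + T2))
    lhs = (K * np.outer(Psi, Psi)).sum() * dt * dt
    rhs = (np.gradient(psi, du)[u >= 0] ** 2).sum() * du
    print(lhs.real, rhs)  # should agree to a few decimal places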

We have the following variant of (19):

Lemma 3 For any {d \leq x^{1-3\varepsilon}}, one has

\displaystyle \sum_{n \in I: d|n} \nu_\varepsilon(n) a_n \ll \frac{C|I|}{\varepsilon \log x} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 )^2 )}{d} + R_d \ \ \ \ \ (24)

 

where the {R_d} are such that

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} R_d \ll_A |I| \log^{-A} x \ \ \ \ \ (25)

 

for any {A>0}. We also have the variant

\displaystyle \sum_{n \in I: d|n} \nu_\varepsilon(n/d) a_n \ll \frac{C|I|}{\varepsilon \log x} \frac{\prod_{p|d} O(1)}{d} + R_d. \ \ \ \ \ (26)

 

If in addition {d} has no prime factors less than {x^\delta} for some fixed {\delta>0}, one has

\displaystyle \sum_{n \in I: d|n} \nu_\varepsilon(n) a_n

\displaystyle = \frac{1+o(1)}{d} \frac{C|I|}{\varepsilon \log x} (\int_0^1 \psi'(u)^2\ du) G(1) + O(R_d). \ \ \ \ \ (27)

 

Roughly speaking, the above estimates assert that {\nu_\varepsilon} is concentrated on those numbers {n} with no prime factors much less than {x^\varepsilon}, but factors {d} without such small prime divisors occur with about the same relative density as they do in the integers.

Proof: The left-hand side of (24) can be expanded as

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \sum_{n \in I: [d_1,d_2,d]|n} a_n.

If we define

\displaystyle R_d := \sum_{d' \leq x^{1-\varepsilon}: d|d'} \tau(d')^2 |\sum_{n \in I:d'|n} a_n - \frac{g(d')}{d'} C|I||

then the previous expression can be written as

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \frac{g([d_1,d_2,d])}{[d_1,d_2,d]} C|I| + O(R_d),

while one has

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} R_d \leq \sum_{d' \leq x^{1-\varepsilon}} \tau(d')^3 |\sum_{n \in I:d'|n} a_n - \frac{g(d')}{d'} C|I||

which gives (25) from axiom (iv) (combined, as before, with the crude bound from axiom (ii) and Cauchy-Schwarz). To prove (24), it now suffices to show that

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \frac{g([d_1,d_2,d])}{[d_1,d_2,d]}

\displaystyle \ll \frac{1}{\varepsilon \log x} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 )^2 )}{d}. \ \ \ \ \ (28)

 

Arguing as before, the left-hand side is

\displaystyle \int_{\bf R} \int_{\bf R} E^{(d)}(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x})\ \Psi(t_1) \Psi(t_2) dt_1 dt_2

where

\displaystyle E^{(d)}(s_1,s_2) := \sum_{d_1,d_2} \frac{\mu(d_1) \mu(d_2)}{d_1^{s_1}d_2^{s_2}} \frac{g([d_1,d_2,d])}{[d_1,d_2,d]}.

From Mertens’ theorem we have

\displaystyle E^{(d)}(s_1,s_2) \ll_\varepsilon \frac{\prod_{p|d} O(1)}{d} \log^{O(1)} x

when {\hbox{Re} s_1, \hbox{Re} s_2 = \frac{1}{\varepsilon \log x}}, so the contribution of the terms where {|t_1|, |t_2| \geq \sqrt{\log x}} can be absorbed into the {R_d} error (after increasing that error slightly). For the remaining contributions, we see that

\displaystyle E^{(d)}(s_1,s_2) = \frac{\zeta(1+s_1+s_2)}{\zeta(1+s_1) \zeta(1+s_2)} \prod_p E^{(d)}_p(s_1,s_2)

where {E^{(d)}_p(s_1,s_2) = E_p(s_1,s_2)} if {p} does not divide {d}, and

\displaystyle E^{(d)}_p(s_1,s_2) = \frac{g(p^j)}{p^j} \frac{(1 - \frac{1}{p^{s_1}}) (1 - \frac{1}{p^{s_2}}) (1 - \frac{1}{p^{1+s_1+s_2}})}{(1-\frac{1}{p^{1+s_1}})(1-\frac{1}{p^{1+s_2}})}

if {p} divides {d} exactly {j} times for some {j \geq 1}. In the latter case, Taylor expansion gives the bounds

\displaystyle |E^{(d)}_p(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x})| \lesssim (1+|t_1|+|t_2|)^{O(1)} \frac{\min( \frac{\log p}{\varepsilon \log x}, 1 )^2}{p^j}

and the claim (28) follows. When {p \geq x^\delta} and {|t_1|, |t_2| \leq \sqrt{\log x}} we have

\displaystyle E^{(d)}_p(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x}) = \frac{1+o(1)}{p^j}

and (27) follows by repeating the previous calculations. Finally, (26) is proven similarly to (24) (using {d[d_1,d_2]} in place of {[d_1,d_2,d]}). \Box

Now we can prove (15), (16), (17). We begin with (15). Using the Leibniz rule {L(f*g) = (Lf)*g + f*(Lg)} applied to the identity {\mu = \mu * 1 * \mu} and using {\Lambda = \mu*L} and Möbius inversion (and the associativity and commutativity of Dirichlet convolution) we see that

\displaystyle L\mu = - \mu * \Lambda. \ \ \ \ \ (29)

 

Next, by applying the Leibniz rule to {\Lambda_k = \mu * L^k} for some {k \geq 1} and using (29) we see that

\displaystyle L \Lambda_k = L \mu * L^k + \mu * L^{k+1}

\displaystyle = - \mu * \Lambda * L^k + \Lambda_{k+1}

and hence we have the recursive identity

\displaystyle \Lambda_{k+1} = L \Lambda_k + \Lambda *\Lambda_k. \ \ \ \ \ (30)

 

In particular, from induction we see that {\Lambda_k} is supported on numbers with at most {k} distinct prime factors, and hence {\Lambda_{\vec k}} is supported on numbers with at most {|\vec k|} distinct prime factors. In particular, from (18) we see that {\nu_\varepsilon(n) = O(1)} on the support of {\Lambda_{\vec k}}. Thus it will suffice to show that

\displaystyle \sum_{n \in I: \nu_\varepsilon(n) \neq 1} \Lambda_{\vec k}(n) a_n \ll (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x.

If {\nu_\varepsilon(n) \neq 1} and {\Lambda_{\vec k}(n) \neq 0}, then {n} has at most {|\vec k|} distinct prime factors {p_1 < p_2 < \dots < p_r}, with {p_1 \leq x^\varepsilon}. If we factor {n = n_1 n_2}, where {n_1} is the contribution of those {p_i} with {p_i \leq x^{1/10|\vec k|}}, and {n_2} is the contribution of those {p_i} with {p_i > x^{1/10|\vec k|}}, then at least one of the following two statements hold:

  • (a) {n_1} (and hence {n}) is divisible by a square number of size at least {x^{1/10}}.
  • (b) {n_1 \leq x^{1/5}}.

The contribution of case (a) is easily seen to be acceptable by axiom (ii). For case (b), we observe from (30) and induction that

\displaystyle \Lambda_{\vec k}(n) \ll \log^{|\vec k|} x \prod_{j=1}^{r} \frac{\log p_j}{\log x}

and so it will suffice to show that

\displaystyle \sum_{n_1} (\prod_{p|n_1} \frac{\log p}{\log x}) \sum_{n \in I: n_1 | n} 1_R(n/n_1) a_n \ll (\varepsilon + o(1)) C |I| \log^{-1} x

where {n_1} ranges over numbers bounded by {x^{1/5}} with at most {|\vec k|} distinct prime factors, the smallest of which is at most {x^\varepsilon}, and {R} consists of those numbers with no prime factor less than or equal to {x^{1/10|\vec k|}}. Applying (26) (with {\varepsilon} replaced by {1/10|\vec k|}) gives the bound

\displaystyle \sum_{n \in I: n_1|n} 1_R(n/n_1) a_n \ll \frac{C|I|}{\log x} \frac{1}{n_1} + R_{n_1}

so by (25) it suffices to show that

\displaystyle \sum_{n_1} (\prod_{p|n_1} \frac{\log p}{\log x}) \frac{1}{n_1} \ll \varepsilon

subject to the same constraints on {n_1} as before. The contribution of those {n_1} with {r} distinct prime factors can be bounded by

\displaystyle O(\sum_{p_1 \leq x^\varepsilon} \frac{\log p_1}{p_1 \log x}) \times O(\sum_{p \leq x^{1/5}} \frac{\log p}{p\log x})^{r-1};

applying Mertens’ theorem and summing over {1 \leq r \leq |\vec k|}, one obtains the claim.
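
The form of Mertens' theorem used here, namely {\sum_{p \leq y} \frac{\log p}{p} = \log y + O(1)}, is also easy to see numerically (sketch mine; the difference in fact converges to a constant):

    from math import log

    def primes_up_to(N):
        sieve = [True] * (N + 1)
        sieve[0:2] = [False, False]
        for p in range(2, int(N ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p:: p] = [False] * len(sieve[p * p:: p])
        return [p for p in range(2, N + 1) if sieve[p]]

    for y in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
        s = sum(log(p) / p for p in primes_up_to(y))
        print(y, s - log(y))  # the difference stays bounded (around -1.33)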

Now we show (16). As discussed previously in this section, we can replace {\Lambda_{\vec k}(n)} by {\sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d}} with negligible error. Comparing this with (16) and (11), we see that it suffices to show that

\displaystyle \sum_{n \in I} \sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} (1 - \eta_\varepsilon(\frac{\log d}{\log x})) \nu_\varepsilon(n) a_n \ll (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x.

From the support of {\eta_\varepsilon}, the summand on the left-hand side is only non-zero when {d \geq x^{1-4\varepsilon}}, which makes {\log^{k_r} \frac{x}{d} \ll \varepsilon^{k_r} \log^{k_r} x \leq \varepsilon^2 \log^{k_r} x}, where we use the crucial hypothesis {k_r > 1} to gain enough powers of {\varepsilon} to make the argument here work. Applying Lemma 2, we reduce to showing that

\displaystyle \sum_{n \in I} \sum_{d|n: d \geq x^{1-4\varepsilon}} \nu_\varepsilon(n) a_n \ll \frac{1+o(1)}{\varepsilon \log x} C |I|.

We can make the change of variables {d \mapsto n/d} to flip the sum

\displaystyle \sum_{d|n: d \geq x^{1-4\varepsilon}} 1 \leq \sum_{d|n: d \leq x^{4\varepsilon}} 1

and then swap the sums to reduce to showing that

\displaystyle \sum_{d \leq x^{4\varepsilon}} \sum_{n \in I: d|n} \nu_\varepsilon(n) a_n \ll \frac{1+o(1)}{\varepsilon \log x} C |I|.

By Lemma 3, it suffices to show that

\displaystyle \sum_{d \leq x^{4\varepsilon}} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 )^2 )}{d} \ll 1.

To prove this, we use the Rankin trick, bounding the implied weight {1_{d \leq x^{4\varepsilon}}} by {O( \frac{1}{d^{1/\varepsilon \log x}} )} (note that {d^{-1/\varepsilon \log x} \geq x^{-4\varepsilon/\varepsilon \log x} = e^{-4}} in this range). We can then bound the left-hand side by the Euler product

\displaystyle \prod_p (1 + O( \frac{\min( \frac{\log p}{\varepsilon \log x}, 1 )^2}{p^{1+1/\varepsilon \log x}} ))

which can be bounded by

\displaystyle \exp( O( \sum_p \frac{\min( \frac{\log p}{\varepsilon \log x}, 1 )^2}{p^{1+1/\varepsilon \log x}} ) )

and the claim follows from Mertens’ theorem.
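
In its simplest form, the Rankin trick bounds the indicator {1_{d \leq y}} by {(y/d)^{1/\log y} = e \cdot d^{-1/\log y}}. Here is a toy numerical illustration of that instance (mine; I approximate {\zeta(1+s)} by the first two terms {\frac{1}{s} + \gamma} of its Laurent expansion):

    from math import exp, log

    y = 10 ** 5
    s = 1.0 / log(y)
    lhs = sum(1.0 / d for d in range(1, y + 1))  # harmonic sum ~ log y + gamma
    rhs = exp(1) * (1.0 / s + 0.5772156649)      # e * zeta(1 + s), via Laurent expansion
    print(lhs, rhs, lhs <= rhs)                  # ~12.09  ~32.86  True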

Finally, we show (17). By (11), the left-hand side expands as

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} \eta_\varepsilon(\frac{\log d}{\log x}) \sum_{n \in I: d|n} \nu_\varepsilon(n) a_n.

We let {\delta>0} be a small constant to be chosen later. We divide the outer sum into two ranges, depending on whether {d} only has prime factors greater than {x^\delta} or not. In the former case, we can apply (27) to write this contribution as

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} \eta_\varepsilon(\frac{\log d}{\log x}) \frac{1+o(1)}{d} \frac{C|I|}{\varepsilon \log x} (\int_0^1 \psi'(u)^2\ du) G(1)

plus a negligible error, where the {d} is implicitly restricted to numbers with all prime factors greater than {x^\delta}. The main term is messy, but it is of the required form {C Q'_{\varepsilon,x} G(1)} up to an acceptable error, so there is no need to compute it any further. It remains to consider those {d} that have at least one prime factor less than {x^\delta}. Here we use (24) (instead of (27)), together with the bound on {\mu_{\vec k}} from Lemma 2, to dominate this contribution by

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} O( \log^{|\vec k|} x \frac{C|I|}{\varepsilon \log x} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 )^2 )}{d} )

up to negligible errors, where {d} is now restricted to have at least one prime factor less than {x^\delta}. This forces at least one of the factors {\min( \frac{\log p}{\varepsilon \log x}, 1 )} to be at most {O_\varepsilon(\delta)}. A routine application of Rankin’s trick shows that

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 ) )}{d} \ll_\varepsilon 1

and so the total contribution of this case is {O_\varepsilon((\delta+o(1)) |I| \log^{|\vec k|-1} x)}. Since {\delta>0} can be made arbitrarily small, (17) follows.

— 2. Weierstrass approximation —

Having proved Theorem 1, we now take linear combinations of this theorem, combined with the Weierstrass approximation theorem, to give the asymptotics (7), (8) described in the introduction.

Let {a_n}, {g}, {C}, {G} be as in that theorem. It will be convenient to normalise the weights {\Lambda_{\vec k}} by {L^{1-|\vec k|}} to make their mean value comparable to {1}. From Theorem 1 and summation by parts we have

\displaystyle \sum_{n \leq x} L^{1-|\vec k|} \Lambda_{\vec k}(n) a_n = (G(1)+o(1)) \frac{\prod_{i=1}^r k_i!}{(|\vec k|-1)!} C x \ \ \ \ \ (31)

 

whenever {\vec k} does not consist entirely of ones.

We now take a closer look at what happens when {\vec k} does consist entirely of ones. Let {1^r} denote the {r}-tuple {(1,\dots,1)}. Convolving the {k=1} case of (30) with {r-1} copies of {\Lambda} for some {r \geq 1} and using the Leibniz rule, we see that

\displaystyle \Lambda_{(1^{r-1}, 2)} = \frac{1}{r} L \Lambda_{1^r} + \Lambda_{1^{r+1}}

and hence

\displaystyle L^{-r} \Lambda_{1^{r+1}} = L^{-r} \Lambda_{(1^{r-1},2)} - \frac{1}{r} L^{1-r} \Lambda_{1^r}.

Multiplying by {a_n} and summing over {n \leq x}, and using (31) to control the {\Lambda_{(1^{r-1},2)}} term, one has

\displaystyle \sum_{n \leq x} L^{-r} \Lambda_{1^{r+1}}(n) a_n = (G(1)+o(1)) \frac{2}{r!} C x - \frac{1}{r} \sum_{n \leq x} L^{1-r} \Lambda_{1^{r}}(n) a_n.

If we define {\delta_x} (up to an error of {o(1)}) by the formula

\displaystyle \sum_{n \leq x} \Lambda(n) a_n = (\delta_x G(1) + o(1)) C x

then an induction shows that

\displaystyle \sum_{n \leq x} L^{1-r} \Lambda_{1^r}(n) a_n = \frac{1}{(r-1)!} (\delta_x G(1) + o(1)) C x

for odd {r}, and

\displaystyle \sum_{n \leq x} L^{1-r} \Lambda_{1^r}(n) a_n = \frac{1}{(r-1)!} ((2-\delta_x) G(1) + o(1)) C x

for even {r}. In particular, after adjusting {\delta_x} by {o(1)} if necessary, we have {0 \leq \delta_x \leq 2} since the left-hand sides are non-negative.
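
The induction is mechanical enough to delegate to a computer: writing {T_r} for the normalised sums {\frac{1}{Cx} \sum_{n \leq x} L^{1-r} \Lambda_{1^r}(n) a_n} and discarding the {o(1)} errors, the recursion and its solution can be checked as follows (sketch mine):

    from math import factorial

    G1, delta = 1.0, 0.7  # toy values; any 0 <= delta <= 2 behaves the same way
    T = delta * G1        # T_1 = delta * G(1)
    for r in range(1, 8):
        expected = (delta if r % 2 else 2 - delta) * G1 / factorial(r - 1)
        assert abs(T - expected) < 1e-12, (r, T, expected)
        T = 2 * G1 / factorial(r) - T / r  # T_{r+1} = 2 G(1)/r! - T_r / r
    print("recursion consistent for r = 1,...,7")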

If we now define the comparison sequence {b_n := C G(1) (1 + (1-\delta_x) \mu(n))}, standard multiplicative number theory shows that the above estimates also hold when {a_n} is replaced by {b_n}; thus

\displaystyle \sum_{n \leq x} L^{1-r} \Lambda_{1^r}(n) a_n = \sum_{n \leq x} L^{1-r} \Lambda_{1^r}(n) b_n + o( x )

for both odd and even {r}. The bound (31) also holds for {b_n} when {\vec k} does not consist entirely of ones, and hence

\displaystyle \sum_{n \leq x} L^{1-|\vec k|} \Lambda_{\vec k}(n) a_n = \sum_{n \leq x} L^{1-|\vec k|} \Lambda_{\vec k}(n) b_n + o( x )

for any fixed {\vec k} (which may or may not consist entirely of ones).

Next, from induction (on {j_1+\dots+j_r}), the Leibniz rule, and (30), we see that for any {r \geq 1} and {j_1,\dots,j_r \geq 0}, {k_1,\dots,k_r}, the function

\displaystyle L^{1-j_1-\dots-j_r-|\vec k|} ((L^{j_1} \Lambda_{k_1}) * \dots * (L^{j_r} \Lambda_{k_r})) \ \ \ \ \ (32)

 

is a finite linear combination of functions of the form {L^{1-|\vec k'|} \Lambda_{\vec k'}} for tuples {\vec k'} that may possibly consist entirely of ones. We thus have

\displaystyle \sum_{n \leq x} f(n) a_n = \sum_{n \leq x}f(n) b_n + o( x )

whenever {f} is one of these functions (32). Specialising to the case {k_1=\dots=k_r=1}, we thus have

\displaystyle \sum_{n_1 \dots n_r \leq x} a_{n} \log^{1-r} n \prod_{i=1}^r (\log n_i/\log n)^{j_i} \Lambda(n_i)

\displaystyle = \sum_{n_1 \dots n_r \leq x} b_{n} \log^{1-r} n \prod_{i=1}^r (\log n_i/\log n)^{j_i} \Lambda(n_i) + o(x )

where {n := n_1 \dots n_r}. The contribution of those {n_i} that are powers of primes can be easily seen to be negligible, leading to

\displaystyle \sum_{p_1 \dots p_r \leq x} a_{n} \log n \prod_{i=1}^r (\log p_i/\log n)^{j_i+1}

\displaystyle = \sum_{p_1 \dots p_r \leq x} b_{n} \log n \prod_{i=1}^r (\log p_i/\log n)^{j_i+1} + o(x)

where now {n := p_1 \dots p_r}. The contribution of the case where two of the primes {p_i} agree can also be seen to be negligible, as can the error when replacing {\log n} with {\log x}, and then by symmetry

\displaystyle \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} a_{n} \prod_{i=1}^r (\log p_i/\log n)^{j_i+1}

\displaystyle = \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} b_{n} \prod_{i=1}^r (\log p_i/\log n)^{j_i+1} + o(x / \log x).

By linearity, this implies that

\displaystyle \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} a_{n} P( \log p_1/\log n, \dots, \log p_r/\log n)

\displaystyle = \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} b_{n} P( \log p_1/\log n, \dots, \log p_r/\log n) + o(x / \log x)

for any polynomial {P(t_1,\dots,t_r)} that vanishes on the coordinate hyperplanes {t_i=0}. The right-hand side can also be evaluated by Mertens’ theorem as

\displaystyle CG(1) \delta_x (\int_{\Delta_r} P) \frac{x}{\log x} + o(\frac{x}{\log x})

when {r} is odd and

\displaystyle CG(1) (2-\delta_x) (\int_{\Delta_r} P) \frac{x}{\log x} + o(\frac{x}{\log x})

when {r} is even. Using the Weierstrass approximation theorem, we then have

\displaystyle \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} a_{n} g_r( \log p_1/\log n, \dots, \log p_r/\log n)

\displaystyle = \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} b_{n} g_r( \log p_1/\log n, \dots, \log p_r/\log n) + o(x / \log x)

for any continuous function {g_r} that is compactly supported in the interior of {\Delta_r}. Computing the right-hand side using Mertens’ theorem as before, we obtain the claimed asymptotics (7), (8).

Remark 4 The Bombieri asymptotic sieve has to use the full power of EH (or GEH); there are constructions due to Ford that show that if one only has a distributional hypothesis up to {x^{1-c}} for some fixed constant {c>0}, then the asymptotics of sums such as (5), or more generally (9), are not determined by a single scalar parameter {\delta_x}, but can also vary in other ways as well. Thus the Bombieri asymptotic sieve really is asymptotic; in order to get {o(1)} type error terms one needs the level {1-\varepsilon} of distribution to be asymptotically equal to {1} as {x \rightarrow \infty}. Related to this, the quantitative decay of the {o(1)} error terms in the Bombieri asymptotic sieve is extremely poor; in particular, it depends on the dependence of the implied constant in axiom (iv) on the parameters {\varepsilon,A}, for which there is no consensus on what one should conjecturally expect.
