You are currently browsing the monthly archive for September 2014.

The (presumably) final article arising from the Polymath8 project has now been uploaded to the arXiv as “The “bounded gaps between primes” Polymath project – a retrospective“.  This article, submitted to the Newsletter of the European Mathematical Society, consists of personal contributions from ten different participants (at varying stages of career, and intensities of participation) on their own experiences with the project, and some thoughts as to what lessons to draw for any subsequent Polymath projects.  (At present, I do not know of any such projects being proposed, but from recent experience I would imagine that some opportunity suitable for a Polymath approach will present itself at some point in the near future.)

This post will also serve as the latest (and probably last) of the Polymath8 threads (rolling over this previous post), to wrap up any remaining discussion about any aspect of this project.

Analytic number theory is often concerned with the asymptotic behaviour of various arithmetic functions: functions {f: {\bf N} \rightarrow {\bf R}} or {f: {\bf N} \rightarrow {\bf C}} from the natural numbers {{\bf N} = \{1,2,\dots\}} to the real numbers {{\bf R}} or complex numbers {{\bf C}}. In this post, we will focus on the purely algebraic properties of these functions, and for reasons that will become clear later, it will be convenient to generalise the notion of an arithmetic function to functions {f: {\bf N} \rightarrow R} taking values in some abstract commutative ring {R}. In this setting, we can add or multiply two arithmetic functions {f,g: {\bf N} \rightarrow R} to obtain further arithmetic functions {f+g, fg: {\bf N} \rightarrow R}, and we can also form the Dirichlet convolution {f*g: {\bf N} \rightarrow R} by the usual formula

\displaystyle  f*g(n) := \sum_{d|n} f(d) g(\frac{n}{d}).

Regardless of which commutative ring {R} is used here, we observe that Dirichlet convolution is commutative, associative, and bilinear over {R}.
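
As a concrete illustration, here is a minimal Python sketch of Dirichlet convolution (the helper names {divisors} and {dconv} are my own), together with a numerical check of commutativity, associativity, and the identity role of {\delta} on a small initial segment:

```python
def divisors(n):
    """All divisors of n (naive trial division)."""
    return [d for d in range(1, n + 1) if n % d == 0]

def dconv(f, g):
    """Dirichlet convolution: (f*g)(n) = sum over d|n of f(d) g(n/d)."""
    return lambda n: sum(f(d) * g(n // d) for d in divisors(n))

one = lambda n: 1                    # the constant function 1
idn = lambda n: n                    # the linear function n
delta = lambda n: 1 if n == 1 else 0 # the Kronecker delta

for n in range(1, 60):
    assert dconv(one, idn)(n) == dconv(idn, one)(n)                          # commutative
    assert dconv(dconv(one, one), idn)(n) == dconv(one, dconv(one, idn))(n)  # associative
    assert dconv(delta, idn)(n) == n                                         # delta is the convolution identity
```

(For instance, {dconv(one, one)(12)} returns {6}, the number of divisors of {12}.)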

An important class of arithmetic functions in analytic number theory are the multiplicative functions, that is to say the arithmetic functions {f: {\bf N} \rightarrow R} such that {f(1)=1} and

\displaystyle  f(nm) = f(n) f(m)

for all coprime {n,m \in {\bf N}}. A subclass of these functions are the completely multiplicative functions, in which the restriction that {n,m} be coprime is dropped. Basic examples of completely multiplicative functions (in the classical setting {R={\bf C}}) include

  • the Kronecker delta {\delta}, defined by setting {\delta(n)=1} for {n=1} and {\delta(n)=0} otherwise;
  • the constant function {1: n \mapsto 1} and the linear function {n \mapsto n} (which by abuse of notation we denote by {n});
  • more generally monomials {n \mapsto n^s} for any fixed complex number {s} (in particular, the “Archimedean characters” {n \mapsto n^{it}} for any fixed {t \in {\bf R}}), which by abuse of notation we denote by {n^s};
  • Dirichlet characters {\chi};
  • the Liouville function {\lambda};
  • the indicator function of the {z}-smooth numbers (numbers whose prime factors are all at most {z}), for some given {z}; and
  • the indicator function of the {z}-rough numbers (numbers whose prime factors are all greater than {z}), for some given {z}.
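
To illustrate the dropped coprimality condition, one can verify complete multiplicativity of the Liouville function numerically; the sketch below (the helper {liouville} is my own naive implementation) checks {\lambda(nm)=\lambda(n)\lambda(m)} for all small {n,m}, not just coprime pairs:

```python
def liouville(n):
    """Liouville function lambda(n) = (-1)^Omega(n), where Omega counts
    prime factors with multiplicity (naive trial division)."""
    sign, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            sign, n = -sign, n // d
        d += 1
    if n > 1:
        sign = -sign
    return sign

# complete multiplicativity: the coprimality restriction can be dropped
for n in range(1, 40):
    for m in range(1, 40):
        assert liouville(n * m) == liouville(n) * liouville(m)
```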

Examples of multiplicative functions that are not completely multiplicative include

  • the Möbius function {\mu};
  • the divisor function {\tau := 1 * 1} (the number of divisors of {n}); and
  • the Euler totient function {\phi}.

These multiplicative functions interact well with the multiplication and convolution operations: if {f,g: {\bf N} \rightarrow R} are multiplicative, then so are {fg} and {f * g}, and if {\psi} is completely multiplicative, then we also have

\displaystyle  \psi (f*g) = (\psi f) * (\psi g). \ \ \ \ \ (1)

Finally, the product of completely multiplicative functions is again completely multiplicative. On the other hand, the sum of two multiplicative functions will never be multiplicative (just look at what happens at {n=1}), and the convolution of two completely multiplicative functions will usually just be multiplicative rather than completely multiplicative.
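
The identity (1) is easy to test numerically; the sketch below (with a hypothetical helper {dconv} for Dirichlet convolution) takes {\psi} to be the completely multiplicative linear function {n \mapsto n} and two arbitrary arithmetic functions {f,g}:

```python
def dconv(f, g):
    """Dirichlet convolution: (f*g)(n) = sum over d|n of f(d) g(n/d)."""
    return lambda n: sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

psi = lambda n: n           # a completely multiplicative function
f = lambda n: n * n - 3     # two arbitrary arithmetic functions
g = lambda n: (-1) ** n

lhs = lambda n: psi(n) * dconv(f, g)(n)
rhs = dconv(lambda n: psi(n) * f(n), lambda n: psi(n) * g(n))

# psi (f*g) = (psi f) * (psi g), using psi(n) = psi(d) psi(n/d) for d | n
for n in range(1, 200):
    assert lhs(n) == rhs(n)
```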

The specific multiplicative functions listed above are also related to each other by various important identities, for instance

\displaystyle  \delta * f = f; \quad \mu * 1 = \delta; \quad 1 * 1 = \tau; \quad \phi * 1 = n

where {f} is an arbitrary arithmetic function.
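
These identities can likewise be confirmed numerically on an initial segment of the natural numbers; the helpers below (my own naming) implement {\mu} and {\phi} from scratch via trial-division factorisation:

```python
def factorize(n):
    """Prime factorisation of n as a dict {p: exponent} (trial division)."""
    fac, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            fac[d] = fac.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        fac[n] = fac.get(n, 0) + 1
    return fac

def mobius(n):
    fac = factorize(n)
    return 0 if any(e > 1 for e in fac.values()) else (-1) ** len(fac)

def totient(n):
    result = n
    for p in factorize(n):
        result = result // p * (p - 1)
    return result

def dconv(f, g):
    return lambda n: sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

delta = lambda n: 1 if n == 1 else 0
one = lambda n: 1

for n in range(1, 120):
    assert dconv(delta, totient)(n) == totient(n)                            # delta * f = f
    assert dconv(mobius, one)(n) == delta(n)                                 # mu * 1 = delta
    assert dconv(totient, one)(n) == n                                       # phi * 1 = n
    assert dconv(one, one)(n) == sum(1 for d in range(1, n + 1) if n % d == 0)  # 1 * 1 = tau
```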

On the other hand, analytic number theory is also very interested in certain arithmetic functions that are not exactly multiplicative (and certainly not completely multiplicative). One particularly important such function is the von Mangoldt function {\Lambda}. This function is certainly not multiplicative, but is clearly closely related to such functions via such identities as {\Lambda = \mu * L} and {L = \Lambda * 1}, where {L: n\mapsto \log n} is the natural logarithm function. The purpose of this post is to point out that functions such as the von Mangoldt function lie in a class closely related to multiplicative functions, which I will call the derived multiplicative functions. More precisely:

Definition 1 A derived multiplicative function {f: {\bf N} \rightarrow R} is an arithmetic function that can be expressed as the formal derivative

\displaystyle  f(n) = \frac{d}{d\varepsilon} F_\varepsilon(n) |_{\varepsilon=0}

at the origin of a family {F_\varepsilon: {\bf N} \rightarrow R} of multiplicative functions parameterised by a formal parameter {\varepsilon}. Equivalently, {f: {\bf N} \rightarrow R} is a derived multiplicative function if it is the {\varepsilon} coefficient of a multiplicative function in the extension {R[\varepsilon]/(\varepsilon^2)} of {R} by a nilpotent infinitesimal {\varepsilon}; in other words, there exists an arithmetic function {F: {\bf N} \rightarrow R} such that the arithmetic function {F + \varepsilon f: {\bf N} \rightarrow R[\varepsilon]/(\varepsilon^2)} is multiplicative, or equivalently that {F} is multiplicative and one has the Leibniz rule

\displaystyle  f(nm) = f(n) F(m) + F(n) f(m) \ \ \ \ \ (2)

for all coprime {n,m \in {\bf N}}.

More generally, for any {k\geq 0}, a {k}-derived multiplicative function {f: {\bf N} \rightarrow R} is an arithmetic function that can be expressed as the formal derivative

\displaystyle  f(n) = \frac{d^k}{d\varepsilon_1 \dots d\varepsilon_k} F_{\varepsilon_1,\dots,\varepsilon_k}(n) |_{\varepsilon_1,\dots,\varepsilon_k=0}

at the origin of a family {F_{\varepsilon_1,\dots,\varepsilon_k}: {\bf N} \rightarrow R} of multiplicative functions parameterised by formal parameters {\varepsilon_1,\dots,\varepsilon_k}. Equivalently, {f} is the {\varepsilon_1 \dots \varepsilon_k} coefficient of a multiplicative function in the extension {R[\varepsilon_1,\dots,\varepsilon_k]/(\varepsilon_1^2,\dots,\varepsilon_k^2)} of {R} by {k} nilpotent infinitesimals {\varepsilon_1,\dots,\varepsilon_k}.

We define the notion of a {k}-derived completely multiplicative function similarly by replacing “multiplicative” with “completely multiplicative” in the above discussion.

There are Leibniz rules similar to (2) but they are harder to state; for instance, a doubly derived multiplicative function {f: {\bf N} \rightarrow R} comes with singly derived multiplicative functions {F_1, F_2: {\bf N} \rightarrow R} and a multiplicative function {G: {\bf N} \rightarrow R} such that

\displaystyle  f(nm) = f(n) G(m) + F_1(n) F_2(m) + F_2(n) F_1(m) + G(n) f(m)

for all coprime {n,m \in {\bf N}}.

One can then check that the von Mangoldt function {\Lambda} is a derived multiplicative function, because {\delta + \varepsilon \Lambda} is multiplicative in the ring {{\bf C}[\varepsilon]/(\varepsilon^2)} with one infinitesimal {\varepsilon}. Similarly, the logarithm function {L} is derived completely multiplicative because {\exp( \varepsilon L ) := 1 + \varepsilon L} is completely multiplicative in {{\bf C}[\varepsilon]/(\varepsilon^2)}. More generally, any additive function {\omega: {\bf N} \rightarrow R} is derived multiplicative because it is the top order coefficient of {\exp(\varepsilon \omega) := 1 + \varepsilon \omega}.
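
The dual-number viewpoint is easy to simulate: the sketch below models {{\bf C}[\varepsilon]/(\varepsilon^2)} with pairs {(a,b)} representing {a + b\varepsilon} (the class name {Dual} and helpers are my own), and checks that {\delta + \varepsilon \Lambda} is multiplicative on coprime pairs:

```python
from math import gcd, log, isclose

class Dual:
    """Element a + b*eps of the ring R[eps]/(eps^2)."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __mul__(self, other):
        # (a1 + b1 eps)(a2 + b2 eps) = a1 a2 + (a1 b2 + b1 a2) eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def vonmangoldt(n):
    """log p if n is a prime power p^k (k >= 1), else 0."""
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return 0.0

def F(n):
    """The function delta + eps * Lambda, valued in the dual numbers."""
    return Dual(1 if n == 1 else 0, vonmangoldt(n))

# multiplicativity over the dual numbers, checked on coprime pairs
for n in range(1, 40):
    for m in range(1, 40):
        if gcd(n, m) == 1:
            prod = F(n) * F(m)
            assert F(n * m).a == prod.a
            assert isclose(F(n * m).b, prod.b, abs_tol=1e-12)
```

The {\varepsilon} coefficient of the multiplicativity relation is exactly the Leibniz rule (2) with {F = \delta}.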

Remark 1 One can also phrase these concepts in terms of the formal Dirichlet series {F(s) = \sum_n \frac{f(n)}{n^s}} associated to an arithmetic function {f}. A function {f} is multiplicative if {F} admits a (formal) Euler product; {f} is derived multiplicative if {F} is the (formal) first derivative of an Euler product with respect to some parameter (not necessarily {s}, although this is certainly an option); and so forth.

Using the definition of a {k}-derived multiplicative function as the top order coefficient of a multiplicative function of a ring with {k} infinitesimals, it is easy to see that the product or convolution of a {k}-derived multiplicative function {f: {\bf N} \rightarrow R} and an {l}-derived multiplicative function {g: {\bf N} \rightarrow R} is necessarily a {k+l}-derived multiplicative function (again taking values in {R}). Thus, for instance, the higher-order von Mangoldt functions {\Lambda_k := \mu * L^k} are {k}-derived multiplicative functions, because {L^k} is a {k}-derived completely multiplicative function. More explicitly, {L^k} is the top order coefficient of the completely multiplicative function {\prod_{i=1}^k \exp(\varepsilon_i L)}, and {\Lambda_k} is the top order coefficient of the multiplicative function {\mu * \prod_{i=1}^k \exp(\varepsilon_i L)}, with both functions taking values in the ring {{\bf C}[\varepsilon_1,\dots,\varepsilon_k]/(\varepsilon_1^2,\dots,\varepsilon_k^2)} of complex numbers with {k} infinitesimals {\varepsilon_1,\dots,\varepsilon_k} attached.

It then turns out that most (if not all) of the basic identities used by analytic number theorists concerning derived multiplicative functions can in fact be viewed as coefficients of identities involving purely multiplicative functions, with the latter identities being provable primarily from multiplicative identities, such as (1). This phenomenon is analogous to the one in linear algebra discussed in this previous blog post, in which many of the trace identities used there are derivatives of determinant identities. For instance, the Leibniz rule

\displaystyle  L (f * g) = (Lf)*g + f*(Lg)

for any arithmetic functions {f,g} can be viewed as the top order term in

\displaystyle  \exp(\varepsilon L) (f*g) = (\exp(\varepsilon L) f) * (\exp(\varepsilon L) g)

in the ring with one infinitesimal {\varepsilon}, and then we see that the Leibniz rule is a special case (or a derivative) of (1), since {\exp(\varepsilon L)} is completely multiplicative. Similarly, the formulae

\displaystyle  \Lambda = \mu * L; \quad L = \Lambda * 1

are top order terms of

\displaystyle  (\delta + \varepsilon \Lambda) = \mu * \exp(\varepsilon L); \quad \exp(\varepsilon L) = (\delta + \varepsilon \Lambda) * 1,

and the variant formula {\Lambda = - (L\mu) * 1} is the top order term of

\displaystyle  (\delta + \varepsilon \Lambda) = (\exp(-\varepsilon L)\mu) * 1,

which can then be deduced from the previous identities by noting that the completely multiplicative function {\exp(-\varepsilon L)} inverts {\exp(\varepsilon L)} multiplicatively, and also noting that {L} annihilates {\mu*1=\delta}. The Selberg symmetry formula

\displaystyle  \Lambda_2 = \Lambda*\Lambda + \Lambda L, \ \ \ \ \ (3)

which plays a key role in the Erdös-Selberg elementary proof of the prime number theorem (as discussed in this previous blog post), is the top order term of the identity

\displaystyle  \delta + \varepsilon_1 \Lambda + \varepsilon_2 \Lambda + \varepsilon_1\varepsilon_2 \Lambda_2 = (\exp(\varepsilon_2 L) (\delta + \varepsilon_1 \Lambda)) * (\delta + \varepsilon_2 \Lambda)

involving the multiplicative functions {\delta + \varepsilon_1 \Lambda + \varepsilon_2 \Lambda + \varepsilon_1\varepsilon_2 \Lambda_2}, {\exp(\varepsilon_2 L)}, {\delta+\varepsilon_1 \Lambda}, {\delta+\varepsilon_2 \Lambda} with two infinitesimals {\varepsilon_1,\varepsilon_2}, and this identity can be proven while staying purely within the realm of multiplicative functions, by using the identities

\displaystyle  \delta + \varepsilon_1 \Lambda + \varepsilon_2 \Lambda + \varepsilon_1\varepsilon_2 \Lambda_2 = \mu * (\exp(\varepsilon_1 L) \exp(\varepsilon_2 L))

\displaystyle  \exp(\varepsilon_1 L) = 1 * (\delta + \varepsilon_1 \Lambda)

\displaystyle  \delta + \varepsilon_2 \Lambda = \mu * \exp(\varepsilon_2 L)

and (1). Similarly for higher identities such as

\displaystyle  \Lambda_3 = \Lambda L^2 + 3 \Lambda L * \Lambda + \Lambda * \Lambda * \Lambda

which arise from expanding out {\mu * (\exp(\varepsilon_1 L) \exp(\varepsilon_2 L) \exp(\varepsilon_3 L))} using (1) and the above identities; we leave this as an exercise to the interested reader.
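
The Selberg symmetry formula (3) can also be checked numerically, computing {\Lambda_2 = \mu * L^2} directly and comparing against {\Lambda*\Lambda + \Lambda L}; the helpers below are my own naive implementations:

```python
from math import log, isclose

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    fac, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            fac[d] = fac.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        fac[n] = fac.get(n, 0) + 1
    return 0 if any(e > 1 for e in fac.values()) else (-1) ** len(fac)

def vonmangoldt(n):
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return 0.0

def Lambda2(n):
    """Lambda_2 = mu * L^2."""
    return sum(mobius(d) * log(n // d) ** 2 for d in divisors(n))

def selberg_rhs(n):
    """Lambda * Lambda + Lambda L."""
    conv = sum(vonmangoldt(d) * vonmangoldt(n // d) for d in divisors(n))
    return conv + vonmangoldt(n) * log(n)

for n in range(1, 200):
    assert isclose(Lambda2(n), selberg_rhs(n), abs_tol=1e-9)
```

(For {n=6}, both sides equal {2 \log 2 \log 3}, coming entirely from the {\Lambda*\Lambda} term.)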

An analogous phenomenon arises for identities that are not purely multiplicative in nature due to the presence of truncations, such as the Vaughan identity

\displaystyle  \Lambda_{> V} = \mu_{\leq U} * L - \mu_{\leq U} * \Lambda_{\leq V} * 1 + \mu_{>U} * \Lambda_{>V} * 1 \ \ \ \ \ (4)

for any {U,V \geq 1}, where {f_{>V} = f 1_{>V}} is the restriction of an arithmetic function {f} to the natural numbers greater than {V}, and similarly for {f_{\leq V}}, {f_{>U}}, {f_{\leq U}}. In this particular case, (4) is the top order coefficient of the identity

\displaystyle  (\delta + \varepsilon \Lambda)_{>V} = \mu_{\leq U} * \exp(\varepsilon L) - \mu_{\leq U} * (\delta + \varepsilon \Lambda)_{\leq V} * 1

\displaystyle + \mu_{>U} * (\delta+\varepsilon \Lambda)_{>V} * 1

which can be easily derived from the identities {\delta = \mu_{\leq U} * 1 + \mu_{>U} * 1} and {\exp(\varepsilon L) = (\delta + \varepsilon \Lambda)_{>V} * 1 + (\delta + \varepsilon \Lambda)_{\leq V} * 1}. Similarly for the Heath-Brown identity

\displaystyle  \Lambda = \sum_{j=1}^K (-1)^{j-1} \binom{K}{j} \mu_{\leq U}^{*j} * 1^{*j-1} * L \ \ \ \ \ (5)

valid for natural numbers up to {U^K}, where {U \geq 1} and {K \geq 1} are arbitrary parameters and {f^{*j}} denotes the {j}-fold convolution of {f}, and discussed in this previous blog post; this is the top order coefficient of

\displaystyle  \delta + \varepsilon \Lambda = \sum_{j=1}^K (-1)^{j-1} \binom{K}{j} \mu_{\leq U}^{*j} * 1^{*j-1} * \exp( \varepsilon L )

and arises by first observing that

\displaystyle  (\mu - \mu_{\leq U})^{*K} * 1^{*K-1} * \exp(\varepsilon L) = \mu_{>U}^{*K} * 1^{*K-1} * \exp( \varepsilon L )

vanishes up to {U^K}, and then expanding the left-hand side using the binomial formula and the identity {\mu^{*K} * 1^{*K-1} * \exp(\varepsilon L) = \delta + \varepsilon \Lambda}.
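
As a sanity check, the Vaughan identity (4) can be verified numerically for small parameters; in fact it holds exactly for every {n} and all {U,V \geq 1}, as one can also see by writing {\mu = \mu_{\leq U} + \mu_{>U}} and {\Lambda = \Lambda_{\leq V} + \Lambda_{>V}}. The sketch below reuses naive helpers of my own:

```python
from math import log, isclose

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    fac, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            fac[d] = fac.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        fac[n] = fac.get(n, 0) + 1
    return 0 if any(e > 1 for e in fac.values()) else (-1) ** len(fac)

def vonmangoldt(n):
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return 0.0

def dconv(f, g):
    return lambda n: sum(f(d) * g(n // d) for d in divisors(n))

U, V = 5, 7  # arbitrary small truncation parameters
mu_le = lambda m: mobius(m) if m <= U else 0
mu_gt = lambda m: mobius(m) if m > U else 0
lam_le = lambda m: vonmangoldt(m) if m <= V else 0.0
lam_gt = lambda m: vonmangoldt(m) if m > V else 0.0
one = lambda m: 1
L = lambda m: log(m)

def vaughan_rhs(n):
    return (dconv(mu_le, L)(n)
            - dconv(dconv(mu_le, lam_le), one)(n)
            + dconv(dconv(mu_gt, lam_gt), one)(n))

for n in range(1, 300):
    assert isclose(lam_gt(n), vaughan_rhs(n), abs_tol=1e-9)
```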

One consequence of this phenomenon is that identities involving derived multiplicative functions tend to have a dimensional consistency property: all terms in the identity have the same order of derivation in them. For instance, all the terms in the Selberg symmetry formula (3) are doubly derived functions, all the terms in the Vaughan identity (4) or the Heath-Brown identity (5) are singly derived functions, and so forth. One can then use dimensional analysis to help ensure that one has written down a key identity involving such functions correctly, much as is done in physics.

In addition to the dimensional analysis arising from the order of derivation, there is another dimensional analysis coming from the value of multiplicative functions at primes {p} (which is more or less equivalent to the order of pole of the Dirichlet series at {s=1}). Let us say that a multiplicative function {f: {\bf N} \rightarrow R} has a pole of order {j} if one has {f(p)=j} on the average for primes {p}, where we will be a bit vague as to what “on the average” means as it usually does not matter in applications. Thus for instance, {1} or {\exp(\varepsilon L)} has a pole of order {1} (a simple pole), {\delta} or {\delta + \varepsilon \Lambda} has a pole of order {0} (i.e. neither a zero nor a pole), Dirichlet characters also have a pole of order {0} (although this is slightly nontrivial, requiring Dirichlet’s theorem), {\mu} has a pole of order {-1} (a simple zero), {\tau} has a pole of order {2}, and so forth. Note that the convolution of a multiplicative function with a pole of order {j} with a multiplicative function with a pole of order {j'} will be a multiplicative function with a pole of order {j+j'}. If there is no oscillation in the primes {p} (e.g. if {f(p)=j} for all primes {p}, rather than on the average), it is also true that the product of a multiplicative function with a pole of order {j} with a multiplicative function with a pole of order {j'} will be a multiplicative function with a pole of order {jj'}. The situation is significantly different though in the presence of oscillation; for instance, if {\chi} is a quadratic character then {\chi^2} has a pole of order {1} even though {\chi} has a pole of order {0}.

A {k}-derived multiplicative function will then be said to have an underived pole of order {j} if it is the top order coefficient of a multiplicative function with a pole of order {j}; in terms of Dirichlet series, this roughly means that the Dirichlet series has a pole of order {j+k} at {s=1}. For instance, the singly derived multiplicative function {\Lambda} has an underived pole of order {0}, because it is the top order coefficient of {\delta + \varepsilon \Lambda}, which has a pole of order {0}; similarly {L} has an underived pole of order {1}, being the top order coefficient of {\exp(\varepsilon L)}. More generally, {\Lambda_k} and {L^k} have underived poles of order {0} and {1} respectively for any {k}.

By taking top order coefficients, we then see that the convolution of a {k}-derived multiplicative function with underived pole of order {j} and a {k'}-derived multiplicative function with underived pole of order {j'} is a {k+k'}-derived multiplicative function with underived pole of order {j+j'}. If there is no oscillation in the primes, the product of these functions will similarly have an underived pole of order {jj'}, for instance {\Lambda L} has an underived pole of order {0}. We then have the dimensional consistency property that in any of the standard identities involving derived multiplicative functions, all terms not only have the same derived order, but also the same underived pole order. For instance, in (3), (4), (5) all terms have underived pole order {0} (with any Möbius function terms being counterbalanced by a matching term of {1} or {L}). This gives a second way to use dimensional analysis as a consistency check. For instance, any identity that involves a linear combination of {\mu_{\leq U} * L} and {\Lambda_{>V} * 1} is suspect because the underived pole orders do not match (being {0} and {1} respectively), even though the derived orders match (both are {1}).

One caveat, though: this latter dimensional consistency breaks down for identities that involve infinitely many terms, such as Linnik’s identity

\displaystyle  \Lambda = \sum_{i=0}^\infty (-1)^{i} L * 1_{>1}^{*i}.

In this case, one can still rewrite things in terms of multiplicative functions as

\displaystyle  \delta + \varepsilon \Lambda = \sum_{i=0}^\infty (-1)^i \exp(\varepsilon L) * 1_{>1}^{*i},

so the former dimensional consistency is still maintained.
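
Linnik's identity can also be tested numerically: for each fixed {n} only finitely many terms contribute, since {1_{>1}^{*i}(m)} counts ordered factorisations of {m} into {i} parts each greater than {1}, and this vanishes for {m \leq n} once {2^i > n}. A sketch (helper names my own):

```python
from math import log, isclose

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def vonmangoldt(n):
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return 0.0

def ordered_factorizations(n, i):
    """1_{>1}^{*i}(n): ordered factorisations of n into i parts, each > 1."""
    if i == 0:
        return 1 if n == 1 else 0
    return sum(ordered_factorizations(n // d, i - 1) for d in divisors(n) if d > 1)

def linnik(n):
    """Sum of (-1)^i (L * 1_{>1}^{*i})(n); the series terminates once 2^i > n."""
    total, i = 0.0, 0
    while 2 ** i <= n:
        term = sum(log(d) * ordered_factorizations(n // d, i) for d in divisors(n))
        total += (-1) ** i * term
        i += 1
    return total

for n in range(1, 120):
    assert isclose(linnik(n), vonmangoldt(n), abs_tol=1e-9)
```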

I thank Andrew Granville, Kannan Soundararajan, and Emmanuel Kowalski for helpful conversations on these topics.

Tamar Ziegler and I have just uploaded to the arXiv our paper “Narrow progressions in the primes“, submitted to the special issue “Analytic Number Theory” in honor of the 60th birthday of Helmut Maier. The results here are vaguely reminiscent of the recent progress on bounded gaps in the primes, but use different methods.

About a decade ago, Ben Green and I showed that the primes contained arbitrarily long arithmetic progressions: given any {k}, one could find a progression {n, n+r, \dots, n+(k-1)r} with {r>0} consisting entirely of primes. In fact we showed the same statement was true if the primes were replaced by any subset of the primes of positive relative density.

A little while later, Tamar Ziegler and I obtained the following generalisation: given any {k} and any polynomials {P_1,\dots,P_k: {\bf Z} \rightarrow {\bf Z}} with {P_1(0)=\dots=P_k(0)}, one could find a “polynomial progression” {n+P_1(r),\dots,n+P_k(r)} with {r>0} consisting entirely of primes. Furthermore, we could make this progression somewhat “narrow” by taking {r = n^{o(1)}} (where {o(1)} denotes a quantity that goes to zero as {n} goes to infinity). Again, the same statement also applies if the primes were replaced by a subset of positive relative density. My previous result with Ben corresponds to the linear case {P_i(r) = (i-1)r}.

In this paper we were able to make the progressions a bit narrower still: given any {k} and any polynomials {P_1,\dots,P_k: {\bf Z} \rightarrow {\bf Z}} with {P_1(0)=\dots=P_k(0)}, one could find a “polynomial progression” {n+P_1(r),\dots,n+P_k(r)} with {r>0} consisting entirely of primes, and such that {r \leq \log^L n}, where {L} depends only on {k} and {P_1,\dots,P_k} (in fact it depends only on {k} and the degrees of {P_1,\dots,P_k}). The result is still true if the primes are replaced by a subset of positive density {\delta}, but unfortunately in our arguments we must then let {L} depend on {\delta}. However, in the linear case {P_i(r) = (i-1)r}, we were able to make {L} independent of {\delta} (although it is still somewhat large, of the order of {k 2^k}).

The polylogarithmic factor is somewhat necessary: using an upper bound sieve, one can easily construct a subset of the primes of density, say, {90\%}, whose arithmetic progressions {n,n+r,\dots,n+(k-1)r} of length {k} all obey the lower bound {r \gg \log^{k-1} n}. On the other hand, the prime tuples conjecture predicts that if one works with the actual primes rather than dense subsets of the primes, then one should have infinitely many length {k} arithmetic progressions of bounded width for any fixed {k}. The {k=2} case of this is precisely the celebrated theorem of Yitang Zhang that was the focus of the recently concluded Polymath8 project here. The higher {k} case is conjecturally true, but appears to be out of reach of known methods. (Using the multidimensional Selberg sieve of Maynard, one can get {m} primes inside an interval of length {O( \exp(O(m)) )}, but this is such a sparse set of primes that one would not expect to find even a progression of length three within such an interval.)

The argument in the previous paper was unable to obtain a polylogarithmic bound on the width of the progressions, due to the reliance on a certain technical “correlation condition” on a certain Selberg sieve weight {\nu}. This correlation condition required one to control arbitrarily long correlations of {\nu}, which was not compatible with a bounded value of {L} (particularly if one wanted to keep {L} independent of {\delta}).

However, thanks to recent advances in this area by Conlon, Fox, and Zhao (who introduced a very nice “densification” technique), it is now possible (in principle, at least) to delete this correlation condition from the arguments. Conlon-Fox-Zhao did this for my original theorem with Ben; and in the current paper we apply the densification method to our previous argument to similarly remove the correlation condition. This method does not fully eliminate the need to control arbitrarily long correlations, but allows most of the factors in such a long correlation to be bounded, rather than merely controlled by an unbounded weight such as {\nu}. This turns out to be significantly easier to control, although in the non-linear case we still unfortunately had to make {L} large compared to {\delta} due to a certain “clearing denominators” step arising from the complicated nature of the Gowers-type uniformity norms that we were using to control polynomial averages. We believe though that this is an artefact of our method, and one should be able to prove our theorem with an {L} that is uniform in {\delta}.

Here is a simple instance of the densification trick in action. Suppose that one wishes to establish an estimate of the form

\displaystyle  {\bf E}_n {\bf E}_r f(n) g(n+r) h(n+r^2) = o(1) \ \ \ \ \ (1)

for some real-valued functions {f,g,h} which are bounded in magnitude by a weight function {\nu}, but which are not expected to be bounded; this average will naturally arise when trying to locate the pattern {(n,n+r,n+r^2)} in a set such as the primes. Here I will be vague as to exactly what range the parameters {n,r} are being averaged over. Suppose that the factor {g} (say) has enough uniformity that one can already show a smallness bound

\displaystyle  {\bf E}_n {\bf E}_r F(n) g(n+r) H(n+r^2) = o(1) \ \ \ \ \ (2)

whenever {F, H} are bounded functions. (One should think of {F,H} as being like the indicator functions of “dense” sets, in contrast to {f,h} which are like the normalised indicator functions of “sparse” sets). The bound (2) cannot be directly applied to control (1) because of the unbounded (or “sparse”) nature of {f} and {h}. However one can “densify” {f} and {h} as follows. Since {f} is bounded in magnitude by {\nu}, we can bound the left-hand side of (1) as

\displaystyle  {\bf E}_n \nu(n) | {\bf E}_r g(n+r) h(n+r^2) |.

The weight function {\nu} will be normalised so that {{\bf E}_n \nu(n) = O(1)}, so by the Cauchy-Schwarz inequality it suffices to show that

\displaystyle  {\bf E}_n \nu(n) | {\bf E}_r g(n+r) h(n+r^2) |^2 = o(1).

The left-hand side expands as

\displaystyle  {\bf E}_n {\bf E}_r {\bf E}_s \nu(n) g(n+r) h(n+r^2) g(n+s) h(n+s^2).

Now, it turns out that after an enormous (but finite) number of applications of the Cauchy-Schwarz inequality to steadily eliminate the {g,h} factors, as well as a certain “polynomial forms condition” hypothesis on {\nu}, one can show that

\displaystyle  {\bf E}_n {\bf E}_r {\bf E}_s (\nu-1)(n) g(n+r) h(n+r^2) g(n+s) h(n+s^2) = o(1).

(Because of the polynomial shifts, this requires a method known as “PET induction”, but let me skip over this point here.) In view of this estimate, we now just need to show that

\displaystyle  {\bf E}_n {\bf E}_r {\bf E}_s g(n+r) h(n+r^2) g(n+s) h(n+s^2) = o(1).

Now we can reverse the previous steps. First, we collapse back to

\displaystyle  {\bf E}_n | {\bf E}_r g(n+r) h(n+r^2) |^2 = o(1).

One can bound {|{\bf E}_r g(n+r) h(n+r^2)|} by {{\bf E}_r \nu(n+r) \nu(n+r^2)}, which can be shown to be “bounded on average” in a suitable sense (e.g. bounded {L^4} norm) via the aforementioned polynomial forms condition. Because of this and the Hölder inequality, the above estimate is equivalent to

\displaystyle  {\bf E}_n | {\bf E}_r g(n+r) h(n+r^2) | = o(1).

By setting {F} to be the signum of {{\bf E}_r g(n+r) h(n+r^2)}, this is equivalent to

\displaystyle  {\bf E}_n {\bf E}_r F(n) g(n+r) h(n+r^2) = o(1).

This is halfway between (1) and (2); the sparsely supported function {f} has been replaced by its “densification” {F}, but we have not yet densified {h} to {H}. However, one can shift {n} by {r^2} and repeat the above arguments to achieve a similar densification of {h}, at which point one has reduced (1) to (2).
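
The first reduction above (bounding the left-hand side of (1) by a weighted {L^2} average via Cauchy-Schwarz) can be sanity-checked numerically on a toy model; everything below (the choice of the cyclic group {{\bf Z}/N{\bf Z}}, the random weight {\nu}, and the functions {f,g,h} bounded by {\nu}) is an artificial example of my own, not the actual setup of the paper:

```python
import random
from math import sqrt

N = 101  # toy cyclic group Z/NZ; all averages below are over this group
E = lambda vals: sum(vals) / len(vals)

random.seed(0)
# a nonnegative "weight" nu with E[nu] = O(1), and f, g, h bounded by nu
nu = [random.choice([0.0, 4.0]) for _ in range(N)]
f = [random.uniform(-1, 1) * nu[n] for n in range(N)]
g = [random.uniform(-1, 1) * nu[n] for n in range(N)]
h = [random.uniform(-1, 1) * nu[n] for n in range(N)]

# the inner average A(n) = E_r g(n+r) h(n+r^2)
inner = [E([g[(n + r) % N] * h[(n + r * r) % N] for r in range(N)])
         for n in range(N)]

lhs = abs(E([f[n] * inner[n] for n in range(N)]))
# step 1: |E_n f(n) A(n)| <= E_n nu(n) |A(n)|, since |f| <= nu
step1 = E([nu[n] * abs(inner[n]) for n in range(N)])
# step 2 (Cauchy-Schwarz): E_n nu |A| <= (E nu)^{1/2} (E nu |A|^2)^{1/2}
step2 = sqrt(E(nu)) * sqrt(E([nu[n] * inner[n] ** 2 for n in range(N)]))

assert lhs <= step1 + 1e-9
assert step1 <= step2 + 1e-9
```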
