In this blog post, I would like to specialise the arguments of Bourgain, Demeter, and Guth from the previous post to the two-dimensional case of the Vinogradov main conjecture, namely

Theorem 1 (Two-dimensional Vinogradov main conjecture) One has

\displaystyle  \int_{[0,1]^2} |\sum_{j=0}^N e( j x + j^2 y)|^6\ dx dy \ll N^{3+o(1)}

as {N \rightarrow \infty}.

This particular case of the main conjecture has a classical proof using some elementary number theory. Indeed, the left-hand side can be viewed as the number of solutions to the system of equations

\displaystyle  j_1 + j_2 + j_3 = k_1 + k_2 + k_3

\displaystyle  j_1^2 + j_2^2 + j_3^2 = k_1^2 + k_2^2 + k_3^2

with {j_1,j_2,j_3,k_1,k_2,k_3 \in \{0,\dots,N\}}. These two equations can be combined (using the algebraic identity {(a+b-c)^2 - (a^2+b^2-c^2) = 2 (a-c)(b-c)} applied to {(a,b,c) = (j_1,j_2,k_3), (k_1,k_2,j_3)}) to imply the further equation

\displaystyle  (j_1 - k_3) (j_2 - k_3) = (k_1 - j_3) (k_2 - j_3)

which, when combined with the divisor bound, shows that each choice of {k_1,k_2,j_3} is associated to {O(N^{o(1)})} choices of {j_1,j_2,k_3}, excluding diagonal cases when two of the {j_1,j_2,j_3,k_1,k_2,k_3} collide, and this easily yields Theorem 1. However, the Bourgain-Demeter-Guth argument (which, in the two-dimensional case, is essentially contained in a previous paper of Bourgain and Demeter) does not require the divisor bound, and extends for instance to the more general case where {j} ranges over a {1}-separated set of reals between {0} and {N}.
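
(Though it is not needed for the argument, one can verify this counting interpretation numerically; the following quick sketch, with arbitrarily chosen small values of {N}, tabulates triples by the signature {(a+b+c, a^2+b^2+c^2)} and counts solutions as a sum of squared multiplicities.)

```python
# Count solutions to j1+j2+j3 = k1+k2+k3, j1^2+j2^2+j3^2 = k1^2+k2^2+k3^2
# with all variables in {0,...,N}, by grouping triples according to the
# signature (sum, sum of squares): the number of solutions is the sum of
# the squared multiplicities, and should grow like N^{3+o(1)}.
from collections import Counter
from itertools import product

def vinogradov_count(N):
    sig = Counter((a + b + c, a * a + b * b + c * c)
                  for a, b, c in product(range(N + 1), repeat=3))
    return sum(m * m for m in sig.values())

for N in (10, 20, 40, 80):
    J = vinogradov_count(N)
    print(N, J, J / N ** 3)   # the ratio J/N^3 should grow only very slowly
```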

In this special case, the Bourgain-Demeter argument simplifies, as the lower dimensional inductive hypothesis becomes a simple {L^2} almost orthogonality claim, and the multilinear Kakeya estimate needed is also easy (collapsing to just Fubini’s theorem). Also one can work entirely in the context of the Vinogradov main conjecture, and not turn to the increased generality of decoupling inequalities (though this additional generality is convenient in higher dimensions). As such, I am presenting this special case as an introduction to the Bourgain-Demeter-Guth machinery.

We now give the specialisation of the Bourgain-Demeter argument to Theorem 1. It will suffice to establish the bound

\displaystyle  \int_{[0,1]^2} |\sum_{j=0}^N e( j x + j^2 y)|^p\ dx dy \ll N^{p/2+o(1)}

for all {4<p<6} (where we keep {p} fixed and send {N} to infinity), as the {L^6} bound then follows by combining the above bound with the trivial bound {|\sum_{j=0}^N e( j x + j^2 y)| \ll N}. Accordingly, for any {\eta > 0} and {4<p<6}, we let {P(p,\eta)} denote the claim that

\displaystyle  \int_{[0,1]^2} |\sum_{j=0}^N e( j x + j^2 y)|^p\ dx dy \ll N^{p/2+\eta+o(1)}

as {N \rightarrow \infty}. Clearly, for any fixed {p}, {P(p,\eta)} holds for some large {\eta}, and it will suffice to establish

Proposition 2 Let {4<p<6}, and let {\eta>0} be such that {P(p,\eta)} holds. Then there exists {0 < \eta' < \eta} such that {P(p,\eta')} holds.

Indeed, this proposition shows that for {4<p<6}, the infimum of the {\eta} for which {P(p,\eta)} holds is zero.

We prove the proposition below the fold, using a simplified form of the methods discussed in the previous blog post. To simplify the exposition we will be a bit cavalier with the uncertainty principle, for instance by essentially ignoring the tails of rapidly decreasing functions.


Given any finite collection of elements {(f_i)_{i \in I}} in some Banach space {X}, the triangle inequality tells us that

\displaystyle \| \sum_{i \in I} f_i \|_X \leq \sum_{i \in I} \|f_i\|_X.

However, when the {f_i} all “oscillate in different ways”, one expects to improve substantially upon the triangle inequality. For instance, if {X} is a Hilbert space and the {f_i} are mutually orthogonal, we have the Pythagorean theorem

\displaystyle \| \sum_{i \in I} f_i \|_X = (\sum_{i \in I} \|f_i\|_X^2)^{1/2}.

For sake of comparison, from the triangle inequality and Cauchy-Schwarz one has the general inequality

\displaystyle \| \sum_{i \in I} f_i \|_X \leq (\# I)^{1/2} (\sum_{i \in I} \|f_i\|_X^2)^{1/2} \ \ \ \ \ (1)

 

for any finite collection {(f_i)_{i \in I}} in any Banach space {X}, where {\# I} denotes the cardinality of {I}. Thus orthogonality in a Hilbert space yields “square root cancellation”, saving a factor of {(\# I)^{1/2}} or so over the trivial bound coming from the triangle inequality.

More generally, let us somewhat informally say that a collection {(f_i)_{i \in I}} exhibits decoupling in {X} if one has the Pythagorean-like inequality

\displaystyle \| \sum_{i \in I} f_i \|_X \ll_\varepsilon (\# I)^\varepsilon (\sum_{i \in I} \|f_i\|_X^2)^{1/2}

for any {\varepsilon>0}, thus one obtains almost the full square root cancellation in the {X} norm. The theory of almost orthogonality can then be viewed as the theory of decoupling in Hilbert spaces such as {L^2({\bf R}^n)}. In {L^p} spaces for {p < 2} one usually does not expect this sort of decoupling; for instance, if the {f_i} are disjointly supported one has

\displaystyle \| \sum_{i \in I} f_i \|_{L^p} = (\sum_{i \in I} \|f_i\|_{L^p}^p)^{1/p}

and the right-hand side can be much larger than {(\sum_{i \in I} \|f_i\|_{L^p}^2)^{1/2}} when {p < 2}. At the opposite extreme, one usually does not expect to get decoupling in {L^\infty}, since one could conceivably align the {f_i} to all attain a maximum magnitude at the same location with the same phase, at which point the triangle inequality in {L^\infty} becomes sharp.

However, in some cases one can get decoupling for certain {2 < p < \infty}. For instance, suppose we are in {L^4}, and that {f_1,\dots,f_N} are bi-orthogonal in the sense that the products {f_i f_j} for {1 \leq i < j \leq N} are pairwise orthogonal in {L^2}. Then we have

\displaystyle \| \sum_{i = 1}^N f_i \|_{L^4}^2 = \| (\sum_{i=1}^N f_i)^2 \|_{L^2}

\displaystyle = \| \sum_{1 \leq i,j \leq N} f_i f_j \|_{L^2}

\displaystyle \ll (\sum_{1 \leq i,j \leq N} \|f_i f_j \|_{L^2}^2)^{1/2}

\displaystyle = \| (\sum_{1 \leq i,j \leq N} |f_i f_j|^2)^{1/2} \|_{L^2}

\displaystyle = \| \sum_{i=1}^N |f_i|^2 \|_{L^2}

\displaystyle \leq \sum_{i=1}^N \| |f_i|^2 \|_{L^2}

\displaystyle = \sum_{i=1}^N \|f_i\|_{L^4}^2

giving decoupling in {L^4}. (Similarly if each of the {f_i f_j} is orthogonal to all but {O_\varepsilon( N^\varepsilon )} of the other {f_{i'} f_{j'}}.) A similar argument also gives {L^6} decoupling when one has tri-orthogonality (with the {f_i f_j f_k} mostly orthogonal to each other), and so forth. As a slight variant, Khintchine’s inequality also indicates that decoupling should occur for any fixed {2 < p < \infty} if one multiplies each of the {f_i} by an independent random sign {\epsilon_i \in \{-1,+1\}}.
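
(As a quick numerical illustration of the {L^4} computation above: take the lacunary frequencies {2,4,\dots,2^N}, for which the pairwise sums {2^i + 2^j}, {i<j}, are all distinct, so that the products {f_i f_j} are pairwise orthogonal in {L^2([0,1])}; the parameters below are arbitrary choices for the sketch.)

```python
# With f_i(x) = e(2^i x), the products f_i f_j (i < j) have distinct
# frequencies 2^i + 2^j and are thus orthogonal in L^2([0,1]); the L^4 norm
# of the sum should then exhibit near square-root cancellation.
import numpy as np

N = 8
x = np.linspace(0, 1, 1 << 14, endpoint=False)
S = sum(np.exp(2j * np.pi * (1 << i) * x) for i in range(1, N + 1))

L4 = np.mean(np.abs(S) ** 4) ** 0.25
print("||sum f_i||_{L^4}         =", L4)          # comparable to sqrt(N)
print("triangle inequality bound =", float(N))    # each ||f_i||_{L^4} = 1
print("(sum ||f_i||^2)^{1/2}     =", np.sqrt(N))  # the decoupling benchmark
```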

In recent years, Bourgain and Demeter have been establishing decoupling theorems in {L^p({\bf R}^n)} spaces for various key exponents of {2 < p < \infty}, in the “restriction theory” setting in which the {f_i} are Fourier transforms of measures supported on different portions of a given surface or curve; this builds upon the earlier decoupling theorems of Wolff. In a recent paper with Guth, they established the following decoupling theorem for the curve {\gamma({\bf R}) \subset {\bf R}^n} parameterised by the polynomial map

\displaystyle \gamma: t \mapsto (t, t^2, \dots, t^n).

For any ball {B = B(x_0,r)} in {{\bf R}^n}, let {w_B: {\bf R}^n \rightarrow {\bf R}^+} denote the weight

\displaystyle w_B(x) := \frac{1}{(1 + \frac{|x-x_0|}{r})^{100n}},

which should be viewed as a smoothed out version of the indicator function {1_B} of {B}. In particular, the space {L^p(w_B) = L^p({\bf R}^n, w_B(x)\ dx)} can be viewed as a smoothed out version of the space {L^p(B)}. For future reference we observe a fundamental self-similarity of the curve {\gamma({\bf R})}: any arc {\gamma(I)} in this curve, with {I} a compact interval, is affinely equivalent to the standard arc {\gamma([0,1])}.

Theorem 1 (Decoupling theorem) Let {n \geq 1}. Subdivide the unit interval {[0,1]} into {N} equal subintervals {I_i} of length {1/N}, and for each such {I_i}, let {f_i: {\bf R}^n \rightarrow {\bf C}} be the Fourier transform

\displaystyle f_i(x) = \int_{\gamma(I_i)} e(x \cdot \xi)\ d\mu_i(\xi)

of a finite Borel measure {\mu_i} on the arc {\gamma(I_i)}, where {e(\theta) := e^{2\pi i \theta}}. Then the {f_i} exhibit decoupling in {L^{n(n+1)}(w_B)} for any ball {B} of radius {N^n}.

Orthogonality gives the {n=1} case of this theorem. The bi-orthogonality type arguments sketched earlier only give decoupling in {L^p} up to the range {2 \leq p \leq 2n}; the point here is that we can now get the much larger exponent {p = n(n+1)}. The {n=2} case of this theorem was previously established by Bourgain and Demeter (who obtained in fact an analogous theorem for any curved hypersurface). The exponent {n(n+1)} (and the radius {N^n}) is best possible, as can be seen by the following basic example. If

\displaystyle f_i(x) := \int_{I_i} e(x \cdot \gamma(\xi)) g_i(\xi)\ d\xi

where {g_i} is a bump function adapted to {I_i}, then standard Fourier-analytic computations show that {f_i} will be comparable to {1/N} on a rectangular box of dimensions {N \times N^2 \times \dots \times N^n} (and thus volume {N^{n(n+1)/2}}) centred at the origin, and exhibit decay away from this box, with {\|f_i\|_{L^{n(n+1)}(w_B)}} comparable to

\displaystyle 1/N \times (N^{n(n+1)/2})^{1/(n(n+1))} = 1/\sqrt{N}.

On the other hand, {\sum_{i=1}^N f_i} is comparable to {1} on a ball of radius comparable to {1} centred at the origin, so {\|\sum_{i=1}^N f_i\|_{L^{n(n+1)}(w_B)}} is {\gg 1}, which is just barely consistent with decoupling. This calculation shows that decoupling will fail if {n(n+1)} is replaced by any larger exponent, and also if the radius of the ball {B} is reduced to be significantly smaller than {N^n}.

This theorem has the following consequence of importance in analytic number theory:

Corollary 2 (Vinogradov main conjecture) Let {s, n, N \geq 1} be integers, and let {\varepsilon > 0}. Then

\displaystyle \int_{[0,1]^n} |\sum_{j=1}^N e( j x_1 + j^2 x_2 + \dots + j^n x_n)|^{2s}\ dx_1 \dots dx_n

\displaystyle \ll_{\varepsilon,s,n} N^{s+\varepsilon} + N^{2s - \frac{n(n+1)}{2}+\varepsilon}.

Proof: By the Hölder inequality (and the trivial bound of {N} for the exponential sum), it suffices to treat the critical case {s = n(n+1)/2}, that is to say to show that

\displaystyle \int_{[0,1]^n} |\sum_{j=1}^N e( j x_1 + j^2 x_2 + \dots + j^n x_n)|^{n(n+1)}\ dx_1 \dots dx_n \ll_{\varepsilon,n} N^{\frac{n(n+1)}{2}+\varepsilon}.

We can rescale this as

\displaystyle \int_{[0,N] \times [0,N^2] \times \dots \times [0,N^n]} |\sum_{j=1}^N e( x \cdot \gamma(j/N) )|^{n(n+1)}\ dx \ll_{\varepsilon,n} N^{n(n+1)+\varepsilon}.

As the integrand is periodic along the lattice {N{\bf Z} \times N^2 {\bf Z} \times \dots \times N^n {\bf Z}}, this is equivalent to

\displaystyle \int_{[0,N^n]^n} |\sum_{j=1}^N e( x \cdot \gamma(j/N) )|^{n(n+1)}\ dx \ll_{\varepsilon,n} N^{\frac{n(n+1)}{2}+n^2+\varepsilon}.

The left-hand side may be bounded by {\ll \| \sum_{j=1}^N f_j \|_{L^{n(n+1)}(w_B)}^{n(n+1)}}, where {B := B(0,N^n)} and {f_j(x) := e(x \cdot \gamma(j/N))}. Since

\displaystyle \| f_j \|_{L^{n(n+1)}(w_B)} \ll (N^{n^2})^{\frac{1}{n(n+1)}},

the claim now follows from the decoupling theorem and a brief calculation. \Box
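
(For the reader’s convenience, here is the brief calculation spelled out: the decoupling theorem gives

\displaystyle \| \sum_{j=1}^N f_j \|_{L^{n(n+1)}(w_B)} \ll_\varepsilon N^\varepsilon (\sum_{j=1}^N \|f_j\|_{L^{n(n+1)}(w_B)}^2)^{1/2} \ll N^\varepsilon (N \cdot (N^{n^2})^{\frac{2}{n(n+1)}})^{1/2},

and raising both sides to the power {n(n+1)} yields

\displaystyle \| \sum_{j=1}^N f_j \|_{L^{n(n+1)}(w_B)}^{n(n+1)} \ll N^{\frac{n(n+1)}{2} + n^2 + \varepsilon n(n+1)},

which is the desired bound since {\varepsilon > 0} is arbitrary.)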

Using the Plancherel formula, one may equivalently (when {s} is an integer) write the Vinogradov main conjecture in terms of solutions {j_1,\dots,j_s,k_1,\dots,k_s \in \{1,\dots,N\}} to the system of equations

\displaystyle j_1^i + \dots + j_s^i = k_1^i + \dots + k_s^i \hbox{ for all } i=1,\dots,n,

but we will not use this formulation here.

A history of the Vinogradov main conjecture may be found in this survey of Wooley; prior to the Bourgain-Demeter-Guth theorem, the conjecture was solved completely for {n \leq 3}, or for {n > 3} and {s} either below {n(n+1)/2 - n/3 + O(n^{2/3})} or above {n(n-1)}, with the bulk of recent progress coming from the efficient congruencing technique of Wooley. It has numerous applications to exponential sums, Waring’s problem, and the zeta function; to give just one application, the main conjecture implies the predicted asymptotic for the number of ways to express a large number as the sum of {23} fifth powers (the previous best result required {28} fifth powers). The Bourgain-Demeter-Guth approach to the Vinogradov main conjecture, based on decoupling, is ostensibly very different from the efficient congruencing technique, which relies heavily on the arithmetic structure of the problem, but it appears (as I have been told from second-hand sources) that the two methods are actually closely related, with the former being a sort of “Archimedean” version of the latter (with the intervals {I_i} in the decoupling theorem being analogous to congruence classes in the efficient congruencing method); hopefully there will be some future work making this connection more precise. One advantage of the decoupling approach is that it generalises to non-arithmetic settings in which the set {\{1,\dots,N\}} that {j} is drawn from is replaced by some other similarly separated set of real numbers. (A random thought – could this allow the Vinogradov-Korobov bounds on the zeta function to extend to Beurling zeta functions?)

Below the fold we sketch the Bourgain-Demeter-Guth argument proving Theorem 1.

I thank Jean Bourgain and Andrew Granville for helpful discussions.


Let {\lambda} denote the Liouville function. The prime number theorem is equivalent to the estimate

\displaystyle \sum_{n \leq x} \lambda(n) = o(x)

as {x \rightarrow \infty}, that is to say that {\lambda} exhibits cancellation on large intervals such as {[1,x]}. This result can be improved to give cancellation on shorter intervals. For instance, using the known zero density estimates for the Riemann zeta function, one can establish that

\displaystyle \int_X^{2X} |\sum_{x \leq n \leq x+H} \lambda(n)|\ dx = o( HX ) \ \ \ \ \ (1)

 

as {X \rightarrow \infty} if {X^{1/6+\varepsilon} \leq H \leq X} for some fixed {\varepsilon>0}; I believe this result is due to Ramachandra (see also Exercise 21 of this previous blog post), and in fact one could obtain a better error term on the right-hand side that, for instance, gains an arbitrary power of {\log X}. On the Riemann hypothesis (or the weaker density hypothesis), it was known that the {X^{1/6+\varepsilon}} could be lowered to {X^\varepsilon}.
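
(As a numerical aside, the cancellation in (1) is already visible at modest heights; the following sketch, with the arbitrary choices {X = 10^5} and {H = 100}, sieves {\Omega(n)} to compute {\lambda(n) = (-1)^{\Omega(n)}}.)

```python
# Average |sum_{x < n <= x+H} lambda(n)| over X <= x < 2X, computing the
# Liouville function by sieving Omega(n), the number of prime factors
# counted with multiplicity.
import numpy as np

X, H = 10 ** 5, 100
M = 2 * X + H + 1
omega = np.zeros(M, dtype=np.int64)
for p in range(2, M):
    if omega[p] == 0:          # p is prime: count each prime power p^k
        pk = p
        while pk < M:
            omega[pk::pk] += 1
            pk *= p
lam = (-1) ** omega            # lambda(n) = (-1)^Omega(n)
lam[0] = 0
c = np.cumsum(lam)
short_sums = np.abs(c[X + H:2 * X + H] - c[X:2 * X])
print(short_sums.mean() / H)   # noticeably smaller than 1: cancellation
```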

Early this year, there was a major breakthrough by Matomaki and Radziwill, who (among other things) showed that the asymptotic (1) was in fact valid for any {H = H(X)} with {H \leq X} that went to infinity as {X \rightarrow \infty}, thus yielding cancellation on extremely short intervals. This has many further applications; for instance, this estimate, or more precisely its extension to other “non-pretentious” bounded multiplicative functions, was a key ingredient in my recent solution of the Erdös discrepancy problem, as well as in obtaining logarithmically averaged cases of Chowla’s conjecture, such as

\displaystyle \sum_{n \leq x} \frac{\lambda(n) \lambda(n+1)}{n} = o(\log x). \ \ \ \ \ (2)

 

It is of interest to twist the above estimates by phases such as the linear phase {n \mapsto e(\alpha n) := e^{2\pi i \alpha n}}. In 1937, Davenport showed that

\displaystyle \sup_\alpha |\sum_{n \leq x} \lambda(n) e(\alpha n)| \ll_A x \log^{-A} x

for any fixed {A > 0}, which of course improves on the prime number theorem. Recently, with Matomaki and Radziwill, we obtained a common generalisation of this estimate and (1), showing that

\displaystyle \sup_\alpha \int_X^{2X} |\sum_{x \leq n \leq x+H} \lambda(n) e(\alpha n)|\ dx = o(HX) \ \ \ \ \ (3)

 

as {X \rightarrow \infty}, for any {H = H(X) \leq X} that went to infinity as {X \rightarrow \infty}. We were able to use this estimate to obtain an averaged form of Chowla’s conjecture.

In that paper, we asked whether one could improve this estimate further by moving the supremum inside the integral, that is to say to establish the bound

\displaystyle \int_X^{2X} \sup_\alpha |\sum_{x \leq n \leq x+H} \lambda(n) e(\alpha n)|\ dx = o(HX) \ \ \ \ \ (4)

 

as {X \rightarrow \infty}, for any {H = H(X) \leq X} that went to infinity as {X \rightarrow \infty}. This bound is asserting that {\lambda} is locally Fourier-uniform on most short intervals; it can be written equivalently in terms of the “local Gowers {U^2} norm” as

\displaystyle \int_X^{2X} \sum_{1 \leq a \leq H} |\sum_{x \leq n \leq x+H} \lambda(n) \lambda(n+a)|^2\ dx = o( H^3 X )

from which one can see that this is another averaged form of Chowla’s conjecture (stronger than the one I was able to prove with Matomaki and Radziwill, but a consequence of the unaveraged Chowla conjecture). If one inserted such a bound into the machinery I used to solve the Erdös discrepancy problem, it should lead to further averaged cases of Chowla’s conjecture, such as

\displaystyle \sum_{n \leq x} \frac{\lambda(n) \lambda(n+1) \lambda(n+2)}{n} = o(\log x), \ \ \ \ \ (5)

 

though I have not fully checked the details of this implication. It should also have a number of new implications for sign patterns of the Liouville function, though we have not explored these in detail yet.

One can write (4) equivalently in the form

\displaystyle \int_X^{2X} \sum_{x \leq n \leq x+H} \lambda(n) e( \alpha(x) n + \beta(x) )\ dx = o(HX) \ \ \ \ \ (6)

 

uniformly for all {x}-dependent phases {\alpha(x), \beta(x)}. In contrast, (3) is equivalent to the subcase of (6) when the linear phase coefficient {\alpha(x)} is independent of {x}. This dependency of {\alpha(x)} on {x} seems to necessitate some highly nontrivial additive combinatorial analysis of the function {x \mapsto \alpha(x)} in order to establish (4) when {H} is small. To date, this analysis has proven to be elusive, but I would like to record what one can do with more classical methods like Vaughan’s identity, namely:

Proposition 1 The estimate (4) (or equivalently (6)) holds in the range {X^{2/3+\varepsilon} \leq H \leq X} for any fixed {\varepsilon>0}. (In fact one can improve the right-hand side by an arbitrary power of {\log X} in this case.)

The values of {H} in this range are far too large to yield implications such as new cases of the Chowla conjecture, but it appears that the {2/3} exponent is the limit of “classical” methods (at least as far as I was able to apply them), in the sense that one does not do any combinatorial analysis on the function {x \mapsto \alpha(x)}, nor does one use modern equidistribution results on “Type III sums” that require deep estimates on Kloosterman-type sums. The latter may shave a little bit off of the {2/3} exponent, but I don’t see how one would ever hope to go below {1/2} without doing some non-trivial combinatorics on the function {x \mapsto \alpha(x)}. UPDATE: I have come across this paper of Zhan which uses mean-value theorems for L-functions to lower the {2/3} exponent to {5/8}.

Let me now sketch the proof of the proposition, omitting many of the technical details. We first remark that known estimates on sums of the Liouville function (or similar functions such as the von Mangoldt function) in short arithmetic progressions, based on zero-density estimates for Dirichlet {L}-functions, can handle the “major arc” case of (4) (or (6)) where {\alpha} is restricted to be of the form {\alpha = \frac{a}{q} + O( X^{-1/6-\varepsilon} )} for {q = O(\log^{O(1)} X)} (the exponent here being of the same numerology as the {X^{1/6+\varepsilon}} exponent in the classical result of Ramachandra, tied to the best zero density estimates currently available); for instance a modification of the arguments in this recent paper of Koukoulopoulos would suffice. Thus we can restrict attention to “minor arc” values of {\alpha} (or {\alpha(x)}, using the interpretation of (6)).

Next, one breaks up {\lambda} (or the closely related Möbius function) into Dirichlet convolutions using one of the standard identities (e.g. Vaughan’s identity or Heath-Brown’s identity), as discussed for instance in this previous post (which is focused more on the von Mangoldt function, but analogous identities exist for the Liouville and Möbius functions). The exact choice of identity is not terribly important, but the upshot is that {\lambda(n)} can be decomposed into {\log^{O(1)} X} terms, each of which is either of the “Type I” form

\displaystyle \sum_{d \sim D; m \sim M: dm=n} a_d

for some coefficients {a_d} that are roughly of logarithmic size on the average, and scales {D, M} with {D \ll X^{2/3}} and {DM \sim X}, or else of the “Type II” form

\displaystyle \sum_{d \sim D; m \sim M: dm=n} a_d b_m

for some coefficients {a_d, b_m} that are roughly of logarithmic size on the average, and scales {D,M} with {X^{1/3} \ll D,M \ll X^{2/3}} and {DM \sim X}. As discussed in the previous post, the {2/3} exponent is a natural barrier in these identities if one is unwilling to also consider “Type III” type terms which are roughly of the shape of the third divisor function {\tau_3(n) := \sum_{d_1d_2d_3=n} 1}.

A Type I sum makes a contribution to { \sum_{x \leq n \leq x+H} \lambda(n) e( \alpha(x) n + \beta(x) )} that can be bounded (via Cauchy-Schwarz) in terms of an expression such as

\displaystyle \sum_{d \sim D} | \sum_{x/d \leq m \leq x/d+H/d} e(\alpha(x) dm )|^2.

The inner sum exhibits a lot of cancellation unless {\alpha(x) d} is within {O(D/H)} of an integer. (Here, “a lot” should be loosely interpreted as “gaining many powers of {\log X} over the trivial bound”.) Since {H} is significantly larger than {D}, standard Vinogradov-type manipulations (see e.g. Lemma 13 of these previous notes) show that this bad case occurs for many {d} only when {\alpha} is “major arc”, which is the case we have specifically excluded. This lets us dispose of the Type I contributions.
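
(The cancellation in the inner sum is just the classical geometric series bound {|\sum_{m=1}^L e(\theta m)| \leq \min(L, \frac{1}{2\|\theta\|})}, with {\|\theta\|} the distance from {\theta} to the nearest integer; here {\theta} plays the role of {\alpha(x) d}. A toy numerical check, with randomly chosen values of {\theta}:)

```python
# Toy check of |sum_{m=1}^{L} e(theta m)| <= min(L, 1/(2 ||theta||)), where
# ||theta|| denotes the distance from theta to the nearest integer.
import numpy as np

def exp_sum(theta, L):
    m = np.arange(1, L + 1)
    return abs(np.exp(2j * np.pi * theta * m).sum())

rng = np.random.default_rng(0)
L = 10 ** 4
for theta in rng.random(5):
    dist = min(theta % 1.0, 1.0 - theta % 1.0)
    print(f"theta={theta:.6f}  sum={exp_sum(theta, L):10.2f}  "
          f"bound={min(L, 1 / (2 * dist)):10.2f}")
```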

A Type II sum makes a contribution to { \sum_{x \leq n \leq x+H} \lambda(n) e( \alpha(x) n + \beta(x) )} roughly of the form

\displaystyle \sum_{d \sim D} | \sum_{x/d \leq m \leq x/d+H/d} b_m e(\alpha(x) dm)|.

We can break this up into a number of sums roughly of the form

\displaystyle \sum_{d = d_0 + O( H / M )} | \sum_{x/d_0 \leq m \leq x/d_0 + H/D} b_m e(\alpha(x) dm)|

for {d_0 \sim D}; note that the {d} range is non-trivial because {H} is much larger than {M}. Applying the usual bilinear sum Cauchy-Schwarz methods (e.g. Theorem 14 of these notes) we conclude that there is a lot of cancellation unless one has {\alpha(x) = a/q + O( \frac{X \log^{O(1)} X}{H^2} )} for some {q = O(\log^{O(1)} X)}. But with {H \geq X^{2/3+\varepsilon}}, {X \log^{O(1)} X/H^2} is well below the threshold {X^{-1/6-\varepsilon}} for the definition of major arc, so we can exclude this case and obtain the required cancellation.

A basic estimate in multiplicative number theory (particularly if one is using the Granville-Soundararajan “pretentious” approach to this subject) is the following inequality of Halasz (formulated here in a quantitative form introduced by Montgomery and Tenenbaum).

Theorem 1 (Halasz inequality) Let {f: {\bf N} \rightarrow {\bf C}} be a multiplicative function bounded in magnitude by {1}, and suppose that {x \geq 3}, {T \geq 1}, and { M \geq 0} are such that

\displaystyle  \sum_{p \leq x} \frac{1 - \hbox{Re}(f(p) p^{-it})}{p} \leq M \ \ \ \ \ (1)

for all real numbers {t} with {|t| \leq T}. Then

\displaystyle  \frac{1}{x} \sum_{n \leq x} f(n) \ll (1+M) e^{-M} + \frac{1}{\sqrt{T}}.

As a qualitative corollary, we conclude (by standard compactness arguments) that if

\displaystyle  \sum_{p} \frac{1 - \hbox{Re}(f(p) p^{-it})}{p} = +\infty

for all real {t}, then

\displaystyle  \frac{1}{x} \sum_{n \leq x} f(n) = o(1) \ \ \ \ \ (2)

as {x \rightarrow \infty}. In the more recent work of this paper of Granville and Soundararajan, the sharper bound

\displaystyle  \frac{1}{x} \sum_{n \leq x} f(n) \ll (1+M) e^{-M} + \frac{1}{T} + \frac{\log\log x}{\log x}

is obtained (with a more precise description of the {(1+M) e^{-M}} term).

The usual proofs of Halasz’s theorem are somewhat lengthy (though there has been a recent simplification, in forthcoming work of Granville, Harper, and Soundararajan). Below the fold I would like to give a relatively short proof of the following “cheap” version of the inequality, which has slightly weaker quantitative bounds, but still suffices to give qualitative conclusions such as (2).

Theorem 2 (Cheap Halasz inequality) Let {f: {\bf N} \rightarrow {\bf C}} be a multiplicative function bounded in magnitude by {1}. Let {T \geq 1} and {M \geq 0}, and suppose that {x} is sufficiently large depending on {T,M}. If (1) holds for all {|t| \leq T}, then

\displaystyle  \frac{1}{x} \sum_{n \leq x} f(n) \ll (1+M) e^{-M/2} + \frac{1}{T}.

The non-optimal exponent {1/2} can probably be improved a bit by being more careful with the exponents, but I did not try to optimise it here. A similar bound appears in the first paper of Halasz on this topic.

The idea of the argument is to split {f} as a Dirichlet convolution {f = f_1 * f_2 * f_3} where {f_1,f_2,f_3} is the portion of {f} coming from “small”, “medium”, and “large” primes respectively (with the dividing line between the three types of primes being given by various powers of {x}). Using a Perron-type formula, one can express this convolution in terms of the product of the Dirichlet series of {f_1,f_2,f_3} respectively at various complex numbers {1+it} with {|t| \leq T}. One can use {L^2}-based estimates to control the Dirichlet series of {f_2,f_3}, while using the hypothesis (1) one can get {L^\infty} estimates on the Dirichlet series of {f_1}. (This is similar to the Fourier-analytic approach to ternary additive problems, such as Vinogradov’s theorem on representing large odd numbers as the sum of three primes.) This idea was inspired by a similar device used in the work of Granville, Harper, and Soundararajan. A variant of this argument also appears in unpublished work of Adam Harper.

I thank Andrew Granville for helpful comments which led to significant simplifications of the argument.


Kevin Ford, James Maynard, and I have uploaded to the arXiv our preprint “Chains of large gaps between primes“. This paper was announced in our previous paper with Konyagin and Green, which was concerned with the largest gap

\displaystyle  G_1(X) := \max_{p_n, p_{n+1} \leq X} (p_{n+1} - p_n)

between consecutive primes up to {X}, in which we improved the Rankin bound of

\displaystyle  G_1(X) \gg \log X \frac{\log_2 X \log_4 X}{(\log_3 X)^2}

to

\displaystyle  G_1(X) \gg \log X \frac{\log_2 X \log_4 X}{\log_3 X}

for large {X} (where we use the abbreviations {\log_2 X := \log\log X}, {\log_3 X := \log\log\log X}, and {\log_4 X := \log\log\log\log X}). Here, we obtain an analogous result for the quantity

\displaystyle  G_k(X) := \max_{p_n, \dots, p_{n+k} \leq X} \min( p_{n+1} - p_n, p_{n+2}-p_{n+1}, \dots, p_{n+k} - p_{n+k-1} )

which measures how far apart the gaps between chains of {k} consecutive primes can be. Our main result is

\displaystyle  G_k(X) \gg \frac{1}{k^2} \log X \frac{\log_2 X \log_4 X}{\log_3 X}

whenever {X} is sufficiently large depending on {k}, with the implied constant here absolute (and effective). The factor of {1/k^2} is inherent to the method, and related to the basic probabilistic fact that if one selects {k} numbers at random from the unit interval {[0,1]}, then one expects the minimum gap between adjacent numbers to be about {1/k^2} (i.e. smaller than the mean spacing of {1/k} by an additional factor of {1/k}).
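
(This heuristic is easy to confirm by simulation; the following throwaway sketch, with arbitrary parameters, estimates the mean minimum gap for several values of {k}.)

```python
# Monte Carlo check: the minimum gap between k uniform random points in
# [0,1] is typically of size about 1/k^2, so k^2 times the mean minimum
# gap should be roughly constant in k.
import numpy as np

rng = np.random.default_rng(1)
for k in (5, 10, 20, 40):
    min_gaps = [np.diff(np.sort(rng.random(k))).min() for _ in range(5000)]
    print(k, np.mean(min_gaps) * k * k)
```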

Our arguments combine those from the previous paper with the matrix method of Maier, who (in our notation) showed that

\displaystyle  G_k(X) \gg_k  \log X \frac{\log_2 X \log_4 X}{(\log_3 X)^2}

for an infinite sequence of {X} going to infinity. (Maier needed to restrict to an infinite sequence to avoid Siegel zeroes, but we are able to resolve this issue by the now standard technique of simply eliminating a prime factor of an exceptional conductor from the sieve-theoretic portion of the argument. As a byproduct, this also makes all of the estimates in our paper effective.)

As its name suggests, the Maier matrix method is usually presented by imagining a matrix of numbers, and using information about the distribution of primes in the columns of this matrix to deduce information about the primes in at least one of the rows of the matrix. We found it convenient to interpret this method in an equivalent probabilistic form as follows. Suppose one wants to find an interval {n+1,\dots,n+y} that contains a block of at least {k} primes, each separated from the next by at least {g} (ultimately, {y} will be something like {\log X \frac{\log_2 X \log_4 X}{\log_3 X}} and {g} something like {y/k^2}). One can do this by the probabilistic method: pick {n} to be a random large natural number {{\mathbf n}} (with the precise distribution to be chosen later), and try to lower bound the probability that the interval {{\mathbf n}+1,\dots,{\mathbf n}+y} contains at least {k} primes, no two of which are within {g} of each other.

By carefully choosing the residue class of {{\mathbf n}} with respect to small primes, one can eliminate several of the {{\mathbf n}+j} from consideration of being prime immediately. For instance, if {{\mathbf n}} is chosen to be large and even, then the {{\mathbf n}+j} with {j} even have no chance of being prime and can thus be eliminated; similarly if {{\mathbf n}} is large and odd, then {{\mathbf n}+j} cannot be prime for any odd {j}. Using the methods of our previous paper, we can find a residue class {m \hbox{ mod } P} (where {P} is a product of a large number of primes) such that, if one chooses {{\mathbf n}} to be a large random element of {m \hbox{ mod } P} (that is, {{\mathbf n} = {\mathbf z} P + m} for some large random integer {{\mathbf z}}), then the set {{\mathcal T}} of shifts {j \in \{1,\dots,y\}} for which {{\mathbf n}+j} still has a chance of being prime has size comparable to something like {k \log X / \log_2 X}; furthermore this set {{\mathcal T}} is fairly well distributed in {\{1,\dots,y\}} in the sense that it does not concentrate too strongly in any short subinterval of {\{1,\dots,y\}}. The main new difficulty, not present in the previous paper, is to get lower bounds on the size of {{\mathcal T}} in addition to upper bounds, but this turns out to be achievable by a suitable modification of the arguments.

Using a version of the prime number theorem in arithmetic progressions due to Gallagher, one can show that for each remaining shift {j \in {\mathcal T}}, {{\mathbf n}+j} is going to be prime with probability comparable to {\log_2 X / \log X}, so one expects about {k} primes in the set {\{{\mathbf n} + j: j \in {\mathcal T}\}}. An upper bound sieve (e.g. the Selberg sieve) also shows that for any distinct {j,j' \in {\mathcal T}}, the probability that {{\mathbf n}+j} and {{\mathbf n}+j'} are both prime is {O( (\log_2 X / \log X)^2 )}. Using this and some routine second moment calculations, one can then show that with large probability, the set {\{{\mathbf n} + j: j \in {\mathcal T}\}} will indeed contain about {k} primes, no two of which are closer than {g} to each other; with no other numbers in this interval being prime, this gives a lower bound on {G_k(X)}.

Klaus Roth, who made fundamental contributions to analytic number theory, died this Tuesday, aged 90.

I never met or communicated with Roth personally, but was certainly influenced by his work; he wrote relatively few papers, but they tended to have outsized impact. For instance, he was one of the key people (together with Bombieri) to work on simplifying and generalising the large sieve, taking it from the technically formidable original formulation of Linnik and Rényi to the clean and general almost orthogonality principle that we have today (discussed for instance in these lecture notes of mine). The paper of Roth that had the most impact on my own personal work was his three-page paper proving what is now known as Roth’s theorem on arithmetic progressions:

Theorem 1 (Roth’s theorem on arithmetic progressions) Let {A} be a set of natural numbers of positive upper density (thus {\limsup_{N \rightarrow\infty} |A \cap \{1,\dots,N\}|/N > 0}). Then {A} contains infinitely many arithmetic progressions {a,a+r,a+2r} of length three (with {r} non-zero of course).

At the heart of Roth’s elegant argument was the following (surprising at the time) dichotomy: if {A} had some moderately large density within some arithmetic progression {P}, either one could use Fourier-analytic methods to detect the presence of an arithmetic progression of length three inside {A \cap P}, or else one could locate a long subprogression {P'} of {P} on which {A} had increased density. Iterating this dichotomy by an argument now known as the density increment argument, one eventually obtains Roth’s theorem, no matter which side of the dichotomy actually holds. This argument (and the many descendants of it), based on various “dichotomies between structure and randomness”, became essential in many other results of this type, most famously perhaps in Szemerédi’s proof of his celebrated theorem on arithmetic progressions that generalised Roth’s theorem to progressions of arbitrary length. More recently, my recent work on the Chowla and Elliott conjectures that was a crucial component of the solution of the Erdös discrepancy problem, relies on an entropy decrement argument which was directly inspired by the density increment argument of Roth.

The Erdös discrepancy problem also is connected with another well known theorem of Roth:

Theorem 2 (Roth’s discrepancy theorem for arithmetic progressions) Let {f(1),\dots,f(n)} be a sequence in {\{-1,+1\}}. Then there exists an arithmetic progression {a+r, a+2r, \dots, a+kr} in {\{1,\dots,n\}} with {r} positive such that

\displaystyle  |\sum_{j=1}^k f(a+jr)| \geq c n^{1/4}

for an absolute constant {c>0}.

In fact, Roth proved a stronger estimate regarding mean square discrepancy, which I am not writing down here; as with the Roth theorem in arithmetic progressions, his proof was short and Fourier-analytic in nature (although non-Fourier-analytic proofs have since been found, for instance the semidefinite programming proof of Lovasz). The exponent {1/4} is known to be sharp (a result of Matousek and Spencer).

As a particular corollary of the above theorem, for an infinite sequence {f(1), f(2), \dots} of signs, the sums {|\sum_{j=1}^k f(a+jr)|} are unbounded in {a,r,k}. The Erdös discrepancy problem asks whether the same statement holds when {a} is restricted to be zero. (Roth also established discrepancy theorems for other sets, such as rectangles, which will not be discussed here.)

Finally, one has to mention Roth’s most famous result, cited for instance in his Fields medal citation:

Theorem 3 (Roth’s theorem on Diophantine approximation) Let {\alpha} be an irrational algebraic number. Then for any {\varepsilon > 0} there is a quantity {c_{\alpha,\varepsilon} > 0} such that

\displaystyle  |\alpha - \frac{a}{q}| > \frac{c_{\alpha,\varepsilon}}{q^{2+\varepsilon}}

for all rational numbers {\frac{a}{q}}.

From the Dirichlet approximation theorem (or from the theory of continued fractions) we know that the exponent {2+\varepsilon} in the denominator cannot be reduced to {2} or below. A classical and easy theorem of Liouville gives the claim with the exponent {2+\varepsilon} replaced by the degree of the algebraic number {\alpha}; work of Thue and Siegel reduced this exponent, but Roth was the one who obtained the near-optimal result. An important point is that the constant {c_{\alpha,\varepsilon}} is ineffective – it is a major open problem in Diophantine approximation to produce any bound significantly stronger than Liouville’s theorem with effective constants. This is because the proof of Roth’s theorem does not exclude any single rational {a/q} from being close to {\alpha}, but instead very ingeniously shows that one cannot have two different rationals {a/q}, {a'/q'} that are unusually close to {\alpha}, even when the denominators {q,q'} are very different in size. (I refer to this sort of argument as a “dueling conspiracies” argument; they are strangely prevalent throughout analytic number theory.)

Chantal David, Andrew Granville, Emmanuel Kowalski, Philippe Michel, Kannan Soundararajan, and I are running a program at MSRI in the Spring of 2017 (more precisely, from Jan 17, 2017 to May 26, 2017) in the area of analytic number theory, with the intention of bringing together many of the leading experts in all aspects of the subject and presenting recent work on the many active areas of the subject (e.g. the distribution of the prime numbers, refinements of the circle method, a deeper understanding of the asymptotics of bounded multiplicative functions (and applications to Erdos discrepancy type problems!) and of the “pretentious” approach to analytic number theory, more “analysis-friendly” formulations of the theorems of Deligne and others involving trace functions over fields, and new subconvexity theorems for automorphic forms, to name a few).  Like any other semester MSRI program, there will be a number of workshops, seminars, and similar activities taking place while the members are in residence.  I’m personally looking forward to the program, which should be occurring in the midst of a particularly productive time for the subject.  Needless to say, I (and the rest of the organising committee) plan to be present for most of the program.

Applications for Postdoctoral Fellowships and Research Memberships for this program (and for other MSRI programs in this time period, namely the companion program in Harmonic Analysis and the Fall program in Geometric Group Theory, as well as the complementary program in all other areas of mathematics) remain open until Dec 1.  Applications are open to everyone, but require supporting documentation, such as a CV, statement of purpose, and letters of recommendation from other mathematicians; see the application page for more details.

The Chowla conjecture asserts, among other things, that one has the asymptotic

\displaystyle \frac{1}{X} \sum_{n \leq X} \lambda(n+h_1) \dots \lambda(n+h_k) = o(1)

as {X \rightarrow \infty} for any distinct integers {h_1,\dots,h_k}, where {\lambda} is the Liouville function. (The usual formulation of the conjecture also allows one to consider more general linear forms {a_i n + b_i} than the shifts {n+h_i}, but for sake of discussion let us focus on the shift case.) This conjecture remains open for {k \geq 2}, though there are now some partial results when one averages either in {x} or in the {h_1,\dots,h_k}, as discussed in this recent post.

A natural generalisation of the Chowla conjecture is the Elliott conjecture. Its original formulation was basically as follows: one had

\displaystyle \frac{1}{X} \sum_{n \leq X} g_1(n+h_1) \dots g_k(n+h_k) = o(1) \ \ \ \ \ (1)

whenever {g_1,\dots,g_k} were bounded completely multiplicative functions and {h_1,\dots,h_k} were distinct integers, and one of the {g_i} was “non-pretentious” in the sense that

\displaystyle \sum_p \frac{1 - \hbox{Re}( g_i(p) \overline{\chi(p)} p^{-it})}{p} = +\infty \ \ \ \ \ (2)

for all Dirichlet characters {\chi} and real numbers {t}. It is easy to see that some condition like (2) is necessary; for instance if {g(n) := \chi(n) n^{it}} and {\chi} has period {q} then {\frac{1}{X} \sum_{n \leq X} g(n+q) \overline{g(n)}} can be verified to be bounded away from zero as {X \rightarrow \infty}.

In a previous paper with Matomaki and Radziwill, we provided a counterexample to the original formulation of the Elliott conjecture, and proposed that (2) be replaced with the stronger condition

\displaystyle \inf_{|t| \leq X} \sum_{p \leq X} \frac{1 - \hbox{Re}( g_i(p) \overline{\chi(p)} p^{-it})}{p} \rightarrow +\infty \ \ \ \ \ (3)

as {X \rightarrow \infty} for any Dirichlet character {\chi}. To support this conjecture, we proved an averaged and non-asymptotic version of this conjecture which roughly speaking showed a bound of the form

\displaystyle \frac{1}{H^k} \sum_{h_1,\dots,h_k \leq H} |\frac{1}{X} \sum_{n \leq X} g_1(n+h_1) \dots g_k(n+h_k)| \leq \varepsilon

whenever {H} was an arbitrarily slowly growing function of {X}, {X} was sufficiently large (depending on {\varepsilon,k} and the rate at which {H} grows), and one of the {g_i} obeyed the condition

\displaystyle \inf_{|t| \leq AX} \sum_{p \leq X} \frac{1 - \hbox{Re}( g_i(p) \overline{\chi(p)} p^{-it})}{p} \geq A \ \ \ \ \ (4)

for some {A} that was sufficiently large depending on {k,\varepsilon}, and all Dirichlet characters {\chi} of period at most {A}. As further support of this conjecture, I recently established the bound

\displaystyle \frac{1}{\log \omega} |\sum_{X/\omega \leq n \leq X} \frac{g_1(n+h_1) g_2(n+h_2)}{n}| \leq \varepsilon

under the same hypotheses, where {\omega} is an arbitrarily slowly growing function of {X}.

In view of these results, it is tempting to conjecture that the condition (4) for one of the {g_i} should be sufficient to obtain the bound

\displaystyle |\frac{1}{X} \sum_{n \leq X} g_1(n+h_1) \dots g_k(n+h_k)| \leq \varepsilon

when {A} is large enough depending on {k,\varepsilon}. This may well be the case for {k=2}. However, the purpose of this blog post is to record a simple counterexample for {k>2}. Let’s take {k=3} for simplicity. Let {t_0} be a quantity much larger than {X} but much smaller than {X^2} (e.g. {t_0 = X^{3/2}}), and set

\displaystyle g_1(n) := n^{it_0}; \quad g_2(n) := n^{-2it_0}; \quad g_3(n) := n^{it_0}.

For {X/2 \leq n \leq X}, Taylor expansion gives

\displaystyle (n+1)^{it_0} = n^{it_0} \exp( i t_0 / n ) + o(1)

and

\displaystyle (n+2)^{it_0} = n^{it_0} \exp( 2 i t_0 / n ) + o(1)

and hence

\displaystyle g_1(n) g_2(n+1) g_3(n+2) = 1 + o(1)

and hence

\displaystyle |\frac{1}{X} \sum_{X/2 \leq n \leq X} g_1(n) g_2(n+1) g_3(n+2)| \gg 1.
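
(One can confirm this numerically; in the quick sketch below the choices {X = 10^6} and {t_0 = X^{3/2}} are as in the construction above.)

```python
# The product g1(n) g2(n+1) g3(n+2) = exp(i t0 (log n - 2 log(n+1) + log(n+2)))
# has phase O(t0/n^2) = o(1) on [X/2, X], so its average stays near 1.
import numpy as np

X = 10 ** 6
t0 = X ** 1.5
n = np.arange(X // 2, X, dtype=np.float64)
phase = t0 * (np.log(n) - 2 * np.log(n + 1) + np.log(n + 2))
print(abs(np.exp(1j * phase).mean()))   # close to 1: no cancellation
```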

On the other hand one can easily verify that all of the {g_1,g_2,g_3} obey (4) (the restriction {|t| \leq AX} there prevents {t} from getting anywhere close to {t_0}). So it seems the correct non-asymptotic version of the Elliott conjecture is the following:

Conjecture 1 (Non-asymptotic Elliott conjecture) Let {k} be a natural number, and let {h_1,\dots,h_k} be integers. Let {\varepsilon > 0}, let {A} be sufficiently large depending on {k,\varepsilon,h_1,\dots,h_k}, and let {X} be sufficiently large depending on {k,\varepsilon,h_1,\dots,h_k,A}. Let {g_1,\dots,g_k} be bounded multiplicative functions such that for some {1 \leq i \leq k}, one has

\displaystyle \inf_{|t| \leq AX^{k-1}} \sum_{p \leq X} \frac{1 - \hbox{Re}( g_i(p) \overline{\chi(p)} p^{-it})}{p} \geq A

for all Dirichlet characters {\chi} of conductor at most {A}. Then

\displaystyle |\frac{1}{X} \sum_{n \leq X} g_1(n+h_1) \dots g_k(n+h_k)| \leq \varepsilon.

The {k=1} case of this conjecture follows from the work of Halasz; in my recent paper a logarithmically averaged version of the {k=2} case of this conjecture is established. The requirement to take {t} to be as large as {A X^{k-1}} does not emerge in the averaged Elliott conjecture in my previous paper with Matomaki and Radziwill; it thus seems that this averaging has concealed some of the subtler features of the Elliott conjecture. (However, this subtlety does not seem to affect the asymptotic version of the conjecture formulated in that paper, in which the hypothesis is of the form (3), and the conclusion is of the form (1).)

A similar subtlety arises when trying to control the maximal integral

\displaystyle \frac{1}{X} \int_X^{2X} \sup_\alpha \frac{1}{H} |\sum_{x \leq n \leq x+H} g(n) e(\alpha n)|\ dx. \ \ \ \ \ (5)

In my previous paper with Matomaki and Radziwill, we could show that the easier expression

\displaystyle \frac{1}{X} \sup_\alpha \int_X^{2X} \frac{1}{H} |\sum_{x \leq n \leq x+H} g(n) e(\alpha n)|\ dx. \ \ \ \ \ (6)

was small (for {H} a slowly growing function of {X}) if {g} was bounded and completely multiplicative, and one had a condition of the form

\displaystyle \inf_{|t| \leq AX} \sum_{p \leq X} \frac{1 - \hbox{Re}( g(p) \overline{\chi(p)} p^{-it})}{p} \geq A \ \ \ \ \ (7)

for some large {A}. However, to obtain an analogous bound for (5) it now appears that one needs to strengthen the above condition to

\displaystyle \inf_{|t| \leq AX^2} \sum_{p \leq X} \frac{1 - \hbox{Re}( g(p) \overline{\chi(p)} p^{-it})}{p} \geq A

in order to address the counterexample in which {g(n) = n^{it_0}} for some {t_0} between {X} and {X^2}. This seems to suggest that proving (5) (which is closely related to the {k=3} case of the Chowla conjecture) could in fact be rather difficult; the estimation of (6) relied primarily on prior work of Matomaki and Radziwill which used the hypothesis (7), but as this hypothesis is not sufficient to conclude (5), some additional input must also be used.

Let {X} and {Y} be two random variables taking values in the same (discrete) range {R}, and let {E} be some subset of {R}, which we think of as the set of “bad” outcomes for either {X} or {Y}. If {X} and {Y} have the same probability distribution, then clearly

\displaystyle  {\bf P}( X \in E ) = {\bf P}( Y \in E ).

In particular, if it is rare for {Y} to lie in {E}, then it is also rare for {X} to lie in {E}.

If {X} and {Y} do not have exactly the same probability distribution, but their probability distributions are close to each other in some sense, then we can expect to have an approximate version of the above statement. For instance, from the definition of the total variation distance {\delta(X,Y)} between two random variables (or more precisely, the total variation distance between the probability distributions of two random variables), we see that

\displaystyle  {\bf P}(Y \in E) - \delta(X,Y) \leq {\bf P}(X \in E) \leq {\bf P}(Y \in E) + \delta(X,Y) \ \ \ \ \ (1)

for any {E \subset R}. In particular, if it is rare for {Y} to lie in {E}, and {X,Y} are close in total variation, then it is also rare for {X} to lie in {E}.

A basic inequality in information theory is Pinsker’s inequality

\displaystyle  \delta(X,Y) \leq \sqrt{\frac{1}{2} D_{KL}(X||Y)}

where the Kullback-Leibler divergence {D_{KL}(X||Y)} is defined by the formula

\displaystyle  D_{KL}(X||Y) = \sum_{x \in R} {\bf P}( X=x ) \log \frac{{\bf P}(X=x)}{{\bf P}(Y=x)}.

(See this previous blog post for a proof of this inequality.) A standard application of Jensen’s inequality reveals that {D_{KL}(X||Y)} is non-negative (Gibbs’ inequality), and vanishes if and only if {X}, {Y} have the same distribution; thus one can think of {D_{KL}(X||Y)} as a measure of how close the distributions of {X} and {Y} are to each other, although one should caution that this is not a symmetric notion of distance, as {D_{KL}(X||Y) \neq D_{KL}(Y||X)} in general. Inserting Pinsker’s inequality into (1), we see for instance that

\displaystyle  {\bf P}(X \in E) \leq {\bf P}(Y \in E) + \sqrt{\frac{1}{2} D_{KL}(X||Y)}.

Thus, if {X} is close to {Y} in the Kullback-Leibler sense, and it is rare for {Y} to lie in {E}, then it is rare for {X} to lie in {E} as well.

We can specialise this inequality to the case when {Y} is a uniform random variable {U} on a finite range {R} of some cardinality {N}, in which case the Kullback-Leibler divergence {D_{KL}(X||U)} simplifies to

\displaystyle  D_{KL}(X||U) = \log N - {\bf H}(X)

where

\displaystyle  {\bf H}(X) := \sum_{x \in R} {\bf P}(X=x) \log \frac{1}{{\bf P}(X=x)}

is the Shannon entropy of {X}. Again, a routine application of Jensen’s inequality shows that {{\bf H}(X) \leq \log N}, with equality if and only if {X} is uniformly distributed on {R}. The above inequality then becomes

\displaystyle  {\bf P}(X \in E) \leq {\bf P}(U \in E) + \sqrt{\frac{1}{2}(\log N - {\bf H}(X))}. \ \ \ \ \ (2)

Thus, if {E} is a small fraction of {R} (so that it is rare for {U} to lie in {E}), and the entropy of {X} is very close to the maximum possible value of {\log N}, then it is rare for {X} to lie in {E} also.

The inequality (2) is only useful when the entropy {{\bf H}(X)} is close to {\log N} in the sense that {{\bf H}(X) = \log N - O(1)}, otherwise the bound is worse than the trivial bound of {{\bf P}(X \in E) \leq 1}. In my recent paper on the Chowla and Elliott conjectures, I ended up using a variant of (2) which was still non-trivial when the entropy {{\bf H}(X)} was allowed to be smaller than {\log N - O(1)}. More precisely, I used the following simple inequality, which is implicit in the arguments of that paper but which I would like to make more explicit in this post:

Lemma 1 (Pinsker-type inequality) Let {X} be a random variable taking values in a finite range {R} of cardinality {N}, let {U} be a uniformly distributed random variable in {R}, and let {E} be a subset of {R}. Then

\displaystyle  {\bf P}(X \in E) \leq \frac{(\log N - {\bf H}(X)) + \log 2}{\log 1/{\bf P}(U \in E)}.

Proof: Consider the conditional entropy {{\bf H}(X | 1_{X \in E} )}. On the one hand, we have

\displaystyle  {\bf H}(X | 1_{X \in E} ) = {\bf H}(X, 1_{X \in E}) - {\bf H}(1_{X \in E} )

\displaystyle  = {\bf H}(X) - {\bf H}(1_{X \in E})

\displaystyle  \geq {\bf H}(X) - \log 2

by Jensen’s inequality. On the other hand, one has

\displaystyle  {\bf H}(X | 1_{X \in E} ) = {\bf P}(X \in E) {\bf H}(X | X \in E )

\displaystyle  + (1-{\bf P}(X \in E)) {\bf H}(X | X \not \in E)

\displaystyle  \leq {\bf P}(X \in E) \log |E| + (1-{\bf P}(X \in E)) \log N

\displaystyle  = \log N - {\bf P}(X \in E) \log \frac{N}{|E|}

\displaystyle  = \log N - {\bf P}(X \in E) \log \frac{1}{{\bf P}(U \in E)},

where we have again used Jensen’s inequality. Putting the two inequalities together, we obtain the claim. \Box
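
(As a quick numerical sanity check of the lemma, here is a throwaway sketch; the skewed distribution and the event {E} below are arbitrary choices.)

```python
# Check P(X in E) <= ((log N - H(X)) + log 2) / log(1/P(U in E)) on random
# skewed distributions p on a range of size N and random small events E.
import numpy as np

rng = np.random.default_rng(2)
N = 1000
for _ in range(5):
    p = rng.random(N) ** 4          # a skewed distribution, entropy < log N
    p /= p.sum()
    H = -(p * np.log(p)).sum()      # Shannon entropy of X
    E = rng.choice(N, size=10, replace=False)
    lhs = p[E].sum()                # P(X in E)
    rhs = (np.log(N) - H + np.log(2)) / np.log(N / len(E))
    print(f"P(X in E) = {lhs:.4f}   bound = {rhs:.4f}")
```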

Remark 2 As noted in comments, this inequality can be viewed as a special case of the more general inequality

\displaystyle  {\bf P}(X \in E) \leq \frac{D(X||Y) + \log 2}{\log 1/{\bf P}(Y \in E)}

for arbitrary random variables {X,Y} taking values in the same discrete range {R}, which follows from the data processing inequality

\displaystyle  D( f(X)||f(Y)) \leq D(X|| Y)

for arbitrary functions {f}, applied to the indicator function {f = 1_E}. Indeed one has

\displaystyle  D( 1_E(X) || 1_E(Y) ) = {\bf P}(X \in E) \log \frac{{\bf P}(X \in E)}{{\bf P}(Y \in E)}

\displaystyle + {\bf P}(X \not \in E) \log \frac{{\bf P}(X \not \in E)}{{\bf P}(Y \not \in E)}

\displaystyle  \geq {\bf P}(X \in E) \log \frac{1}{{\bf P}(Y \in E)} - h( {\bf P}(X \in E) )

\displaystyle  \geq {\bf P}(X \in E) \log \frac{1}{{\bf P}(Y \in E)} - \log 2

where {h(u) := u \log \frac{1}{u} + (1-u) \log \frac{1}{1-u}} is the entropy function.

Thus, for instance, if one has

\displaystyle  {\bf H}(X) \geq \log N - o(K)

and

\displaystyle  {\bf P}(U \in E) \leq \exp( - K )

for some {K} much larger than {1} (so that {1/K = o(1)}), then

\displaystyle  {\bf P}(X \in E) = o(1).

More informally: if the entropy of {X} is somewhat close to the maximum possible value of {\log N}, and it is exponentially rare for a uniform variable to lie in {E}, then it is still somewhat rare for {X} to lie in {E}. The estimate given is close to sharp in this regime, as can be seen by calculating the entropy of a random variable {X} which is uniformly distributed inside a small set {E} with some probability {p} and uniformly distributed outside of {E} with probability {1-p}, for some parameter {0 \leq p \leq 1}.

It turns out that the above lemma combines well with concentration of measure estimates; in my paper, I used one of the simplest such estimates, namely Hoeffding’s inequality, but there are of course many other estimates of this type (see e.g. this previous blog post for some others). Roughly speaking, concentration of measure inequalities allow one to make approximations such as

\displaystyle  F(U) \approx {\bf E} F(U)

with exponentially high probability, where {U} is a uniform distribution and {F} is some reasonable function of {U}. Combining this with the above lemma, we can then obtain approximations of the form

\displaystyle  F(X) \approx {\bf E} F(U) \ \ \ \ \ (3)

with somewhat high probability, if the entropy of {X} is somewhat close to maximum. This observation, combined with an “entropy decrement argument” that allowed one to arrive at a situation in which the relevant random variable {X} did have a near-maximum entropy, is the key new idea in my recent paper; for instance, one can use the approximation (3) to obtain an approximation of the form

\displaystyle  \sum_{j=1}^H \sum_{p \in {\mathcal P}} \lambda(n+j) \lambda(n+j+p) 1_{p|n+j}

\displaystyle  \approx \sum_{j=1}^H \sum_{p \in {\mathcal P}} \frac{\lambda(n+j) \lambda(n+j+p)}{p}

for “most” choices of {n} and a suitable choice of {H} (with the latter being provided by the entropy decrement argument). The left-hand side is tied to Chowla-type sums such as {\sum_{n \leq x} \frac{\lambda(n)\lambda(n+1)}{n}} through the multiplicativity of {\lambda}, while the right-hand side, being a linear correlation involving two parameters {j,p} rather than just one, has “finite complexity” and can be treated by existing techniques such as the Hardy-Littlewood circle method. One could hope that one could similarly use approximations such as (3) in other problems in analytic number theory or combinatorics.

I’ve just uploaded two related papers to the arXiv: “The logarithmically averaged Chowla and Elliott conjectures for two-point correlations“, and “The Erdos discrepancy problem“.

This pair of papers is an outgrowth of these two recent blog posts and the ensuing discussion. In the first paper, we establish the following logarithmically averaged version of the Chowla conjecture (in the case {k=2} of two-point correlations (or “pair correlations”)):

Theorem 1 (Logarithmically averaged Chowla conjecture) Let {a_1,a_2} be natural numbers, and let {b_1,b_2} be integers such that {a_1 b_2 - a_2 b_1 \neq 0}. Let {1 \leq \omega(x) \leq x} be a quantity depending on {x} that goes to infinity as {x \rightarrow \infty}. Let {\lambda} denote the Liouville function. Then one has

\displaystyle  \sum_{x/\omega(x) < n \leq x} \frac{\lambda(a_1 n + b_1) \lambda(a_2 n+b_2)}{n} = o( \log \omega(x) ) \ \ \ \ \ (1)

as {x \rightarrow \infty}.

Thus for instance one has

\displaystyle  \sum_{n \leq x} \frac{\lambda(n) \lambda(n+1)}{n} = o(\log x). \ \ \ \ \ (2)

For comparison, the non-averaged Chowla conjecture would imply that

\displaystyle  \sum_{n \leq x} \lambda(n) \lambda(n+1) = o(x) \ \ \ \ \ (3)

which is a strictly stronger estimate than (2), and remains open.
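
(For what it is worth, the logarithmically averaged sum (2) is also small numerically; the sketch below, with the arbitrary cutoff {x = 10^6}, sieves {\Omega(n)} to compute {\lambda} as before.)

```python
# Compute (sum_{n <= x} lambda(n) lambda(n+1) / n) / log x, which (2)
# predicts to be o(1); lambda is obtained by sieving Omega(n).
import numpy as np

M = 10 ** 6
omega = np.zeros(M, dtype=np.int64)
for p in range(2, M):
    if omega[p] == 0:              # p is prime
        pk = p
        while pk < M:
            omega[pk::pk] += 1
            pk *= p
lam = (-1.0) ** omega
n = np.arange(1, M - 1)
S = np.sum(lam[1:M - 1] * lam[2:M] / n)
print(S / np.log(M))               # small compared to 1
```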

The arguments also extend to other completely multiplicative functions than the Liouville function. In particular, one obtains a slightly averaged version of the non-asymptotic Elliott conjecture that was shown in the previous blog post to imply a positive solution to the Erdos discrepancy problem. The averaged version of the conjecture established in this paper is slightly weaker than the one assumed in the previous blog post, but it turns out that the arguments there can be modified without much difficulty to accept this averaged Elliott conjecture as input. In particular, we obtain an unconditional solution to the Erdos discrepancy problem as a consequence; this is detailed in the second paper listed above. In fact we can also handle the vector-valued version of the Erdos discrepancy problem, in which the sequence {f(1), f(2), \dots} takes values in the unit sphere of an arbitrary Hilbert space, rather than in {\{-1,+1\}}.

Estimates such as (2) or (3) are known to be subject to the “parity problem” (discussed numerous times previously on this blog), which roughly speaking means that they cannot be proven solely using “linear” estimates on functions such as the von Mangoldt function. However, it is known that the parity problem can be circumvented using “bilinear” estimates, and this is basically what is done here.

We now describe in informal terms the proof of Theorem 1, focusing on the model case (2) for simplicity. Suppose for contradiction that the left-hand side of (2) was large and (say) positive. Using the multiplicativity {\lambda(pn) = -\lambda(n)}, we conclude that

\displaystyle  \sum_{n \leq x} \frac{\lambda(n) \lambda(n+p) 1_{p|n}}{n}

is also large and positive for all primes {p} that are not too large; note here how the logarithmic averaging allows us to leave the constraint {n \leq x} unchanged. Summing in {p}, we conclude that

\displaystyle  \sum_{n \leq x} \frac{ \sum_{p \in {\mathcal P}} \lambda(n) \lambda(n+p) 1_{p|n}}{n}

is large and positive for any given set {{\mathcal P}} of medium-sized primes. By a standard averaging argument, this implies that

\displaystyle  \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} \lambda(n+j) \lambda(n+p+j) 1_{p|n+j} \ \ \ \ \ (4)

is large for many choices of {n}, where {H} is a medium-sized parameter at our disposal to choose, and we take {{\mathcal P}} to be some set of primes that are somewhat smaller than {H}. (A similar approach was taken in this recent paper of Matomäki, Radziwiłł, and myself to study sign patterns of the Möbius function.) To obtain the required contradiction, one thus wants to demonstrate significant cancellation in the expression (4). As in that paper, we view {n} as a random variable, in which case (4) is essentially a bilinear sum of the random sequence {(\lambda(n+1),\dots,\lambda(n+H))} along a random graph {G_{n,H}} on {\{1,\dots,H\}}, in which two vertices {j, j+p} are connected if they differ by a prime {p} in {{\mathcal P}} that divides {n+j}. A key difficulty in controlling this sum is that for randomly chosen {n}, the sequence {(\lambda(n+1),\dots,\lambda(n+H))} and the graph {G_{n,H}} need not be independent. To get around this obstacle we introduce a new argument which we call the “entropy decrement argument” (in analogy with the “density increment argument” and “energy increment argument” that appear in the literature surrounding Szemerédi’s theorem on arithmetic progressions, and also reminiscent of the “entropy compression argument” of Moser and Tardos, discussed in this previous post). This argument, which is a simple consequence of the Shannon entropy inequalities, can be viewed as a quantitative version of the standard subadditivity argument that establishes the existence of topological entropy in topological dynamical systems; it allows one to select a scale parameter {H} (in some suitable range {[H_-,H_+]}) for which the sequence {(\lambda(n+1),\dots,\lambda(n+H))} and the graph {G_{n,H}} exhibit some weak independence properties (or more precisely, the mutual information between the two random variables is small).
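To make the random graph {G_{n,H}} concrete, here is a hypothetical sketch (the choice of {H} and of the prime range being arbitrary) listing its edges for a single sample of {n}; averaging {\lambda(n+j)\lambda(n+p+j)} over these edges then essentially recovers the bilinear sum (4), up to boundary terms with {j+p > H}.

    from sympy import primerange

    def chowla_graph(n, H, P):
        """Edge list of G_{n,H}: join j to j+p whenever p in P divides n+j."""
        return [(j, j + p)
                for j in range(1, H + 1)
                for p in P
                if j + p <= H and (n + j) % p == 0]

    H = 100
    P = list(primerange(2, H // 5))             # primes "somewhat smaller than H"
    print(chowla_graph(123456, H, P)[:8])       # first few edges for one sample of n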

Informally, the entropy decrement argument goes like this: if the sequence {(\lambda(n+1),\dots,\lambda(n+H))} has significant mutual information with {G_{n,H}}, then the entropy of the sequence {(\lambda(n+1),\dots,\lambda(n+H'))} for {H' > H} will grow a little slower than linearly, due to the fact that the graph {G_{n,H}} has zero entropy (knowledge of {G_{n,H}} more or less completely determines the shifts {G_{n+kH,H}} of the graph); this can be formalised using the classical Shannon inequalities for entropy (and specifically, the non-negativity of conditional mutual information). But the entropy cannot drop below zero, so by increasing {H} as necessary, at some point one must reach a metastable region (cf. the finite convergence principle discussed in this previous blog post), within which very little mutual information can be shared between the sequence {(\lambda(n+1),\dots,\lambda(n+H))} and the graph {G_{n,H}}. Curiously, for the application it is not enough to have a purely qualitative version of this argument; one needs a quantitative bound (which gains a factor of a bit more than {\log H} on the trivial bound for mutual information), and this is surprisingly delicate (it ultimately comes down to the fact that the series {\sum_{j \geq 2} \frac{1}{j \log j \log\log j}} diverges, which is only barely true).
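The entropy bookkeeping can be illustrated empirically, though I stress that this is only the trivial side of the story, not the argument itself. The sketch below (reusing the liouville_sieve helper from the first sketch, with illustrative parameters) computes the empirical Shannon entropy of the sign patterns {(\lambda(n+1),\dots,\lambda(n+H))} for {n \leq X}; growth exactly linear in {H} would correspond to the patterns behaving like fresh coin flips, and the entropy decrement argument works by locating scales {H} at which the deficit from linearity has essentially stopped growing.

    import math
    from collections import Counter

    # assumes liouville_sieve from the first sketch is in scope
    X = 200_000
    lam = liouville_sieve(X + 16)

    def pattern_entropy(H):
        """Empirical entropy (in bits) of (lambda(n+1),...,lambda(n+H)), n <= X."""
        counts = Counter(tuple(lam[n + 1:n + H + 1]) for n in range(1, X + 1))
        return -sum((c / X) * math.log2(c / X) for c in counts.values())

    for H in (2, 4, 6, 8, 10):
        print(H, round(pattern_entropy(H), 3))  # compare with the trivial bound of H bits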

Once one locates a scale {H} with the low mutual information property, one can use standard concentration of measure results such as the Hoeffding inequality to approximate (4) by the significantly simpler expression

\displaystyle  \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} \frac{\lambda(n+j) \lambda(n+p+j)}{p}. \ \ \ \ \ (5)
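As a quick sanity check on this step, one can compare (4) and (5) numerically for a few random values of {n}. The sketch below does this at unrealistically small parameters (all of which are my own choices, and again reuse liouville_sieve from the first sketch), so the concentration is weak, but the two expressions are already typically of comparable size.

    import random
    from sympy import primerange

    # assumes liouville_sieve from the first sketch is in scope
    H = 1000
    P = list(primerange(10, 30))
    N = 10 ** 6
    lam = liouville_sieve(N + H + 30)

    def expr4(n):                               # the bilinear sum (4)
        return sum(lam[n + j] * lam[n + p + j]
                   for j in range(1, H + 1) for p in P
                   if (n + j) % p == 0) / H

    def expr5(n):                               # its Hoeffding approximation (5)
        return sum(lam[n + j] * lam[n + p + j] / p
                   for j in range(1, H + 1) for p in P) / H

    for n in random.sample(range(1, N), 3):
        print(n, round(expr4(n), 3), round(expr5(n), 3))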

The important thing here is that Hoeffding’s inequality gives exponentially strong bounds on the failure probability, which is needed to counteract the logarithms that are inevitably present whenever trying to use entropy inequalities. The expression (5) can then be controlled in turn by an application of the Hardy-Littlewood circle method and a non-trivial estimate

\displaystyle  \sup_\alpha \frac{1}{X} \int_X^{2X} |\frac{1}{H} \sum_{x \leq n \leq x+H} \lambda(n) e(\alpha n)|\ dx = o(1) \ \ \ \ \ (6)

for averaged short sums of a modulated Liouville function established in another recent paper by Matomäki, Radziwiłł, and myself.
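The quantity in (6) can likewise be approximated numerically, discretising the integral to integer {x} and the supremum to a finite frequency grid (both crude simplifications, as are the small values of {X} and {H} below); the computation is slow but simple.

    import cmath

    # assumes liouville_sieve from the first sketch is in scope
    X, H = 2000, 40
    lam = liouville_sieve(2 * X + H + 1)

    def short_sum(x, alpha):
        """(1/H) * sum_{x <= n <= x+H} lambda(n) e(alpha n)."""
        return sum(lam[n] * cmath.exp(2j * cmath.pi * alpha * n)
                   for n in range(x, x + H + 1)) / H

    def averaged_abs(alpha):                    # the discretised (1/X)-integral in (6)
        return sum(abs(short_sum(x, alpha)) for x in range(X, 2 * X + 1)) / X

    grid = [k / 100 for k in range(100)]        # stands in for the sup over alpha
    print(max(averaged_abs(a) for a in grid))   # well below the trivial bound of 1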

When one uses this method to study more general sums such as

\displaystyle  \sum_{n \leq x} \frac{g_1(n) g_2(n+1)}{n},

one ends up having to consider expressions such as

\displaystyle  \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} c_p \frac{g_1(n+j) g_2(n+p+j)}{p},

where {c_p := \overline{g_1}(p) \overline{g_2}(p)}. When attacking this sum with the circle method, one soon finds oneself needing to locate the large Fourier coefficients of the exponential sum

\displaystyle  S(\alpha) := \sum_{p \in {\mathcal P}} \frac{c_p}{p} e^{2\pi i \alpha p}.

In many cases (such as in the application to the Erdős discrepancy problem), the coefficient {c_p} is identically {1}, and one can understand this sum satisfactorily using the classical results of Vinogradov: basically, {S(\alpha)} is large when {\alpha} lies in a “major arc” and is small when it lies in a “minor arc”. For more general functions {g_1,g_2}, the coefficients {c_p} are more or less arbitrary, and the large values of {S(\alpha)} are no longer confined to the major arcs. Fortunately, even in this general situation one can use a restriction theorem for the primes established some time ago by Ben Green and myself to show that there are still only a bounded number of possible locations {\alpha} (up to the uncertainty mandated by the Heisenberg uncertainty principle) where {S(\alpha)} is large, and we can still conclude by using (6). (Actually, as recently pointed out to me by Ben, one does not need the full strength of our result; one only needs the {L^4} restriction theorem for the primes, which can be proven fairly directly using Plancherel’s theorem and some sieve theory.)
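In the model case {c_p = 1}, the major arc phenomenon is easy to observe numerically: in the sketch below (with an arbitrary range of “medium-sized” primes, again an illustrative choice), the largest values of {|S(\alpha)|} on a frequency grid occur at rationals of small denominator.

    import cmath
    from sympy import primerange

    P = list(primerange(100, 10_000))           # an illustrative range of primes

    def S(alpha):
        """S(alpha) = sum_{p in P} e(alpha p) / p, in the model case c_p = 1."""
        return sum(cmath.exp(2j * cmath.pi * alpha * p) / p for p in P)

    vals = sorted(((abs(S(k / 240)), k / 240) for k in range(240)), reverse=True)
    print(vals[:6])                             # peaks at/near 0, 1/2, 1/3, 2/3, ...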

It is tempting to also use the method to attack higher order cases of the (logarithmically) averaged Chowla conjecture; for instance, one could try to prove the estimate

\displaystyle  \sum_{n \leq x} \frac{\lambda(n) \lambda(n+1) \lambda(n+2)}{n} = o(\log x).

The above arguments reduce matters to obtaining some non-trivial cancellation for sums of the form

\displaystyle  \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} \frac{\lambda(n+j) \lambda(n+p+j) \lambda(n+2p+j)}{p}.

A little bit of “higher order Fourier analysis” (as was done for very similar sums in the ergodic theory context by Frantzikinakis-Host-Kra and Wooley-Ziegler) lets one control this sort of sum if one can establish a bound of the form

\displaystyle  \frac{1}{X} \int_X^{2X} \sup_\alpha |\frac{1}{H} \sum_{x \leq n \leq x+H} \lambda(n) e(\alpha n)|\ dx = o(1) \ \ \ \ \ (7)

where {X} goes to infinity and {H} is a very slowly growing function of {X}. This looks very similar to (6), but the fact that the supremum is now inside the integral makes the problem much more difficult. However, (7) looks worth attacking further, as such an estimate should have many nice applications (beyond just the {k=3} case of the logarithmically averaged Chowla or Elliott conjectures, which is already interesting).
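In code, the difference between (6) and (7) is just the order of the maximum and the average; reusing the short_sum helper and grid from the sketch approximating (6), one can compute both quantities side by side. The inequality between them is automatic; the open question is whether the larger one still tends to zero.

    # assumes short_sum, X and grid from the sketch approximating (6) are in scope
    # (6): supremum over alpha taken outside the average over x
    val6 = max(sum(abs(short_sum(x, a)) for x in range(X, 2 * X + 1)) / X
               for a in grid)
    # (7): supremum over alpha taken separately for each x, then averaged
    val7 = sum(max(abs(short_sum(x, a)) for a in grid)
               for x in range(X, 2 * X + 1)) / X
    print(val6, val7)                           # val6 <= val7 always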

For {k} larger than {3}, the same line of analysis requires one to replace the linear phase {e(\alpha n)} by more complicated phases, such as quadratic phases {e(\alpha n^2 + \beta n)} or even {k-2}-step nilsequences. Given that (7) is already beyond the reach of the current literature, these even more complicated expressions are also unavailable at present, but one can imagine that they will eventually become tractable, in which case we would obtain an averaged form of the Chowla conjecture for all {k}, which would have a number of consequences (such as a logarithmically averaged version of Sarnak’s conjecture, as per this blog post).

It would of course be very nice to remove the logarithmic averaging, and be able to establish bounds such as (3). I did attempt to do so, but I do not see a way to use the entropy decrement argument in a manner that avoids some sort of logarithmic-type averaging: the argument requires one to pick a scale {H} that cannot be specified in advance, which is not a problem for logarithmic averages (which are quite stable with respect to dilations) but is problematic for ordinary averages. But perhaps the problem can be circumvented by some clever modification of the argument. One possible approach would be to exploit multiplicativity at products of primes, and not just at individual primes, so as to keep the scale fixed; but this makes the concentration of measure part of the argument much more complicated, as one loses some independence properties (coming from the Chinese remainder theorem) that allowed one to conclude from the Hoeffding inequality alone.
