In analytic number theory, it is a well-known phenomenon that for many arithmetic functions {f: {\bf N} \rightarrow {\bf C}} of interest, it is significantly easier to estimate logarithmic sums such as

\displaystyle \sum_{n \leq x} \frac{f(n)}{n}

than it is to estimate summatory functions such as

\displaystyle \sum_{n \leq x} f(n).

(Here we are normalising {f} to be roughly constant in size, e.g. {f(n) = O( n^{o(1)} )} as {n \rightarrow \infty}.) For instance, when {f} is the von Mangoldt function {\Lambda}, the logarithmic sums {\sum_{n \leq x} \frac{\Lambda(n)}{n}} can be adequately estimated by Mertens’ theorem, which can be easily proven by elementary means (see Notes 1); but a satisfactory estimate on the summatory function {\sum_{n \leq x} \Lambda(n)} requires the prime number theorem, which is substantially harder to prove (see Notes 2). (From a complex-analytic or Fourier-analytic viewpoint, the problem is that the logarithmic sums {\sum_{n \leq x} \frac{f(n)}{n}} can usually be controlled just from knowledge of the Dirichlet series {\sum_n \frac{f(n)}{n^s}} for {s} near {1}; but the summatory functions require control of the Dirichlet series {\sum_n \frac{f(n)}{n^s}} for {s} on or near a large portion of the line {\{ 1+it: t \in {\bf R} \}}. See Notes 2 for further discussion.)
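
As a quick numerical illustration of this contrast, here is a short Python sketch (illustration only, not part of the argument; it assumes the sympy library is available, and the helper mangoldt is ours rather than a library routine):

```python
# Compare: sum_{n <= x} Lambda(n)/n stays within O(1) of log x (Mertens'
# theorem), while psi(x)/x -> 1 is equivalent to the prime number theorem.
from math import log

from sympy import factorint

def mangoldt(n):
    """von Mangoldt function: log p if n = p^k is a prime power, else 0."""
    f = factorint(n)
    return log(next(iter(f))) if len(f) == 1 else 0.0

for x in (10**3, 10**4, 10**5):
    vals = [mangoldt(n) for n in range(1, x + 1)]
    log_sum = sum(v / n for n, v in enumerate(vals, start=1))
    psi = sum(vals)
    print(x, log_sum - log(x), psi / x)  # bounded error vs. slow approach to 1
```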

Viewed conversely, whenever one has a difficult estimate on a summatory function such as {\sum_{n \leq x} f(n)}, one can look to see if there is a “cheaper” version of that estimate that only controls the logarithmic sums {\sum_{n \leq x} \frac{f(n)}{n}}, which is easier to prove than the original, more “expensive” estimate. In this post, we shall do this for two theorems, a classical theorem of Halasz on mean values of multiplicative functions on long intervals, and a much more recent result of Matomaki and Radziwiłł on mean values of multiplicative functions in short intervals. The two are related; the former theorem is an ingredient in the latter (though in the special case of the Matomaki-Radziwiłł theorem considered here, we will not need Halasz’s theorem directly, instead using a key tool in the proof of that theorem).

We begin with Halasz’s theorem. Here is a version of this theorem, due to Montgomery and to Tenenbaum:

Theorem 1 (Halasz-Montgomery-Tenenbaum) Let {f: {\bf N} \rightarrow {\bf C}} be a multiplicative function with {|f(n)| \leq 1} for all {n}. Let {x \geq 3} and {T \geq 1}, and set

\displaystyle M := \min_{|t| \leq T} \sum_{p \leq x} \frac{1 - \hbox{Re}( f(p) p^{-it} )}{p}.

Then one has

\displaystyle \frac{1}{x} \sum_{n \leq x} f(n) \ll (1+M) e^{-M} + \frac{1}{\sqrt{T}}.

Informally, this theorem asserts that {\sum_{n \leq x} f(n)} is small compared with {x}, unless {f} “pretends” to be like the character {p \mapsto p^{it}} on primes for some small {t}. (This is the starting point of the “pretentious” approach of Granville and Soundararajan to analytic number theory, as developed for instance here.) We now give a “cheap” version of this theorem which is significantly weaker (because it settles for controlling logarithmic sums rather than summatory functions, because it requires {f} to be completely multiplicative instead of merely multiplicative, because it requires a strong bound on the analogue of the quantity {M}, and because it only gives qualitative decay rather than quantitative estimates), but which is easier to prove:

Theorem 2 (Cheap Halasz) Let {x} be an asymptotic parameter going to infinity. Let {f: {\bf N} \rightarrow {\bf C}} be a completely multiplicative function (possibly depending on {x}) such that {|f(n)| \leq 1} for all {n}, and such that

\displaystyle \sum_{p \leq x} \frac{1 - \hbox{Re}( f(p) )}{p} \gg \log\log x. \ \ \ \ \ (1)

 

Then

\displaystyle \frac{1}{\log x} \sum_{n \leq x} \frac{f(n)}{n} = o(1). \ \ \ \ \ (2)

 

Note that now that we are content with estimating logarithmic sums rather than summatory functions, we no longer need to preclude the possibility that {f(p)} pretends to be like {p^{it}}; see Exercise 11 of Notes 1 for a related observation.
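
As a concrete numerical instance of Theorem 2 (again a sketch only; the helper liouville is ours), one can take {f = \lambda}, the Liouville function, for which {1 - \hbox{Re}(f(p)) = 2} at every prime, so that (1) holds comfortably:

```python
# For f = lambda: sum_{p <= x} (1 - Re f(p))/p = 2 loglog x + O(1), so (1)
# holds, and the normalised logarithmic mean Q in (2) should decay.
from math import log

from sympy import factorint, primerange

def liouville(n):
    """Liouville function: (-1)^Omega(n), Omega counted with multiplicity."""
    return (-1) ** sum(factorint(n).values())

for x in (10**3, 10**4, 10**5):
    Q = sum(liouville(n) / n for n in range(1, x + 1)) / log(x)
    M = sum(2.0 / p for p in primerange(2, x + 1))
    print(x, Q, M / log(log(x)))  # Q decays; the second ratio stays >= 2
```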

To prove this theorem, we first need a special case of the Turan-Kubilius inequality.

Lemma 3 (Turan-Kubilius) Let {x} be a parameter going to infinity, and let {1 < P < x} be a quantity depending on {x} such that {P = x^{o(1)}} and {P \rightarrow \infty} as {x \rightarrow \infty}. Then

\displaystyle \sum_{n \leq x} \frac{ | \frac{1}{\log \log P} \sum_{p \leq P: p|n} 1 - 1 |}{n} = o( \log x ).

Informally, this lemma is asserting that

\displaystyle \sum_{p \leq P: p|n} 1 \approx \log \log P

for most large numbers {n}. Another way of writing this heuristically is in terms of Dirichlet convolutions:

\displaystyle 1 \approx 1 * \frac{1}{\log\log P} 1_{{\mathcal P} \cap [1,P]}.

This type of estimate was previously discussed as a tool to establish a criterion of Katai and Bourgain-Sarnak-Ziegler for Möbius orthogonality estimates in this previous blog post. See also Section 5 of Notes 1 for some similar computations.
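
Before turning to the proof, here is a quick numerical look at the lemma (illustration only; since the normalised fluctuation of {\sum_{p \leq P: p|n} 1} decays like {(\log\log P)^{-1/2}}, the improvement as {P} grows is quite slow):

```python
# The logarithmically weighted mean of |omega_P(n)/loglog P - 1|, divided by
# log x, should tend to zero; the decay in P is only like (loglog P)^{-1/2}.
from math import log

from sympy import primerange

x = 10**6
for P in (10, 10**2, 10**3, 10**4):
    llP = log(log(P))
    omega = [0] * (x + 1)  # omega[n] = #{p <= P : p | n}, computed by sieving
    for p in primerange(2, P + 1):
        for m in range(p, x + 1, p):
            omega[m] += 1
    ratio = sum(abs(omega[n] / llP - 1) / n for n in range(1, x + 1)) / log(x)
    print(P, ratio)  # decreases, slowly, as P increases
```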

Proof: By Cauchy-Schwarz it suffices to show that

\displaystyle \sum_{n \leq x} \frac{ | \frac{1}{\log \log P} \sum_{p \leq P: p|n} 1 - 1 |^2}{n} = o( \log x ).

Expanding out the square, it suffices to show that

\displaystyle \sum_{n \leq x} \frac{ (\frac{1}{\log \log P} \sum_{p \leq P: p|n} 1)^j}{n} = \log x + o( \log x )

for {j=0,1,2}.

We just show the {j=2} case, as the {j=0,1} cases are similar (and easier). We rearrange the left-hand side as

\displaystyle \frac{1}{(\log\log P)^2} \sum_{p_1, p_2 \leq P} \sum_{n \leq x: p_1,p_2|n} \frac{1}{n}.

We can estimate the inner sum as {(1+o(1)) \frac{1}{[p_1,p_2]} \log x}. But a routine application of Mertens’ theorem (handling the diagonal case when {p_1=p_2} separately) shows that

\displaystyle \sum_{p_1, p_2 \leq P} \frac{1}{[p_1,p_2]} = (1+o(1)) (\log\log P)^2

and the claim follows. \Box
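
The key double sum over primes in this proof can also be examined numerically (a sketch only; the {o(1)} term here decays like {1/\log\log P}, so the ratio below approaches {1} extremely slowly):

```python
# sum_{p1,p2 <= P} 1/[p1,p2]: off-diagonal terms have [p1,p2] = p1*p2, and the
# diagonal contributes sum_p 1/p = O(loglog P); compare with (loglog P)^2.
from math import log

from sympy import primerange

for P in (10**2, 10**4, 10**6):
    recip = [1.0 / p for p in primerange(2, P + 1)]
    s = sum(recip)
    total = s * s - sum(r * r for r in recip) + s  # off-diagonal + diagonal
    print(P, total / log(log(P)) ** 2)  # drifts (very slowly) towards 1
```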

Remark 4 As an alternative to the Turan-Kubilius inequality, one can use the Ramaré identity

\displaystyle \sum_{p \leq P: p|n} \frac{1}{\# \{ p' \leq P: p'|n\}} = 1 - 1_{(p,n)=1 \hbox{ for all } p \leq P}

(see e.g. Section 17.3 of Friedlander-Iwaniec). This identity turns out to give better quantitative results in applications than the Turan-Kubilius inequality; see the paper of Matomaki and Radziwiłł for an instance of this.
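
The identity is also easy to confirm by brute force (illustration only):

```python
# Verify the Ramare identity for all n <= 10^4 with P = 30: the left-hand sum
# equals 1 when n has a prime factor <= P, and is empty (hence 0) otherwise.
from sympy import primerange

P = 30
primes = list(primerange(2, P + 1))
for n in range(1, 10**4 + 1):
    divisors = [p for p in primes if n % p == 0]
    lhs = sum(1.0 / len(divisors) for p in divisors) if divisors else 0.0
    coprime = 1.0 if not divisors else 0.0  # 1_{(p,n)=1 for all p <= P}
    assert abs(lhs - (1.0 - coprime)) < 1e-9
print("identity verified for all n <= 10^4")
```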

We now prove Theorem 2. Let {Q} denote the left-hand side of (2); by the triangle inequality we have {Q=O(1)}. By Lemma 3 (for some {P = x^{o(1)}} to be chosen later) and the triangle inequality we have

\displaystyle \sum_{n \leq x} \frac{\frac{1}{\log \log P} \sum_{p \leq P: p|n} f(n)}{n} = Q \log x + o( \log x ).

We rearrange the left-hand side as

\displaystyle \frac{1}{\log\log P} \sum_{p \leq P} \frac{f(p)}{p} \sum_{m \leq x/p} \frac{f(m)}{m}.

We now replace the constraint {m \leq x/p} by {m \leq x}. The error incurred in doing so is

\displaystyle O( \frac{1}{\log\log P} \sum_{p \leq P} \frac{1}{p} \sum_{x/P \leq m \leq x} \frac{1}{m} )

which by Mertens’ theorem is {O(\log P) = o( \log x )}. Thus we have

\displaystyle \frac{1}{\log\log P} \sum_{p \leq P} \frac{f(p)}{p} \sum_{m \leq x} \frac{f(m)}{m} = Q \log x + o( \log x ).

But by definition of {Q}, we have {\sum_{m \leq x} \frac{f(m)}{m} = Q \log x}, thus

\displaystyle [1 - \frac{1}{\log\log P} \sum_{p \leq P} \frac{f(p)}{p}] Q = o(1). \ \ \ \ \ (3)

 

From Mertens’ theorem, the expression in brackets can be rewritten as

\displaystyle \frac{1}{\log\log P} \sum_{p \leq P} \frac{1 - f(p)}{p} + o(1)

and so the real part of this expression is

\displaystyle \frac{1}{\log\log P} \sum_{p \leq P} \frac{1 - \hbox{Re} f(p)}{p} + o(1).

By (1), Mertens’ theorem and the hypothesis on {f} we have

\displaystyle \sum_{p \leq x^\varepsilon} \frac{1 - \hbox{Re} f(p)}{p} \gg \log\log x - O_\varepsilon(1)

for any {\varepsilon > 0}. This implies that we can find {P = x^{o(1)}} going to infinity such that

\displaystyle \sum_{p \leq P} \frac{1 - \hbox{Re} f(p)}{p} \gg (1-o(1))\log\log P

and thus the expression in brackets has real part {\gg 1-o(1)}. Combining this with (3), we conclude that {Q = o(1)}, and the claim follows.

The Turan-Kubilius argument is certainly not the most efficient way to estimate sums such as {\sum_{n \leq x} \frac{f(n)}{n}}. In the exercise below we give a significantly more accurate estimate that works when {f} is non-negative.

Exercise 5 (Granville-Koukoulopoulos-Matomaki)

  • (i) If {g} is a completely multiplicative function with {g(p) \in \{0,1\}} for all primes {p}, show that

    \displaystyle (e^{-\gamma}-o(1)) \prod_{p \leq x} (1 - \frac{g(p)}{p})^{-1} \leq \sum_{n \leq x} \frac{g(n)}{n} \leq \prod_{p \leq x} (1 - \frac{g(p)}{p})^{-1}

    as {x \rightarrow \infty}. (Hint: for the upper bound, expand out the Euler product. For the lower bound, show that {\sum_{n \leq x} \frac{g(n)}{n} \times \sum_{n \leq x} \frac{h(n)}{n} \ge \sum_{n \leq x} \frac{1}{n}}, where {h} is the completely multiplicative function with {h(p) = 1-g(p)} for all primes {p}.) A numerical check of these bounds for a specific choice of {g} is sketched after this exercise.

  • (ii) If {g} is multiplicative and takes values in {[0,1]}, show that

    \displaystyle \sum_{n \leq x} \frac{g(n)}{n} \asymp \prod_{p \leq x} (1 - \frac{g(p)}{p})^{-1}

    \displaystyle \asymp \exp( \sum_{p \leq x} \frac{g(p)}{p} )

    for all {x \geq 1}.
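
As promised, here is a numerical check of the bounds in part (i) (a sketch under the stated hypotheses; the particular choice of {g}, with {g(2)=0} and {g(p)=1} for all odd primes {p}, so that {g(n) = 1_{n \hbox{ odd}}}, is ours):

```python
# Exercise 5(i) for g(n) = 1_{n odd}: the logarithmic sum should lie between
# e^{-gamma} times the Euler product and the Euler product itself.
from math import exp

from sympy import primerange

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

for x in (10**3, 10**4, 10**5):
    log_sum = sum(1.0 / n for n in range(1, x + 1, 2))  # sum_{n <= x} g(n)/n
    euler = 1.0  # prod_{p <= x} (1 - g(p)/p)^{-1}; the p = 2 factor equals 1
    for p in primerange(3, x + 1):
        euler /= 1.0 - 1.0 / p
    print(x, exp(-GAMMA) * euler, log_sum, euler)  # lower <= sum <= upper
```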

Now we turn to a very recent result of Matomaki and Radziwiłł on mean values of multiplicative functions in short intervals. For sake of illustration we specialise their results to the simpler case of the Liouville function {\lambda}, although their arguments actually work (with some additional effort) for arbitrary multiplicative functions of magnitude at most {1} that are real-valued (or more generally, stay far from complex characters {p \mapsto p^{it}}). Furthermore, we give a qualitative form of their estimates rather than a quantitative one:

Theorem 6 (Matomaki-Radziwiłł, special case) Let {X} be a parameter going to infinity, and let {2 \leq h \leq X} be a quantity going to infinity as {X \rightarrow \infty}. Then for all but {o(X)} of the integers {x \in [X,2X]}, one has

\displaystyle \sum_{x \leq n \leq x+h} \lambda(n) = o( h ).

Equivalently, one has

\displaystyle \sum_{X \leq x \leq 2X} |\sum_{x \leq n \leq x+h} \lambda(n)|^2 = o( h^2 X ). \ \ \ \ \ (4)

 

A simple sieving argument (see Exercise 18 of Supplement 4) shows that one can replace {\lambda} by the Möbius function {\mu} and obtain the same conclusion. See this recent note of Matomaki and Radziwiłł for a simple proof of their (quantitative) main theorem in this special case.
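
Here is a quick numerical look at (4) (illustration only; we sieve {\lambda} by smallest prime factors rather than factoring each {n} individually, and the parameters are chosen just to keep the computation light):

```python
# Mean square of short-interval sums of lambda over x in [X, 2X], normalised
# by the trivial bound h^2 X; a random-sign model predicts size about 1/h.
X, h = 10**5, 100
N = 2 * X + h
spf = list(range(N + 1))  # smallest-prime-factor sieve
for i in range(2, int(N**0.5) + 1):
    if spf[i] == i:
        for j in range(i * i, N + 1, i):
            if spf[j] == j:
                spf[j] = i
lam = [0, 1]
for n in range(2, N + 1):
    lam.append(-lam[n // spf[n]])  # lambda(n) = -lambda(n / spf(n))
pref = [0]
for v in lam[1:]:
    pref.append(pref[-1] + v)  # pref[m] = sum_{n <= m} lambda(n)
ms = sum((pref[x + h] - pref[x - 1]) ** 2 for x in range(X, 2 * X + 1))
print(ms / (h * h * X))  # well below the trivial bound of 1
```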

Of course, (4) improves upon the trivial bound of {O( h^2 X )}. Prior to this paper, such estimates were only known (using arguments similar to those in Section 3 of Notes 6) for {h \geq X^{1/6+\varepsilon}} unconditionally, or for {h \geq \log^A X} for some sufficiently large {A} if one assumed the Riemann hypothesis. This theorem also represents some progress towards Chowla’s conjecture (discussed in Supplement 4) that

\displaystyle \sum_{n \leq x} \lambda(n+h_1) \dots \lambda(n+h_k) = o( x )

as {x \rightarrow \infty} for any fixed distinct {h_1,\dots,h_k}; indeed, it implies that this conjecture holds if one performs a small amount of averaging in the {h_1,\dots,h_k}.
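
The {k=2} case of this conjecture can at least be probed numerically (no amount of computation proves it, of course, but the normalised correlation below does look small):

```python
# Estimate (1/x) sum_{n <= x} lambda(n) lambda(n+1); Chowla's conjecture
# predicts that this tends to zero.
x = 10**6
N = x + 1
spf = list(range(N + 1))  # smallest-prime-factor sieve, as before
for i in range(2, int(N**0.5) + 1):
    if spf[i] == i:
        for j in range(i * i, N + 1, i):
            if spf[j] == j:
                spf[j] = i
lam = [0, 1]
for n in range(2, N + 1):
    lam.append(-lam[n // spf[n]])
corr = sum(lam[n] * lam[n + 1] for n in range(1, x + 1))
print(corr / x)  # small compared with 1
```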

Below the fold, we give a “cheap” version of the Matomaki-Radziwiłł argument. More precisely, we establish

Theorem 7 (Cheap Matomaki-Radziwiłł) Let {X} be a parameter going to infinity, and let {1 \leq T \leq X}. Then

\displaystyle \int_X^{X^A} \left|\sum_{x \leq n \leq e^{1/T} x} \frac{\lambda(n)}{n}\right|^2\frac{dx}{x} = o\left( \frac{\log X}{T^2} \right), \ \ \ \ \ (5)

 

for any fixed {A>1}.

Note that (5) improves upon the trivial bound of {O( \frac{\log X}{T^2} )}. Again, one can replace {\lambda} with {\mu} if desired. Due to the cheapness of Theorem 7, the proof will require few ingredients; the deepest input is the improved zero-free region for the Riemann zeta function due to Vinogradov and Korobov. Other than that, the main tools are the Turan-Kubilius result established above, and some Fourier (or complex) analysis.
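
Before giving the proof, one can sanity-check (5) by discretising the integral (a rough sketch only; the parameters {X = 10^3}, {A = 2}, {T = 10} are ours, chosen to keep the computation small):

```python
# Discretised check of (5): sample u = log x uniformly, form a Riemann sum for
# the integral, and compare with the trivial bound (log X)/T^2.
from math import ceil, exp, floor, log

X, A, T = 10**3, 2, 10.0
N = int(exp(1.0 / T) * X**A) + 1
spf = list(range(N + 1))  # smallest-prime-factor sieve for lambda
for i in range(2, int(N**0.5) + 1):
    if spf[i] == i:
        for j in range(i * i, N + 1, i):
            if spf[j] == j:
                spf[j] = i
lam = [0, 1]
for n in range(2, N + 1):
    lam.append(-lam[n // spf[n]])
H = [0.0]
for n in range(1, N + 1):
    H.append(H[-1] + lam[n] / n)  # H[m] = sum_{n <= m} lambda(n)/n

K = 4000  # number of sample points in u = log x
u0 = log(X)
du = (A - 1) * log(X) / K
integral = 0.0
for k in range(K):
    x = exp(u0 + (k + 0.5) * du)
    S = H[min(N, floor(exp(1.0 / T) * x))] - H[ceil(x) - 1]
    integral += S * S * du
print(integral, log(X) / T**2)  # the integral is far below the trivial bound
```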

— 1. Proof of theorem —

We now prove Theorem 7. We first observe that it will suffice to show that

\displaystyle \int_0^\infty \varphi( \frac{\log x}{\log X} ) \left|\sum_n \eta( T( \log x - \log n ) ) \frac{\lambda(n)}{n}\right|^2\ \frac{dx}{x} = o\left( \frac{\log X}{T^2} \right)

for any smooth {\eta, \varphi: {\bf R} \rightarrow {\bf R}} supported on (say) {[-2,2]} and {[1/2,2A]} respectively, as the claim follows by taking {\eta} and {\varphi} to be approximations to {1_{[-1,0]}} and {1_{[1,A]}} respectively and using the triangle inequality to control the error.

We need some quantities {P_- \leq P_+ \leq X} that go to infinity reasonably fast; more specifically we take

\displaystyle P_- := \exp(\log^{0.98} X )

and

\displaystyle P_+ := \exp(\log^{0.99} X).

By Lemma 3 and the triangle inequality, we can replace {\lambda(n)} in (5) by {\frac{1}{\log \log P_+ - \log\log P_-} \sum_{P_- < p \leq P_+: p|n} \lambda(n)} while only incurring an acceptable error. Indeed, writing

\displaystyle f(n) := \lambda(n) - \frac{1}{\log \log P_+ - \log\log P_-} \sum_{P_- < p \leq P_+: p|n} \lambda(n),

we have

\displaystyle |f(n)| \ll |\frac{1}{\log\log P_+} \sum_{p \leq P_+: p|n} 1 - 1| + |\frac{1}{\log\log P_-} \sum_{p \leq P_-: p|n} 1 - 1|

and so from Lemma 3 and the triangle inequality we have

\displaystyle \sum_{n \leq X^A} \frac{|f(n)|}{n} = o( \log X )

for any fixed {A}, which implies that

\displaystyle \int_0^\infty \varphi( \frac{\log x}{\log X} ) \sum_n \eta( T( \log x - \log n ) ) \frac{|f(n)|}{n}\ \frac{dx}{x} = o\left( \frac{\log X}{T} \right)

and hence

\displaystyle \int_0^\infty \varphi( \frac{\log x}{\log X} ) \left|\sum_n \eta( T( \log x - \log n ) ) \frac{f(n)}{n}\right|^2\ \frac{dx}{x} = o\left( \frac{\log X}{T^2} \right)

since the inner sum is {O(1/T)}. The claim then follows from the triangle inequality.

Since {\log\log P_- = 0.98 \log\log X} and {\log\log P_+ = 0.99 \log\log X}, our task is now to show that

\displaystyle \int_0^\infty \varphi( \frac{\log x}{\log X} ) \left|\sum_n \eta( T(\log x - \log n)) \frac{\sum_{P_- < p \leq P_+: p|n} \lambda(n)}{n}\right|^2\frac{dx}{x}

\displaystyle = o\left( \frac{\log X}{T^2} (\log\log X)^2 \right).

I will (perhaps idiosyncratically) adopt a Fourier-analytic point of view here, rather than a more traditional complex-analytic point of view (for instance, we will use Fourier transforms as a substitute for Dirichlet series). To bring the Fourier perspective to the forefront, we make the change of variables {x = e^u} and {n = pm}, and note that {\varphi( \frac{\log x}{\log X} ) = \varphi( \frac{\log m}{\log X} ) + o(1)}, to rearrange the previous claim as

\displaystyle \int_{\bf R} |\sum_{P_- < p \leq P_+} \frac{1}{p} F( u - \log p )|^2\ du = o( \log X (\log\log X)^2 ),

where

\displaystyle F(y) := T \sum_m \varphi( \frac{\log m}{\log X} ) \eta( T( y - \log m) ) \frac{\lambda(m)}{m}. \ \ \ \ \ (6)

 

Introducing the normalised discrete measure

\displaystyle \mu := \frac{1}{\log\log P_+ - \log\log P_-} \sum_{P_- < p \leq P_+} \frac{1}{p} \delta_{\log p},

so that {\sum_{P_- < p \leq P_+} \frac{1}{p} F(u - \log p) = (\log\log P_+ - \log\log P_-) (F * \mu)(u)}, it thus suffices to show that

\displaystyle \| F * \mu \|_{L^2({\bf R})}^2 = o( \log X )

where {*} now denotes ordinary (Fourier) convolution rather than Dirichlet convolution.

From Mertens’ theorem we see that {\mu} has total mass {O(1)}; also, from the triangle inequality (and the hypothesis {T \leq X}) we see that {F} is supported on {[(\frac{1}{2}-o(1)) \log X, (2A+o(1))\log X]} and obeys the pointwise bound {O(1)}. Thus the trivial bound on {\| F * \mu \|_{L^2({\bf R})}^2} coming from Young’s inequality is {O( \log X)}. To improve upon this, we use Fourier analysis. By Plancherel’s theorem, we have

\displaystyle \| F * \mu \|_{L^2({\bf R})}^2 = \int_{\bf R} |\hat F(\xi)|^2 |\hat \mu(\xi)|^2\ d\xi

where {\hat F, \hat \mu} are the Fourier transforms

\displaystyle \hat F(\xi) := \int_{\bf R} F(x) e^{-2\pi i x \xi}\ dx

and

\displaystyle \hat \mu(\xi) := \int_{\bf R} e^{-2\pi i x \xi}\ d\mu(x).

From Plancherel’s theorem we have

\displaystyle \int_{\bf R} |\hat F(\xi)|^2\ d\xi \ll \log X.

Since the derivative of {F} is bounded by {O(T)}, a similar application of Plancherel also gives

\displaystyle \int_{\bf R} |\xi|^2 |\hat F(\xi)|^2\ d\xi \ll T^2 \log X \leq X^2 \log X

so the contribution of those {\xi} with {\hat \mu(\xi)=o(1)} or {X/|\xi| = o(1)} is acceptable. Also, from the definition of {F} we have

\displaystyle \hat F(\xi) = \hat \eta( \xi / T ) \sum_m \varphi( \frac{\log m}{\log X} ) \frac{\lambda(m)}{m^{1 + 2\pi i \xi}}

and so from the prime number theorem we have {\hat F(\xi) = o(1)} when {\xi = O(1)}; since {\hat \mu(\xi) = O(1)}, we see that the contribution of the region {|\xi| = O(1)} is also acceptable. It thus suffices to show that

\displaystyle \hat \mu(\xi) = o(1)

whenever {\xi = O(X)} and {1/|\xi|=o(1)}. But by definition of {\mu}, we may expand {\hat \mu(\xi)} as

\displaystyle \frac{1}{\log\log P_+ - \log\log P_-} \sum_{P_- < p \leq P_+} \frac{1}{p^{1 + 2\pi i \xi}}

so by smoothed dyadic decomposition (with a fixed smooth {\psi: {\bf R} \rightarrow {\bf R}} supported on, say, {[1/2,2]}) it suffices to show that

\displaystyle \sum_p \psi( \frac{\log p}{\log Q} ) \frac{\log p}{p^{1 + \frac{1}{\log Q} + 2\pi i \xi}} = o( \log Q )

whenever {P_- \ll Q \ll P_+}. We replace the summation over primes with a von Mangoldt function weight to rewrite this as

\displaystyle \sum_n \psi( \frac{\log n}{\log Q} ) \frac{\Lambda(n)}{n^{1 + \frac{1}{\log Q} + 2\pi i \xi}} = o( \log Q ).

Performing a Fourier expansion of the smooth function {\psi}, it thus suffices to show the Dirichlet series bound

\displaystyle -\frac{\zeta'}{\zeta}(\sigma+it) = \sum_n \frac{\Lambda(n)}{n^{\sigma+it}} = o( \log^{0.98} |t| )

as {|t| \rightarrow \infty} and {\sigma > 1} (we use the crude bound {-\frac{\zeta'}{\zeta}(\sigma+it) \ll \frac{1}{\sigma-1}} to deal with the {t=O(1)} contribution). But this follows from the Vinogradov-Korobov bounds, which in fact give a bound of {O( \log^{2/3} |t| \, (\log\log |t|)^{1/3} )} as {|t| \rightarrow \infty}; see Exercise 43 of Notes 2 combined with Exercise 4(i) of Notes 5.
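
The decay of {\hat \mu} that powers this last step can be glimpsed numerically, albeit with far smaller cutoffs than the {P_\pm} above (illustration only; the decay is only logarithmic in {\xi}, so the drop is mild):

```python
# |mu-hat(xi)|: the normalised prime sum below is ~1 at xi = 0 and should be
# smaller once xi is bounded away from 0.
from math import cos, log, pi, sin

from sympy import primerange

P_minus, P_plus = 10**3, 10**5
primes = list(primerange(P_minus + 1, P_plus + 1))  # P_- < p <= P_+
norm = log(log(P_plus)) - log(log(P_minus))
for xi in (0.0, 0.5, 2.0, 10.0, 100.0):
    re = sum(cos(2 * pi * xi * log(p)) / p for p in primes)
    im = sum(sin(2 * pi * xi * log(p)) / p for p in primes)
    print(xi, (re * re + im * im) ** 0.5 / norm)
```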

Remark 8 If one were working with a more general completely multiplicative function {f} than the Liouville function {\lambda}, then one would have to use a duality argument to control the large values of {\hat \mu} (which could occur at a couple more locations than {\xi = O(1)}), and use some version of Halasz’s theorem to also obtain some non-trivial bounds on {\hat F} at those large values (this would require some hypothesis that {f} does not pretend to be like any of the characters {p \mapsto p^{it}} with {t = O(X)}). These new ingredients are in a similar spirit to the “log-free density theorem” from Theorem 6 of Notes 7. See the Matomaki-Radziwiłł paper for details (in the non-cheap case).