In analytic number theory, it is a well-known phenomenon that for many arithmetic functions $f$ of interest in number theory, it is significantly easier to estimate logarithmic sums such as

$\displaystyle \sum_{n \leq x} \frac{f(n)}{n}$

than it is to estimate summatory functions such as

$\displaystyle \sum_{n \leq x} f(n).$

(Here we are normalising $f$ to be roughly constant in size, e.g. $f(n) = O(n^{o(1)})$ as $n \rightarrow \infty$.) For instance, when $f$ is the von Mangoldt function $\Lambda$, the logarithmic sums $\sum_{n \leq x} \frac{\Lambda(n)}{n}$ can be adequately estimated by Mertens’ theorem, which can be easily proven by elementary means (see Notes 1); but a satisfactory estimate on the summatory function $\sum_{n \leq x} \Lambda(n)$ requires the prime number theorem, which is substantially harder to prove (see Notes 2). (From a complex-analytic or Fourier-analytic viewpoint, the problem is that the logarithmic sums can usually be controlled just from knowledge of the Dirichlet series $\sum_n \frac{f(n)}{n^s}$ for $s$ near $1$; but the summatory functions require control of the Dirichlet series for $s$ on or near a large portion of the line $\{ 1+it: t \in {\bf R} \}$. See Notes 2 for further discussion.)
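This contrast can be checked numerically for the von Mangoldt function. The sketch below (an illustration only; the cutoff of $10^5$ is an arbitrary choice) sieves $\Lambda$ and compares the two sums: the logarithmic sum stays within a bounded distance of $\log x$, in accordance with Mertens’ theorem, while the normalised summatory function is close to $1$, in accordance with the prime number theorem.

```python
import math

def mangoldt_table(x):
    """Sieve the von Mangoldt function: Lambda(n) = log p if n = p^k, else 0."""
    lam = [0.0] * (x + 1)
    is_comp = [False] * (x + 1)
    for p in range(2, x + 1):
        if not is_comp[p]:
            # p is prime: set Lambda at every power of p
            pk = p
            while pk <= x:
                lam[pk] = math.log(p)
                pk *= p
            for m in range(p * p, x + 1, p):
                is_comp[m] = True
    return lam

x = 10 ** 5
lam = mangoldt_table(x)
log_sum = sum(lam[n] / n for n in range(1, x + 1))
summatory = sum(lam[n] for n in range(1, x + 1))
# Mertens: the logarithmic sum is log x + O(1) (elementary);
# PNT: the summatory function is (1 + o(1)) x (much deeper).
print(log_sum - math.log(x))
print(summatory / x)
```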

Viewed conversely, whenever one has a difficult estimate on a summatory function such as $\sum_{n \leq x} f(n)$, one can look to see if there is a “cheaper” version of that estimate that only controls the logarithmic sums $\sum_{n \leq x} \frac{f(n)}{n}$, which is easier to prove than the original, more “expensive” estimate. In this post, we shall do this for two theorems, a classical theorem of Halasz on mean values of multiplicative functions on long intervals, and a much more recent result of Matomaki and Radziwiłł on mean values of multiplicative functions in short intervals. The two are related; the former theorem is an ingredient in the latter (though in the special case of the Matomaki-Radziwiłł theorem considered here, we will not need Halasz’s theorem directly, instead using a key tool in the *proof* of that theorem).

We begin with Halasz’s theorem. Here is a version of this theorem, due to Montgomery and to Tenenbaum:

Theorem 1 (Halasz-Montgomery-Tenenbaum) Let $f$ be a multiplicative function with $|f(n)| \leq 1$ for all $n$. Let $x \geq 3$ and $T \geq 1$, and set

$\displaystyle M := \min_{|t| \leq T} \sum_{p \leq x} \frac{1 - \hbox{Re}( f(p) p^{-it} )}{p}.$

Then one has

$\displaystyle \frac{1}{x} \sum_{n \leq x} f(n) \ll (1+M) e^{-M} + \frac{1}{T}.$

Informally, this theorem asserts that $\frac{1}{x} \sum_{n \leq x} f(n)$ is small compared with $1$, unless $f$ “pretends” to be like the character $p \mapsto p^{it}$ on primes for some small $t$. (This is the starting point of the “pretentious” approach of Granville and Soundararajan to analytic number theory, as developed for instance here.) We now give a “cheap” version of this theorem which is significantly weaker (because it settles for controlling logarithmic sums rather than summatory functions, requires $f$ to be completely multiplicative instead of multiplicative, requires a strong bound on the analogue of the quantity $M$, and only gives qualitative decay rather than quantitative estimates), but which is easier to prove:

Theorem 2 (Cheap Halasz) Let $x$ be an asymptotic parameter going to infinity. Let $f$ be a completely multiplicative function (possibly depending on $x$) such that $|f(n)| \leq 1$ for all $n$, such that

Note that now that we are content with estimating logarithmic sums, we no longer need to preclude the possibility that $f$ pretends to be like a character $p \mapsto p^{it}$; see Exercise 11 of Notes 1 for a related observation.

To prove this theorem, we first need a special case of the Turan-Kubilius inequality.

Lemma 3 (Turan-Kubilius) Let $x$ be a parameter going to infinity, and let be a quantity depending on $x$ such that and as . Then

Informally, this lemma is asserting that

for most large numbers . Another way of writing this heuristically is in terms of Dirichlet convolutions:

This type of estimate was previously discussed as a tool to establish a criterion of Katai and Bourgain-Sarnak-Ziegler for Möbius orthogonality estimates in this previous blog post. See also Section 5 of Notes 1 for some similar computations.

*Proof:* By Cauchy-Schwarz it suffices to show that

Expanding out the square, it suffices to show that

for .

We just show the case, as the cases are similar (and easier). We rearrange the left-hand side as

We can estimate the inner sum as . But a routine application of Mertens’ theorem (handling the diagonal case when separately) shows that

and the claim follows.
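As a numerical sanity check of the concentration phenomenon underlying Lemma 3 (using unweighted averages over $n \leq x$ rather than the logarithmically weighted sums of the lemma, and with the arbitrary sample choices $x = 10^5$ and prime cutoff $10^3$): the number of small prime factors of a typical $n$ concentrates around $\sum_{p} \frac{1}{p}$, with variance comparable to the mean rather than to its square.

```python
x, P = 10 ** 5, 10 ** 3
# omega_P(n): number of distinct primes p <= P dividing n, built by sieving
omega = [0] * (x + 1)
is_comp = [False] * (P + 1)
prime_recip = 0.0  # sum of 1/p over primes p <= P, the predicted mean
for p in range(2, P + 1):
    if not is_comp[p]:
        prime_recip += 1.0 / p
        for m in range(p * p, P + 1, p):
            is_comp[m] = True
        for m in range(p, x + 1, p):
            omega[m] += 1

mean = sum(omega[1:]) / x
variance = sum((omega[n] - prime_recip) ** 2 for n in range(1, x + 1)) / x
print(mean, prime_recip)   # these two agree closely
print(variance)            # comparable to prime_recip, not to prime_recip**2
```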

Remark 4 As an alternative to the Turan-Kubilius inequality, one can use the Ramaré identity (see e.g. Section 17.3 of Friedlander-Iwaniec). This identity turns out to give superior quantitative results to the Turan-Kubilius inequality in applications; see the paper of Matomaki and Radziwiłł for an instance of this.

We now prove Theorem 2. Let denote the left-hand side of (2); by the triangle inequality we have . By Lemma 3 (for some to be chosen later) and the triangle inequality we have

We rearrange the left-hand side as

We now replace the constraint by . The error incurred in doing so is

which by Mertens’ theorem is . Thus we have

But by definition of , we have , thus

From Mertens’ theorem, the expression in brackets can be rewritten as

and so the real part of this expression is

By (1), Mertens’ theorem and the hypothesis on we have

for any . This implies that we can find going to infinity such that

and thus the expression in brackets has real part . The claim follows.

The Turan-Kubilius argument is certainly not the most efficient way to estimate sums such as . In the exercise below we give a significantly more accurate estimate that works when the function is non-negative.

Exercise 5 (Granville-Koukoulopoulos-Matomaki)

- (i) If is a completely multiplicative function with for all primes , show that
as . (Hint: for the upper bound, expand out the Euler product. For the lower bound, show that , where is the completely multiplicative function with for all primes .)
- (ii) If is multiplicative and takes values in , show that
for all .
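Part (i) can be sanity-checked numerically. The sketch below takes as a test case the hypothetical completely multiplicative function with $g(p) = \frac{1}{2}$ at every prime (an arbitrary choice, with the arbitrary cutoff $x = 10^5$), and compares the logarithmically weighted sum of $g$ against the exponential of $\sum_{p \leq x} \frac{g(p)}{p}$; the two agree up to a bounded factor, as the exercise predicts.

```python
import math

x = 10 ** 5
spf = list(range(x + 1))  # smallest-prime-factor sieve
for p in range(2, int(x ** 0.5) + 1):
    if spf[p] == p:
        for m in range(p * p, x + 1, p):
            if spf[m] == m:
                spf[m] = p

# completely multiplicative test function with g(p) = 1/2 for every prime p,
# so g(n) = 2^{-Omega(n)}
g = [0.0] * (x + 1)
g[1] = 1.0
for n in range(2, x + 1):
    g[n] = 0.5 * g[n // spf[n]]

lhs = sum(g[n] / n for n in range(1, x + 1))
rhs = math.exp(sum(0.5 / p for p in range(2, x + 1) if spf[p] == p))
print(lhs, rhs, lhs / rhs)  # the ratio is bounded above and below
```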

Now we turn to a very recent result of Matomaki and Radziwiłł on mean values of multiplicative functions in short intervals. For sake of illustration we specialise their results to the simpler case of the Liouville function , although their arguments actually work (with some additional effort) for arbitrary multiplicative functions of magnitude at most $1$ that are real-valued (or more generally, stay far from complex characters ). Furthermore, we give a qualitative form of their estimates rather than a quantitative one:

Theorem 6 (Matomaki-Radziwiłł, special case) Let be a parameter going to infinity, and let be a quantity going to infinity as . Then for all but of the integers , one has

A simple sieving argument (see Exercise 18 of Supplement 4) shows that one can replace by the Möbius function and obtain the same conclusion. See this recent note of Matomaki and Radziwiłł for a simple proof of their (quantitative) main theorem in this special case.
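One can already see the phenomenon of Theorem 6 numerically at small scales. The sketch below (the choices $N = 10^5$ and $H = 100$ are arbitrary) sieves the Liouville function and checks that essentially no interval of length $H$ has an average bounded away from zero by $\frac{1}{2}$.

```python
# Liouville lambda(n) = (-1)^{Omega(n)}, computed from smallest prime factors
N, H = 10 ** 5, 100
spf = list(range(N + 1))  # smallest-prime-factor sieve
for p in range(2, int(N ** 0.5) + 1):
    if spf[p] == p:
        for m in range(p * p, N + 1, p):
            if spf[m] == m:
                spf[m] = p
lam = [0] * (N + 1)
lam[1] = 1
for n in range(2, N + 1):
    lam[n] = -lam[n // spf[n]]

# prefix sums, then count how often a length-H window has a large average
pre = [0]
for n in range(1, N + 1):
    pre.append(pre[-1] + lam[n])
starts = range(N // 2, N - H)
bad = sum(1 for x0 in starts if abs(pre[x0 + H] - pre[x0]) > 0.5 * H)
print(bad / len(starts))  # fraction of "bad" windows; should be tiny
```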

Of course, (4) improves upon the trivial bound of . Prior to this paper, such estimates were only known (using arguments similar to those in Section 3 of Notes 6) for unconditionally, or for for some sufficiently large if one assumed the Riemann hypothesis. This theorem also represents some progress towards Chowla’s conjecture (discussed in Supplement 4) that

as for any fixed distinct ; indeed, it implies that this conjecture holds if one performs a small amount of averaging in the .

Below the fold, we give a “cheap” version of the Matomaki-Radziwiłł argument. More precisely, we establish

Theorem 7 (Cheap Matomaki-Radziwiłł) Let be a parameter going to infinity, and let . Then

Note that (5) improves upon the trivial bound of . Again, one can replace with if desired. Due to the cheapness of Theorem 7, the proof will require few ingredients; the deepest input is the improved zero-free region for the Riemann zeta function due to Vinogradov and Korobov. Other than that, the main tools are the Turan-Kubilius result established above, and some Fourier (or complex) analysis.

**— 1. Proof of theorem —**

We now prove Theorem 7. We first observe that it will suffice to show that

for any smooth supported on (say) and respectively, as the claim follows by taking and to be approximations to and respectively and using the triangle inequality to control the error.

We need some quantities that go to infinity reasonably fast; more specifically we take

and

By Lemma 3 and the triangle inequality, we can replace in (5) by while only incurring an acceptable error. Indeed, writing

we have

and so from Lemma 3 and the triangle inequality we have

for any fixed , which implies that

and hence

since the inner sum is . The claim then follows from the triangle inequality.

Since and , our task is now to show that

I will (perhaps idiosyncratically) adopt a Fourier-analytic point of view here, rather than a more traditional complex-analytic point of view (for instance, we will use Fourier transforms as a substitute for Dirichlet series). To bring the Fourier perspective to the forefront, we make the change of variables and , and note that , to rearrange the previous claim as

Introducing the normalised discrete measure

it thus suffices to show that

where now denotes ordinary (Fourier) convolution rather than Dirichlet convolution.

From Mertens’ theorem we see that has total mass ; also, from the triangle inequality (and the hypothesis ) we see that is supported on and obeys the pointwise bound of . Thus we see that the trivial bound on is by Young’s inequality. To improve upon this, we use Fourier analysis. By Plancherel’s theorem, we have

where are the Fourier transforms

and

From Plancherel’s theorem we have

Since the derivative of is bounded by , a similar application of Plancherel also gives

so the contribution of those with or is acceptable. Also, from the definition of we have

and so from the prime number theorem we have when ; since , we see that the contribution of the region is also acceptable. It thus suffices to show that

whenever and . But by definition of , we may expand as

so by smoothed dyadic decomposition it suffices to show that

whenever . We replace the summation over primes with a von Mangoldt function weight to rewrite this as

Performing a Fourier expansion of the smooth function , it thus suffices to show the Dirichlet series bound

as and (we use the crude bound to deal with the contribution). But this follows from the Vinogradov-Korobov bounds (which in fact give a bound of as ); see Exercise 43 of Notes 2 combined with Exercise 4(i) of Notes 5.
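The two Fourier-analytic tools leaned on above can be illustrated in the discrete setting: Plancherel/Parseval for the DFT, and the Cauchy-Schwarz bound $\|a*b\|_\infty \leq \|a\|_2 \|b\|_2$ on a (circular) convolution, which is the analogue of the trivial bound that the argument improves upon. This is a generic sketch with random test vectors (numpy assumed), not a model of the specific measure used above.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=256) + 1j * rng.normal(size=256)
b = rng.normal(size=256) + 1j * rng.normal(size=256)

# Plancherel/Parseval for the DFT: sum |a|^2 = (1/N) sum |a_hat|^2
N = len(a)
lhs = np.sum(np.abs(a) ** 2)
rhs = np.sum(np.abs(np.fft.fft(a)) ** 2) / N
print(lhs, rhs)

# circular convolution via the DFT, and the Cauchy-Schwarz sup bound:
# each entry of a*b is an inner product, hence at most ||a||_2 ||b||_2
conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))
print(np.max(np.abs(conv)), np.linalg.norm(a) * np.linalg.norm(b))
```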

Remark 8If one were working with a more general completely multiplicative function than the Liouville function , then one would have to use a duality argument to control the large values of (which could occur at a couple more locations than ), and use some version of Halasz’s theorem to also obtain some non-trivial bounds on at those large values (this would require some hypothesis that does not pretend to be like any of the characters with ). These new ingredients are in a similar spirit to the “log-free density theorem” from Theorem 6 of Notes 7. See the Matomaki-Radziwiłł paper for details (in the non-cheap case).

## 27 comments


25 February, 2015 at 2:43 am

Anonymous: In the statement of Theorem 6, the function is not defined.

[Oops, that term should not have been there; deleted now. -T.]

25 February, 2015 at 6:32 am

Anonymous: Which kind of estimate is needed to prove that for almost every x the error term in the PNT is the conjectured one? (Perhaps some kind of averaged Chowla conjecture is sufficient?)

25 February, 2015 at 8:42 am

Terence Tao: I’m not sure which conjectured error term for PNT you are referring to, but if one wants to hold for almost all , this is already equivalent (via the explicit formula) to the Riemann hypothesis (which is in turn equivalent to the stronger statement that , part of the remarkable “self-improving” property of the RH).

If instead one wants for most , then it would basically suffice to get good asymptotics for for averaged between 1 and – that is to say, an averaged form of the prime tuples conjecture. In principle one could derive such asymptotics from (averaged) Chowla type estimates but one would need the error term to be better than the main term by a factor of or more (in fact with the usual sieves one would need a gain of the order of ). This seems out of reach of current methods, even with the recent breakthrough of Matomaki and Radziwill.

25 February, 2015 at 10:39 pm

Avi Levy: When you say “cheap”, do you mean “weaker and easier to prove” or something slightly different?

[Yes, this is the intended meaning; see the second paragraph of the blog post. -T.]

26 February, 2015 at 6:19 pm

Bharath Ramsundar: In the introductory section the statement “The two are related; the latter theorem is an ingredient in the former” should read “The two are related; the former theorem is an ingredient in the latter.”

[Corrected, thanks – T.]

4 March, 2015 at 11:50 am

Anonymous: This is an instructive post. One comment: in Theorem 2 and its proof, “tends to infinity” should perhaps be “does not tend to zero” (for the sake of applicability), and further care with the dependence of f upon x in the final steps might be warranted.

[The current argument can tolerate an arbitrary dependence of on , so long as it stays bounded by 1 of course. For applications, I would recommend using the “expensive” version of Halasz’s theorem rather than the “cheap” version, which is intended to illustrate the basic idea behind the theorem rather than to be of direct use in applications. -T.]

4 March, 2015 at 10:13 pm

Anonymous: I meant (unless I’m just really confused) that the theorem can never be applied because its hypothesis is never satisfied (by Mertens’ theorem), and even the weakened hypothesis is consistent with f vanishing on primes up to x^1/10 and being -1 on the rest, which makes the second-to-last step hard to follow.

5 March, 2015 at 8:08 am

Terence Tao: Oops, you’re right, there were a number of issues with the statement and proof of the theorem. I think I’ve fixed them now (replacing the hypothesis with one that actually can be satisfied).

15 March, 2015 at 9:59 am

TK: A minor typographical remark: it should be Radziwiłł (in LaTeX: Radziwi\l \l )

[Corrected, thanks – T.]

15 September, 2015 at 1:47 am

EG: Dear professor,

Assuming that is roughly constant in size, you said that it’s been observed for many years that is more easily handled than . Putting aside any complex analysis (which we often use to handle these sums), isn’t it because of the following observation?

If has an exceptionally big value , then the relative error coming from this term in is approximately , whereas in the case of , the relative error would be approximately , which is smaller.

From this point of view, wouldn’t it be even easier to study sums like or even (and so on…) when is “small” (e.g. the number of distinct prime divisors )?

We for sure lose the multiplicativity though… And in a sense, seems optimal because it’s the best easy way to have a small relative error together with the multiplicativity, but there may be more complicated multiplicative functions that allow us to have a smaller error?

I find it interesting to notice that this only occurs around “these functions” ( etc…) because in any other case, going from to , the relative error is always .

Thanks for your time.

16 September, 2015 at 8:27 am

Terence Tao: Yes, this is one way of thinking about it. To put it another way: the sum is sensitive to the values of on intervals such as (as this would be expected to contribute about half of the total sum) or even for some fixed . As such, this sum can be influenced by components of of the form for some bounded (as such functions can be almost constant on intervals such as if ). This is the main reason why the prime number theorem (which is basically about estimating ) is so difficult to prove, as one must show that the von Mangoldt function does not contain a component that looks like , and this is equivalent to ruling out zeroes of the zeta function at (as can be seen by the explicit formula).

For the logarithmic sum , the behaviour of at is negligible; one needs to look at on a much larger interval such as or at least before one starts to influence a significant portion of this sum. Because of this, components of such as are now irrelevant; in the language of Dirichlet series, it is now only the behaviour of the Dirichlet series of near (as opposed to near ) which is of any significant consequence.

In principle one could take advantage of further averaging as you say by using a sum such as , which now only “sees” components that do not oscillate over such large intervals as or . In particular, any component that looks like would now have negligible impact (whereas such components still have a significant influence on the logarithmic and unweighted sums). However, as you say, in multiplicative number theory one does not usually see expressions such as (which are not multiplicative in n), and so there is usually little additional benefit in performing any additional averaging beyond logarithmic.
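The point that a component $n^{it}$ is almost constant on a short interval when $t$ is bounded, but oscillates fully once $t$ is large, is easy to see numerically. In this sketch the base point $x = 10^6$, the interval length $x/\log x$, and the two sample frequencies are all arbitrary illustrative choices.

```python
import cmath
import math

def max_drift(t, a, b, samples=1000):
    """Maximum deviation of n^{it} = exp(i t log n) from its value at n = a,
    sampled over the interval [a, b]."""
    base = cmath.exp(1j * t * math.log(a))
    return max(abs(cmath.exp(1j * t * math.log(a + (b - a) * k / samples)) - base)
               for k in range(samples + 1))

x = 10 ** 6
H = x / math.log(x)  # a short interval of length x / log x
small = max_drift(1.0, x, x + H)                # t = O(1): nearly constant
large = max_drift(math.log(x) ** 2, x, x + H)   # t large: full oscillation
print(small, large)
```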

16 September, 2015 at 10:50 am

Anonymous: Since has normal order , is it possible to approximate by the (multiplicative!) function ?

16 September, 2015 at 12:01 pm

Terence Tao: Such an approximation is not very good for controlling sums, due to the heavy-tailed nature of . For instance, has an average value of about , despite being typically of size – the mean and median are very different from each other! Also one runs into significant problems when trying to localise to primes (or almost primes), on which becomes bounded.
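This heavy-tailed behaviour of $2^{\Omega(n)}$ is easy to confirm numerically (a sketch with the arbitrary cutoff $N = 10^5$): the mean exceeds both the median and the “typical” size $2^{\log\log N}$ suggested by the normal order.

```python
import math

N = 10 ** 5
spf = list(range(N + 1))  # smallest-prime-factor sieve
for p in range(2, int(N ** 0.5) + 1):
    if spf[p] == p:
        for m in range(p * p, N + 1, p):
            if spf[m] == m:
                spf[m] = p
# Omega(n): number of prime factors counted with multiplicity
big_omega = [0] * (N + 1)
for n in range(2, N + 1):
    big_omega[n] = big_omega[n // spf[n]] + 1

vals = sorted(2.0 ** big_omega[n] for n in range(2, N + 1))
mean = sum(vals) / len(vals)
median = vals[len(vals) // 2]
typical = 2.0 ** math.log(math.log(N))  # size predicted by the normal order
print(mean, median, typical)  # the mean exceeds the median and typical size
```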

18 September, 2015 at 5:27 am

Anonymous: The difficulty of approximating by a multiplicative function may be explained by a theorem of Birch (J. London Math. Soc. 42 (1967), 149-151), according to which the only multiplicative functions of with a non-decreasing(!) normal order are the positive powers of .

27 September, 2015 at 11:23 am

elie520: Thanks for your answer :)

14 November, 2015 at 5:57 am

Kyle Pratt: Sorry to resurrect a dead comments section, but I was hoping you could explain how we arrive at the last equation of the post. It seems to me that we still require .

14 November, 2015 at 7:44 am

Terence Tao: is chosen to be for a sufficiently slowly decaying exponent, and , so any expression which is (independently of the choice of Q) will also be for a sufficiently slowly growing o(1).

14 November, 2015 at 8:05 am

Kyle Pratt: I do not see how we can take to be so large, i.e. that the in goes to zero sufficiently slowly. By trivial estimation we can ignore all with , so we can take . But if is this small and is large (near ) it seems that is smaller than .

14 November, 2015 at 9:10 am

Terence Tao: Ah, I see the problem now, thanks. One has to remove the small primes from to avoid this. I’ve rewritten the argument accordingly.

16 November, 2015 at 12:15 pm

Kyle Pratt: I think Lemma 3 as written is insufficient for the proof of Theorem 7. To get a factor of , we would want a version where we can sum over with , and an error term of .

16 November, 2015 at 12:34 pm

Terence Tao: Actually, I think Lemma 3 is enough once one uses Cauchy-Schwarz to eliminate the local averaging in intervals like . I’ve added some text to the argument to detail this.

16 November, 2015 at 12:59 pm

Kyle Pratt: In the equation immediately after ‘for any fixed , which implies that’, could you provide more detail here? The appearance of the factor seems mysterious. Also, in the equation beneath, it seems like one would use the trivial bound on one of the sums, and then use the equation above for the other, instead of Cauchy-Schwarz.

16 November, 2015 at 1:50 pm

Terence Tao: Thanks for the correction. The factor of T comes from interchanging the sum and integral, and then upper bounding the integral.

16 November, 2015 at 3:01 pm

Kyle Pratt: Great! Thanks.