One of the basic problems in analytic number theory is to obtain bounds and asymptotics for sums of the form

$\displaystyle \sum_{n \leq x} f(n)$

in the limit ${x \rightarrow \infty}$, where ${n}$ ranges over natural numbers less than ${x}$, and ${f: {\bf N} \rightarrow {\bf C}}$ is some arithmetic function of number-theoretic interest. (It is also often convenient to replace this sharply truncated sum with a smoother sum such as ${\sum_n f(n) \psi(n/x)}$, but we will not discuss this technicality here.) For instance, the prime number theorem is equivalent to the assertion

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + o(x)$

where ${\Lambda}$ is the von Mangoldt function, while the Riemann hypothesis is equivalent to the stronger assertion

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O(x^{1/2+o(1)}).$

It is thus of interest to develop techniques to estimate such sums ${\sum_{n \leq x} f(n)}$. Of course, the difficulty of this task depends on how “nice” the function ${f}$ is. The functions ${f}$ that come up in number theory lie on a broad spectrum of “niceness”, with some particularly nice functions being quite easy to sum, and some being insanely difficult.

At the easiest end of the spectrum are those functions ${f}$ that exhibit some sort of regularity or “smoothness”. Examples of smoothness include “Archimedean” smoothness, in which ${f(n)}$ is the restriction of some smooth function ${f: {\bf R} \rightarrow {\bf C}}$ from the reals to the natural numbers, and the derivatives of ${f}$ are well controlled. A typical example is

$\displaystyle \sum_{n \leq x} \log n.$

One can already get quite good bounds on this quantity by comparison with the integral ${\int_1^x \log t\ dt}$, namely

$\displaystyle \sum_{n \leq x} \log n = x \log x - x + O(\log x),$

with sharper bounds available by using tools such as the Euler-Maclaurin formula (see this blog post). Exponentiating such asymptotics, incidentally, leads to one of the standard proofs of Stirling’s formula (as discussed in this blog post).
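As an aside, this asymptotic is easy to test numerically. The following Python snippet (an illustrative check, not part of the argument) compares the partial sums with ${x \log x - x}$; Euler-Maclaurin predicts the error to be about ${\frac{1}{2} \log(2\pi x)}$, comfortably within the ${O(\log x)}$ bound.

```python
import math

def log_sum(x: int) -> float:
    """Compute sum_{n <= x} log n directly."""
    return sum(math.log(n) for n in range(1, x + 1))

for x in (10**3, 10**4, 10**5):
    error = log_sum(x) - (x * math.log(x) - x)
    # The error stays well within a single factor of log x.
    print(x, error, math.log(x))
```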

One can also consider “non-Archimedean” notions of smoothness, such as periodicity relative to a small period ${q}$. Indeed, if ${f}$ is periodic with period ${q}$ (and is thus essentially a function on the cyclic group ${{\bf Z}/q{\bf Z}}$), then one has the easy bound

$\displaystyle \sum_{n \leq x} f(n) = \frac{x}{q} \sum_{n \in {\bf Z}/q{\bf Z}} f(n) + O( \sum_{n \in {\bf Z}/q{\bf Z}} |f(n)| ).$

In particular, we have the fundamental estimate

$\displaystyle \sum_{n \leq x: q|n} 1 = \frac{x}{q} + O(1). \ \ \ \ \ (1)$

This is a good estimate when ${q}$ is much smaller than ${x}$, but as ${q}$ approaches ${x}$ in magnitude, the error term ${O(1)}$ begins to overwhelm the main term ${\frac{x}{q}}$, and one needs much more delicate information on the fractional part of ${\frac{x}{q}}$ in order to obtain good estimates at this point.
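The estimate (1) can be verified by brute force; here is a quick illustrative Python check (function names are of course just for illustration). Note how the main term ${x/q}$ is itself ${O(1)}$ once ${q}$ is comparable to ${x}$, at which point the estimate carries no information.

```python
def count_multiples(x: int, q: int) -> int:
    """Count the n <= x divisible by q, by brute force."""
    return sum(1 for n in range(1, x + 1) if n % q == 0)

x = 10_000
for q in (3, 7, 100, 9_999):
    exact = count_multiples(x, q)
    assert exact == x // q            # the count is exactly floor(x/q)
    assert abs(exact - x / q) < 1     # main term x/q with an O(1) error
    print(q, exact, x / q)
```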

One can also consider functions ${f}$ which combine “Archimedean” and “non-Archimedean” smoothness into an “adelic” smoothness. We will not define this term precisely here (though the concept of a Schwartz-Bruhat function is one way to capture this sort of concept), but a typical example might be

$\displaystyle \sum_{n \leq x} \chi(n) \log n$

where ${\chi}$ is periodic with some small period ${q}$. By using techniques such as summation by parts, one can reduce the estimation of such sums to the methods already discussed for periodic functions and for functions with (Archimedean) smoothness.

Another class of functions that is reasonably well controlled are the multiplicative functions, in which ${f(nm) = f(n) f(m)}$ whenever ${n,m}$ are coprime. Here, one can use the powerful techniques of multiplicative number theory, for instance by working with the Dirichlet series

$\displaystyle \sum_{n=1}^\infty \frac{f(n)}{n^s}$

which are clearly related to the partial sums ${\sum_{n \leq x} f(n)}$ (essentially via the Mellin transform, a cousin of the Fourier and Laplace transforms); for this post we ignore the (important) issue of how to make sense of this series when it is not absolutely convergent (but see this previous blog post for more discussion). A primary reason that this technique is effective is that the Dirichlet series of a multiplicative function factorises as an Euler product

$\displaystyle \sum_{n=1}^\infty \frac{f(n)}{n^s} = \prod_p (\sum_{j=0}^\infty \frac{f(p^j)}{p^{js}}).$
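As a numerical sanity check of the Euler product factorisation, one can take the simplest case ${f \equiv 1}$ (so that the Dirichlet series is the Riemann zeta function) and compare truncations of both sides at ${s=2}$, where everything converges absolutely; the snippet below is a rough sketch of this.

```python
import math

def primes_upto(n: int) -> list:
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

s, N = 2.0, 100_000
# Truncated Dirichlet series for f = 1, i.e. zeta(2).
series = sum(n ** -s for n in range(1, N + 1))
# Truncated Euler product over the same range of primes; for f = 1 each
# local factor is a geometric series summing to (1 - p^{-s})^{-1}.
product = 1.0
for p in primes_upto(N):
    product *= 1 / (1 - p ** -s)
print(series, product, math.pi ** 2 / 6)
```

Both truncations agree with ${\zeta(2) = \pi^2/6}$ to several decimal places.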

One also obtains similar types of representations for functions that are not quite multiplicative, but are closely related to multiplicative functions, such as the von Mangoldt function ${\Lambda}$ (whose Dirichlet series ${\sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)}}$ is not given by an Euler product, but instead by the logarithmic derivative of an Euler product).

Moving another notch along the spectrum between well-controlled and ill-controlled functions, one can consider functions ${f}$ that are divisor sums such as

$\displaystyle f(n) = \sum_{d \leq R; d|n} g(d) = \sum_{d \leq R} 1_{d|n} g(d)$

for some other arithmetic function ${g}$, and some level ${R}$. This is a linear combination of periodic functions ${1_{d|n} g(d)}$ and is thus technically periodic in ${n}$ (with period equal to the least common multiple of all the numbers from ${1}$ to ${R}$), but in practice this period is far too large to be useful (except for extremely small levels ${R}$, e.g. ${R = O(\log x)}$). Nevertheless, we can still control the sum ${\sum_{n \leq x} f(n)}$ simply by rearranging the summation:

$\displaystyle \sum_{n \leq x} f(n) = \sum_{d \leq R} g(d) \sum_{n \leq x: d|n} 1$

and thus by (1) one can bound this by the sum of a main term ${x \sum_{d \leq R} \frac{g(d)}{d}}$ and an error term ${O( \sum_{d \leq R} |g(d)| )}$. As long as the level ${R}$ is significantly less than ${x}$, one may expect the main term to dominate, and one can often estimate this term by a variety of techniques (for instance, if ${g}$ is multiplicative, then multiplicative number theory techniques are quite effective, as mentioned previously). Similarly for other slight variants of divisor sums, such as expressions of the form

$\displaystyle \sum_{d \leq R; d | n} g(d) \log \frac{n}{d}$

or expressions of the form

$\displaystyle \sum_{d \leq R} F_d(n)$

where each ${F_d}$ is periodic with period ${d}$.
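To illustrate the rearrangement trick and the resulting main term/error term split, here is a small Python sketch with an arbitrary (purely illustrative) choice of weight ${g}$; the rearranged sum matches the direct one exactly, and the deviation from the main term ${x \sum_{d \leq R} g(d)/d}$ is bounded by ${\sum_{d \leq R} |g(d)|}$ as claimed.

```python
x, R = 10_000, 50
g = lambda d: (-1) ** d / d   # an arbitrary test weight, for illustration only

# Direct evaluation of sum_{n <= x} f(n) with f(n) = sum_{d <= R, d | n} g(d).
direct = sum(sum(g(d) for d in range(1, R + 1) if n % d == 0)
             for n in range(1, x + 1))
# Rearranged evaluation: sum_{d <= R} g(d) * floor(x/d).
rearranged = sum(g(d) * (x // d) for d in range(1, R + 1))
# Main term and the error bound from estimate (1).
main = x * sum(g(d) / d for d in range(1, R + 1))
err_bound = sum(abs(g(d)) for d in range(1, R + 1))
print(direct, rearranged, main, err_bound)
```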

One of the simplest examples of this comes when estimating the divisor function

$\displaystyle \tau(n) := \sum_{d|n} 1,$

which counts the number of divisors of ${n}$. This is a multiplicative function, and is therefore most efficiently estimated using the techniques of multiplicative number theory; but for reasons that will become clearer later, let us “forget” the multiplicative structure and estimate the above sum by more elementary methods. By applying the preceding method, we see that

$\displaystyle \sum_{n \leq x} \tau(n) = \sum_{d \leq x} \sum_{n \leq x:d|n} 1$

$\displaystyle = \sum_{d \leq x} (\frac{x}{d} + O(1))$

$\displaystyle = x \log x + O(x). \ \ \ \ \ (2)$

Here, we are (barely) able to keep the error term smaller than the main term; this is right at the edge of the divisor sum method, because the level ${R}$ in this case is equal to ${x}$. Unfortunately, at this high choice of level, it is not always possible to keep the error term under control like this. For instance, if one wishes to use the standard divisor sum representation

$\displaystyle \Lambda(n) = \sum_{d|n} \mu(d) \log \frac{n}{d},$

where ${\mu}$ is the Möbius function, to compute ${\sum_{n \leq x} \Lambda(n)}$, then one ends up looking at

$\displaystyle \sum_{n \leq x} \Lambda(n) = \sum_{d \leq x} \mu(d) \sum_{n \leq x:d|n} \log \frac{n}{d}$

$\displaystyle = \sum_{d \leq x} \mu(d) ( \frac{x}{d} \log \frac{x}{d} - \frac{x}{d} + O(\log \frac{x}{d}) ).$

From Dirichlet series methods, it is not difficult to establish the identities

$\displaystyle \lim_{s\rightarrow 1^+} \sum_{n=1}^\infty \frac{\mu(n)}{n^s} = 0$

and

$\displaystyle \lim_{s \rightarrow 1^+} \sum_{n=1}^\infty \frac{\mu(n) \log n}{n^s} = -1.$

This suggests (but does not quite prove) that one has

$\displaystyle \sum_{n=1}^\infty \frac{\mu(n)}{n} = 0 \ \ \ \ \ (3)$

and

$\displaystyle \sum_{n=1}^\infty \frac{\mu(n)\log n}{n} = -1 \ \ \ \ \ (4)$

in the sense of conditionally convergent series. Assuming one can justify this (which, ultimately, requires one to exclude zeroes of the Riemann zeta function on the line ${\hbox{Re}(s)=1}$, as discussed in this previous post), one is eventually left with the estimate ${x + O(x)}$, which is useless as a lower bound (and recovers only the classical Chebyshev estimate ${\sum_{n \leq x} \Lambda(n) \ll x}$ as the upper bound). The inefficiency here when compared to the situation with the divisor function ${\tau}$ can be attributed to the signed nature of the Möbius function ${\mu(n)}$, which causes some cancellation in the divisor sum expansion that needs to be compensated for with improved estimates.
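Incidentally, the conditionally convergent series (3) and (4) can be explored numerically, though convergence is quite slow, in keeping with their equivalence to prime number theorem-type statements. The following illustrative sketch sieves the Möbius function and computes the partial sums up to ${10^6}$.

```python
import math

def mobius_upto(N: int) -> list:
    """Compute mu(1), ..., mu(N) by a sieve."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(p, N + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0   # not squarefree
    return mu

N = 10**6
mu = mobius_upto(N)
S3 = sum(mu[n] / n for n in range(1, N + 1))                 # approaches 0
S4 = sum(mu[n] * math.log(n) / n for n in range(2, N + 1))   # approaches -1
print(S3, S4)
```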

However, there are a number of tricks available to reduce the level of divisor sums. The simplest comes from exploiting the change of variables ${d \mapsto \frac{n}{d}}$, which can in principle reduce the level by a square root. For instance, when computing the divisor function ${\tau(n) = \sum_{d|n} 1}$, one can observe using this change of variables that every divisor of ${n}$ above ${\sqrt{n}}$ is paired with one below ${\sqrt{n}}$, and so we have

$\displaystyle \tau(n) = 2 \sum_{d \leq \sqrt{n}: d|n} 1 \ \ \ \ \ (5)$

except when ${n}$ is a perfect square, in which case one must subtract one from the right-hand side. Using this reduced-level divisor sum representation, one can obtain an improvement to (2), namely

$\displaystyle \sum_{n \leq x} \tau(n) = x \log x + (2\gamma-1) x + O(\sqrt{x}).$

This type of argument is also known as the Dirichlet hyperbola method. A variant of this argument can also deduce the prime number theorem from (3), (4) (and with some additional effort, one can even drop the use of (4)); this is discussed at this previous blog post.
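The hyperbola method also has a computational payoff: it evaluates ${\sum_{n \leq x} \tau(n)}$ exactly in about ${\sqrt{x}}$ operations rather than ${x}$. A short illustrative implementation:

```python
import math

def tau_sum_naive(x: int) -> int:
    """sum_{n <= x} tau(n), summing floor(x/d) over all levels d <= x."""
    return sum(x // d for d in range(1, x + 1))

def tau_sum_hyperbola(x: int) -> int:
    """Dirichlet hyperbola method: O(sqrt(x)) operations.

    Counts lattice points (a, b) with ab <= x by pairing each divisor
    above sqrt(x) with one below, then correcting for double counting.
    """
    m = math.isqrt(x)
    return 2 * sum(x // d for d in range(1, m + 1)) - m * m

EULER_GAMMA = 0.5772156649015329
x = 10**6
exact = tau_sum_hyperbola(x)
approx = x * math.log(x) + (2 * EULER_GAMMA - 1) * x
print(exact - approx)   # an error of size O(sqrt(x))
```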

Using this square root trick, one can now also control divisor sums such as

$\displaystyle \sum_{n \leq x} \tau(n^2+1).$

(Note that ${\tau(n^2+1)}$ has no multiplicativity properties in ${n}$, and so multiplicative number theory techniques cannot be directly applied here.) The level of the divisor sum here is initially of order ${x^2}$, which is too large to be useful; but using the square root trick, we can expand this expression as

$\displaystyle 2 \sum_{n \leq x} \sum_{d \leq n: d | n^2+1} 1$

which one can rewrite as

$\displaystyle 2 \sum_{d \leq x} \sum_{d \leq n \leq x: n^2+1 = 0 \hbox{ mod } d} 1.$

The constraint ${n^2+1=0 \hbox{ mod } d}$ is periodic in ${n}$ with period ${d}$, so we can write this as

$\displaystyle 2 \sum_{d \leq x} ( \frac{x}{d} \rho(d) + O(\rho(d)) )$

where ${\rho(d)}$ is the number of solutions in ${{\bf Z}/d{\bf Z}}$ to the equation ${n^2+1 = 0 \hbox{ mod } d}$, and so

$\displaystyle \sum_{n \leq x} \tau(n^2+1) = 2x \sum_{d \leq x} \frac{\rho(d)}{d} + O(\sum_{d \leq x} \rho(d)).$
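The counting function ${\rho}$ is cheap to experiment with numerically. The following brute-force sketch (illustrative only) checks, for instance, that ${\rho(p)}$ equals ${2}$ or ${0}$ according to whether ${p}$ is ${1}$ or ${3 \hbox{ mod } 4}$, and that ${\rho}$ is multiplicative on coprime arguments.

```python
def rho(d: int) -> int:
    """Number of n in Z/dZ with n^2 + 1 == 0 (mod d), by direct count."""
    return sum(1 for n in range(d) if (n * n + 1) % d == 0)

# rho(p) = 2 for p = 1 mod 4 and 0 for p = 3 mod 4:
print([rho(p) for p in (5, 13, 17, 29)])   # primes that are 1 mod 4
print([rho(p) for p in (3, 7, 11, 19)])    # primes that are 3 mod 4
# Multiplicativity on coprime arguments, e.g. 65 = 5 * 13:
print(rho(65), rho(5) * rho(13))
```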

The function ${\rho}$ is multiplicative, and can be easily computed at primes ${p}$ and prime powers ${p^j}$ using tools such as quadratic reciprocity and Hensel’s lemma. For instance, ${\rho(p)}$ is equal to ${2}$ for ${p=1 \hbox{ mod } 4}$ (as ${-1}$ is a quadratic residue modulo such primes, a fact closely related to Fermat’s two-square theorem) and ${0}$ for ${p=3 \hbox{ mod } 4}$. From this and standard multiplicative number theory methods (e.g. by obtaining asymptotics on the Dirichlet series ${\sum_d \frac{\rho(d)}{d^s}}$), one eventually obtains the asymptotic

$\displaystyle \sum_{d \leq x} \frac{\rho(d)}{d} = \frac{3}{2\pi} \log x + O(1)$

and also

$\displaystyle \sum_{d \leq x} \rho(d) = O(x)$

and thus

$\displaystyle \sum_{n \leq x} \tau(n^2+1) = \frac{3}{\pi} x \log x + O(x).$
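One can test this asymptotic numerically, though convergence is slow since the ${O(x)}$ term remains comparable to the main term at accessible ranges; here is an illustrative brute-force sketch.

```python
import math

def tau(n: int) -> int:
    """Divisor-counting function via trial division up to sqrt(n)."""
    count, d = 0, 1
    while d * d < n:
        if n % d == 0:
            count += 2   # the pair (d, n/d)
        d += 1
    if d * d == n:
        count += 1       # n is a perfect square
    return count

x = 1500
total = sum(tau(n * n + 1) for n in range(1, x + 1))
prediction = (3 / math.pi) * x * math.log(x)
# The ratio tends to 1 only slowly as x grows, because of the O(x) term.
print(total / prediction)
```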

Similar arguments give asymptotics for ${\tau}$ on other quadratic polynomials; see for instance this paper of Hooley and these papers by McKee. Note that the irreducibility of the polynomial will be important. If one considers instead a sum involving a reducible polynomial, such as ${\sum_{n \leq x} \tau(n^2-1)}$, then the analogous quantity ${\rho(d)}$ becomes significantly larger, leading to a larger growth rate (of order ${x \log^2 x}$ rather than ${x\log x}$) for the sum.

However, the square root trick is insufficient by itself to deal with higher order sums involving the divisor function, such as

$\displaystyle \sum_{n \leq x} \tau(n^3+1);$

the level here is initially of order ${x^3}$, and the square root trick only lowers this to about ${x^{3/2}}$, creating an error term that overwhelms the main term. Indeed, the asymptotic for this sum has not yet been rigorously established (although, if one heuristically drops error terms, one can arrive at a reasonable conjecture for it), though some results are known if one averages over additional parameters (see e.g. this paper of Greaves, or this paper of Matthiesen).

Nevertheless, there is an ingenious argument of Erdös that allows one to obtain good upper and lower bounds for these sorts of sums, in particular establishing the bounds

$\displaystyle x \log x \ll \sum_{n \leq x} \tau(P(n)) \ll x \log x \ \ \ \ \ (6)$

for any fixed irreducible non-constant polynomial ${P}$ that maps ${{\bf N}}$ to ${{\bf N}}$ (with the implied constants depending of course on the choice of ${P}$). There is also the related moment bound

$\displaystyle \sum_{n \leq x} \tau^m(P(n)) \ll x \log^{O(1)} x \ \ \ \ \ (7)$

for any fixed ${P}$ (not necessarily irreducible) and any fixed ${m \geq 1}$, due to van der Corput; this bound is in fact used to dispose of some error terms in the proof of (6). These should be compared with what one can obtain from the divisor bound ${\tau(n) \ll n^{O(1/\log \log n)}}$ and the trivial bound ${\tau(n) \geq 1}$, giving the bounds

$\displaystyle x \ll \sum_{n \leq x} \tau^m(P(n)) \ll x^{1 + O(\frac{1}{\log \log x})}$

for any fixed ${m \geq 1}$.

The lower bound in (6) is easy, since one can simply lower the level in (5) to obtain the lower bound

$\displaystyle \tau(n) \geq \sum_{d \leq n^\theta: d|n} 1$

for any ${\theta>0}$, and the preceding methods then easily allow one to obtain the lower bound by taking ${\theta}$ small enough (more precisely, if ${P}$ has degree ${d}$, one should take ${\theta}$ equal to ${1/d}$ or less). The upper bounds in (6) and (7) are more difficult. Ideally, if we could obtain upper bounds of the form

$\displaystyle \tau(n) \ll \sum_{d \leq n^\theta: d|n} 1 \ \ \ \ \ (8)$

for any fixed ${\theta > 0}$, then the preceding methods would easily establish both results. Unfortunately, this bound can fail, as illustrated by the following example. Suppose that ${n}$ is the product of ${k}$ distinct primes ${p_1 \ldots p_k}$, each of which is close to ${n^{1/k}}$. Then ${n}$ has ${2^k}$ divisors, with ${\binom{k}{j}}$ of them close to ${n^{j/k}}$ for each ${0 \leq j \leq k}$. One can think of (the logarithms of) these divisors as being distributed according to what is essentially a binomial distribution, thus a randomly selected divisor of ${n}$ has magnitude about ${n^{j/k}}$, where ${j}$ is a random variable which has the same distribution as the number of heads in ${k}$ independently tossed fair coins. By the law of large numbers, ${j}$ should concentrate near ${k/2}$ when ${k}$ is large, which implies that the majority of the divisors of ${n}$ will be close to ${n^{1/2}}$. Sending ${k \rightarrow \infty}$, one can show that the bound (8) fails whenever ${\theta < 1/2}$.
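This heuristic is easy to visualise numerically; the sketch below (with an artificial choice of primes of comparable size, purely for illustration) tabulates ${\log_n d}$ over all ${2^k}$ divisors ${d}$ of ${n}$ and exhibits the concentration near ${1/2}$.

```python
import itertools
import math

# n is a product of k distinct primes of comparable size (an artificial
# example chosen to illustrate the binomial heuristic).
primes = [101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157]
k = len(primes)
n = math.prod(primes)
log_n = math.log(n)

# Each divisor of n corresponds to a subset of the primes; record log_n(d).
exponents = [sum(math.log(p) for p in subset) / log_n
             for r in range(k + 1)
             for subset in itertools.combinations(primes, r)]

# A clear majority of the 2^k divisors have log_n(d) within 0.15 of 1/2.
near_half = sum(1 for e in exponents if abs(e - 0.5) < 0.15)
print(near_half / len(exponents))
```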

This however can be fixed in a number of ways. First of all, even when ${\theta<1/2}$, one can show weaker substitutes for (8). For instance, for any fixed ${\theta > 0}$ and ${m \geq 1}$ one can show a bound of the form

$\displaystyle \tau(n)^m \ll \sum_{d \leq n^\theta: d|n} \tau(d)^C \ \ \ \ \ (9)$

for some ${C}$ depending only on ${m,\theta}$. This nice elementary inequality (first observed by Landreau) already gives a quite short proof of van der Corput’s bound (7).

For Erdös’s upper bound (6), though, one cannot afford to lose these additional factors of ${\tau(d)}$, and one must argue more carefully. Here, the key observation is that the counterexample discussed earlier – when the natural number ${n}$ is the product of a large number of fairly small primes – is quite atypical; most numbers have at least one large prime factor. For instance, the number of natural numbers less than ${x}$ that contain a prime factor between ${x^{1/2}}$ and ${x}$ is equal to

$\displaystyle \sum_{x^{1/2} \leq p \leq x} (\frac{x}{p} + O(1)),$

which, thanks to Mertens’ theorem

$\displaystyle \sum_{p \leq x} \frac{1}{p} = \log\log x + M+o(1)$

for some absolute constant ${M}$, is comparable to ${x}$. In a similar spirit, one can show by similarly elementary means that the number of natural numbers less than ${x}$ that are ${x^{1/m}}$-smooth, in the sense that all of their prime factors are at most ${x^{1/m}}$, is only about ${m^{-cm} x}$ or so. Because of this, one can hope that the bound (8), while not true in full generality, will still be true for most natural numbers ${n}$, with some slightly weaker substitute available (such as (7)) for the exceptional numbers ${n}$. This turns out to be the case by an elementary but careful argument.

The Erdös argument is quite robust; for instance, the more general inequality

$\displaystyle x \log^{2^m-1} x \ll \sum_{n \leq x} \tau(P(n))^m \ll x \log^{2^m-1} x$

for fixed irreducible ${P}$ and ${m \geq 1}$, which improves van der Corput’s inequality (7), was shown by Delmer using the same methods. (A slight error in the original paper of Erdös was also corrected in this latter paper.) In a forthcoming revision to my paper on the Erdös-Straus conjecture, Christian Elsholtz and I have also applied this method to obtain bounds such as

$\displaystyle \sum_{a \leq A} \sum_{b \leq B} \tau(a^2 b + 1) \ll AB \log(A+B),$

which turn out to be enough to obtain the right asymptotics for the number of solutions to the equation ${\frac{4}{p}= \frac{1}{x}+\frac{1}{y}+\frac{1}{z}}$.

Below the fold I will provide some more details of the arguments of Landreau and of Erdös.