Kaisa Matomäki, Xuancheng Shao, Joni Teräväinen, and myself have just uploaded to the arXiv our preprint “Higher uniformity of arithmetic functions in short intervals I. All intervals“. This paper investigates the higher order (Gowers) uniformity of standard arithmetic functions in analytic number theory (and specifically, the Möbius function $\mu$, the von Mangoldt function $\Lambda$, and the generalised divisor functions $d_k$) in short intervals $(X, X+H]$, where $X$ is large and $H$ lies in the range $X^{\theta+\varepsilon} \le H \le X$ for a fixed constant $0 < \theta < 1$ (that one would like to be as small as possible). If we let $f$ denote one of the functions $\mu, \Lambda, d_k$, then there is extensive literature on the estimation of short sums

$\displaystyle \sum_{X < n \le X+H} f(n).$
Traditionally, asymptotics for such sums are expressed in terms of a “main term” of some arithmetic nature, plus an error term that is estimated in magnitude. For instance, a sum such as $\sum_{X < n \le X+H} f(n) e(-\alpha n)$ would be approximated in terms of a main term that vanishes (or is negligible) if $\alpha$ is “minor arc”, but would be expressible in terms of something like a Ramanujan sum if $\alpha$ was “major arc”, together with an error term. We found it convenient to cancel off such main terms by subtracting an approximant $f^\sharp$ from each of the arithmetic functions $f$ and then getting upper bounds on remainder correlations such as

$\displaystyle \Big| \sum_{X < n \le X+H} (f(n) - f^\sharp(n)) \overline{F}(g(n)\Gamma) \Big| \ \ \ \ \ (1)$

for nilsequences $n \mapsto F(g(n)\Gamma)$.
- For the Möbius function $\mu$, we simply set $\mu^\sharp = 0$, as per the Möbius pseudorandomness conjecture. (One could choose a more sophisticated approximant in the presence of a Siegel zero, as I did with Joni in this recent paper, but we do not do so here.)
- For the von Mangoldt function $\Lambda$, we eventually went with the Cramér-Granville approximant $\Lambda^\sharp(n) := \frac{P(R)}{\varphi(P(R))} 1_{(n, P(R))=1}$, where $P(R) := \prod_{p < R} p$ and $R := \exp(\log^{1/10} X)$. (A small numerical sketch of this approximant appears after this list.)
- For the divisor functions $d_k$, we used a somewhat complicated-looking approximant $d_k^\sharp(n) := \sum_{m \le X^{\eta}: m | n} P_m(\log n)$ for some explicit polynomials $P_m$, chosen so that $d_k^\sharp$ and $d_k$ have almost exactly the same sums along arithmetic progressions (see the paper for details).
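To get a feel for the von Mangoldt approximant, here is a minimal numerical sketch (my illustration, not code from the paper; the sizes of $X$, $H$ and the sifting level $R$ are toy values chosen for speed, rather than the $R = \exp(\log^{1/10} X)$ of the paper) checking that $\Lambda$ and $\Lambda^\sharp$ have comparable averages on a short interval:

```python
# Illustrative sketch: compare the mean of the von Mangoldt function Lambda(n)
# with the Cramer-Granville approximant
#     Lambda^#(n) = (P(R)/phi(P(R))) * 1_{gcd(n, P(R)) = 1},  P(R) = prod_{p < R} p,
# over a short interval (X, X+H].  Both means should be close to 1.
from math import gcd, log

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [p for p in range(2, limit + 1) if sieve[p]]

def von_mangoldt(n):
    """Lambda(n) = log p if n is a power of a prime p, and 0 otherwise."""
    if n < 2:
        return 0.0
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            return log(p) if m == 1 else 0.0
        p += 1
    return log(m)  # no factor below sqrt(m): m is prime

X, H, R = 10**6, 10**5, 20  # toy sizes (the paper takes R = exp(log^{1/10} X))
P, density = 1, 1.0         # density = phi(P(R))/P(R) = prod_{p<R} (1 - 1/p)
for p in primes_up_to(R - 1):
    P *= p
    density *= 1 - 1 / p

def lam_sharp(n):
    """Cramer-Granville approximant: constant P/phi(P) on n coprime to P, else 0."""
    return 1 / density if gcd(n, P) == 1 else 0.0

window = range(X + 1, X + H + 1)
print("mean of Lambda :", sum(von_mangoldt(n) for n in window) / H)
print("mean of Lambda#:", sum(lam_sharp(n) for n in window) / H)
```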
The objective is then to obtain bounds on sums such as (1) that improve upon the “trivial bound” that one can get with the triangle inequality and standard number theory bounds such as the Brun-Titchmarsh inequality. For $\mu$ and $\Lambda$, the Siegel-Walfisz theorem suggests that it is reasonable to expect error terms that have “strongly logarithmic savings” in the sense that they gain a factor of $O_A(\log^{-A} X)$ over the trivial bound for any $A > 0$; for $d_k$, the Dirichlet hyperbola method suggests instead that one has “power savings” in that one should gain a factor of $X^{-c}$ over the trivial bound for some $c > 0$. In the case of the Möbius function $\mu$, there is an additional trick (introduced by Matomäki and Teräväinen) that allows one to lower the exponent $\theta$ somewhat at the cost of only obtaining “weakly logarithmic savings” of shape $\log^{-c} X$ for some small $c > 0$.
Our main estimates on sums of the form (1) work in the following ranges:
- For $\theta = 5/8$, one can obtain strongly logarithmic savings on (1) for $f = \mu, \Lambda$, and power savings for $f = d_k$.
- For $\theta = 3/5$, one can obtain weakly logarithmic savings for $f = \mu$.
- For $\theta = 5/9$, one can obtain power savings for $f = d_3$.
- For $\theta = 1/3$, one can obtain power savings for $f = d_2$.
Conjecturally, one should be able to obtain power savings in all cases, and lower $\theta$ down to zero, but the ranges of exponents and savings given here seem to be the limit of current methods unless one assumes additional hypotheses, such as GRH. The $\theta = 5/8$ result for correlation against Fourier phases $e(\alpha n)$ was established previously by Zhan, and the $\theta = 3/5$ result for such phases and $f = \mu$ was established previously by Matomäki and Teräväinen.
By combining these results with tools from additive combinatorics, one can obtain a number of applications:
- Direct insertion of our bounds in the recent work of Kanigowski, Lemańczyk, and Radziwiłł on the prime number theorem on dynamical systems that are analytic skew products gives some improvements in the exponents there.
- We can obtain a “short interval” version of a multiple ergodic theorem along primes established by Frantzikinakis-Host-Kra and Wooley-Ziegler, in which we average over intervals of the form $(X, X+H]$ rather than $[1, X]$.
- We can obtain a “short interval” version of the “linear equations in primes” asymptotics obtained by Ben Green, Tamar Ziegler, and myself in this sequence of papers, where the variables in these equations lie in short intervals $(X, X+H]$ rather than long intervals such as $[1, X]$.
We now briefly discuss some of the ingredients of proof of our main results. The first step is standard, using combinatorial decompositions (based on the Heath-Brown identity and (for the $\theta = 3/5$ result) the Ramaré identity) to decompose $\mu, \Lambda, d_k$ into more tractable sums of the following types:
- Type $I$ sums, which are basically of the form $\sum_{X < n \le X+H} (\alpha * 1)(n) \overline{F}(g(n)\Gamma)$ for some weights $\alpha = (\alpha_m)_{m \le A_I}$ of controlled size and some cutoff $A_I$ that is not too large;
- Type $II$ sums, which are basically of the form $\sum_{X < n \le X+H} (\alpha * \beta)(n) \overline{F}(g(n)\Gamma)$ for some weights $\alpha = (\alpha_m)_{A_{II}^- \le m \le A_{II}^+}$, $\beta$ of controlled size and some cutoffs $A_{II}^-, A_{II}^+$ that are not too close to $1$ or to $X$;
- Type $I_2$ sums, which are basically of the form $\sum_{X < n \le X+H} (\alpha * 1 * 1)(n) \overline{F}(g(n)\Gamma)$ for some weights $\alpha = (\alpha_m)_{m \le A_{I_2}}$ of controlled size and some cutoff $A_{I_2}$ that is not too large.
The precise ranges of the cutoffs depend on the choice of $f$; our methods fail once these cutoffs pass a certain threshold, and this is the reason for the exponents $5/8, 3/5, 5/9, 1/3$ being what they are in our main results.
The Type $I$ sums involving nilsequences can be treated by methods similar to those in this previous paper of Ben Green and myself; the main innovations are in the treatment of the Type $II$ and Type $I_2$ sums.
For the Type $II$ sums, one can split into the “abelian” case in which (after some Fourier decomposition) the nilsequence $F(g(n)\Gamma)$ is basically of the form $e(P(n))$ for some polynomial $P$, and the “non-abelian” case in which the underlying nilpotent group $G$ is non-abelian and $F$ exhibits non-trivial oscillation in a central direction. In the abelian case we can adapt arguments of Matomäki and Shao, which use Cauchy-Schwarz and the equidistribution properties of polynomials to obtain good bounds unless $e(P(n))$ is “major arc” in the sense that it resembles (or “pretends to be”) $\chi(n) n^{it}$ for some Dirichlet character $\chi$ and some frequency $t$, but in this case one can use classical multiplicative methods to control the correlation. It turns out that the non-abelian case can be treated similarly. After applying Cauchy-Schwarz, one ends up analyzing the equidistribution of the four-variable polynomial sequence

$\displaystyle (n, m, n', m') \mapsto (g(nm)\Gamma, g(n'm)\Gamma, g(nm')\Gamma, g(n'm')\Gamma).$
For the Type $I_2$ sum, a model sum to study is

$\displaystyle \sum_{X < n \le X+H} d_2(n) \overline{F}(g(n)\Gamma).$
In a sequel to this paper (currently in preparation), we will obtain analogous results for almost all intervals $(x, x+H]$ with $x$ in the range $[X, 2X]$, in which we will be able to lower $\theta$ all the way to $0$.
Kaisa Matomäki, Maksym Radziwiłł, and I have uploaded to the arXiv our paper “Correlations of the von Mangoldt and higher divisor functions II. Divisor correlations in short ranges“. This is a sequel of sorts to our previous paper on divisor correlations, though the proof techniques in this paper are rather different. As with the previous paper, our interest is in correlations such as

$\displaystyle \sum_{n \le X} d_k(n) d_l(n+h) \ \ \ \ \ (1)$

for medium-sized $h$ and large $X$, where $k, l \ge 2$ are natural numbers and $d_k(n) := \sum_{d_1 \cdots d_k = n} 1$ is the $k^{th}$ divisor function (actually our methods can also treat a generalisation in which $k$ or $l$ is non-integer, but for simplicity let us stick with the integer case for this discussion). Our methods also allow for one of the divisor function factors to be replaced with a von Mangoldt function, but (in contrast to the previous paper) we cannot treat the case when both factors are von Mangoldt.
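For concreteness, here is a small sketch (my own illustration, not from the paper) that tabulates the $k$-fold divisor function by repeatedly convolving the constant function $1$ with itself, and then evaluates one of the correlation sums under discussion; the sizes are purely illustrative:

```python
# Tabulate d_k(n) = #{(d_1,...,d_k): d_1*...*d_k = n} for all n <= N by
# Dirichlet-convolving the constant function 1 with itself k times, then
# form a divisor correlation sum.
N, k, h = 10**5, 3, 1

dk = [0] * (N + 1)
dk[1] = 1                      # Dirichlet identity: delta(n = 1)
for _ in range(k):             # after k convolutions with 1, dk[n] = d_k(n)
    conv = [0] * (N + 1)
    for d in range(1, N + 1):
        if dk[d]:
            for m in range(d, N + 1, d):
                conv[m] += dk[d]
    dk = conv

corr = sum(dk[n] * dk[n + h] for n in range(1, N - h + 1))
print(f"sum of d_{k}(n) d_{k}(n+{h}) over n <= {N - h}: {corr}")
```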
As discussed in this previous post, one heuristically expects an asymptotic of the form

$\displaystyle \sum_{n \le X} d_k(n) d_l(n+h) = P_{k,l,h}(\log X)\, X + O(X^{1/2+\varepsilon})$

for any fixed $\varepsilon > 0$, where $P_{k,l,h}$ is a certain explicit (but rather complicated) polynomial of degree $k+l-2$. Such asymptotics are known when $\max(k,l) \le 2$, but remain open for $\max(k,l) \ge 3$. In the previous paper, we were able to obtain a weaker bound of the form

$\displaystyle \sum_{n \le X} d_k(n) d_l(n+h) = P_{k,l,h}(\log X)\, X + O_A(X \log^{-A} X)$

for all but $O_A(H \log^{-A} X)$ of the shifts $h$ with $|h| \le H$, whenever the shift range $H$ lies between $X^{8/33+\varepsilon}$ and $X^{1-\varepsilon}$. But the methods become increasingly hard to use as the shift range $H$ gets smaller. In this paper, we use a rather different method to obtain the even weaker bound

$\displaystyle \sum_{n \le X} d_k(n) d_l(n+h) = (1+o(1)) P_{k,l,h}(\log X)\, X$

for all but $o(H)$ of the shifts $h$ with $|h| \le H$, where $H$ can now be as short as $\log^{10000 k \log k} X$. The constant $10000$ can be improved, but there are serious obstacles to using our method to go below $\log^{k \log k} X$ (as the exceptionally large values of $d_k$ then begin to dominate). This can be viewed as an analogue to our previous paper on correlations of bounded multiplicative functions on average, in which the functions $d_k, d_l$ are now unbounded, and indeed our proof strategy is based in large part on that paper (but with many significant new technical complications).
We now discuss some of the ingredients of the proof. Unsurprisingly, the first step is the circle method, expressing (1) in terms of exponential sums such as

$\displaystyle S(\alpha) := \sum_{n \le X} d_k(n) e(\alpha n).$

Actually, it is convenient to first prune $d_k$ slightly by zeroing out this function on “atypical” numbers $n$ that have an unusually small or large number of factors in a certain sense, but let us ignore this technicality for this discussion. The contribution of $S(\alpha)$ for “major arc” $\alpha$ can be treated by standard techniques (and is the source of the main term $P_{k,l,h}(\log X)\, X$); the main difficulty comes from treating the contribution of “minor arc” $\alpha$.
In our previous paper on bounded multiplicative functions, we used Plancherel’s theorem to estimate the global $L^2$ norm $\int_0^1 |S(\alpha)|^2\ d\alpha$, and then also used the Katai-Bourgain-Sarnak-Ziegler orthogonality criterion to control local $L^\infty$ norms $\sup_{\alpha \in I} |S(\alpha)|$, where $I$ was a minor arc interval of length about $1/X$, and these two estimates together were sufficient to get a good bound on correlations by an application of Hölder’s inequality. For $d_k$, it is more convenient to use Dirichlet series methods (and Ramaré-type factorisations of such Dirichlet series) to control local $L^2$ norms on minor arcs, in the spirit of the proof of the Matomäki-Radziwiłł theorem; a key point is to develop “log-free” mean value theorems for Dirichlet series associated to functions such as $d_k$, so as not to wipe out the (rather small) savings one will get over the trivial bound from this method. On the other hand, the global $L^2$ bound will definitely be unusable, because the $L^2$ sum $\sum_{n \le X} d_k(n)^2$ has too many unwanted factors of $\log X$.
Fortunately, we can substitute this global $L^2$ bound with a “large values” bound that controls expressions such as

$\displaystyle \sum_{i=1}^J \int_{I_i} |S(\alpha)|^2\ d\alpha$

for a moderate number of disjoint intervals $I_1, \dots, I_J$, with a bound that is slightly better (for $J$ a medium-sized power of $\log X$) than what one would have obtained by bounding each integral $\int_{I_i} |S(\alpha)|^2\ d\alpha$ separately. (One needs to save more than a constant factor for the argument to work; we end up saving a factor of about $\log^c X$ for some small $c > 0$.) This large values estimate is probably the most novel contribution of the paper. After taking the Fourier transform, matters basically reduce to getting a good estimate for

$\displaystyle \sum_{i=1}^J |S(\alpha_i)|^2,$
where $\alpha_i$ is the midpoint of $I_i$; thus we need some upper bound on the large local Fourier coefficients of $d_k$. These coefficients are difficult to calculate directly, but, in the spirit of a paper of Ben Green and myself, we can try to replace $d_k$ by a more tractable and “pseudorandom” majorant $\tilde d_k$ for which the local Fourier coefficients are computable (on average). After a standard duality argument, one ends up having to control expressions such as

$\displaystyle \Big| \sum_{n \le X} \tilde d_k(n) e((\alpha_i - \alpha_j) n) \Big|$

after various averaging in the parameters. These local Fourier coefficients of $\tilde d_k$ turn out to be small on average unless $\alpha_i - \alpha_j$ is “major arc”. One then is left with a mostly combinatorial problem of trying to bound how often this major arc scenario occurs. This is very close to a computation in the previously mentioned paper of Ben and myself; there is a technical wrinkle in that the frequencies $\alpha_i$ are not as well separated as they were in my paper with Ben, but it turns out that one can modify the arguments in that paper to still obtain a satisfactory estimate in this case (after first grouping nearby frequencies $\alpha_i$ together, and modifying the duality argument accordingly).
Let $d(n) := \sum_{d|n} 1$ be the divisor function. A classical application of the Dirichlet hyperbola method gives the asymptotic

$\displaystyle \sum_{n \le x} d(n) \sim x \log x,$

where $X \sim Y$ denotes the estimate $X = (1+o(1)) Y$ as $x \rightarrow \infty$. Much better error estimates are possible here, but we will not focus on the lower order terms in this discussion. For somewhat idiosyncratic reasons I will interpret this estimate (and the other analytic number theory estimates discussed here) through the probabilistic lens. Namely, if $\mathbf{n} = \mathbf{n}_x$ is a random number selected uniformly between $1$ and $x$, then the above estimate can be written as

$\displaystyle \mathbf{E} d(\mathbf{n}) \sim \log x, \ \ \ \ \ (1)$

that is to say the random variable $d(\mathbf{n})$ has mean approximately $\log x$. (But, somewhat paradoxically, this is not the median or mode behaviour of this random variable, which instead concentrates near $\log^{\log 2} x$, basically thanks to the Hardy-Ramanujan theorem.)
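Here is a quick numerical illustration of this mean/median discrepancy (my addition, with a small cutoff, so the agreement is only rough):

```python
# The mean of d(n) over n <= x is about log x (plus lower order terms), while
# the typical (median) value is far smaller, in the ballpark of log^{log 2} x.
from math import log
from statistics import median

x = 10**6
d = [0] * (x + 1)
for i in range(1, x + 1):          # divisor-counting sieve
    for m in range(i, x + 1, i):
        d[m] += 1

print("mean of d(n):  ", sum(d[1:]) / x, " vs log x =", log(x))
print("median of d(n):", median(d[1:]), " vs log^(log 2) x =", log(x) ** log(2))
```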
Now we turn to the pair correlations $\sum_{n \le x} d(n) d(n+h)$ for a fixed positive integer $h$. There is a classical computation of Ingham that shows that

$\displaystyle \sum_{n \le x} d(n) d(n+h) \sim \frac{6}{\pi^2} \sigma_{-1}(h)\, x \log^2 x, \ \ \ \ \ (2)$

where $\sigma_{-1}(h) := \sum_{d|h} \frac{1}{d}$. The error term in (2) has been refined by many subsequent authors, as has the uniformity of the estimates in the $h$ aspect, as these topics are related to other questions in analytic number theory, such as fourth moment estimates for the Riemann zeta function; but we will not consider these more subtle features of the estimate here. However, we will look at the next term in the asymptotic expansion for (2) below the fold.
Using our probabilistic lens, the estimate (2) can be written as

$\displaystyle \mathbf{E} d(\mathbf{n}) d(\mathbf{n}+h) \sim \frac{6}{\pi^2} \sigma_{-1}(h) \log^2 x. \ \ \ \ \ (3)$

From (1) (and the asymptotic negligibility of the shift by $h$) we see that the random variables $d(\mathbf{n})$ and $d(\mathbf{n}+h)$ both have a mean of $(1+o(1))\log x$, so the additional factor of $\frac{6}{\pi^2}\sigma_{-1}(h)$ represents some arithmetic coupling between the two random variables.
Ingham’s formula can be established in a number of ways. Firstly, one can expand out $d(n) = \sum_{d|n} 1$ and use the hyperbola method (splitting into the cases $d \le \sqrt{n}$ and $n/d \le \sqrt{n}$ and removing the overlap). If one does so, one soon arrives at the task of having to estimate sums of the form

$\displaystyle \sum_{n \le x: n \equiv h\ (q)} d(n)$

for various $q$. For $q$ much less than $x^{1/2}$ this can be achieved using a further application of the hyperbola method, but for $q$ comparable to $x^{1/2}$ things get a bit more complicated, necessitating the use of non-trivial estimates on Kloosterman sums in order to obtain satisfactory control on error terms. A more modern approach proceeds using automorphic form methods, as discussed in this previous post. A third approach, which unfortunately is only heuristic at the current level of technology, is to apply the Hardy-Littlewood circle method (discussed in this previous post) to express (2) in terms of exponential sums

$\displaystyle \sum_{n \le x} d(n) e(\alpha n)$

for various frequencies $\alpha$. The contribution of “major arc” $\alpha$ can be computed after a moderately lengthy calculation which yields the right-hand side of (2) (as well as the correct lower order terms that are currently being suppressed), but there does not appear to be an easy way to show directly that the “minor arc” contributions are of lower order, although the methods discussed previously do indirectly show that this is ultimately the case.
Each of the methods outlined above requires a fair amount of calculation, and it is not obvious while performing them that the factor $\frac{6}{\pi^2}\sigma_{-1}(h)$ will emerge at the end. One can at least explain the $\frac{6}{\pi^2}$ as a normalisation constant needed to balance the $\sigma_{-1}(h)$ factor (at a heuristic level, at least). To see this through our probabilistic lens, introduce an independent copy $\mathbf{n}'$ of $\mathbf{n}$, then

$\displaystyle \mathbf{E} d(\mathbf{n}) d(\mathbf{n}') \sim \log^2 x; \ \ \ \ \ (4)$

using symmetry to order $\mathbf{n} < \mathbf{n}'$ (discarding the diagonal case $\mathbf{n} = \mathbf{n}'$) and making the change of variables $\mathbf{h} := \mathbf{n}' - \mathbf{n}$, we see that (4) is heuristically consistent with (3) as long as the asymptotic mean of $\frac{6}{\pi^2}\sigma_{-1}(\mathbf{h})$ in $\mathbf{h}$ is equal to $1$. (This argument is not rigorous because there was an implicit interchange of limits present, but still gives a good heuristic “sanity check” of Ingham’s formula.) Indeed, if $\langle \sigma_{-1} \rangle$ denotes the asymptotic mean of $\sigma_{-1}(h)$ in $h$, then we have (heuristically at least)

$\displaystyle \langle \sigma_{-1} \rangle = \sum_{d=1}^\infty \frac{1}{d} \cdot \frac{1}{d} = \zeta(2) = \frac{\pi^2}{6}$

(since each $d$ divides a proportion $\frac{1}{d}$ of the shifts $h$, contributing $\frac{1}{d}$ each time), and we obtain the desired consistency after multiplying by $\frac{6}{\pi^2}$.
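This mean is easy to check numerically; the following short computation (my addition) rearranges $\sum_{h \le H} \sigma_{-1}(h)$ as a sum over $d$:

```python
# The mean of sigma_{-1}(h) = sum_{d|h} 1/d over 1 <= h <= H tends to
# zeta(2) = pi^2/6, since each d divides floor(H/d) of the shifts h.
from math import pi

H = 10**5
total = sum((H // d) / d for d in range(1, H + 1))  # = sum_{h<=H} sigma_{-1}(h)
print("mean of sigma_{-1}(h):", total / H, " vs pi^2/6 =", pi**2 / 6)
```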
This still however does not explain the presence of the $\sigma_{-1}(h)$ factor. Intuitively it is reasonable that if $h$ has many prime factors, and $\mathbf{n}$ has a lot of factors, then $\mathbf{n}+h$ will have slightly more factors than average, because any common factor to $h$ and $\mathbf{n}$ will automatically be acquired by $\mathbf{n}+h$. But how to quantify this effect?
One heuristic way to proceed is through analysis of local factors. Observe from the fundamental theorem of arithmetic that we can factor

$\displaystyle d(n) = \prod_p d_p(n),$

where the product is over all primes $p$, and $d_p(n)$ is the local version of $d(n)$ at $p$ (which in this case, is just one plus the $p$-valuation $v_p(n)$ of $n$: $d_p(n) := v_p(n) + 1$). Note that all but finitely many of the terms in this product will equal $1$, so the infinite product is well-defined. In a similar fashion, we can factor

$\displaystyle \sigma_{-1}(h) = \prod_p \sigma_{-1,p}(h),$

where $\sigma_{-1,p}(h) := \sum_{j=0}^{v_p(h)} \frac{1}{p^j}$ (or in terms of valuations, $\sigma_{-1,p}(h) = \frac{1 - p^{-v_p(h)-1}}{1 - p^{-1}}$). Heuristically, the Chinese remainder theorem suggests that the various factors $d_p(\mathbf{n})$ behave like independent random variables, and so the correlation between $d(\mathbf{n})$ and $d(\mathbf{n}+h)$ should approximately decouple into the product of correlations between the local factors $d_p(\mathbf{n})$ and $d_p(\mathbf{n}+h)$. And indeed we do have the following local version of Ingham’s asymptotics:
Proposition 1 (Local Ingham asymptotics) For fixed $p$ and integer $h$, we have

$\displaystyle \mathbf{E} d_p(\mathbf{n}) \sim \left(1 - \frac{1}{p}\right)^{-1} \ \ \ \ \ (5)$

and

$\displaystyle \mathbf{E} d_p(\mathbf{n}) d_p(\mathbf{n}+h) \sim \left(1 - \frac{1}{p}\right)^{-2} \left(1 - \frac{1}{p^2}\right) \sigma_{-1,p}(h). \ \ \ \ \ (6)$
From the Euler formula

$\displaystyle \prod_p \left(1 - \frac{1}{p^2}\right) = \frac{1}{\zeta(2)} = \frac{6}{\pi^2}$

we see that

$\displaystyle \prod_p \left(1 - \frac{1}{p^2}\right) \sigma_{-1,p}(h) = \frac{6}{\pi^2} \sigma_{-1}(h),$

and so one can “explain” the arithmetic factor $\frac{6}{\pi^2}\sigma_{-1}(h)$ in Ingham’s asymptotic as the product of the arithmetic factors $\left(1 - \frac{1}{p^2}\right)\sigma_{-1,p}(h)$ in the (much easier) local Ingham asymptotics. Unfortunately we have the usual “local-global” problem in that we do not know how to rigorously derive the global asymptotic from the local ones; this problem is essentially the same issue as the problem of controlling the minor arc contributions in the circle method, but phrased in “physical space” language rather than “frequency space”.
Remark 2 The relation between the local means $\left(1 - \frac{1}{p}\right)^{-1}$ and the global mean $\log x$ can also be seen heuristically through the application

$\displaystyle \prod_{p \le x^{1/e^\gamma}} \left(1 - \frac{1}{p}\right)^{-1} \sim \log x$

of Mertens’ theorem, where $1/e^\gamma$ is Pólya’s magic exponent, which serves as a useful heuristic limiting threshold in situations where the product of local factors is divergent.
Let us now prove this proposition. One could brute-force the computations by observing that for any fixed $j \ge 0$, the valuation $v_p(\mathbf{n})$ is equal to $j$ with probability $\left(1 - \frac{1}{p}\right)\frac{1}{p^j} + o(1)$, and with a little more effort one can also compute the joint distribution of $v_p(\mathbf{n})$ and $v_p(\mathbf{n}+h)$, at which point the proposition reduces to the calculation of various variants of the geometric series. I however find it cleaner to proceed in a more recursive fashion (similar to how one can prove the geometric series formula by induction); this will also make visible the vague intuition mentioned previously about how common factors of $h$ and $\mathbf{n}$ force $\mathbf{n}+h$ to have a factor also.
It is first convenient to get rid of error terms by observing that in the limit $x \rightarrow \infty$, the random variable $\mathbf{n} = \mathbf{n}_x$ converges vaguely to a uniform random variable $\mathbf{n}_\infty$ on the profinite integers $\hat{\mathbf{Z}}$, or more precisely that the pair $(d_p(\mathbf{n}_x), d_p(\mathbf{n}_x+h))$ converges vaguely to $(d_p(\mathbf{n}_\infty), d_p(\mathbf{n}_\infty+h))$. Because of this (and because of the easily verified uniform integrability properties of the $d_p(\mathbf{n}_x)$ and their powers), it suffices to establish the exact formulae

$\displaystyle \mathbf{E} d_p(\mathbf{n}_\infty) = \left(1 - \frac{1}{p}\right)^{-1}$

and

$\displaystyle \mathbf{E} d_p(\mathbf{n}_\infty) d_p(\mathbf{n}_\infty+h) = \left(1 - \frac{1}{p}\right)^{-2} \left(1 - \frac{1}{p^2}\right) \sigma_{-1,p}(h)$

in the profinite setting (this setting will make it easier to set up the recursion).
We begin with (5). Observe that $\mathbf{n}_\infty$ is coprime to $p$ with probability $1 - \frac{1}{p}$, in which case $d_p(\mathbf{n}_\infty)$ is equal to $1$. Conditioning to the complementary probability $\frac{1}{p}$ event that $\mathbf{n}_\infty$ is divisible by $p$, we can factor $\mathbf{n}_\infty = p \mathbf{m}_\infty$, where $\mathbf{m}_\infty$ is also uniformly distributed over the profinite integers, in which event we have $d_p(\mathbf{n}_\infty) = 1 + d_p(\mathbf{m}_\infty)$. We arrive at the identity

$\displaystyle \mathbf{E} d_p(\mathbf{n}_\infty) = \left(1 - \frac{1}{p}\right) + \frac{1}{p}\left(1 + \mathbf{E} d_p(\mathbf{m}_\infty)\right).$

As $\mathbf{n}_\infty$ and $\mathbf{m}_\infty$ have the same distribution, the quantities $\mathbf{E} d_p(\mathbf{n}_\infty)$ and $\mathbf{E} d_p(\mathbf{m}_\infty)$ are equal, and (5) follows by a brief amount of high-school algebra.
We use a similar method to treat (6). First treat the case when $h$ is coprime to $p$. Then we see that with probability $1 - \frac{2}{p}$, $\mathbf{n}_\infty$ and $\mathbf{n}_\infty+h$ are simultaneously coprime to $p$, in which case $d_p(\mathbf{n}_\infty) = d_p(\mathbf{n}_\infty+h) = 1$. Furthermore, with probability $\frac{1}{p}$, $\mathbf{n}_\infty$ is divisible by $p$ and $\mathbf{n}_\infty+h$ is not; in which case we can write $\mathbf{n}_\infty = p\mathbf{m}_\infty$ as before, with $d_p(\mathbf{n}_\infty) = 1 + d_p(\mathbf{m}_\infty)$ and $d_p(\mathbf{n}_\infty+h) = 1$. Finally, in the remaining event with probability $\frac{1}{p}$, $\mathbf{n}_\infty+h$ is divisible by $p$ and $\mathbf{n}_\infty$ is not; we can then write $\mathbf{n}_\infty+h = p\mathbf{m}_\infty$, so that $d_p(\mathbf{n}_\infty+h) = 1 + d_p(\mathbf{m}_\infty)$ and $d_p(\mathbf{n}_\infty) = 1$. Putting all this together, we obtain

$\displaystyle \mathbf{E} d_p(\mathbf{n}_\infty) d_p(\mathbf{n}_\infty+h) = \left(1 - \frac{2}{p}\right) + \frac{2}{p}\left(1 + \mathbf{E} d_p(\mathbf{m}_\infty)\right),$

and the claim (6) in this case follows from (5) and a brief computation (noting that $\sigma_{-1,p}(h) = 1$ in this case).
Now suppose that $h$ is divisible by $p$, thus $h = ph'$ for some integer $h'$. Then with probability $1 - \frac{1}{p}$, $\mathbf{n}_\infty$ and $\mathbf{n}_\infty+h$ are simultaneously coprime to $p$, in which case $d_p(\mathbf{n}_\infty) = d_p(\mathbf{n}_\infty+h) = 1$. In the remaining $\frac{1}{p}$ event, we can write $\mathbf{n}_\infty = p\mathbf{m}_\infty$ (so that $\mathbf{n}_\infty+h = p(\mathbf{m}_\infty+h')$), and then $d_p(\mathbf{n}_\infty) = 1 + d_p(\mathbf{m}_\infty)$ and $d_p(\mathbf{n}_\infty+h) = 1 + d_p(\mathbf{m}_\infty+h')$. Putting all this together we have

$\displaystyle \mathbf{E} d_p(\mathbf{n}_\infty) d_p(\mathbf{n}_\infty+h) = \left(1 - \frac{1}{p}\right) + \frac{1}{p}\, \mathbf{E} \left(1 + d_p(\mathbf{m}_\infty)\right)\left(1 + d_p(\mathbf{m}_\infty+h')\right),$

which by (5) (and replacing $h$ by $h'$) leads to the recursive relation

$\displaystyle \mathbf{E} d_p(\mathbf{n}_\infty) d_p(\mathbf{n}_\infty+h) = 1 + \frac{2}{p-1} + \frac{1}{p}\, \mathbf{E} d_p(\mathbf{n}_\infty) d_p(\mathbf{n}_\infty+h'),$

and (6) then follows by induction on the number of powers of $p$ dividing $h$.
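Since the proposition is ultimately a statement about uniform residues, it can also be brute-forced numerically; the following sketch (my addition) averages over a full period modulo $p^J$ and compares with the closed form in (6):

```python
# Check (6): average d_p(n) d_p(n+h) over n in Z/p^J and compare with
# (1 - 1/p)^{-2} (1 - 1/p^2) sigma_{-1,p}(h).  Valuations are capped at J,
# which introduces only a tiny truncation error.
def dp(n, p, J):
    """Local divisor factor d_p = v_p + 1 for a residue n mod p^J (capped at J)."""
    v = 0
    while v < J and n % p == 0:
        n //= p
        v += 1
    return v + 1

p, J = 3, 12
mod = p**J
for h in (1, 3, 9, 18):
    emp = sum(dp(n, p, J) * dp((n + h) % mod, p, J) for n in range(mod)) / mod
    v = dp(h, p, J) - 1                          # v_p(h)
    sigma_p = sum(p**-j for j in range(v + 1))   # sigma_{-1,p}(h)
    closed = (1 - 1 / p) ** -2 * (1 - p**-2) * sigma_p
    print(f"h = {h:2}: empirical {emp:.6f}, closed form {closed:.6f}")
```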
The estimate (2) of Ingham was refined by Estermann, who obtained the more accurate expansion

$\displaystyle \sum_{n \le x} d(n) d(n+h) = \frac{6}{\pi^2} \sigma_{-1}(h)\, x \log^2 x + a_1(h)\, x \log x + a_2(h)\, x + O(x^{11/12+o(1)})$

for certain complicated but explicit coefficients $a_1(h), a_2(h)$. For instance, $a_1(h)$ is given by an explicit formula involving $\sigma_{-1}(h)$, the Euler-Mascheroni constant $\gamma$, and some further divisor-type sums over the divisors of $h$; the formula for $a_2(h)$ is similar but even more complicated. The error term $O(x^{11/12+o(1)})$ was improved by Heath-Brown to $O(x^{5/6+o(1)})$; it is conjectured (for instance by Conrey and Gonek) that one in fact has square root cancellation $O(x^{1/2+o(1)})$ here, but this is well out of reach of current methods.

These lower order terms are traditionally computed either from a Dirichlet series approach (using Perron’s formula) or a circle method approach. It turns out that a refinement of the above heuristics can also predict these lower order terms, thus keeping the calculation purely in physical space as opposed to the “multiplicative frequency space” of the Dirichlet series approach, or the “additive frequency space” of the circle method, although the computations are arguably as messy as the latter computations for the purposes of working out the lower order terms. We illustrate this just for the $a_1(h)\, x \log x$ term below the fold.
In analytic number theory, an arithmetic function is simply a function $f: \mathbf{N} \rightarrow \mathbf{C}$ from the natural numbers to the real or complex numbers. (One occasionally also considers arithmetic functions taking values in more general rings than $\mathbf{R}$ or $\mathbf{C}$, as in this previous blog post, but we will restrict attention here to the classical situation of real or complex arithmetic functions.) Experience has shown that a particularly tractable and relevant class of arithmetic functions for analytic number theory are the multiplicative functions, which are arithmetic functions $f$ with the additional property that

$\displaystyle f(nm) = f(n) f(m) \ \ \ \ \ (1)$

whenever $n, m$ are coprime. (One also considers arithmetic functions, such as the logarithm function $L(n) := \log n$ or the von Mangoldt function $\Lambda$, that are not genuinely multiplicative, but interact closely with multiplicative functions, and can be viewed as “derived” versions of multiplicative functions; see this previous post.) A typical example of a multiplicative function is the divisor function

$\displaystyle d(n) := \sum_{d|n} 1 \ \ \ \ \ (2)$

that counts the number of divisors of a natural number $n$. (The divisor function $d$ is also denoted $\tau$ in the literature.) The study of asymptotic behaviour of multiplicative functions (and their relatives) is known as multiplicative number theory, and is a basic cornerstone of modern analytic number theory.
There are various approaches to multiplicative number theory, each of which focuses on different asymptotic statistics of arithmetic functions $f$. In elementary multiplicative number theory, which is the focus of this set of notes, particular emphasis is given to the following two statistics of a given arithmetic function $f$:

- The summatory functions

$\displaystyle \sum_{n \le x} f(n)$

of an arithmetic function $f$, as well as the associated natural density

$\displaystyle \lim_{x \rightarrow \infty} \frac{1}{x} \sum_{n \le x} f(n)$

(if it exists).

- The logarithmic sums

$\displaystyle \sum_{n \le x} \frac{f(n)}{n}$

of an arithmetic function $f$, as well as the associated logarithmic density

$\displaystyle \lim_{x \rightarrow \infty} \frac{1}{\log x} \sum_{n \le x} \frac{f(n)}{n}$

(if it exists).
Here, we are normalising the arithmetic function $f$ being studied to be of roughly unit size up to logarithms, obeying bounds such as $f(n) = O(1)$, $f(n) = O(\log^{O(1)} n)$, or at worst

$\displaystyle f(n) = O(n^{o(1)}). \ \ \ \ \ (3)$

A classical case of interest is when $f$ is an indicator function $f = 1_A$ of some set $A$ of natural numbers, in which case we also refer to the natural or logarithmic density of $f$ as the natural or logarithmic density of $A$ respectively. However, in analytic number theory it is usually more convenient to replace such indicator functions with other related functions that have better multiplicative properties. For instance, the indicator function $1_{\mathcal P}$ of the primes is often replaced with the von Mangoldt function $\Lambda$.
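As a concrete illustration of these two densities (my example, not from the notes), one can take $f$ to be the indicator of the squarefree numbers, whose natural and logarithmic densities both equal $6/\pi^2 = 0.6079\ldots$ (the logarithmic density converges noticeably more slowly at this cutoff):

```python
# Natural and logarithmic density of the squarefree numbers, both of which
# should approach 6/pi^2 as x -> infinity.
from math import log, pi

x = 10**6
squarefree = bytearray([1]) * (x + 1)
d = 2
while d * d <= x:                        # strike out multiples of each square
    for m in range(d * d, x + 1, d * d):
        squarefree[m] = 0
    d += 1

natural = sum(squarefree[1 : x + 1]) / x
logarithmic = sum(1 / n for n in range(1, x + 1) if squarefree[n]) / log(x)
print("natural density:    ", natural)
print("logarithmic density:", logarithmic)
print("6/pi^2:             ", 6 / pi**2)
```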
Typically, the logarithmic sums are relatively easy to control, but the summatory functions require more effort in order to obtain satisfactory estimates; see Exercise 6 below.
If an arithmetic function $f$ is multiplicative (or closely related to a multiplicative function), then there is an important further statistic on an arithmetic function $f$ beyond the summatory function and the logarithmic sum, namely the Dirichlet series

$\displaystyle {\mathcal D} f(s) := \sum_{n=1}^\infty \frac{f(n)}{n^s} \ \ \ \ \ (4)$

for various real or complex numbers $s$. Under the hypothesis (3), this series is absolutely convergent for real numbers $s > 1$, or more generally for complex numbers $s$ with $\mathrm{Re}(s) > 1$. As we will see below the fold, when $f$ is multiplicative then the Dirichlet series enjoys an important Euler product factorisation which has many consequences for analytic number theory.
In the elementary approach to multiplicative number theory presented in this set of notes, we consider Dirichlet series only for real numbers $s > 1$ (and focusing particularly on the asymptotic behaviour as $s \rightarrow 1^+$); in later notes we will focus instead on the important complex-analytic approach to multiplicative number theory, in which the Dirichlet series (4) play a central role, and are defined not only for complex numbers with large real part, but are often extended analytically or meromorphically to the rest of the complex plane as well.
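As a preview of the Euler product factorisation, here is a small numerical check (mine, not from the notes) for the divisor function, whose Dirichlet series is $\zeta(s)^2$; both truncations below should be close to $\zeta(2)^2 = 2.7058\ldots$:

```python
# Compare a truncated Dirichlet series sum_{n<=N} d(n)/n^s with the truncated
# Euler product prod_{p<=N} (1 - p^{-s})^{-2}, at s = 2.
N, s = 10**5, 2.0

d = [0] * (N + 1)
for i in range(1, N + 1):              # divisor-counting sieve
    for m in range(i, N + 1, i):
        d[m] += 1
series = sum(d[n] / n**s for n in range(1, N + 1))

sieve = bytearray([1]) * (N + 1)       # prime sieve for the Euler product
sieve[0:2] = b"\x00\x00"
for p in range(2, int(N**0.5) + 1):
    if sieve[p]:
        sieve[p * p :: p] = bytearray(len(range(p * p, N + 1, p)))
product = 1.0
for p in range(2, N + 1):
    if sieve[p]:
        product *= (1 - p**-s) ** -2

print("truncated Dirichlet series:", series)
print("truncated Euler product:   ", product)
```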
Remark 1 The elementary and complex-analytic approaches to multiplicative number theory are the two classical approaches to the subject. One could also consider a more “Fourier-analytic” approach, in which one studies convolution-type statistics such as

$\displaystyle \sum_n f(n)\, \psi\left(\frac{n}{x}\right)$

as $x \rightarrow \infty$ for various cutoff functions $\psi$, such as smooth, compactly supported functions. See this previous blog post for an instance of such an approach. Another related approach is the “pretentious” approach to multiplicative number theory currently being developed by Granville-Soundararajan and their collaborators. We will occasionally make reference to these more modern approaches in these notes, but will primarily focus on the classical approaches.
To reverse the process and derive control on summatory functions or logarithmic sums starting from control of Dirichlet series is trickier, and usually requires one to allow $s$ to be complex-valued rather than real-valued if one wants to obtain really accurate estimates; we will return to this point in subsequent notes. However, there is a cheap way to get upper bounds on such sums, known as Rankin’s trick, which we will discuss later in these notes.
The basic strategy of elementary multiplicative theory is to first gather useful estimates on the statistics of “smooth” or “non-oscillatory” functions, such as the constant function $n \mapsto 1$, the harmonic function $n \mapsto \frac{1}{n}$, or the logarithm function $n \mapsto \log n$; one also considers the statistics of periodic functions such as Dirichlet characters. These functions can be understood without any multiplicative number theory, using basic tools from real analysis such as the (quantitative version of the) integral test or summation by parts. Once one understands the statistics of these basic functions, one can then move on to statistics of more arithmetically interesting functions, such as the divisor function (2) or the von Mangoldt function $\Lambda$ that we will discuss below. A key tool to relate these functions to each other is that of Dirichlet convolution, which is an operation that interacts well with summatory functions, logarithmic sums, and particularly well with Dirichlet series.
This is only an introduction to elementary multiplicative number theory techniques. More in-depth treatments may be found in this text of Montgomery-Vaughan, or this text of Bateman-Diamond.
One of the basic problems in analytic number theory is to obtain bounds and asymptotics for sums of the form

$\displaystyle \sum_{n \le x} f(n)$

in the limit $x \rightarrow \infty$, where $n$ ranges over natural numbers less than $x$, and $f$ is some arithmetic function of number-theoretic interest. (It is also often convenient to replace this sharply truncated sum with a smoother sum such as $\sum_n f(n) \psi(n/x)$ for some cutoff function $\psi$, but we will not discuss this technicality here.) For instance, the prime number theorem is equivalent to the assertion

$\displaystyle \sum_{n \le x} \Lambda(n) = x + o(x),$

where $\Lambda$ is the von Mangoldt function, while the Riemann hypothesis is equivalent to the stronger assertion

$\displaystyle \sum_{n \le x} \Lambda(n) = x + O(x^{1/2+o(1)}).$
It is thus of interest to develop techniques to estimate such sums $\sum_{n \le x} f(n)$. Of course, the difficulty of this task depends on how “nice” the function $f$ is. The functions $f$ that come up in number theory lie on a broad spectrum of “niceness”, with some particularly nice functions being quite easy to sum, and some being insanely difficult.
At the easiest end of the spectrum are those functions $f$ that exhibit some sort of regularity or “smoothness”. Examples of smoothness include “Archimedean” smoothness, in which $f$ is the restriction of some smooth function $f: \mathbf{R} \rightarrow \mathbf{C}$ from the reals to the natural numbers, and the derivatives of $f$ are well controlled. A typical example is

$\displaystyle \sum_{n \le x} \log n.$

One can already get quite good bounds on this quantity by comparison with the integral $\int_1^x \log t\ dt$, namely

$\displaystyle \sum_{n \le x} \log n = x \log x - x + O(\log x),$

with sharper bounds available by using tools such as the Euler-Maclaurin formula (see this blog post). Exponentiating such asymptotics, incidentally, leads to one of the standard proofs of Stirling’s formula (as discussed in this blog post).
One can also consider “non-Archimedean” notions of smoothness, such as periodicity relative to a small period $q$. Indeed, if $f$ is periodic with period $q$ (and is thus essentially a function on the cyclic group ${\bf Z}/q{\bf Z}$), then one has the easy bound

$\displaystyle \sum_{n \le x} f(n) = \frac{x}{q} \sum_{a \in {\bf Z}/q{\bf Z}} f(a) + O\left( \sum_{a \in {\bf Z}/q{\bf Z}} |f(a)| \right).$

In particular, we have the fundamental estimate

$\displaystyle \sum_{n \le x: q | n} 1 = \frac{x}{q} + O(1). \ \ \ \ \ (1)$

This is a good estimate when $q$ is much smaller than $x$, but as $q$ approaches $x$ in magnitude, the error term $O(1)$ begins to overwhelm the main term $\frac{x}{q}$, and one needs much more delicate information on the fractional part of $\frac{x}{q}$ in order to obtain good estimates at this point.
One can also consider functions which combine “Archimedean” and “non-Archimedean” smoothness into an “adelic” smoothness. We will not define this term precisely here (though the concept of a Schwartz-Bruhat function is one way to capture this sort of concept), but a typical example might be

$\displaystyle f(n) = g(n) \log n,$

where $g$ is periodic with some small period $q$. By using techniques such as summation by parts, one can estimate such sums using the techniques used to estimate sums of periodic functions or functions with (Archimedean) smoothness.
Another class of functions that is reasonably well controlled are the multiplicative functions, in which $f(nm) = f(n) f(m)$ whenever $n, m$ are coprime. Here, one can use the powerful techniques of multiplicative number theory, for instance by working with the Dirichlet series

$\displaystyle \sum_{n=1}^\infty \frac{f(n)}{n^s},$

which are clearly related to the partial sums $\sum_{n \le x} f(n)$ (essentially via the Mellin transform, a cousin of the Fourier and Laplace transforms); for this post we ignore the (important) issue of how to make sense of this series when it is not absolutely convergent (but see this previous blog post for more discussion). A primary reason that this technique is effective is that the Dirichlet series of a multiplicative function factorises as an Euler product

$\displaystyle \sum_{n=1}^\infty \frac{f(n)}{n^s} = \prod_p \left( 1 + \frac{f(p)}{p^s} + \frac{f(p^2)}{p^{2s}} + \dots \right).$

One also obtains similar types of representations for functions that are not quite multiplicative, but are closely related to multiplicative functions, such as the von Mangoldt function $\Lambda$ (whose Dirichlet series $\sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)}$ is not given by an Euler product, but instead by the logarithmic derivative of an Euler product).
Moving another notch along the spectrum between well-controlled and ill-controlled functions, one can consider functions $f$ that are divisor sums such as

$\displaystyle f(n) = \sum_{d \le D: d | n} g(d)$

for some other arithmetic function $g$, and some level $D$. This is a linear combination of periodic functions $n \mapsto g(d) 1_{d | n}$ and is thus technically periodic in $n$ (with period equal to the least common multiple of all the numbers from $1$ to $D$), but in practice this period is far too large to be useful (except for extremely small levels $D$, e.g. $D = O(\log x)$). Nevertheless, we can still control the sum $\sum_{n \le x} f(n)$ simply by rearranging the summation:

$\displaystyle \sum_{n \le x} f(n) = \sum_{d \le D} g(d) \sum_{n \le x: d | n} 1,$

and thus by (1) one can bound this by the sum of a main term $x \sum_{d \le D} \frac{g(d)}{d}$ and an error term $O\left( \sum_{d \le D} |g(d)| \right)$. As long as the level $D$ is significantly less than $x$, one may expect the main term to dominate, and one can often estimate this term by a variety of techniques (for instance, if $g$ is multiplicative, then multiplicative number theory techniques are quite effective, as mentioned previously). Similarly for other slight variants of divisor sums, such as expressions of the form

$\displaystyle \sum_{d \le D: d | n} g(d) \log \frac{n}{d}$

or expressions of the form

$\displaystyle \sum_{d \le D} g(d) f_d(n),$

where each $f_d$ is a function that is periodic with period $d$.
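Here is a tiny sketch (my own) of the basic rearrangement just described, with the illustrative choice $g = \mu^2$ (the indicator of the squarefree numbers): the direct double sum and the rearranged sum agree exactly, and the main term $x \sum_{d \le D} g(d)/d$ captures the bulk:

```python
# For f(n) = sum_{d <= D, d | n} g(d), verify the rearrangement
#   sum_{n<=x} f(n) = sum_{d<=D} g(d) * floor(x/d)
# and compare with the main term x * sum_{d<=D} g(d)/d.
def g(d):
    """Example weight: 1 on squarefree d, 0 otherwise."""
    e = 2
    while e * e <= d:
        if d % (e * e) == 0:
            return 0
        e += 1
    return 1

x, D = 10**4, 10**2
direct = sum(sum(g(d) for d in range(1, D + 1) if n % d == 0) for n in range(1, x + 1))
rearranged = sum(g(d) * (x // d) for d in range(1, D + 1))
main = x * sum(g(d) / d for d in range(1, D + 1))
print("direct:", direct, " rearranged:", rearranged)        # identical
print("main term:", round(main, 1), " error:", round(rearranged - main, 1))
```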
One of the simplest examples of this comes when estimating the divisor function

$\displaystyle d(n) := \sum_{d | n} 1,$

which counts the number of divisors of $n$. This is a multiplicative function, and is therefore most efficiently estimated using the techniques of multiplicative number theory; but for reasons that will become clearer later, let us “forget” the multiplicative structure and estimate the above sum by more elementary methods. By applying the preceding method, we see that

$\displaystyle \sum_{n \le x} d(n) = \sum_{d \le x} \left( \frac{x}{d} + O(1) \right) = x \log x + O(x). \ \ \ \ \ (2)$

Here, we are (barely) able to keep the error term smaller than the main term; this is right at the edge of the divisor sum method, because the level $D$ in this case is equal to $x$. Unfortunately, at this high choice of level it is not always possible to keep the error term under control like this. For instance, if one wishes to use the standard divisor sum representation

$\displaystyle \Lambda(n) = \sum_{d | n} \mu(d) \log \frac{n}{d},$

where $\mu$ is the Möbius function, to compute $\sum_{n \le x} \Lambda(n)$, then one ends up looking at

$\displaystyle \sum_{n \le x} \Lambda(n) = \sum_{d \le x} \mu(d) \sum_{n \le x: d | n} \log \frac{n}{d}.$
From Dirichlet series methods, it is not difficult to establish the identities

$\displaystyle \sum_{n=1}^\infty \frac{\mu(n)}{n} = 0 \ \ \ \ \ (3)$

and

$\displaystyle \sum_{n=1}^\infty \frac{\mu(n) \log n}{n} = -1. \ \ \ \ \ (4)$

This suggests (but does not quite prove) that one has

$\displaystyle \sum_{d=1}^\infty \frac{\mu(d)}{d} \left( \log \frac{x}{d} - 1 \right) = 1$

in the sense of conditionally convergent series. Assuming one can justify this (which, ultimately, requires one to exclude zeroes of the Riemann zeta function on the line $\mathrm{Re}(s) = 1$, as discussed in this previous post), one is eventually left with the estimate $\sum_{n \le x} \Lambda(n) = x + O(x)$, which is useless as a lower bound (and recovers only the classical Chebyshev estimate $\sum_{n \le x} \Lambda(n) \ll x$ as the upper bound). The inefficiency here when compared to the situation with the divisor function $d$ can be attributed to the signed nature of the Möbius function $\mu$, which causes some cancellation in the divisor sum expansion that needs to be compensated for with improved estimates.
However, there are a number of tricks available to reduce the level of divisor sums. The simplest comes from exploiting the change of variables $d \mapsto \frac{n}{d}$, which can in principle reduce the level by a square root. For instance, when computing the divisor function $d(n)$, one can observe using this change of variables that every divisor of $n$ above $\sqrt{n}$ is paired with one below $\sqrt{n}$, and so we have

$\displaystyle d(n) = 2 \sum_{d \le \sqrt{n}: d | n} 1, \ \ \ \ \ (5)$

except when $n$ is a perfect square, in which case one must subtract one from the right-hand side. Using this reduced-level divisor sum representation, one can obtain an improvement to (2), namely

$\displaystyle \sum_{n \le x} d(n) = x \log x + (2\gamma - 1) x + O(\sqrt{x}).$

This type of argument is also known as the Dirichlet hyperbola method. A variant of this argument can also deduce the prime number theorem from (3), (4) (and with some additional effort, one can even drop the use of (4)); this is discussed at this previous blog post.
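The gain from the hyperbola method is easy to see numerically; the following check (my addition) uses the standard identity $\sum_{n \le x} d(n) = 2\sum_{d \le \sqrt{x}} \lfloor x/d \rfloor - \lfloor \sqrt{x} \rfloor^2$:

```python
# The error sum_{n<=x} d(n) - (x log x + (2 gamma - 1) x) should remain of
# size O(sqrt(x)), i.e. the last column should stay bounded.
from math import isqrt, log, sqrt

gamma = 0.5772156649015329  # Euler-Mascheroni constant

def sum_d(x):
    """sum_{n<=x} d(n) via the hyperbola identity."""
    r = isqrt(x)
    return 2 * sum(x // d for d in range(1, r + 1)) - r * r

for x in (10**4, 10**6, 10**8):
    err = sum_d(x) - (x * log(x) + (2 * gamma - 1) * x)
    print(f"x = {x:>10}: error = {err:12.1f}, error/sqrt(x) = {err / sqrt(x):6.3f}")
```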
Using this square root trick, one can now also control divisor sums such as

$\displaystyle \sum_{n \le x} d(n^2+1).$

(Note that $d(n^2+1)$ has no multiplicativity properties in $n$, and so multiplicative number theory techniques cannot be directly applied here.) The level of the divisor sum here is initially of order $x^2$, which is too large to be useful; but using the square root trick, we can expand this expression as

$\displaystyle 2 \sum_{n \le x} \sum_{d \le n: d | n^2+1} 1,$

which one can rewrite as

$\displaystyle 2 \sum_{d \le x} \sum_{d \le n \le x: n^2+1 \equiv 0\ (d)} 1.$

The constraint $n^2+1 \equiv 0\ (d)$ is periodic in $n$ with period $d$, so we can write this as

$\displaystyle 2 \sum_{d \le x} \left( \frac{x}{d} \rho(d) + O(\rho(d)) \right),$

where $\rho(d)$ is the number of solutions in ${\bf Z}/d{\bf Z}$ to the equation $n^2+1 \equiv 0\ (d)$, and so

$\displaystyle \sum_{n \le x} d(n^2+1) = 2x \sum_{d \le x} \frac{\rho(d)}{d} + O\left( \sum_{d \le x} \rho(d) \right).$
The function $\rho$ is multiplicative, and can be easily computed at primes $p$ and prime powers $p^j$ using tools such as quadratic reciprocity and Hensel’s lemma. For instance, by Fermat’s two-square theorem, $\rho(p)$ is equal to $2$ for $p \equiv 1\ (4)$ and $0$ for $p \equiv 3\ (4)$. From this and standard multiplicative number theory methods (e.g. by obtaining asymptotics on the Dirichlet series $\sum_{d=1}^\infty \frac{\rho(d)}{d^s}$), one eventually obtains the asymptotic

$\displaystyle \sum_{d \le x} \frac{\rho(d)}{d} \sim \frac{3}{2\pi} \log x,$

and also

$\displaystyle \sum_{d \le x} \rho(d) = O(x),$

and thus

$\displaystyle \sum_{n \le x} d(n^2+1) \sim \frac{3}{\pi} x \log x.$
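One can probe the $\frac{3}{\pi}$ constant empirically, though only roughly, since the convergence is logarithmic; here is a crude sketch (my addition, with a small cutoff for speed):

```python
# The ratio (sum_{n<=x} d(n^2+1)) / (x log x) should drift toward
# 3/pi = 0.9549... as x grows; lower-order terms make agreement rough here.
from math import log, pi

def num_divisors(n):
    """d(n) by paired trial division up to sqrt(n)."""
    count, t = 0, 1
    while t * t <= n:
        if n % t == 0:
            count += 1 if t * t == n else 2
        t += 1
    return count

x = 5000
total = sum(num_divisors(n * n + 1) for n in range(1, x + 1))
print("ratio:", total / (x * log(x)), " vs 3/pi =", 3 / pi)
```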
Similar arguments give asymptotics for $d$ on other quadratic polynomials; see for instance this paper of Hooley and these papers by McKee. Note that the irreducibility of the polynomial $n^2+1$ will be important. If one considers instead a sum involving a reducible polynomial, such as $\sum_{n \le x} d(n^2-1)$, then the analogous quantity $\rho(d)$ becomes significantly larger, leading to a larger growth rate (of order $x \log^2 x$ rather than $x \log x$) for the sum.
However, the square root trick is insufficient by itself to deal with higher order sums involving the divisor function, such as

$\displaystyle \sum_{n \le x} d(n^3+1);$

the level here is initially of order $x^3$, and the square root trick only lowers this to about $x^{3/2}$, creating an error term that overwhelms the main term. And indeed, the asymptotic for this sum has not yet been rigorously established (although if one heuristically drops the error terms, one can arrive at a reasonable conjecture for this asymptotic), although some results are known if one averages over additional parameters (see e.g. this paper of Greaves, or this paper of Matthiesen).
Nevertheless, there is an ingenious argument of Erdős that allows one to obtain good upper and lower bounds for these sorts of sums, in particular establishing the asymptotic

$\displaystyle \sum_{n \le x} d(P(n)) \asymp x \log x \ \ \ \ \ (6)$

for any fixed irreducible non-constant polynomial $P$ that maps ${\bf N}$ to ${\bf N}$ (with the implied constants depending of course on the choice of $P$). There is also the related moment bound

$\displaystyle \sum_{n \le x} d(P(n))^m \ll x \log^{O(1)} x \ \ \ \ \ (7)$

for any fixed polynomial $P$ (not necessarily irreducible) and any fixed $m \ge 1$, due to van der Corput; this bound is in fact used to dispose of some error terms in the proof of (6). These should be compared with what one can obtain from the divisor bound $d(n) \ll n^{o(1)}$ and the trivial bound $d(n) \ge 1$, giving the bounds

$\displaystyle x \ll \sum_{n \le x} d(P(n))^m \ll x^{1+o(1)}$

for any fixed $m \ge 1$.
The lower bound in (6) is easy, since one can simply lower the level in (5) to obtain the lower bound

$\displaystyle d(n) \ge \sum_{d \le n^\varepsilon: d | n} 1$

for any $0 < \varepsilon \le 1/2$, and the preceding methods then easily allow one to obtain the lower bound by taking $\varepsilon$ small enough (more precisely, if $P$ has degree $D$, one should take $\varepsilon$ equal to $1/D$ or less). The upper bounds in (6) and (7) are more difficult. Ideally, if we could obtain upper bounds of the form

$\displaystyle d(n) \ll_\varepsilon \sum_{d \le n^\varepsilon: d | n} 1 \ \ \ \ \ (8)$

for any fixed $\varepsilon > 0$, then the preceding methods would easily establish both results. Unfortunately, this bound can fail, as illustrated by the following example. Suppose that $n$ is the product of $k$ distinct primes $p_1 \dots p_k$, each of which is close to some large threshold $N$. Then $n$ has $2^k$ divisors, with $\binom{k}{j}$ of them close to $N^j$ for each $0 \le j \le k$. One can think of (the logarithms of) these divisors as being distributed according to what is essentially a Bernoulli distribution, thus a randomly selected divisor of $n$ has magnitude about $N^{\mathbf S}$, where $\mathbf S$ is a random variable which has the same distribution as the number of heads in $k$ independently tossed fair coins. By the law of large numbers, $\mathbf S$ should concentrate near $k/2$ when $k$ is large, which implies that the majority of the divisors of $n$ will be close to $N^{k/2} = n^{1/2}$. Sending $k \rightarrow \infty$, one can show that the bound (8) fails whenever $\varepsilon < 1/2$.
This however can be fixed in a number of ways. First of all, even when $\varepsilon < 1/2$, one can show weaker substitutes for (8). For instance, for any fixed $\varepsilon > 0$ and natural number $n$, one can show a bound of the form

$\displaystyle d(n) \ll_\varepsilon \sum_{d \le n^\varepsilon: d | n} d(d)^{C}$

for some $C$ depending only on $\varepsilon$. This nice elementary inequality (first observed by Landreau) already gives a quite short proof of van der Corput’s bound (7).
For Erdős’s upper bound (6), though, one cannot afford to lose these additional factors of $\log x$, and one must argue more carefully. Here, the key observation is that the counterexample discussed earlier – when the natural number $n$ is the product of a large number of fairly small primes – is quite atypical; most numbers have at least one large prime factor. For instance, the number of natural numbers less than $x$ that contain a prime factor between $\sqrt{x}$ and $x$ is equal to

$\displaystyle \sum_{\sqrt{x} \le p \le x} \left( \frac{x}{p} + O(1) \right),$

which, thanks to Mertens’ theorem

$\displaystyle \sum_{p \le x} \frac{1}{p} = \log\log x + M + o(1)$

for some absolute constant $M$, is comparable to $x$. In a similar spirit, one can show by similarly elementary means that the number of natural numbers $n$ less than $x$ that are $x^{1/m}$-smooth, in the sense that all prime factors are at most $x^{1/m}$, is only about $m^{-m} x$ or so. Because of this, one can hope that the bound (8), while not true in full generality, will still be true for most natural numbers $n$, with some slightly weaker substitute available (such as (7)) for the exceptional numbers $n$. This turns out to be the case by an elementary but careful argument.
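The scarcity of smooth numbers is also easy to see experimentally; here is a short count (my addition) using a largest-prime-factor sieve:

```python
# Proportion of n <= x all of whose prime factors are at most x^(1/m); this
# decays roughly like m^{-m} (more precisely, like the Dickman function).
x = 10**6
largest = list(range(x + 1))           # largest[n] -> largest prime factor of n
for p in range(2, x + 1):
    if largest[p] == p:                # p untouched by smaller primes: p is prime
        for m in range(p, x + 1, p):
            largest[m] = p

for m in (2, 3, 4):
    bound = round(x ** (1 / m))
    count = sum(1 for n in range(2, x + 1) if largest[n] <= bound)
    print(f"m = {m}: smooth proportion {count / x:.5f}   (m^-m = {m**-m:.5f})")
```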
The Erdős argument is quite robust; for instance, the more general inequality

$\displaystyle \sum_{n \le x} d(P(n))^m \asymp x \log^{2^m - 1} x$

for fixed irreducible $P$ and $m \ge 1$, which improves van der Corput’s inequality (7), was shown by Delmer using the same methods. (A slight error in the original paper of Erdős was also corrected in this latter paper.) In a forthcoming revision to my paper on the Erdős-Straus conjecture, Christian Elsholtz and I have also applied this method to obtain bounds such as

$\displaystyle \sum_{a \le A} \sum_{b \le B} d(4ab-1) \ll AB \log(A+B),$

which turn out to be enough to obtain the right asymptotics for the number of solutions to the equation $\frac{4}{p} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z}$.
Below the fold I will provide some more details of the arguments of Landreau and of Erdős.
Given a positive integer $n$, let $d(n)$ denote the number of divisors of $n$ (including $1$ and $n$), thus for instance $d(6)=4$, and more generally, if $n$ has a prime factorisation

$\displaystyle n = p_1^{a_1} \dots p_k^{a_k} \ \ \ \ \ (1)$

then (by the fundamental theorem of arithmetic)

$\displaystyle d(n) = (a_1+1) \dots (a_k+1). \ \ \ \ \ (2)$
Clearly, $d(n) \le n$. The divisor bound asserts that, as $n$ gets large, one can improve this trivial bound to

$\displaystyle d(n) \le C_\varepsilon n^\varepsilon \ \ \ \ \ (3)$

for any $\varepsilon > 0$, where $C_\varepsilon$ depends only on $\varepsilon$; equivalently, in asymptotic notation one has $d(n) = n^{o(1)}$. In fact one has a more precise bound

$\displaystyle d(n) \le \exp\left( O\left( \frac{\log n}{\log \log n} \right) \right) = n^{O(1/\log\log n)}. \ \ \ \ \ (4)$
The divisor bound is useful in many applications in number theory, harmonic analysis, and even PDE (on periodic domains); it asserts that for any large number $n$, only a “logarithmically small” set of numbers less than $n$ will actually divide $n$ exactly, even in the worst-case scenario when $n$ is smooth. (The average value of $d(n)$ is much smaller, being about $\log n$ on the average, as can be seen easily from the double counting identity

$\displaystyle \sum_{n \le N} d(n) = \sum_{m \le N} \left\lfloor \frac{N}{m} \right\rfloor,$

or from the heuristic that a randomly chosen number $m$ less than $n$ has a probability about $1/m$ of dividing $n$, and $\sum_{m < n} \frac{1}{m} \sim \log n$. However, (4) is the correct “worst case” bound, as I discuss below.)
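To contrast the average case with the worst case concretely, here is a short experiment (my addition) that tracks the record values of $d(n)$, the “highly composite” numbers; the final ratio stays bounded, in line with the shape of (4):

```python
# Average d(n) hovers near log n, while along record-setting n the quantity
# log_2 d(n) is comparable to log n / log log n, matching the shape of (4).
from math import log

N = 10**6
d = [0] * (N + 1)
for i in range(1, N + 1):              # divisor-counting sieve
    for m in range(i, N + 1, i):
        d[m] += 1
print(f"average d(n) for n <= {N}: {sum(d[1:]) / N:.2f}  (log N = {log(N):.2f})")

best, records = 0, []
for n in range(2, N + 1):
    if d[n] > best:
        best = d[n]
        records.append((n, best))
for n, dn in records[-5:]:
    ratio = log(dn, 2) / (log(n) / log(log(n)))
    print(f"record n = {n:>7}: d(n) = {dn:>4}, log_2 d(n) / (log n / log log n) = {ratio:.2f}")
```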
The divisor bound is elementary to prove (and not particularly difficult), and I was asked about it recently, so I thought I would provide the proof here, as it serves as a case study in how to establish worst-case estimates in elementary multiplicative number theory.
[Update, Sep 24: some applications added.]