Given a positive integer $n$, let $d(n)$ denote the number of divisors of $n$ (including 1 and $n$), thus for instance $d(6)=4$, and more generally, if $n$ has a prime factorisation

$$n = p_1^{a_1} \ldots p_k^{a_k} \quad (1)$$

then (by the fundamental theorem of arithmetic)

$$d(n) = (a_1+1) \ldots (a_k+1). \quad (2)$$
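As a quick numerical sanity check, one can compare formula (2) against a brute-force divisor count (a small Python illustration, not part of the argument; the function names are of course arbitrary):

```python
from collections import Counter

def prime_factorisation(n):
    """Return a Counter mapping each prime p_j to its exponent a_j."""
    exponents = Counter()
    p = 2
    while p * p <= n:
        while n % p == 0:
            exponents[p] += 1
            n //= p
        p += 1
    if n > 1:
        exponents[n] += 1  # whatever remains after trial division is prime
    return exponents

def d(n):
    """Number of divisors of n, via formula (2): the product of (a_j + 1)."""
    result = 1
    for a in prime_factorisation(n).values():
        result *= a + 1
    return result

def d_brute(n):
    """Number of divisors of n by direct enumeration."""
    return sum(1 for m in range(1, n + 1) if n % m == 0)

assert d(6) == d_brute(6) == 4
assert all(d(n) == d_brute(n) for n in range(1, 200))
```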
Clearly, $d(n) \leq n$. The divisor bound asserts that, as $n$ gets large, one can improve this trivial bound to

$$d(n) \leq C_\varepsilon n^\varepsilon \quad (3)$$

for any $\varepsilon > 0$, where $C_\varepsilon$ depends only on $\varepsilon$; equivalently, in asymptotic notation one has $d(n) = n^{o(1)}$. In fact one has a more precise bound

$$d(n) \leq \exp\left( O\left( \frac{\log n}{\log \log n} \right) \right). \quad (4)$$
The divisor bound is useful in many applications in number theory, harmonic analysis, and even PDE (on periodic domains); it asserts that for any large number $n$, only a “logarithmically small” set of numbers less than $n$ will actually divide $n$ exactly, even in the worst-case scenario when $n$ is smooth. (The average value of $d(n)$ is much smaller, being about $\log n$ on the average, as can be seen easily from the double counting identity

$$\sum_{m=1}^{n} d(m) = \# \{ (a,b) : ab \leq n \} = \sum_{a=1}^{n} \left\lfloor \frac{n}{a} \right\rfloor \approx n \log n,$$

or from the heuristic that a randomly chosen number $m$ less than $n$ has a probability about $1/m$ of dividing $n$, and $\sum_{m=1}^{n} \frac{1}{m} \approx \log n$. However, (4) is the correct “worst case” bound, as I discuss below.)
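The double counting identity is easy to verify numerically; the sketch below (my own illustration) tabulates $d(m)$ with a divisor sieve, checks that the two sides agree exactly, and confirms that the average is indeed about $\log n$:

```python
import math

N = 10_000
dcount = [0] * (N + 1)
for a in range(1, N + 1):          # each a <= N divides exactly floor(N/a) numbers m <= N
    for m in range(a, N + 1, a):
        dcount[m] += 1

lhs = sum(dcount[1:])              # sum of d(m) over m <= N
rhs = sum(N // a for a in range(1, N + 1))
assert lhs == rhs                  # the double counting identity, exactly

average = lhs / N
print(average, math.log(N))        # the average is close to log(N) ~ 9.21
```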
The divisor bound is elementary to prove (and not particularly difficult), and I was asked about it recently, so I thought I would provide the proof here, as it serves as a case study in how to establish worst-case estimates in elementary multiplicative number theory.
[Update, Sep 24: some applications added.]
— Proof of (3) —
Let’s first prove the weak form of the divisor bound (3), which is already good enough for many applications (because a loss of $n^\varepsilon$ is often negligible if, say, the final goal is to extract some polynomial factor in $n$ in one’s eventual estimates). By rearranging a bit, our task is to show that for any $\varepsilon > 0$, the expression

$$\frac{d(n)}{n^\varepsilon} \quad (5)$$

is bounded uniformly in $n$ by some constant $C_\varepsilon$ depending on $\varepsilon$. Using (1) and (2), we can express (5) as a product

$$\prod_{j=1}^{k} \frac{a_j+1}{p_j^{\varepsilon a_j}} \quad (6)$$

where each term involves a different prime $p_j$, and the exponents $a_j$ are at least 1. We have thus “localised” the problem to studying the effect of each individual prime independently. (We are able to do this because $d(n)$ is a multiplicative function.)
Let’s fix a prime $p_j$ and look at a single term $\frac{a_j+1}{p_j^{\varepsilon a_j}}$. The numerator is linear in $a_j$, while the denominator is exponential. Thus, as per Malthus, we expect the denominator to dominate, at least when $a_j$ is large. But, because of the $\varepsilon$ in the exponent, the numerator might be able to exceed the denominator when $a_j$ is small – but only if $p_j$ is also small.

Following these heuristics, we now divide into cases. Suppose that $p_j$ is large, say $p_j \geq 2^{1/\varepsilon}$. Then $p_j^{\varepsilon a_j} \geq 2^{a_j} \geq a_j + 1$ (by Taylor expansion), and so the contribution of the $j$-th term to the product (6) is at most 1. So all the large primes give a net contribution of at most 1 here.
What about the small primes, in which $p_j < 2^{1/\varepsilon}$? Well, by Malthus, we know that the sequence $\frac{a+1}{p_j^{\varepsilon a}}$ goes to zero as $a \to \infty$. Since convergent sequences are bounded, we therefore have some bound of the form $\frac{a_j+1}{p_j^{\varepsilon a_j}} \leq C_{p_j,\varepsilon}$ for some $C_{p_j,\varepsilon}$ depending only on $p_j$ and $\varepsilon$, but not on $a_j$. So, each small prime gives a bounded contribution to (6) (uniformly in $n$). But the number of small primes is itself bounded by $2^{1/\varepsilon}$ (uniformly in $n$). Thus the total product in (6) is also bounded uniformly in $n$, and the claim follows.
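For a concrete value of $\varepsilon$ one can actually carry out this argument numerically; the sketch below (an illustration of mine, not from the text) takes $\varepsilon = 1/4$, multiplies together the worst case of each small-prime term, and verifies the resulting uniform bound on $d(n)/n^{\varepsilon}$:

```python
eps = 0.25
small_primes = [2, 3, 5, 7, 11, 13]   # the primes below 2^(1/eps) = 16

# For each small prime, take the supremum of (a+1)/p^(eps*a) over a >= 1;
# the terms tend to 0 as a -> infinity, so a finite search suffices.
C_eps = 1.0
for p in small_primes:
    C_eps *= max((a + 1) / p ** (eps * a) for a in range(1, 200))

# Verify d(n) <= C_eps * n^eps for all n up to 5000, using a divisor sieve.
N = 5000
dcount = [0] * (N + 1)
for a in range(1, N + 1):
    for m in range(a, N + 1, a):
        dcount[m] += 1
assert all(dcount[n] <= C_eps * n ** eps for n in range(1, N + 1))
print(C_eps)   # about 8.45
```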
— Proof of (4) —
One can refine the above analysis to get a more explicit value of $C_\varepsilon$, which will let us get (4), as follows.
Again consider the product (6) for some $\varepsilon > 0$. As discussed previously, each prime $p_j$ larger than $2^{1/\varepsilon}$ gives a contribution of at most 1. What about the small primes? Here we can estimate the denominator from below by Taylor expansion:

$$p_j^{\varepsilon a_j} \geq 2^{\varepsilon a_j} = e^{\varepsilon a_j \log 2} \geq 1 + \varepsilon a_j \log 2$$

(the point here being that our bound is uniform in $j$). One can of course use undergraduate calculus to try to sharpen the bound here, but it turns out not to improve by too much, basically because the Taylor approximation $e^x \approx 1 + x$ is quite accurate when $x$ is small, which is the important case here.
Anyway, inserting this bound into (6), we see that (6) is in fact bounded by

$$\prod_{p_j < 2^{1/\varepsilon}} \frac{a_j+1}{1 + \varepsilon a_j \log 2}.$$

Now let’s be very crude and bound $1 + \varepsilon a_j \log 2$ from below by $\varepsilon a_j \log 2$, so that each factor is at most $\frac{a_j+1}{\varepsilon a_j \log 2} = O(1/\varepsilon)$ (since $a_j \geq 1$), and bound the number of primes less than $2^{1/\varepsilon}$ by $2^{1/\varepsilon}$. (One can of course be more efficient about this, but again it turns out not to improve the final bounds too much. A general principle is that when one is estimating an expression such as $A^B$, or more generally the product of $B$ terms, each of size about $A$, then it is far more important to get a good bound for $B$ than to get a good bound for $A$, except in those cases when $A$ is very close to 1.) We thus conclude that

$$(6) \leq O(1/\varepsilon)^{2^{1/\varepsilon}};$$
unwinding what this means for (5), we obtain

$$d(n) \leq O(1/\varepsilon)^{2^{1/\varepsilon}} n^\varepsilon$$

for all $\varepsilon > 0$ and all $n$. If we now set $\varepsilon := C / \log \log n$ for a sufficiently large $C$, then the second factor of the RHS dominates the first (as can be seen by taking logarithms), and the claim (4) follows.
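To get a feel for the strength of (4), one can test it on a deliberately smooth number; the example below is my own, with an arbitrary choice of constant $C = 2$ in the exponent (the true optimal constant is not computed here):

```python
import math

def d(n):
    """Divisor count via trial division, using formula (2)."""
    result, p = 1, 2
    while p * p <= n:
        a = 0
        while n % p == 0:
            a += 1
            n //= p
        result *= a + 1
        p += 1
    if n > 1:
        result *= 2   # a final prime factor contributes a factor of 2
    return result

# A very smooth n: a product of many small prime powers.
n = 2**5 * 3**3 * 5**2 * 7 * 11 * 13 * 17 * 19
bound = math.exp(2 * math.log(n) / math.log(math.log(n)))   # (4) with C = 2
print(d(n), bound)   # d(n) = 2304, comfortably below the bound
```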
The above argument also suggests the counterexample that will demonstrate that (4) is basically sharp. Pick a large threshold $N$, and let $n$ be the product of all the primes up to $N$. The prime number theorem tells us that $\log n = (1+o(1)) N$. On the other hand, the prime number theorem also tells us that the number of primes dividing $n$ is $(1+o(1)) \frac{N}{\log N}$, so by (2), $d(n) = 2^{(1+o(1)) N / \log N}$. Using these numbers we see that $d(n) = \exp\left( (\log 2 + o(1)) \frac{\log n}{\log \log n} \right)$, so (4) is tight up to constants. [If one does not care about the constants, then one does not need the full strength of the prime number theorem to show that (4) is sharp; the more elementary bounds of Chebyshev, which say that the number of primes up to $N$ is comparable to $N/\log N$ up to constants, would suffice.]
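One can watch this sharpness emerge numerically: take $n$ to be the product of the primes up to $N$ and compare $\log d(n)$ with $\log 2 \cdot \log n / \log \log n$ (a sketch of mine; the convergence is quite slow):

```python
import math

def primes_up_to(N):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (N + 1)
    sieve[0:2] = [False, False]
    for p in range(2, math.isqrt(N) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(N + 1) if sieve[p]]

N = 100_000
ps = primes_up_to(N)
log_n = sum(math.log(p) for p in ps)     # log of the primorial n = product of primes <= N
log_d = len(ps) * math.log(2)            # d(n) = 2^{#primes <= N}, by (2)
predicted = math.log(2) * log_n / math.log(log_n)
ratio = log_d / predicted
print(ratio)                             # tends to 1 as N grows (slowly)
```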
— Why is the divisor bound useful? —
One principal application of the divisor bound (and some generalisations of that bound) is to control the number of solutions to a Diophantine equation. For instance, (3) immediately implies that for any fixed positive $n$, the number of solutions to the equation

$$xy = n$$

with $x, y$ integer, is at most $n^{o(1)}$. (For $x$ and $y$ real, the number of solutions is of course infinite.) This can be leveraged to some other Diophantine equations by high-school algebra. For instance, thanks to the identity $(x-y)(x+y) = x^2 - y^2$, we conclude that the number of integer solutions to

$$x^2 - y^2 = n$$

is also at most $n^{o(1)}$; similarly, the identity $x^3 + y^3 = (x+y)(x^2 - xy + y^2)$ implies that the number of integer solutions to

$$x^3 + y^3 = n$$

is at most $n^{o(1)}$. (Note from Bezout’s theorem (or direct calculation) that $x+y$ and $x^2 - xy + y^2$ determine $x$ and $y$ up to at most a finite ambiguity.)
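The reduction from $x^2 - y^2 = n$ to the divisor bound is completely explicit; the sketch below (function names are mine) enumerates the divisor pairs in $(x-y)(x+y) = n$ and recovers every integer solution:

```python
def divisors(n):
    """All positive divisors of n, by trial division up to sqrt(n)."""
    divs = []
    a = 1
    while a * a <= n:
        if n % a == 0:
            divs.append(a)
            if a != n // a:
                divs.append(n // a)
        a += 1
    return sorted(divs)

def solutions_x2_minus_y2(n):
    """Integer solutions (x, y) of x^2 - y^2 = n via the factorisation (x-y)(x+y) = n."""
    sols = set()
    for u in divisors(n):
        for s, t in [(u, n // u), (-u, -(n // u))]:   # s = x - y, t = x + y
            if (s + t) % 2 == 0:                      # x and y must be integers
                sols.add(((s + t) // 2, (t - s) // 2))
    return sols

sols = solutions_x2_minus_y2(15)
assert all(x * x - y * y == 15 for x, y in sols)
print(sols)   # eight solutions, e.g. (4, 1) and (8, -7)
```

The number of solutions is visibly controlled by the number of divisor pairs, which is exactly where (3) enters.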
Now consider the number of solutions to the equation

$$x^2 + y^2 = n.$$

In this case, $x^2 + y^2$ does not split over the rationals ${\bf Q}$, and so one cannot directly exploit the divisor bound for the rational integers ${\bf Z}$. However, we can factor $x^2 + y^2 = (x+iy)(x-iy)$ over the Gaussian rationals ${\bf Q}(i)$. Happily, the Gaussian integers ${\bf Z}[i]$ enjoy essentially the same divisor bound as the rational integers ${\bf Z}$; the Gaussian integers have unique factorisation, but perhaps more importantly they only have a finite number of units (four, to be precise: $\pm 1$ and $\pm i$). Because of this, one can easily check that $x^2 + y^2 = n$ also has at most $n^{o(1)}$ solutions.
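For illustration, one can count representations as a sum of two squares directly and observe how small the count stays relative to $n$ (a brute-force sketch of mine, not an efficient Gaussian-integer computation):

```python
import math

def r2(n):
    """Number of integer solutions (x, y) to x^2 + y^2 = n."""
    count = 0
    for x in range(-math.isqrt(n), math.isqrt(n) + 1):
        y2 = n - x * x
        y = math.isqrt(y2)
        if y * y == y2:
            count += 1 if y == 0 else 2   # count both y and -y, unless y = 0
    return count

assert r2(25) == 12    # (0, +-5), (+-5, 0), (+-3, +-4), (+-4, +-3)
# Even for a highly divisible n the count grows far slower than n itself:
n = 5 * 13 * 17 * 25   # a product of primes congruent to 1 mod 4 (and a square)
print(n, r2(n))
```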
One can similarly exploit the divisor bound on other number fields; for instance the divisor bound for ${\bf Z}[\omega]$, where $\omega$ is a primitive cube root of unity, lets one count solutions to $x^2 + xy + y^2 = n$ or $x^3 + y^3 = n$. On the other hand, not all number fields have the divisor bound. For instance, ${\bf Z}[\sqrt{2}]$ has an infinite number of units, which means that the number of solutions to Pell’s equation

$$x^2 - 2y^2 = 1$$

is infinite.
Another application of the divisor bound comes up in sieve theory. Here, one is often dealing with functions of the form $\nu(n) = \sum_{d | n} \lambda_d$, where the sieve weights $\lambda_d$ typically have size $O(1)$, and the sum is over all $d$ that divide $n$. The divisor bound (3) then implies that the sieve function $\nu(n)$ also has size $n^{o(1)}$. This bound is too crude to deal with the most delicate components of a sieve theory estimate, but is often very useful for dealing with error terms (especially those which have gained a factor of $n^{-\varepsilon}$ relative to the main terms for some $\varepsilon > 0$).
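As a toy instance of such a sieve function, one can take Möbius weights $\lambda_d = \mu(d)$, for which $\nu(n) = \sum_{d|n} \mu(d)$ detects $n = 1$; the triangle inequality then bounds $|\nu(n)|$ by $d(n)$, which is where (3) enters (a small illustration of mine, not a serious sieve):

```python
def divisors(n):
    """All positive divisors of n."""
    divs = []
    a = 1
    while a * a <= n:
        if n % a == 0:
            divs.append(a)
            if a != n // a:
                divs.append(n // a)
        a += 1
    return divs

def mobius(d):
    """Mobius function mu(d): (-1)^k if d is a product of k distinct primes, else 0."""
    k, p = 0, 2
    while p * p <= d:
        if d % p == 0:
            d //= p
            if d % p == 0:
                return 0          # d has a square factor
            k += 1
        p += 1
    if d > 1:
        k += 1
    return (-1) ** k

def nu(n):
    """Sieve function nu(n) = sum of mu(d) over d dividing n."""
    return sum(mobius(d) for d in divisors(n))

assert nu(1) == 1
assert all(nu(n) == 0 for n in range(2, 500))                      # nu detects n = 1
assert all(abs(nu(n)) <= len(divisors(n)) for n in range(1, 500))  # |nu(n)| <= d(n)
```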