
The Riemann zeta function {\zeta(s)} is defined in the region {\hbox{Re}(s)>1} by the absolutely convergent series

\displaystyle  \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \ldots. \ \ \ \ \ (1)

Thus, for instance, it is known that {\zeta(2)=\pi^2/6}, so that

\displaystyle  \sum_{n=1}^\infty \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \ldots = \frac{\pi^2}{6}. \ \ \ \ \ (2)

For {\hbox{Re}(s) \leq 1}, the series on the right-hand side of (1) is no longer absolutely convergent, or even conditionally convergent. Nevertheless, the {\zeta} function can be extended to this region (with a pole at {s=1}) by analytic continuation. For instance, it can be shown that after analytic continuation, one has {\zeta(0) = -1/2}, {\zeta(-1) = -1/12}, and {\zeta(-2)=0}, and more generally

\displaystyle  \zeta(-s) = - \frac{B_{s+1}}{s+1} \ \ \ \ \ (3)

for {s=1,2,\ldots}, where {B_n} are the Bernoulli numbers. If one formally applies (1) at these values of {s}, one obtains the somewhat bizarre formulae

\displaystyle  \sum_{n=1}^\infty 1 = 1 + 1 + 1 + \ldots = -1/2 \ \ \ \ \ (4)

\displaystyle  \sum_{n=1}^\infty n = 1 + 2 + 3 + \ldots = -1/12 \ \ \ \ \ (5)

\displaystyle  \sum_{n=1}^\infty n^2 = 1 + 4 + 9 + \ldots = 0 \ \ \ \ \ (6)

and

\displaystyle  \sum_{n=1}^\infty n^s = 1 + 2^s + 3^s + \ldots = -\frac{B_{s+1}}{s+1}. \ \ \ \ \ (7)
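
Before turning to how one might make sense of (4), (5), (6), (7), one can at least confirm (3) numerically (together with the values {\zeta(0)=-1/2}, {\zeta(-1)=-1/12}, {\zeta(-2)=0} quoted above). Here is a minimal sketch in Python, assuming the mpmath library is available, which compares the analytically continued zeta values against the Bernoulli numbers:

    from mpmath import mp, zeta, bernoulli

    mp.dps = 25  # working precision (decimal digits)

    print(zeta(0))   # should be -0.5
    for s in range(1, 7):
        # compare zeta(-s) against -B_{s+1}/(s+1), as in (3)
        print(s, zeta(-s), -bernoulli(s + 1) / (s + 1))

Of course, this only confirms the right-hand sides of (4), (5), (6), (7) as zeta values; the series on the left-hand sides still demand interpretation.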

Clearly, these formulae do not make sense if one stays within the traditional way to evaluate infinite series, and so it seems that one is forced to use the somewhat unintuitive analytic continuation interpretation of such sums to make these formulae rigorous. But as it stands, the formulae look “wrong” for several reasons. Most obviously, the summands on the left are all positive, but the right-hand sides can be zero or negative. A little more subtly, the identities do not appear to be consistent with each other. For instance, if one adds (4) to (5), one obtains

\displaystyle  \sum_{n=1}^\infty (n+1) = 2 + 3 + 4 + \ldots = -7/12 \ \ \ \ \ (8)

whereas if one subtracts {1} from (5) one obtains instead

\displaystyle  \sum_{n=2}^\infty n = 0 + 2 + 3 + 4 + \ldots = -13/12 \ \ \ \ \ (9)

and the two equations seem inconsistent with each other.

However, it is possible to interpret (4), (5), (6) by purely real-variable methods, without recourse to complex analysis methods such as analytic continuation, thus giving an “elementary” interpretation of these sums that only requires undergraduate calculus; we will later also explain how this interpretation deals with the apparent inconsistencies pointed out above.

To see this, let us first consider a convergent sum such as (2). The classical interpretation of this formula is the assertion that the partial sums

\displaystyle \sum_{n=1}^N \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \ldots + \frac{1}{N^2}

converge to {\pi^2/6} as {N \rightarrow \infty}, or in other words that

\displaystyle  \sum_{n=1}^N \frac{1}{n^2} = \frac{\pi^2}{6} + o(1)

where {o(1)} denotes a quantity that goes to zero as {N \rightarrow \infty}. Actually, by using the integral test estimate

\displaystyle  \sum_{n=N+1}^\infty \frac{1}{n^2} \leq \int_N^\infty \frac{dx}{x^2} = \frac{1}{N}

we have the sharper result

\displaystyle  \sum_{n=1}^N \frac{1}{n^2} = \frac{\pi^2}{6} + O(\frac{1}{N}).

Thus we can view {\frac{\pi^2}{6}} as the leading coefficient of the asymptotic expansion of the partial sums of {\sum_{n=1}^\infty 1/n^2}.
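
This is easy to observe numerically; in the following minimal Python sketch, the quantity {N(\frac{\pi^2}{6} - \sum_{n=1}^N \frac{1}{n^2})} stays bounded (in fact it tends to {1}), consistent with the {O(1/N)} error term:

    import math

    for N in (10, 100, 1000, 10000):
        partial = sum(1.0 / n**2 for n in range(1, N + 1))
        # N times the error should stay bounded (it tends to 1)
        print(N, partial, N * (math.pi**2 / 6 - partial))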

One can then try to inspect the partial sums of the expressions in (4), (5), (6), but the coefficients of the resulting polynomials in {N} bear no obvious relationship to the right-hand sides:

\displaystyle  \sum_{n=1}^N 1 = N

\displaystyle  \sum_{n=1}^N n = \frac{1}{2} N^2 + \frac{1}{2} N

\displaystyle  \sum_{n=1}^N n^2 = \frac{1}{3} N^3 + \frac{1}{2} N^2 + \frac{1}{6} N.

For (7), the classical Faulhaber formula (or Bernoulli formula) gives

\displaystyle  \sum_{n=1}^N n^s = \frac{1}{s+1} \sum_{j=0}^s \binom{s+1}{j} B_j N^{s+1-j} \ \ \ \ \ (10)

\displaystyle  = \frac{1}{s+1} N^{s+1} + \frac{1}{2} N^s + \frac{s}{12} N^{s-1} + \ldots + B_s N

for {s \geq 2}, which has a vague resemblance to (7), but again the connection is not particularly clear.
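
For what it is worth, (10) is easy to verify by machine. Here is a minimal Python sketch using exact rational arithmetic; it adopts the {B_1 = +1/2} sign convention, which is the one implicit in (10) (note the {+\frac{1}{2} N^s} term), whereas with the other common convention {B_1 = -1/2} the same right-hand side would instead compute {\sum_{n=0}^{N-1} n^s}:

    from fractions import Fraction
    from math import comb

    def bernoulli_numbers(m):
        # B_0, ..., B_m via the usual recurrence, then switched to the
        # B_1 = +1/2 convention (the only place the two conventions differ)
        B = [Fraction(0)] * (m + 1)
        B[0] = Fraction(1)
        for n in range(1, m + 1):
            B[n] = -sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n)) / (n + 1)
        if m >= 1:
            B[1] = Fraction(1, 2)
        return B

    def faulhaber(N, s):
        # the right-hand side of (10)
        B = bernoulli_numbers(s)
        return sum(Fraction(comb(s + 1, j)) * B[j] * N ** (s + 1 - j)
                   for j in range(s + 1)) / (s + 1)

    for s in range(1, 7):
        for N in (3, 10, 25):
            assert faulhaber(N, s) == sum(n**s for n in range(1, N + 1))
    print("Faulhaber formula (10) agrees with brute force for small s and N")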

The problem here is the discrete nature of the partial sum

\displaystyle  \sum_{n=1}^N n^s = \sum_{n \leq N} n^s,

which (if {N} is viewed as a real number) has jump discontinuities at each positive integer value of {N}. These discontinuities yield various artefacts when trying to approximate this sum by a polynomial in {N}. (These artefacts also occur in (2), but happen in that case to be obscured in the error term {O(1/N)}; but for the divergent sums (4), (5), (6), (7), they are large enough to cause real trouble.)

However, these issues can be resolved by replacing the abruptly truncated partial sums {\sum_{n=1}^N n^s} with smoothed sums {\sum_{n=1}^\infty \eta(n/N) n^s}, where {\eta: {\bf R}^+ \rightarrow {\bf R}} is a cutoff function, or more precisely a compactly supported bounded function that equals {1} at {0}. The case when {\eta} is the indicator function {1_{[0,1]}} then corresponds to the traditional partial sums, with all the attendant discretisation artefacts; but if one chooses a smoother cutoff, then these artefacts begin to disappear (or at least become lower order), and the true asymptotic expansion becomes more manifest.
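
To fix ideas, here are the two kinds of cutoff side by side in a minimal Python sketch: the sharp cutoff {1_{[0,1]}}, and one arbitrary choice of smooth cutoff (any bump function equal to {1} at {0} would do; the particular formula below is purely illustrative):

    import math

    def eta_sharp(x):
        # the indicator cutoff 1_[0,1]: recovers the abruptly truncated partial sums
        return 1.0 if 0.0 <= x <= 1.0 else 0.0

    def eta_smooth(x):
        # a smooth, compactly supported cutoff with eta(0) = 1 and support in [0, 1)
        if x < 0.0 or x >= 1.0:
            return 0.0
        return math.exp(1.0 - 1.0 / (1.0 - x * x))

    for x in (0.0, 0.5, 0.9, 1.0, 1.5):
        print(x, eta_sharp(x), eta_smooth(x))
    # eta_smooth equals 1 at 0 and decays smoothly to 0 at the edge of its support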

Note that smoothing does not affect the asymptotic value of sums that were already absolutely convergent, thanks to the dominated convergence theorem. For instance, we have

\displaystyle  \sum_{n=1}^\infty \eta(n/N) \frac{1}{n^2} = \frac{\pi^2}{6} + o(1)

whenever {\eta} is a cutoff function (since {\eta(n/N) \rightarrow 1} pointwise as {N \rightarrow \infty} and is uniformly bounded). If {\eta} is equal to {1} on a neighbourhood of the origin, then the integral test argument recovers the {O(1/N)} decay rate:

\displaystyle  \sum_{n=1}^\infty \eta(n/N) \frac{1}{n^2} = \frac{\pi^2}{6} + O(\frac{1}{N}).

However, smoothing can greatly improve the convergence properties of a divergent sum. The simplest example is Grandi’s series

\displaystyle  \sum_{n=1}^\infty (-1)^{n-1} = 1 - 1 + 1 - \ldots.

The partial sums

\displaystyle  \sum_{n=1}^N (-1)^{n-1} = \frac{1}{2} + \frac{1}{2} (-1)^{N-1}

oscillate between {1} and {0}, and so this series is not conditionally convergent (and certainly not absolutely convergent). However, if one performs analytic continuation on the series

\displaystyle  \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^s} = 1 - \frac{1}{2^s} + \frac{1}{3^s} - \ldots

and sets {s = 0}, one obtains a formal value of {1/2} for this series. This value can also be obtained by smooth summation. Indeed, for any cutoff function {\eta}, we can regroup

\displaystyle  \sum_{n=1}^\infty (-1)^{n-1} \eta(n/N) =

\displaystyle  \frac{\eta(1/N)}{2} + \sum_{m=1}^\infty \frac{\eta((2m-1)/N) - 2\eta(2m/N) + \eta((2m+1)/N)}{2}.

If {\eta} is twice continuously differentiable (i.e. {\eta \in C^2}), then from Taylor expansion we see that the summand has size {O(1/N^2)}, and also (from the compact support of {\eta}) is only non-zero when {m=O(N)}. This leads to the asymptotic

\displaystyle  \sum_{n=1}^\infty (-1)^{n-1} \eta(n/N) = \frac{1}{2} + O( \frac{1}{N} )

and so we recover the value of {1/2} as the leading term of the asymptotic expansion.
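
This asymptotic is easy to see numerically; here is a minimal Python sketch (the particular smooth cutoff is again an arbitrary illustrative choice):

    import math

    def eta(x):
        # an arbitrary smooth cutoff with eta(0) = 1 and support in [0, 1)
        if x < 0.0 or x >= 1.0:
            return 0.0
        return math.exp(1.0 - 1.0 / (1.0 - x * x))

    for N in (10, 100, 1000, 10000):
        S = sum((-1) ** (n - 1) * eta(n / N) for n in range(1, N))
        # the deviation from 1/2 decays at least as fast as O(1/N)
        print(N, S, S - 0.5)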

Exercise 1 Show that if {\eta} is merely once continuously differentiable (i.e. {\eta \in C^1}), then we have a similar asymptotic, but with an error term of {o(1)} instead of {O(1/N)}. This is an instance of a more general principle that smoother cutoffs lead to better error terms, though the improvement sometimes stops after some degree of regularity.

Remark 1 The most famous instance of smoothed summation is Cesàro summation, which corresponds to the cutoff function {\eta(x) := (1-x)_+}. Unsurprisingly, when Cesàro summation is applied to Grandi’s series, one again recovers the value of {1/2}.
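
In the same numerical spirit, one can check that the smoothed sum with the cutoff {(1-x)_+} agrees (up to an {O(1/N)} adjustment) with the average of the first {N} partial sums, and that both tend to {1/2}; a minimal Python sketch:

    for N in (10, 100, 1000, 10000):
        # smoothed sum of Grandi's series with the Cesaro cutoff eta(x) = max(1 - x, 0)
        smoothed = sum((-1) ** (n - 1) * max(1.0 - n / N, 0.0) for n in range(1, N))
        # average of the first N partial sums
        partial, total = 0, 0.0
        for n in range(1, N + 1):
            partial += (-1) ** (n - 1)
            total += partial
        print(N, smoothed, total / N)   # both tend to 1/2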

If we now revisit the divergent series (4), (5), (6), (7) with smooth summation in mind, we finally begin to see the origin of the right-hand sides. Indeed, for any fixed smooth cutoff function {\eta}, we will shortly show that

\displaystyle  \sum_{n=1}^\infty \eta(n/N) = -\frac{1}{2} + C_{\eta,0} N + O(\frac{1}{N}) \ \ \ \ \ (11)

\displaystyle  \sum_{n=1}^\infty n \eta(n/N) = -\frac{1}{12} + C_{\eta,1} N^2 + O(\frac{1}{N}) \ \ \ \ \ (12)

\displaystyle  \sum_{n=1}^\infty n^2 \eta(n/N) = C_{\eta,2} N^3 + O(\frac{1}{N}) \ \ \ \ \ (13)

and more generally

\displaystyle  \sum_{n=1}^\infty n^s \eta(n/N) = -\frac{B_{s+1}}{s+1} + C_{\eta,s} N^{s+1} + O(\frac{1}{N}) \ \ \ \ \ (14)

for any fixed {s=1,2,3,\ldots} where {C_{\eta,s}} is the Archimedean factor

\displaystyle  C_{\eta,s} := \int_0^\infty x^s \eta(x)\ dx \ \ \ \ \ (15)

(which is also essentially the Mellin transform of {\eta}). Thus we see that the values (4), (5), (6), (7) obtained by analytic continuation are nothing more than the constant terms of the asymptotic expansion of the smoothed partial sums. This is not a coincidence; we will explain the equivalence of these two interpretations of such sums (in the model case when the analytic continuation has only finitely many poles and does not grow too fast at infinity) below the fold.
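
The asymptotics (11), (12), (13), (14) can also be watched numerically. The following minimal Python sketch (assuming the mpmath library; the bump function, precision, and ranges below are arbitrary illustrative choices) evaluates the Archimedean factor (15) by numerical quadrature, subtracts the divergent term {C_{\eta,s} N^{s+1}} from the smoothed sum, and compares what remains with {-\frac{B_{s+1}}{s+1}}:

    from mpmath import mp, mpf, exp, quad, bernoulli

    mp.dps = 40  # generous working precision, since C_{eta,s} N^{s+1} is large

    def eta(x):
        # an arbitrary smooth cutoff with eta(0) = 1 and support in [0, 1)
        x = mpf(x)
        if x < 0 or x >= 1:
            return mpf(0)
        t = 1 - x * x
        return exp(1 - 1 / t) if t > 0 else mpf(0)

    for s in (1, 2, 3):
        C = quad(lambda x: x**s * eta(x), [0, 1])   # the Archimedean factor (15)
        constant = -bernoulli(s + 1) / (s + 1)      # the constant term predicted by (14)
        for N in (100, 200, 400):
            S = sum(mpf(n)**s * eta(mpf(n) / N) for n in range(1, N))
            # the third column should approach the fourth as N grows
            print(s, N, S - C * mpf(N)**(s + 1), constant)

For {s=1} the surviving constant is {-\frac{1}{12}}, for {s=2} it is {0}, and for {s=3} it is {\frac{1}{120}}, matching (5), (6), (7).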

This interpretation clears up the apparent inconsistencies alluded to earlier. For instance, the sum {\sum_{n=1}^\infty n = 1 + 2 + 3 + \ldots} consists only of non-negative terms, as do its smoothed partial sums {\sum_{n=1}^\infty n \eta(n/N)} (if {\eta} is non-negative). Comparing this with (12), we see that the non-negativity forces the highest-order term {C_{\eta,1} N^2} to be non-negative (as indeed it is), but does not prohibit the lower-order constant term {-\frac{1}{12}} from being negative (which of course it is).

Similarly, if we add together (12) and (11) we obtain

\displaystyle  \sum_{n=1}^\infty (n+1) \eta(n/N) = -\frac{7}{12} + C_{\eta,1} N^2 + C_{\eta,0} N + O(\frac{1}{N}) \ \ \ \ \ (16)

while if we subtract {1} from (12) we obtain

\displaystyle  \sum_{n=2}^\infty n \eta(n/N) = -\frac{13}{12} + C_{\eta,1} N^2 + O(\frac{1}{N}). \ \ \ \ \ (17)

These two asymptotics are not inconsistent with each other; indeed, if we shift the index of summation in (17), we can write

\displaystyle  \sum_{n=2}^\infty n \eta(n/N) = \sum_{n=1}^\infty (n+1) \eta((n+1)/N) \ \ \ \ \ (18)

and so we now see that the discrepancy between the two sums in (8), (9) comes from the shift of the cutoff from {\eta(n/N)} to {\eta((n+1)/N)}, which is invisible in the formal expressions (8), (9) but becomes manifest in the smoothed sum formulation.
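
One can watch this happen numerically as well: in the following minimal Python sketch (the same kind of arbitrary smooth cutoff as before, with {C_{\eta,0}} estimated by a crude midpoint rule), the difference between the smoothed versions of (8) and (9) is not zero, but is instead close to {C_{\eta,0} N + \frac{1}{2}}, i.e. the gap between {-\frac{7}{12}} and {-\frac{13}{12}} together with the large cutoff-shift term {C_{\eta,0} N}:

    import math

    def eta(x):
        # an arbitrary smooth cutoff with eta(0) = 1 and support in [0, 1)
        if x < 0.0 or x >= 1.0:
            return 0.0
        return math.exp(1.0 - 1.0 / (1.0 - x * x))

    # crude midpoint-rule estimate of C_{eta,0} = \int_0^\infty eta(x) dx
    M = 10**5
    C0 = sum(eta((k + 0.5) / M) for k in range(M)) / M

    for N in (100, 1000, 10000):
        lhs16 = sum((n + 1) * eta(n / N) for n in range(1, N))   # the smoothed sum in (16)
        lhs17 = sum(n * eta(n / N) for n in range(2, N))         # the smoothed sum in (17)
        # should be close to 1/2 = (-7/12) - (-13/12); the subtracted term C_{eta,0} N
        # comes from the shifted cutoff and is invisible in the formal sums (8), (9)
        print(N, lhs16 - lhs17 - C0 * N)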

Exercise 2 By Taylor expanding {\eta((n+1)/N)} and using (11), (18), show that (16) and (17) are indeed consistent with each other, and in particular that one can deduce the latter from the former.


I’ve just uploaded to the arXiv the paper A remark on partial sums involving the Möbius function, submitted to Bull. Aust. Math. Soc..

The Möbius function {\mu(n)} is defined to equal {(-1)^k} when {n} is the product of {k} distinct primes, and equal to zero otherwise; it is closely connected to the distribution of the primes. In 1906, Landau observed that one could show using purely elementary means that the prime number theorem

\displaystyle  \sum_{p \leq x} 1 = (1+o(1)) \frac{x}{\log x} \ \ \ \ \ (1)

(where {o(1)} denotes a quantity that goes to zero as {x \rightarrow \infty}) was logically equivalent to the partial sum estimates

\displaystyle  \sum_{n \leq x} \mu(n) = o(x) \ \ \ \ \ (2)

and

\displaystyle  \sum_{n \leq x} \frac{\mu(n)}{n} = o(1); \ \ \ \ \ (3)

we give a sketch of the proof of these equivalences below the fold.
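
For readers who like to see such statements numerically, here is a minimal Python sketch (the sieve below is a standard linear sieve, and the cutoff {10^6} is an arbitrary choice) which computes {\mu} and evaluates the normalised partial sums appearing in (2) and (3):

    def mobius_sieve(X):
        # mu(n) for 1 <= n <= X via a standard linear (smallest-prime-factor) sieve
        mu = [1] * (X + 1)
        is_composite = [False] * (X + 1)
        primes = []
        for i in range(2, X + 1):
            if not is_composite[i]:
                primes.append(i)
                mu[i] = -1
            for p in primes:
                if i * p > X:
                    break
                is_composite[i * p] = True
                if i % p == 0:
                    mu[i * p] = 0   # i*p is divisible by p^2
                    break
                mu[i * p] = -mu[i]
        return mu

    X = 10**6
    mu = mobius_sieve(X)
    print(sum(mu[1:]) / X)                          # the ratio in (2): small compared to 1
    print(sum(mu[n] / n for n in range(1, X + 1)))  # the sum in (3): small compared to 1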

On the other hand, these three estimates all become easy to prove if the {o()} terms are replaced by their {O()} counterparts. For instance, by observing that the binomial coefficient {\binom{2n}{n}} is bounded by {4^n} on the one hand (by Pascal’s triangle or the binomial theorem), and is divisible by every prime between {n} and {2n} on the other hand, we conclude that

\displaystyle  \sum_{n < p \leq 2n} \log p \leq n \log 4

from which it is not difficult to show that

\displaystyle  \sum_{p \leq x} 1 = O( \frac{x}{\log x} ). \ \ \ \ \ (4)
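
Both halves of this argument are easy to check by machine; the following minimal Python sketch (assuming the sympy library for its prime utilities) verifies the bound on {\sum_{n < p \leq 2n} \log p} for a few values of {n}, and watches {(\sum_{p \leq x} 1) \log x / x} stay bounded, consistent with (4):

    import math
    from sympy import primerange, primepi

    for n in (10, 100, 1000, 10000):
        # sum of log p over primes n < p <= 2n; never exceeds n log 4
        chebyshev = sum(math.log(p) for p in primerange(n + 1, 2 * n + 1))
        print(n, chebyshev, n * math.log(4))

    for x in (10**3, 10**4, 10**5, 10**6):
        # pi(x) log(x) / x stays bounded, consistent with (4)
        print(x, primepi(x) * math.log(x) / x)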

Also, since {|\mu(n)| \leq 1}, we clearly have

\displaystyle  |\sum_{n \leq x} \mu(n)| \leq x.

Finally, one can also show that

\displaystyle  |\sum_{n \leq x} \frac{\mu(n)}{n}| \leq 1. \ \ \ \ \ (5)

Indeed, assuming without loss of generality that {x} is a positive integer, and summing the inversion formula {1_{n=1} = \sum_{d|n} \mu(d)} over all {n \leq x} one sees that

\displaystyle  1 = \sum_{d \leq x} \mu(d) \left\lfloor \frac{x}{d}\right \rfloor = \sum_{d \leq x} \mu(d) \frac{x}{d} - \sum_{d<x} \mu(d) \left \{ \frac{x}{d} \right\} \ \ \ \ \ (6)

and the claim follows by bounding {|\mu(d) \left \{ \frac{x}{d} \right\}|} by {1}: this gives {|\sum_{d \leq x} \mu(d) \frac{x}{d}| \leq 1 + (x-1) = x}, and dividing by {x} yields (5).
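
Both the identity (6) and the bound (5) are easy to test numerically; here is a minimal Python sketch, using a simple trial-division implementation of {\mu} and exact rational arithmetic for the sum in (5):

    from fractions import Fraction

    def mobius(n):
        # mu(n) by trial division
        if n == 1:
            return 1
        k, p = 0, 2
        while p * p <= n:
            if n % p == 0:
                n //= p
                if n % p == 0:
                    return 0        # repeated prime factor
                k += 1
            p += 1
        if n > 1:
            k += 1                  # one remaining prime factor
        return -1 if k % 2 else 1

    for x in (10, 100, 1000, 5000):
        mu = [mobius(n) for n in range(1, x + 1)]
        # the first equality in (6): summing 1_{n=1} = sum_{d|n} mu(d) over n <= x
        assert sum(mu[d - 1] * (x // d) for d in range(1, x + 1)) == 1
        # the bound (5)
        s = sum(Fraction(mu[n - 1], n) for n in range(1, x + 1))
        print(x, float(s), abs(s) <= 1)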

In this paper I extend these observations to more general multiplicative subsemigroups of the natural numbers. More precisely, if {P} is any set of primes (finite or infinite), I show that

\displaystyle  |\sum_{n \in \langle P \rangle: n \leq x} \frac{\mu(n)}{n}| \leq 1 \ \ \ \ \ (7)

and that

\displaystyle  \sum_{n \in \langle P \rangle: n \leq x} \frac{\mu(n)}{n} = \prod_{p \in P} (1-\frac{1}{p}) + o(1), \ \ \ \ \ (8)

where {\langle P \rangle} is the multiplicative semigroup generated by {P}, i.e. the set of natural numbers whose prime factors lie in {P}.

Actually the methods are completely elementary (the paper is just six pages long), and I can give the proof of (7) in full here. Again we may take {x} to be a positive integer. Clearly we may assume that

\displaystyle  \sum_{n \in \langle P \rangle: n \leq x} \frac{1}{n} > 1, \ \ \ \ \ (9)

as the claim is trivial otherwise.

If {P'} denotes the primes that are not in {P}, then Möbius inversion gives us

\displaystyle  1_{n \in \langle P' \rangle} = \sum_{d|n; d \in \langle P \rangle} \mu(d).

(To verify this identity, factor {n = n_P n_{P'}} with {n_P \in \langle P \rangle} and {n_{P'} \in \langle P' \rangle}; the right-hand side is then {\sum_{d | n_P} \mu(d) = 1_{n_P = 1}}.) Summing this identity over {1 \leq n \leq x} gives

\displaystyle  \sum_{n \in \langle P' \rangle: n \leq x} 1 = \sum_{d \in \langle P \rangle: d \leq x} \mu(d) \frac{x}{d} - \sum_{d \in \langle P \rangle: d \leq x} \mu(d) \left \{ \frac{x}{d} \right \}.

Since {x} is an integer, {\left \{ \frac{x}{d} \right \}} is a multiple of {\frac{1}{d}} lying in {[0,1)}, so we can bound {|\mu(d) \left \{ \frac{x}{d} \right \}| \leq 1 - \frac{1}{d}}, and so

\displaystyle  |\sum_{d \in \langle P \rangle: d \leq x} \mu(d) \frac{x}{d}| \leq \sum_{n \in \langle P' \rangle: n \leq x} 1 + \sum_{n \in \langle P \rangle: n \leq x} 1 - \sum_{n \in \langle P \rangle: n \leq x} \frac{1}{n}.

The claim now follows from (9): since {\langle P \rangle} and {\langle P' \rangle} overlap only at {1}, the first two sums on the right-hand side total at most {x+1}, so by (9) the right-hand side is less than {x}, and dividing by {x} gives (7).
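
As a quick numerical illustration of (7) and (8), here is a minimal Python sketch for two sample choices of {P} (the particular sets, and the helper functions below, are of course just illustrative):

    def mobius(n):
        # mu(n) by trial division
        if n == 1:
            return 1
        k, p = 0, 2
        while p * p <= n:
            if n % p == 0:
                n //= p
                if n % p == 0:
                    return 0
                k += 1
            p += 1
        if n > 1:
            k += 1
        return -1 if k % 2 else 1

    def semigroup(P, x):
        # the elements of <P> that are at most x
        elems = {1}
        for p in P:
            for m in sorted(elems):
                q = m * p
                while q <= x:
                    elems.add(q)
                    q *= p
        return sorted(elems)

    # a finite example: P = {2, 3, 5, 7}
    P = [2, 3, 5, 7]
    product = 1.0
    for p in P:
        product *= 1 - 1 / p
    for x in (10, 100, 1000, 10000):
        S = sum(mobius(n) / n for n in semigroup(P, x))
        print(x, S, product, abs(S) <= 1)   # (7) holds, and S approaches the product in (8)

    # an infinite example: P = all odd primes, so <P> is the odd numbers
    # (here the product in (8) is 0, so (8) predicts that the sum tends to zero)
    for x in (10**3, 10**4, 10**5):
        S = sum(mobius(n) / n for n in range(1, x + 1, 2))
        print(x, S, abs(S) <= 1)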

As special cases of (7) we see that

\displaystyle  |\sum_{d \leq x: d|m} \frac{\mu(d)}{d}| \leq 1

and

\displaystyle  |\sum_{n \leq x: (m,n)=1} \frac{\mu(n)}{n}| \leq 1

for all {m,x}. Since {\mu(mn) = \mu(m) \mu(n) 1_{(m,n)=1}}, we also have

\displaystyle  |\sum_{n \leq x} \frac{\mu(mn)}{n}| \leq 1.

One might hope that these inequalities (which gain a factor of {\log x} over the trivial bound) will be useful when performing effective sieve theory, or when making effective estimates on various sums involving the primes or arithmetic functions.

This inequality (7) is so simple to state and prove that I must think that it was known to, say, Landau or Chebyshev, but I can’t find any reference to it in the literature. [Update, Sep 4: I have learned that similar results have been obtained in a paper by Granville and Soundararajan, and have updated the paper appropriately.] The proof of (8) is a simple variant of that used to prove (7) but I will not detail it here.

Curiously, this is one place in number theory where the elementary methods seem superior to the analytic ones; there is a zeta function {\zeta_P(s) = \sum_{n \in \langle P \rangle} \frac{1}{n^s} = \prod_{p \in P} (1-\frac{1}{p^s})^{-1}} associated to this problem, but it need not have a meromorphic continuation beyond the region {\{ \hbox{Re}(s) > 1 \}}, and it turns out to be remarkably difficult to use this function to establish the above results. (I do have a proof of this form, which I in fact found before I stumbled on the elementary proof, but it is far longer and messier.)

