The Riemann zeta function {\zeta(s)} is defined in the region {\hbox{Re}(s)>1} by the absolutely convergent series

\displaystyle  \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \ldots. \ \ \ \ \ (1)

For instance, it is known that {\zeta(2)=\pi^2/6}, and thus

\displaystyle  \sum_{n=1}^\infty \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \ldots = \frac{\pi^2}{6}. \ \ \ \ \ (2)

For {\hbox{Re}(s) \leq 1}, the series on the right-hand side of (1) is no longer absolutely convergent, or even conditionally convergent. Nevertheless, the {\zeta} function can be extended to this region (with a pole at {s=1}) by analytic continuation. For instance, it can be shown that after analytic continuation, one has {\zeta(0) = -1/2}, {\zeta(-1) = -1/12}, and {\zeta(-2)=0}, and more generally

\displaystyle  \zeta(-s) = - \frac{B_{s+1}}{s+1} \ \ \ \ \ (3)

for {s=1,2,\ldots}, where {B_n} are the Bernoulli numbers. If one formally applies (1) at these values of {s}, one obtains the somewhat bizarre formulae

\displaystyle  \sum_{n=1}^\infty 1 = 1 + 1 + 1 + \ldots = -1/2 \ \ \ \ \ (4)

\displaystyle  \sum_{n=1}^\infty n = 1 + 2 + 3 + \ldots = -1/12 \ \ \ \ \ (5)

\displaystyle  \sum_{n=1}^\infty n^2 = 1 + 4 + 9 + \ldots = 0 \ \ \ \ \ (6)

and

\displaystyle  \sum_{n=1}^\infty n^s = 1 + 2^s + 3^s + \ldots = -\frac{B_{s+1}}{s+1}. \ \ \ \ \ (7)

Clearly, these formulae do not make sense if one stays within the traditional way of evaluating infinite series, and so it seems that one is forced to use the somewhat unintuitive analytic continuation interpretation of such sums to make these formulae rigorous. But as they stand, the formulae look “wrong” for several reasons. Most obviously, the summands on the left are all positive, but the right-hand sides can be zero or negative. A little more subtly, the identities do not appear to be consistent with each other. For instance, if one adds (4) to (5), one obtains

\displaystyle  \sum_{n=1}^\infty (n+1) = 2 + 3 + 4 + \ldots = -7/12 \ \ \ \ \ (8)

whereas if one subtracts {1} from (5) one obtains instead

\displaystyle  \sum_{n=2}^\infty n = 0 + 2 + 3 + 4 + \ldots = -13/12 \ \ \ \ \ (9)

and the two equations seem inconsistent with each other.

However, it is possible to interpret (4), (5), (6) by purely real-variable methods, without recourse to complex analysis methods such as analytic continuation, thus giving an “elementary” interpretation of these sums that only requires undergraduate calculus; we will later also explain how this interpretation deals with the apparent inconsistencies pointed out above.

To see this, let us first consider a convergent sum such as (2). The classical interpretation of this formula is the assertion that the partial sums

\displaystyle \sum_{n=1}^N \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \ldots + \frac{1}{N^2}

converge to {\pi^2/6} as {N \rightarrow \infty}, or in other words that

\displaystyle  \sum_{n=1}^N \frac{1}{n^2} = \frac{\pi^2}{6} + o(1)

where {o(1)} denotes a quantity that goes to zero as {N \rightarrow \infty}. Actually, by using the integral test estimate

\displaystyle  \sum_{n=N+1}^\infty \frac{1}{n^2} \leq \int_N^\infty \frac{dx}{x^2} = \frac{1}{N}

we have the sharper result

\displaystyle  \sum_{n=1}^N \frac{1}{n^2} = \frac{\pi^2}{6} + O(\frac{1}{N}).

Thus we can view {\frac{\pi^2}{6}} as the leading coefficient of the asymptotic expansion of the partial sums of {\sum_{n=1}^\infty 1/n^2}.
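As a quick numerical sanity check (the code and the helper `partial_sum` are ours, purely illustrative), the partial sums do approach {\pi^2/6} at the {O(1/N)} rate predicted by the integral test:

```python
import math

def partial_sum(N):
    """Sharply truncated partial sum sum_{n=1}^N 1/n^2."""
    return math.fsum(1.0 / (n * n) for n in range(1, N + 1))

for N in (10, 100, 1000):
    err = math.pi ** 2 / 6 - partial_sum(N)
    # the error is positive and of size comparable to 1/N
    print(N, err)
```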

One can then try to inspect the partial sums of the expressions in (4), (5), (6), but the coefficients bear no obvious relationship to the right-hand sides:

\displaystyle  \sum_{n=1}^N 1 = N

\displaystyle  \sum_{n=1}^N n = \frac{1}{2} N^2 + \frac{1}{2} N

\displaystyle  \sum_{n=1}^N n^2 = \frac{1}{3} N^3 + \frac{1}{2} N^2 + \frac{1}{6} N.
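These closed forms are easy to confirm by machine; the following check (ours, with denominators cleared so that everything stays in exact integer arithmetic) verifies them for small {N}:

```python
def power_sum(N, s):
    """Sharp partial sum sum_{n=1}^N n^s."""
    return sum(n ** s for n in range(1, N + 1))

for N in range(1, 100):
    assert power_sum(N, 0) == N
    assert 2 * power_sum(N, 1) == N * N + N
    assert 6 * power_sum(N, 2) == 2 * N ** 3 + 3 * N ** 2 + N
```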

For (7), the classical Faulhaber formula (or Bernoulli formula) gives

\displaystyle  \sum_{n=1}^N n^s = \frac{1}{s+1} \sum_{j=0}^s \binom{s+1}{j} B_j N^{s+1-j} \ \ \ \ \ (10)

\displaystyle  = \frac{1}{s+1} N^{s+1} + \frac{1}{2} N^s + \frac{s}{12} N^{s-1} + \ldots + B_s N

for {s \geq 2}, which has a vague resemblance to (7), but again the connection is not particularly clear.

The problem here is the discrete nature of the partial sum

\displaystyle  \sum_{n=1}^N n^s = \sum_{n \leq N} n^s,

which (if {N} is viewed as a real number) has jump discontinuities at each positive integer value of {N}. These discontinuities yield various artefacts when trying to approximate this sum by a polynomial in {N}. (These artefacts also occur in (2), but happen in that case to be obscured in the error term {O(1/N)}; but for the divergent sums (4), (5), (6), (7), they are large enough to cause real trouble.)

However, these issues can be resolved by replacing the abruptly truncated partial sums {\sum_{n=1}^N n^s} with smoothed sums {\sum_{n=1}^\infty \eta(n/N) n^s}, where {\eta: {\bf R}^+ \rightarrow {\bf R}} is a cutoff function, or more precisely a compactly supported bounded function that equals {1} at {0}. The case when {\eta} is the indicator function {1_{[0,1]}} then corresponds to the traditional partial sums, with all the attendant discretisation artefacts; but if one chooses a smoother cutoff, then these artefacts begin to disappear (or at least become lower order), and the true asymptotic expansion becomes more manifest.

Note that smoothing does not affect the asymptotic value of sums that were already absolutely convergent, thanks to the dominated convergence theorem. For instance, we have

\displaystyle  \sum_{n=1}^\infty \eta(n/N) \frac{1}{n^2} = \frac{\pi^2}{6} + o(1)

whenever {\eta} is a cutoff function (since {\eta(n/N) \rightarrow 1} pointwise as {N \rightarrow \infty} and is uniformly bounded). If {\eta} is equal to {1} on a neighbourhood of the origin, then the integral test argument recovers the {O(1/N)} decay rate:

\displaystyle  \sum_{n=1}^\infty \eta(n/N) \frac{1}{n^2} = \frac{\pi^2}{6} + O(\frac{1}{N}).
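To make this concrete, here is one admissible cutoff (a standard {C^\infty} bump-function transition; this particular {\eta} is our own choice, not canonical), together with a numerical check that smoothing leaves the convergent sum (2) essentially unchanged:

```python
import math

def eta(x):
    """A smooth cutoff: equals 1 on [0, 1/2], vanishes on [1, infinity),
    with a C^infinity bump-function transition in between."""
    if x <= 0.5:
        return 1.0
    if x >= 1.0:
        return 0.0
    t = 2.0 * (x - 0.5)  # rescale the transition region [1/2, 1] to (0, 1)
    a = math.exp(-1.0 / t)
    b = math.exp(-1.0 / (1.0 - t))
    return b / (a + b)

N = 1000
smoothed = math.fsum(eta(n / N) / (n * n) for n in range(1, N + 1))
print(smoothed, math.pi ** 2 / 6)  # agree up to O(1/N)
```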

However, smoothing can greatly improve the convergence properties of a divergent sum. The simplest example is Grandi’s series

\displaystyle  \sum_{n=1}^\infty (-1)^{n-1} = 1 - 1 + 1 - \ldots.

The partial sums

\displaystyle  \sum_{n=1}^N (-1)^{n-1} = \frac{1}{2} + \frac{1}{2} (-1)^{N-1}

oscillate between {1} and {0}, and so this series is not conditionally convergent (and certainly not absolutely convergent). However, if one performs analytic continuation on the series

\displaystyle  \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^s} = 1 - \frac{1}{2^s} + \frac{1}{3^s} - \ldots

and sets {s = 0}, one obtains a formal value of {1/2} for this series. This value can also be obtained by smooth summation. Indeed, for any cutoff function {\eta}, we can regroup

\displaystyle  \sum_{n=1}^\infty (-1)^{n-1} \eta(n/N) =

\displaystyle \frac{\eta(1/N)}{2} + \sum_{m=1}^\infty \frac{\eta((2m-1)/N) - 2\eta(2m/N) + \eta((2m+1)/N)}{2}.

If {\eta} is twice continuously differentiable (i.e. {\eta \in C^2}), then from Taylor expansion we see that the summand has size {O(1/N^2)}, and also (from the compact support of {\eta}) is only non-zero when {m=O(N)}. This leads to the asymptotic

\displaystyle  \sum_{n=1}^\infty (-1)^{n-1} \eta(n/N) = \frac{1}{2} + O( \frac{1}{N} )

and so we recover the value of {1/2} as the leading term of the asymptotic expansion.
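One can watch this happen numerically; with any reasonably smooth cutoff (below, the same kind of {C^\infty} bump as before, our own choice), the smoothed Grandi sums settle down to {1/2}:

```python
import math

def eta(x):
    """Smooth cutoff: 1 on [0, 1/2], 0 on [1, infinity), C^infinity in between."""
    if x <= 0.5:
        return 1.0
    if x >= 1.0:
        return 0.0
    t = 2.0 * (x - 0.5)
    a = math.exp(-1.0 / t)
    b = math.exp(-1.0 / (1.0 - t))
    return b / (a + b)

N = 2000
grandi = math.fsum((-1) ** (n - 1) * eta(n / N) for n in range(1, N + 1))
print(grandi)  # close to 1/2
```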

Exercise 1 Show that if {\eta} is merely once continuously differentiable (i.e. {\eta \in C^1}), then we have a similar asymptotic, but with an error term of {o(1)} instead of {O(1/N)}. This is an instance of a more general principle that smoother cutoffs lead to better error terms, though the improvement sometimes stops after some degree of regularity.

Remark 1 The most famous instance of smoothed summation is Cesàro summation, which corresponds to the cutoff function {\eta(x) := (1-x)_+}. Unsurprisingly, when Cesàro summation is applied to Grandi’s series, one again recovers the value of {1/2}.
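For the record, here is Cesàro summation of Grandi's series carried out directly with the cutoff {(1-x)_+} (our script; a pleasant exact cancellation makes the smoothed sum equal to precisely {1/2} whenever {N} is even):

```python
import math

N = 1000  # even N: the alternating sum with weights (1 - n/N)_+ is exactly 1/2
cesaro = math.fsum((-1) ** (n - 1) * max(1.0 - n / N, 0.0) for n in range(1, N + 1))
print(cesaro)
```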

If we now revisit the divergent series (4), (5), (6), (7) with smooth summation in mind, we finally begin to see the origin of the right-hand sides. Indeed, for any fixed smooth cutoff function {\eta}, we will shortly show that

\displaystyle  \sum_{n=1}^\infty \eta(n/N) = -\frac{1}{2} + C_{\eta,0} N + O(\frac{1}{N}) \ \ \ \ \ (11)

\displaystyle  \sum_{n=1}^\infty n \eta(n/N) = -\frac{1}{12} + C_{\eta,1} N^2 + O(\frac{1}{N}) \ \ \ \ \ (12)

\displaystyle  \sum_{n=1}^\infty n^2 \eta(n/N) = C_{\eta,2} N^3 + O(\frac{1}{N}) \ \ \ \ \ (13)

and more generally

\displaystyle  \sum_{n=1}^\infty n^s \eta(n/N) = -\frac{B_{s+1}}{s+1} + C_{\eta,s} N^{s+1} + O(\frac{1}{N}) \ \ \ \ \ (14)

for any fixed {s=1,2,3,\ldots} where {C_{\eta,s}} is the Archimedean factor

\displaystyle  C_{\eta,s} := \int_0^\infty x^s \eta(x)\ dx \ \ \ \ \ (15)

(which is also essentially the Mellin transform of {\eta}). Thus we see that the values (4), (5), (6), (7) obtained by analytic continuation are nothing more than the constant terms of the asymptotic expansion of the smoothed partial sums. This is not a coincidence; we will explain the equivalence of these two interpretations of such sums (in the model case when the analytic continuation has only finitely many poles and does not grow too fast at infinity) below the fold.
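Before proving these asymptotics, one can at least test them numerically. The script below (ours; the bump cutoff and the trapezoidal evaluation of the integral (15) are implementation choices) extracts the constant terms of (11), (12), (13) and finds {-1/2}, {-1/12}, {0} as claimed:

```python
import math

def eta(x):
    """Smooth cutoff: 1 on [0, 1/2], 0 on [1, infinity), C^infinity in between."""
    if x <= 0.5:
        return 1.0
    if x >= 1.0:
        return 0.0
    t = 2.0 * (x - 0.5)
    a = math.exp(-1.0 / t)
    b = math.exp(-1.0 / (1.0 - t))
    return b / (a + b)

def archimedean(s, M=100000):
    """C_{eta,s} = int_0^1 x^s eta(x) dx by the trapezoidal rule.
    (eta is flat near both endpoints, so the rule is very accurate here.)"""
    h = 1.0 / M
    interior = math.fsum((k * h) ** s * eta(k * h) for k in range(1, M))
    # x = 0 endpoint: contributes 0.5 * 0^s * eta(0); note 0.0 ** 0 == 1.0 in Python,
    # and the x = 1 endpoint contributes nothing since eta(1) = 0
    return h * (interior + 0.5 * (0.0 ** s))

N = 1000
results = {}
for s, expected in [(0, -1 / 2), (1, -1 / 12), (2, 0.0)]:
    S = math.fsum(n ** s * eta(n / N) for n in range(1, N + 1))
    results[s] = S - archimedean(s) * N ** (s + 1)
    print(s, results[s], "expected", expected)
```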

This interpretation clears up the apparent inconsistencies alluded to earlier. For instance, the sum {\sum_{n=1}^\infty n = 1 + 2 + 3 + \ldots} consists only of non-negative terms, as do its smoothed partial sums {\sum_{n=1}^\infty n \eta(n/N)} (if {\eta} is non-negative). Comparing this with (12), we see that this forces the highest-order term {C_{\eta,1} N^2} to be non-negative (as indeed it is), but does not prohibit the lower-order constant term {-\frac{1}{12}} from being negative (which of course it is).

Similarly, if we add together (12) and (11) we obtain

\displaystyle  \sum_{n=1}^\infty (n+1) \eta(n/N) = -\frac{7}{12} + C_{\eta,1} N^2 + C_{\eta,0} N + O(\frac{1}{N}) \ \ \ \ \ (16)

while if we subtract {1} from (12) we obtain

\displaystyle  \sum_{n=2}^\infty n \eta(n/N) = -\frac{13}{12} + C_{\eta,1} N^2 + O(\frac{1}{N}). \ \ \ \ \ (17)

These two asymptotics are not inconsistent with each other; indeed, if we shift the index of summation in (17), we can write

\displaystyle  \sum_{n=2}^\infty n \eta(n/N) = \sum_{n=1}^\infty (n+1) \eta((n+1)/N) \ \ \ \ \ (18)

and so we now see that the discrepancy between the two sums in (8), (9) comes from the shifting of the cutoff {\eta(n/N)}, which is invisible in the formal expressions in (8), (9) but becomes manifest in the smoothed sum formulation.

Exercise 2 By Taylor expanding {\eta((n+1)/N)} and using (11) and (18), show that (16) and (17) are indeed consistent with each other, and in particular one can deduce the latter from the former.

— 1. Smoothed asymptotics —

We now prove (11), (12), (13), (14). We will prove the first few asymptotics by ad hoc methods, but then switch to the systematic method of the Euler-Maclaurin formula to establish the general case.

For sake of argument we shall assume that the smooth cutoff {\eta: {\bf R}^+ \rightarrow {\bf R}} is supported in the interval {[0,1]} (the general case is similar, and can also be deduced from this case by redefining the {N} parameter). Thus the sum {\sum_{n=1}^\infty \eta(n/N) n^s} is now only non-trivial in the range {n \leq N}.

To establish (11), we shall exploit the trapezoidal rule. For any smooth function {f: {\bf R} \rightarrow {\bf R}} and any interval {[n,n+1]}, we see from Taylor expansion that

\displaystyle  f(n+\theta) = f(n) + \theta f'(n) + O( \|f\|_{\dot C^2} )

for any {0 \leq \theta \leq 1}, where {\|f\|_{\dot C^2} := \sup_{x \in {\bf R}} |f''(x)|}. In particular we have

\displaystyle  f(n+1) = f(n) + f'(n) + O( \|f\|_{\dot C^2} )

and

\displaystyle  \int_n^{n+1} f(x)\ dx = f(n) + \frac{1}{2} f'(n) + O( \|f\|_{\dot C^2} );

eliminating {f'(n)}, we conclude that

\displaystyle  \int_n^{n+1} f(x)\ dx = \frac{1}{2} f(n) + \frac{1}{2} f(n+1) + O( \|f\|_{\dot C^2} ).

Summing in {n}, we conclude the trapezoidal rule

\displaystyle  \int_0^N f(x)\ dx = \frac{1}{2} f(0) + f(1) + \ldots + f(N-1) + \frac{1}{2} f(N) + O( N \|f\|_{\dot C^2} ).

We apply this with {f(x) := \eta(x/N)}, which has a {\dot C^2} norm of {O(1/N^2)} from the chain rule, and conclude that

\displaystyle  \int_0^N \eta(x/N)\ dx = \frac{1}{2} + \sum_{n=1}^\infty \eta(n/N) + O( 1/N ).

But from (15) and a change of variables, the left-hand side is just {N C_{\eta,0}}. This gives (11).

The same argument does not quite work with (12); one would like to now set {f(x) := x \eta(x/N)}, but the {\dot C^2} norm is now too large ({O(1/N)} instead of {O(1/N^2)}). To get around this we have to refine the trapezoidal rule by performing the more precise Taylor expansion

\displaystyle  f(n+\theta) = f(n) + \theta f'(n) + \frac{1}{2} \theta^2 f''(n) + O( \|f\|_{\dot C^3} )

where {\|f\|_{\dot C^3} := \sup_{x \in {\bf R}} |f'''(x)|}. Now we have

\displaystyle  f(n+1) = f(n) + f'(n) + \frac{1}{2} f''(n) + O( \|f\|_{\dot C^3} )

and

\displaystyle  \int_n^{n+1} f(x)\ dx = f(n) + \frac{1}{2} f'(n) + \frac{1}{6} f''(n) + O( \|f\|_{\dot C^3} ).

We cannot simultaneously eliminate both {f'(n)} and {f''(n)}. However, using the additional Taylor expansion

\displaystyle  f'(n+1) = f'(n) + f''(n) + O( \|f\|_{\dot C^3} )

one obtains

\displaystyle  \int_n^{n+1} f(x)\ dx = \frac{1}{2} f(n) + \frac{1}{2} f(n+1) + \frac{1}{12} (f'(n) - f'(n+1)) + O( \|f\|_{\dot C^3} )

and thus on summing in {n}, and assuming that {f} vanishes to second order at {N}, one has (by telescoping series)

\displaystyle  \int_0^N f(x)\ dx = \frac{1}{2} f(0) + \frac{1}{12} f'(0) + \sum_{n=1}^N f(n) + O( N \|f\|_{\dot C^3} ).

We apply this with {f(x) := x \eta(x/N)}. After a few applications of the chain rule and product rule, we see that {\|f\|_{\dot C^3} = O(1/N^2)}; also, {f(0)=0}, {f'(0)=1}, and {\int_0^N f(x)\ dx = N^2 C_{\eta,1}}. This gives (12).

The proof of (13) is similar. With a fourth order Taylor expansion, the above arguments give

\displaystyle  f(n+1) = f(n) + f'(n) + \frac{1}{2} f''(n) + \frac{1}{6} f'''(n) + O( \|f\|_{\dot C^4} ),

\displaystyle  \int_n^{n+1} f(x)\ dx = f(n) + \frac{1}{2} f'(n) + \frac{1}{6} f''(n) + \frac{1}{24} f'''(n) + O( \|f\|_{\dot C^4} )

and

\displaystyle  f'(n+1) = f'(n) + f''(n) + \frac{1}{2} f'''(n) + O( \|f\|_{\dot C^4} ).

Here we have a minor miracle (equivalent to the vanishing of the third Bernoulli number {B_3}) that the {f'''} term is automatically eliminated when we eliminate the {f''} term, yielding

\displaystyle  \int_n^{n+1} f(x)\ dx = \frac{1}{2} f(n) + \frac{1}{2} f(n+1) + \frac{1}{12} (f'(n) - f'(n+1))

\displaystyle  + O( \|f\|_{\dot C^4} )

and thus

\displaystyle  \int_0^N f(x)\ dx = \frac{1}{2} f(0) + \frac{1}{12} f'(0) + \sum_{n=1}^N f(n) + O( N \|f\|_{\dot C^4} ).

With {f(x) := x^2 \eta(x/N)}, the left-hand side is {N^3 C_{\eta,2}}, the first two terms on the right-hand side vanish, and the {\dot C^4} norm is {O(1/N^2)}, giving (13).

Now we do the general case (14). We define the Bernoulli numbers {B_0, B_1, \ldots} recursively by the formula

\displaystyle  \sum_{k=0}^{s-1} \binom{s}{k} B_k = s \ \ \ \ \ (19)

for all {s =1,2,\ldots}, or equivalently

\displaystyle  B_{s-1} := 1 - \frac{s-1}{2} B_{s-2} - \frac{(s-1)(s-2)}{3!} B_{s-3} - \ldots - \frac{1}{s} B_0.

The first few values of {B_s} can then be computed:

\displaystyle  B_0=1; B_1=1/2; B_2=1/6; B_3=0; B_4=-1/30; \ldots.
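The recursion is easy to implement in exact rational arithmetic; the helper below (ours) reproduces the table above:

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """B_0, ..., B_m via the recursion (19), sum_{k=0}^{s-1} C(s,k) B_k = s.
    This is the convention of the text, with B_1 = +1/2."""
    B = [Fraction(1)]
    for s in range(2, m + 2):
        acc = sum(comb(s, k) * B[k] for k in range(s - 1))
        B.append((s - acc) / s)  # since C(s, s-1) = s
    return B

print(bernoulli(8))  # [1, 1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30]
```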

From (19) we see that

\displaystyle  \sum_{k=0}^\infty \frac{B_k}{k!} [ P^{(k)}(1) - P^{(k)}(0) ] = P'(1) \ \ \ \ \ (20)

for any polynomial {P} (with {P^{(k)}} being the {k}-fold derivative of {P}); indeed, (19) is precisely this identity with {P(x) := x^s}, and the general case then follows by linearity.

As (20) holds for all polynomials, it also holds for all formal power series (if we ignore convergence issues). If we then replace {P} by the formal power series

\displaystyle  P(x) = e^{tx} = \sum_{k=0}^\infty t^k \frac{x^k}{k!}

we conclude the formal power series (in {t}) identity

\displaystyle  \sum_{k=0}^\infty \frac{B_k}{k!} t^k (e^t-1) = t e^t

leading to the familiar generating function

\displaystyle  \sum_{k=0}^\infty \frac{B_k}{k!} t^k = \frac{t e^t}{e^t-1}

for the Bernoulli numbers.

If we apply (20) with {P} equal to the antiderivative of another polynomial {Q}, we conclude that

\displaystyle  \int_0^1 Q(x)\ dx + \frac{1}{2} (Q(1)-Q(0)) + \sum_{k=2}^\infty \frac{B_k}{k!} [Q^{(k-1)}(1) - Q^{(k-1)}(0)] = Q(1)

which we rearrange as the identity

\displaystyle  \int_0^1 Q(x)\ dx = \frac{1}{2}(Q(0)+Q(1)) - \sum_{k=2}^\infty \frac{B_k}{k!} [Q^{(k-1)}(1) - Q^{(k-1)}(0)]

which can be viewed as a precise version of the trapezoidal rule in the polynomial case. Note that if {Q} has degree {d}, only the summands with {2 \leq k \leq d} can be non-vanishing.
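Since everything here is exact, this polynomial trapezoidal rule can be machine-checked in rational arithmetic; by linearity it suffices to check the monomials {Q(x) = x^e}, which the script below (ours) does:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(m):
    """B_0, ..., B_m via recursion (19) (convention B_1 = +1/2)."""
    B = [Fraction(1)]
    for s in range(2, m + 2):
        acc = sum(comb(s, k) * B[k] for k in range(s - 1))
        B.append((s - acc) / s)
    return B

def check_monomial(e):
    """For Q(x) = x^e, compare int_0^1 Q with
    (Q(0)+Q(1))/2 - sum_{k=2}^{e} B_k/k! [Q^{(k-1)}(1) - Q^{(k-1)}(0)]."""
    B = bernoulli(e)
    lhs = Fraction(1, e + 1)
    rhs = Fraction(1, 2)  # (Q(0) + Q(1)) / 2 = (0 + 1) / 2
    for k in range(2, e + 1):
        # Q^{(k-1)}(1) = e!/(e-k+1)!, and Q^{(k-1)}(0) = 0 since k - 1 < e
        rhs -= B[k] * Fraction(factorial(e), factorial(k) * factorial(e - k + 1))
    return lhs == rhs

assert all(check_monomial(e) for e in range(1, 15))
print("polynomial trapezoidal rule verified for x^1, ..., x^14")
```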

Now let {f} be a smooth function. We have a Taylor expansion

\displaystyle  f(x) = Q(x) + O( \|f\|_{\dot C^{s+2}} )

for {0 \leq x \leq 1} and some polynomial {Q} of degree at most {s+1}; also

\displaystyle  f^{(k-1)}(x) = Q^{(k-1)}(x) + O( \|f\|_{\dot C^{s+2}} )

for {0 \leq x \leq 1} and {k \leq s+2}. We conclude that

\displaystyle  \int_0^1 f(x)\ dx = \frac{1}{2}(f(0)+f(1))

\displaystyle  - \sum_{k=2}^{s+1} \frac{B_k}{k!} [f^{(k-1)}(1) - f^{(k-1)}(0)]

\displaystyle  + O( \|f\|_{\dot C^{s+2}} ).

Translating this by an arbitrary integer {n} (which does not affect the {\dot C^{s+2}} norm), we obtain

\displaystyle  \int_n^{n+1} f(x)\ dx = \frac{1}{2}(f(n)+f(n+1)) - \sum_{k=2}^{s+1} \frac{B_k}{k!} [f^{(k-1)}(n+1) - f^{(k-1)}(n)] \ \ \ \ \ (21)

\displaystyle  + O( \|f\|_{\dot C^{s+2}} ).

Summing the telescoping series, and assuming that {f} vanishes to a sufficiently high order at {N}, we conclude the Euler-Maclaurin formula

\displaystyle  \int_0^N f(x)\ dx = \frac{1}{2} f(0) + \sum_{n=1}^N f(n) + \sum_{k=2}^{s+1} \frac{B_k}{k!} f^{(k-1)}(0) + O( N \|f\|_{\dot C^{s+2}} ). \ \ \ \ \ (22)

We apply this with {f(x) := x^s \eta(x/N)}. The left-hand side is {C_{\eta,s} N^{s+1}}. All the terms in the sum vanish except for the {k=s+1} term, which is {\frac{B_{s+1}}{s+1}}. Finally, from many applications of the product rule and chain rule (or by viewing {f(x) = N^s g(x/N)} where {g} is the smooth function {g(x) := x^s \eta(x)}) we see that {\|f\|_{\dot C^{s+2}} = O(1/N^2)}, and the claim (14) follows.

Remark 2 By using a higher regularity norm than the {\dot C^{s+2}} norm, we see that the error term {O(1/N)} can in fact be improved to {O(1/N^B)} for any fixed {B>0}, if {\eta} is sufficiently smooth.

Exercise 3 Use (21) to derive Faulhaber’s formula (10). Note how the presence of boundary terms at {N} causes the right-hand side of (10) to be quite different from the right-hand side of (14); thus we see how non-smooth partial summation creates artefacts that can completely obscure the smoothed asymptotics.

— 2. Connection with analytic continuation —

Now we connect the interpretation of divergent series as the constant term of smoothed partial sum asymptotics, with the more traditional interpretation via analytic continuation. For sake of concreteness we shall just discuss the situation with the Riemann zeta function series {\sum_{n=1}^\infty \frac{1}{n^s}}, though the connection extends to far more general series than just this one.

In the previous section, we have computed asymptotics for the partial sums

\displaystyle  \sum_{n=1}^\infty \frac{1}{n^s} \eta(n/N)

when {s} is a negative integer. A key point (which was somewhat glossed over in the above analysis) was that the function {x^{-s} \eta(x)} was smooth, even at the origin; this was implicitly used to bound various {\dot C^k} norms in the error terms.

Now suppose that {s} is a complex number with {\hbox{Re}(s)<1}, which is not necessarily a negative integer. Then {x^{-s} \eta(x)} becomes singular at the origin, and the above asymptotic analysis is not directly applicable. However, if one instead considers the telescoped partial sum

\displaystyle  \sum_{n=1}^\infty \frac{1}{n^s} \eta(n/N) - \sum_{n=1}^\infty \frac{1}{n^s} \eta(2n/N),

with {\eta} equal to {1} near the origin, then by applying (22) to the function {f(x) := x^{-s} \eta(x/N) - x^{-s} \eta(2x/N)} (which vanishes near the origin, and is now smooth everywhere), we soon obtain the asymptotic

\displaystyle  \sum_{n=1}^\infty \frac{1}{n^s} \eta(n/N) - \sum_{n=1}^\infty \frac{1}{n^s} \eta(2n/N) = C_{\eta,-s} (N^{1-s} - (N/2)^{1-s}) + O(1/N). \ \ \ \ \ (23)

Applying this with {N} equal to a power of two and summing the telescoping series, one concludes that

\displaystyle  \sum_{n=1}^\infty \frac{1}{n^s} \eta(n/N) = \zeta(s) + C_{\eta,-s} N^{1-s} + O(1/N) \ \ \ \ \ (24)

for some complex number {\zeta(s)}, which is basically the sum of the various {O(1/N)} terms appearing in (23). By modifying the above arguments, it is not difficult to extend this asymptotic to values of {N} other than powers of two, and to show that {\zeta(s)} is independent of the choice of cutoff {\eta}.

From (24) we have

\displaystyle  \zeta(s) = \lim_{N \rightarrow \infty} \sum_{n=1}^\infty \frac{1}{n^s} \eta(n/N) - C_{\eta,-s} N^{1-s},

which can be viewed as a definition of {\zeta} in the region {\hbox{Re}(s)<1}. For instance, from (14), we have now proven (3) with this definition of {\zeta(s)}. However it is difficult to compute {\zeta(s)} exactly for most other values of {s}.
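Numerically, however, this limit formula is quite usable. The script below (ours; the bump cutoff and the substitution {x = t^2} used to tame the singular integrand in {C_{\eta,-1/2}} are implementation choices) evaluates it at {s=1/2} and recovers the known value {\zeta(1/2) \approx -1.4603545}:

```python
import math

def eta(x):
    """Smooth cutoff: 1 on [0, 1/2], 0 on [1, infinity), C^infinity in between."""
    if x <= 0.5:
        return 1.0
    if x >= 1.0:
        return 0.0
    t = 2.0 * (x - 0.5)
    return math.exp(-1.0 / (1.0 - t)) / (math.exp(-1.0 / t) + math.exp(-1.0 / (1.0 - t)))

N = 10 ** 5
S = math.fsum(eta(n / N) / math.sqrt(n) for n in range(1, N + 1))

# C_{eta,-1/2} = int_0^1 x^{-1/2} eta(x) dx = 2 int_0^1 eta(t^2) dt  (substitute x = t^2)
M = 100000
h = 1.0 / M
C_half = 2.0 * h * (0.5 + math.fsum(eta((k * h) ** 2) for k in range(1, M)))

zeta_half = S - C_half * math.sqrt(N)
print(zeta_half)  # about -1.4603545
```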

For each fixed {N}, it is not hard to see that the expression {\sum_{n=1}^\infty \frac{1}{n^s} \eta(n/N) - C_{\eta,-s} N^{1-s}} is complex analytic in {s}. Also, by a closer inspection of the error terms in the Euler-Maclaurin formula analysis, it is not difficult to show that for {s} in any compact region of {\{ s \in {\bf C}: \hbox{Re}(s) < 1\}}, these expressions converge uniformly as {N \rightarrow \infty}. Applying Morera’s theorem, we conclude that our definition of {\zeta(s)} is complex analytic in the region {\{ s \in {\bf C}: \hbox{Re} s < 1 \}}.

We still have to connect this definition with the traditional definition (1) of the zeta function on the other half of the complex plane. To do this, we observe that

\displaystyle  C_{\eta,-s} N^{1-s} = \int_0^N x^{-s} \eta(x/N)\ dx = \int_1^N x^{-s} \eta(x/N)\ dx - \frac{1}{s-1}

for {N} large enough. Thus we have

\displaystyle  \zeta(s) = \frac{1}{s-1} + \lim_{N \rightarrow \infty} \sum_{n=1}^\infty \frac{1}{n^s} \eta(n/N) - \int_1^N x^{-s} \eta(x/N)\ dx

for {\hbox{Re} s < 1}. The point of doing this is that this definition also makes sense in the region {\hbox{Re}(s) > 1} (due to the absolute convergence of the sum {\sum_{n=1}^\infty \frac{1}{n^s}} and the integral {\int_1^\infty x^{-s} dx}). By using the trapezoidal rule, one also sees that this definition makes sense in the region {\hbox{Re}(s) > 0}, with locally uniform convergence there also. So we in fact have a globally complex analytic definition of {\zeta(s) - \frac{1}{s-1}}, and thus a meromorphic definition of {\zeta(s)} on the complex plane. Note also that this definition gives the asymptotic

\displaystyle  \zeta(s) = \frac{1}{s-1} + \gamma + O(|s-1|) \ \ \ \ \ (25)

near {s=1}, where {\gamma} is Euler’s constant.

We have thus seen that asymptotics on smoothed partial sums of {\frac{1}{n^s}} gives rise to the familiar meromorphic properties of the Riemann zeta function {\zeta(s)}. It turns out that by combining the tools of Fourier analysis and complex analysis, one can reverse this procedure and deduce the asymptotics of the smoothed partial sums of {\frac{1}{n^s}} from the meromorphic properties of the zeta function.

Let’s see how. Fix a complex number {s} with {\hbox{Re}(s) < 1}, and a smooth cutoff function {\eta: {\bf R}^+ \rightarrow {\bf R}} which equals one near the origin, and consider the expression

\displaystyle  \sum_{n=1}^\infty \frac{1}{n^s} \eta(n/N) \ \ \ \ \ (26)

where {N} is a large number. We let {A > 0} be a large number, and rewrite this as

\displaystyle  N^{A} \sum_{n=1}^\infty \frac{1}{n^{s+A}} f_A( \log(n/N) )

where

\displaystyle  f_A(x) := e^{Ax} \eta( e^x ).

The function {f_A} is in the Schwartz class. By the Fourier inversion formula, it has a Fourier representation

\displaystyle  f_A(x) = \int_{\bf R} \hat f_A(t) e^{-ixt}\ dt

where

\displaystyle  \hat f_A(t) := \frac{1}{2\pi} \int_{\bf R} f_A(x) e^{ixt}\ dx

and so (26) can be rewritten as

\displaystyle  N^{A} \sum_{n=1}^\infty \frac{1}{n^{s+A}} \int_{\bf R} \hat f_A(t) (n/N)^{-it}\ dt.

The function {\hat f_A} is also Schwartz. If {A} is large enough, we may then interchange the integral and sum and use (1) to rewrite (26) as

\displaystyle  \int_{\bf R} \hat f_A(t) N^{A+it} \zeta(s+A+it)\ dt.

Now we have

\displaystyle  \hat f_A(t) = \frac{1}{2\pi} \int_{\bf R} e^{(A+it)x} \eta(e^x)\ dx;

integrating by parts (which is justified when {A} is large enough) we have

\displaystyle  \hat f_A(t) = -\frac{1}{2\pi (A+it)} F(A+it)

where

\displaystyle  F(A+it) = \int_{\bf R} e^{(A+it+1)x} \eta'(e^x)\ dx = \int_0^\infty y^{A+it} \eta'(y)\ dy.

We can thus write (26) as a contour integral

\displaystyle  \frac{-1}{2\pi i} \int_{s+A-i\infty}^{s+A+i\infty} \zeta(z) \frac{N^{z-s} F(z-s)}{z-s}\ dz.

Note that {\eta'} is compactly supported away from zero, which makes {F(A+it)} an entire function of {A+it}, which is uniformly bounded whenever {A} is bounded. Furthermore, from repeated integration by parts we see that {F(A+it)} is rapidly decreasing as {|t| \rightarrow \infty}, uniformly for {A} in a compact set. Meanwhile, standard estimates show that {\zeta(\sigma+it)} is of polynomial growth in {t} for {\sigma} in a compact set. Finally, the meromorphic function {\zeta(z) \frac{N^{z-s} F(z-s)}{z-s}} has a simple pole at {z=1} (with residue {\frac{N^{1-s} F(1-s)}{1-s}}) and at {z=s} (with residue {\zeta(s) F(0)}). Applying the residue theorem, we can write (26) as

\displaystyle  \frac{-1}{2\pi i} \int_{s-B-i\infty}^{s-B+i\infty} \zeta(z) \frac{N^{z-s} F(z-s)}{z-s}\ dz - \frac{N^{1-s} F(1-s)}{1-s} - \zeta(s) F(0)

for any {B>0}. Using the various bounds on {\zeta} and {F}, we see that the integral is {O( N^{-B})}. Direct computation gives {F(0)=-1}, and from integration by parts we have

\displaystyle  F(1-s) = - (1-s) \int_0^\infty y^{-s} \eta(y)\ dy = -(1-s) C_{\eta,-s}

and thus we have

\displaystyle  \sum_{n=1}^\infty \frac{1}{n^s} \eta(n/N) = \zeta(s) + C_{\eta,-s} N^{1-s} + O(N^{-B})

for any {B>0}, which is (24) with the refined error term indicated in Remark 2.

The above argument reveals that the simple pole of {\zeta(s)} at {s=1} is directly connected to the {c_{\eta,-s} N^{1-s}} term in the asymptotics of the smoothed partial sums. More generally, if a Dirichlet series

\displaystyle  D(s) = \sum_{n=1}^\infty \frac{a_n}{n^s}

has a meromorphic continuation to the entire complex plane, and does not grow too fast at infinity, then one (heuristically at least) has the asymptotic

\displaystyle  \sum_{n=1}^\infty \frac{a_n}{n^s} \eta(n/N) = D(s) + \sum_\rho C_{\eta,\rho-s-1} r_\rho N^{\rho-s} + \ldots

where {\rho} ranges over the poles of {D}, and {r_\rho} are the residues at those poles. For instance, one has the famous explicit formula

\displaystyle  \sum_{n=1}^\infty \Lambda(n) \eta(n/N) = C_{\eta,0} N - \sum_\rho C_{\eta,\rho-1} N^\rho + \ldots

where {\Lambda} is the von Mangoldt function, {\rho} are the non-trivial zeroes of the Riemann zeta function (counting multiplicity, if any), and {\ldots} is an error term (basically arising from the trivial zeroes of zeta); this ultimately reflects the fact that the Dirichlet series

\displaystyle  \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)}

has a simple pole at {s=1} (with residue {+1}) and simple poles at every zero of the zeta function with residue {-1} (weighted again by multiplicity, though it is not believed that multiple zeroes actually exist).

The link between poles of the zeta function (and its relatives) and asymptotics of (smoothed) partial sums of arithmetical functions can be used to compare elementary methods in analytic number theory with complex methods. Roughly speaking, elementary methods rest on leading term asymptotics of partial sums of arithmetical functions, and mostly exploit the simple pole of {\zeta} at {s=1} (and the lack of a simple zero of Dirichlet {L}-functions at {s=1}); in contrast, complex methods also take full advantage of the zeroes of {\zeta} and Dirichlet {L}-functions (or the lack thereof) in the entire complex plane, as well as the functional equation (which, in terms of smoothed partial sums, manifests itself through the Poisson summation formula). Indeed, using the above correspondences it is not hard to see that the prime number theorem (for instance) is equivalent to the lack of zeroes of the Riemann zeta function on the line {\hbox{Re}(s)=1}.

With this dictionary between elementary methods and complex methods, the Dirichlet hyperbola method in elementary analytic number theory corresponds to analysing the behaviour of poles and residues when multiplying together two Dirichlet series. For instance, by using the formula (11) and the hyperbola method, together with the asymptotic

\displaystyle  \sum_{n=1}^\infty \frac{1}{n} \eta(n/N) = \int_1^\infty \eta(x/N)\ \frac{dx}{x} + \gamma + O(1/N)

which can be obtained from the trapezoidal rule and the definition of {\gamma}, one can obtain the asymptotic

\displaystyle  \sum_{n=1}^\infty \tau(n) \eta(n/N) = \int_1^\infty \log x\ \eta(x/N)\ dx + 2\gamma C_{\eta,0} N + O(\sqrt{N})

where {\tau(n) := \sum_{d|n} 1} is the divisor function (and in fact one can improve the {O(\sqrt{N})} bound substantially by being more careful); this corresponds to the fact that the Dirichlet series

\displaystyle  \sum_{n=1}^\infty \frac{\tau(n)}{n^s} = \zeta(s)^2

has a double pole at {s=1} with expansion

\displaystyle  \zeta(s)^2 = \frac{1}{(s-1)^2} + 2 \gamma \frac{1}{s-1} + O(1)

and no other poles, which of course follows by multiplying (25) with itself.
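The sharp-cutoff counterpart of this divisor asymptotic is easy to test directly with the hyperbola method (the code is ours, purely illustrative; the numerical value of {\gamma} and the classical Dirichlet error bound {O(\sqrt{N})} are the standard ones):

```python
import math

def divisor_sum(N):
    """sum_{n<=N} tau(n) via the Dirichlet hyperbola method:
    2 * sum_{d <= sqrt(N)} floor(N/d) - floor(sqrt(N))^2."""
    r = math.isqrt(N)
    return 2 * sum(N // d for d in range(1, r + 1)) - r * r

# cross-check against the direct count sum_{d<=N} floor(N/d)
for N in (1, 10, 99, 1000):
    assert divisor_sum(N) == sum(N // d for d in range(1, N + 1))

# Dirichlet's asymptotic: sum_{n<=N} tau(n) = N log N + (2 gamma - 1) N + O(sqrt(N))
gamma = 0.5772156649015329
N = 10 ** 6
err = divisor_sum(N) - (N * math.log(N) + (2 * gamma - 1) * N)
print(err, math.sqrt(N))  # err is well within O(sqrt(N))
```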

Remark 3 In the literature, elementary methods in analytic number theory often use sharply truncated sums rather than smoothed sums. However, as indicated earlier, the error terms tend to be slightly better when working with smoothed sums (although not much gain is obtained in this manner when dealing with sums of functions that are sensitive to the primes, such as {\Lambda}, as the terms arising from the zeroes of the zeta function tend to dominate any saving in this regard).