The Riemann zeta function $\zeta(s)$ is defined in the region ${\rm Re}(s) > 1$ by the absolutely convergent series

$\displaystyle \zeta(s) := \sum_{n=1}^\infty \frac{1}{n^s}. \ \ \ \ \ (1)$

Thus, for instance, it is known that $\zeta(2) = \frac{\pi^2}{6}$, and thus

$\displaystyle \sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}. \ \ \ \ \ (2)$

For ${\rm Re}(s) \leq 1$, the series on the right-hand side of (1) is no longer absolutely convergent, or even conditionally convergent. Nevertheless, the $\zeta$ function can be extended to this region (with a pole at $s=1$) by analytic continuation. For instance, it can be shown that after analytic continuation, one has $\zeta(0) = -\frac{1}{2}$, $\zeta(-1) = -\frac{1}{12}$, and $\zeta(-2) = 0$, and more generally

$\displaystyle \zeta(-s) = -\frac{B_{s+1}}{s+1} \ \ \ \ \ (3)$

for $s = 1, 2, 3, \ldots$, where $B_n$ are the Bernoulli numbers. If one formally applies (1) at these values of $s$, one obtains the somewhat bizarre formulae

$\displaystyle 1 + 1 + 1 + \ldots = -\frac{1}{2} \ \ \ \ \ (4)$

$\displaystyle 1 + 2 + 3 + \ldots = -\frac{1}{12} \ \ \ \ \ (5)$

$\displaystyle 1 + 4 + 9 + \ldots = 0 \ \ \ \ \ (6)$

and

$\displaystyle 1 + 2^s + 3^s + \ldots = -\frac{B_{s+1}}{s+1}. \ \ \ \ \ (7)$
Clearly, these formulae do not make sense if one stays within the traditional way to evaluate infinite series, and so it seems that one is forced to use the somewhat unintuitive analytic continuation interpretation of such sums to make these formulae rigorous. But as it stands, the formulae look "wrong" for several reasons. Most obviously, the summands on the left are all positive, but the right-hand sides can be zero or negative. A little more subtly, the identities do not appear to be consistent with each other. For instance, if one adds (4) to (5), one obtains

$\displaystyle 2 + 3 + 4 + \ldots = -\frac{7}{12} \ \ \ \ \ (8)$

whereas if one subtracts (4) from (5) one obtains instead

$\displaystyle 0 + 1 + 2 + 3 + \ldots = \frac{5}{12} \ \ \ \ \ (9)$

and the two equations seem inconsistent with each other (for instance, the left-hand side of (9) is formally the same series as that of (5), yet the two right-hand sides disagree).
However, it is possible to interpret (4), (5), (6) by purely real-variable methods, without recourse to complex analysis methods such as analytic continuation, thus giving an “elementary” interpretation of these sums that only requires undergraduate calculus; we will later also explain how this interpretation deals with the apparent inconsistencies pointed out above.
To see this, let us first consider a convergent sum such as (2). The classical interpretation of this formula is the assertion that the partial sums

$\displaystyle \sum_{n=1}^N \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \ldots + \frac{1}{N^2}$

converge to $\frac{\pi^2}{6}$ as $N \rightarrow \infty$, or in other words that

$\displaystyle \sum_{n=1}^N \frac{1}{n^2} = \frac{\pi^2}{6} + o(1),$

where $o(1)$ denotes a quantity that goes to zero as $N \rightarrow \infty$. Actually, by using the integral test estimate

$\displaystyle \sum_{n=N+1}^\infty \frac{1}{n^2} \leq \int_N^\infty \frac{dx}{x^2} = \frac{1}{N},$

we have the sharper result

$\displaystyle \sum_{n=1}^N \frac{1}{n^2} = \frac{\pi^2}{6} + O\left(\frac{1}{N}\right).$

Thus we can view $\frac{\pi^2}{6}$ as the leading coefficient of the asymptotic expansion of the partial sums of $\sum_{n=1}^\infty \frac{1}{n^2}$.
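To make the $O\left(\frac{1}{N}\right)$ behaviour concrete, here is a minimal numerical sketch in Python (the function name classical_partial_sum is purely illustrative); it checks that $N$ times the error stays bounded, as the $O\left(\frac{1}{N}\right)$ bound predicts.

import math

def classical_partial_sum(N):
    # truncated partial sum 1/1^2 + 1/2^2 + ... + 1/N^2
    return sum(1.0 / n**2 for n in range(1, N + 1))

for N in (10, 100, 1000, 10000):
    err = math.pi**2 / 6 - classical_partial_sum(N)
    # N * err stays bounded (in fact it tends to 1), consistent with an O(1/N) error
    print(N, err, N * err)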
One can then try to inspect the partial sums of the expressions in (4), (5), (6), but the coefficients bear no obvious relationship to the right-hand sides:

$\displaystyle \sum_{n=1}^N 1 = N$

$\displaystyle \sum_{n=1}^N n = \frac{1}{2} N^2 + \frac{1}{2} N$

$\displaystyle \sum_{n=1}^N n^2 = \frac{1}{3} N^3 + \frac{1}{2} N^2 + \frac{1}{6} N.$

For (7), the classical Faulhaber formula (or Bernoulli formula) gives

$\displaystyle \sum_{n=1}^N n^s = \frac{1}{s+1} \sum_{j=0}^{s} \binom{s+1}{j} B_j N^{s+1-j}$

for $s \geq 1$ (using here the convention $B_1 = \frac{1}{2}$), which has a vague resemblance to (7), but again the connection is not particularly clear.
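One can verify Faulhaber's formula for small exponents with exact rational arithmetic; the following sketch is just a sanity check (the list B hard-codes the first few Bernoulli numbers in the $B_1 = \frac{1}{2}$ convention used above, and the names are illustrative).

from fractions import Fraction
from math import comb

# Bernoulli numbers B_0, ..., B_4 with the convention B_1 = +1/2
B = [Fraction(1), Fraction(1, 2), Fraction(1, 6), Fraction(0), Fraction(-1, 30)]

def faulhaber(s, N):
    # right-hand side of the Faulhaber/Bernoulli formula
    return sum(Fraction(comb(s + 1, j)) * B[j] * N**(s + 1 - j) for j in range(s + 1)) / (s + 1)

for s in (1, 2, 3):
    for N in (5, 10, 100):
        assert faulhaber(s, N) == sum(n**s for n in range(1, N + 1))
print("Faulhaber formula verified for s = 1, 2, 3")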
The problem here is the discrete nature of the partial sum

$\displaystyle \sum_{n=1}^N n^s = \sum_{n \leq N} n^s,$

which (if $N$ is viewed as a real number) has jump discontinuities at each positive integer value of $N$. These discontinuities yield various artefacts when trying to approximate this sum by a polynomial in $N$. (These artefacts also occur in (2), but happen in that case to be obscured in the error term $O\left(\frac{1}{N}\right)$; but for the divergent sums (4), (5), (6), (7), they are large enough to cause real trouble.)
However, these issues can be resolved by replacing the abruptly truncated partial sums $\sum_{n=1}^N n^s$ with smoothed sums $\sum_{n=1}^\infty \eta(n/N) n^s$, where $\eta: {\bf R}^+ \rightarrow {\bf R}$ is a cutoff function, or more precisely a compactly supported bounded function that equals $1$ at $0$. The case when $\eta$ is the indicator function $1_{[0,1]}$ then corresponds to the traditional partial sums, with all the attendant discretisation artefacts; but if one chooses a smoother cutoff, then these artefacts begin to disappear (or at least become lower order), and the true asymptotic expansion becomes more manifest.
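For concreteness, here is one way to set up such a smoothed sum in Python; the particular bump chosen for $\eta$ is just one convenient example of a smooth, compactly supported function equal to $1$ at $0$, and all the names are illustrative.

import math

def eta(x):
    # a smooth cutoff: eta(0) = 1, and eta vanishes for x >= 1
    return math.exp(-x * x / (1.0 - x * x)) if x < 1.0 else 0.0

def smoothed_sum(s, N):
    # smoothed version of 1 + 2^s + 3^s + ...; only the terms with n < N contribute
    return sum(n**s * eta(n / N) for n in range(1, N))

def truncated_sum(s, N):
    # the abruptly truncated partial sum, for comparison
    return sum(n**s for n in range(1, N + 1))

print(truncated_sum(1, 100), smoothed_sum(1, 100))

Taking $\eta$ to be the indicator function $1_{[0,1]}$ instead would reproduce the truncated partial sum exactly, which is one way to see that smoothed sums genuinely generalise the classical partial sums.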
Note that smoothing does not affect the asymptotic value of sums that were already absolutely convergent, thanks to the dominated convergence theorem. For instance, we have

$\displaystyle \lim_{N \rightarrow \infty} \sum_{n=1}^\infty \frac{\eta(n/N)}{n^2} = \sum_{n=1}^\infty \frac{1}{n^2}$

whenever $\eta$ is a cutoff function (since $\frac{\eta(n/N)}{n^2}$ converges pointwise to $\frac{1}{n^2}$ as $N \rightarrow \infty$ and is uniformly dominated by $\frac{C}{n^2}$ for some constant $C$). If $\eta$ is equal to $1$ on a neighbourhood of the origin, then the integral test argument recovers the $O\left(\frac{1}{N}\right)$ decay rate:

$\displaystyle \sum_{n=1}^\infty \frac{\eta(n/N)}{n^2} = \frac{\pi^2}{6} + O\left(\frac{1}{N}\right).$
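As a quick numerical sanity check of this (again with an arbitrary smooth bump standing in for $\eta$, so the code is only a sketch):

import math

def eta(x):
    # a smooth cutoff: eta(0) = 1, and eta vanishes for x >= 1
    return math.exp(-x * x / (1.0 - x * x)) if x < 1.0 else 0.0

def smoothed_basel(N):
    # smoothed version of the convergent sum 1/1^2 + 1/2^2 + ...
    return sum(eta(n / N) / n**2 for n in range(1, N))

for N in (10, 100, 1000, 10000):
    # the difference from pi^2/6 tends to zero, so smoothing does not change the limiting value
    print(N, math.pi**2 / 6 - smoothed_basel(N))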
However, smoothing can greatly improve the convergence properties of a divergent sum. The simplest example is Grandi's series

$\displaystyle 1 - 1 + 1 - \ldots.$

The partial sums

$\displaystyle \sum_{n=1}^N (-1)^{n-1}$

oscillate between $1$ and $0$, and so this series is not conditionally convergent (and certainly not absolutely convergent). However, if one performs analytic continuation on the series

$\displaystyle \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^s}$

and sets $s = 0$, one obtains a formal value of $\frac{1}{2}$ for this series. This value can also be obtained by smooth summation. Indeed, for any cutoff function $\eta$, we can regroup

$\displaystyle \sum_{n=1}^\infty (-1)^{n-1} \eta(n/N) = \frac{\eta(1/N)}{2} + \sum_{m=1}^\infty \frac{\eta\left(\frac{2m-1}{N}\right) - 2\eta\left(\frac{2m}{N}\right) + \eta\left(\frac{2m+1}{N}\right)}{2}.$

If $\eta$ is twice continuously differentiable (i.e. $\eta \in C^2$), then from Taylor expansion we see that the summand has size $O(N^{-2})$, and also (from the compact support of $\eta$) is only non-zero when $m = O(N)$. This leads to the asymptotic

$\displaystyle \sum_{n=1}^\infty (-1)^{n-1} \eta(n/N) = \frac{1}{2} + O\left(\frac{1}{N}\right),$

and so we recover the value of $\frac{1}{2}$ as the leading term of the asymptotic expansion.
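Here is a small numerical illustration of this asymptotic (an arbitrary smooth bump is used for $\eta$; the names are only illustrative).

import math

def eta(x):
    # a smooth cutoff: eta(0) = 1, and eta vanishes for x >= 1
    return math.exp(-x * x / (1.0 - x * x)) if x < 1.0 else 0.0

def smoothed_grandi(N):
    # smoothed Grandi series: sum of (-1)^(n-1) * eta(n/N)
    return sum((-1) ** (n - 1) * eta(n / N) for n in range(1, N))

for N in (10, 100, 1000):
    value = smoothed_grandi(N)
    # the value approaches 1/2, and N times the error stays bounded,
    # consistent with the 1/2 + O(1/N) asymptotic
    print(N, value, N * (value - 0.5))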
Exercise 1 Show that if $\eta$ is merely once continuously differentiable (i.e. $\eta \in C^1$), then we have a similar asymptotic, but with an error term of $o(1)$ instead of $O\left(\frac{1}{N}\right)$. This is an instance of a more general principle that smoother cutoffs lead to better error terms, though the improvement sometimes stops after some degree of regularity.
Remark 2 The most famous instance of smoothed summation is Cesàro summation, which corresponds to the cutoff function $\eta(x) := (1-x)_+$. Unsurprisingly, when Cesàro summation is applied to Grandi's series, one again recovers the value of $\frac{1}{2}$.
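One can check this numerically with the cutoff $(1-x)_+$ (a brief sketch; the function names are illustrative).

def cesaro_cutoff(x):
    # the cutoff (1 - x)_+ associated with Cesaro summation
    return max(1.0 - x, 0.0)

def cesaro_grandi(N):
    return sum((-1) ** (n - 1) * cesaro_cutoff(n / N) for n in range(1, N))

for N in (10, 11, 100, 101, 1000, 1001):
    # the values oscillate mildly with the parity of N but converge to 1/2
    print(N, cesaro_grandi(N))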
If we now revisit the divergent series (4), (5), (6), (7) with smooth summation in mind, we finally begin to see the origin of the right-hand sides. Indeed, for any fixed smooth cutoff function $\eta$, we will shortly show that

$\displaystyle \sum_{n=1}^\infty \eta(n/N) = -\frac{1}{2} + C_{\eta,0} N + O\left(\frac{1}{N}\right) \ \ \ \ \ (11)$

$\displaystyle \sum_{n=1}^\infty n\, \eta(n/N) = -\frac{1}{12} + C_{\eta,1} N^2 + O\left(\frac{1}{N}\right) \ \ \ \ \ (12)$

$\displaystyle \sum_{n=1}^\infty n^2\, \eta(n/N) = C_{\eta,2} N^3 + O\left(\frac{1}{N}\right) \ \ \ \ \ (13)$

and more generally

$\displaystyle \sum_{n=1}^\infty n^s\, \eta(n/N) = -\frac{B_{s+1}}{s+1} + C_{\eta,s} N^{s+1} + O\left(\frac{1}{N}\right) \ \ \ \ \ (14)$

for any fixed $s = 1, 2, 3, \ldots$, where $C_{\eta,s}$ is the Archimedean factor

$\displaystyle C_{\eta,s} := \int_0^\infty x^s \eta(x)\, dx \ \ \ \ \ (15)$

(which is also essentially the Mellin transform of $\eta$). Thus we see that the values (4), (5), (6), (7) obtained by analytic continuation are nothing more than the constant terms of the asymptotic expansion of the smoothed partial sums. This is not a coincidence; we will explain the equivalence of these two interpretations of such sums (in the model case when the analytic continuation has only finitely many poles and does not grow too fast at infinity) below the fold.
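The constant term $-\frac{1}{12}$ in (12) can already be seen numerically. In the sketch below (again with an arbitrary smooth bump for $\eta$), the divergent $C_{\eta,1} N^2$ term is cancelled by combining the smoothed sums at $N$ and $2N$; this Richardson-type combination is used purely as a numerical shortcut here and is not part of the analysis described above.

import math

def eta(x):
    # a smooth cutoff: eta(0) = 1, and eta vanishes for x >= 1
    return math.exp(-x * x / (1.0 - x * x)) if x < 1.0 else 0.0

def smoothed_one_plus_two_plus(N):
    # smoothed version of 1 + 2 + 3 + ...: sum of n * eta(n/N)
    return sum(n * eta(n / N) for n in range(1, N))

for N in (50, 100, 200, 400):
    leading = smoothed_one_plus_two_plus(N) / N**2
    # (4 S(N) - S(2N)) / 3 cancels the C_{eta,1} N^2 term in (12), leaving the constant term
    constant = (4 * smoothed_one_plus_two_plus(N) - smoothed_one_plus_two_plus(2 * N)) / 3
    # 'leading' stabilises near the Archimedean factor C_{eta,1},
    # while 'constant' approaches -1/12 = -0.0833...
    print(N, leading, constant)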
This interpretation clears up the apparent inconsistencies alluded to earlier. For instance, the sum $1 + 2 + 3 + \ldots$ consists only of non-negative terms, as do its smoothed partial sums $\sum_{n=1}^\infty n\, \eta(n/N)$ (if $\eta$ is non-negative). Comparing this with (12), we see that this forces the highest-order term $C_{\eta,1} N^2$ to be non-negative (as indeed it is), but does not prohibit the lower-order constant term $-\frac{1}{12}$ from being negative (which of course it is).
Similarly, if we add together (12) and (11) we obtain

$\displaystyle \sum_{n=1}^\infty (n+1)\, \eta(n/N) = -\frac{7}{12} + C_{\eta,1} N^2 + C_{\eta,0} N + O\left(\frac{1}{N}\right), \ \ \ \ \ (16)$

while if we subtract (11) from (12) we obtain

$\displaystyle \sum_{n=1}^\infty (n-1)\, \eta(n/N) = \frac{5}{12} + C_{\eta,1} N^2 - C_{\eta,0} N + O\left(\frac{1}{N}\right). \ \ \ \ \ (17)$

These two asymptotics are not inconsistent with each other; indeed, if we shift the index of summation in (17), we can write

$\displaystyle \sum_{n=1}^\infty (n-1)\, \eta(n/N) = \sum_{n=0}^\infty n\, \eta\left(\frac{n+1}{N}\right), \ \ \ \ \ (18)$

and so we now see that the discrepancy between the two sums in (8), (9) comes from the shifting of the cutoff $\eta(n/N)$ to $\eta\left(\frac{n+1}{N}\right)$, which is invisible in the formal expressions in (8), (9) but becomes manifestly present in the smoothed sum formulation.
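The two different constant terms in (16) and (17) can likewise be seen numerically: combining the smoothed sums at $N$, $2N$ and $4N$ with weights $\frac{8}{3}, -2, \frac{1}{3}$ cancels both the $C_{\eta,1} N^2$ and $C_{\eta,0} N$ terms. This is again only a Richardson-type numerical shortcut with an arbitrary smooth bump for $\eta$, not part of the argument above.

import math

def eta(x):
    # a smooth cutoff: eta(0) = 1, and eta vanishes for x >= 1
    return math.exp(-x * x / (1.0 - x * x)) if x < 1.0 else 0.0

def shifted_up(N):
    # left-hand side of (16): sum of (n+1) * eta(n/N)
    return sum((n + 1) * eta(n / N) for n in range(1, N))

def shifted_down(N):
    # left-hand side of (17): sum of (n-1) * eta(n/N)
    return sum((n - 1) * eta(n / N) for n in range(1, N))

def constant_term(f, N):
    # (8 f(N) - 6 f(2N) + f(4N)) / 3 kills the N^2 and N terms, keeping the constant term
    return (8 * f(N) - 6 * f(2 * N) + f(4 * N)) / 3

for N in (50, 100, 200):
    # the first column approaches -7/12 = -0.5833..., the second approaches 5/12 = 0.4166...
    print(N, constant_term(shifted_up, N), constant_term(shifted_down, N))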
Exercise 3 By Taylor expanding $\eta\left(\frac{n+1}{N}\right)$ and using (11), (18), show that (16) and (17) are indeed consistent with each other, and in particular one can deduce the latter from the former.