In Notes 2, the Riemann zeta function {\zeta} (and more generally, the Dirichlet {L}-functions {L(\cdot,\chi)}) were extended meromorphically into the region {\{ s: \hbox{Re}(s) > 0 \}} in and to the right of the critical strip. This is a sufficient amount of meromorphic continuation for many applications in analytic number theory, such as establishing the prime number theorem and its variants. The zeroes of the zeta function in the critical strip {\{ s: 0 < \hbox{Re}(s) < 1 \}} are known as the non-trivial zeroes of {\zeta}, and thanks to the truncated explicit formulae developed in Notes 2, they control the asymptotic distribution of the primes (up to small errors).

The {\zeta} function obeys the trivial functional equation

\displaystyle \zeta(\overline{s}) = \overline{\zeta(s)} \ \ \ \ \ (1)

 

for all {s} in its domain of definition. Indeed, as {\zeta(s)} is real-valued when {s} is real, the function {\zeta(s) - \overline{\zeta(\overline{s})}} vanishes on the real line and is also meromorphic, and hence vanishes everywhere. Similarly one has the functional equation

\displaystyle \overline{L(s, \chi)} = L(\overline{s}, \overline{\chi}). \ \ \ \ \ (2)

 

From these equations we see that the zeroes of the zeta function are symmetric across the real axis, and the zeroes of {L(\cdot,\chi)} are the reflection of the zeroes of {L(\cdot,\overline{\chi})} across this axis.

It is a remarkable fact that these functions obey an additional, and more non-trivial, functional equation, this time establishing a symmetry across the critical line {\{ s: \hbox{Re}(s) = \frac{1}{2} \}} rather than the real axis. One consequence of this symmetry is that the zeta function and {L}-functions may be extended meromorphically to the entire complex plane. For the zeta function, the functional equation was discovered by Riemann, and reads as follows:

Theorem 1 (Functional equation for the Riemann zeta function) The Riemann zeta function {\zeta} extends meromorphically to the entire complex plane, with a simple pole at {s=1} and no other poles. Furthermore, one has the functional equation

\displaystyle \zeta(s) = \alpha(s) \zeta(1-s) \ \ \ \ \ (3)

 

or equivalently

\displaystyle \zeta(1-s) = \alpha(1-s) \zeta(s) \ \ \ \ \ (4)

 

for all complex {s} other than {s=0,1}, where {\alpha} is the function

\displaystyle \alpha(s) := 2^s \pi^{s-1} \sin( \frac{\pi s}{2}) \Gamma(1-s). \ \ \ \ \ (5)

 

Here {\cos(z) := \frac{e^{iz} + e^{-iz}}{2}}, {\sin(z) := \frac{e^{iz}-e^{-iz}}{2i}} are the complex-analytic extensions of the classical trigonometric functions {\cos(x), \sin(x)}, and {\Gamma} is the Gamma function, whose definition and properties we review below the fold.
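Before proceeding, the functional equation (3) can be sanity-checked numerically at points where both sides have classical closed forms, namely {\zeta(2)=\pi^2/6}, {\zeta(4)=\pi^4/90}, {\zeta(-1)=-1/12}, {\zeta(-3)=1/120}. The following standard-library Python sketch is purely illustrative (it plays no role in the proofs below):

```python
import math

def alpha(s: float) -> float:
    # The factor alpha(s) = 2^s * pi^(s-1) * sin(pi s / 2) * Gamma(1-s)
    # from the functional equation zeta(s) = alpha(s) zeta(1-s);
    # real s avoiding the poles of Gamma(1-s).
    return 2.0**s * math.pi**(s - 1) * math.sin(math.pi * s / 2) * math.gamma(1 - s)

# Classical special values of zeta at the even positive integers:
zeta2, zeta4 = math.pi**2 / 6, math.pi**4 / 90

# Functional equation at s = -1: zeta(-1) = alpha(-1) * zeta(2) = -1/12
print(alpha(-1) * zeta2)
# and at s = -3: zeta(-3) = alpha(-3) * zeta(4) = 1/120
print(alpha(-3) * zeta4)
```

Both outputs agree with the stated special values to machine precision, as the functional equation predicts.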

The functional equation can be placed in a more symmetric form as follows:

Corollary 2 (Functional equation for the Riemann xi function) The Riemann xi function

\displaystyle \xi(s) := \frac{1}{2} s(s-1) \pi^{-s/2} \Gamma(\frac{s}{2}) \zeta(s) \ \ \ \ \ (6)

 

is analytic on the entire complex plane {{\bf C}} (after removing all removable singularities), and obeys the functional equations

\displaystyle \xi(\overline{s}) = \overline{\xi(s)}

and

\displaystyle \xi(s) = \xi(1-s). \ \ \ \ \ (7)

 

In particular, the zeroes of {\xi} consist precisely of the non-trivial zeroes of {\zeta}, and are symmetric about both the real axis and the critical line. Also, {\xi} is real-valued on the critical line and on the real axis.

Corollary 2 is an easy consequence of Theorem 1 together with the duplication theorem for the Gamma function, and the fact that {\zeta} has no zeroes to the right of the critical strip, and is left as an exercise to the reader (Exercise 18). The functional equation in Theorem 1 has many proofs, but most of them are related in one way or another to the Poisson summation formula

\displaystyle \sum_n f(n) = \sum_m \hat f(2\pi m) \ \ \ \ \ (8)

 

(Theorem 34 from Supplement 2, at least in the case when {f} is twice continuously differentiable and compactly supported), which can be viewed as a Fourier-analytic link between the coarse-scale distribution of the integers and the fine-scale distribution of the integers. Indeed, there is a quick heuristic proof of the functional equation that comes from formally applying the Poisson summation formula to the function {1_{x>0} \frac{1}{x^s}}, and noting that the functions {x \mapsto \frac{1}{x^s}} and {\xi \mapsto \frac{1}{\xi^{1-s}}} are formally Fourier transforms of each other, up to some Gamma function factors, as well as some trigonometric factors arising from the distinction between the real line and the half-line. Such a heuristic proof can indeed be made rigorous, and we do so below the fold, while also providing Riemann’s two classical proofs of the functional equation.

From the functional equation (and the poles of the Gamma function), one can see that {\zeta} has trivial zeroes at the negative even integers {-2,-4,-6,\dots}, in addition to the non-trivial zeroes in the critical strip. More generally, the following table summarises the zeroes and poles of the various special functions appearing in the functional equation, after they have been meromorphically extended to the entire complex plane, and with zeroes classified as “non-trivial” or “trivial” depending on whether they lie in the critical strip or not. (Exponential functions such as {2^{s-1}} or {\pi^{-s}} have no zeroes or poles, and will be ignored in this table; the zeroes and poles of rational functions such as {s(s-1)} are self-evident and will also not be displayed here.)

Function Non-trivial zeroes Trivial zeroes Poles
{\zeta(s)} Yes {-2,-4,-6,\dots} {1}
{\zeta(1-s)} Yes {3,5,\dots} {0}
{\sin(\pi s/2)} No Even integers No
{\cos(\pi s/2)} No Odd integers No
{\sin(\pi s)} No Integers No
{\Gamma(s)} No No {0,-1,-2,\dots}
{\Gamma(s/2)} No No {0,-2,-4,\dots}
{\Gamma(1-s)} No No {1,2,3,\dots}
{\Gamma((1-s)/2)} No No {2,4,6,\dots}
{\xi(s)} Yes No No

Among other things, this table indicates that the Gamma and trigonometric factors in the functional equation are tied to the trivial zeroes and poles of zeta, but have no direct bearing on the distribution of the non-trivial zeroes, which is the most important feature of the zeta function for the purposes of analytic number theory, beyond the fact that they are symmetric about the real axis and critical line. In particular, the Riemann hypothesis is not going to be resolved just from further analysis of the Gamma function!

The zeta function computes the “global” sum {\sum_n \frac{1}{n^s}}, with {n} ranging all the way from {1} to infinity. However, by some Fourier-analytic (or complex-analytic) manipulation, it is possible to use the zeta function to also control more “localised” sums, such as {\sum_n \frac{1}{n^s} \psi(\log n - \log N)} for some {N \gg 1} and some smooth compactly supported function {\psi: {\bf R} \rightarrow {\bf C}}. It turns out that the functional equation (3) for the zeta function localises to this context, giving an approximate functional equation which roughly speaking takes the form

\displaystyle \sum_n \frac{1}{n^s} \psi( \log n - \log N ) \approx \alpha(s) \sum_m \frac{1}{m^{1-s}} \psi( \log M - \log m )

whenever {s=\sigma+it} and {NM = \frac{|t|}{2\pi}}; see Theorem 38 below for a precise formulation of this equation. Unsurprisingly, this form of the functional equation is also very closely related to the Poisson summation formula (8), indeed it is essentially a special case of that formula (or more precisely, of the van der Corput {B}-process). This useful identity relates long smoothed sums of {\frac{1}{n^s}} to short smoothed sums of {\frac{1}{m^{1-s}}} (or vice versa), and can thus be used to shorten exponential sums involving terms such as {\frac{1}{n^s}}, which is useful when obtaining some of the more advanced estimates on the Riemann zeta function.

We will give two other basic uses of the functional equation. The first is to get a good count (as opposed to merely an upper bound) on the density of zeroes in the critical strip, establishing the Riemann-von Mangoldt formula that the number {N(T)} of zeroes of imaginary part between {0} and {T} is {\frac{T}{2\pi} \log \frac{T}{2\pi} - \frac{T}{2\pi} + O(\log T)} for large {T}. The other is to obtain untruncated versions of the explicit formula from Notes 2, giving a remarkable exact formula for sums involving the von Mangoldt function in terms of zeroes of the Riemann zeta function. These results are not strictly necessary for most of the material in the rest of the course, but certainly help to clarify the nature of the Riemann zeta function and its relation to the primes.

In view of the material in previous notes, it should not be surprising that there are analogues of all of the above theory for Dirichlet {L}-functions {L(\cdot,\chi)}. We will restrict attention to primitive characters {\chi}, since the {L}-function for imprimitive characters merely differs from the {L}-function of the associated primitive factor by a finite Euler product; indeed, if {\chi = \chi' \chi_0} for some principal {\chi_0} whose modulus {q_0} is coprime to that of {\chi'}, then

\displaystyle L(s,\chi) = L(s,\chi') \prod_{p|q_0} (1 - \frac{\chi'(p)}{p^s}) \ \ \ \ \ (9)

 

(cf. equation (45) of Notes 2).

The main new feature is that the Poisson summation formula needs to be “twisted” by a Dirichlet character {\chi}, and this boils down to the problem of understanding the finite (additive) Fourier transform of a Dirichlet character. This is achieved by the classical theory of Gauss sums, which we review below the fold. There is one new wrinkle; the value of {\chi(-1) \in \{-1,+1\}} plays a role in the functional equation. More precisely, we have

Theorem 3 (Functional equation for {L}-functions) Let {\chi} be a primitive character of modulus {q} with {q>1}. Then {L(s,\chi)} extends to an entire function on the complex plane, with

\displaystyle L(s,\chi) = \varepsilon(\chi) 2^s \pi^{s-1} q^{1/2-s} \sin(\frac{\pi}{2}(s+\kappa)) \Gamma(1-s) L(1-s,\overline{\chi})

or equivalently

\displaystyle L(1-s,\overline{\chi}) = \varepsilon(\overline{\chi}) 2^{1-s} \pi^{-s} q^{s-1/2} \sin(\frac{\pi}{2}(1-s+\kappa)) \Gamma(s) L(s,\chi)

for all {s}, where {\kappa} is equal to {0} in the even case {\chi(-1)=+1} and {1} in the odd case {\chi(-1)=-1}, and

\displaystyle \varepsilon(\chi) := \frac{\tau(\chi)}{i^\kappa \sqrt{q}} \ \ \ \ \ (10)

 

where {\tau(\chi)} is the Gauss sum

\displaystyle \tau(\chi) := \sum_{n \in {\bf Z}/q{\bf Z}} \chi(n) e(n/q), \ \ \ \ \ (11)

 

and {e(x) := e^{2\pi ix}}, with the convention that the {q}-periodic function {n \mapsto e(n/q)} is also (by abuse of notation) applied to {n} in the cyclic group {{\bf Z}/q{\bf Z}}.
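To get a concrete feel for the quantities appearing in Theorem 3, one can compute a few Gauss sums directly. The standard-library Python sketch below (an illustration, not part of the notes) takes {\chi} to be the Legendre symbol modulo a prime {p \equiv 1 \pmod 4}; this is an even real primitive character ({\kappa = 0}), and Gauss's classical evaluation gives {\tau(\chi) = \sqrt{p}}, so that {\varepsilon(\chi)=1} in these cases:

```python
import cmath
import math

def legendre(n: int, p: int) -> int:
    # Legendre symbol (n|p): the real primitive quadratic character mod p,
    # computed via Euler's criterion n^((p-1)/2) mod p.
    if n % p == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

def gauss_sum(p: int) -> complex:
    # tau(chi) = sum_{n mod p} chi(n) e(n/p) with e(x) = exp(2 pi i x).
    return sum(legendre(n, p) * cmath.exp(2j * cmath.pi * n / p)
               for n in range(p))

for p in (5, 13, 17):   # primes p = 1 mod 4: tau(chi) should equal sqrt(p)
    print(p, gauss_sum(p), math.sqrt(p))
```

For primes {p \equiv 3 \pmod 4} the character is odd and the Gauss sum is instead {i\sqrt{p}}; in all cases {|\tau(\chi)| = \sqrt{p}}.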

From this functional equation and (2) we see that, as with the Riemann zeta function, the non-trivial zeroes of {L(s,\chi)} (defined as the zeroes within the critical strip {\{ s: 0 < \hbox{Re}(s) < 1 \}}) are symmetric around the critical line (and, if {\chi} is real, are also symmetric around the real axis). In addition, {L(s,\chi)} acquires trivial zeroes at the negative even integers and at zero if {\chi(-1)=1}, and at the negative odd integers if {\chi(-1)=-1}. For imprimitive {\chi}, we see from (9) that {L(s,\chi)} also acquires some additional trivial zeroes on the left edge of the critical strip.

There is also a symmetric version of this equation, analogous to Corollary 2:

Corollary 4 Let {\chi,q,\varepsilon(\chi)} be as above, and set

\displaystyle \xi(s,\chi) := (q/\pi)^{(s+\kappa)/2} \Gamma((s+\kappa)/2) L(s,\chi),

then {\xi(\cdot,\chi)} is entire with {\xi(1-s,\chi) = \varepsilon(\chi) \xi(s,\chi)}.

For further detail on the functional equation and its implications, I recommend the classic text of Titchmarsh or the text of Davenport.

— 1. The Gamma function —

There are many ways to define the Gamma function, but we will use the following classical definition:

Definition 5 (Gamma function) For any complex number {s} with {\hbox{Re} s > 0}, the Gamma function {\Gamma(s)} is defined as

\displaystyle \Gamma(s) = \int_0^\infty t^{s-1} e^{-t}\ dt = \int_0^\infty t^s e^{-t} \frac{dt}{t}. \ \ \ \ \ (12)

 

It is easy to see that the integrals here are absolutely convergent. One can view {\Gamma} as the inner product between the multiplicative character {t \mapsto t^s} and the additive character {t \mapsto e^{-t}} with respect to multiplicative Haar measure {\frac{dt}{t}}. As such, the Gamma function often appears as a normalisation factor in integrals that involve both additive and multiplicative characters. For instance, by a simple change of variables we see that

\displaystyle \int_0^\infty t^{s-1} e^{-at}\ dt = \frac{\Gamma(s)}{a^s} \ \ \ \ \ (13)

 

whenever {\hbox{Re}(s)>0} and {a>0}; indeed, from a contour shift we see that the above identity also holds for complex {a} with {\hbox{Re}(a) > 0, \hbox{Re}(s)>0}, if we use the standard interpretation {t^{s-1} = \exp((s-1) \log t)} of the complex exponential with positive real base. Making the further substitution {t = x^2} and performing some additional manipulations, we see that the Gamma function is also related to integrals involving Gaussian functions, in that

\displaystyle \int_{\bf R} |t|^{s-1} e^{-a^2 t^2}\ dt = \frac{\Gamma(s/2)}{a^s} \ \ \ \ \ (14)

 

for {\hbox{Re}(a), \hbox{Re}(s) > 0}. Later on we will also need the variant

\displaystyle \int_{\bf R} a |t|^{s} e^{-a^2 t^2}\ dt = \frac{\Gamma((s+1)/2)}{a^s}, \ \ \ \ \ (15)

 

which follows from (14) by replacing {s} with {s+1} and multiplying both sides by {a}.

From Cauchy’s theorem and Fubini’s theorem one easily verifies that {\Gamma} has vanishing contour integral on any closed contour in the half-space {\{ s: \hbox{Re}(s) > 0 \}}, and thus by Morera’s theorem is holomorphic on this half-space.

From (12) and an integration by parts we see that

\displaystyle s \Gamma(s) = \Gamma(s+1) \ \ \ \ \ (16)

 

for any {s} with {\hbox{Re}(s) > 0}. Among other things, this allows us to extend {\Gamma} meromorphically to the entire complex plane, by repeatedly using the form

\displaystyle \Gamma(s) = \frac{1}{s} \Gamma(s+1)

of (16) as a definition to meromorphically extend the domain of definition of {\Gamma} leftwards by one unit.

Exercise 6 Show that {\Gamma(n) = (n-1)!} for any natural number {n} (thus {\Gamma(1)=\Gamma(2) = 1}, {\Gamma(3) = 2}, etc.), and that {\Gamma} has simple poles at {s = 0, -1, -2, \dots} and no further singularities. Thus one can view the Gamma function as a (shifted) generalisation of the factorial function.
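The extension procedure just described is easy to simulate numerically: evaluate the integral (12) by quadrature when the integrand is tame, then divide by {s(s+1)\cdots} to move leftwards via (16). The following standard-library Python sketch is illustrative only (the truncation point, step count, and shift threshold are ad hoc choices):

```python
import math

def gamma_integral(s: float, T: float = 60.0, n: int = 200_000) -> float:
    # Gamma(s) = int_0^infty t^(s-1) e^(-t) dt, truncated to [0, T] and
    # evaluated with the composite Simpson rule; we only call this with
    # s >= 1.5, so the integrand vanishes at t = 0.
    h = T / n
    def f(t):
        return t**(s - 1) * math.exp(-t) if t > 0 else 0.0
    total = f(0.0) + f(T)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3

def gamma_ext(s: float) -> float:
    # Extend Gamma leftwards using Gamma(s) = Gamma(s+1)/s, as in (16);
    # s must avoid the poles at 0, -1, -2, ...
    shift = 0
    while s + shift < 1.5:
        shift += 1
    val = gamma_integral(s + shift)
    for k in range(shift):
        val /= (s + k)
    return val

print(gamma_ext(5.0))    # Gamma(5) = 4! = 24
print(gamma_ext(-0.5))   # Gamma(-1/2) = -2 sqrt(pi), via two divisions
```

The second value illustrates the meromorphic continuation: {\Gamma(-\frac{1}{2}) = \Gamma(\frac{3}{2})/((-\frac{1}{2})(\frac{1}{2})) = -2\sqrt{\pi}}.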

By repeating the proof of (1), we obtain the conjugation symmetry

\displaystyle \Gamma(\overline{s}) = \overline{\Gamma(s)} \ \ \ \ \ (17)

 

for all {s} outside of the poles of {\Gamma}. Translating this to the {\alpha}-function (5), we see that {\alpha} is meromorphic, with poles at {s=1,3,5,\dots}, and that

\displaystyle \alpha(\overline{s}) = \overline{\alpha(s)} \ \ \ \ \ (18)

for all {s} outside of these poles.

The Gamma function is also closely connected to the beta function:

Lemma 7 (Beta function identity) One has

\displaystyle \Gamma(s_1+s_2) \int_0^1 t^{s_1-1} (1-t)^{s_2-1}\ dt = \Gamma(s_1) \Gamma(s_2)

whenever {\hbox{Re}(s_1), \hbox{Re}(s_2) > 0}. (Note that this hypothesis makes the integral on the left-hand side absolutely convergent.)

Proof: From (12) and Fubini’s theorem one has

\displaystyle \Gamma(s_1) \Gamma(s_2) = \int_0^\infty \int_0^\infty t_1^{s_1-1} t_2^{s_2-1} e^{-t_1-t_2}\ dt_1 dt_2.

Making the change of variables {(t_1,t_2) = (ut, u(1-t))} for {u \in [0,\infty)} and {t \in [0,1]} (and using absolute integrability to justify this change of variables), the right-hand side becomes

\displaystyle \int_0^\infty \int_0^1 u^{s_1+s_2-1} e^{-u} t^{s_1-1} (1-t)^{s_2-1}\ dt du,

and the claim follows from another appeal to (12) and Fubini’s theorem. \Box

This gives an important reflection formula:

Lemma 8 (Euler reflection formula) One has

\displaystyle \Gamma(s) \Gamma(1-s) = \frac{\pi}{\sin(\pi s)}

as meromorphic functions (that is to say, the identity holds outside of the poles of the left or right-hand sides, which occur at the integers). In particular, {\Gamma} has no zeroes in the complex plane.

Note that the reflection formula, when written in terms of the {\alpha}-function (5), is simply

\displaystyle \alpha(s) \alpha(1-s) = 1 \ \ \ \ \ (19)

 

after removing any singularities from the left-hand side. In particular, {\alpha} has zeroes at {s=0,-2,-4,\dots} and poles at {1,3,5,\dots}, with no further poles and zeroes. Note that (19) is consistent with the functional equations (4), (3).

Proof: By unique continuation of meromorphic functions, it suffices to verify this identity in the critical strip {\{ s: 0 < \hbox{Re}(s) < 1 \}}. By the beta function identity (and the value {\Gamma(1)=1}), it thus suffices to show that

\displaystyle \int_0^1 t^{s-1} (1-t)^{-s}\ dt = \frac{\pi}{\sin(\pi s)}

for {s} in the critical strip.

If we make the substitution {z := \frac{t}{1-t}}, so that {t = \frac{z}{z+1}}, we have

\displaystyle \int_0^1 t^{s-1} (1-t)^{-s}\ dt = \int_0^\infty \frac{z^{s-1}}{1+z}\ dz.

We extend the function {z \mapsto z^{s-1}} to the complex plane (excluding the origin) by the formula {z^{s-1} := \exp((s-1) \log_{[0,2\pi)} z)}, where {\log_{[0,2\pi)}z} is the branch of the complex logarithm whose imaginary part lies in the half-open interval {[0,2\pi)}. This agrees with the usual power function {z \mapsto z^{s-1}} at (or infinitesimally above) the positive real axis, but instead converges to {z \mapsto e^{2\pi i (s-1)} z^{s-1}} infinitesimally below this axis. Thus, if one lets {C} be a contour that loops clockwise around the positive real axis, and stays sufficiently close to this axis, we see (using Cauchy’s theorem to justify the passage from infinitesimal neighbourhoods of the real axis to non-infinitesimal ones, and using the hypothesis {0<\hbox{Re}(s) < 1} to handle the contributions near the origin and infinity) that

\displaystyle \int_C \frac{z^{s-1}}{1+z}\ dz = (1 - e^{2\pi i(s-1)}) \int_0^\infty \frac{z^{s-1}}{1+z}\ dz.

On the other hand, outside of the non-negative real axis, {\frac{z^{s-1}}{1+z}} is meromorphic, with a simple pole at {z=-1} of residue {e^{\pi i(s-1)}}, and decays faster than {1/|z|} at infinity. From the residue theorem we then have

\displaystyle \int_C \frac{z^{s-1}}{1+z}\ dz = 2\pi i e^{\pi i(s-1)}

and the claim then follows by putting the above identities together. \Box
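Both the reflection formula and its restatement (19) in terms of {\alpha} are easy to spot-check numerically at real points; the following standard-library Python sketch (illustrative only) does so:

```python
import math

def alpha(s: float) -> float:
    # alpha(s) = 2^s pi^(s-1) sin(pi s/2) Gamma(1-s), as in (5);
    # real s avoiding the poles at 1, 3, 5, ...
    return 2.0**s * math.pi**(s - 1) * math.sin(math.pi * s / 2) * math.gamma(1 - s)

# Euler reflection formula: Gamma(s) Gamma(1-s) = pi / sin(pi s)
# at a few non-integer real points:
for s in (0.25, 0.5, 0.8, -0.3):
    print(s, math.gamma(s) * math.gamma(1 - s), math.pi / math.sin(math.pi * s))

# The equivalent statement (19): alpha(s) alpha(1-s) = 1.
for s in (0.25, 0.5, -0.3, -1.7):
    print(s, alpha(s) * alpha(1 - s))
```

The second loop prints values indistinguishable from {1}, in agreement with (19); note also the case {s = 1/2}, which forces {\alpha(\frac{1}{2}) = \pm 1}.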

As a quick application of Lemma 8, if we set {s=1/2} and observe that {\Gamma(1/2)} is clearly positive, we have

\displaystyle \Gamma(1/2) = \sqrt{\pi}

and thus (by (14)) we recover the classical Gaussian identity

\displaystyle \int_{\bf R} e^{-a^2 t^2}\ dt = \frac{\sqrt{\pi}}{a} \ \ \ \ \ (20)

 

whenever {\hbox{Re} a > 0}.

Next, we give an alternate definition of the Gamma function:

Lemma 9 (Euler form of Gamma function) If {s} is not a pole of {\Gamma} (i.e., {s \neq 0, -1, -2, \dots}), then

\displaystyle \Gamma(s) = \lim_{n \rightarrow \infty} \frac{n! n^s}{s(s+1) \dots (s+n)} = \frac{1}{s} \prod_n \frac{(1+1/n)^s}{1+s/n}.

Proof: It is easy to verify the second identity, and that the product and limit are convergent. One also easily verifies that the expression {s \mapsto \lim_{n \rightarrow \infty} \frac{n! n^s}{s(s+1) \dots (s+n)}} obeys (16), so it will suffice to establish the claim when {\hbox{Re}(s) > 0}.

We use a trick previously employed to prove Lemma 40 of Notes 1. By (12) and the dominated convergence theorem, we have

\displaystyle \Gamma(s) = \lim_{n \rightarrow \infty} \int_0^n t^{s-1} (1-\frac{t}{n})^n\ dt.

But by Lemma 7 and a change of variables we have

\displaystyle \int_0^n t^{s-1} (1-\frac{t}{n})^n\ dt = n^s \frac{\Gamma(s) \Gamma(n+1)}{\Gamma(s+n+1)}.

From (16) one has {\Gamma(n+1)=n!} and {\Gamma(s+n+1) = \Gamma(s) s(s+1) \dots (s+n)}, and the claim follows (recall that {\Gamma(s)} is never zero). \Box
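The Euler limit converges only at rate {O(1/n)}, but it is easy to evaluate in log-space to avoid overflow. A standard-library Python sketch (the choice of {n} is ad hoc):

```python
import math

def gamma_euler(s: float, n: int = 500_000) -> float:
    # Euler form: Gamma(s) = lim_{n -> infty} n! n^s / (s (s+1) ... (s+n)),
    # computed as exp(log-numerator - log-denominator); real s > 0.
    # lgamma(n+1) = log(n!), and fsum keeps the long sum accurate.
    log_num = math.lgamma(n + 1) + s * math.log(n)
    log_den = math.fsum(math.log(s + k) for k in range(n + 1))
    return math.exp(log_num - log_den)

print(gamma_euler(0.5), math.sqrt(math.pi))  # both approx 1.77245
print(gamma_euler(3.0))                      # approx Gamma(3) = 2
```

With {n = 5 \times 10^5} the relative error is of size roughly {s(s+1)/2n}, consistent with the {O(1/n)} convergence of the limit.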

Exercise 10 (Weierstrass form of Gamma function) If {s} is not a pole of {\Gamma}, show that

\displaystyle \Gamma(s) = \frac{e^{-\gamma s}}{s} \prod_n \frac{e^{s/n}}{1+s/n}

where {\gamma} is the Euler constant, with the product being absolutely convergent. (Hint: you may need Lemma 40 from Notes 1.)

Exercise 11 (Digamma function) Define the digamma function to be the logarithmic derivative {\frac{\Gamma'}{\Gamma}} of the Gamma function. Show that the digamma function is a meromorphic function, with simple poles of residue {-1} at the non-positive integers {0, -1, -2, \dots} and no other poles, and that

\displaystyle \frac{\Gamma'}{\Gamma}(s) = \lim_{N \rightarrow \infty} \log N - \sum_{n=0}^N \frac{1}{s+n}

\displaystyle = -\gamma + \sum_{\rho = 0, -1, -2, \dots} (\frac{1}{1-\rho} - \frac{1}{s-\rho})

for {s} outside of the poles of {\frac{\Gamma'}{\Gamma}}, with the sum being absolutely convergent. Establish the reflection formula

\displaystyle \frac{\Gamma'}{\Gamma}(1-s) - \frac{\Gamma'}{\Gamma}(s) = \pi \cot(\pi s) \ \ \ \ \ (21)

 

or equivalently

\displaystyle \pi \cot(\pi s) = \frac{1}{s} + \sum_{\rho \in {\bf Z} \backslash \{0\}} (\frac{1}{s-\rho} + \frac{1}{\rho})

for non-integer {s}.

Exercise 12 Show that {\Gamma'(1) = -\gamma}.

Exercise 13 (Legendre duplication formula) Show that

\displaystyle \Gamma(\frac{s}{2}) \Gamma(\frac{s+1}{2}) = \sqrt{\pi} 2^{1-s} \Gamma(s)

whenever {s} is not a pole of {\Gamma}. (Hint: using the digamma function, show that the logarithmic derivatives of both sides differ by a constant. Then test the formula at two values of {s} to verify that the normalising factor of {\sqrt{\pi} 2^{1-s}} is correct.)
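The duplication formula can be spot-checked numerically at real points (this is of course not a substitute for the proof sketched in the hint); a standard-library Python illustration:

```python
import math

# Legendre duplication formula:
#   Gamma(s/2) Gamma((s+1)/2) = sqrt(pi) * 2^(1-s) * Gamma(s)
# checked at a few real points away from the poles.
for s in (0.3, 1.0, 2.5, 4.0, 7.7):
    lhs = math.gamma(s / 2) * math.gamma((s + 1) / 2)
    rhs = math.sqrt(math.pi) * 2**(1 - s) * math.gamma(s)
    print(s, lhs, rhs)
```

Each pair of printed values agrees to machine precision, which in particular confirms the normalising factor {\sqrt{\pi} 2^{1-s}}.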

Exercise 14 (Gauss multiplication theorem) For any natural number {k}, establish the multiplication theorem

\displaystyle \Gamma(\frac{s}{k}) \Gamma(\frac{s+1}{k}) \dots \Gamma(\frac{s+k-1}{k}) = (2\pi)^{(k-1)/2} k^{1/2 - s} \Gamma(s)

whenever {s} is not a pole of {\Gamma}.

Exercise 15 (Bohr-Mollerup theorem) Establish the Bohr-Mollerup theorem: the function {\Gamma: (0,+\infty) \rightarrow (0,+\infty)}, which is the Gamma function restricted to the positive reals, is the unique log-convex function {f :(0,+\infty) \rightarrow (0,+\infty)} on the positive reals with {f(1)=1} and {sf(s) = f(s+1)} for all {s>0}.

Now we turn to the question of asymptotics for {\Gamma}. We begin with the corresponding asymptotics for the digamma function {\frac{\Gamma'}{\Gamma}}. Recall (see Exercise 11 from Notes 1) that one has

\displaystyle \sum_{y \leq n \leq x} f(n) = \int_y^x f(t)\ dt + O( \int_y^x |f'(t)|\ dt + |f(y)| )

for any real {y \leq x} and any continuously differentiable functions {f: [y,x] \rightarrow {\bf C}}. This gives

\displaystyle \sum_{n=0}^N \frac{1}{s+n} = \hbox{Log}(N+s) - \hbox{Log}(s) + O( \frac{1}{|s|} )

for {s} in a sector of the form {|\hbox{Arg}(s)| < \pi - \varepsilon} for some fixed {\varepsilon>0} (that is, {s} makes at least a fixed angle with the negative real axis), where {\hbox{Arg}} and {\hbox{Log}} are the standard branches of the argument and logarithm respectively (with branch cut on the negative real axis). From Exercise 11, we obtain the asymptotic

\displaystyle \frac{\Gamma'}{\Gamma}(s) = \hbox{Log}(s) + O( \frac{1}{|s|} )

in this regime. (For the other values of {s}, one can use the reflection formula (21) to obtain an analogous asymptotic.) Actually, it will be convenient to sharpen this approximation a bit, using the following version of the trapezoid rule:

Exercise 16 (Trapezoid rule) Let {y < x} be distinct integers, and let {f: [y,x] \rightarrow {\bf C}} be a continuously twice differentiable function. Show that

\displaystyle \sum_{y \leq n \leq x} f(n) = \int_y^x f(t)\ dt + \frac{1}{2} f(x) + \frac{1}{2} f(y) + O( \int_y^x |f''(t)|\ dt ).

(Hint: first establish the case when {x=y+1}.)

From this exercise, we obtain a sharper estimate

\displaystyle \sum_{n=0}^N \frac{1}{s+n} = \hbox{Log}(N+s) - \hbox{Log}(s) + \frac{1}{2s} + \frac{1}{2(s+N)}  +O( \frac{1}{|s|^2} ),

and hence

\displaystyle \frac{\Gamma'}{\Gamma}(s) = \hbox{Log}(s) - \frac{1}{2s} + O( \frac{1}{|s|^2} ), \ \ \ \ \ (22)

 

in the region where {|\hbox{Arg}(s)| < \pi - \varepsilon}. Integrating this, we obtain a branch {\log \Gamma} of the logarithm of {\Gamma} with

\displaystyle \log \Gamma(s) = (s -\frac{1}{2}) \hbox{Log}(s) - s + C + O( \frac{1}{|s|} )

for some absolute constant {C}. To find this constant {C}, we apply the reflection formula (Lemma 8) with {s=\frac{1}{2}+it} and conclude that

\displaystyle \log \frac{\pi}{\sin(\pi (\frac{1}{2}+it))} = it ( \hbox{Log}(\frac{1}{2}+it) - \hbox{Log}(\frac{1}{2}-it) ) - 1 + 2C + O( \frac{1}{t} )

for {t > 0}. Since (up to multiples of {2\pi i})

\displaystyle \log \frac{\pi}{\sin(\pi (\frac{1}{2}+it))} = -\pi t + \log 2\pi + O( \frac{1}{t} )

and

\displaystyle \hbox{Log}(\frac{1}{2}+it) - \hbox{Log}(\frac{1}{2}-it) = \pi i - \frac{i}{t} + O( \frac{1}{t^3} ),

we conclude that {C} is equal to {\frac{1}{2} \log 2\pi} up to multiples of {\pi i}; but as {\Gamma} is positive on the positive reals, we can normalise {\log \Gamma} so that {C = \frac{1}{2} \log 2\pi}, thus we obtain the Stirling approximation

\displaystyle \log \Gamma(s) = (s -\frac{1}{2}) \hbox{Log}(s) - s + \frac{1}{2} \log 2\pi + O( \frac{1}{|s|} ).

In particular, we have the approximation

\displaystyle \log \Gamma(\sigma+it) = - \frac{\pi}{2} |t| + (\sigma-\frac{1}{2}) \log |t| + it (\log|t|-1)

\displaystyle + \frac{1}{2} \log 2\pi + \frac{\pi i}{2} (\sigma-\frac{1}{2}) \hbox{sgn}(t) + O( \frac{1}{|t|} )

when {|\sigma| \ll 1} and {|t| \gg 1}, which implies that

\displaystyle |\Gamma(\sigma+it)| \asymp e^{-\pi|t|/2} |t|^{\sigma-\frac{1}{2}} \ \ \ \ \ (23)

 

in this region. For sake of comparison, note that

\displaystyle |\sin( \pi(\sigma+it) )|, |\cos( \pi(\sigma+it) )| \asymp e^{\pi|t|} \ \ \ \ \ (24)

 

in this region (note this is consistent with the reflection formula, Lemma 8, as well as the duplication formula, Exercise 13).
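On the real axis, the Stirling approximation (and the decay of its {O(\frac{1}{|s|})} error, which in fact behaves like {\frac{1}{12s}} for large positive {s}) can be checked against the standard library's {\tt lgamma}; an illustrative Python sketch:

```python
import math

def stirling(s: float) -> float:
    # Main term of the Stirling approximation:
    #   log Gamma(s) ~ (s - 1/2) log s - s + (1/2) log(2 pi).
    return (s - 0.5) * math.log(s) - s + 0.5 * math.log(2 * math.pi)

# The discrepancy should be positive and decay like O(1/s):
for s in (5.0, 50.0, 500.0):
    print(s, math.lgamma(s) - stirling(s))
```

The printed errors shrink by roughly a factor of ten at each step, consistent with an error of size about {\frac{1}{12s}}.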

Exercise 17 When {s=\sigma+it} with {|\sigma| \ll 1} and {|t| \gg 1}, show that

\displaystyle \frac{\alpha'}{\alpha}(s) = - \log \frac{|t|}{2\pi} + i \frac{\sigma-1/2}{t} + O( \frac{1}{|t|^2} )

and

\displaystyle \log \alpha(s) = -(\sigma-\frac{1}{2}) \log \frac{|t|}{2\pi} + 2\pi i(\frac{t}{2\pi}\log\frac{|t|}{2\pi}-\frac{t}{2\pi})

\displaystyle + \frac{\pi i}{4} \hbox{sgn}(t) + O(\frac{1}{|t|})

for a suitable choice of branch of {\log \alpha}; equivalently, using the notation {e(x) := e^{2\pi i x}}, one has

\displaystyle \alpha(s) = (\frac{|t|}{2\pi})^{\frac{1}{2}-\sigma} e( \frac{t}{2\pi}\log\frac{|t|}{2\pi}-\frac{t}{2\pi} ) e( \frac{\hbox{sgn}(t)}{8} + O( \frac{1}{|t|} ) ). \ \ \ \ \ (25)

 

Also show that the error {O(\frac{1}{|t|})} in (25) is real-valued when {\sigma=1/2}, so that

\displaystyle |\alpha(\frac{1}{2}+it)| = 1.

Exercise 18 Assume Theorem 1, and deduce Corollary 2.

Exercise 19 Assume Theorem 3, and deduce Corollary 4.

Exercise 20 Using the trapezoid rule, show that for any {s} in the region {\{ s: \hbox{Re}(s) > -1 \}} with {s \neq 1}, there exists a unique complex number {\zeta(s)} for which one has the asymptotic

\displaystyle \sum_{n=1}^N \frac{1}{n^s} = \zeta(s) + \frac{N^{1-s}}{1-s} + \frac{1}{2} N^{-s} + O( \frac{|s| |s+1|}{\sigma+1} N^{-s-1} )

for any natural number {N}, where {s=\sigma+it}. Use this to extend the Riemann zeta function meromorphically to the region {\{ s: \hbox{Re}(s) > -1 \}}. Conclude in particular that {\zeta(0)=-\frac{1}{2}}.

Exercise 21 Obtain the refinement

\displaystyle \sum_{y \leq n \leq x} f(n) = \int_y^x f(t)\ dt + \frac{1}{2} f(x) + \frac{1}{2} f(y)

\displaystyle + \frac{1}{12} (f'(x) - f'(y)) + O( \int_y^x |f'''(t)|\ dt )

to the trapezoid rule when {y < x} are integers and {f: [y,x] \rightarrow {\bf C}} is continuously three times differentiable. Then show that for any {s} in the region {\{ s: \hbox{Re}(s) > -2 \}} with {s \neq -1}, there exists a unique complex number {\zeta(s)} for which one has the asymptotic

\displaystyle \sum_{n=1}^N \frac{1}{n^s} = \zeta(s) + \frac{N^{1-s}}{1-s} + \frac{1}{2} N^{-s} - \frac{s}{12} N^{-s-1}

\displaystyle + O( \frac{|s| |s+1| |s+2|}{\sigma+2} N^{-s-2} )

for any natural number {N}, where {s=\sigma+it}. Use this to extend the Riemann zeta function meromorphically to the region {\{ s: \hbox{Re}(s) > -2 \}}. Conclude in particular that {\zeta(-1)=-\frac{1}{12}}; this is a rigorous interpretation of the infamous formula

\displaystyle 1 + 2 + 3 + \dots = -\frac{1}{12}.
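The refined expansion can likewise be rearranged into a numerical scheme, and at {s=-1} the cancellation is again exact, recovering {\zeta(-1)=-\frac{1}{12}} on the nose. A standard-library Python sketch (illustrative only):

```python
import math

def zeta_trunc2(s: float, N: int = 10_000) -> float:
    # Truncated sum with the second Euler-Maclaurin correction term:
    #   zeta(s) ~ sum_{n <= N} n^(-s) - N^(1-s)/(1-s) - N^(-s)/2
    #             + s N^(-s-1)/12,
    # for real s > -2, s != 1.
    partial = sum(n**(-s) for n in range(1, N + 1))
    return (partial - N**(1 - s) / (1 - s) - 0.5 * N**(-s)
            + s * N**(-s - 1) / 12)

print(zeta_trunc2(-1.0))                 # approx -1/12 = -0.08333...
print(zeta_trunc2(2.0), math.pi**2 / 6)  # both approx 1.644934
```

At {s=-1} the partial sum {N(N+1)/2} is cancelled exactly by the correction terms, leaving precisely {-\frac{1}{12}}.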

Remark 22 One can continue this procedure to extend {\zeta} meromorphically to the entire complex plane by using the Euler-Maclaurin formula; see this previous blog post. However, we will not pursue this approach to the meromorphic continuation of zeta further here.

— 2. The functional equation —

We now give three different (although not wholly unrelated) proofs of the functional equation, Theorem 1.

The first proof (due to Riemann) relies on a relationship between the Dirichlet series

\displaystyle {\mathcal D} f(s) = \sum_n \frac{f(n)}{n^s}

of an arithmetic function {f: {\bf N} \rightarrow {\bf C}}, and the Taylor series

\displaystyle F(z) := \sum_n f(n) z^n. \ \ \ \ \ (26)

 

Given that both of the transforms {f \mapsto {\mathcal D} f} and {f \mapsto F} are linear and (formally, at least) injective, it is not surprising that there should be some linear relationship between the two. It turns out that we can use the Gamma function to mediate such a relationship:

Lemma 23 (Dirichlet series from power series) Let {f: {\bf N} \rightarrow {\bf C}} be an arithmetic function such that {f(n) = O(n^{o(1)})} as {n \rightarrow \infty}. Then for any complex number {s} with {\hbox{Re}(s) > 1}, we have

\displaystyle \Gamma(s) {\mathcal D} f(s) = \int_0^\infty t^{s-1} F( e^{-t} )\ dt

where {F} is the Taylor series (26), which is absolutely convergent in the unit disk {\{ z: |z| < 1 \}}. The integral on the right-hand side is absolutely integrable.

Proof: From (13) we have

\displaystyle \frac{\Gamma(s)}{n^s} = \int_0^\infty t^{s-1} e^{-nt}\ dt

for any natural number {n}. Multiplying by {f(n)}, summing, and using Fubini’s theorem, we conclude that

\displaystyle \Gamma(s) {\mathcal D} f(s) = \int_0^\infty t^{s-1} \sum_n f(n) e^{-nt}\ dt,

and the claim follows. (By restricting to the case when {s} is real and {f} is non-negative, we can see that all integrals here are absolutely integrable.) \Box
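In the special case {f \equiv 1} (which is the case used next), the lemma asserts that {\Gamma(s) \zeta(s) = \int_0^\infty \frac{t^{s-1}}{e^t-1}\ dt}, and this can be confirmed by direct quadrature; a standard-library Python sketch (truncation point and step count are ad hoc):

```python
import math

def bose_integral(s: float, T: float = 50.0, n: int = 100_000) -> float:
    # int_0^T t^(s-1)/(e^t - 1) dt via the composite Simpson rule;
    # for s >= 2 the integrand extends continuously to t = 0
    # (with value 1 at t = 0 when s = 2, and 0 when s > 2).
    h = T / n
    def f(t):
        if t == 0.0:
            return 1.0 if s == 2 else 0.0
        return t**(s - 1) / math.expm1(t)   # expm1(t) = e^t - 1, accurately
    total = f(0.0) + f(T)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3

# Compare against Gamma(s) zeta(s) at s = 2 and s = 4:
print(bose_integral(2.0), math.gamma(2.0) * math.pi**2 / 6)
print(bose_integral(4.0), math.gamma(4.0) * math.pi**4 / 90)
```

The tail beyond {T=50} is smaller than {10^{-16}}, so the truncated integral matches {\Gamma(s)\zeta(s)} to high accuracy.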

Specialising to the case {f(n)=1}, so that {F(z) = \frac{z}{1-z}}, we obtain the identity

\displaystyle \Gamma(s) \zeta(s) = \int_0^\infty \frac{t^{s-1}}{e^t - 1}\ dt \ \ \ \ \ (27)

 

for {\hbox{Re}(s) > 1}, which can be compared with (12). Now we recall the contour {C} introduced in the proof of Lemma 8, which goes around the positive real axis in the clockwise direction. As in the proof of that lemma, we see that

\displaystyle \int_C \frac{z^{s-1}}{e^z - 1}\ dz = (1 - e^{2\pi i(s-1)}) \int_0^\infty \frac{t^{s-1}}{e^t-1}\ dt

for {C} sufficiently close to the real axis (specifically, it has to not wind around any of the zeroes of {e^z-1} other than {0}), where we use the branch {z^{s-1} = \exp( (s-1) \hbox{Log}_{[0,2\pi)}(z) )} as in the proof of Lemma 8. Thus we have

\displaystyle \zeta(s) = \frac{1}{\Gamma(s) (1-e^{2\pi i(s-1)})} \int_C \frac{z^{s-1}}{e^z-1}\ dz \ \ \ \ \ (28)

 

for {\hbox{Re}(s) > 1} with {s} non-integer (to avoid the zeroes of {1-e^{2\pi i(s-1)}}).

The contour integral {\int_C \frac{z^{s-1}}{e^z-1}\ dz} is in fact absolutely convergent for any {s \in {\bf C}}, and from the usual argument involving the Cauchy, Fubini, and Morera theorems we see that this integral depends holomorphically on {s}. Thus, we can use (28) as a definition for the Riemann zeta function that extends it meromorphically to the entire complex plane with no further poles (note that {\Gamma(s) (1-e^{2\pi i s})} has no zeroes to the left of the critical strip, after removing all singularities).

Now suppose that we are in the region {\{ s: \hbox{Re}(s) < 0 \}}, with {s} not an integer. For any natural number {N}, we shift the contour {C} to the rectangular contour {C_N}, which starts at {+\infty + 2\pi i (-N-\frac{1}{2})}, goes leftwards to {-1 + 2\pi i (-N-\frac{1}{2})}, then upwards to {-1 + 2\pi i (N+\frac{1}{2})}, then rightwards to {+\infty + 2\pi i (N + \frac{1}{2} )}. As {\frac{z^{s-1}}{e^z-1}} has simple poles at {2\pi i n} for each non-zero integer {n} with residue {(2\pi i n)^{s-1}}, we see from the residue theorem (and the exponential decay of {\frac{z^{s-1}}{e^z-1}} as {z} goes to infinity to the right) that

\displaystyle \int_C \frac{z^{s-1}}{e^z-1}\ dz = 2\pi i \sum_{0 < |n| \leq N} (2\pi i n)^{s-1} + \int_{C_N} \frac{z^{s-1}}{e^z-1}\ dz.

If {\hbox{Re}(s) < 0}, then one can compute that the integral {\int_{C_N} \frac{z^{s-1}}{e^z-1}\ dz} goes to zero as {N \rightarrow \infty}, and thus

\displaystyle \int_C \frac{z^{s-1}}{e^z-1}\ dz = 2\pi i \sum_{|n| > 0} (2\pi i n)^{s-1}.

From the choice of branch for {z \mapsto z^{s-1}}, one sees that

\displaystyle \sum_{|n| > 0} (2\pi i n)^{s-1} = (2\pi)^{s-1} \zeta(1-s) ( e^{(s-1)\pi i/2} + e^{(s-1)3 \pi i/2} ).

Inserting these identities into (28), we obtain (4) after a brief calculation, at least in the region where {\hbox{Re}(s) < 0} and {s} is not an integer; the remaining cases then follow from unique continuation of meromorphic functions.

Remark 24 The Poisson summation formula was not explicitly used in the above proof of the functional equation. However, if one inspects the contour integration proof of the Poisson summation formula in Supplement 2, one sees an application of the residue theorem which is quite similar to that in the above argument, and so that formula is still present behind the scenes.

Now we give Riemann’s second proof of the functional equation. We again start in the region {\hbox{Re}(s) > 1}. If we repeat the derivation of (27), but use (14) in place of (13), we obtain the variant identity

\displaystyle \Gamma(\frac{s}{2}) \zeta(s) = \int_{\bf R} |t|^{s-1} \sum_n e^{-n^2 t^2}\ dt.

Introducing the theta function

\displaystyle \vartheta(z) := \sum_{n \in {\bf Z}} e^{\pi i n^2 z} \ \ \ \ \ (29)

 

in the half-plane {\hbox{Im}(z) > 0} and using symmetry, we thus see that

\displaystyle \Gamma(\frac{s}{2}) \zeta(s) = \int_0^\infty t^{s-1} (\vartheta( i t^2 / \pi ) - 1 )\ dt.

Making the change of variables {x = t^2/\pi}, this becomes

\displaystyle \pi^{-s/2} \Gamma(\frac{s}{2}) \zeta(s) = \frac{1}{2} \int_0^\infty x^{\frac{s}{2}} (\vartheta( i x ) - 1 )\ \frac{dx}{x}.

Recall from the Poisson summation formula that

\displaystyle \vartheta(z) = \frac{1}{\sqrt{-iz}} \vartheta(-\frac{1}{z}) \ \ \ \ \ (30)

 

for {z} in the upper half-plane, using the principal branch of the square root; see Exercise 36 of Supplement 2. In particular, {\vartheta(ix)} blows up like {x^{-1/2}} as {x \rightarrow 0^+}. We use this formula to transform the previous integral to an integral just on {[1,\infty)} rather than {[0,\infty)}. First observe that

\displaystyle \int_0^1 x^{\frac{s}{2}} (-1)\ \frac{dx}{x} = - \frac{2}{s}.

Next, from (30) (using the hypothesis {\hbox{Re}(s)>1} to ensure absolute convergence) and the change of variables {y=1/x} we have

\displaystyle \int_0^1 x^{\frac{s}{2}} \vartheta(ix)\ \frac{dx}{x} = \int_1^\infty y^{\frac{1-s}{2}} \vartheta(iy)\ \frac{dy}{y}.

Finally

\displaystyle \int_1^\infty y^{\frac{1-s}{2}} \ \frac{dy}{y} = -\frac{2}{1-s}.

Putting all this together, we see that

\displaystyle \pi^{-s/2} \Gamma(\frac{s}{2}) \zeta(s) = - \frac{1}{s} - \frac{1}{1-s} + \frac{1}{2} \int_1^\infty (x^{\frac{s}{2}} + x^{\frac{1-s}{2}}) (\vartheta(ix)-1)\ \frac{dx}{x} \ \ \ \ \ (31)

 

for {\hbox{Re}(s)>1}.
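The modular transformation (30), specialised to {z = ix}, asserts that {\vartheta(ix) = x^{-1/2} \vartheta(i/x)} for {x>0}; this is easy to test numerically, as the sketch below does at the (arbitrary) point {x = 0.3}.

```python
import math

# theta(ix) = sum_{n in Z} exp(-pi n^2 x) for x > 0; terms with |n| >= 40
# are far below double precision for the values of x used here.
def theta(x):
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * x) for n in range(1, 40))

x = 0.3
lhs = theta(x)
rhs = theta(1.0 / x) / math.sqrt(x)
print(lhs, rhs)  # both approximately 1.8258
```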

Note from the definition of the theta function that {\vartheta(ix)-1} decays exponentially fast as {x \rightarrow +\infty}. As such, the integral on the right-hand side is absolutely convergent for any {s \in {\bf C}}, and by the usual Morera theorem argument is in fact holomorphic in {s}. Thus (31) may be used to give a meromorphic extension of {\zeta} to the entire complex plane. The right-hand side of (31) is also manifestly symmetric with respect to the reflection {s \mapsto 1-s}, giving the functional equation in the form (7).
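Since the integral in (31) converges rapidly, the identity also lends itself to a direct numerical check; the sketch below evaluates both sides at {s=2}, where the left-hand side is {\pi^{-1} \Gamma(1) \zeta(2) = \pi/6}.

```python
import math

def theta_minus_1(x):
    # theta(ix) - 1 = 2 sum_{n >= 1} exp(-pi n^2 x), decaying like exp(-pi x)
    return 2.0 * sum(math.exp(-math.pi * n * n * x) for n in range(1, 20))

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

s = 2.0
f = lambda x: (x ** (s / 2) + x ** ((1 - s) / 2)) * theta_minus_1(x) / x
rhs = -1 / s - 1 / (1 - s) + 0.5 * simpson(f, 1.0, 30.0, 20000)
lhs = math.pi ** (-s / 2) * math.gamma(s / 2) * (math.pi ** 2 / 6)
print(lhs, rhs)  # both approximately 0.52360
```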

Next, we give a short heuristic proof of the functional equation arising from formally applying the Poisson summation formula (8) to the function

\displaystyle f(x) := 1_{x>0} \frac{1}{x^s}

ignoring all infinite divergences. Formally, the Fourier transform {\hat f(y) = \int_{\bf R} f(x) e^{ixy}\ dx} is then given by

\displaystyle \hat f(y) = \Gamma(1-s) \frac{1}{(-iy)^{1-s}}

thanks to (13). The Poisson summation formula (8) then formally yields

\displaystyle \sum_n \frac{1}{n^s} = \Gamma(1-s) \sum_{m \in {\bf Z}} \frac{1}{(-2\pi i m)^{1-s}}

and the functional equation (3) formally follows after some routine calculation if we discard the divergent {m=0} term on the right-hand side, convert the sum over negative {m} to a sum over positive {m} by the change of variables {m \mapsto -m}, and formally identify {\sum_n \frac{1}{n^s}} and {\sum_m \frac{1}{m^{1-s}}} with {\zeta(s), \zeta(1-s)} respectively.

The above heuristic argument may be made rigorous by using suitable regularisations. This is the purpose of the exercise below.

Exercise 25 (Rigorous justification of functional equation) Let {s} be an element of the critical strip {\{s: 0 < \hbox{Re}(s) < 1 \}}.

  • (i) For any {\alpha, \beta > 0}, show that the function {f_{\alpha,\beta}: {\bf R} \rightarrow {\bf C}} defined by {f_{\alpha,\beta}(x) := 1_{x>0} \frac{1}{x^s} (e^{-\alpha x} - e^{-\beta x})} is continuous, absolutely integrable, and has Fourier transform

    \displaystyle \hat f_{\alpha,\beta}(y) = \Gamma(1-s) ( \frac{1}{(-iy+\alpha)^{1-s}} - \frac{1}{(-iy+\beta)^{1-s}} )

    for {y \in {\bf R}}, using the standard branch of the logarithm to define {z \mapsto z^{1-s}}.

  • (ii) Rigorously justify the Poisson summation formula

    \displaystyle \sum_{n \in {\bf Z}} f_{\alpha,\beta}(n) = \sum_{m \in {\bf Z}} \hat f_{\alpha,\beta}(2\pi m)

    for any {\alpha, \beta > 0}. (In Supplement 2, the Poisson summation formula was only established for continuously twice differentiable, compactly supported functions; {f_{\alpha,\beta}} is neither of these, but one can still recover the formula in this instance by an approximation argument.)

  • (iii) Show that {\sum_{n \in {\bf Z}} f_{\varepsilon,1/\varepsilon}(n) = \zeta(s) + \varepsilon^{s-1} \Gamma(1-s) + o(1)} as {\varepsilon \rightarrow 0^+}.
  • (iv) Show that

    \displaystyle \sum_{m \in {\bf N}} \frac{1}{(\pm 2\pi i m+\varepsilon)^{1-s}} - \frac{1}{(\pm 2\pi i m)^{1-s}} = o(1)

    and

    \displaystyle \sum_{m \in {\bf N}} \frac{1}{(\pm 2\pi i m)^{1-s}} - \frac{1}{(\pm 2\pi i m+1/\varepsilon)^{1-s}}

    \displaystyle = \frac{1}{(\pm 2\pi i)^{1-s}} \zeta(1-s) \mp \frac{\varepsilon^{-s}}{2\pi i s} + o(1)

    as {\varepsilon \rightarrow 0^+}, for either choice of sign {\pm}.

  • (v) Prove (3) for {s} in the critical strip, and then prove the rest of Theorem 1.
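The regularised Poisson summation identity in part (ii) can be sanity-checked numerically. The sketch below takes {s = \frac{1}{2}} (so that {\Gamma(1-s) = \sqrt{\pi}} is real and available in the standard library) and the arbitrary choices {\alpha = 1}, {\beta = 2}; the right-hand side converges only slowly, like {O(M^{-1/2})} for a truncation at {|m| \leq M}, so the truncation level below gives agreement to a few decimal places.

```python
import math

# Parameters: s = 1/2 (so Gamma(1 - s) = sqrt(pi) is real), and the
# arbitrary choices alpha = 1, beta = 2.
s, alpha, beta = 0.5, 1.0, 2.0

# Left-hand side: f_{alpha,beta} vanishes for non-positive arguments.
lhs = sum(n ** (-s) * (math.exp(-alpha * n) - math.exp(-beta * n))
          for n in range(1, 200))

# Fourier transform from part (i); Python's complex power uses the
# principal branch, matching the standard branch of the logarithm.
def hat_f(y):
    return math.gamma(1 - s) * ((alpha - 1j * y) ** (s - 1)
                                - (beta - 1j * y) ** (s - 1))

# Right-hand side, truncated at |m| <= 2 * 10^5; the tail is O(1e-3).
rhs = hat_f(0.0)
for m in range(1, 200001):
    rhs += hat_f(2 * math.pi * m) + hat_f(-2 * math.pi * m)

print(lhs, rhs.real)  # the two sides agree to about three decimal places
```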

Remark 26 There are many further proofs of the functional equation beyond the three given above; see for instance the text of Titchmarsh for several further proofs. Most of the proofs can be connected in one form or another to the Poisson summation formula. One important proof worth mentioning is Tate’s adelic proof, discussed in this previous post, which is well suited for generalising the functional equation to many other zeta functions and {L}-functions, but will not be discussed further in this post.

Exercise 27 Use the formula {\zeta(-1)=\frac{-1}{12}} from Exercise 21, together with the functional equation, to show that {\zeta(2) = \frac{\pi^2}{6}}.
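Assuming that the factor {\alpha} in (5) takes the standard form {\alpha(s) = 2^s \pi^{s-1} \sin(\frac{\pi s}{2}) \Gamma(1-s)}, the computation in Exercise 27 can be sketched as follows: applying (3) at {s=-1} gives

\displaystyle \zeta(-1) = 2^{-1} \pi^{-2} \sin(-\frac{\pi}{2}) \Gamma(2) \zeta(2) = -\frac{\zeta(2)}{2\pi^2},

so that {\zeta(2) = -2\pi^2 \zeta(-1) = -2\pi^2 \cdot (-\frac{1}{12}) = \frac{\pi^2}{6}}.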

Exercise 28 (Relation between zeta function and Bernoulli numbers) In this exercise we give the classical connection between the zeta function and Bernoulli numbers; this connection is not so relevant for analytic number theory, as it only involves values of the zeta function that are far from the critical strip, but is of interest for some other applications.

  • (i) For any complex number {z} with {\hbox{Re}(z)>0}, use the Poisson summation formula (8) to establish the identity

    \displaystyle \sum_{n \in {\bf Z}} e^{-|n|z} = \sum_{m \in {\bf Z}} \frac{2z}{z^2+(2\pi m)^2}.

  • (ii) For {z} as above and sufficiently small, show that

\displaystyle 2\sum_k (-1)^{k+1} \zeta(2k) (z/2\pi)^{2k} = \frac{z}{1-e^{-z}} - 1 - \frac{z}{2}.

    Conclude that

\displaystyle \zeta(2k) = \frac{(-1)^{k+1} (2\pi)^{2k}}{2 (2k)!} B_{2k}

    for any natural number {k}, where the Bernoulli numbers {B_2, B_4, B_6, \dots} are defined through the Taylor expansion

    \displaystyle \frac{z}{1-e^{-z}} = 1 + \frac{z}{2} + \sum_k \frac{B_{2k}}{(2k)!} z^{2k}.

    Thus for instance {B_2 = 1/6}, {B_4 = -1/30}, and so forth.

  • (iii) Show that

    \displaystyle \zeta(-n) = -\frac{B_{n+1}}{n+1} \ \ \ \ \ (32)

    for any odd natural number {n}. (This identity can also be deduced from the Euler-Maclaurin formula, which generalises the approach in Exercise 21; see this previous post.)

  • (iv) Use (28) and the residue theorem (now working inside the contour {C}, rather than outside) to give an alternate proof of (32).
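The identities in Exercise 28 are easy to sanity-check numerically. The sketch below computes the Bernoulli numbers from the standard recurrence {\sum_{j=0}^{m} \binom{m+1}{j} B_j = 0} (which is equivalent to the generating function definition above), then tests the identity {\zeta(2k) = \frac{(-1)^{k+1} (2\pi)^{2k}}{2 (2k)!} B_{2k}} and the case {n=1} of (32).

```python
import math
from fractions import Fraction

# B_0, ..., B_n from the recurrence sum_{j=0}^m C(m+1, j) B_j = 0 for m >= 1;
# the even-indexed values match the generating function convention above.
def bernoulli(n):
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(math.comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    return B

B = bernoulli(8)
print(B[2], B[4])  # 1/6 and -1/30, as stated above

# zeta(2k) = (-1)^{k+1} (2 pi)^{2k} B_{2k} / (2 (2k)!)
def zeta_even(k):
    return ((-1) ** (k + 1) * (2 * math.pi) ** (2 * k) * float(B[2 * k])
            / (2 * math.factorial(2 * k)))

print(zeta_even(1), math.pi ** 2 / 6)   # zeta(2) = pi^2/6
print(zeta_even(2), math.pi ** 4 / 90)  # zeta(4) = pi^4/90
print(-B[2] / 2)                        # (32) gives zeta(-1) = -B_2/2 = -1/12
```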

Exercise 29 Show that {\zeta'(0) = -\frac{1}{2} \log 2\pi}.

Remark 30 The functional equation is almost certainly not sufficient, by itself, to establish the Riemann hypothesis. For instance, there is a classical example of Davenport and Heilbronn of a finite linear combination of Dirichlet L-functions which obeys a functional equation very similar (though not quite identical) to (4), but which possesses zeroes off of the critical line; see e.g. this article for a recent analysis of the counterexample. Eisenstein series can also be used to construct a “natural” variant of a zeta function that has a Dirichlet series and a functional equation, but has zeroes off the critical line. For a “cheaper” counterexample, take two nearby non-trivial zeroes {\rho_1, \rho_2} of {\zeta} on the critical line, and “replace” them with two other nearby complex numbers {\rho_3, \rho_4} symmetric around the critical line, but not on the line, by introducing the modified zeta function

\displaystyle \tilde \zeta(s) := \frac{(s-\rho_3)(s-\rho_4)}{(s-\rho_1)(s-\rho_2)} \zeta(s).

This function also obeys the functional equation, and behaves very similarly (though not identically) to the Riemann zeta function in all the regions in which we have a good understanding of this function (in particular, it has similar behaviour to {\zeta} around {s=1} or around the edges of the critical strip), but clearly has zeroes off of the critical line. Such constructions would be particularly hard to exclude by analytic methods if there happened to be a repeated zero {\rho_1=\rho_2} of {\zeta} on the critical line, as one could then make {\rho_3,\rho_4} extremely close to both of {\rho_1,\rho_2}; it is conjectured that such a repeated zero does not occur, but we cannot exclude this possibility with current technology, which creates a family of “infinitesimal counterexamples” to the Riemann hypothesis which rules out a large number of potential approaches to this hypothesis.

On the other hand, unlike the Davenport-Heilbronn counterexample, {\tilde \zeta} does not arise from a Dirichlet series, and certainly does not have an Euler product. One can show (see Section 2.13 of Titchmarsh) that if one insists on the functional equation (4) on the nose (as opposed to, say, the modified functional equation that the Davenport-Heilbronn example obeys, or the functional equation obeyed by a Dirichlet {L}-function) as well as a Dirichlet series representation, then the only possible functions available are scalar multiples of the Riemann zeta function. It could well be that the analogue of the Riemann hypothesis is in fact obeyed by any function which obeys a suitable functional equation, together with a Dirichlet series representation (with appropriate size bounds on the coefficients) and an Euler product factorisation; a precise form of this statement is the Riemann hypothesis for the Selberg class. But one would somehow need to make essential use of all three of the above axioms to try to prove the Riemann hypothesis, as we have numerous counterexamples that show that zeroes can be produced off the critical line if one drops one or more of these axioms.

— 3. Approximate and localised forms of the functional equation —

In our construction of the Riemann zeta function in Notes 2, we had the asymptotic

\displaystyle \zeta(s) = \sum_{n \leq x} \frac{1}{n^s} - \frac{x^{1-s}}{1-s} + O( \frac{|s|}{\sigma} x^{-\sigma} )

for {x > 0} and {s} in the region {\{s: \hbox{Re}(s) > 0; s \neq 1\}}. Thus, {\zeta(s)} is the limit of the functions {\sum_{n \leq x} \frac{1}{n^s} - \frac{x^{1-s}}{1-s}} as {x \rightarrow \infty}, locally uniformly for {s} in this region. We have an analogous limit for smoothed sums:

Exercise 31 (Smoothed sums) Let {\eta: {\bf R} \rightarrow {\bf C}} be a smooth function such that {\eta(x)} vanishes for {x>C} and equals {1} for {x<-C} for some constant {C>0}. Show that the functions

\displaystyle s \mapsto \sum_n \frac{1}{n^s} \eta( \log n - \log x) + x^{1-s} \int_{\bf R} \frac{e^{(1-s) u}}{1-s} \eta'(u)\ du

converge locally uniformly to {\zeta} on the region {\{s: \hbox{Re}(s) > 0; s \neq 1\}}. In the critical strip {\{ s: 0 < \hbox{Re}(s) < 1 \}}, show that the second term may be replaced by {- x^{1-s} \int_{\bf R} e^{(1-s) u} \eta(u)\ du}.
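As an illustration (with the hypothetical choices {s = \frac{1}{2}}, {x = 10^4}, and a concrete bump-function cutoff {\eta} with {C=1}), the sketch below compares the smoothed approximation from Exercise 31 (in its critical-strip form) and the unsmoothed approximation against the known value {\zeta(\frac{1}{2}) \approx -1.46035}.

```python
import math

# A concrete smooth cutoff: eta(u) = 1 for u <= -1 and 0 for u >= 1,
# glued smoothly in between by the standard exp(-1/t) bump (so C = 1).
def eta(u):
    h = lambda t: math.exp(-1.0 / t) if t > 0 else 0.0
    return h(1.0 - u) / (h(1.0 - u) + h(1.0 + u))

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

s, x = 0.5, 1e4

# Smoothed sum; eta vanishes once log n - log x >= 1, i.e. n >= e * x.
smooth_sum = sum(n ** (-s) * eta(math.log(n / x))
                 for n in range(1, int(math.e * x) + 2))

# Critical-strip form of the second term; the integrand is negligible below u = -60.
integral = simpson(lambda u: math.exp((1 - s) * u) * eta(u), -60.0, 1.0, 200000)
smoothed = smooth_sum - x ** (1 - s) * integral

# Unsmoothed approximation from the asymptotic preceding Exercise 31.
plain = sum(n ** (-s) for n in range(1, int(x) + 1)) - x ** (1 - s) / (1 - s)

print(smoothed, plain)  # -1.46035... and about -1.4554 (error ~ x^{-1/2}/2)
```

Note how the smoothed sum converges much faster than the sharply truncated one, consistent with the superpolynomial error bounds of Lemma 32 below.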

It is of interest to understand the rate of convergence of these approximations to the zeta function. We restrict attention to {s} in the critical strip {\{ s: 0 < \hbox{Re}(s) < 1 \}}. The first observation is that the smoothed sums have negligible contribution once {n} is much larger than {|t|}:

Lemma 32 Let {\psi: {\bf R} \rightarrow {\bf C}} be a smooth, compactly supported function. Let {s = \sigma+it} lie in the critical strip. If {x \geq C (1+|t|)} for some sufficiently large {C} (depending on the support of {\psi}), then we have

\displaystyle \sum_n \frac{1}{n^s} \psi( \log n - \log x ) - x^{1-s} \int_{\bf R} e^{(1-s) u} \psi(u)\ du = O_{\psi,A}( x^{-A} )

for any {A > 0}.

Proof: By the Poisson summation formula (8), we have

\displaystyle \sum_n \frac{1}{n^s} \psi( \log n - \log x ) = \sum_{m \in {\bf Z}} \Psi(m)

where

\displaystyle \Psi(m) := \int_0^\infty \frac{1}{y^s} \psi(\log y - \log x) e^{2\pi i m y}\ dy.

If we write {s = \sigma+it} and {u = \log y - \log x}, then after a change of variables we have

\displaystyle \Psi(m) = x^{1-s} \int_{\bf R} \tilde \psi(u) e^{2 \pi i m x \phi_m(u)}\ du

where {\tilde \psi(u) := e^{(1-\sigma)u} \psi(u)} and {\phi_m(u) := e^u - \frac{t}{2\pi m x} u}. In particular we have

\displaystyle \Psi(0) = x^{1-s} \int_{\bf R} e^{(1-s) u} \psi(u)\ du

so it suffices by the triangle inequality to show that

\displaystyle \int_{\bf R} \tilde \psi(u) e^{2\pi i m x \phi_m(u)}\ du \ll_{A,\psi} (|m| x)^{-A}

for any {A>0} and any non-zero integer {m}. But by hypothesis on {x}, we have the derivative bounds {\phi_m^{(k)}(u) = O_{\psi,k}(1)} on the support of the smooth compactly supported function {\tilde \psi}, together with the non-degeneracy bound {|\phi'_m(u)| \gg_\psi 1} there, since {|\frac{t}{2\pi m x}| \leq \frac{1}{2\pi C}} is small compared with {e^u} when {C} is large enough. If one repeatedly writes {e^{2\pi i m x \phi_m(u)} = (\frac{d}{du} e^{2\pi i m x \phi_m(u)}) \times \frac{1}{2\pi i mx \phi'_m(u)}} and integrates by parts to move the derivative off of the phase, one obtains the claim. \Box

This gives us a good approximation to {\zeta(\sigma+it)} in the critical strip, involving a smoothed sum consisting of {O(1+|t|)} terms:

Exercise 33 Let {\eta: {\bf R} \rightarrow {\bf C}} be a smooth function such that {\eta(x)} vanishes for {x>C} and equals {1} for {x<-C} for some constant {C>0}. Let {s = \sigma+it} be in the critical strip. Show that

\displaystyle \zeta(s) = \sum_n \frac{1}{n^s} \eta( \log n - \log x) - x^{1-s} \int_{\bf R} e^{(1-s) u} \eta(u)\ du \ \ \ \ \ (33)

 

\displaystyle + O_{\eta, A, \sigma}( x^{-A} )

for any {A>0}, if one has {x \geq C' (1+|t|)} for some sufficiently large {C'} depending on {C}. (Hint: use Lemma 5 of Notes 1, Exercise 31, Lemma 32, and dyadic decomposition.) Conclude in particular that

\displaystyle \zeta(s) = \sum_n \frac{1}{n^s} \eta( \log n - \log x) + O_{\eta,\sigma,A}( (1+|t|)^{-A} ) \ \ \ \ \ (34)

 

for any {A>0}, if {C'(1+|t|) \leq x \ll (1+|t|)}.

We remark that the asymptotic (34) is also valid (with a somewhat worse error term) for the ordinary partial sums {\sum_{n \leq x} \frac{1}{n^s}} (a classical result of Hardy and Littlewood); see Theorem 4.11 of Titchmarsh. However, it will be slightly more convenient for us here to work exclusively with smoothed sums.

From (34) and the triangle inequality, we have the crude bound

\displaystyle \zeta(s) \ll_\sigma (1+|t|)^{1-\sigma} \ \ \ \ \ (35)

 

in the interior of the critical strip. One can do better through the functional equation. Indeed, from (4), (23), (24) we see that

\displaystyle |\zeta(1-s)| \asymp_\sigma (1+|t|)^{\sigma-\frac{1}{2}} |\zeta(s)| \ \ \ \ \ (36)

 

in this region, and thus

\displaystyle \zeta(s) \ll_\sigma (1+|t|)^{1/2}. \ \ \ \ \ (37)

 

One can then use the Hadamard three lines theorem to interpolate between (35) and (37) to obtain the convexity bound

\displaystyle \zeta(s) \ll_{\sigma,\varepsilon} (1+|t|)^{\frac{1-\sigma}{2}+\varepsilon} \ \ \ \ \ (38)

 

for any {0 < \sigma < 1} and {\varepsilon > 0}; we leave the details to the interested reader (and we will reprove the convexity bound shortly). Further improvements to (38) for the zeta function and other {L}-functions are known as subconvexity bounds and have many applications in analytic number theory, though we will only discuss the simplest subconvexity bounds in this course.

Exercise 33 describes the zeta function in terms of smoothed sums of {\frac{1}{n^s}}. In the converse direction, one can use Fourier inversion to express smoothed sums of {\frac{1}{n^s}} in terms of the zeta function:

Lemma 34 (Fourier inversion) Let {\psi: {\bf R} \rightarrow {\bf C}} be a smooth, compactly supported function, and let {s = \sigma+it} lie in the critical strip. Then for any {1 \ll x \ll 1+|t|}, we have

\displaystyle \sum_n \frac{1}{n^s} \psi( \log n - \log x ) = \frac{1}{2\pi} \int_{\bf R} \hat \psi(y) x^{iy} \zeta(s+iy)\ dy \ \ \ \ \ (39)

 

\displaystyle + O_{A,\sigma}( (1+|t|)^{-A} )

for all {A>0}, where

\displaystyle \hat \psi(y) := \int_{\bf R} \psi(u) e^{iyu}\ du

is the Fourier transform of {\psi}.

Proof: We can write the left-hand side of (39) as {\sum_n \frac{1}{n} g(\log n)}, where {g(u) := e^{(1-s)u} \psi(u-\log x)}. By Proposition 7 of Notes 2, this can be rewritten as

\displaystyle \frac{1}{2\pi} \int_{\bf R} \zeta(2+iu) \hat g( u - i )\ du.

Noting that

\displaystyle \hat g(y) = x^{1-s+iy} \hat \psi( y - i(1-s) )

we thus rewrite the left-hand side of (39) as the contour integral

\displaystyle \frac{1}{2\pi i} \int_{2-i\infty}^{2+i\infty} \zeta(z) x^{z-s} \hat \psi(-i(z-s))\ dz.

The function {z \mapsto \zeta(z) x^{z-s} \hat \psi(-i(z-s))} has a pole at {z=1} with residue {x^{1-s} \hat \psi( i(s-1) )}, which by Exercise 28 of Supplement 2 is of size {O_A( (1+|t|)^{-A})} for any {A>0}. By another appeal to that exercise, together with (35), we see that {\zeta(z) x^{z-s} \hat \psi(-i(z-s))} goes to zero as {\hbox{Im}(z) \rightarrow \pm \infty} uniformly when {\hbox{Re}(z)} is bounded. By the residue theorem, we can thus shift the contour integral to

\displaystyle \frac{1}{2\pi i} \int_{s-i\infty}^{s+i\infty} \zeta(z) x^{z-s} \hat \psi(-i(z-s))\ dz + O_A( (1+|t|)^{-A})

and the claim follows by performing the substitution {z = s + iy}. \Box

Exercise 35 Establish (39) directly from the Fourier inversion formula, without invoking contour integration methods.

Among other things, this lemma shows that growth bounds in the Riemann zeta function are equivalent to growth bounds on smooth exponential sums of {n^{-it}}:

Exercise 36 Let {0 < \sigma < 1} and {\theta \in {\bf R}}. Show that the following claims are equivalent:

  • (i) One has {\zeta(\sigma+it) \ll |t|^{\theta+o(1)}} as {|t| \rightarrow \pm \infty}.
  • (ii) One has the bound

    \displaystyle \sum_n n^{-it} \psi( \log n - \log x ) \ll_{\sigma,\varepsilon,\psi} x^\sigma |t|^{\theta+\varepsilon}

    whenever {\varepsilon>0}, {1 \ll x \ll |t|}, and {\psi: {\bf R} \rightarrow {\bf C}} is a compactly supported smooth function.

Exercise 37 For any {\sigma \in {\bf R}}, let {\mu(\sigma)} denote the least exponent for which one has the asymptotic {\zeta(\sigma+it) \ll |t|^{\mu(\sigma)+o(1)}} as {|t| \rightarrow \pm \infty}.

  • (i) Show that {\mu} is convex and obeys the functional equation {\mu(1-\sigma) = \sigma-\frac{1}{2} + \mu(\sigma)} for {\sigma \in {\bf R}}.
  • (ii) Show that {\max( 1/2-\sigma, 0 ) \leq \mu(\sigma) \leq \frac{1-\sigma}{2}} for {0 \leq \sigma \leq 1}, and that {\mu(\sigma) = \max(1/2-\sigma,0)} for {\sigma \leq 0} or {\sigma \geq 1}. (In particular, this reproves (38).)
  • (iii) Show that the Lindelöf hypothesis (Exercise 34 from Notes 2) is equivalent to the assertion that {\mu(\sigma) = \max(1/2-\sigma,0)} for all {\sigma \in {\bf R}}.

Lemma 34 and Theorem 1 suggest that there should be some approximate functional equation for the smoothed sums {\sum_n \frac{1}{n^s} \psi( \log n - \log x )}. This is indeed the case:

Theorem 38 (Approximate functional equation for smoothed sums) Let {s=\sigma+it} with {0 < \sigma < 1} and {|t| \gg 1}. Let {N, M \gg 1} be such that {NM = \frac{|t|}{2\pi}}. Let {\psi: {\bf R} \rightarrow {\bf C}} be a smooth compactly supported function. Then

\displaystyle \sum_n \frac{1}{n^s} \psi( \log n - \log N ) = \alpha(s) \sum_m \frac{1}{m^{1-s}} \psi( \log M - \log m ) \ \ \ \ \ (40)

 

\displaystyle + O_{\sigma,\varepsilon,\psi}( |t|^{-\sigma/2-1/2+\varepsilon} )

for any {\varepsilon>0}, where {\alpha} is as in (5).

This approximate functional equation can also be established directly from the Poisson summation formula using the method of stationary phase; see Chapter 4 of Titchmarsh. The error term of {O_{\sigma,\varepsilon}( |t|^{-\sigma/2-1/2+\varepsilon} )} can be improved further by using better growth bounds on {\zeta} (or by further Taylor expansion of {\alpha}), but the error term given here is adequate for applications. Note that the true functional equation (3) is formally the {\psi=1} case of (40) if one ignores the error term.

Proof: By (39), the left-hand side is

\displaystyle \frac{1}{2\pi} \int_{\bf R} \hat \psi(y) N^{iy} \zeta(s+iy)\ dy

up to negligible errors. Using the rapid decrease of {\hat \psi} (Exercise 28 of Supplement 2) and (35), we may restrict {y} to the range {|y| \leq |t|/2}, up to negligible error. Applying the functional equation (3), we rewrite this as

\displaystyle \frac{1}{2\pi} \int_{|y| \leq |t|/2} \hat \psi(y) N^{iy} \alpha(s+iy) \zeta(1-s-iy)\ dy.

For {|y| \leq |t|/2}, we see from Exercise 17 that

\displaystyle \frac{\alpha'}{\alpha}(s+iy) = -\log\frac{|t|}{2\pi} + O( \frac{1+|y|}{|t|})

and thus from the fundamental theorem of calculus we have

\displaystyle \log \alpha( s+iy ) - \log \alpha(s) = -iy \log\frac{|t|}{2\pi} + O( \frac{|y|+|y|^2}{|t|} )

or equivalently (using {NM = \frac{|t|}{2\pi}})

\displaystyle \alpha(s+iy) = \alpha(s) (NM)^{-iy} (1 + O( \frac{|y|+|y|^2}{|t|} )).

We can thus write the left-hand side of (40) up to acceptable errors as

\displaystyle \frac{\alpha(s)}{2\pi} \int_{|y| \leq |t|/2} \hat \psi(y) M^{-iy} \zeta(1-s-iy) (1 + O( \frac{|y|+|y|^2}{|t|} ))\ dy.

From Exercise 17 we have {\alpha(s) \ll |t|^{1/2-\sigma}}. From (38) and the rapid decrease of {\hat \psi}, the contribution of the error term can then be controlled by {O( |t|^{-\sigma/2-1/2+\varepsilon} )}. Thus, up to acceptable errors, (40) is equal to

\displaystyle \frac{\alpha(s)}{2\pi} \int_{|y| \leq |t|/2} \hat \psi(y) M^{-iy} \zeta(1-s-iy)\ dy.

By another appeal to (38) and the rapid decrease of {\hat \psi}, we may remove the constraint {|y| \leq |t|/2} up to acceptable errors. The claim then follows by changing {y} to {-y} and using (39) again. \Box

Let {s = \sigma+it} with {0 < \sigma < 1} and {|t| \gg 1}, and let {\eta} be as in Exercise 33. From (34) we have

\displaystyle \zeta(s) = \sum_n \frac{1}{n^s} \eta( \log n - \log x) + O_{\eta,\sigma,A}( |t|^{-A} )

for any {A}, and from the functional equation we have

\displaystyle \zeta(s) = \alpha(s) \sum_m \frac{1}{m^{1-s}} \eta( \log m - \log x) + O_{\eta,\sigma,A}( |t|^{-A} ).

Using Theorem 38, we may split the difference:

Exercise 39 (Approximate functional equation) Let {\eta: {\bf R} \rightarrow {\bf C}} be a smooth function such that {\eta(x)} vanishes for {x>C} and equals {1} for {x<-C} for some constant {C>0}, and let {\tilde \eta(u) := 1 - \eta(-u)}. Let {s = \sigma+it} with {0 < \sigma < 1} and {|t| \gg 1}. Let {N,M \gg 1} be such that {NM = \frac{|t|}{2\pi}}. Show that

\displaystyle \zeta(s) = \sum_n \frac{1}{n^s} \eta( \log n - \log N) + \alpha(s) \sum_m \frac{1}{m^{1-s}} \tilde \eta( \log m - \log M) \ \ \ \ \ (41)

 

\displaystyle + O_{\eta, \sigma, \varepsilon} ( |t|^{-\sigma/2-1/2+\varepsilon} )

for all {\varepsilon>0}.

One can also obtain a version of this equation using partial sums instead of smoothed sums, but with slightly worse error terms, known as the Riemann-Siegel formula; see e.g. Theorem 4.13 of Titchmarsh. Setting {N=M}, we see that we may now approximate {\zeta(\sigma+it)} by smoothed sums consisting of about {O( |t|^{1/2} )} terms, improving upon the sum with {O(|t|)} terms appearing in Exercise 33. Using the triangle inequality, this gives a slight improvement to (38), namely that

\displaystyle \zeta(\sigma+it) \ll_{\sigma} (1+|t|)^{\frac{1-\sigma}{2}}

whenever {0 < \sigma < 1} and {t \in {\bf R}}. The equation (41) is particularly useful for getting reasonably good bounds on {\zeta(\frac{1}{2}+it)}; we will see an example of this in subsequent notes.

— 4. Further applications of the functional equation —

One basic application of the functional equation is to improve the control on zeroes of the Riemann zeta function, beyond what was obtained in Notes 2.

From Exercise 37, we now have the crude bounds

\displaystyle \zeta(\sigma+it) = O( |t|^{O(1)} )

and in particular

\displaystyle \log|\zeta(\sigma+it)| \leq O(\log |t|)

whenever {\sigma = O(1)} and {|t| \gg 1}. The Jensen formula argument from Proposition 16 of Notes 2 is no longer restricted to the region {\{\hbox{Re}(s) > \varepsilon\}}, and shows that there are {O( \log(2+|t|) )} zeroes of the Riemann zeta function in any disk of the form {\{ z: z = it + O(1) \}}. Similarly, Proposition 19 of Notes 2 extends to give the formula

\displaystyle -\frac{\zeta'}{\zeta}(s) = - \sum_{\rho: \rho = s + O(1)} \frac{1}{s-\rho} + \frac{1}{s-1} + O( \log(2+|t|) ) \ \ \ \ \ (42)

 

whenever {s = \sigma+it} with {\sigma = O(1)}. Corollary 20 of Notes 2 also extends to show that

\displaystyle \int_{\sigma = O(1)} \int_{t = t_0 + O(1)} |\frac{\zeta'}{\zeta}(\sigma+it)|\ dt d\sigma \ll \log(2+|t_0|) \ \ \ \ \ (43)

 

for any {t_0 \in {\bf R}}.

We can say more about the zeroes. For any {T>0}, let {N(T)} denote the number of zeroes of {\zeta} in the rectangle {\{ \sigma+it: 0 < \sigma < 1; 0 < t \leq T \}}. (If there were zeroes of {\zeta} on the interval {[0,1]}, they should each count for {1/2} towards {N(T)}, but it turns out (as can be computationally verified) that there are no such zeroes.) Equivalently, {2N(T)} is the number of zeroes in {\{ \sigma+it: 0 < \sigma < 1; |t| \leq T \}}. We have the following asymptotic for {N(T)}, conjectured by Riemann and established by von Mangoldt:

Theorem 40 (Riemann-von Mangoldt formula) For {T \geq 2} (say), we have {N(T) = \frac{T}{2\pi} \log \frac{T}{2\pi} - \frac{T}{2\pi} + O( \log T )}.

Proof: We use the Riemann {\xi} function, whose zeroes are the non-trivial zeroes of {\zeta}. From (6), (43), (22) one has

\displaystyle \int_{\sigma = O(1)} \int_{t = T + O(1)} |\frac{\xi'}{\xi}(\sigma+it)|\ dt d\sigma \ll \log T

and so by the pigeonhole principle we can find {T_-, T_+} in {[T-1,T]} and {[T,T+1]} respectively such that

\displaystyle \int_{-1}^2 |\frac{\xi'}{\xi}(\sigma+iT_\pm)|\ d\sigma \ll \log T; \ \ \ \ \ (44)

 

in particular, the line segments {\{\sigma+iT_\pm: -1 \leq \sigma \leq 2\}} do not meet any of the zeroes of {\xi} or {\zeta}. We will show for either choice of sign {\pm} that the rectangle {\{ \sigma+it: -1 \leq \sigma \leq 2; |t| \leq T_\pm \}} contains {\frac{T_\pm}{\pi} \log \frac{T_\pm}{2\pi} - \frac{T_\pm}{\pi} + O( \log T )} zeroes, which gives the claim since {\frac{T_\pm}{\pi} \log \frac{T_\pm}{2\pi} - \frac{T_\pm}{\pi}} and {\frac{T}{\pi} \log \frac{T}{2\pi} - \frac{T}{\pi}} only differ by {O(\log T)}.

By the residue theorem (or the argument principle), the number of zeroes in this rectangle is equal to {\frac{1}{2\pi i}} times the contour integral of {\frac{\xi'}{\xi}} anticlockwise around the boundary of the rectangle. By (44), the contribution of the upper and lower edges of this contour are {O(\log T)}; from the functional equation (Corollary 2) we see that the contribution of the left and right edges of the contour are the same, and from conjugation symmetry we see that the contribution of the upper half of the right edge is the complex conjugate of that of the lower half. Putting all this together, we see that it suffices to show that

\displaystyle \hbox{Re} \frac{2}{\pi i} \int_2^{2+iT_\pm} \frac{\xi'}{\xi}(s)\ ds = \frac{T_\pm}{\pi} \log \frac{T_\pm}{2\pi} - \frac{T_\pm}{\pi} + O( \log T ),

or equivalently (after removing the integral from {2} to {2+2i}, which is {O(1)}), that

\displaystyle \hbox{Re} \int_2^{T_\pm} \frac{\xi'}{\xi}(2+it)\ dt = \frac{1}{2}( T_\pm \log T_\pm - T_\pm - T_\pm \log(2\pi)) + O( \log T ).

From (6), (22) we have

\displaystyle \frac{\xi'}{\xi}(2+it) = \frac{\zeta'}{\zeta}(2+it) - \frac{1}{2} \log 2\pi + \frac{1}{2} \log t + \frac{\pi i}{4} + O( \frac{1}{t} )

for {t \geq 2}. Since {\log \zeta} is given by a Dirichlet series that is uniformly bounded on {\{ 2+it: t \in {\bf R}\}}, we have

\displaystyle \int_2^{T_\pm} \frac{\zeta'}{\zeta}(2+it)\ dt = O(1),

and the claim follows. \Box

Exercise 41 Establish the more precise formula

\displaystyle N(T) = \frac{T}{2\pi} \log \frac{T}{2\pi} - \frac{T}{2\pi} + \frac{7}{8} + S(T) + O( \frac{1}{T} )

whenever {T \geq 2} and the line {\{ \sigma+iT: \sigma \in {\bf R}\}} avoids all zeroes of {\zeta}, where {S(T) = \frac{1}{\pi} \hbox{Im} \log \zeta(\frac{1}{2}+iT)}, and the logarithm is extended leftwards from the region {\{ s: \hbox{Re}(s) > 1 \}}, thus

\displaystyle S(T) = \frac{1}{\pi} \hbox{Im} \log \zeta(2+iT) - \frac{1}{\pi} \hbox{Im} \int_{1/2}^2 \frac{\zeta'}{\zeta}(\sigma+iT)\ d\sigma.

Theorem 40 then asserts that {S(T) = O(\log T)} for all {T \geq 2}. It is in fact conjectured that {S(T) = o(\log T)} as {T \rightarrow +\infty}, but this problem has resisted solution for over a century (although it is known that this bound would follow from powerful hypotheses such as the Lindelöf hypothesis).

Remark 42 In principle, {N(T)} can be numerically computed exactly for any {T}, as long as the line {\{ \sigma+iT: \sigma \in {\bf R}\}} has no zeroes of {\zeta}, by evaluating the contour integral of {\xi'/\xi} to sufficiently high accuracy. Similarly, one can also obtain a numerical lower bound for the number of zeroes of {\zeta} on the critical line by finding sign changes for the function {\xi}, which is real-valued on the critical line. If the zeroes are all simple and on the critical line, then this (in principle) allows one to numerically verify the Riemann hypothesis up to height {T}. In practice, faster methods for numerically verifying the Riemann hypothesis (e.g. based on the Riemann-Siegel formula) are available.
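To illustrate Remark 42 concretely: the main term of the Riemann-von Mangoldt formula (with the {\frac{7}{8}} constant from Exercise 41) already predicts the integer counts remarkably well. The sketch below evaluates it at {T=100} and {T=1000}; the actual counts {N(100)=29} and {N(1000)=649} are classical numerical facts.

```python
import math

# Main term T/(2 pi) log(T/(2 pi)) - T/(2 pi) + 7/8 of the
# Riemann-von Mangoldt formula.
def main_term(T):
    u = T / (2 * math.pi)
    return u * math.log(u) - u + 7.0 / 8

print(main_term(100.0))   # about 29.0; the true count N(100) is 29
print(main_term(1000.0))  # about 648.6; the true count N(1000) is 649
```

The discrepancy at {T = 1000} (about {0.39}) is an instance of the fluctuating term {S(T)} from Exercise 41.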

We can now obtain global explicit formulae for the log-derivatives of {\xi} and {\zeta}. Since {\xi} has zeroes only at the non-trivial zeroes of {\zeta}, and no poles, one heuristically expects a relationship of the form

\displaystyle \frac{\xi'}{\xi}(s) = \sum_\rho \frac{1}{s-\rho}.

Unfortunately, the right-hand side is divergent; but we can normalise it by considering the sum

\displaystyle \sum_\rho (\frac{1}{s-\rho} + \frac{1}{\rho}).

Exercise 43 Show that the sum {F(s) = \sum_\rho (\frac{1}{s-\rho} + \frac{1}{\rho})} converges locally uniformly to a meromorphic function away from the non-trivial zeroes of {\zeta}, with {\frac{\xi'}{\xi}(s) - F(s)} an entire function (after removing all singularities). Also establish the bounds

\displaystyle F(s) = \sum_{\rho: |\rho-s| \leq 1} \frac{1}{s-\rho} + O( \log(2+|s|) )

for all {s} in the complex plane. (Hint: You will need to use the Riemann-von Mangoldt formula.)

From (6), (42), (22) one also has

\displaystyle \frac{\xi'}{\xi}(s) = \sum_{\rho: |\rho-s| \leq 1} \frac{1}{s-\rho} + O( \log(2+|s|) )

for all {s = \sigma+it} in the strip {\sigma = O(1), t \in {\bf R}}; from (22) and the boundedness of {\frac{\zeta'}{\zeta}} in the region {\sigma > 2} one sees that this bound also holds with {\sigma > 2}, and from the functional equation we see that it also holds for {\sigma < -1}. In particular, with {F} as in the preceding exercise, we see that the entire function {\frac{\xi'}{\xi}(s) - F(s)} is bounded by {O( \log(2+|s|))} on the entire complex plane. By the generalised Cauchy integral formula (Exercise 9 of Supplement 2) applied to a disk of radius {R} we conclude that {\frac{\xi'}{\xi}(s) - F(s)} has derivative {O( \log(2+|s|+R) / R )} for any {R>0}, and by sending {R \rightarrow \infty} we conclude that this function is constant; thus we have the representation

\displaystyle \frac{\xi'}{\xi}(s) = B + \sum_\rho (\frac{1}{s-\rho} + \frac{1}{\rho})

for some absolute constant {B}, and all {s} away from the non-trivial zeroes of {\zeta}. From (6) and Exercise 11, we conclude the representation

\displaystyle -\frac{\zeta'}{\zeta}(s) = B' + \frac{1}{s-1} - \sum_\rho (\frac{1}{s-\rho} + \frac{1}{\rho}) - \sum_n (\frac{1}{s+2n} - \frac{1}{2n}) \ \ \ \ \ (45)

 

where

\displaystyle B' = -B - \frac{1}{2} \log \pi - \frac{1}{2} \gamma.

The exact values of {B, B'} are not terribly important for applications, but can be computed explicitly:

Exercise 44 By inspecting both sides of the above equations as {s \rightarrow 0}, show that {B' = - \log 2\pi + 1}, and hence {B = \log 2 + \frac{1}{2} \log \pi - 1 - \frac{1}{2} \gamma}.
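
As a numerical sanity check of (45) and of this value of {B'}, one can truncate the sum over zeroes at the first ten zeroes (whose ordinates are available in standard tables to many decimal places, and are known to lie on the critical line); the truncation error is then of the order of a few hundredths. A sketch in Python, tested at {s=2}:

```python
import math

# ordinates of the first ten non-trivial zeros (standard tables, 6 decimal places)
gammas = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
          37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

s = 2.0

# left-hand side: -zeta'/zeta(2) = sum_n Lambda(n)/n^2, via a sieve for Lambda
N = 10 ** 5
is_comp = [False] * (N + 1)
lam = [0.0] * (N + 1)
for p in range(2, N + 1):
    if not is_comp[p]:
        for multiple in range(2 * p, N + 1, p):
            is_comp[multiple] = True
        pk = p
        while pk <= N:
            lam[pk] = math.log(p)     # Lambda(p^k) = log p
            pk *= p
lhs = sum(lam[n] / n ** s for n in range(2, N + 1))

# right-hand side of (45), with B' = 1 - log(2 pi) from Exercise 44;
# each conjugate pair rho, rho-bar contributes twice the real part
zero_sum = sum(2 * (1 / (s - r) + 1 / r).real for r in (complex(0.5, g) for g in gammas))
triv_sum = sum(1 / (s + 2 * n) - 1 / (2 * n) for n in range(1, 10 ** 5))
rhs = (1 - math.log(2 * math.pi)) + 1 / (s - 1) - zero_sum - triv_sum
print(abs(lhs - rhs))   # small, coming from the zeros omitted from the sum
```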

By inserting (45) into Perron’s formula (Exercise 11 of Notes 2), we obtain the Riemann-von Mangoldt explicit formula for the von Mangoldt summatory function:

Exercise 45 (Riemann-von Mangoldt explicit formula) For any non-integer {x > 1}, show that

\displaystyle \sum_{n \leq x} \Lambda(n) = x - 1 - \lim_{T \rightarrow \infty} \sum_{\rho: |\hbox{Im}(\rho)| \leq T} \frac{x^\rho}{\rho} - \sum_n \frac{x^{-2n}}{-2n} + B'.

Conclude that

\displaystyle \sum_{n \leq x} \Lambda(n) = x - \lim_{T \rightarrow \infty} \sum_{\rho: |\hbox{Im}(\rho)| \leq T} \frac{x^\rho}{\rho} - \log(2\pi) - \frac{1}{2} \log( 1 - x^{-2} ).

This is an exact counterpart of the truncated explicit formula (Theorem 21 of Notes 2), although in many applications the truncated formula is a little bit more convenient to use; the untruncated formula supplies all of the “lowest order terms”, but these terms are destined to be absorbed into error terms in most applications anyway.
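
Here is a quick numerical illustration of the explicit formula in Python, again using only the first ten zeroes (ordinates from standard tables); even this crude truncation of the limit tracks the summatory function to within a unit or two at moderate {x}:

```python
import math

gammas = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
          37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

x = 30.5

# left side: psi(x) = sum of Lambda(n) over n <= x, by trial division
psi = 0.0
for p in range(2, int(x) + 1):
    if all(p % d for d in range(2, p)):      # p prime
        pk = p
        while pk <= x:
            psi += math.log(p)
            pk *= p

# right side: the explicit formula, truncated at the first ten zero pairs
osc = sum(2 * ((x ** rho) / rho).real for rho in (complex(0.5, g) for g in gammas))
formula = x - osc - math.log(2 * math.pi) - 0.5 * math.log(1 - x ** -2)
print(abs(psi - formula))
```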

We similarly have a global smoothed explicit formula, refining Exercise 22 from Notes 2:

Exercise 46 (Smoothed explicit formula) Let {g: {\bf R} \rightarrow {\bf C}} be a smooth function, compactly supported on the positive real axis. Show that

\displaystyle \sum_n \Lambda(n) g(\log n) = \hat g(-i) - \sum_\rho \hat g(-i\rho)

\displaystyle - \sum_n \hat g(2in)

with the sums being absolutely convergent. Conclude that

\displaystyle \sum_n \Lambda(n) \eta(n) = \int_{\bf R} (1 - \frac{1}{y^3-y}) \eta(y)\ dy - \sum_\rho \int_{\bf R} \eta(y) y^{\rho-1}\ dy

whenever {\eta: {\bf R} \rightarrow {\bf C}} is a smooth function, supported in {(1,+\infty)}.

The following variant of the smoothed explicit formula is particularly useful for studying the behaviour of zeroes of the zeta function on the critical line {\hbox{Re} s = \frac{1}{2}}.

Exercise 47 (Riemann-Weil explicit formula) Let {g: {\bf R} \rightarrow {\bf C}} be a smooth compactly supported function. Show that

\displaystyle \sum_\gamma \hat g(\gamma) = \hat g(i/2) + \hat g(-i/2) - \sum_n \frac{\Lambda(n)}{\sqrt{n}} (g(\log n) + g(-\log n))

\displaystyle + \frac{1}{2\pi} \int_{\bf R} \hat g(t) (\frac{\Gamma'_\infty}{\Gamma_\infty}(\frac{1}{2}+it) + \frac{\Gamma'_\infty}{\Gamma_\infty}(\frac{1}{2}-it))\ dt

where the non-trivial zeroes {\rho} of the zeta function are parameterised as {\rho = \frac{1}{2}+i\gamma}, and {\Gamma_\infty(s) := \pi^{-s/2} \Gamma(\frac{s}{2})}, with the sum over {\gamma} being absolutely convergent. (Hint: first reduce to the case when {g} (and hence {\hat g}) is an even function, by eliminating the case when {g} is odd. Then, integrate the meromorphic function {\frac{\xi'}{\xi}(s) \hat g(i(s-\frac{1}{2}))} around a rectangle with vertices {2 \pm iT}, {-1\pm iT} (say) for some large {T}, apply the residue theorem, and send {T \rightarrow \infty}. It is also possible to derive this formula from Exercise 46 by treating first the case when {g} is supported on the positive real axis, then the negative real axis, and finally the case when {g} is supported in an arbitrarily small neighbourhood of the origin.)

— 5. The functional equation for Dirichlet {L}-functions —

Now we turn to the functional equation for Dirichlet {L}-functions {L(\cdot,\chi)}, Theorem 3. Henceforth {\chi} is a primitive character of modulus {q}. To obtain the functional equation for {L(s,\chi)}, we will need a twisted version of the Poisson summation formula (8). The key to performing the twist is the following expression for the additive Fourier coefficients of {\chi} in the group {{\bf Z}/q{\bf Z}}:

Lemma 48 Let {e(x) := e^{2\pi i x}}, and let {\chi} be a primitive character of modulus {q}. Then for any {a \in {\bf Z}/q{\bf Z}}, we have

\displaystyle \sum_{n \in {\bf Z}/q{\bf Z}} \chi(n) e(an/q) = \overline{\chi(a)} \tau(\chi) \ \ \ \ \ (46)

 

where we abuse notation by viewing the {q}-periodic functions {n \mapsto \chi(n)} and {n \mapsto e(n/q)} as functions on {{\bf Z}/q{\bf Z}}, and the Gauss sum {\tau(\chi)} is defined by the formula (11).

Proof: The formula (46) is trivial when {a=1}, and by making the substitution {m=an} we see that it is also true whenever {a} is coprime to {q}. Now suppose that {a} shares a common factor {q_1>1} with {q}; then the right-hand side of (46) vanishes. Writing {q = q_1 q_2}, we see that {n \mapsto e(an/q)} is periodic with period {q_2}, and in particular is invariant with respect to multiplication by any invertible element {m} of {{\bf Z}/q{\bf Z}} whose projection to {{\bf Z}/q_2{\bf Z}} is one; thus the substitution {n \mapsto mn} multiplies the left-hand side of (46) by {\chi(m)}. We are thus done unless {\chi(m)=1} for all invertible {m} that project down to one on {{\bf Z}/q_2{\bf Z}}. But in that case {\chi} factors as the product of a character of modulus {q_2} and a principal character, contradicting the hypothesis that {\chi} is primitive. \Box
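
The identity (46) is easy to test numerically; the following Python sketch checks it for the primitive real character of conductor {12} (the product of the characters of conductors {3} and {4}), including the degenerate cases where {a} shares a factor with {q} and both sides vanish:

```python
import cmath, math

q = 12
# the primitive real character mod 12: +1 at 1, 11 and -1 at 5, 7 (mod 12)
chi = {n: (1 if n % 12 in (1, 11) else -1 if n % 12 in (5, 7) else 0)
       for n in range(12)}

def e(x):
    return cmath.exp(2j * math.pi * x)

tau = sum(chi[n] * e(n / q) for n in range(q))    # the Gauss sum (11)

# check (46) for every residue class a, coprime to q or not
for a in range(q):
    lhs = sum(chi[n] * e(a * n / q) for n in range(q))
    rhs = chi[a] * tau       # chi is real-valued, so conjugation is trivial here
    assert abs(lhs - rhs) < 1e-9

print(abs(tau) ** 2)         # equals q = 12, consistent with (49) below
```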

The Gauss sum {\tau(\chi)}, being an inner product between a multiplicative character {n \mapsto \chi(n)} and an additive character {n \mapsto e(n/q)}, is analogous to the Gamma function {\Gamma(s)}, which is also an inner product between a multiplicative character {t \mapsto t^s} and an additive character {t \mapsto e^{-t}}. This analogy can be deepened by working in Tate’s adelic formalism, but we will not do so here. But we will give one further demonstration of the analogy between Gauss sums and Gamma functions:

Exercise 49 (Jacobi sum identity) Let {\chi, \chi'} be Dirichlet characters modulo a prime {p}, such that {\chi,\chi',\chi\chi'} are all non-principal. By computing the sum {\sum_{n,n' \in {\bf Z}/p{\bf Z}} \chi(n) \chi'(n') e( (n+n')/p )} in two different ways, establish the Jacobi sum identity

\displaystyle \sum_{n \in {\bf Z}/p{\bf Z}} \chi(n) \chi'(1-n) = \frac{\tau(\chi) \tau(\chi')}{\tau(\chi\chi')}.

This should be compared with the beta function identity, Lemma 7.
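
A quick numerical check of the Jacobi sum identity for {p=7}, building the characters from the primitive root {3} so that {\chi, \chi', \chi\chi'} are all non-principal:

```python
import cmath, math

p = 7
g = 3                                   # a primitive root mod 7
# discrete logarithm table: n = g^k mod p  ->  k
ind = {pow(g, k, p): k for k in range(p - 1)}

def char(j):
    """Dirichlet character mod p sending g to exp(2 pi i j / (p-1))."""
    w = cmath.exp(2j * math.pi * j / (p - 1))
    return lambda n: 0 if n % p == 0 else w ** ind[n % p]

def tau(chi):
    return sum(chi(n) * cmath.exp(2j * math.pi * n / p) for n in range(1, p))

chi1, chi2, chi3 = char(1), char(2), char(3)   # chi3 = chi1 * chi2, all non-principal

jacobi = sum(chi1(n) * chi2(1 - n) for n in range(2, p))  # n = 0, 1 contribute 0
print(abs(jacobi - tau(chi1) * tau(chi2) / tau(chi3)))    # agreement to rounding error
```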

From (46) and the Fourier inversion formula on {{\bf Z}/q{\bf Z}} (Theorem 69 from Notes 1), we have

\displaystyle \chi(n) = \frac{\tau(\chi)}{q} \sum_{a \in {\bf Z}/q{\bf Z}} \overline{\chi(a)} e(-an/q); \ \ \ \ \ (47)

 

setting {n=-1}, we conclude in particular that

\displaystyle \tau(\chi) \tau(\overline{\chi}) = q \chi(-1), \ \ \ \ \ (48)

 

which is somewhat analogous to the reflection formula for the Gamma function (Lemma 8). On the other hand, from (46) with {a=-1} we have

\displaystyle \tau(\overline{\chi}) = \overline{\tau(\chi)} \chi(-1)

and thus

\displaystyle |\tau(\chi)|^2 = q. \ \ \ \ \ (49)

 

This determines the magnitude of {\tau(\chi)}; in particular, the quantity {\varepsilon(\chi)} defined in (10) has magnitude one, and

\displaystyle \varepsilon(\chi) \varepsilon(\overline{\chi}) = 1. \ \ \ \ \ (50)

 

The phase of {\tau(\chi)} (or {\varepsilon(\chi)}) is harder to compute, except when {\chi} is a real primitive character, where we have the remarkable discovery of Gauss that {\varepsilon(\chi)=+1}; see the appendix below.
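
Assuming the normalisation {\varepsilon(\chi) = \tau(\chi) / (i^\kappa q^{1/2})} from (10) (with {\kappa} as in (51) below), Gauss's theorem is easy to confirm numerically for quadratic characters of small prime conductor:

```python
import cmath, math

def eps_quadratic(p):
    """epsilon(chi) = tau(chi) / (i^kappa sqrt(p)) for the Legendre symbol mod p."""
    chi = lambda n: 0 if n % p == 0 else (1 if pow(n, (p - 1) // 2, p) == 1 else -1)
    tau = sum(chi(n) * cmath.exp(2j * math.pi * n / p) for n in range(1, p))
    kappa = 0 if p % 4 == 1 else 1      # chi(-1) = (-1)^((p-1)/2)
    return tau / (1j ** kappa * math.sqrt(p))

for p in (3, 5, 7, 11, 13, 17, 19, 23):
    assert abs(eps_quadratic(p) - 1) < 1e-9   # Gauss: epsilon = +1 in every case
```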

Now suppose that {f: {\bf R} \rightarrow {\bf C}} is a twice continuously differentiable, compactly supported function. We can expand the sum {\sum_n f(n) \chi(n)} using (47) (and identifying {{\bf Z}/q{\bf Z}} with {\{0,\dots,q-1\}}) as

\displaystyle \sum_n f(n) \chi(n) = \frac{\tau(\chi)}{q} \sum_{a=0}^{q-1} \overline{\chi(a)} \sum_n f(n) e(-an/q);

applying the Poisson summation formula (8) to the modulated function {x \mapsto f(x) e(-ax/q)}, we conclude that

\displaystyle \sum_n f(n) \chi(n) = \frac{\tau(\chi)}{q} \sum_{a=0}^{q-1} \overline{\chi(a)} \sum_m \hat f(2\pi m - \frac{2\pi a}{q}),

which on making the change of variables {\tilde m := qm - a}, and then relabeling {\tilde m} as {m}, becomes the twisted Poisson summation formula

\displaystyle \sum_n f(n) \chi(n) = \frac{(-1)^\kappa \tau(\chi)}{q} \sum_m \hat f(\frac{2\pi m}{q}) \overline{\chi}(m) \ \ \ \ \ (51)

 

where {\kappa:=0} when {\chi(-1)=+1} and {\kappa:=1} when {\chi(-1)=-1}.
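
The formula (51) also holds for Schwartz functions such as Gaussians, and can then be tested numerically; the sketch below uses an off-centre Gaussian (so that neither side vanishes for parity reasons) and the quadratic character mod {5}, with the Fourier convention {\hat f(y) = \int_{\bf R} f(x) e^{ixy}\ dx} implicit in the computation above:

```python
import cmath, math

q = 5
chi = [0, 1, -1, -1, 1]        # quadratic character mod 5; chi(-1) = 1, so kappa = 0
tau = sum(chi[n] * cmath.exp(2j * math.pi * n / q) for n in range(q))  # = sqrt(5)

a = 0.3                        # off-centre Gaussian f(x) = exp(-(x-a)^2)
f = lambda x: math.exp(-(x - a) ** 2)
# its Fourier transform under fhat(y) = int f(x) e^{ixy} dx
fhat = lambda y: math.sqrt(math.pi) * cmath.exp(-y ** 2 / 4) * cmath.exp(1j * a * y)

lhs = sum(f(n) * chi[n % q] for n in range(-40, 41))
rhs = (tau / q) * sum(fhat(2 * math.pi * m / q) * chi[m % q] for m in range(-40, 41))
print(abs(lhs - rhs))          # agreement to near machine precision
```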

Remark 50 One can view both the ordinary Poisson summation formula (8) and its twisted analogue (51) as special cases of the adelic Poisson summation formula; see this previous blog post. However, we will not explicitly adopt the adelic viewpoint here.

We now adapt the proofs of the functional equation for {\zeta} to prove Theorem 3, using (51) as a replacement for (8). One can use any of the three proofs for this purpose, but I found it easiest to work with the third proof. We first work heuristically. As before, we formally apply (51) with {f(x) := 1_{x>0} \frac{1}{x^s}}, ignoring all infinite divergences. Again, the Fourier transform is given by

\displaystyle \hat f(y) = \Gamma(1-s) \frac{1}{(-iy)^{1-s}}

and so (51) formally yields

\displaystyle \sum_n \frac{\chi(n)}{n^s} = \frac{\chi(-1) \tau(\chi)}{q} \Gamma(1-s)  \sum_m \frac{\overline{\chi}(m)}{(-2\pi i m/q)^{1-s}}.

The {m=0} term is formally absent since {\chi(0)=0}. If one converts the sum over negative {m} to a sum over positive {m} by the change of variables {m \mapsto -m}, and formally identifies {\sum_n \frac{\chi(n)}{n^s}} and {\sum_m \frac{\overline{\chi}(m)}{m^{1-s}}} with {L(s,\chi)} and {L(1-s,\overline{\chi})}, one formally obtains Theorem 3 after some routine calculation.

Exercise 51 Make the above argument rigorous by adapting the argument in Exercise 25.
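
The routine calculation alluded to above puts the functional equation in the asymmetric form {L(s,\chi) = \varepsilon(\chi) 2^s \pi^{s-1} q^{1/2-s} \sin(\frac{\pi}{2}(s+\kappa)) \Gamma(1-s) L(1-s,\overline{\chi})}, which reduces to the classical functional equation for {\zeta} when {q=1}. This form can be tested numerically via the Hurwitz zeta representation {L(s,\chi) = q^{-s} \sum_{a=1}^{q-1} \chi(a) \zeta(s,a/q)}; a sketch for the quadratic character mod {5} (for which {\varepsilon(\chi)=1} and {\kappa=0}):

```python
import cmath, math

def loggamma(z):
    """log Gamma(z): shift to Re z >= 10, then Stirling's series."""
    res = 0
    while z.real < 10:
        res -= cmath.log(z)
        z += 1
    return res + (z - 0.5) * cmath.log(z) - z + 0.5 * math.log(2 * math.pi) \
        + 1 / (12 * z) - 1 / (360 * z ** 3)

def hurwitz(s, a, N=40):
    """Euler-Maclaurin approximation to the Hurwitz zeta function zeta(s, a)."""
    total = sum((n + a) ** -s for n in range(N))
    M = N + a
    return total + M ** (1 - s) / (s - 1) + 0.5 * M ** -s + s * M ** (-s - 1) / 12

q, kappa, eps = 5, 0, 1
chi = [0, 1, -1, -1, 1]        # quadratic character mod 5 (even, epsilon = +1)

def L(s):
    return q ** -s * sum(chi[a] * hurwitz(s, a / q) for a in range(1, q))

s = complex(0.3, 0.7)
alpha = eps * 2 ** s * math.pi ** (s - 1) * q ** (0.5 - s) \
        * cmath.sin(math.pi * (s + kappa) / 2) * cmath.exp(loggamma(1 - s))
print(abs(L(s) - alpha * L(1 - s)))   # the two sides agree to high accuracy
```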

The following exercise develops the analogue of Riemann’s second proof of the functional equation, which is the proof of the functional equation for Dirichlet {L}-functions that is found in most textbooks.

Exercise 52 Let {\chi} be a primitive character of conductor {q>1}, and let {s} be such that {\hbox{Re}(s) > 1}. Define the theta-type function

\displaystyle \vartheta_\kappa(z,\chi) := \sum_{n \in {\bf Z}} n^\kappa \chi(n) e^{\pi i n^2 z/q}

in the half-plane {\hbox{Im} z > 0}. The purpose of the {n^\kappa} factor is to make the summand even in {n}, rather than odd (as the theta-function would be trivial if the summand were odd).

  • (i) Establish the functional equation

    \displaystyle \vartheta_\kappa( z, \chi ) = \varepsilon(\chi) \frac{1}{(-iz)^{\kappa+1/2}} \vartheta_\kappa( -\frac{1}{z}, \overline{\chi} ) \ \ \ \ \ (52)

    for {\hbox{Im}(z) > 0}, using the standard branch of the square root.

  • (ii) Show that

    \displaystyle (q/\pi)^{(s+\kappa)/2} \Gamma(\frac{s+\kappa}{2}) L(s,\chi) = \frac{1}{2} \int_0^\infty x^{\frac{s+\kappa}{2}} \vartheta_\kappa(ix, \chi)\ \frac{dx}{x}.

    (Hint: use either (14) or (15), depending on the value of {\kappa}.)

  • (iii) Show that

    \displaystyle (q/\pi)^{(s+\kappa)/2} \Gamma(\frac{s+\kappa}{2}) L(s,\chi) = \frac{1}{2} \int_1^\infty [ x^{\frac{s+\kappa}{2}} \vartheta_\kappa(ix, \chi)

    \displaystyle + \varepsilon(\chi) x^{\frac{1-s+\kappa}{2}} \vartheta_\kappa(ix, \overline{\chi}) ] \frac{dx}{x}.

  • (iv) Prove Corollary 4.

One can also adapt the first proof of Riemann to the {L}-function setting, as was done in this paper of Berndt:

Exercise 53 Let {\chi} be a primitive character of conductor {q>1}.

  • (i) Show that the function {z \mapsto \sum_n \chi(n) e^{-nz}}, initially defined for {\hbox{Re}(z) > 0}, extends to a meromorphic function on the complex plane (indeed, to a rational function of {e^{-z}}), with poles at {2\pi i m/q} for {m \in {\bf Z}} coprime to {q}, and a residue of {\frac{\overline{\chi}(-m) \tau(\chi)}{q}} at {2\pi i m/q}.
  • (ii) Prove Theorem 3 by expressing {L(s,\chi)} for {\hbox{Re}(s) > 1} as a contour integral over the contour {C} used in Section 2.
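
The closed form behind part (i) is the geometric series identity {\sum_{n \geq 1} \chi(n) e^{-nz} = \sum_{a=1}^{q-1} \chi(a) e^{-az} / (1 - e^{-qz})}, from which the poles and residues can be read off using (46). A numerical probe of the residues for the quadratic character mod {5}:

```python
import cmath, math

q = 5
chi = [0, 1, -1, -1, 1]                 # quadratic character mod 5
tau = sum(chi[n] * cmath.exp(2j * math.pi * n / q) for n in range(q))

def F(z):
    """closed form of sum_{n>=1} chi(n) e^{-nz}, summed as a geometric series."""
    return sum(chi[a] * cmath.exp(-a * z) for a in range(1, q)) \
        / (1 - cmath.exp(-q * z))

for m in (1, 2, 3):
    z0 = 2j * math.pi * m / q
    numeric = 1e-6 * F(z0 + 1e-6)       # (z - z0) F(z) -> residue as z -> z0
    claimed = chi[(-m) % q] * tau / q   # chi is real here, so no conjugate needed
    assert abs(numeric - claimed) < 1e-4
```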

The next exercise extends the approximate functional equations for the zeta function to Dirichlet {L}-functions for primitive characters of some conductor {q>1}. As may be expected from Notes 2, the role of {1+|t|} in the error terms will be replaced with {q(1+|t|)}.

Exercise 54 (Approximate functional equation) Let {\chi} be a primitive Dirichlet character of conductor {q>1}. Let {s = \sigma+it} in the critical strip.

  • (i) Show that

    \displaystyle L(s,\chi) = \sum_n \frac{\chi(n)}{n^s} \eta( \log n - \log x) + O_{\eta, A, \sigma}( x^{-A} ) \ \ \ \ \ (53)

    for any {A>0}, if {\eta: {\bf R} \rightarrow {\bf C}} is a smooth function such that {\eta(x)} vanishes for {x>C} and equals {1} for {x<-C} for some constant {C>0}, and {x \geq C' q (1+|t|)} for some sufficiently large {C'} depending on {C}.

  • (ii) Show that

    \displaystyle \sum_n \frac{\chi(n)}{n^s} \psi( \log n - \log x ) = \frac{1}{2\pi} \int_{\bf R} \hat \psi(y) x^{iy} L(s+iy,\chi)\ dy \ \ \ \ \ (54)

    for any smooth, compactly supported function {\psi: {\bf R} \rightarrow {\bf C}} and any {x > 0}.

  • (iii) Suppose that {|t| \gg 1}, and let {N,M \gg 1} be such that {NM = \frac{q|t|}{2\pi}}. Let {\psi: {\bf R} \rightarrow {\bf C}} be a smooth compactly supported function. Then

    \displaystyle \sum_n \frac{\chi(n)}{n^s} \psi( \log n - \log N ) = \ \ \ \ \ (55)

    \displaystyle \alpha(s,\chi) \sum_m \frac{\overline{\chi(m)}}{m^{1-s}} \psi( \log M - \log m ) + O_{\sigma,\varepsilon,\psi}( (q|t|)^{-\sigma/2-1/2+\varepsilon} )

    for any {\varepsilon>0}, where

    \displaystyle \alpha(s,\chi) := \varepsilon(\chi) 2^s \pi^{s-1} q^{1/2-s} \sin(\frac{\pi}{2}(s+\kappa)) \Gamma(1-s).

  • (iv) With {t,N,M} as in (iii), and {\eta} as in (i), show that

    \displaystyle L(s,\chi) = \sum_n \frac{\chi(n)}{n^s} \eta( \log n - \log N)

    \displaystyle + \alpha(s,\chi) \sum_m \frac{\overline{\chi(m)}}{m^{1-s}} \tilde \eta( \log m - \log M)

    \displaystyle + O_{\eta, \sigma, \varepsilon} ( (q|t|)^{-\sigma/2-1/2+\varepsilon} ),

    where {\tilde \eta(u) := 1 - \eta(-u)}.

There are useful variants of the approximate functional equation for {L}-functions that are valid in the low-lying regime {t=O(1)}, but we will not detail them here.

Exercise 55 Let {\chi} be a primitive character of modulus {q > 1}, and let {T \geq 2}. Let {N(T,\chi)} denote the number of zeroes of {L(s,\chi)} in the rectangle {\{ \sigma+it: 0 < \sigma < 1; |t| \leq T \}} (note that we are now including the lower half-plane as well as the upper half-plane, as the zeroes of {L(s,\chi)} need not be symmetric around the real axis when {\chi} is complex). Show that

\displaystyle \frac{1}{2} N(T,\chi) = \frac{T}{2\pi} \log \frac{qT}{2\pi} - \frac{T}{2\pi} + O( \log(qT) ).

Exercise 56 (Riemann-von Mangoldt explicit formula for {L}-functions) Let {x > 1} be a non-integer, and let {\chi} be a primitive Dirichlet character of conductor {q > 1}.

  • (i) If {\chi(-1)=-1}, show that

    \displaystyle \sum_{n \leq x} \Lambda(n) \chi(n) = - \lim_{T \rightarrow \infty} \sum_{\rho: |\hbox{Im}(\rho)| \leq T} \frac{x^\rho}{\rho} - \frac{L'(0,\chi)}{L(0,\chi)}

    \displaystyle + \sum_n \frac{x^{1-2n}}{2n-1}.

  • (ii) If {\chi(-1)=+1}, show that

    \displaystyle \sum_{n \leq x} \Lambda(n) \chi(n) = - \lim_{T \rightarrow \infty} \sum_{\rho: |\hbox{Im}(\rho)| \leq T} \frac{x^\rho}{\rho} - \log x + b(\chi)

    \displaystyle + \sum_n \frac{x^{-2n}}{2n}

    for some quantity {b(\chi)} depending only on {\chi}.

Exercise 57 Let {f: (0,+\infty) \rightarrow {\bf R}} be the function {f(t) := \frac{1}{e^t-1}-\frac{1}{t}} from Exercise 67 of Notes 2. Let {\chi} be a non-principal Dirichlet character of conductor {q}. Use the twisted Poisson formula to show that

\displaystyle \sum_n \chi(n) (f(\frac{n}{2x}) - f(\frac{n}{x})) \ll \sqrt{q}

for any {x \geq q}, where the sum is in the conditionally convergent sense. (You may need to smoothly truncate the function {f(\frac{t}{2}) - f(t)} before applying the twisted Poisson formula.) Use this and the argument from the previous exercise to establish the bound {L(1,\chi) \gg q^{-1/2}}.

— 6. Appendix: Gauss sum for real primitive characters —

The material here is not needed elsewhere in this course, or even in this set of notes, but I am including it because it is a very pretty piece of mathematics; it is also the very first hint of a much deeper connection between automorphic forms and Galois theory known as the Langlands program, which I will not be able to discuss further here.

Suppose that {\chi} is a real primitive character. Then from (50), {\varepsilon(\chi)} must be either {+1} or {-1}. Applying Corollary 4 with {s=1/2}, we see that

\displaystyle \xi(1/2,\chi) = \varepsilon(\chi) \xi(1/2,\chi)

which strongly suggests that {\varepsilon(\chi)=+1}, but one has to prevent vanishing of {\xi(1/2,\chi)} (or equivalently, {L(1/2,\chi)}). (A similar argument using (52) would also give {\varepsilon(\chi)=+1} if one could somehow prevent vanishing of {\vartheta_\kappa(i,\chi)}.) While it has been conjectured (by Chowla) that {L(1/2,\chi)} is never zero (and should in fact be positive), this is still not proven unconditionally; see this preprint of Fiorilli for recent work in this direction. (Indeed, understanding the vanishing of {L}-functions at the central point {s=1/2} is a very deep problem, as attested to by the difficulty of the Birch and Swinnerton-Dyer conjecture.) Nevertheless we still have the following result:

Theorem 58 (Gauss sum for real primitive characters) One has {\varepsilon(\chi)=+1} for any real primitive character {\chi}.

This result is surprisingly tricky to prove. The exercises below give one such proof, essentially due to Dirichlet. First we reduce to quadratic characters modulo a prime:

Exercise 59 (Classification of real primitive characters)

  • (i) Let {q_1,q_2} be coprime natural numbers. Show that if {\chi_1, \chi_2} are real primitive characters of conductors {q_1, q_2} respectively, then {\chi_1 \chi_2} is a real primitive character of conductor {q_1 q_2}, and that {\varepsilon(\chi_1 \chi_2) = \varepsilon(\chi_1) \varepsilon(\chi_2)}.
  • (ii) Conversely, if {q_1,q_2} are coprime natural numbers and {\chi} is a real primitive character of conductor {q_1 q_2}, show that there exist unique real primitive characters {\chi_1, \chi_2} of conductors {q_1, q_2} respectively such that {\chi = \chi_1 \chi_2}. (Hint: use the Chinese remainder theorem to identify {({\bf Z}/q_1q_2{\bf Z})^\times} with {({\bf Z}/q_1{\bf Z})^\times \times ({\bf Z}/q_2{\bf Z})^\times}.)
  • (iii) If {p} is an odd prime and {j} is a natural number, show that an element of {({\bf Z}/p^j{\bf Z})^\times} is a quadratic residue if and only if it is a quadratic residue after reduction to {({\bf Z}/p{\bf Z})^\times}. Conclude that there are no real primitive characters of conductor {p^j} if {j > 1}, and the only real primitive character of conductor {p} is the quadratic character {n \mapsto (\frac{n}{p})}.
  • (iv) If {j \geq 3}, show that an element of {({\bf Z}/2^j{\bf Z})^\times} is a quadratic residue if and only if it is a quadratic residue after reduction to {({\bf Z}/8{\bf Z})^\times}. Conclude that there are no real primitive characters of conductor {2^j} if {j > 3}, and show that there is one such character of conductor {4}, two characters of conductor {8}, and no characters of conductor {2}. Verify that {\varepsilon(\chi)=1} for each of these characters.

In view of this exercise and the fundamental theorem of arithmetic, we see that to verify Theorem 58, it suffices to do so when {\chi(n) = (\frac{n}{p})} for some odd prime {p}. This can be achieved using the functional equation (30) for the theta function {\vartheta} defined in (29):

Exercise 60 (Landsberg-Schaar relation and its consequences) Let {p} be an odd prime, and let {\chi(n) = ( \frac{n}{p} )} be the quadratic character to modulus {p}.

  • (i) Show that {\tau(\chi) = \sum_{n \in {\bf Z}/p{\bf Z}} e( n^2 / p )}. (Hint: rewrite both sides in terms of the sum of {e(n/p)} over quadratic residues or non-residues {n}.)
  • (ii) For any natural numbers {p, q}, establish the Landsberg-Schaar relation

    \displaystyle \frac{1}{\sqrt{p}} \sum_{n \in {\bf Z}/p{\bf Z}} e( n^2 q / p ) = \frac{e^{\pi i/4}}{\sqrt{2q}} \sum_{m \in {\bf Z}/2q{\bf Z}} e( - m^2 p / 4q ) \ \ \ \ \ (56)

    by using the functional equation (30) with {z = 2q/p + i \varepsilon} and sending {\varepsilon \rightarrow 0^+}.

  • (iii) By using the {q=1} case of the Landsberg-Schaar relation, show that {\chi(-1)} is equal to {1} when {p = 1\ (4)} and {-1} when {p = 3\ (4)}, and that {\varepsilon(\chi)=+1}.
  • (iv) By applying the Landsberg-Schaar relation with {q} an odd prime distinct from {p}, establish the law of quadratic reciprocity

    \displaystyle (\frac{q}{p}) (\frac{p}{q}) = (-1)^{(p-1)(q-1)/4}.
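
The relation (56) is straightforward to verify numerically for small {p,q}:

```python
import cmath, math

def e(x):
    return cmath.exp(2j * math.pi * x)

def landsberg_schaar_gap(p, q):
    """difference of the two sides of the Landsberg-Schaar relation (56)."""
    lhs = sum(e(n * n * q / p) for n in range(p)) / math.sqrt(p)
    rhs = cmath.exp(1j * math.pi / 4) / math.sqrt(2 * q) \
        * sum(e(-m * m * p / (4 * q)) for m in range(2 * q))
    return abs(lhs - rhs)

for p, q in [(3, 1), (4, 1), (5, 1), (5, 2), (7, 3), (9, 4), (11, 7)]:
    assert landsberg_schaar_gap(p, q) < 1e-9
```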

Exercise 61 Define a fundamental discriminant to be an integer {D} that is the discriminant of a quadratic number field; recall from Supplement 1 that such {D} take the form {-4d} if {d} is a squarefree integer with {d=1,2\ (4)}, or {-d} if {d} is a squarefree integer with {d=3\ (4)}. (Supplement 1 focused primarily on the negative discriminant case when {d>0}, but the above statement also holds for positive discriminant.) Show that if {D} is a fundamental discriminant, then {n \mapsto ( \frac{D}{n} )} is a primitive real character, where {( \frac{D}{n} )} is the Kronecker symbol. Conversely, show that all primitive real characters arise in this fashion.