
Mertens’ theorems are a set of classical estimates concerning the asymptotic distribution of the prime numbers:

Theorem 1 (Mertens’ theorems) In the asymptotic limit ${x \rightarrow \infty}$, we have

$\displaystyle \sum_{p\leq x} \frac{\log p}{p} = \log x + O(1), \ \ \ \ \ (1)$

$\displaystyle \sum_{p\leq x} \frac{1}{p} = \log \log x + O(1), \ \ \ \ \ (2)$

and

$\displaystyle \sum_{p\leq x} \log(1-\frac{1}{p}) = -\log \log x - \gamma + o(1) \ \ \ \ \ (3)$

where ${\gamma}$ is the Euler-Mascheroni constant, defined by requiring that

$\displaystyle 1 + \frac{1}{2} + \ldots + \frac{1}{n} = \log n + \gamma + o(1) \ \ \ \ \ (4)$

in the limit ${n \rightarrow \infty}$.

The third theorem (3) is usually stated in exponentiated form

$\displaystyle \prod_{p \leq x} (1-\frac{1}{p}) = \frac{e^{-\gamma}+o(1)}{\log x},$

but in the logarithmic form (3) we see that it is strictly stronger than (2), in view of the asymptotic ${\log(1-\frac{1}{p}) = -\frac{1}{p} + O(\frac{1}{p^2})}$.
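All three estimates are easy to test numerically. The following Python snippet (an illustration only, not part of any proof; the choice ${x = 10^6}$ is arbitrary and the numerical value of ${\gamma}$ is hard-coded) computes the three left-hand sides over the primes up to ${10^6}$ and subtracts the main terms:

```python
import math

# Numerical illustration of Mertens' theorems (1), (2), (3) at x = 10^6.

def primes_up_to(x):
    """Return the primes up to x via the sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytes(len(range(i * i, x + 1, i)))
    return [i for i in range(2, x + 1) if sieve[i]]

x = 10 ** 6
ps = primes_up_to(x)
gamma = 0.5772156649015329  # Euler-Mascheroni constant

first = sum(math.log(p) / p for p in ps) - math.log(x)       # (1): stays O(1)
second = sum(1.0 / p for p in ps) - math.log(math.log(x))    # (2): stays O(1)
third = sum(math.log(1 - 1.0 / p) for p in ps) \
        + math.log(math.log(x)) + gamma                      # (3): o(1)

print(first, second, third)
```

The first two differences settle down to bounded constants (roughly ${-1.33}$ and ${0.26}$, which are themselves classical constants), while the third tends to zero, in accordance with (3).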

Remarkably, these theorems can be proven without the assistance of the prime number theorem

$\displaystyle \sum_{p \leq x} 1 = \frac{x}{\log x} + o( \frac{x}{\log x} ),$

which was proven about two decades after Mertens’ work. (But one can certainly use versions of the prime number theorem with good error term, together with summation by parts, to obtain good estimates on the various errors in Mertens’ theorems.) Roughly speaking, the reason for this is that Mertens’ theorems only require control on the Riemann zeta function ${\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}}$ in the neighbourhood of the pole at ${s=1}$, whereas (as discussed in this previous post) the prime number theorem requires control on the zeta function on (a neighbourhood of) the line ${\{ 1+it: t \in {\bf R} \}}$. Specifically, Mertens’ theorem is ultimately deduced from the Euler product formula

$\displaystyle \zeta(s) = \prod_p (1-\frac{1}{p^s})^{-1}, \ \ \ \ \ (5)$

valid in the region ${\hbox{Re}(s) > 1}$ (which is ultimately a Fourier-Dirichlet transform of the fundamental theorem of arithmetic), and the following crude asymptotics:

Proposition 2 (Simple pole) For ${s}$ sufficiently close to ${1}$ with ${\hbox{Re}(s) > 1}$, we have

$\displaystyle \zeta(s) = \frac{1}{s-1} + O(1) \ \ \ \ \ (6)$

and

$\displaystyle \zeta'(s) = \frac{-1}{(s-1)^2} + O(1).$

Proof: For ${s}$ as in the proposition, we have ${\frac{1}{n^s} = \frac{1}{t^s} + O(\frac{1}{n^2})}$ for any natural number ${n}$ and ${n \leq t \leq n+1}$, and hence

$\displaystyle \frac{1}{n^s} = \int_n^{n+1} \frac{1}{t^s}\ dt + O( \frac{1}{n^2} ).$

Summing in ${n}$ and using the identity ${\int_1^\infty \frac{1}{t^s}\ dt = \frac{1}{s-1}}$, we obtain the first claim. Similarly, we have

$\displaystyle \frac{-\log n}{n^s} = \int_n^{n+1} \frac{-\log t}{t^s}\ dt + O( \frac{\log n}{n^2} ),$

and by summing in ${n}$ and using the identity ${\int_1^\infty \frac{-\log t}{t^s}\ dt = \frac{-1}{(s-1)^2}}$ (the derivative of the previous identity) we obtain the claim. $\Box$
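One can see this pole numerically by implementing exactly the comparison used in the proof: truncate the sum defining ${\zeta(s)}$ at some large ${N}$ and replace the tail by the integral ${\int_{N+1}^\infty \frac{1}{t^s}\ dt}$. In the sketch below (real ${s > 1}$ only; the value of ${\gamma}$ is hard-coded), the difference ${\zeta(s) - \frac{1}{s-1}}$ stays bounded and visibly approaches Euler's constant ${\gamma}$:

```python
# Numerical check of Proposition 2 (real s > 1 only; illustration, not proof):
# approximate zeta(s) by a truncated sum plus the integral tail from the proof.

def zeta(s, N=10 ** 6):
    head = sum(n ** (-s) for n in range(1, N + 1))
    tail = (N + 1) ** (1 - s) / (s - 1)  # integral of t^(-s) from N+1 to infinity
    return head + tail                   # error is O(1/N), as in the proof above

gamma = 0.5772156649015329
for s in (1.1, 1.01, 1.001):
    # zeta(s) - 1/(s-1) stays bounded, and in fact tends to gamma as s -> 1+
    print(s, zeta(s) - 1 / (s - 1))
```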

The first two of Mertens’ theorems (1), (2) are relatively easy to prove, and imply the third theorem (3) except with ${\gamma}$ replaced by an unspecified absolute constant. To get the specific constant ${\gamma}$ requires a little bit of additional effort. From (4), one might expect that the appearance of ${\gamma}$ arises from the refinement

$\displaystyle \zeta(s) = \frac{1}{s-1} + \gamma + O(|s-1|) \ \ \ \ \ (7)$

that one can obtain to (6). However, it turns out that the connection is not so much with the zeta function, but with the Gamma function, and specifically with the identity ${\Gamma'(1) = - \gamma}$ (which is of course related to (7) through the functional equation for zeta, but can be proven without any reference to zeta functions). More specifically, we have the following asymptotic for the exponential integral:

Proposition 3 (Exponential integral asymptotics) For sufficiently small ${\epsilon > 0}$, one has

$\displaystyle \int_\epsilon^\infty \frac{e^{-t}}{t}\ dt = \log \frac{1}{\epsilon} - \gamma + O(\epsilon).$

A routine integration by parts shows that this asymptotic is equivalent to the identity

$\displaystyle \int_0^\infty e^{-t} \log t\ dt = -\gamma$

which is the identity ${\Gamma'(1)=-\gamma}$ mentioned previously.
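This asymptotic is also easy to check numerically. After the substitution ${t = \epsilon e^u}$, the integrand ${\frac{e^{-t}}{t}\ dt}$ becomes the smooth bounded function ${\exp(-\epsilon e^u)\ du}$, which is comfortably handled by Simpson's rule (a sketch; the cutoff ${T=50}$ and the step count are ad hoc choices, and the value of ${\gamma}$ is hard-coded):

```python
import math

# Numerical check of Proposition 3: after substituting t = eps * e^u,
# the integral of e^{-t}/t over [eps, T] becomes the integral of
# exp(-eps * e^u) over [0, log(T/eps)], evaluated here by Simpson's rule.

def exp_integral(eps, T=50.0, steps=20000):
    U = math.log(T / eps)
    h = U / steps
    f = lambda u: math.exp(-eps * math.exp(u))
    total = f(0.0) + f(U)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3  # the tail beyond T = 50 is below e^{-50}, negligible

gamma = 0.5772156649015329
for eps in (0.1, 0.01, 0.001):
    # the discrepancy is of size O(eps), in fact roughly eps itself
    print(eps, exp_integral(eps) - (math.log(1 / eps) - gamma))
```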

Proof: We start by using the identity ${\frac{1}{m} = \int_0^1 x^{m-1}\ dx}$ to express the harmonic series ${H_n := 1+\frac{1}{2}+\ldots+\frac{1}{n}}$ as

$\displaystyle H_n = \int_0^1 1 + x + \ldots + x^{n-1}\ dx$

or on summing the geometric series

$\displaystyle H_n = \int_0^1 \frac{1-x^n}{1-x}\ dx.$

Since ${\int_0^{1-1/n} \frac{dx}{1-x} = \log n}$, we thus have

$\displaystyle H_n - \log n = \int_0^1 \frac{1_{[1-1/n,1]}(x) - x^n}{1-x}\ dx;$

making the change of variables ${x = 1-\frac{t}{n}}$, this becomes

$\displaystyle H_n - \log n = \int_0^n \frac{1_{[0,1]}(t) - (1-\frac{t}{n})^n}{t}\ dt.$

As ${n \rightarrow \infty}$, ${\frac{1_{[0,1]}(t) - (1-\frac{t}{n})^n}{t}}$ converges pointwise to ${\frac{1_{[0,1]}(t) - e^{-t}}{t}}$ and is pointwise dominated by ${O( e^{-t} )}$. Taking limits as ${n \rightarrow \infty}$ using dominated convergence, we conclude that

$\displaystyle \gamma = \int_0^\infty \frac{1_{[0,1]}(t) - e^{-t}}{t}\ dt,$

or equivalently

$\displaystyle \int_0^\infty \frac{e^{-t} - 1_{[0,\epsilon]}(t)}{t}\ dt = \log \frac{1}{\epsilon} - \gamma.$

The claim then follows by bounding the ${\int_0^\epsilon}$ portion of the integral on the left-hand side. $\Box$
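The identity for ${\gamma}$ obtained in this proof can be confirmed numerically by splitting the integral at ${t=1}$, where the indicator changes value; both pieces are then well behaved (a sketch, with an ad hoc cutoff at ${t=50}$ for the infinite piece):

```python
import math

# Numerical check of gamma = int_0^infty (1_{[0,1]}(t) - e^{-t})/t dt,
# split as int_0^1 (1 - e^{-t})/t dt minus int_1^infty e^{-t}/t dt.

def simpson(f, a, b, n=10000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

# (1 - e^{-t})/t has a removable singularity at t = 0 with value 1
piece1 = simpson(lambda t: (1 - math.exp(-t)) / t if t > 0 else 1.0, 0.0, 1.0)
piece2 = simpson(lambda t: math.exp(-t) / t, 1.0, 50.0)  # tail < e^{-50}

gamma_num = piece1 - piece2
print(gamma_num)  # close to 0.5772156649...
```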

Below the fold I would like to record how Proposition 2 and Proposition 3 imply Theorem 1; the computations are utterly standard, and can be found in most analytic number theory texts, but I wanted to write them down for my own benefit (I always keep forgetting, in particular, how the third of Mertens’ theorems is proven).

The Riemann zeta function $\zeta(s)$, defined for $\hbox{Re}(s) > 1$ by the formula

$\displaystyle \zeta(s) := \sum_{n \in {\Bbb N}} \frac{1}{n^s}$ (1)

where ${\Bbb N} = \{1,2,\ldots\}$ are the natural numbers, and extended meromorphically to other values of s by analytic continuation, obeys the remarkable functional equation

$\displaystyle \Xi(s) = \Xi(1-s)$ (2)

where

$\displaystyle \Xi(s) := \Gamma_\infty(s) \zeta(s)$ (3)

is the Riemann Xi function,

$\displaystyle \Gamma_\infty(s) := \pi^{-s/2} \Gamma(s/2)$ (4)

is the Gamma factor at infinity, and the Gamma function $\Gamma(s)$ is defined for $\hbox{Re}(s) > 0$ by

$\displaystyle \Gamma(s) := \int_0^\infty e^{-t} t^s\ \frac{dt}{t}$ (5)

and extended meromorphically to other values of s by analytic continuation.
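As a quick sanity check of (5), one can compare a straightforward quadrature against Python's built-in math.gamma (an illustration only; we restrict to real $s > 1$ so that the integrand $e^{-t} t^{s-1}$ is continuous down to $t = 0$, and the ad hoc cutoff $T = 60$ exploits the exponential decay):

```python
import math

# Compare the integral representation (5) of Gamma, computed by Simpson's
# rule on [0, T], with the built-in math.gamma (real s > 1 only).

def gamma_integral(s, T=60.0, n=60000):
    h = T / n
    f = lambda t: math.exp(-t) * t ** (s - 1) if t > 0 else 0.0
    total = f(0.0) + f(T)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3

for s in (2.0, 3.5, 5.0):
    print(s, gamma_integral(s), math.gamma(s))  # the two columns agree closely
```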

There are many proofs known of the functional equation (2).  One of them (dating back to Riemann himself) relies on the Poisson summation formula

$\displaystyle \sum_{a \in {\Bbb Z}} f_\infty(a t_\infty) = \frac{1}{|t_\infty|_\infty} \sum_{a \in {\Bbb Z}} \hat f_\infty(a/t_\infty)$ (6)

for the reals $k_\infty := {\Bbb R}$ and $t_\infty \in k_\infty^*$, where $f_\infty$ is a Schwartz function, $|t_\infty|_\infty := |t_\infty|$ is the usual Archimedean absolute value on $k_\infty$, and

$\displaystyle \hat f_\infty(\xi_\infty) := \int_{k_\infty} e_\infty(-x_\infty \xi_\infty) f_\infty(x_\infty)\ dx_\infty$ (7)

is the Fourier transform on $k_\infty$, with $e_\infty(x_\infty) := e^{2\pi i x_\infty}$ being the standard character $e_\infty: k_\infty \to S^1$ on $k_\infty$.  (The reason for this rather strange notation for the real line and its associated structures will be made clearer shortly.)  Applying this formula to the (Archimedean) Gaussian function

$\displaystyle g_\infty(x_\infty) := e^{-\pi |x_\infty|^2}$, (8)

which is its own (additive) Fourier transform, and then applying the multiplicative Fourier transform (i.e. the Mellin transform), one soon obtains (2).  (Riemann also had another proof of the functional equation relying primarily on contour integration, which I will not discuss here.)  One can “clean up” this proof a bit by replacing the Gaussian by a Dirac delta function, although one now has to work formally and “renormalise” by throwing away some infinite terms.  (One can use the theory of distributions to make this latter approach rigorous, but I will not discuss this here.)  Note how this proof combines the additive Fourier transform with the multiplicative Fourier transform.  [Continuing with this theme, the Gamma function (5) is an inner product between an additive character $e^{-t}$ and a multiplicative character $t^s$, and the zeta function (1) can be viewed both additively, as a sum over n, or multiplicatively, as an Euler product.]
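Concretely, applying (6) to the Gaussian (8) gives the Jacobi theta transformation $\sum_a e^{-\pi a^2 t^2} = \frac{1}{t} \sum_a e^{-\pi a^2/t^2}$ for $t > 0$, which is easy to confirm numerically (a sketch; the truncation at $|a| \leq 100$ is far more than enough for the value of $t$ used):

```python
import math

# Poisson summation (6) applied to the self-dual Gaussian (8): for t > 0,
# sum_a exp(-pi (a t)^2) should equal (1/t) * sum_a exp(-pi (a/t)^2).

def gaussian_sum(t, N=100):
    """Sum of exp(-pi (a t)^2) over integers |a| <= N (rapidly convergent)."""
    return sum(math.exp(-math.pi * (a * t) ** 2) for a in range(-N, N + 1))

t = 0.7
lhs = gaussian_sum(t)
rhs = (1 / t) * gaussian_sum(1 / t)
print(lhs, rhs)  # the two sides agree to near machine precision
```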

In the famous thesis of Tate, the above argument was reinterpreted using the language of the adele ring ${\Bbb A}$, with the Poisson summation formula (6) on $k_\infty$ replaced by the Poisson summation formula

$\displaystyle \sum_{a \in k} f(a t) = \frac{1}{|t|_{\Bbb A}} \sum_{a \in k} \hat f(a/t)$ (9)

on ${\Bbb A}$, where $k = {\Bbb Q}$ is the rationals, $t$ is an idele (an invertible element of ${\Bbb A}$, with $|t|_{\Bbb A}$ denoting its adelic absolute value), and $f$ is now a Schwartz-Bruhat function on ${\Bbb A}$.  Applying this formula to the adelic (or global) Gaussian function $g(x) := g_\infty(x_\infty) \prod_p 1_{{\mathbb Z}_p}(x_p)$, which is its own Fourier transform, and then using the adelic Mellin transform, one again obtains (2).  Again, the proof can be cleaned up by replacing the Gaussian with a Dirac mass, at the cost of making the computations formal (or requiring the theory of distributions).

In this post I will write down both Riemann’s proof and Tate’s proof together (but omitting some technical details), to emphasise the fact that they are, in some sense, the same proof.  However, Tate’s proof gives a high-level clarity to the situation (in particular, explaining more adequately why the Gamma factor at infinity (4) fits seamlessly with the Riemann zeta function (1) to form the Xi function (3)), and allows one to generalise the functional equation relatively painlessly to other zeta-functions and L-functions, such as Dedekind zeta functions and Hecke L-functions.

[Note: the material here is very standard in modern algebraic number theory; the post here is partially for my own benefit, as most treatments of this topic in the literature tend to operate in far higher levels of generality than I would prefer.]
