
We return to the study of the Riemann zeta function ${\zeta(s)}$, focusing now on the task of upper bounding the size of this function within the critical strip; as seen in Exercise 43 of Notes 2, such upper bounds can lead to zero-free regions for ${\zeta}$, which in turn lead to improved estimates for the error term in the prime number theorem.

In equation (21) of Notes 2 we obtained the somewhat crude estimates

$\displaystyle \zeta(s) = \sum_{n \leq x} \frac{1}{n^s} - \frac{x^{1-s}}{1-s} + O( \frac{|s|}{\sigma} \frac{1}{x^\sigma} ) \ \ \ \ \ (1)$

for any ${x > 0}$ and ${s = \sigma+it}$ with ${\sigma>0}$ and ${s \neq 1}$. Setting ${x=1}$, we obtained the crude estimate

$\displaystyle \zeta(s) = \frac{1}{s-1} + O( \frac{|s|}{\sigma} )$

in this region. In particular, if ${0 < \varepsilon \leq \sigma \ll 1}$ and ${|t| \gg 1}$ then we had ${\zeta(s) = O_\varepsilon( |t| )}$. Using the functional equation and the Hadamard three lines lemma, we can improve this to ${\zeta(s) \ll_\varepsilon |t|^{\frac{1-\sigma}{2}+\varepsilon}}$; see Supplement 3.
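As a quick numerical sanity check of (1) (a sketch using only the Python standard library; the specific parameters are arbitrary), one can evaluate the truncated approximation at two different cutoffs ${x}$ and confirm that the two values agree to within the stated error budget:

```python
def zeta_approx(s, x):
    """Truncated approximation to zeta(s) from equation (1):
    sum_{n <= x} n^{-s} - x^{1-s}/(1-s)."""
    partial = sum(n ** -s for n in range(1, int(x) + 1))
    return partial - x ** (1 - s) / (1 - s)

s = 0.5 + 5j          # a sample point in the critical strip (sigma = 1/2)
sigma = s.real

# Equation (1) predicts an error of O(|s| / (sigma * x^sigma)) for each
# cutoff, so the two approximations should differ by at most (a constant
# times) the sum of the two error terms.
x1, x2 = 10 ** 4, 10 ** 6
v1, v2 = zeta_approx(s, x1), zeta_approx(s, x2)
err_budget = abs(s) / sigma * (x1 ** -sigma + x2 ** -sigma)
print(abs(v1 - v2), err_budget)
```

In practice the actual discrepancy is far below the budget, since the true error term (from Euler-Maclaurin) is of size roughly ${\frac{1}{2} x^{-\sigma}}$ here.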

Now we seek better upper bounds on ${\zeta}$. We will reduce the problem to that of bounding certain exponential sums, in the spirit of Exercise 33 of Supplement 3:

Proposition 1 Let ${s = \sigma+it}$ with ${0 < \varepsilon \leq \sigma \ll 1}$ and ${|t| \gg 1}$. Then

$\displaystyle \zeta(s) \ll_\varepsilon \log(2+|t|) \sup_{1 \leq M \leq N \ll |t|} N^{1-\sigma} |\frac{1}{N} \sum_{N \leq n < N+M} e( -\frac{t}{2\pi} \log n)|$

where ${e(x) := e^{2\pi i x}}$.

Proof: We fix a smooth function ${\eta: {\bf R} \rightarrow {\bf C}}$ with ${\eta(u)=1}$ for ${u \leq -1}$ and ${\eta(u)=0}$ for ${u \geq 1}$, and allow implied constants to depend on ${\eta}$. Let ${s=\sigma+it}$ with ${\varepsilon \leq \sigma \ll 1}$. From Exercise 33 of Supplement 3, we have

$\displaystyle \zeta(s) = \sum_n \frac{1}{n^s} \eta( \log n - \log C|t| ) + O_\varepsilon( 1 )$

for some sufficiently large absolute constant ${C}$. By dyadic decomposition, we thus have

$\displaystyle \zeta(s) \ll_{\varepsilon} 1 + \log(2+|t|) \sup_{1 \leq N \ll |t|} |\sum_{N \leq n < 2N} \frac{1}{n^s} \eta( \log n - \log C|t| )|.$

We can absorb the first term into the second using the ${N=1}$ case of the supremum. Writing ${\frac{1}{n^s} \eta( \log n - \log C|t| ) = N^{-\sigma} e( - \frac{t}{2\pi} \log n ) F_N(n)}$, where

$\displaystyle F_N(n) := (N/n)^\sigma \eta(\log n - \log C|t| ),$

it thus suffices to show that

$\displaystyle \sum_{N \leq n < 2N} e(-\frac{t}{2\pi} \log n) F_N(n) \ll \sup_{1 \leq M \leq N} |\sum_{N \leq n < N+M} e(-\frac{t}{2\pi} \log n)|$

for each ${N}$. But from the fundamental theorem of calculus, the left-hand side can be written as

$\displaystyle F_N(2N) \sum_{N \leq n < 2N} e(-\frac{t}{2\pi} \log n)$

$\displaystyle - \int_0^{N} (\sum_{N \leq n < N+M} e(-\frac{t}{2\pi} \log n)) F'_N(N+M)\ dM$

and the claim then follows from the triangle inequality and a routine calculation. $\Box$
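The fundamental theorem of calculus step here is just partial summation, and it can be sanity-checked numerically: since the inner sum is constant in ${M}$ between consecutive integers, the integral can be evaluated exactly. Below is a sketch in Python; the phase and the weight are arbitrary stand-ins for the specific ${f, F_N}$ of the proof:

```python
import cmath, math

N = 50
t = 3000.0

def e(x):
    """e(x) = exp(2 pi i x)."""
    return cmath.exp(2j * math.pi * x)

def f(n):
    """Sample phase, analogous to -(t / 2 pi) log n."""
    return -t / (2 * math.pi) * math.log(n)

def F(x):
    """Sample smooth weight playing the role of F_N."""
    return (N / x) ** 0.7

a = {n: e(f(n)) for n in range(N, 2 * N)}

# Left-hand side: sum_{N <= n < 2N} e(f(n)) F(n).
lhs = sum(a[n] * F(n) for n in range(N, 2 * N))

# Right-hand side: F(2N) sum_n a_n - int_0^N (sum_{N <= n < N+M} a_n) F'(N+M) dM.
# The inner sum equals S_k := sum_{N <= n <= N+k} a_n for M in (k, k+1], so
# the integral is exactly sum_k S_k (F(N+k+1) - F(N+k)).
S = 0
integral = 0
for k in range(N):
    S += a[N + k]
    integral += S * (F(N + k + 1) - F(N + k))
rhs = F(2 * N) * sum(a.values()) - integral

print(abs(lhs - rhs))   # vanishes up to rounding
```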

We are thus interested in getting good bounds on the sum ${\sum_{N \leq n < N+M} e( -\frac{t}{2\pi} \log n )}$. More generally, we consider normalised exponential sums of the form

$\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ \ \ \ \ (2)$

where ${I \subset {\bf R}}$ is an interval of length at most ${N}$ for some ${N \geq 1}$, and ${f: {\bf R} \rightarrow {\bf R}}$ is a smooth function. We will assume smoothness estimates of the form

$\displaystyle |f^{(j)}(x)| = \exp( O(j^2) ) \frac{T}{N^j} \ \ \ \ \ (3)$

for some ${T>0}$, all ${x \in I}$, and all ${j \geq 1}$, where ${f^{(j)}}$ is the ${j}$-fold derivative of ${f}$; in the case ${f(x) := -\frac{t}{2\pi} \log x}$, ${I \subset [N,2N]}$ of interest for the Riemann zeta function, we easily verify that these estimates hold with ${T := |t|}$. (One can consider exponential sums under more general hypotheses than (3), but the hypotheses here are adequate for our needs.) We do not bound the zeroth derivative ${f^{(0)}=f}$ of ${f}$ directly, but it would not be natural to do so in any event, since the magnitude of the sum (2) is unaffected if one adds an arbitrary constant to ${f(n)}$.
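For concreteness, here is the easy verification of (3) in the logarithmic case (a routine calculation, spelled out here for convenience). Differentiating ${f(x) = -\frac{t}{2\pi} \log x}$ repeatedly gives

$\displaystyle f^{(j)}(x) = -\frac{t}{2\pi} \cdot \frac{(-1)^{j-1} (j-1)!}{x^j},$

so for ${x \in I \subset [N,2N]}$ we have

$\displaystyle |f^{(j)}(x)| \leq \frac{|t|}{2\pi} \frac{(j-1)!}{N^j} \leq \exp( O(j^2) ) \frac{|t|}{N^j},$

since ${(j-1)! \leq j^j = \exp( j \log j ) \leq \exp(j^2)}$, which is (3) with ${T = |t|}$.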

The trivial bound for (2) is

$\displaystyle \frac{1}{N} \sum_{n \in I} e(f(n)) \ll 1 \ \ \ \ \ (4)$

and we will seek to obtain significant improvements to this bound. Pseudorandomness heuristics predict a bound of ${O_\varepsilon(N^{-1/2+\varepsilon})}$ for (2) for any ${\varepsilon>0}$ if ${T = O(N^{O(1)})}$; this assertion (a special case of the exponent pair hypothesis) would have many consequences (for instance, inserting it into Proposition 1 soon yields the Lindelöf hypothesis), but is unfortunately quite far from resolution with known methods. However, we can obtain weaker gains of the form ${O(N^{1-c_K})}$ when ${T \ll N^K}$ and ${c_K > 0}$ depends on ${K}$. We present two such results here, which perform well for small and large values of ${K}$ respectively:
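To see the sort of cancellation one hopes to exploit, one can compute the normalised sum (2) directly for the zeta phase ${f(n) = -\frac{t}{2\pi}\log n}$ (a quick numerical illustration; the specific parameters are arbitrary):

```python
import cmath, math

def normalised_sum(t, N):
    """|1/N sum_{N <= n < 2N} e(-(t / 2 pi) log n)| = |1/N sum_n n^{-it}|."""
    total = sum(cmath.exp(-1j * t * math.log(n)) for n in range(N, 2 * N))
    return abs(total) / N

N = 1000
t = 10 ** 4          # here T = |t|, and T / N^2 = 0.01
val = normalised_sum(t, N)
print(val)           # far below the trivial bound of 1
```

With these parameters the observed value is a couple of orders of magnitude below the trivial bound (4), consistent with (for instance) the ${k=2}$ van der Corput estimate below.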

Theorem 2 Let ${2 \leq N \ll T}$, let ${I}$ be an interval of length at most ${N}$, and let ${f: I \rightarrow {\bf R}}$ be a smooth function obeying (3) for all ${j \geq 1}$ and ${x \in I}$.

• (i) (van der Corput estimate) For any natural number ${k \geq 2}$, one has

$\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll (\frac{T}{N^k})^{\frac{1}{2^k-2}} \log^{1/2} (2+T). \ \ \ \ \ (5)$

• (ii) (Vinogradov estimate) If ${k}$ is a natural number and ${T \leq N^{k}}$, then

$\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll N^{-c/k^2} \ \ \ \ \ (6)$

for some absolute constant ${c>0}$.

The factor of ${\log^{1/2} (2+T)}$ can be removed by a more careful argument, but we will not need to do so here as we are willing to lose powers of ${\log T}$. The estimate (6) is superior to (5) when ${T \sim N^K}$ for ${K}$ large, since (after optimising in ${k}$) (5) gives a gain of the form ${N^{-c/2^{K}}}$ over the trivial bound, while (6) gives ${N^{-c/K^2}}$. We have not attempted to obtain completely optimal estimates here, settling for a relatively simple presentation that still gives good bounds on ${\zeta}$, and there are a wide variety of additional exponential sum estimates beyond the ones given here; see Chapter 8 of Iwaniec-Kowalski, or Chapters 3-4 of Montgomery, for further discussion.

We now briefly discuss the strategies of proof of Theorem 2. Both parts of the theorem proceed by treating ${f}$ like a polynomial of degree roughly ${k}$; in the case of (ii), this is done explicitly via Taylor expansion, whereas for (i) it is only at the level of analogy. Both parts of the theorem then try to “linearise” the phase to make it a linear function of the summands (actually in part (ii), it is necessary to introduce an additional variable and make the phase a bilinear function of the summands). The van der Corput estimate achieves this linearisation by squaring the exponential sum about ${k}$ times, which is why the gain is only exponentially small in ${k}$. The Vinogradov estimate achieves linearisation by raising the exponential sum to a significantly smaller power – on the order of ${k^2}$ – by using Hölder’s inequality in combination with the fact that the discrete curve ${\{ (n,n^2,\dots,n^k): n \in \{1,\dots,M\}\}}$ becomes roughly equidistributed in the box ${\{ (a_1,\dots,a_k): a_j = O( M^j ) \}}$ after taking the sumset of about ${k^2}$ copies of this curve. This latter fact has a precise formulation, known as the Vinogradov mean value theorem, and its proof is the most difficult part of the argument, relying on using a “${p}$-adic” version of this equidistribution to reduce the claim at a given scale ${M}$ to a smaller scale ${M/p}$ with ${p \sim M^{1/k}}$, and then proceeding by induction.
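Both linearisation mechanisms can be illustrated concretely (a sketch only: the phase below is an arbitrary quadratic, and the mean value computation is the exactly solvable two-copy ${k=2}$ case rather than the general Vinogradov mean value theorem):

```python
import cmath, math
from itertools import product

def e(x):
    return cmath.exp(2j * math.pi * x)

# van der Corput / Weyl differencing: squaring the sum linearises the phase.
# Expanding the square gives the exact identity
# |sum_{n in I} e(f(n))|^2 = sum_{|h| < N} sum_{n, n+h in I} e(f(n+h) - f(n)).
f = lambda n: 0.317 * n * n + 0.123 * n     # arbitrary quadratic phase
I = range(10, 40)
N = len(I)
lhs = abs(sum(e(f(n)) for n in I)) ** 2
rhs = sum(e(f(n + h) - f(n))
          for h in range(-N + 1, N)
          for n in I if (n + h) in I)

# Vinogradov mean value, simplest case (two copies of the curve (n, n^2)):
# the number of solutions of n1 + n2 = m1 + m2, n1^2 + n2^2 = m1^2 + m2^2
# with 1 <= n_i, m_i <= M is exactly 2 M^2 - M, i.e. only the "diagonal"
# solutions {n1, n2} = {m1, m2} occur.
M = 12
count = sum(1 for n1, n2, m1, m2 in product(range(1, M + 1), repeat=4)
            if n1 + n2 == m1 + m2 and n1 ** 2 + n2 ** 2 == m1 ** 2 + m2 ** 2)
print(abs(lhs - rhs), count, 2 * M * M - M)
```

For larger ${k}$ the diagonal no longer accounts for all solutions, and controlling the excess is precisely the content of the mean value theorem.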

One can combine Theorem 2 with Proposition 1 to obtain various bounds on the Riemann zeta function:

Exercise 3 (Subconvexity bound)

• (i) Show that ${\zeta(\frac{1}{2}+it) \ll (1+|t|)^{1/6} \log^{O(1)}(1+|t|)}$ for all ${t \in {\bf R}}$. (Hint: use the ${k=3}$ case of the van der Corput estimate.)
• (ii) For any ${0 < \sigma < 1}$, show that ${\zeta(\sigma+it) \ll (1+|t|)^{\max( \frac{1-\sigma}{3}, \frac{1}{2} - \frac{2\sigma}{3}) + o(1)}}$ as ${|t| \rightarrow \infty}$.
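For orientation on (i), here is a sketch of the optimisation only (not a full solution): inserting the ${k=3}$ case of (5) into Proposition 1 at ${\sigma = 1/2}$, the scale ${N}$ contributes

$\displaystyle N^{1/2} \min\left( 1, \left(\frac{T}{N^3}\right)^{1/6} \log^{1/2}(2+T) \right), \quad T := |t|.$

For ${N \leq T^{1/3}}$ the trivial bound gives ${N^{1/2} \leq T^{1/6}}$, while for ${T^{1/3} \leq N \ll T}$ one has ${N^{1/2} (T/N^3)^{1/6} = T^{1/6}}$, so every scale contributes ${O( T^{1/6} \log^{O(1)}(2+T) )}$, matching the claimed exponent.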

Exercise 4 Let ${t}$ be such that ${|t| \geq 100}$, and let ${\sigma \geq 1/2}$.

• (i) (Littlewood bound) Use the van der Corput estimate to show that ${\zeta(\sigma+it) \ll \log^{O(1)} |t|}$ whenever ${\sigma \geq 1 - O( \frac{(\log\log |t|)^2}{\log |t|} )}$.
• (ii) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that ${\zeta(\sigma+it) \ll \log^{O(1)} |t|}$ whenever ${\sigma \geq 1 - O( \frac{(\log\log |t|)^{2/3}}{\log^{2/3} |t|} )}$.

As noted in Exercise 43 of Notes 2, the Vinogradov-Korobov bound leads to the zero-free region ${\{ \sigma+it: \sigma > 1 - c \frac{1}{(\log |t|)^{2/3} (\log\log |t|)^{1/3}}; |t| \geq 100 \}}$, which in turn leads to the prime number theorem with error term

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O\left( x \exp\left( - c \frac{\log^{3/5} x}{(\log\log x)^{1/5}} \right) \right)$

for ${x > 100}$. If one uses the weaker Littlewood bound instead, one obtains the narrower zero-free region

$\displaystyle \{ \sigma+it: \sigma > 1 - c \frac{\log\log|t|}{\log |t|}; |t| \geq 100 \}$

(which is only slightly wider than the classical zero-free region) and an error term

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x \exp( - c \sqrt{\log x \log\log x} ) )$

in the prime number theorem.

Exercise 5 (Vinogradov-Korobov in arithmetic progressions) Let ${\chi}$ be a non-principal character of modulus ${q}$.

• (i) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that ${L(\sigma+it,\chi) \ll \log^{O(1)}(q|t|)}$ whenever ${|t| \geq 100}$ and

$\displaystyle \sigma \geq 1 - O( \min( \frac{\log\log(q|t|)}{\log q}, \frac{(\log\log(q|t|))^{2/3}}{\log^{2/3} |t|} ) ).$

(Hint: use the Vinogradov estimate and a change of variables to control ${\sum_{n \in I: n = a\ (q)} \exp( -it \log n)}$ for various intervals ${I}$ of length at most ${N}$ and residue classes ${a\ (q)}$, in the regime ${N \geq q^2}$ (say). For ${N < q^2}$, do not try to capture any cancellation and just use the triangle inequality instead.)

• (ii) Obtain a zero-free region

$\displaystyle \{ \sigma+it: \sigma > 1 - c \min( \frac{1}{(\log |t|)^{2/3} (\log\log |t|)^{1/3}}, \frac{1}{\log q} ); |t| \geq 100 \}$

for ${L(s,\chi)}$, for some (effective) absolute constant ${c>0}$.

• (iii) Obtain the prime number theorem in arithmetic progressions with error term

$\displaystyle \sum_{n \leq x: n = a\ (q)} \Lambda(n) = \frac{x}{\phi(q)} + O\left( x \exp\left( - c_A \frac{\log^{3/5} x}{(\log\log x)^{1/5}} \right) \right)$

whenever ${x > 100}$, ${q \leq \log^A x}$, ${a\ (q)}$ is primitive, and ${c_A>0}$ depends (ineffectively) on ${A}$.