We return to the study of the Riemann zeta function {\zeta(s)}, focusing now on the task of upper bounding the size of this function within the critical strip; as seen in Exercise 43 of Notes 2, such upper bounds can lead to zero-free regions for {\zeta}, which in turn lead to improved estimates for the error term in the prime number theorem.

In equation (21) of Notes 2 we obtained the somewhat crude estimates

\displaystyle \zeta(s) = \sum_{n \leq x} \frac{1}{n^s} - \frac{x^{1-s}}{1-s} + O( \frac{|s|}{\sigma} \frac{1}{x^\sigma} ) \ \ \ \ \ (1)

for any {x > 0} and {s = \sigma+it} with {\sigma>0} and {s \neq 1}. Setting {x=1}, we obtained the crude estimate

\displaystyle \zeta(s) = \frac{1}{s-1} + O( \frac{|s|}{\sigma} )

in this region. In particular, if {0 < \varepsilon \leq \sigma \ll 1} and {|t| \gg 1} then we had {\zeta(s) = O_\varepsilon( |t| )}. Using the functional equation and the Hadamard three lines lemma, we can improve this to {\zeta(s) \ll_\varepsilon |t|^{\frac{1-\sigma}{2}+\varepsilon}}; see Supplement 3.
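As a quick numerical sanity check of (1) (not part of the argument), one can compare the right-hand side of (1) against the known value {\zeta(2) = \pi^2/6} at the sample point {s=2}; the parameters below are purely illustrative.

```python
import math

# Sanity check of the truncated sum approximation (1) at the sample point
# s = 2 (so sigma = 2), where zeta(2) = pi^2/6 exactly.
def zeta_approx(s: float, x: int) -> float:
    # right-hand side of (1) without the error term
    return sum(n ** (-s) for n in range(1, x + 1)) - x ** (1 - s) / (1 - s)

s, x = 2, 1000
approx = zeta_approx(s, x)
exact = math.pi ** 2 / 6
err = abs(approx - exact)
print(err)  # the error term in (1) is O(|s|/sigma * x^{-sigma}) = O(x^{-2}) here
assert err < 1e-4
```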

Now we seek better upper bounds on {\zeta}. We will reduce the problem to that of bounding certain exponential sums, in the spirit of Exercise 33 of Supplement 3:

Proposition 1 Let {s = \sigma+it} with {0 < \varepsilon \leq \sigma \ll 1} and {|t| \gg 1}. Then

\displaystyle \zeta(s) \ll_\varepsilon \log(2+|t|) \sup_{1 \leq M \leq N \ll |t|} N^{1-\sigma} |\frac{1}{N} \sum_{N \leq n < N+M} e( -\frac{t}{2\pi} \log n)|

where {e(x) := e^{2\pi i x}}.

Proof: We fix a smooth function {\eta: {\bf R} \rightarrow {\bf C}} with {\eta(t)=1} for {t \leq -1} and {\eta(t)=0} for {t \geq 1}, and allow implied constants to depend on {\eta}. Let {s=\sigma+it} with {\varepsilon \leq \sigma \ll 1}. From Exercise 33 of Supplement 3, we have

\displaystyle \zeta(s) = \sum_n \frac{1}{n^s} \eta( \log n - \log C|t| ) + O_\varepsilon( 1 )

for some sufficiently large absolute constant {C}. By dyadic decomposition, we thus have

\displaystyle \zeta(s) \ll_{\varepsilon} 1 + \log(2+|t|) \sup_{1 \leq N \ll |t|} |\sum_{N \leq n < 2N} \frac{1}{n^s} \eta( \log n - \log C|t| )|.

We can absorb the first term into the second using the {N=1} case of the supremum. Writing {\frac{1}{n^s} \eta( \log n - \log C|t| ) = N^{-\sigma} e( - \frac{t}{2\pi} \log n ) F_N(n)}, where

\displaystyle F_N(n) := (N/n)^\sigma \eta(\log n - \log C|t| ),

it thus suffices to show that

\displaystyle \sum_{N \leq n < 2N} e(-\frac{t}{2\pi} \log n) F_N(n) \ll \sup_{1 \leq M \leq N} |\sum_{N \leq n < N+M} e(-\frac{t}{2\pi} \log n)|

for each {N}. But from the fundamental theorem of calculus, the left-hand side can be written as

\displaystyle F_N(2N) \sum_{N \leq n < 2N} e(-\frac{t}{2\pi} \log n)

\displaystyle - \int_0^{N} (\sum_{N \leq n < N+M} e(-\frac{t}{2\pi} \log n)) F'_N(N+M)\ dM

and the claim then follows from the triangle inequality and a routine calculation. \Box

We are thus interested in getting good bounds on the sum {\sum_{N \leq n < N+M} e( -\frac{t}{2\pi} \log n )}. More generally, we consider normalised exponential sums of the form

\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ \ \ \ \ (2)

where {I \subset {\bf R}} is an interval of length at most {N} for some {N \geq 1}, and {f: {\bf R} \rightarrow {\bf R}} is a smooth function. We will assume smoothness estimates of the form

\displaystyle |f^{(j)}(x)| = \exp( O(j^2) ) \frac{T}{N^j} \ \ \ \ \ (3)

for some {T>0}, all {x \in I}, and all {j \geq 1}, where {f^{(j)}} is the {j}-fold derivative of {f}; in the case {f(x) := -\frac{t}{2\pi} \log x}, {I \subset [N,2N]} of interest for the Riemann zeta function, we easily verify that these estimates hold with {T := |t|}. (One can consider exponential sums under more general hypotheses than (3), but the hypotheses here are adequate for our needs.) We do not bound the zeroth derivative {f^{(0)}=f} of {f} directly, but it would not be natural to do so in any event, since the magnitude of the sum (2) is unaffected if one adds an arbitrary constant to {f(n)}.
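As a quick check of this verification (with illustrative parameters {t = 10^6}, {N = 1000}, and the crude constant {3} standing in for the unspecified {O(\cdot)} constant in (3)), one can use the exact formula {|f^{(j)}(x)| = \frac{|t|}{2\pi} \frac{(j-1)!}{x^j}} for the logarithmic phase:

```python
import math

# Verify the smoothness bounds (3) for f(x) = -(t/2pi) log x on [N, 2N]:
# here |f^{(j)}(x)| = (|t|/2pi) (j-1)! / x^j exactly, and with T = |t| the
# normalised ratio |f^{(j)}(x)| N^j / T should lie in [exp(-C j^2), exp(C j^2)];
# the constant C = 3 is an illustrative stand-in for the O() constant.
t, N, C = 1e6, 1000.0, 3
T = abs(t)
ratios = []
for j in range(1, 11):
    for x in (N, 1.5 * N, 2 * N):
        deriv = (T / (2 * math.pi)) * math.factorial(j - 1) / x ** j
        ratio = deriv * N ** j / T
        assert math.exp(-C * j * j) <= ratio <= math.exp(C * j * j), (j, x)
        ratios.append(ratio)
print("bounds (3) hold for j = 1..10 with C =", C)
```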

The trivial bound for (2) is

\displaystyle \frac{1}{N} \sum_{n \in I} e(f(n)) \ll 1 \ \ \ \ \ (4)

and we will seek to obtain significant improvements to this bound. Pseudorandomness heuristics predict a bound of {O_\varepsilon(N^{-1/2+\varepsilon})} for (2) for any {\varepsilon>0} if {T = O(N^{O(1)})}; this assertion (a special case of the exponent pair hypothesis) would have many consequences (for instance, inserting it into Proposition 1 readily yields the Lindelöf hypothesis), but is unfortunately quite far from resolution with known methods. However, we can obtain weaker gains of the form {O(N^{1-c_K})} when {T \ll N^K} and {c_K > 0} depends on {K}. We present two such results here, which perform well for small and large values of {K} respectively:
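The square-root cancellation heuristic behind this prediction can be illustrated by a quick Monte Carlo simulation (with a fixed seed, purely for illustration): genuinely random unit phases produce a normalised sum of size comparable to {N^{-1/2}}.

```python
import cmath, math, random

# If the phases f(n) behaved like independent uniform random variables, the
# normalised sum (2) would typically have size ~ N^{-1/2}; we simulate this
# with N random unit vectors (fixed seed for reproducibility).
random.seed(0)
N = 10000
S = sum(cmath.exp(2j * math.pi * random.random()) for _ in range(N))
print(abs(S) / N, "vs N^{-1/2} =", N ** -0.5)
assert abs(S) / N < 5 * N ** -0.5
```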

Theorem 2 Let {2 \leq N \ll T}, let {I} be an interval of length at most {N}, and let {f: I \rightarrow {\bf R}} be a smooth function obeying (3) for all {j \geq 1} and {x \in I}.

  • (i) (van der Corput estimate) For any natural number {k \geq 2}, one has

    \displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll (\frac{T}{N^k})^{\frac{1}{2^k-2}} \log^{1/2} (2+T). \ \ \ \ \ (5)

  • (ii) (Vinogradov estimate) If {k} is a natural number and {T \leq N^{k}}, then

    \displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll N^{-c/k^2} \ \ \ \ \ (6)

    for some absolute constant {c>0}.

The factor of {\log^{1/2} (2+T)} can be removed by a more careful argument, but we will not need to do so here as we are willing to lose powers of {\log T}. The estimate (6) is superior to (5) when {T \sim N^K} for {K} large, since (after optimising in {k}) (5) gives a gain of the form {N^{-c/2^{K}}} over the trivial bound, while (6) gives {N^{-c/K^2}}. We have not attempted to obtain completely optimal estimates here, settling for a relatively simple presentation that still gives good bounds on {\zeta}, and there are a wide variety of additional exponential sum estimates beyond the ones given here; see Chapter 8 of Iwaniec-Kowalski, or Chapters 3-4 of Montgomery, for further discussion.
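To compare the two gains concretely, one can tabulate the exponents: with {T = N^K}, optimising (5) in {k} gives the gain exponent {\sup_{k > K} (k-K)/(2^k-2)} (attained at {k = K+1}), while (6) gives {c/K^2}; the value {c=1} below is purely illustrative (the true absolute constant is smaller).

```python
# Compare the gain exponents over the trivial bound when T = N^K: the van der
# Corput estimate (5), optimised in k, gains N^{-(k-K)/(2^k-2)} with k = K+1,
# while the Vinogradov estimate (6) gains N^{-c/K^2}; c = 1 is illustrative.
def vdc_exponent(K: int) -> float:
    return max((k - K) / (2 ** k - 2) for k in range(K + 1, K + 10))

def vinogradov_exponent(K: int, c: float = 1.0) -> float:
    return c / K ** 2

for K in (3, 5, 10, 20):
    print(K, vdc_exponent(K), vinogradov_exponent(K))
# the exponentially small van der Corput gain loses to the polynomial
# Vinogradov gain once K is moderately large
assert vdc_exponent(20) < vinogradov_exponent(20)
```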

We now briefly discuss the strategies of proof of Theorem 2. Both parts of the theorem proceed by treating {f} like a polynomial of degree roughly {k}; in the case of (ii), this is done explicitly via Taylor expansion, whereas for (i) it is only at the level of analogy. Both parts of the theorem then try to “linearise” the phase to make it a linear function of the summands (actually in part (ii), it is necessary to introduce an additional variable and make the phase a bilinear function of the summands). The van der Corput estimate achieves this linearisation by squaring the exponential sum about {k} times, which is why the gain is only exponentially small in {k}. The Vinogradov estimate achieves linearisation by raising the exponential sum to a significantly smaller power – on the order of {k^2} – by using Hölder’s inequality in combination with the fact that the discrete curve {\{ (n,n^2,\dots,n^k): n \in \{1,\dots,M\}\}} becomes roughly equidistributed in the box {\{ (a_1,\dots,a_k): a_j = O( M^j ) \}} after taking the sumset of about {k^2} copies of this curve. This latter fact has a precise formulation, known as the Vinogradov mean value theorem, and its proof is the most difficult part of the argument, relying on using a “{p}-adic” version of this equidistribution to reduce the claim at a given scale {M} to a smaller scale {M/p} with {p \sim M^{1/k}}, and then proceeding by induction.

One can combine Theorem 2 with Proposition 1 to obtain various bounds on the Riemann zeta function:

Exercise 3 (Subconvexity bound)

  • (i) Show that {\zeta(\frac{1}{2}+it) \ll (1+|t|)^{1/6} \log^{O(1)}(1+|t|)} for all {t \in {\bf R}}. (Hint: use the {k=3} case of the van der Corput estimate.)
  • (ii) For any {0 < \sigma < 1}, show that {\zeta(\sigma+it) \ll (1+|t|)^{\max( \frac{1-\sigma}{3}, \frac{1}{2} - \frac{2\sigma}{3}) + o(1)}} as {|t| \rightarrow \infty}.

Exercise 4 Let {t} be such that {|t| \geq 100}, and let {\sigma \geq 1/2}.

  • (i) (Littlewood bound) Use the van der Corput estimate to show that {\zeta(\sigma+it) \ll \log^{O(1)} |t|} whenever {\sigma \geq 1 - O( \frac{(\log\log |t|)^2}{\log |t|} )}.
  • (ii) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that {\zeta(\sigma+it) \ll \log^{O(1)} |t|} whenever {\sigma \geq 1 - O( \frac{(\log\log |t|)^{2/3}}{\log^{2/3} |t|} )}.

As noted in Exercise 43 of Notes 2, the Vinogradov-Korobov bound leads to the zero-free region {\{ \sigma+it: \sigma > 1 - c \frac{1}{(\log |t|)^{2/3} (\log\log |t|)^{1/3}}; |t| \geq 100 \}}, which in turn leads to the prime number theorem with error term

\displaystyle \sum_{n \leq x} \Lambda(n) = x + O\left( x \exp\left( - c \frac{\log^{3/5} x}{(\log\log x)^{1/5}} \right) \right)

for {x > 100}. If one uses the weaker Littlewood bound instead, one obtains the narrower zero-free region

\displaystyle \{ \sigma+it: \sigma > 1 - c \frac{\log\log|t|}{\log |t|}; |t| \geq 100 \}

(which is only slightly wider than the classical zero-free region) and an error term

\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x \exp( - c \sqrt{\log x \log\log x} ) )

in the prime number theorem.

Exercise 5 (Vinogradov-Korobov in arithmetic progressions) Let {\chi} be a non-principal character of modulus {q}.

  • (i) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that {L(\sigma+it,\chi) \ll \log^{O(1)}(q|t|)} whenever {|t| \geq 100} and

    \displaystyle \sigma \geq 1 - O( \min( \frac{\log\log(q|t|)}{\log q}, \frac{(\log\log(q|t|))^{2/3}}{\log^{2/3} |t|} ) ).

    (Hint: use the Vinogradov estimate and a change of variables to control {\sum_{n \in I: n = a\ (q)} \exp( -it \log n)} for various intervals {I} of length at most {N} and residue classes {a\ (q)}, in the regime {N \geq q^2} (say). For {N < q^2}, do not try to capture any cancellation and just use the triangle inequality instead.)

  • (ii) Obtain a zero-free region

    \displaystyle \{ \sigma+it: \sigma > 1 - c \min( \frac{1}{(\log |t|)^{2/3} (\log\log |t|)^{1/3}}, \frac{1}{\log q} );

    \displaystyle |t| \geq 100 \}

    for {L(s,\chi)}, for some (effective) absolute constant {c>0}.

  • (iii) Obtain the prime number theorem in arithmetic progressions with error term

    \displaystyle \sum_{n \leq x: n = a\ (q)} \Lambda(n) = \frac{x}{\phi(q)} + O\left( x \exp\left( - c_A \frac{\log^{3/5} x}{(\log\log x)^{1/5}} \right) \right)

    whenever {x > 100}, {q \leq \log^A x}, {a\ (q)} is primitive, and {c_A>0} depends (ineffectively) on {A}.

— 1. Van der Corput estimates —

In this section we prove Theorem 2(i). To motivate the arguments, we will use an analogy between the sums {\sum_{n \in I} e(f(n))} and the integrals {\int_I e(f(t))\ dt} (cf. Exercise 11 from Notes 1). This analogy can be made rigorous by the Poisson summation formula (after applying some smoothing to the integral over {I} to truncate the frequency summation), but we will not need to do so here.

Write {I = [a,b]}. We can control the integral {\int_a^b e(f(t))\ dt} by integration by parts:

\displaystyle \int_a^b e(f(t))\ dt = \int_a^b \frac{1}{2\pi i f'(t)} \frac{d}{dt} e(f(t))\ dt

\displaystyle = \frac{1}{2\pi i f'(t)} e(f(t))|_a^b - \int_a^b \frac{d}{dt}( \frac{1}{2\pi i f'(t)} ) e(f(t))\ dt

\displaystyle \ll \frac{1}{|f'(a)|} + \frac{1}{|f'(b)|} + \int_a^b \frac{|f''(t)|}{|f'(t)|^2}\ dt.

If {f} obeys (3) for {j=1,2}, we thus have

\displaystyle \frac{1}{N} \int_a^b e(f(t))\ dt \ll \frac{1}{T}.
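This {O(1/T)} decay is easy to observe numerically; the following quick check (with illustrative parameters and midpoint quadrature) takes {f(t) = T \log t} on {[1,2]}, so that {f'} is comparable to {T}:

```python
import cmath, math

# Numerically illustrate the integration by parts bound: for f(t) = T log t
# on [a,b] = [1,2] (so f' is comparable to T), the integral of e(f(t)) has
# size O(1/T); here it even has the closed form (2^{1+2 pi i T}-1)/(1+2 pi i T).
def e(x: float) -> complex:
    return cmath.exp(2j * math.pi * x)

def oscillatory_integral(T: float, steps: int = 100000) -> complex:
    a, b = 1.0, 2.0
    h = (b - a) / steps
    return sum(e(T * math.log(a + (i + 0.5) * h)) for i in range(steps)) * h

for T in (10.0, 100.0):
    size = abs(oscillatory_integral(T))
    print(T, size)
    assert size < 1.0 / T  # consistent with the O(1/T) decay
```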

An analogous argument, using summation by parts instead of integration by parts, controls the sum {\sum_I e(f(n))}, but only in the regime when {T \ll N}:

Proposition 6 Let {N, A \geq 1} and {T>0}, let {I} be an interval of length at most {N}, and let {f: I \rightarrow {\bf R}} be a smooth function obeying the bounds

\displaystyle \frac{T}{AN^j} \leq |f^{(j)}(x)| \leq A \frac{T}{N^j}

for {j=1,2} and {x \in I}. If {T \leq \frac{N}{2A}}, then

\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll A^3 \frac{1}{T}.

Proof: We may assume that {I = [a,b]} for some integers {a \leq b}, and after deleting the right endpoint it suffices to show that

\displaystyle \sum_{a \leq n < b} e(f(n)) \ll A^3 \frac{N}{T}.

From the {j=1} case of the hypothesis and the mean value theorem one has {\frac{T}{AN} \leq |f(n+1)-f(n)| \leq \frac{AT}{N} \leq \frac{1}{2}} for all {a \leq n < b}. Thus we have

\displaystyle |e(f(n+1)-f(n))-1| \gg \frac{T}{AN}

and we can write

\displaystyle \sum_{a \leq n < b} e(f(n)) = \sum_{a \leq n < b} [e(f(n+1)) - e(f(n))] \frac{1}{e(f(n+1)-f(n))-1}.

From the {j=1,2} cases of the hypothesis and the mean value theorem we see that {x \mapsto \frac{1}{e(f(x+1)-f(x))-1}} has size {O(AN/T)} and derivative {O(A^3/T)} on {[a,b-1]}. The claim then follows from a summation by parts. \Box
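As a quick numerical illustration of Proposition 6 (using the illustrative phase {f(x) = T \log x} on {[N, 2N]}, which obeys the hypotheses with a bounded {A}), the sum indeed has size comparable to {N/T} rather than the trivial {N}:

```python
import cmath, math

# Illustrate Proposition 6: for f(x) = T log x on [N, 2N] and T much smaller
# than N, the sum of e(f(n)) has size O(N/T), far below the trivial bound N.
def e(x: float) -> complex:
    return cmath.exp(2j * math.pi * x)

N, T = 10000, 50  # illustrative values with T well below N
S = sum(e(T * math.log(n)) for n in range(N, 2 * N))
print(abs(S), "vs trivial bound", N, "and N/T =", N / T)
assert abs(S) < 10 * N / T
```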

To use this proposition in the regime {T \gg N}, we will use the following inequality of van der Corput, which is a basic application of the Cauchy-Schwarz inequality:

Proposition 7 (van der Corput inequality) Let {N \geq H \geq 1}, let {I} be an interval of length {N}, and let {f: I \rightarrow {\bf R}} be a function. Then

\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll H^{-1/2}

\displaystyle + \left( \frac{1}{H} \sum_{1 \leq h \leq H} |\frac{1}{N} \sum_{n \in I \cap I-h} e( f(n+h)-f(n))| \right)^{1/2}.

The point of this proposition is that it effectively replaces the phase {f(n)} by the differenced phase {f_h(n) := f(n+h)-f(n)} for some medium-sized {h}; it is an oscillatory version of the trivial observation that if {f} is close to constant, then {f_h} is also close to constant. From the fundamental theorem of calculus, we see that if {f} obeys the estimates (3), then {f_h} obeys a variant of (3) in which {T} has been replaced by {\frac{h}{N} T}. Since {H \leq N}, this reduces {T}, and so one can hope to then iterate this proposition to the point where one can apply Proposition 6.

Proof: By rounding down, we may assume that {H} is an integer. For any {1 \leq h \leq H}, we have

\displaystyle \sum_{n \in I} e(f(n)) = \sum_{n \in I-h} e(f(n+h))

and thus on averaging in {h}

\displaystyle \sum_{n \in I} e(f(n)) = \frac{1}{H} \sum_n \sum_{1 \leq h \leq H} 1_{n \in I-h} e(f(n+h)).

There are only {O(N)} values of {n} for which the inner sum is non-vanishing. By the Cauchy-Schwarz inequality, we thus have

\displaystyle \sum_{n \in I} e(f(n)) \ll \frac{1}{H} N^{1/2} \left( \sum_n |\sum_{1 \leq h \leq H} 1_{n \in I-h} e(f(n+h))|^2 \right)^{1/2}

and so it will suffice to show that

\displaystyle \sum_n |\sum_{1 \leq h \leq H} 1_{n \in I-h} e(f(n+h))|^2 \ll H N

\displaystyle + H \sum_{1 \leq h \leq H} |\sum_{n \in I \cap I-h} e( f(n+h)-f(n))|.

The left-hand side may be expanded as

\displaystyle \sum_{1 \leq h_1,h_2 \leq H} \sum_{n \in I-h_1 \cap I-h_2} e( f(n+h_1) - f(n+h_2) ).

The contribution of the diagonal terms {h_1=h_2} is {O( HN )} which is acceptable. For the off-diagonal terms, we may use symmetry to restrict to the case {h_2 < h_1}, picking up a factor of {2}. After shifting {n} by {h_2}, we may thus bound this contribution by

\displaystyle 2\sum_{1 \leq h_2 < h_1 \leq H} |\sum_{n \in I-(h_1-h_2) \cap I} e( f(n+h_1-h_2) - f(n) )|.

Since each {1 \leq h \leq H} occurs {O(H)} times as a difference {h_1-h_2}, the claim follows. \Box
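As a numerical sanity check of Proposition 7, one can compute both sides for a sample quadratic phase (the factor {2} below is an illustrative stand-in for the modest implied constant in the proposition):

```python
import cmath, math

# Check the van der Corput inequality numerically for f(n) = sqrt(3) n^2 on
# I = [1, N] with H = 64; the factor 2 stands in for the implied constant.
def e(x: float) -> complex:
    return cmath.exp(2j * math.pi * x)

def f(n: int) -> float:
    return math.sqrt(3) * n * n

N, H = 2000, 64
lhs = abs(sum(e(f(n)) for n in range(1, N + 1))) / N
avg = 0.0
for h in range(1, H + 1):
    # the intersection of I and I-h is [1, N-h] here
    avg += abs(sum(e(f(n + h) - f(n)) for n in range(1, N - h + 1))) / N
rhs = H ** -0.5 + (avg / H) ** 0.5
print(lhs, rhs)
assert lhs < 2 * rhs
```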

Exercise 8 (Qualitative Weyl exponential sum estimates) Let {P(n) = \alpha_k n^k + \dots + \alpha_0} be a polynomial with real coefficients {\alpha_0,\dots,\alpha_k}.

  • (i) If {k \geq 1} and {\alpha_k} is irrational, show that {\frac{1}{x} \sum_{n \leq x} e(P(n)) = o(1)} as {x \rightarrow \infty}. (Hint: induct on {k}, using geometric series for the base case {k=1} and Proposition 7 for the induction step.)
  • (ii) If {k \geq 1} and at least one of {\alpha_1,\dots,\alpha_k} is irrational, show that {\frac{1}{x} \sum_{n \leq x} e(P(n)) = o(1)} as {x \rightarrow \infty}.
  • (iii) If all of the {\alpha_1,\dots,\alpha_k} are rational, show that {\frac{1}{x} \sum_{n \leq x} e(P(n))} converges as {x \rightarrow \infty} to a limit that is not necessarily zero.

One can obtain more quantitative estimates on the decay rate of {\frac{1}{x} \sum_{n \leq x} e(P(n))} in terms of how badly approximable by rationals one or more of the coefficients {\alpha_j} are; see for instance Chapter 8 of Iwaniec-Kowalski for some estimates of this type.
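Exercise 8(i) is easy to see in action numerically: for instance, with the quadratic phase {P(n) = \sqrt{2} n^2} (irrational leading coefficient), the normalised sums decay, roughly like {x^{-1/2}} for this badly approximable coefficient, in line with the pseudorandomness heuristics mentioned earlier.

```python
import cmath, math

# Illustrate Exercise 8(i): for P(n) = sqrt(2) n^2 the normalised Weyl sum
# (1/x) sum_{n <= x} e(P(n)) tends to zero as x grows.
def weyl_sum(alpha: float, x: int) -> float:
    S = sum(cmath.exp(2j * math.pi * alpha * n * n) for n in range(1, x + 1))
    return abs(S) / x

alpha = math.sqrt(2)
for x in (100, 1000, 10000):
    print(x, weyl_sum(alpha, x))
assert weyl_sum(alpha, 10000) < 0.1
```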

If we combine one application of Proposition 7 with Proposition 6, we conclude

Proposition 9 Let {N, A \geq 1} and {T>0}, let {I} be an interval of length at most {N}, and let {f: I \rightarrow {\bf R}} be a smooth function obeying the bounds

\displaystyle \frac{T}{AN^j} \leq |f^{(j)}(x)| \leq A \frac{T}{N^j}

for {j=1,2,3} and {x \in I}. Then

\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll A^2 (\frac{1}{T^{1/2}} + \frac{T^{1/2}}{N} \log(2+T) ).

Proof: If {T \leq \frac{N}{2A}} then the claim follows from Proposition 6 (or from (4), if {T<1}), so we may assume that {T > \frac{N}{2A}}. We can also assume that {T \leq \frac{N^2}{2A}}, since otherwise the claim follows from the trivial bound (4).

Set {H := \frac{N^2}{2AT}}, then {1 \leq H \leq N}. By Proposition 7 we have

\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll H^{-1/2} + \left(\frac{1}{H} \sum_{1 \leq h \leq H} |\frac{1}{N} \sum_{n \in I \cap I-h} e( f_h(n))| \right)^{1/2}

where {f_h(n) := f(n+h) - f(n)}. From the fundamental theorem of calculus we have

\displaystyle \frac{h}{N} \frac{T}{AN^j} \leq |f_h^{(j)}(x)| \leq A \frac{h}{N} \frac{T}{N^j}

for {j=1,2}. By Proposition 6 we then have

\displaystyle \frac{1}{N} \sum_{n \in I \cap I-h} e( f_h(n)) \ll A^3 \frac{N}{Th}

so on summing in {h}

\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll H^{-1/2} + ( A^3 \frac{N}{TH} \log(2+T) )^{1/2}

and the claim follows from the choice of {H}, together with the elementary inequality {(\frac{\log(2+T)}{N})^{1/2} \leq \frac{1}{2}( \frac{1}{T^{1/2}} + \frac{T^{1/2}}{N} \log(2+T) )} supplied by the arithmetic mean-geometric mean inequality. \Box

We can iterate this:

Proposition 10 Let {C>1} be a sufficiently large constant. Then for any {N, A \geq 1}, any {T>0}, any natural number {k \geq 2}, any interval {I} of length at most {N}, and any smooth {f: I \rightarrow {\bf R}} obeying the bounds

\displaystyle \frac{T}{AN^j} \leq |f^{(j)}(x)| \leq A \frac{T}{N^j} \ \ \ \ \ (7)

for {j=1,\dots,k+1} and {x \in I}, one has

\displaystyle |\frac{1}{N} \sum_{n \in I} e( f(n) )| \leq

\displaystyle C A^{1/2^{k-3}} \left( \frac{1}{N^{1/2^{k-2}}} \left( \frac{N^k}{T} \right)^{\frac{1}{2^k-2}} + \left( \frac{T}{N^k} \right)^{\frac{1}{2^k-2}} \log^{1/2^{k-2}}(2+T) \right).

We have avoided the use of asymptotic notation for this proposition because we need to use induction on {k}.

Proof: We induct on {k}. The {k=2} case follows from Proposition 9. Now suppose that {k \geq 3} and the claim has already been proven for {k-1}.

If {T > N^k} then the claim follows from (4) (for {C} large enough), so suppose that {T \leq N^k}. Then the quantity {H := (N^k/T)^{\frac{1}{2^{k-1}-1}}} is such that {1 \leq H \leq N}. By Proposition 7 we have

\displaystyle |\frac{1}{N} \sum_{n \in I} e( f(n) )| \ll H^{-1/2} + \left(\frac{1}{H} \sum_{1 \leq h \leq H} |\frac{1}{N} \sum_{n \in I \cap I-h} e( f_h(n))| \right)^{1/2}.

From the fundamental theorem of calculus, {f_h} obeys the bounds (7) for {j=1,\dots,k} with {T} replaced by {hT/N}. Thus by the induction hypothesis,

\displaystyle |\frac{1}{N} \sum_{n \in I \cap I-h} e( f_h(n))| \leq C A^{1/2^{k-4}}

\displaystyle \left( \frac{1}{N^{1/2^{k-3}}} \left( \frac{N^k}{Th} \right)^{\frac{1}{2^{k-1}-2}} + \left( \frac{hT}{N^k} \right)^{\frac{1}{2^{k-1}-2}} \log^{1/2^{k-3}}(2+T) \right).

Performing the sum over {h}, we obtain

\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll H^{-1/2} +

\displaystyle C^{1/2} A^{1/2^{k-3}} \left( \frac{1}{N^{1/2^{k-3}}} \left( \frac{N^k}{TH} \right)^{\frac{1}{2^{k-1}-2}} + \left( \frac{HT}{N^k} \right)^{\frac{1}{2^{k-1}-2}} \log^{1/2^{k-3}}(2+T) \right)^{1/2}

which by the choice of {H} simplifies to

\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll C^{1/2} A^{1/2^{k-3}}

\displaystyle \left( \frac{1}{N^{1/2^{k-2}}} \left( \frac{N^k}{T} \right)^{\frac{1}{2^k-2}} + \left( \frac{T}{N^k} \right)^{\frac{1}{2^k-2}} \log^{1/2^{k-2}}(2+T) \right),

and the claim follows for {C} large enough. \Box

Now we can prove part (i) of Theorem 2. We can assume that {T \leq N^k}, since the claim follows from (4) otherwise. For any natural number {2 \leq j \leq k}, we can apply Proposition 10 with {k} replaced by {j} and {A = \exp(O(j^2))}, noting that the resulting factor {A^{1/2^{j-3}} = \exp( O( j^2 / 2^j ) )} is {O(1)}, and conclude that

\displaystyle |\frac{1}{N} \sum_{n \in I} e( f(n) )| \ll

\displaystyle \max\left( \frac{1}{N^{1/2^{j-2}}} \left( \frac{N^j}{T} \right)^{\frac{1}{2^j-2}}, \left( \frac{T}{N^j} \right)^{\frac{1}{2^j-2}} \right) \log^{1/2}(2+T).

Taking infima over {2 \leq j \leq k}, it then suffices to show that

\displaystyle \inf_{2 \leq j \leq k} \max\left( \frac{1}{N^{1/2^{j-2}}} \left( \frac{N^j}{T} \right)^{\frac{1}{2^j-2}} , \left( \frac{T}{N^j} \right)^{\frac{1}{2^j-2}} \right) \leq \left( \frac{T}{N^k} \right)^{\frac{1}{2^k-2}} \ \ \ \ \ (8)

whenever {N \leq T \leq N^k}. (The case when {N \ll T \leq N} can be easily derived from the {T=N} case, after conceding a multiplicative factor of {O(1)}.)

We prove (8) by induction on {k}. The case {k=2} is clear, so suppose {k \ge 3} and that (8) has already been proven for {k-1}. If

\displaystyle T \geq N^{k-2 + 1/2^{k-2}}

then

\displaystyle \frac{1}{N^{1/2^{k-2}}} \left( \frac{N^k}{T} \right)^{\frac{1}{2^k-2}} \leq \left( \frac{T}{N^k} \right)^{\frac{1}{2^k-2}}

and the claim follows from the {j=k} term of the infimum. If instead

\displaystyle T \leq N^{k-2 + 1/2^{k-2}}

then

\displaystyle \left( \frac{T}{N^{k-1}} \right)^{\frac{1}{2^{k-1}-2}} \leq \left( \frac{T}{N^k} \right)^{\frac{1}{2^k-2}}

and the claim follows from the induction hypothesis.
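Since (8) is a purely elementary inequality about exponents, it can also be verified numerically: writing {T = N^t} and comparing exponents of {N} on both sides (legitimate for any fixed {N > 1}), a quick grid check reads:

```python
# Verify the elementary inequality (8) on a grid: with T = N^t and N > 1,
# compare exponents of N on both sides, so that (8) becomes
# inf_j max(first_j, second_j) <= (t - k)/(2^k - 2) at the exponent level.
def lhs_exponent(t: float, k: int) -> float:
    best = float("inf")
    for j in range(2, k + 1):
        first = -1.0 / 2 ** (j - 2) + (j - t) / (2 ** j - 2)
        second = (t - j) / (2 ** j - 2)
        best = min(best, max(first, second))
    return best

for k in range(2, 8):
    for i in range(101):
        t = 1 + (k - 1) * i / 100  # t ranges over [1, k]
        rhs = (t - k) / (2 ** k - 2)
        assert lhs_exponent(t, k) <= rhs + 1e-12, (k, t)
print("inequality (8) verified on a grid")
```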

— 2. Vinogradov estimate —

We now prove Theorem 2(ii), loosely following the treatment in Iwaniec and Kowalski. We first observe that for bounded values of {k}, part (ii) of this theorem follows from the already proven part (i) (replacing {k} by, say, {k+1}), so we may assume now that {k} is larger than any specified absolute constant.

The first step of Vinogradov’s argument is to use a Taylor expansion and a bilinear averaging to reduce the problem to a bilinear exponential sum estimate involving polynomials with medium-sized coefficients. More precisely, we will derive Theorem 2(ii) from

Theorem 11 (Bilinear estimate) Let {0 < c_0 < 1} be a constant, let {k} be a sufficiently large natural number, let {M \geq 1}, and let {\alpha_1,\dots,\alpha_k} be real numbers, with the property that

\displaystyle M^{-(2-c_0)j} \leq |\alpha_j| \leq M^{-c_0 j} \ \ \ \ \ (9)

for at least {c_0 k} values of {j=1,\dots,k}. Then

\displaystyle \frac{1}{M^2} \sum_{1 \leq a \leq M} \sum_{1 \leq b \leq M} e( \sum_{j=1}^k \alpha_j a^j b^j ) \ll_{c_0} M^{-c_1 / k^2} \ \ \ \ \ (10)

for some {c_1>0} depending only on {c_0}.

Let us see how Theorem 2(ii) follows from Theorem 11. By reducing {k} as necessary, we may assume that

\displaystyle N^{k-1} \leq T \leq N^k.

We may also assume that

\displaystyle N \geq \exp( C k^2 ) \ \ \ \ \ (11)

for some large constant {C}, as the claim is trivial otherwise.

Set {M := \lfloor N^{1/4}\rfloor}. For any {1 \leq a,b \leq M}, we have

\displaystyle \frac{1}{N} \sum_{n \in I} e(f(n)) = \frac{1}{N} \sum_{n \in I} e(f(n+ab)) + O( N^{-1/2} )

and hence on averaging in {a,b}

\displaystyle \frac{1}{N} \sum_{n \in I} e(f(n)) = \frac{1}{N} \sum_n \frac{1}{M^2} \sum_{1 \leq a,b \leq M} 1_{n+ab \in I} e(f(n+ab)) + O( N^{-1/2} ).

The condition {n+ab \in I} can only be satisfied if {n} lies within {O(N^{1/2})} of {I}, and if {n} lies in {I} and is further than {N^{1/2}} from the endpoints of {I} then the {1_{n+ab \in I}} constraint may be dropped. It thus suffices to show that

\displaystyle \frac{1}{M^2} \sum_{1 \leq a,b \leq M} e(f(n+ab)) \ll N^{-c/k^2}

for all {n} in {I} that are further than {N^{1/2}} from the endpoints of {I}.

Fix {n}. From (3) and Taylor expansion we see that

\displaystyle f(n+ab) = f(n) + \sum_{j=1}^{3k} \alpha_j a^j b^j + O\left( \exp( O(k^2) ) \frac{T}{N^{3k+1}} (ab)^{3k+1} \right)

where {\alpha_j := \frac{1}{j!} f^{(j)}(n)}. Since {ab \leq \sqrt{N}} and {T \leq N^k}, we see from (11) that the error term is {O( N^{-1} )} (say) if {C} is large enough, so

\displaystyle e(f(n+ab)) = e(f(n)) e( \sum_{j=1}^{3k} \alpha_j a^j b^j ) + O( N^{-1} )

and it thus suffices to show that

\displaystyle \frac{1}{M^2} \sum_{1 \leq a,b \leq M} e\left( \sum_{j=1}^{3k} \alpha_j a^j b^j\right) \ll N^{-c/k^2}.

From (3) we have

\displaystyle |\alpha_j| = \exp( O(j^2) ) \frac{T}{N^j}

which (since {N^{k-1} \leq T \leq N^k} and {k} is large) implies from (11) that

\displaystyle M^{-1.9 j} \leq |\alpha_j| \leq M^{-0.1j}

if {1.5 k \leq j \leq 1.6 k} (say). The claim now follows from Theorem 11 (with {k} replaced by {3k}).

It remains to prove Theorem 11. We fix {c_0} and allow all implied constants to depend on {c_0}. We may assume that

\displaystyle M \geq \exp( C k^2 ) \ \ \ \ \ (12)

for some large constant {C} (depending on {c_0}), as the claim is trivial otherwise.

We write the left-hand side of (10) as

\displaystyle S := \frac{1}{M^2} \sum_{1 \leq a \leq M} \sum_{1 \leq b \leq M} e( {\mathcal B}( \gamma_k(a), \gamma_k(b) ) )

where {{\mathcal B}: {\bf R}^k \times {\bf R}^k \rightarrow {\bf R}} is the bilinear form

\displaystyle {\mathcal B}( (x_1,\dots,x_k), (y_1,\dots,y_k) ) := \sum_{j=1}^k \alpha_j x_j y_j

and {\gamma_k: {\bf R} \rightarrow {\bf R}^k} is the polynomial curve

\displaystyle \gamma_k(a) := (a, a^2, \dots, a^k).

The discrete curve

\displaystyle \gamma_k(\{1,\dots,M\}) = \{ \gamma_k(a): 1 \leq a \leq M \} \ \ \ \ \ (13)

lies in the box {\{ (x_1,\dots,x_k) \in {\bf Z}^k: 1 \leq x_j \leq M^j \hbox{ for all } j=1,\dots,k\}}, but occupies only a very sparse subset of this box. However, if one takes iterated sumsets

\displaystyle \{ \gamma_k(a_1)+\dots+\gamma_k(a_\ell): 1 \leq a_1,\dots,a_\ell \leq M \}

of this curve, one expects (if {\ell} is large compared with {k}) to fill out a much denser subset of this box (or more precisely, of a slightly larger box in which all sides have been multiplied by {\ell}), due to the “curvature” of (13). By using the device of Hölder’s inequality, we will be able to estimate the sparse sum {S} with a sum over such a sumset in a box, which will be significantly easier to estimate, provided one can establish the expected density property of the sumset, or at least something reasonably close to that density property. This latter claim will be accomplished by a deep result known as the Vinogradov mean value theorem.
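The simplest case {k=2}, {\ell=2} of this filling-out phenomenon can be seen concretely: the curve {\{(a,a^2)\}} has only {M} points, but already its two-fold sumset has about {M^2/2} points, since {(a+b, a^2+b^2)} determines the unordered pair {\{a,b\}}.

```python
# The two-fold sumset of the discrete curve {(a, a^2) : 1 <= a <= M}: since
# a+b and a^2+b^2 determine the unordered pair {a,b} (via the product ab),
# the sumset has exactly M(M+1)/2 points.
M = 50
curve = [(a, a * a) for a in range(1, M + 1)]
sumset = {(x1 + y1, x2 + y2) for (x1, x2) in curve for (y1, y2) in curve}
print(len(curve), len(sumset))
assert len(sumset) == M * (M + 1) // 2  # one point per unordered pair {a, b}
```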

We turn to the details. Let {\ell} be a large natural number (depending on {k}) to be chosen later; in fact we will eventually take {\ell = C_0 k^2} for a large absolute constant {C_0}. From Hölder’s inequality we have

\displaystyle |S|^\ell \leq \frac{1}{M^{\ell+1}} \sum_{1 \leq a \leq M} |\sum_{1 \leq b \leq M} e( {\mathcal B}( \gamma_k(a), \gamma_k(b) ) )|^\ell.

We remove the absolute values to write this as

\displaystyle |S|^\ell \leq \frac{1}{M^{\ell+1}} \sum_{1 \leq a \leq M} c_a (\sum_{1 \leq b \leq M} e( {\mathcal B}( \gamma_k(a), \gamma_k(b) ) ))^\ell

for some coefficients {c_a} of magnitude {1}. The right-hand side can be rearranged using the bilinearity of {{\mathcal B}} as

\displaystyle \frac{1}{M^{\ell+1}} \sum_{1 \leq b_1,\dots,b_\ell \leq M} \sum_{1 \leq a \leq M} c_a e( {\mathcal B}( \gamma_k(a), \gamma_k(b_1)+\dots+\gamma_k(b_\ell) ) ).

We collect some terms to obtain the inequality

\displaystyle |S|^\ell \leq \frac{1}{M^{\ell+1}} \sum_{y \in {\bf Z}^k} \nu(y) |\sum_{1 \leq a \leq M} c_a e( {\mathcal B}( \gamma_k(a), y) )|

where for each {y \in {\bf Z}^k}, {\nu(y) = \nu_{\ell,k,M}(y)} is the number of representations of {y} of the form {y = \gamma_k(b_1)+\dots+\gamma_k(b_\ell)} with {1 \leq b_1,\dots,b_\ell \leq M}. Note that {\nu} is supported in the box

\displaystyle B := \{ (y_1,\dots,y_k) \in {\bf Z}^k: |y_j| \leq \ell M^j \hbox{ for all } 1 \leq j \leq k \}.

We now use Hölder’s inequality again to spread out the {\gamma_k(a)} vectors, and also to separate the {\nu} weight from the phase. Specifically, we have

\displaystyle (|S|^\ell)^{2\ell} \leq \frac{1}{M^{2\ell(\ell+1)}} (\sum_{y \in B} \nu(y))^{2\ell-2}

\displaystyle \times \sum_{y \in B} \nu(y)^2

\displaystyle \times \sum_{y \in B} |\sum_{1 \leq a \leq M} c_a e( {\mathcal B}( \gamma_k(a), y) )|^{2\ell}.

From double counting we have

\displaystyle \sum_{y \in B} \nu(y) = M^\ell \ \ \ \ \ (14)

so if we write

\displaystyle J_{\ell,k}(M) := \sum_{y \in B} \nu(y)^2 \ \ \ \ \ (15)

then we have

\displaystyle |S|^{2\ell^2} \leq M^{- 4 \ell} J_{\ell,k}(M)\sum_{y \in B} |\sum_{1 \leq a \leq M} c_a e( {\mathcal B}( \gamma_k(a), y) )|^{2\ell}. \ \ \ \ \ (16)

The quantity {J_{\ell,k}(M)} has a combinatorial interpretation as the number of solutions to the equation

\displaystyle \gamma_k(x_1)+\dots+\gamma_k(x_\ell) = \gamma_k(y_1)+\dots+\gamma_k(y_\ell) \ \ \ \ \ (17)

with {x_1,\dots,x_\ell,y_1,\dots,y_\ell \in \{1,\dots,M\}}; its estimation is the deepest and most difficult part of Vinogradov’s argument, and will be discussed presently. Leaving this aside for now, we return to (16) and expand out the right-hand side, using the triangle inequality and bilinearity to bound it by

\displaystyle M^{-4\ell} J_{\ell,k}(M) \sum_{1 \leq a_1,\dots,a_{2\ell} \leq M}

\displaystyle |\sum_{y \in B} e( {\mathcal B}( \gamma_k(a_1)+\dots+\gamma_k(a_\ell) - \gamma_k(a_{\ell+1})-\dots-\gamma_k(a_{2\ell}), y) )|.

The quantity {\gamma_k(a_1)+\dots+\gamma_k(a_\ell) - \gamma_k(a_{\ell+1})-\dots-\gamma_k(a_{2\ell})} lies in the box {B}. Furthermore, from (15) and the Cauchy-Schwarz inequality, every {x \in B} has at most {J_{\ell,k}(M)} representations of the form

\displaystyle x = \gamma_k(a_1)+\dots+\gamma_k(a_\ell) - \gamma_k(a_{\ell+1})-\dots-\gamma_k(a_{2\ell}).

Thus we arrive at the inequality

\displaystyle | S|^{2\ell^2} \leq M^{- 4 \ell} J_{\ell,k}(M)^2 \sum_{x \in B} |\sum_{y \in B} e( {\mathcal B}( x, y ) )|. \ \ \ \ \ (18)

The point here is that the phase is now a linear function of the variables {x,y}, in contrast to the polynomial function of the variables {a,b} in the original definition of {S}. In particular, the exponential sum is now fairly easy to estimate:

Lemma 12 If {\ell = O( k^2 )}, then we have

\displaystyle \sum_{x \in B} |\sum_{y \in B} e( {\mathcal B}( x, y ) )| \ll M^{-ck^2} |B|^2

for some {c>0} (depending only on {c_0}).

Proof: We can factor the left-hand side as

\displaystyle \prod_{j=1}^k \sum_{|x_j| \leq \ell M^j} |\sum_{|y_j| \leq \ell M^j} e( \alpha_j x_j y_j )|.

Since {|B| = \prod_{j=1}^k (2\ell M^j+1)} and we have the trivial bound

\displaystyle \sum_{|x_j| \leq \ell M^j} |\sum_{|y_j| \leq \ell M^j} e( \alpha_j x_j y_j )| \leq (2\ell M^j+1)^2

it will suffice to show that

\displaystyle \sum_{|x_j| \leq \ell M^j} |\sum_{|y_j| \leq \ell M^j} e( \alpha_j x_j y_j )| \leq M^{-ck} (2\ell M^j+1)^2

for at least {ck} choices of {1 \leq j \leq k}, for some {c>0}.

By (9), we can find at least {c_0 k/2} choices of {j} with {c_0k/2 \leq j \leq k} and

\displaystyle M^{-(2-c_0)j} \leq |\alpha_j| \leq M^{-c_0 j}.

Henceforth we restrict {j} to one of these choices. By summing the geometric series, or by using the trivial bound, we see that

\displaystyle |\sum_{|y_j| \leq \ell M^j} e( \alpha_j x_j y_j )| \ll \min( \ell M^j, \frac{1}{\| \alpha_j x_j \|} )

where {\|x\|} denotes the distance of {x} to the nearest integer. On any interval {I} of length {1/|\alpha_j|}, we see from the quantitative integral test (Lemma 2 of Notes 1) that

\displaystyle \sum_{x \in I} \min( \ell M^j, \frac{1}{\| \alpha_j x \|} ) \ll \ell M^j (1 + \frac{1}{\ell M^j |\alpha_j|}) \log \frac{1}{|\alpha_j|}

\displaystyle \ll (\ell M^j + \frac{1}{|\alpha_j|}) k \log M

and hence

\displaystyle \sum_{|x_j| \leq \ell M^j} |\sum_{|y_j| \leq \ell M^j} e( \alpha_j x_j y_j )| \ll (1 + \ell M^j |\alpha_j|) (\ell M^j + \frac{1}{|\alpha_j|}) k \log M

\displaystyle \ll (\ell M^j + (\ell M^j)^2 |\alpha_j| + \frac{1}{|\alpha_j|} ) k \log M

\displaystyle \ll k \ell^2 M^{-c_0 j} M^{2j} \log M

for all of the {j} under consideration. The claim follows since {\ell = O(k^2)}, {j \geq c_0k/2}, and {M \geq \exp( C k^2 )} for some large {C} (the latter hypothesis coming from (12)). \Box
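As a quick numerical sanity check of the geometric series bound used in the proof above, the following Python sketch (an illustration of ours, not part of the argument) verifies the standard explicit form {|\sum_{|y| \leq N} e(\theta y)| \leq \min( 2N+1, \frac{1}{2\|\theta\|} )} on a grid of phases:

```python
import cmath

def geom_sum(theta, N):
    # |sum_{|y| <= N} e(theta*y)|, where e(x) = exp(2*pi*i*x)
    return abs(sum(cmath.exp(2j * cmath.pi * theta * y) for y in range(-N, N + 1)))

def dist_to_nearest_int(x):
    # ||x||: distance from x to the nearest integer
    return abs(x - round(x))

# verify |sum_{|y| <= N} e(theta*y)| <= min(2N+1, 1/(2||theta||))
ok = True
for N in (10, 50):
    for j in range(1, 40):
        theta = j / 41.0  # phases bounded away from the integers
        bound = min(2 * N + 1, 1 / (2 * dist_to_nearest_int(theta)))
        if geom_sum(theta, N) > bound + 1e-9:
            ok = False
```

The bound follows from summing the geometric series to get the Dirichlet kernel {|\sin((2N+1)\pi\theta)|/|\sin(\pi\theta)|} and using {|\sin(\pi\theta)| \geq 2\|\theta\|}.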

From this lemma and (18) we have

\displaystyle |S|^{2\ell^2} \leq M^{-c_2 k^2} M^{- 4 \ell} J_{\ell,k}(M)^2 |B|^2

for some {c_2 >0} depending only on {c_0}. To conclude the desired bound {S \ll M^{-c/k^2}}, it thus suffices to establish the following result:

Theorem 13 (Vinogradov mean value theorem) Let {k,\ell} be natural numbers such that {\ell \geq k^2}. Then

\displaystyle J_{\ell,k}(M) \ll \exp( O(\ell^2) ) M^{O(k^2 \exp(-\ell/k^2))} \frac{M^{2\ell}}{|B|} \ \ \ \ \ (19)

where the implied constants in the {O(\cdot)} notation are absolute.

If we apply this theorem with {\ell} equal to a sufficiently large multiple of {k^2}, we obtain the required bound {S \ll M^{-c/k^2}}.

Actually, Vinogradov proved a slightly weaker estimate than this; the claim above (in a sharper form) was obtained by later refinements of the argument due to Stechkin and Karatsuba. This result has a number of applications beyond that of controlling the Riemann zeta function; for instance it has applications to Waring's problem of expressing large natural numbers as sums of {k^{th}} powers, which we will not discuss further here. The reason for the name “mean value theorem” is that the quantity {J_{\ell,k}(M)} has a Fourier-analytic interpretation as a mean

\displaystyle J_{\ell,k}(M) = \int_{{\mathbb T}^k} |S_M(\alpha)|^{2\ell}\ d\alpha

of the exponential sum

\displaystyle S_M(\alpha_1,\dots,\alpha_k) := \sum_{n \leq M} e( \alpha_1 n + \dots + \alpha_k n^k ).

The estimate (19) should be compared with the lower bound

\displaystyle J_{\ell,k}(M) \gg M^\ell + \frac{M^{2\ell}}{|B|},

where the {M^\ell} term comes from the diagonal contribution {(x_1,\dots,x_\ell) = (y_1,\dots,y_\ell)} to (17), and the {\frac{M^{2\ell}}{|B|}} term from (14) and the Cauchy-Schwarz inequality. Informally, (19) is an assertion that the {\ell}-fold sum of the discrete curve (13) is somewhat close to being uniformly distributed on {B}. The main conjecture of Vinogradov asserts the near-optimal bound

\displaystyle J_{\ell,k}(M) \ll_{\varepsilon,\ell,k} M^{\varepsilon} (M^\ell + \frac{M^{2\ell}}{|B|}) \ \ \ \ \ (20)

for any choice of {\varepsilon>0} and {\ell,k,M}. In the recent work of Wooley and Ford-Wooley, an improved version of the congruencing method given below known as efficient congruencing has been developed and used to establish the main conjecture (20) in many cases. See this recent ICM proceedings article of Wooley for a survey of the latest developments in this direction.

We now turn to the proof of the Vinogradov mean value theorem. Since

\displaystyle |B| = \prod_{j=1}^k (2\ell M^j+1) = \exp( O( k \log \ell) ) M^{\frac{k(k+1)}{2}}

and {\ell \geq k^2}, we have {k \log \ell = O( \ell^2 )}, and so it will suffice to show that

\displaystyle j_{\ell,k}(M) \ll \exp( O(\ell^2) ) M^{O(k^2 \exp(-\ell/k^2))} \ \ \ \ \ (21)

whenever {\ell \geq k^2} and {j_{\ell,k}(M)} is the normalised quantity

\displaystyle j_{\ell,k}(M) := M^{\frac{k(k+1)}{2}-2\ell} J_{\ell,k}(M), \ \ \ \ \ (22)

which can be viewed as a measure of irregularity of distribution of the quantity {\gamma_k(x_1)+\dots+\gamma_k(x_\ell)} for {x_1,\dots,x_\ell \in \{1,\dots,M\}}.
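Since {1 \leq \ell M^j}, each factor of {|B|} satisfies {2\ell M^j < 2\ell M^j + 1 \leq 3\ell M^j}, which gives the elementary enclosure {(2\ell)^k M^{k(k+1)/2} \leq |B| \leq (3\ell)^k M^{k(k+1)/2}} behind the {\exp(O(k \log \ell))} factor above. A short exact-arithmetic check of this enclosure (illustrative code, with parameters of our choosing):

```python
def box_size(l, k, M):
    # |B| = prod_{j=1}^k (2*l*M^j + 1)
    size = 1
    for j in range(1, k + 1):
        size *= 2 * l * M**j + 1
    return size

k, l, M = 4, 16, 100  # sample parameters with l >= k^2
B = box_size(l, k, M)
main_term = M ** (k * (k + 1) // 2)  # M^{k(k+1)/2}
lower_ok = (2 * l) ** k * main_term <= B
upper_ok = B <= (3 * l) ** k * main_term
```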

We have the extremely crude bound

\displaystyle J_{\ell,k}(M) \leq M^{2\ell}

coming from the fact that there are {M^{2\ell}} choices of {x_1,\dots,x_\ell,y_1,\dots,y_\ell \in \{1,\dots,M\}}, so that

\displaystyle j_{\ell,k}(M) \leq M^{O(k^2)}. \ \ \ \ \ (23)


This already establishes the claim when {\ell \leq 10k^2} (say). For {\ell > 10k^2}, what we will do is establish the recursive inequality

\displaystyle j_{\ell,k}(M)+1 \ll \exp(O(\ell k)) ( j_{\ell-k,k}(q) + 1 ) \ \ \ \ \ (24)

whenever {\ell > 10k^2}, for some {q \leq M^{1-1/k}}; iterating this {\lfloor \ell/k \rfloor} times and then using (23), we will obtain the claim. Note how it is important here that no powers of {M} are lost in the estimate (24). However, we can be quite generous in losing factors such as {k!}, {\ell^k}, or {k^\ell}, as these are easily absorbed in the {\exp(O(\ell k))} term. To prove (24), we begin with a technical reduction. For a prime {p}, let us first define a restricted version {J^{(p)}_{\ell,k}(M)} of {J_{\ell,k}(M)} to be the number of solutions to

\displaystyle \gamma_k(x_1)+\dots+\gamma_k(x_\ell) = \gamma_k(y_1)+\dots+\gamma_k(y_\ell)

with {x_1,\dots,x_\ell,y_1,\dots,y_\ell \in \{1,\dots,M\}}, with {x_1,\dots,x_k} having distinct reductions mod {p}, and {y_1,\dots,y_k} also having distinct reductions mod {p}. As we are taking {p} to be rather large, one should think of the constraint of having distinct reductions mod {p} as a mild condition, so that {J^{(p)}_{\ell,k}(M)} is morally of the same size as {J_{\ell,k}(M)}. This is confirmed by the following lemma:

Lemma 14 There exists a prime {p} with {k M^{1/k} < p \ll k^4 M^{1/k}} such that

\displaystyle J_{\ell,k}(M) \ll \exp( O( \ell k ) ) ( J^{(p)}_{\ell,k}(M) + M^{\ell+k-1} ).

The reason why we need the prime {p} to be somewhat comparable to {M^{1/k}} will be clearer later.

Proof: By definition, {J_{\ell,k}(M)} is the number of solutions to

\displaystyle \gamma_k(x_1)+\dots+\gamma_k(x_\ell) = \gamma_k(y_1)+\dots+\gamma_k(y_\ell) \ \ \ \ \ (25)

with {x_1,\dots,x_\ell,y_1,\dots,y_\ell \in \{1,\dots,M\}}. If the set {\{x_1,\dots,x_\ell\}} has cardinality less than {k}, then there are {O( k^\ell M^{k-1} )} ways to choose the {x_1,\dots,x_\ell}, and {O(M^\ell)} ways to choose {y_1,\dots,y_\ell}, leading to a contribution of at most {\exp( O( \ell k ) ) M^{\ell+k-1}} to {J_{\ell,k}(M)}. Similarly if {\{y_1,\dots,y_\ell\}} has cardinality less than {k}. Thus we may restrict attention to the case when {\{x_1,\dots,x_\ell\}} and {\{y_1,\dots,y_\ell\}} have cardinality at least {k}. By paying a factor of {\binom{\ell}{k}^2 = \exp( O(\ell k) )}, we may then restrict to the case where {x_1,\dots,x_k} are all distinct, and {y_1,\dots,y_k} are all distinct. In particular, the quantity

\displaystyle \prod_{1 \leq i < j \leq k} (x_i-x_j)(y_i-y_j)

is non-zero and has magnitude at most {M^{k^2-k}}. In particular, there are at most {k^3-k^2} primes larger than {M^{1/k}} that divide this quantity (since a product of more than {k^3-k^2} primes exceeding {M^{1/k}} would have magnitude exceeding {M^{k^2-k}}). On the other hand, from the prime number theorem we can find {k^3} distinct primes {p_1,\dots,p_{k^3}} in the range {k M^{1/k} < p_j \ll k^4 M^{1/k}}. We thus see that for each solution to (25) with {x_1,\dots,x_\ell,y_1,\dots,y_\ell \in \{1,\dots,M\}}, {x_1,\dots,x_k} distinct, and {y_1,\dots,y_k} distinct, there is a {1 \leq j \leq k^3} such that {x_1,\dots,x_k} has distinct reductions mod {p_j}, and {y_1,\dots,y_k} also has distinct reductions mod {p_j}, so that this tuple contributes to {J^{(p_j)}_{\ell,k}(M)}. Thus we have

\displaystyle J_{\ell,k}(M) \ll \sum_{j=1}^{k^3} J^{(p_j)}_{\ell,k}(M) + \exp(O(\ell k) ) M^{\ell+k-1}

and the claim follows. \Box

Introducing the normalised quantity

\displaystyle j^{(p)}_{\ell,k}(M) := M^{\frac{k(k+1)}{2}-2\ell} J^{(p)}_{\ell,k}(M),

and recalling that {\ell > 10k^2}, we conclude that

\displaystyle j_{\ell,k}(M) \ll \exp( O( \ell k ) ) j^{(p)}_{\ell,k}(M) + 1 \ \ \ \ \ (26)

for some prime {k M^{1/k} < p \ll k^4 M^{1/k}}.

The next step is to analyse the multiplicity properties of the {k}-fold sum {\gamma_k(y_1)+\dots+\gamma_k(y_k)}. Clearly we have

\displaystyle \gamma_k(y_1) + \dots + \gamma_k(y_k) = (p_1(y_1,\dots,y_k), \dots, p_k(y_1,\dots,y_k))

where the power sums {p_j(y_1,\dots,y_k)} are defined by

\displaystyle p_j(y_1,\dots,y_k) := y_1^j + \dots + y_k^j.

We recall a classical relation, known as Newton’s identities (or Newton-Girard identities), between these power sums and the elementary symmetric polynomials {e_j(y_1,\dots,y_k)}, defined for {j \geq 0} by the formula

\displaystyle e_j(y_1,\dots,y_k) := \sum_{1 \leq i_1 < \dots < i_j \leq k} y_{i_1} \dots y_{i_j}.

For instance {e_0=1}, {e_1 = p_1}, and {e_j=0} for all {j>k}. One can also view the {e_j} as essentially being the coefficients of the polynomial {(x-y_1)\dots(x-y_k)}:

\displaystyle (x-y_1) \dots (x-y_k) = \sum_{j=0}^k (-1)^j e_j(y_1,\dots,y_k) x^{k-j}. \ \ \ \ \ (27)


Lemma 15 (Newton’s identities) For any {1 \leq j \leq k}, one has the polynomial identity

\displaystyle j e_j = \sum_{i=0}^{j-1} (-1)^i e_{j-i-1} p_{i+1}.

Thus for instance

\displaystyle e_1 = p_1

\displaystyle 2e_2 = e_1 p_1 - p_2

\displaystyle 3e_3 = e_2 p_1 - e_1 p_2 + p_3.

Proof: We use the method of generating functions. We rewrite the polynomial identity (27) as

\displaystyle (1-y_1 x) \dots (1-y_k x) = \sum_{j=0}^k (-1)^j e_j(y_1,\dots,y_k) x^j.

On the other hand, taking logarithmic derivatives we have (as formal power series)

\displaystyle \frac{d}{dx} [(1-y_1 x) \dots (1-y_k x)] = - (1-y_1 x) \dots (1-y_k x) \sum_{a=1}^k \frac{y_a}{1-y_a x}.

From the geometric series formula we have (as formal power series)

\displaystyle \sum_{a=1}^k \frac{y_a}{1-y_a x} = \sum_{i=0}^\infty p_{i+1}(y_1,\dots,y_k) x^i

and the claim then follows by equating coefficients. \Box
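The identity is easy to test numerically; the following sketch (with sample values chosen arbitrarily) recovers the elementary symmetric polynomials from the power sums via the recursion of Lemma 15 and checks the result against a direct computation:

```python
from itertools import combinations
from math import prod

def power_sums(ys, k):
    # p_j = y_1^j + ... + y_k^j for j = 1..k
    return [sum(y**j for y in ys) for j in range(1, k + 1)]

def elem_sym_direct(ys, k):
    # e_j = sum over j-element subsets of the products of their entries
    return [sum(prod(c) for c in combinations(ys, j)) for j in range(1, k + 1)]

def elem_sym_newton(p, k):
    # recover e_1,...,e_k from p_1,...,p_k via j*e_j = sum_i (-1)^i e_{j-i-1} p_{i+1}
    e = [1]  # e_0 = 1
    for j in range(1, k + 1):
        s = sum((-1)**i * e[j - i - 1] * p[i] for i in range(j))
        assert s % j == 0  # s equals j * e_j, an integer multiple of j
        e.append(s // j)
    return e[1:]

ys = [3, -1, 4, 7]
k = len(ys)
e_direct = elem_sym_direct(ys, k)
e_newton = elem_sym_newton(power_sums(ys, k), k)
```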

Corollary 16 If {y_1,\dots,y_k \in {\bf Z}}, then the quantity {\gamma_k(y_1)+\dots+\gamma_k(y_k)} determines the multiset {\{y_1,\dots,y_k\}} up to permutations. In particular, any element {v} of {{\bf Z}^k} has at most {k!} representations of the form {v = \gamma_k(y_1)+\dots+\gamma_k(y_k)}.

Proof: The quantity {\gamma_k(y_1)+\dots+\gamma_k(y_k)} determines the quantities {p_j(y_1,\dots,y_k)} for {j=1,\dots,k}. By Newton’s identities and induction, this determines the quantities {e_j(y_1,\dots,y_k)} for {j=1,\dots,k}, and thus determines the polynomial {(x-y_1) \dots (x-y_k)}. The claim now follows from the unique factorisation of polynomials. \Box
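One can confirm the {k!}-to-one behaviour by brute force over a small box; here we group all {k}-tuples by the value of {\gamma_k(y_1)+\dots+\gamma_k(y_k)} and check that every fibre consists of permutations of a single multiset (an illustrative sketch, with small parameters of our choosing):

```python
from itertools import product

k, R = 3, 6  # all k-tuples with entries in {1,...,R}
fibers = {}
for ys in product(range(1, R + 1), repeat=k):
    # gamma_k(y_1)+...+gamma_k(y_k) = (p_1, ..., p_k), the power sums
    key = tuple(sum(y**j for y in ys) for j in range(1, k + 1))
    fibers.setdefault(key, []).append(ys)

max_fiber = max(len(v) for v in fibers.values())
permutation_fibers = all(sorted(t) == sorted(v[0])
                         for v in fibers.values() for t in v)
```

By Corollary 16 the largest possible fibre size is {k! = 6}, attained by tuples with three distinct entries such as {(1,2,3)}.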

This result by itself turns out not to give particularly good bounds on {J_{\ell,k}(M)}; the sums {\gamma_k(y_1)+\dots+\gamma_k(y_k)} are so sparsely distributed that the number of representations of a given {v} (in, say, the box {B}) is typically zero. However, if we localise “{p}-adically”, by replacing the integers {{\bf Z}} with the ring {{\bf Z}/p^k{\bf Z}}, we get a more usable result:

Corollary 17 Let {p} be a prime with {p>k}. Then any {v \in ({\bf Z}/p^k{\bf Z})^k} has at most {k!} representations of the form

\displaystyle v = \gamma_k(y_1) + \dots + \gamma_k(y_k)

with {y_1,\dots,y_k \in {\bf Z}/p^k {\bf Z}} having distinct reductions mod {p}, where by abuse of notation we localise {\gamma_k} to a map {\gamma_k: {\bf Z}/p^k{\bf Z} \rightarrow ({\bf Z}/p^k{\bf Z})^k} in the obvious fashion.

To put it another way, this corollary asserts that the map {(y_1,\dots,y_k) \rightarrow \gamma_k(y_1)+\dots+\gamma_k(y_k)} from {({\bf Z}/p^k{\bf Z})^k} to {({\bf Z}/p^k{\bf Z})^k} is at most {k!}-to-one, if one excludes those {y_1,\dots,y_k} which do not have distinct reductions mod {p}.

Proof: Suppose we have two representations

\displaystyle \gamma_k(x_1)+\dots+\gamma_k(x_k) = \gamma_k(y_1)+\dots+\gamma_k(y_k)

with {y_1,\dots,y_k \in {\bf Z}/p^k{\bf Z}} and {x_1,\dots,x_k \in {\bf Z}/p^k{\bf Z}} each having distinct reductions mod {p}. By Newton’s identities as before, we have {e_j(x_1,\dots,x_k) = e_j(y_1,\dots,y_k)} for {j=1,\dots,k} (note that as {p>k} there is no difficulty dividing by {j} for {j=1,\dots,k}). Therefore we have the polynomial identity

\displaystyle (x-x_1) \dots (x-x_k) = (x-y_1) \dots (x-y_k)

as polynomials over {{\bf Z}/p^k {\bf Z}}. In particular, each {x_i} is a root of {(x-y_1) \dots (x-y_k)} and thus must equal one of the {y_j} since the reductions mod {p} are all distinct (and so all but one of the factors {(x_i-y_1),\dots,(x_i-y_k)} will be invertible in {{\bf Z}/p^k{\bf Z}}). This implies that {\{x_1,\dots,x_k\}} is a permutation of {\{y_1,\dots,y_k\}}, and the claim follows. \Box
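A brute-force check of Corollary 17 in the smallest interesting case {k=2}, {p=5} (so the ring is {{\bf Z}/25{\bf Z}}): among tuples with distinct reductions mod {p}, every fibre of the map has size at most {k! = 2} (an illustrative sketch):

```python
from itertools import product

p, k = 5, 2
mod = p**k  # the ring Z/p^k Z, here Z/25Z
fiber_sizes = {}
for ys in product(range(mod), repeat=k):
    if len({y % p for y in ys}) < k:
        continue  # keep only tuples with distinct reductions mod p
    # gamma_k(y_1)+...+gamma_k(y_k) in (Z/p^k Z)^k, componentwise power sums
    key = tuple(sum(y**j for y in ys) % mod for j in range(1, k + 1))
    fiber_sizes[key] = fiber_sizes.get(key, 0) + 1

max_fiber = max(fiber_sizes.values())
```

The maximum fibre size is exactly {2}, since any pair {(y_1,y_2)} with {y_1 \neq y_2} mod {p} shares its fibre with {(y_2,y_1)}.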

Returning to the integers, and specialising to the range of primes {p} produced by Lemma 14, we conclude

Lemma 18 (Linnik’s lemma) Let {p} be a prime with {k M^{1/k} < p \ll k^4 M^{1/k}}, and let {v = (v_1,\dots,v_k) \in {\bf Z}^k}. Then the number of solutions to the system

\displaystyle x_1^j + \dots + x_k^j = v_j \hbox{ mod } p^j

with {j=1,\dots,k} and {x_1,\dots,x_k \in \{1,\dots,M\}}, with {x_1,\dots,x_k} having distinct reductions modulo {p}, is at most {\exp( O(\ell k) ) p^{\frac{k(k-1)}{2}}}.

A version of this lemma also applies for primes {p} outside of the range {k M^{1/k} < p \ll k^4 M^{1/k}}, but this is basically the range where the lemma is the most efficient. Indeed, probabilistic heuristics suggest that the number of solutions here should be approximately {M^k / \prod_{j=1}^k p^j}, which is comparable to {\exp( O(\ell k) ) p^{\frac{k(k-1)}{2}}} in the range {k M^{1/k} < p \ll k^4 M^{1/k}}.

Proof: Each residue class mod {p^j} consists of {p^{k-j}} residue classes mod {p^k}. Since

\displaystyle \prod_{j=1}^k p^{k-j} = p^{k(k-1)/2},

it thus suffices (replacing {v} with {p^{k(k-1)/2}} different elements of {{\bf Z}^k}) to show that for any {v \in {\bf Z}^k}, the number of solutions to the lifted system

\displaystyle x_1^j + \dots + x_k^j = v_j \hbox{ mod } p^k

with {j=1,\dots,k} and {x_1,\dots,x_k \in \{1,\dots,M\}} is at most {\exp( O( \ell k) )}. But each residue class mod {p^k} meets {\{1,\dots,M\}} in at most one representative (since {p > k M^{1/k}} implies {p^k > k^k M \geq M}), so the claim follows from Corollary 17, crudely bounding {k!} by {\exp( O( \ell k) )}. \Box
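The counting step in this proof can be illustrated concretely: when {p > k M^{1/k}} we have {p^k > M}, so each residue class mod {p^k} meets {\{1,\dots,M\}} in at most one point, and the lifted system then has at most {k!} solutions by Corollary 17. A sketch with {k=2}, {p=7}, {M=9} (parameters of our choosing, satisfying {k M^{1/k} = 6 < p}):

```python
from itertools import product

p, k, M = 7, 2, 9  # k*M^(1/k) = 6 < p, and p^k = 49 > M
mod = p**k
solution_counts = {}
for xs in product(range(1, M + 1), repeat=k):
    if len({x % p for x in xs}) < k:
        continue  # require distinct reductions mod p
    # the lifted system: power sums mod p^k
    v = tuple(sum(x**j for x in xs) % mod for j in range(1, k + 1))
    solution_counts[v] = solution_counts.get(v, 0) + 1

max_solutions = max(solution_counts.values())
```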

Now we need to consider sums {\gamma_k(x_1)+\dots+\gamma_k(x_\ell)} of {\ell} terms, rather than just {k} terms. Recall that {J^{(p)}_{\ell,k}(M)} is the number of solutions to

\displaystyle \gamma_k(x_1)+\dots+\gamma_k(x_\ell) = \gamma_k(y_1)+\dots+\gamma_k(y_\ell)

with {x_1,\dots,x_\ell,y_1,\dots,y_\ell \in \{1,\dots,M\}}, with {x_1,\dots,x_k} having distinct reductions mod {p}, and {y_1,\dots,y_k} also having distinct reductions mod {p}. We also need an even more restricted version {\tilde J^{(p)}_{\ell,k}(M)} of {J_{\ell,k}(M)}, which is defined similarly to {J^{(p)}_{\ell,k}(M)} but with the additional constraint that

\displaystyle x_{k+1} = \dots = x_{\ell} = y_{k+1} = \dots = y_\ell \hbox{ mod } p.

Probabilistic heuristics suggest that {\tilde J^{(p)}_{\ell,k}(M)} should be about {p^{1-2(\ell-k)}} times as large as {J^{(p)}_{\ell,k}(M)}. One side of this prediction is confirmed by the following application of Hölder’s inequality:

Lemma 19 (Hölder inequality) We have

\displaystyle J^{(p)}_{\ell,k}(M) \leq p^{2\ell - 2k - 1} \tilde J^{(p)}_{\ell,k}(M).

In particular, introducing the normalised quantity

\displaystyle \tilde j^{(p)}_{\ell,k}(M) := M^{\frac{k(k+1)}{2}-2\ell} p^{2\ell - 2k - 1} \tilde J^{(p)}_{\ell,k}(M) \ \ \ \ \ (28)

we have

\displaystyle j^{(p)}_{\ell,k}(M) \leq \tilde j^{(p)}_{\ell,k}(M) . \ \ \ \ \ (29)


Proof: It is easiest to prove this by Fourier-analytic means. Observe that we have the Fourier expansion

\displaystyle J^{(p)}_{\ell,k}(M) = \int_{({\bf R}/{\bf Z})^k} |S_1(\alpha)|^2 |S_2(\alpha)|^{2(\ell-k)}\ d\alpha

where

\displaystyle S_1(\alpha) := \sum_{x_1,\dots,x_k \in \{1,\dots,M\}}^* e( \alpha \cdot ( \gamma_k(x_1)+\dots+\gamma_k(x_k) ) )

where the asterisk denotes the restriction to those {x_1,\dots,x_k} with distinct reductions modulo {p} and {(\alpha_1,\dots,\alpha_k) \cdot (n_1,\dots,n_k)} is the usual dot product {\alpha_1 n_1 + \dots + \alpha_k n_k}, and

\displaystyle S_2(\alpha) := \sum_{x \in \{1,\dots,M\}} e( \alpha \cdot \gamma_k(x) ).

Similarly, we have

\displaystyle \tilde J^{(p)}_{\ell,k}(M) = \sum_{u \in {\bf Z}/p{\bf Z}} \int_{({\bf R}/{\bf Z})^k} |S_1(\alpha)|^2 |S_{2,u}(\alpha)|^{2(\ell-k)}\ d\alpha

where

\displaystyle S_{2,u}(\alpha) := \sum_{x \in \{1,\dots,M\}: x = u\hbox{ mod } p} e( \alpha \cdot \gamma_k(x) ).

Since {S_2(\alpha) = \sum_{u \in {\bf Z}/p{\bf Z}} S_{2,u}(\alpha)}, Hölder’s inequality gives the pointwise bound {|S_2(\alpha)|^{2(\ell-k)} \leq p^{2(\ell-k)-1} \sum_{u \in {\bf Z}/p{\bf Z}} |S_{2,u}(\alpha)|^{2(\ell-k)}}, and the claim follows upon integrating in {\alpha}. \Box
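The Hölder step here is just the elementary inequality {|\sum_{u} z_u|^{2m} \leq p^{2m-1} \sum_u |z_u|^{2m}} for complex numbers {z_1,\dots,z_p} (applied pointwise in {\alpha} with {m = \ell-k}, then integrated). A quick randomised check of this inequality (illustrative code):

```python
import random

random.seed(0)
p, m = 7, 3  # p summands, exponent 2m (think m = l - k)
ok = True
for _ in range(200):
    # random complex numbers in the unit square
    z = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(p)]
    lhs = abs(sum(z)) ** (2 * m)
    rhs = p ** (2 * m - 1) * sum(abs(w) ** (2 * m) for w in z)
    if lhs > rhs + 1e-9:
        ok = False
```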

Exercise 20 Find a non-Fourier-analytic proof of the above lemma, based on the Cauchy-Schwarz inequality, in the case when {\ell-k} is a power of two. (Hint: you may wish to generalise the Hölder inequality to one involving the number of solutions {E_\ell(A_1,\dots,A_\ell)} to systems {a_1+\dots+a_\ell = b_1 + \dots + b_\ell} where each {a_j,b_j} is drawn from a finite multiset {A_j} (such quantities are known as additive energies of order {\ell} in the additive combinatorics literature).) For an additional challenge: find a Cauchy-Schwarz proof that works for arbitrary values of {\ell-k}.

Next, we use the Linnik lemma and some elementary arguments to bound {\tilde J^{(p)}_{\ell,k}(M)}:

Lemma 21 If {k M^{1/k} < p \ll k^4 M^{1/k}}, then

\displaystyle \tilde J^{(p)}_{\ell,k}(M) \leq \exp( O(\ell k)) M^k p^{\frac{k(k-1)}{2}+1} J_{\ell-k,k}(q)

where {q := \lfloor M/p\rfloor+1}. In particular, from (22) and (28) we have

\displaystyle \tilde j^{(p)}_{\ell,k}(M) \ll \exp( O(\ell k)) j_{\ell-k,k}(q). \ \ \ \ \ (30)


Proof: By definition, {\tilde J^{(p)}_{\ell,k}(M)} is the number of solutions to the system

\displaystyle \gamma_k(x_1)+\dots+\gamma_k(x_\ell) = \gamma_k(y_1)+\dots+\gamma_k(y_\ell)

where {x_1,\dots,x_\ell,y_1,\dots,y_\ell \in\{1,\dots,M\}}, with {x_1,\dots,x_k} and {y_1,\dots,y_k} each having distinct reductions mod {p}, and also

\displaystyle x_{k+1}=\dots=x_{\ell} = y_{k+1} = \dots = y_{\ell} = u \hbox{ mod } p

for some {u \in \{1,\dots,p\}}. From the binomial theorem, {\gamma_k(\cdot-u)} is a linear transform of {\gamma_k(\cdot)}, so

\displaystyle \gamma_k(x_1-u)+\dots+\gamma_k(x_\ell-u) = \gamma_k(y_1-u)+\dots+\gamma_k(y_\ell-u).

Writing {x_j = u + p x'_j} and {y_j = u + p y'_j} for {k+1 \leq j \leq \ell} and some {x'_j,y'_j \in\{1,\dots, q \}}, we conclude that

\displaystyle \gamma_k(x_1-u)+\dots+\gamma_k(x_k-u) + \gamma_k(p x'_{k+1}) + \dots + \gamma_k( p x'_\ell ) = \gamma_k(y_1-u)+\dots+\gamma_k(y_k-u) + \gamma_k( p y'_{k+1} ) + \dots + \gamma_k( p y'_\ell ).

In particular, since the {j^{th}} component of {\gamma_k(p x')} is {p^j (x')^j}, which vanishes modulo {p^j}, taking the {j^{th}} component of both sides modulo {p^j} gives

\displaystyle (x_1-u)^j + \dots + (x_k-u)^j = (y_1-u)^j + \dots + (y_k-u)^j \hbox{ mod } p^j.

There are {p M^k} choices for {u,y_1,\dots,y_k}, and then by Linnik’s lemma once {u,y_1,\dots,y_k} are chosen, there are at most {\exp(O(\ell k)) p^{\frac{k(k-1)}{2}}} choices for {x_1,\dots,x_k}. Once these are chosen, we still have to select {x'_{k+1},\dots,x'_\ell,y'_{k+1},\dots,y'_\ell \in \{1,\dots,q\}} subject to a constraint of the form

\displaystyle \gamma_k( x'_{k+1} ) + \dots + \gamma_k(x'_\ell) = a + \gamma_k( y'_{k+1} ) + \dots + \gamma_k(y'_\ell)

for some {a} depending on {u, x_1,\dots,x_k,y_1,\dots,y_k}. The number of solutions to this system is

\displaystyle \sum_n \nu_{\ell-k,k,q}(n) \nu_{\ell-k,k,q}(n+a)

which by the Cauchy-Schwarz inequality is bounded by {J_{\ell-k,k}(q)}. The claim follows. \Box
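The final Cauchy-Schwarz step is the general fact that a shifted correlation of a nonnegative sequence is dominated by the unshifted one: {\sum_n \nu(n) \nu(n+a) \leq \sum_n \nu(n)^2}. A quick sketch of this fact with a random sequence standing in for {\nu_{\ell-k,k,q}}:

```python
import random

random.seed(1)
N = 50
# nu: nonnegative integer "representation counts", zero outside 0..N-1
nu = [random.randint(0, 10) for _ in range(N)]

def shifted_corr(nu, a):
    # sum_n nu(n) * nu(n + a), treating nu as zero outside its support
    return sum(nu[n] * nu[n + a] for n in range(len(nu) - a))

energy = sum(v * v for v in nu)  # sum_n nu(n)^2
ok = all(shifted_corr(nu, a) <= energy for a in range(N))
```

The inequality is exact by Cauchy-Schwarz, with equality at the zero shift {a = 0}.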

Combining (26), (29), and (30) we obtain (24) as required.