
Kaisa Matomaki, Maksym Radziwill, and I have uploaded to the arXiv our paper “Correlations of the von Mangoldt and higher divisor functions I. Long shift ranges“, submitted to Proceedings of the London Mathematical Society. This paper is concerned with the estimation of correlations such as

$\displaystyle \sum_{n \leq X} \Lambda(n) \Lambda(n+h) \ \ \ \ \ (1)$

for medium-sized ${h}$ and large ${X}$, where ${\Lambda}$ is the von Mangoldt function; we also consider variants of this sum in which one of the von Mangoldt functions is replaced with a (higher order) divisor function, but for sake of discussion let us focus just on the sum (1). Understanding this sum is very closely related to the problem of finding pairs of primes that differ by ${h}$; for instance, if one could establish a lower bound

$\displaystyle \sum_{n \leq X} \Lambda(n) \Lambda(n+2) \gg X$

then this would easily imply the twin prime conjecture.

The (first) Hardy-Littlewood conjecture asserts an asymptotic

$\displaystyle \sum_{n \leq X} \Lambda(n) \Lambda(n+h) = {\mathfrak S}(h) X + o(X) \ \ \ \ \ (2)$

as ${X \rightarrow \infty}$ for any fixed positive ${h}$, where the singular series ${{\mathfrak S}(h)}$ is an arithmetic factor arising from the irregularity of distribution of ${\Lambda}$ at small moduli, defined explicitly by

$\displaystyle {\mathfrak S}(h) := 2 \Pi_2 \prod_{p|h; p>2} \frac{p-1}{p-2}$

when ${h}$ is even, and ${{\mathfrak S}(h)=0}$ when ${h}$ is odd, where

$\displaystyle \Pi_2 := \prod_{p>2} (1-\frac{1}{(p-1)^2}) = 0.66016\dots$

is (half of) the twin prime constant. See for instance this previous blog post for a heuristic explanation of this conjecture. From the previous discussion we see that (2) for ${h=2}$ would imply the twin prime conjecture. Sieve theoretic methods are only able to provide an upper bound of the form ${ \sum_{n \leq X} \Lambda(n) \Lambda(n+h) \ll {\mathfrak S}(h) X}$.
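As a quick numerical sanity check (my own illustration, not from the paper), one can tabulate ${\Lambda}$ by sieve, compute the left-hand side of (2) for a modest ${X}$, and compare with ${{\mathfrak S}(h) X}$. The cutoffs ${X = 2 \times 10^5}$ and the truncation of the Euler product for ${\Pi_2}$ are arbitrary choices:

```python
import math

def mangoldt_table(limit):
    """Lambda(n) for n <= limit via a smallest-prime-factor sieve."""
    spf = list(range(limit + 1))
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, limit + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0.0] * (limit + 1)
    for n in range(2, limit + 1):
        p, m = spf[n], n
        while m % p == 0:
            m //= p
        if m == 1:  # n = p^k is a prime power
            lam[n] = math.log(p)
    return lam

def singular_series(h, cut=10_000):
    """S(h) = 2 * Pi_2 * prod_{p|h, p>2} (p-1)/(p-2) for even h, and 0 for odd h,
    with Pi_2 approximated by a truncated Euler product over p <= cut."""
    if h % 2:
        return 0.0
    sieve = [True] * (cut + 1)
    pi2 = 1.0
    for p in range(2, cut + 1):
        if sieve[p]:
            for m in range(p * p, cut + 1, p):
                sieve[m] = False
            if p > 2:
                pi2 *= 1.0 - 1.0 / (p - 1) ** 2
    s = 2.0 * pi2
    m = h
    while m % 2 == 0:
        m //= 2
    p = 3
    while p * p <= m:  # multiply in the odd prime divisors of h
        if m % p == 0:
            s *= (p - 1) / (p - 2)
            while m % p == 0:
                m //= p
        p += 2
    if m > 1:
        s *= (m - 1) / (m - 2)
    return s

X, h = 200_000, 2
lam = mangoldt_table(X + h)
empirical = sum(lam[n] * lam[n + h] for n in range(1, X + 1))
predicted = singular_series(h) * X
print(f"h={h}: sum = {empirical:.0f}, Hardy-Littlewood prediction = {predicted:.0f}")
```

Even at this small scale the ratio of the two quantities is already close to ${1}$, and ${{\mathfrak S}(6) = 2 {\mathfrak S}(2)}$ reflects the familiar fact that prime pairs at distance ${6}$ are twice as common as twins.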

Needless to say, apart from the trivial case of odd ${h}$, there are no values of ${h}$ for which the Hardy-Littlewood conjecture is known. However there are some results that say that this conjecture holds "on the average": in particular, if ${H}$ is a quantity depending on ${X}$ that is somewhat large, there are results that show that (2) holds for most (i.e. for ${1-o(1)}$) of the ${h}$ between ${0}$ and ${H}$. Ideally one would like to get ${H}$ as small as possible; in particular, one can view the full Hardy-Littlewood conjecture as the endpoint case when ${H}$ is bounded.

The first results in this direction were by van der Corput and by Lavrik, who established such a result with ${H = X}$ (with a subsequent refinement by Balog); Wolke lowered ${H}$ to ${X^{5/8+\varepsilon}}$, and Mikawa lowered ${H}$ further to ${X^{1/3+\varepsilon}}$. The main result of this paper is a further lowering of ${H}$ to ${X^{8/33+\varepsilon}}$. In fact (as in the preceding works) we get a better error term than ${o(X)}$, namely an error of the shape ${O_A( X \log^{-A} X)}$ for any ${A}$.

Our arguments initially proceed along standard lines. One can use the Hardy-Littlewood circle method to express the correlation in (2) as an integral involving exponential sums ${S(\alpha) := \sum_{n \leq X} \Lambda(n) e(\alpha n)}$. The contribution of “major arc” ${\alpha}$ is known by a standard computation to recover the main term ${{\mathfrak S}(h) X}$ plus acceptable errors, so it is a matter of controlling the “minor arcs”. After averaging in ${h}$ and using the Plancherel identity, one is basically faced with establishing a bound of the form

$\displaystyle \int_{\beta-1/H}^{\beta+1/H} |S(\alpha)|^2\ d\alpha \ll_A X \log^{-A} X$

for any “minor arc” ${\beta}$. If ${\beta}$ is somewhat close to a low height rational ${a/q}$ (specifically, if it is within ${X^{-1/6-\varepsilon}}$ of such a rational with ${q = O(\log^{O(1)} X)}$), then this type of estimate is roughly of comparable strength (by another application of Plancherel) to the best available prime number theorem in short intervals on the average, namely that the prime number theorem holds for most intervals of the form ${[x, x + x^{1/6+\varepsilon}]}$, and we can handle this case using standard mean value theorems for Dirichlet series. So we can restrict attention to the “strongly minor arc” case where ${\beta}$ is far from such rationals.

The next step (following some ideas we found in a paper of Zhan) is to rewrite this estimate not in terms of the exponential sums ${S(\alpha) := \sum_{n \leq X} \Lambda(n) e(\alpha n)}$, but rather in terms of the Dirichlet polynomial ${F(s) := \sum_{n \sim X} \frac{\Lambda(n)}{n^s}}$. After a certain amount of computation (including some oscillatory integral estimates arising from stationary phase), one is eventually reduced to the task of establishing an estimate of the form

$\displaystyle \int_{t \sim \lambda X} (\int_{t-\lambda H}^{t+\lambda H} |F(\frac{1}{2}+it')|\ dt')^2\ dt \ll_A \lambda^2 H^2 X \log^{-A} X$

for any ${X^{-1/6-\varepsilon} \ll \lambda \ll \log^{-B} X}$ (with ${B}$ sufficiently large depending on ${A}$).

The next step, which is again standard, is the use of the Heath-Brown identity (as discussed for instance in this previous blog post) to split up ${\Lambda}$ into a number of components that have a Dirichlet convolution structure. Because the exponent ${8/33}$ we are shooting for is less than ${1/4}$, we end up with five types of components that arise, which we call "Type ${d_1}$", "Type ${d_2}$", "Type ${d_3}$", "Type ${d_4}$", and "Type II". The "Type II" sums are Dirichlet convolutions involving a factor supported on a range ${[X^\varepsilon, X^{-\varepsilon} H]}$ and are quite easy to deal with; the "Type ${d_j}$" terms are Dirichlet convolutions that resemble (non-degenerate portions of) the ${j^{th}}$ divisor function, formed from convolving together ${j}$ portions of ${1}$. The "Type ${d_1}$" and "Type ${d_2}$" terms can be estimated satisfactorily by standard moment estimates for Dirichlet polynomials; this already recovers the result of Mikawa (and our argument is in fact slightly more elementary in that no Kloosterman sum estimates are required). It is the treatment of the "Type ${d_3}$" and "Type ${d_4}$" sums that requires some new analysis, with the Type ${d_3}$ terms turning out to be the most delicate. After using an existing moment estimate of Jutila for Dirichlet L-functions, matters reduce to obtaining a family of estimates, a typical one of which (relating to the more difficult Type ${d_3}$ sums) is of the form

$\displaystyle \int_{t - H}^{t+H} |M( \frac{1}{2} + it')|^2\ dt' \ll X^{\varepsilon^2} H \ \ \ \ \ (3)$

for “typical” ordinates ${t}$ of size ${X}$, where ${M}$ is the Dirichlet polynomial ${M(s) := \sum_{n \sim X^{1/3}} \frac{1}{n^s}}$ (a fragment of the Riemann zeta function). The precise definition of “typical” is a little technical (because of the complicated nature of Jutila’s estimate) and will not be detailed here. Such a claim would follow easily from the Lindelof hypothesis (which would imply that ${M(1/2 + it) \ll X^{o(1)}}$) but of course we would like to have an unconditional result.

At this point, having exhausted all the Dirichlet polynomial estimates that are usefully available, we return to “physical space”. Using some further Fourier-analytic and oscillatory integral computations, we can estimate the left-hand side of (3) by an expression that is roughly of the shape

$\displaystyle \frac{H}{X^{1/3}} \sum_{\ell \sim X^{1/3}/H} |\sum_{m \sim X^{1/3}} e( \frac{t}{2\pi} \log \frac{m+\ell}{m-\ell} )|.$

The phase ${\frac{t}{2\pi} \log \frac{m+\ell}{m-\ell}}$ can be Taylor expanded as the sum of ${\frac{t \ell}{\pi m}}$ and a lower order term ${\frac{t \ell^3}{3\pi m^3}}$, plus negligible errors. If we could discard the lower order term then we would get quite a good bound using the exponential sum estimates of Robert and Sargos, which control averages of exponential sums with purely monomial phases, with the averaging allowing us to exploit the hypothesis that ${t}$ is "typical". Figuring out how to get rid of this lower order term caused some inefficiency in our arguments; the best we could do (after much experimentation) was to use Fourier analysis to shorten the sums, estimate a one-parameter average exponential sum with a binomial phase by a two-parameter average with a monomial phase, and then use the van der Corput ${B}$ process followed by the estimates of Robert and Sargos. This rather complicated procedure works up to ${H = X^{8/33+\varepsilon}}$; it may be possible that some alternate way to proceed here could improve the exponent somewhat.
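As a quick check of this expansion (with arbitrary sample values of my own choosing, not parameters from the paper), note that ${\log \frac{m+\ell}{m-\ell} = 2(\frac{\ell}{m} + \frac{\ell^3}{3m^3} + \dots)}$; since phases need to be accurate to within ${o(1)}$ before cancellation in ${e(\cdot)}$ can be exploited, the cubic term genuinely cannot be dropped outright:

```python
import math

# Arbitrary illustrative values with ell << m << t (not from the paper)
t, m, ell = 1.0e9, 1.0e5, 50.0

exact = (t / (2 * math.pi)) * math.log((m + ell) / (m - ell))
main = t * ell / (math.pi * m)             # the leading term t*ell/(pi*m)
cubic = t * ell ** 3 / (3 * math.pi * m ** 3)  # the lower order term

# The main term alone misses the phase by a non-negligible amount, while
# main + cubic matches far beyond the o(1) accuracy needed for e(.)
print(abs(exact - main), abs(exact - (main + cubic)))
```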

In a sequel to this paper, we will use a somewhat different method to reduce ${H}$ to a much smaller value of ${\log^{O(1)} X}$, but only if we replace the correlations ${\sum_{n \leq X} \Lambda(n) \Lambda(n+h)}$ by either ${\sum_{n \leq X} \Lambda(n) d_k(n+h)}$ or ${\sum_{n \leq X} d_k(n) d_l(n+h)}$, and also we now only save a ${o(1)}$ in the error term rather than ${O_A(\log^{-A} X)}$.

We have seen in previous notes that the operation of forming a Dirichlet series

$\displaystyle {\mathcal D} f(s) := \sum_n \frac{f(n)}{n^s}$

or twisted Dirichlet series

$\displaystyle {\mathcal D} f\chi(s) := \sum_n \frac{f(n) \chi(n)}{n^s}$

is an incredibly useful tool for questions in multiplicative number theory. Such series can be viewed as a multiplicative Fourier transform, since the functions ${n \mapsto \frac{1}{n^s}}$ and ${n \mapsto \frac{\chi(n)}{n^s}}$ are multiplicative characters.

Similarly, it turns out that the operation of forming an additive Fourier series

$\displaystyle \hat f(\theta) := \sum_n f(n) e(-n \theta),$

where ${\theta}$ lies on the (additive) unit circle ${{\bf R}/{\bf Z}}$ and ${e(\theta) := e^{2\pi i \theta}}$ is the standard additive character, is an incredibly useful tool for additive number theory, particularly when studying additive problems involving three or more variables taking values in sets such as the primes; the deployment of this tool is generally known as the Hardy-Littlewood circle method. (In the analytic number theory literature, the minus sign in the phase ${e(-n\theta)}$ is traditionally omitted, and what is denoted by ${\hat f(\theta)}$ here would be referred to instead by ${S_f(-\theta)}$, ${S(f;-\theta)}$ or just ${S(-\theta)}$.) We list some of the most classical problems in this area:

• (Even Goldbach conjecture) Is it true that every even natural number ${N}$ greater than two can be expressed as the sum ${p_1+p_2}$ of two primes?
• (Odd Goldbach conjecture) Is it true that every odd natural number ${N}$ greater than five can be expressed as the sum ${p_1+p_2+p_3}$ of three primes?
• (Waring problem) For each natural number ${k}$, what is the least natural number ${g(k)}$ such that every natural number ${N}$ can be expressed as the sum of ${g(k)}$ or fewer ${k^{th}}$ powers?
• (Asymptotic Waring problem) For each natural number ${k}$, what is the least natural number ${G(k)}$ such that every sufficiently large natural number ${N}$ can be expressed as the sum of ${G(k)}$ or fewer ${k^{th}}$ powers?
• (Partition function problem) For any natural number ${N}$, let ${p(N)}$ denote the number of representations of ${N}$ of the form ${N = n_1 + \dots + n_k}$ where ${k}$ and ${n_1 \geq \dots \geq n_k}$ are natural numbers. What is the asymptotic behaviour of ${p(N)}$ as ${N \rightarrow \infty}$?
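The partition function is easy to tabulate exactly by dynamic programming (a standard computation, included here purely as illustration); one can then compare against the Hardy-Ramanujan asymptotic ${p(N) \sim \frac{1}{4N\sqrt{3}} e^{\pi \sqrt{2N/3}}}$, which is the answer their circle method analysis gives to the last problem:

```python
import math

def partitions(limit):
    """p(0), ..., p(limit) by the standard dynamic programme over part sizes."""
    p = [0] * (limit + 1)
    p[0] = 1
    for part in range(1, limit + 1):
        for n in range(part, limit + 1):
            p[n] += p[n - part]
    return p

p = partitions(100)
hardy_ramanujan = math.exp(math.pi * math.sqrt(2 * 100 / 3)) / (4 * 100 * math.sqrt(3))
print(p[100], round(hardy_ramanujan))  # the asymptotic is only a few percent high at N = 100
```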

The Waring problem and its asymptotic version will not be discussed further here, save to note that the Vinogradov mean value theorem (Theorem 13 from Notes 5) and its variants are particularly useful for getting good bounds on ${G(k)}$; see for instance the ICM article of Wooley for recent progress on these problems. Similarly, the partition function problem was the original motivation of Hardy and Littlewood in introducing the circle method, but we will not discuss it further here; see e.g. Chapter 20 of Iwaniec-Kowalski for a treatment.

Instead, we will focus our attention on the odd Goldbach conjecture as our model problem. (The even Goldbach conjecture, which involves only two variables instead of three, is unfortunately not amenable to a circle method approach for a variety of reasons, unless the statement is replaced with something weaker, such as an averaged statement; see this previous blog post for further discussion. On the other hand, the methods here can obtain weaker versions of the even Goldbach conjecture, such as showing that “almost all” even numbers are the sum of two primes; see Exercise 34 below.) In particular, we will establish the following celebrated theorem of Vinogradov:

Theorem 1 (Vinogradov’s theorem) Every sufficiently large odd number ${N}$ is expressible as the sum of three primes.

Recently, the restriction that ${N}$ be sufficiently large was replaced by Helfgott with ${N > 5}$, thus establishing the odd Goldbach conjecture in full. This argument followed the same basic approach as Vinogradov (based on the circle method), but with various estimates replaced by "log-free" versions (analogous to the log-free zero-density theorems in Notes 7), combined with careful numerical optimisation of constants and also some numerical work on the even Goldbach problem and on the generalised Riemann hypothesis. We refer the reader to Helfgott's text for details.

We will in fact show the more precise statement:

Theorem 2 (Quantitative Vinogradov theorem) Let ${N \geq 2}$ be a natural number. Then

$\displaystyle \sum_{a,b,c: a+b+c=N} \Lambda(a) \Lambda(b) \Lambda(c) = G_3(N) \frac{N^2}{2} + O_A( N^2 \log^{-A} N )$

for any ${A>0}$, where

$\displaystyle G_3(N) = \prod_{p|N} (1-\frac{1}{(p-1)^2}) \times \prod_{p \not | N} (1 + \frac{1}{(p-1)^3}). \ \ \ \ \ (1)$

The implied constants are ineffective.

We dropped the hypothesis that ${N}$ is odd in Theorem 2, but note that ${G_3(N)}$ vanishes when ${N}$ is even. For odd ${N}$, we have

$\displaystyle 1 \ll G_3(N) \ll 1.$

Exercise 3 Show that Theorem 2 implies Theorem 1.

Unfortunately, due to the ineffectivity of the constants in Theorem 2 (a consequence of the reliance on the Siegel-Walfisz theorem in the proof of that theorem), one cannot quantify explicitly what “sufficiently large” means in Theorem 1 directly from Theorem 2. However, there is a modification of this theorem which gives effective bounds; see Exercise 32 below.

Exercise 4 Obtain a heuristic derivation of the main term ${G_3(N) \frac{N^2}{2}}$ using the modified Cramér model (Section 1 of Supplement 4).
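One can also see the main term in action numerically (an illustration of my own, with the arbitrary modest choice ${N = 4001}$ and a truncated Euler product for (1)); even at this small scale, the triple sum is within a reasonable factor of ${G_3(N) \frac{N^2}{2}}$:

```python
import math

def mangoldt_table(limit):
    """Lambda(n) for n <= limit via a smallest-prime-factor sieve."""
    spf = list(range(limit + 1))
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, limit + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0.0] * (limit + 1)
    for n in range(2, limit + 1):
        p, m = spf[n], n
        while m % p == 0:
            m //= p
        if m == 1:  # n is a prime power
            lam[n] = math.log(p)
    return lam

def G3(N, cut=10_000):
    """Truncated Euler product for the singular series (1)."""
    val = 1.0
    sieve = [True] * (cut + 1)
    for p in range(2, cut + 1):
        if sieve[p]:
            for m in range(p * p, cut + 1, p):
                sieve[m] = False
            if N % p == 0:
                val *= 1.0 - 1.0 / (p - 1) ** 2
            else:
                val *= 1.0 + 1.0 / (p - 1) ** 3
    return val

N = 4001  # an arbitrary moderately large odd number
lam = mangoldt_table(N)
total = 0.0
for a in range(1, N - 1):
    if lam[a]:
        for b in range(1, N - a):
            if lam[b]:
                total += lam[a] * lam[b] * lam[N - a - b]

predicted = G3(N) * N * N / 2
print(f"sum = {total:.0f}, main term = {predicted:.0f}, ratio = {total / predicted:.3f}")
```

Note also that the truncated product vanishes identically for even ${N}$, since the factor at ${p=2}$ is ${1 - \frac{1}{(2-1)^2} = 0}$.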

To prove Theorem 2, we consider the more general problem of estimating sums of the form

$\displaystyle \sum_{a,b,c \in {\bf Z}: a+b+c=N} f(a) g(b) h(c)$

for various integers ${N}$ and functions ${f,g,h: {\bf Z} \rightarrow {\bf C}}$, which we will take to be finitely supported to avoid issues of convergence.

Suppose that ${f,g,h}$ are supported on ${\{1,\dots,N\}}$; for simplicity, let us first assume the pointwise bound ${|f(n)|, |g(n)|, |h(n)| \ll 1}$ for all ${n}$. (This simple case will not cover the case in Theorem 2, when ${f,g,h}$ are truncated versions of the von Mangoldt function ${\Lambda}$, but will serve as a warmup to that case.) Then we have the trivial upper bound

$\displaystyle \sum_{a,b,c \in {\bf Z}: a+b+c=N} f(a) g(b) h(c) \ll N^2. \ \ \ \ \ (2)$

A basic observation is that this upper bound is attainable if ${f,g,h}$ all “pretend” to behave like the same additive character ${n \mapsto e(\theta n)}$ for some ${\theta \in {\bf R}/{\bf Z}}$. For instance, if ${f(n)=g(n)=h(n) = e(\theta n) 1_{n \leq N}}$, then we have ${f(a)g(b)h(c) = e(\theta N)}$ when ${a+b+c=N}$, and then it is not difficult to show that

$\displaystyle \sum_{a,b,c \in {\bf Z}: a+b+c=N} f(a) g(b) h(c) = (\frac{1}{2}+o(1)) e(\theta N) N^2$

as ${N \rightarrow \infty}$.

The key to the success of the circle method lies in the converse of the above statement: the only way that the trivial upper bound (2) comes close to being sharp is when ${f,g,h}$ all correlate with the same character ${n \mapsto e(\theta n)}$, or in other words ${\hat f(\theta), \hat g(\theta), \hat h(\theta)}$ are simultaneously large. This converse is largely captured by the following two identities:

Exercise 5 Let ${f,g,h: {\bf Z} \rightarrow {\bf C}}$ be finitely supported functions. Then for any natural number ${N}$, show that

$\displaystyle \sum_{a,b,c: a+b+c=N} f(a) g(b) h(c) = \int_{{\bf R}/{\bf Z}} \hat f(\theta) \hat g(\theta) \hat h(\theta) e(\theta N)\ d\theta \ \ \ \ \ (3)$

and

$\displaystyle \sum_n |f(n)|^2 = \int_{{\bf R}/{\bf Z}} |\hat f(\theta)|^2\ d\theta.$
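Both identities are easy to verify numerically. The following sketch checks a discrete analogue of (3): for sequences supported on ${\{1,\dots,N\}}$ one has ${3 \leq a+b+c \leq 3N}$, so sampling ${\theta}$ at the ${M}$-th roots of unity with ${M = 3N+1}$ recovers the circle integral exactly. The random test sequences are, of course, just for illustration:

```python
import cmath
import random

def e(x):
    return cmath.exp(2j * cmath.pi * x)

N = 30
random.seed(0)
# random finitely supported test sequences on {1, ..., N} (index 0 is a placeholder)
f = [0.0] + [random.random() for _ in range(N)]
g = [0.0] + [random.random() for _ in range(N)]
h = [0.0] + [random.random() for _ in range(N)]

# left-hand side of (3): direct sum over a + b + c = N
direct = sum(
    f[a] * g[b] * h[N - a - b] for a in range(1, N - 1) for b in range(1, N - a)
)

# right-hand side: Riemann sum over theta = j/M, exact since M = 3N + 1 > 3N
M = 3 * N + 1

def hat(seq, theta):
    return sum(c * e(-n * theta) for n, c in enumerate(seq))

integral = (
    sum(hat(f, j / M) * hat(g, j / M) * hat(h, j / M) * e(j * N / M) for j in range(M))
    / M
)
print(direct, integral.real)  # the two agree up to rounding error
```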

The traditional approach to using the circle method to compute sums such as ${\sum_{a,b,c: a+b+c=N} f(a) g(b) h(c)}$ proceeds by invoking (3) to express this sum as an integral over the unit circle, then dividing the unit circle into “major arcs” where ${\hat f(\theta), \hat g(\theta),\hat h(\theta)}$ are large but computable with high precision, and “minor arcs” where one has estimates to ensure that ${\hat f(\theta), \hat g(\theta),\hat h(\theta)}$ are small in both ${L^\infty}$ and ${L^2}$ senses. For functions ${f,g,h}$ of number-theoretic significance, such as truncated von Mangoldt functions, the “major arcs” typically consist of those ${\theta}$ that are close to a rational number ${\frac{a}{q}}$ with ${q}$ not too large, and the “minor arcs” consist of the remaining portions of the circle. One then obtains lower bounds on the contributions of the major arcs, and upper bounds on the contribution of the minor arcs, in order to get good lower bounds on ${\sum_{a,b,c: a+b+c=N} f(a) g(b) h(c)}$.

This traditional approach is covered in many places, such as this text of Vaughan. We will emphasise in this set of notes a slightly different perspective on the circle method, coming from recent developments in additive combinatorics; this approach does not quite give the sharpest quantitative estimates, but it allows for easier generalisation to more combinatorial contexts, for instance when replacing the primes by dense subsets of the primes, or replacing the equation ${a+b+c=N}$ with some other equation or system of equations.

From Exercise 5 and Hölder’s inequality, we immediately obtain

Corollary 6 Let ${f,g,h: {\bf Z} \rightarrow {\bf C}}$ be finitely supported functions. Then for any natural number ${N}$, we have

$\displaystyle |\sum_{a,b,c: a+b+c=N} f(a) g(b) h(c)| \leq (\sum_n |f(n)|^2)^{1/2} (\sum_n |g(n)|^2)^{1/2}$

$\displaystyle \times \sup_\theta |\sum_n h(n) e(n\theta)|.$

Similarly for permutations of the ${f,g,h}$.

In the case when ${f,g,h}$ are supported on ${[1,N]}$ and bounded by ${O(1)}$, this corollary tells us that ${\sum_{a,b,c: a+b+c=N} f(a) g(b) h(c)}$ is ${o(N^2)}$ whenever one has ${\sum_n h(n) e(n\theta) = o(N)}$ uniformly in ${\theta}$, and similarly for permutations of ${f,g,h}$. From this and the triangle inequality, we obtain the following conclusion: if ${f}$ is supported on ${[1,N]}$ and bounded by ${O(1)}$, and ${f}$ is Fourier-approximated by another function ${g}$ supported on ${[1,N]}$ and bounded by ${O(1)}$ in the sense that

$\displaystyle \sum_n f(n) e(n\theta) = \sum_n g(n) e(n\theta) + o(N)$

uniformly in ${\theta}$, then we have

$\displaystyle \sum_{a,b,c: a+b+c=N} f(a) f(b) f(c) = \sum_{a,b,c: a+b+c=N} g(a) g(b) g(c) + o(N^2). \ \ \ \ \ (4)$

Thus, one possible strategy for estimating the sum ${\sum_{a,b,c: a+b+c=N} f(a) f(b) f(c)}$ is to effectively replace (or "model") ${f}$ by a simpler function ${g}$ which Fourier-approximates ${f}$, in the sense that the exponential sums ${\sum_n f(n) e(n\theta), \sum_n g(n) e(n\theta)}$ agree up to error ${o(N)}$. For instance:

Exercise 7 Let ${N}$ be a natural number, and let ${A}$ be a random subset of ${\{1,\dots,N\}}$, chosen so that each ${n \in \{1,\dots,N\}}$ has an independent probability of ${1/2}$ of lying in ${A}$.

• (i) If ${f := 1_A}$ and ${g := \frac{1}{2} 1_{[1,N]}}$, show that with probability ${1-o(1)}$ as ${N \rightarrow \infty}$, one has ${\sum_n f(n) e(n\theta) = \sum_n g(n) e(n\theta) + o(N)}$ uniformly in ${\theta}$. (Hint: for any fixed ${\theta}$, this can be accomplished with quite a good probability (e.g. ${1-o(N^{-2})}$) using a concentration of measure inequality, such as Hoeffding’s inequality. To obtain the uniformity in ${\theta}$, round ${\theta}$ to the nearest multiple of (say) ${1/N^2}$ and apply the union bound).
• (ii) Show that with probability ${1-o(1)}$, one has ${(\frac{1}{16}+o(1))N^2}$ representations of the form ${N=a+b+c}$ with ${a,b,c \in A}$ (with ${(a,b,c)}$ treated as an ordered triple, rather than an unordered one).
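One can simulate part (ii) directly (an illustration with an arbitrary seed and the modest choice ${N = 2000}$, not part of the exercise): the number of ordered representations concentrates around ${N^2/16}$, since each of the roughly ${N^2/2}$ solutions of ${a+b+c=N}$ survives with probability ${1/8}$.

```python
import random

random.seed(12345)  # arbitrary seed, for reproducibility
N = 2000
# in_A[n] records whether n lies in the random set A, for 1 <= n <= N
in_A = [False] + [random.random() < 0.5 for _ in range(N)]

count = 0
for a in range(1, N - 1):
    if in_A[a]:
        for b in range(1, N - a):
            if in_A[b] and in_A[N - a - b]:
                count += 1

print(count, "representations; heuristic prediction", N * N / 16)
```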

In the case when ${f}$ is something like the truncated von Mangoldt function ${\Lambda(n) 1_{n \leq N}}$, the quantity ${\sum_n |f(n)|^2}$ is of size ${O( N \log N)}$ rather than ${O( N )}$. This costs us a logarithmic factor in the above analysis; however, we can still conclude that we have the approximation (4) whenever ${g}$ is another sequence with ${\sum_n |g(n)|^2 \ll N \log N}$ such that one has the improved Fourier approximation

$\displaystyle \sum_n f(n) e(n\theta) = \sum_n g(n) e(n\theta) + o(\frac{N}{\log N}) \ \ \ \ \ (5)$

uniformly in ${\theta}$. (Later on we will obtain a “log-free” version of this implication in which one does not need to gain a factor of ${\frac{1}{\log N}}$ in the error term.)

This suggests a strategy for proving Vinogradov’s theorem: find an approximant ${g}$ to some suitable truncation ${f}$ of the von Mangoldt function (e.g. ${f(n) = \Lambda(n) 1_{n \leq N}}$ or ${f(n) = \Lambda(n) \eta(n/N)}$) which obeys the Fourier approximation property (5), and such that the expression ${\sum_{a+b+c=N} g(a) g(b) g(c)}$ is easily computable. It turns out that there are a number of good options for such an approximant ${g}$. One of the quickest ways to obtain such an approximation (which is used in Chapter 19 of Iwaniec and Kowalski) is to start with the standard identity ${\Lambda = -\mu L * 1}$, that is to say

$\displaystyle \Lambda(n) = - \sum_{d|n} \mu(d) \log d,$

and obtain an approximation by truncating ${d}$ to be less than some threshold ${R}$ (which, in practice, would be a small power of ${N}$):

$\displaystyle \Lambda(n) \approx - \sum_{d \leq R: d|n} \mu(d) \log d. \ \ \ \ \ (6)$

Thus, for instance, if ${f(n) = \Lambda(n) 1_{n \leq N}}$, the approximant ${g}$ would be taken to be

$\displaystyle g(n) := - \sum_{d \leq R: d|n} \mu(d) \log d 1_{n \leq N}.$

One could also use the slightly smoother approximation

$\displaystyle \Lambda(n) \approx \sum_{d \leq R: d|n} \mu(d) \log \frac{R}{d} \ \ \ \ \ (7)$

in which case we would take

$\displaystyle g(n) := \sum_{d \leq R: d|n} \mu(d) \log \frac{R}{d} 1_{n \leq N}.$
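A pleasant feature of the approximant (7) (and a quick way to see why it is reasonable) is that the truncation is lossless for small ${n}$: since ${\sum_{d|n} \mu(d) = 0}$ for ${n > 1}$ and ${\Lambda(n) = -\sum_{d|n} \mu(d) \log d}$, one has ${\sum_{d \leq R: d|n} \mu(d) \log \frac{R}{d} = \Lambda(n)}$ exactly whenever ${1 < n \leq R}$. A small sketch of this, with the illustrative choice ${R = 50}$:

```python
import math

def mobius_table(limit):
    """mu(n) for n <= limit via a sieve."""
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(2 * p, limit + 1, p):
                is_prime[m] = False
            for m in range(p, limit + 1, p):
                mu[m] *= -1
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0
    return mu

def mangoldt(n):
    """Lambda(n) by trial division."""
    for p in range(2, n + 1):
        if n % p == 0:
            m = n
            while m % p == 0:
                m //= p
            return math.log(p) if m == 1 else 0.0
    return 0.0

R = 50  # illustrative truncation threshold
mu = mobius_table(R)

def g(n):
    """The truncated divisor sum (7)."""
    return sum(mu[d] * math.log(R / d) for d in range(1, R + 1) if n % d == 0)

for n in range(2, R + 1):
    assert abs(g(n) - mangoldt(n)) < 1e-9  # exact agreement in the range n <= R
print("g agrees with Lambda for 2 <= n <= 50; past the truncation, e.g. g(97) =", g(97))
```

Beyond the truncation range the agreement is only approximate; for instance a prime ${p > R}$ contributes ${g(p) = \log R}$ rather than ${\log p}$.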

The function ${g}$ is somewhat similar to the continuous Selberg sieve weights studied in Notes 4, with the main difference being that we did not square the divisor sum as we will not need to take ${g}$ to be non-negative. As long as ${R}$ is not too large, one can use some sieve-like computations to compute expressions like ${\sum_{a+b+c=N} g(a)g(b)g(c)}$ quite accurately. The approximation (5) can be justified by using a nice estimate of Davenport that exemplifies the Mobius pseudorandomness heuristic from Supplement 4:

Theorem 8 (Davenport’s estimate) For any ${A>0}$ and ${x \geq 2}$, we have

$\displaystyle \sum_{n \leq x} \mu(n) e(\theta n) \ll_A x \log^{-A} x$

uniformly for all ${\theta \in {\bf R}/{\bf Z}}$. The implied constants are ineffective.

This estimate will be proven by splitting into two cases. In the "major arc" case when ${\theta}$ is close to a rational ${a/q}$ with ${q}$ small (of size ${O(\log^{O(1)} x)}$ or so), this estimate will be a consequence of the Siegel-Walfisz theorem (from Notes 2); it is the application of this theorem that is responsible for the ineffective constants. In the remaining "minor arc" case, one proceeds by using a combinatorial identity (such as Vaughan's identity) to express the sum ${\sum_{n \leq x} \mu(n) e(\theta n)}$ in terms of bilinear sums of the form ${\sum_n \sum_m a_n b_m e(\theta nm)}$, and use the Cauchy-Schwarz inequality and the minor arc nature of ${\theta}$ to obtain a gain in this case. This will all be done below the fold. We will also use (a rigorous version of) the approximation (6) (or (7)) to establish Vinogradov's theorem.

A somewhat different looking approximation for the von Mangoldt function that also turns out to be quite useful is

$\displaystyle \Lambda(n) \approx \sum_{q \leq Q} \sum_{a \in ({\bf Z}/q{\bf Z})^\times} \frac{\mu(q)}{\phi(q)} e( \frac{an}{q} ) \ \ \ \ \ (8)$

for some ${Q}$ that is not too large compared to ${N}$. The methods used to establish Theorem 8 can also establish a Fourier approximation that makes (8) precise, and which can yield an alternate proof of Vinogradov’s theorem; this will be done below the fold.

The approximation (8) can be written in a way that makes it more similar to (7):

Exercise 9 Show that the right-hand side of (8) can be rewritten as

$\displaystyle \sum_{d \leq Q: d|n} \mu(d) \rho_d$

where

$\displaystyle \rho_d := \frac{d}{\phi(d)} \sum_{m \leq Q/d: (m,d)=1} \frac{\mu^2(m)}{\phi(m)}.$

Then, show the inequalities

$\displaystyle \sum_{m \leq Q/d} \frac{\mu^2(m)}{\phi(m)} \leq \rho_d \leq \sum_{m \leq Q} \frac{\mu^2(m)}{\phi(m)}$

and conclude that

$\displaystyle \log \frac{Q}{d} - O(1) \leq \rho_d \leq \log Q + O(1).$

(Hint: for the latter estimate, use Theorem 27 of Notes 1.)
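The rewriting in Exercise 9 is an exact identity, and can be confirmed numerically. The following is a small sketch with the arbitrary choice ${Q = 12}$; the brute-force ${\mu}$, ${\phi}$ helpers are just for illustration:

```python
import cmath, math

def e(x):
    return cmath.exp(2j * cmath.pi * x)

def phi(n):
    return sum(1 for a in range(1, n + 1) if math.gcd(a, n) == 1)

def mu(n):
    val, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            val = -val
        p += 1
    return -val if n > 1 else val

Q = 12  # illustrative truncation

def double_sum(n):
    """Right-hand side of (8): sum over q <= Q and a coprime to q."""
    return sum(
        mu(q) / phi(q) * e(a * n / q)
        for q in range(1, Q + 1)
        for a in range(1, q + 1)
        if math.gcd(a, q) == 1
    )

def rho(d):
    return d / phi(d) * sum(
        mu(m) ** 2 / phi(m) for m in range(1, Q // d + 1) if math.gcd(m, d) == 1
    )

def divisor_sum(n):
    """The rewritten form: sum over d <= Q dividing n of mu(d) * rho_d."""
    return sum(mu(d) * rho(d) for d in range(1, Q + 1) if n % d == 0)

for n in range(1, 40):
    assert abs(double_sum(n) - divisor_sum(n)) < 1e-9
print("Exercise 9 identity verified for Q = 12 and n = 1..39")
```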

The coefficients ${\rho_d}$ in the above exercise are quite similar to optimised Selberg sieve coefficients (see Section 2 of Notes 4).

Another approximation to ${\Lambda}$, related to the modified Cramér random model (see Model 10 of Supplement 4) is

$\displaystyle \Lambda(n) \approx \frac{W}{\phi(W)} 1_{(n,W)=1} \ \ \ \ \ (9)$

where ${W := \prod_{p \leq w} p}$ and ${w}$ is a slowly growing function of ${N}$ (e.g. ${w = \log\log N}$); a closely related approximation is

$\displaystyle \frac{\phi(W)}{W} \Lambda(Wn+b) \approx 1 \ \ \ \ \ (10)$

for ${W,w}$ as above and ${1 \leq b \leq W}$ coprime to ${W}$. These approximations (closely related to a device known as the “${W}$-trick”) are not as quantitatively accurate as the previous approximations, but can still suffice to establish Vinogradov’s theorem, and also to count many other linear patterns in the primes or subsets of the primes (particularly if one injects some additional tools from additive combinatorics, and specifically the inverse conjecture for the Gowers uniformity norms); see this paper of Ben Green and myself for more discussion (and this more recent paper of Shao for an analysis of this approach in the context of Vinogradov-type theorems). The following exercise expresses the approximation (9) in a form similar to the previous approximation (8):

Exercise 10 With ${W}$ as above, show that

$\displaystyle \frac{W}{\phi(W)} 1_{(n,W)=1} = \sum_{q|W} \sum_{a \in ({\bf Z}/q{\bf Z})^\times} \frac{\mu(q)}{\phi(q)} e( \frac{an}{q} )$

for all natural numbers ${n}$.
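This identity is again exact and easy to test numerically, e.g. with ${w = 5}$, so that ${W = 30}$ (a quick sketch of my own, not part of the exercise):

```python
import cmath, math

def e(x):
    return cmath.exp(2j * cmath.pi * x)

def phi(n):
    return sum(1 for a in range(1, n + 1) if math.gcd(a, n) == 1)

def mu(n):
    val, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            val = -val
        p += 1
    return -val if n > 1 else val

W = 2 * 3 * 5  # W = prod_{p <= w} p with w = 5
divs = [q for q in range(1, W + 1) if W % q == 0]

for n in range(1, 2 * W + 1):
    rhs = sum(
        mu(q) / phi(q) * e(a * n / q)
        for q in divs
        for a in range(1, q + 1)
        if math.gcd(a, q) == 1
    )
    lhs = W / phi(W) if math.gcd(n, W) == 1 else 0.0
    assert abs(rhs - lhs) < 1e-9
print("Exercise 10 identity checked for W = 30 and n = 1..60")
```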

One of the most basic methods in additive number theory is the Hardy-Littlewood circle method. This method is based on expressing a quantity of interest to additive number theory, such as the number of representations ${f_3(x)}$ of an integer ${x}$ as the sum of three primes ${x = p_1+p_2+p_3}$, as a Fourier-analytic integral over the unit circle ${{\bf R}/{\bf Z}}$ involving exponential sums such as

$\displaystyle S(x,\alpha) := \sum_{p \leq x} e( \alpha p) \ \ \ \ \ (1)$

where the sum here ranges over all primes up to ${x}$, and ${e(x) := e^{2\pi i x}}$. For instance, the expression ${f_3(x)}$ mentioned earlier can be written as

$\displaystyle f_3(x) = \int_{{\bf R}/{\bf Z}} S(x,\alpha)^3 e(-x\alpha)\ d\alpha. \ \ \ \ \ (2)$

The strategy is then to obtain sufficiently accurate bounds on exponential sums such as ${S(x,\alpha)}$ in order to obtain non-trivial bounds on quantities such as ${f_3(x)}$. For instance, if one can show that ${f_3(x)>0}$ for all odd integers ${x}$ greater than some given threshold ${x_0}$, this implies that all odd integers greater than ${x_0}$ are expressible as the sum of three primes, thus establishing all but finitely many instances of the odd Goldbach conjecture.
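As a small brute-force illustration of the quantity in question (my own check, over an arbitrary small range, and of course far short of what the circle method is needed for), one can verify directly that ${f_3(x) > 0}$ for every odd ${7 \leq x \leq 500}$:

```python
def primes_up_to(limit):
    sieve = bytearray([1]) * (limit + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [n for n in range(limit + 1) if sieve[n]]

x_max = 500  # arbitrary small range for the brute-force check
plist = primes_up_to(x_max)
pset = set(plist)

def f3(x):
    """Ordered representations x = p1 + p2 + p3, counted by brute force."""
    return sum(
        1
        for p1 in plist
        if p1 < x
        for p2 in plist
        if p1 + p2 < x and (x - p1 - p2) in pset
    )

assert all(f3(x) > 0 for x in range(7, x_max + 1, 2))
print("f_3(x) > 0 for every odd 7 <= x <= 500; e.g. f_3(7) =", f3(7))
```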

Remark 1 In practice, it can be more efficient to work with smoother sums than the partial sum (1), for instance by replacing the cutoff ${p \leq x}$ with a smoother cutoff ${\chi(p/x)}$ for a suitable choice of cutoff function ${\chi}$, or by replacing the restriction of the summation to primes by a more analytically tractable weight, such as the von Mangoldt function ${\Lambda(n)}$. However, these improvements to the circle method are primarily technical in nature and do not have much impact on the heuristic discussion in this post, so we will not emphasise them here. One can also certainly use the circle method to study additive combinations of numbers from other sets than the set of primes, but we will restrict attention to additive combinations of primes for sake of discussion, as it is historically one of the most studied sets in additive number theory.

In many cases, it turns out that one can get fairly precise evaluations on sums such as ${S(x,\alpha)}$ in the major arc case, when ${\alpha}$ is close to a rational number ${a/q}$ with small denominator ${q}$, by using tools such as the prime number theorem in arithmetic progressions. For instance, the prime number theorem itself tells us that

$\displaystyle S(x,0) \approx \frac{x}{\log x}$

and the prime number theorem in residue classes modulo ${q}$ suggests more generally that

$\displaystyle S(x,\frac{a}{q}) \approx \frac{\mu(q)}{\phi(q)} \frac{x}{\log x}$

when ${q}$ is small and ${\alpha}$ is close to ${\frac{a}{q}}$, basically thanks to the elementary calculation that the phase ${e(an/q)}$ has an average value of ${\mu(q)/\phi(q)}$ when ${n}$ is uniformly distributed amongst the residue classes modulo ${q}$ that are coprime to ${q}$. Quantifying the precise error in these approximations can be quite challenging, though, unless one assumes powerful hypotheses such as the Generalised Riemann Hypothesis.
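The elementary calculation mentioned here is the Ramanujan sum evaluation ${\sum_{n \in ({\bf Z}/q{\bf Z})^\times} e(an/q) = \mu(q)}$ when ${(a,q)=1}$, which is easy to confirm by brute force (the ${\mu}$, ${\phi}$ helpers below are just for illustration):

```python
import cmath, math

def e(x):
    return cmath.exp(2j * cmath.pi * x)

def phi(q):
    return sum(1 for b in range(1, q + 1) if math.gcd(b, q) == 1)

def mu(n):
    val, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            val = -val
        p += 1
    return -val if n > 1 else val

# average of e(an/q) over the phi(q) coprime residues n mod q equals mu(q)/phi(q)
for q in range(1, 30):
    for a in range(1, q + 1):
        if math.gcd(a, q) == 1:
            avg = sum(
                e(a * n / q) for n in range(1, q + 1) if math.gcd(n, q) == 1
            ) / phi(q)
            assert abs(avg - mu(q) / phi(q)) < 1e-9
print("average of e(an/q) over coprime residues equals mu(q)/phi(q) for all q < 30")
```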

In the minor arc case when ${\alpha}$ is not close to a rational ${a/q}$ with small denominator, one no longer expects to have such precise control on the value of ${S(x,\alpha)}$, due to the “pseudorandom” fluctuations of the quantity ${e(\alpha p)}$. Using the standard probabilistic heuristic (supported by results such as the central limit theorem or Chernoff’s inequality) that the sum of ${k}$ “pseudorandom” phases should fluctuate randomly and be of typical magnitude ${\sim \sqrt{k}}$, one expects upper bounds of the shape

$\displaystyle |S(x,\alpha)| \lessapprox \sqrt{\frac{x}{\log x}} \ \ \ \ \ (3)$

for “typical” minor arc ${\alpha}$. Indeed, a simple application of the Plancherel identity, followed by the prime number theorem, reveals that

$\displaystyle \int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2\ d\alpha \sim \frac{x}{\log x} \ \ \ \ \ (4)$

which is consistent with (though weaker than) the above heuristic. In practice, though, we are unable to rigorously establish bounds anywhere near as strong as (3); upper bounds such as ${x^{4/5+o(1)}}$ are far more typical.

Because one only expects to have upper bounds on ${|S(x,\alpha)|}$, rather than asymptotics, in the minor arc case, one cannot realistically hope to make much use of phases such as ${e(-x\alpha)}$ for the minor arc contribution to integrals such as (2) (at least if one is working with a single, deterministic, value of ${x}$, so that averaging in ${x}$ is unavailable). In particular, from upper bound information alone, it is difficult to avoid the “conspiracy” that the magnitude ${|S(x,\alpha)|^3}$ oscillates in sympathetic resonance with the phase ${e(-x\alpha)}$, thus essentially eliminating almost all of the possible gain in the bounds that could arise from exploiting cancellation from that phase. Thus, one basically has little option except to use the triangle inequality to control the portion of the integral on the minor arc region ${\Omega_{minor}}$:

$\displaystyle |\int_{\Omega_{minor}} S(x,\alpha)^3 e(-x\alpha)\ d\alpha| \leq \int_{\Omega_{minor}} |S(x,\alpha)|^3\ d\alpha.$

Despite this handicap, though, it is still possible to get enough bounds on both the major and minor arc contributions of integrals such as (2) to obtain non-trivial lower bounds on quantities such as ${f(x)}$, at least when ${x}$ is large. In particular, this sort of method can be developed to give a proof of Vinogradov’s famous theorem that every sufficiently large odd integer ${x}$ is the sum of three primes; my own result that all odd numbers greater than ${1}$ can be expressed as the sum of at most five primes is also proven by essentially the same method (modulo a number of minor refinements, and taking advantage of some numerical work on both the Goldbach problems and on the Riemann hypothesis). It is certainly conceivable that some further variant of the circle method (again combined with a suitable amount of numerical work, such as that of numerically establishing zero-free regions for the Generalised Riemann Hypothesis) can be used to settle the full odd Goldbach conjecture; indeed, under the assumption of the Generalised Riemann Hypothesis, this was already achieved by Deshouillers, Effinger, te Riele, and Zinoviev back in 1997. I am optimistic that an unconditional version of this result will be possible within a few years or so, though I should say that there are still significant technical challenges to doing so, and some clever new ideas will probably be needed to get either the Vinogradov-style argument or numerical verification to work unconditionally for the three-primes problem at medium-sized ranges of ${x}$, such as ${x \sim 10^{50}}$. (But the intermediate problem of representing all even natural numbers as the sum of at most four primes looks somewhat closer to being feasible, though even this would require some substantially new and non-trivial ideas beyond what is in my five-primes paper.)

However, I (like many other analytic number theorists) am considerably more skeptical that the circle method can be applied to the even Goldbach problem of representing a large even number ${x}$ as the sum ${x = p_1 + p_2}$ of two primes, or the similar (and marginally simpler) twin prime conjecture of finding infinitely many pairs of twin primes, i.e. finding infinitely many representations ${2 = p_1 - p_2}$ of ${2}$ as the difference of two primes. At first glance, the situation looks tantalisingly similar to that of the Vinogradov theorem: to settle the even Goldbach problem for large ${x}$, one has to find a non-trivial lower bound for the quantity

$\displaystyle f_2(x) = \int_{{\bf R}/{\bf Z}} S(x,\alpha)^2 e(-x\alpha)\ d\alpha \ \ \ \ \ (5)$

for sufficiently large ${x}$, as this quantity ${f_2(x)}$ is also the number of ways to represent ${x}$ as the sum ${x=p_1+p_2}$ of two primes ${p_1,p_2}$. Similarly, to settle the twin prime problem, it would suffice to obtain a lower bound for the quantity

$\displaystyle \tilde f_2(x) = \int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2 e(-2\alpha)\ d\alpha \ \ \ \ \ (6)$

that goes to infinity as ${x \rightarrow \infty}$, as this quantity ${\tilde f_2(x)}$ is also the number of ways to represent ${2}$ as the difference ${2 = p_1-p_2}$ of two primes less than or equal to ${x}$.
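The combinatorial identity behind (6) can likewise be verified numerically for small ${x}$, again with ${S(x,\alpha)}$ taken to be the sum ${\sum_{p \leq x} e(p\alpha)}$ over primes. This illustrates only the Fourier bookkeeping, of course, and says nothing about lower bounds; the specific parameters below are mine:

```python
import cmath

def primes_up_to(x):
    """Sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [n for n in range(2, x + 1) if sieve[n]]

x = 300
primes = primes_up_to(x)
prime_set = set(primes)

def S(alpha):
    return sum(cmath.exp(2j * cmath.pi * p * alpha) for p in primes)

# |S|^2 e(-2 alpha) has frequencies p1 - p2 - 2 lying in (-N, N) for
# N = x + 1, so averaging over N sample points computes (6) exactly.
N = x + 1
tilde_f2 = sum(abs(S(j / N)) ** 2 * cmath.exp(-2j * cmath.pi * 2 * (j / N))
               for j in range(N)) / N
# Direct count of representations 2 = p1 - p2 with both primes <= x.
twin_count = sum(1 for p in primes if p + 2 in prime_set)
```

The computed integral agrees exactly (up to roundoff) with the direct count of twin prime pairs up to ${x}$.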

In principle, one can achieve either of these two objectives by a sufficiently fine level of control on the exponential sums ${S(x,\alpha)}$. Indeed, there is a trivial (and uninteresting) way to take any (hypothetical) solution of either the asymptotic even Goldbach problem or the twin prime problem and (artificially) convert it to a proof that “uses the circle method”; one simply begins with the quantity ${f_2(x)}$ or ${\tilde f_2(x)}$, expresses it in terms of ${S(x,\alpha)}$ using (5) or (6), then uses (5) or (6) again to convert these integrals back into the combinatorial expression of counting solutions to ${x=p_1+p_2}$ or ${2=p_1-p_2}$, and finally invokes the hypothetical solution to the given problem to obtain the required lower bounds on ${f_2(x)}$ or ${\tilde f_2(x)}$.

Of course, this would not qualify as a genuine application of the circle method by any reasonable measure. One can then ask the more refined question of whether one could hope to get non-trivial lower bounds on ${f_2(x)}$ or ${\tilde f_2(x)}$ (or similar quantities) purely from the upper and lower bounds on ${S(x,\alpha)}$ or similar quantities (and from various ${L^p}$ type norms on such quantities, such as the ${L^2}$ bound (4)). Of course, we do not yet know what the strongest possible upper and lower bounds on ${S(x,\alpha)}$ are (otherwise we would already have made progress on major conjectures such as the Riemann hypothesis); but we can make plausible heuristic conjectures on such bounds. And this is enough to make the following heuristic conclusions:

• (i) For “binary” problems such as computing (5), (6), the contribution of the minor arcs potentially dominates that of the major arcs (if all one is given about the minor arc sums is magnitude information), in contrast to “ternary” problems such as computing (2), in which it is the major arc contribution which is absolutely dominant.
• (ii) Upper and lower bounds on the magnitude of ${S(x,\alpha)}$ are not sufficient, by themselves, to obtain non-trivial bounds on (5), (6) unless these bounds are extremely tight (within a relative error of ${O(1/\log x)}$ or better); but
• (iii) obtaining such tight bounds is a problem of comparable difficulty to the original binary problems.
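As a rough illustration of (i) (a back-of-the-envelope computation, not a rigorous argument): for the ternary integral (2), the major arcs are expected to contribute a main term of size roughly ${x^2/\log^3 x}$, while the triangle inequality combined with the heuristic (3) and the ${L^2}$ bound (4) controls the minor arc contribution by

$\displaystyle \sup_{\alpha \in \Omega_{minor}} |S(x,\alpha)| \cdot \int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2\ d\alpha \lessapprox \left(\frac{x}{\log x}\right)^{3/2},$

which is negligible compared to the main term. For the binary integrals (5), (6), by contrast, the major arcs are only expected to contribute roughly ${x/\log^2 x}$, while almost all of the ${L^2}$ mass ${\sim x/\log x}$ in (4) resides on the minor arcs; the analogous triangle inequality bound therefore exceeds the expected main term by a factor of about ${\log x}$, which is the source of the tightness requirement in (ii).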

I will provide some justification for these conclusions below the fold; they are reasonably well known “folklore” to many researchers in the field, but it seems that they are rarely made explicit in the literature (in part because these arguments are, by their nature, heuristic instead of rigorous) and I have been asked about them from time to time, so I decided to try to write them down here.

In view of the above conclusions, it seems that the best one can hope to do by using the circle method for the twin prime or even Goldbach problems is to reformulate such problems into a statement of roughly comparable difficulty to the original problem, even if one assumes powerful conjectures such as the Generalised Riemann Hypothesis (which gives very precise control on major arc exponential sums, but not on minor arc ones). These are not rigorous conclusions – after all, we have already seen that one can always artificially insert the circle method into any viable approach to these problems – but they do strongly suggest that one needs a method other than the circle method in order to fully solve either of these two problems. I do not know what such a method would be, though I can give some heuristic objections to some of the other popular methods used in additive number theory (such as sieve methods, or more recently the use of inverse theorems); this will be done at the end of this post.

I’ve just uploaded to the arXiv my paper “Every odd number greater than 1 is the sum of at most five primes“, submitted to Mathematics of Computation. The main result of the paper is as stated in the title, and is in the spirit of (though significantly weaker than) the even Goldbach conjecture (every even natural number is the sum of at most two primes) and odd Goldbach conjecture (every odd natural number greater than 1 is the sum of at most three primes). It also improves on a result of Ramaré that every even natural number is the sum of at most six primes. This result had previously also been established by Kaniecki under the additional assumption of the Riemann hypothesis, so one can view the main result here as an unconditional version of Kaniecki’s result.

The method used is the Hardy-Littlewood circle method, which was for instance also used to prove Vinogradov’s theorem that every sufficiently large odd number is the sum of three primes. Let’s quickly recall how this argument works. It is convenient to use a proxy for the primes, such as the von Mangoldt function ${\Lambda}$, which is mostly supported on the primes. To represent a large number ${x}$ as the sum of three primes, it suffices to obtain a good lower bound for the sum

$\displaystyle \sum_{n_1,n_2,n_3: n_1+n_2+n_3=x} \Lambda(n_1) \Lambda(n_2) \Lambda(n_3).$

By Fourier analysis, one can rewrite this sum as an integral

$\displaystyle \int_{{\bf R}/{\bf Z}} S(x,\alpha)^3 e(-x\alpha)\ d\alpha$

where

$\displaystyle S(x,\alpha) := \sum_{n \leq x} \Lambda(n) e(n\alpha)$

and ${e(\theta) :=e^{2\pi i \theta}}$. To control this integral, one then needs good bounds on ${S(x,\alpha)}$ for various values of ${\alpha}$. To do this, one first approximates ${\alpha}$ by a rational ${a/q}$ with controlled denominator ${q}$ (using a tool such as the Dirichlet approximation theorem). The analysis then broadly bifurcates into the major arc case when ${q}$ is small, and the minor arc case when ${q}$ is large. In the major arc case, the problem more or less boils down to understanding sums such as

$\displaystyle \sum_{n\leq x} \Lambda(n) e(an/q),$

which in turn is almost equivalent to understanding the prime number theorem in arithmetic progressions modulo ${q}$. In the minor arc case, the prime number theorem is not strong enough to give good bounds (unless one assumes some extremely strong hypotheses, such as the generalised Riemann hypothesis), so instead one uses a rather different method, using truncated versions of divisor sum identities such as ${\Lambda(n) =\sum_{d|n} \mu(d) \log\frac{n}{d}}$ to split ${S(x,\alpha)}$ into a collection of linear and bilinear sums that are more tractable to bound, typical examples of which (after using a particularly simple truncated divisor sum identity known as Vaughan’s identity) include the “Type I sum”

$\displaystyle \sum_{d \leq U} \mu(d) \sum_{n \leq x/d} \log(n) e(\alpha dn)$

and the “Type II sum”

$\displaystyle \sum_{d > U} \sum_{w > V} \mu(d) (\sum_{b|w: b > V} \Lambda(b)) e(\alpha dw) 1_{dw \leq x}.$

After using tools such as the triangle inequality or Cauchy-Schwarz inequality to eliminate arithmetic functions such as ${\mu(d)}$ or ${\sum_{b|w: b>V}\Lambda(b)}$, one ends up controlling plain exponential sums such as ${\sum_{V < w < x/d} e(\alpha dw)}$, which can be efficiently controlled in the minor arc case.
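Both elementary ingredients here (the divisor sum identity for ${\Lambda}$, and the geometric-series estimate for the plain exponential sums that remain at the end) can be checked directly. The following sketch is mine, not from the paper; the bound used is the classical ${|\sum_{w=1}^{W} e(\alpha w)| \leq \min(W, \frac{1}{2\|\alpha\|})}$, where ${\|\alpha\|}$ is the distance from ${\alpha}$ to the nearest integer:

```python
import cmath
import math

def mobius(n):
    """Moebius function mu(n) by trial factorisation."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # n is divisible by a square
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^k is a prime power, else 0."""
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return 0.0

# Check the divisor sum identity Lambda(n) = sum_{d|n} mu(d) log(n/d).
for n in range(2, 300):
    rhs = sum(mobius(d) * math.log(n / d) for d in range(1, n + 1) if n % d == 0)
    assert abs(rhs - von_mangoldt(n)) < 1e-9

# Check the geometric series bound on a plain exponential sum.
alpha, W = 0.237, 1000
geom = sum(cmath.exp(2j * cmath.pi * alpha * w) for w in range(1, W + 1))
dist = min(alpha % 1.0, 1.0 - alpha % 1.0)  # ||alpha||
assert abs(geom) <= min(W, 1 / (2 * dist)) + 1e-9
```

The second check shows the key minor arc phenomenon in miniature: a sum of a thousand unit phases has magnitude bounded by an absolute constant depending only on how far ${\alpha}$ is from an integer.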

This argument works well when ${x}$ is extremely large, but starts running into problems for moderate sized ${x}$, e.g. ${x \sim 10^{30}}$. The first issue is that of logarithmic losses in the minor arc estimates. A typical minor arc estimate takes the shape

$\displaystyle |S(x,\alpha)| \ll (\frac{x}{\sqrt{q}}+\frac{x}{\sqrt{x/q}} + x^{4/5}) \log^3 x \ \ \ \ \ (1)$

when ${\alpha}$ is close to ${a/q}$ for some ${1\leq q\leq x}$. This only improves upon the trivial estimate ${|S(x,\alpha)| \ll x}$ from the prime number theorem when ${\log^6 x \ll q \ll x/\log^6 x}$. As a consequence, it becomes necessary to obtain an accurate prime number theorem in arithmetic progressions with modulus as large as ${\log^6 x}$. However, with current technology, the error terms in such theorems are quite poor (terms such as ${O(\exp(-c\sqrt{\log x}) x)}$ for some small ${c>0}$ are typical, and there is also a notorious “Siegel zero” problem), and as a consequence, the method is generally only applicable for very large ${x}$. For instance, the best explicit result of Vinogradov type known currently is due to Liu and Wang, who established that all odd numbers larger than ${10^{1340}}$ are the sum of three odd primes. (However, on the assumption of the GRH, the full odd Goldbach conjecture is known to be true; this is a result of Deshouillers, Effinger, te Riele, and Zinoviev.)

In this paper, we make a number of refinements to the general scheme, each one of which is individually rather modest and not all that novel, but which when added together turn out to be enough to resolve the five primes problem (though many more ideas would still be needed to tackle the three primes problem, and as is well known the circle method is very unlikely to be the route to make progress on the two primes problem). The first refinement, which is only available in the five primes case, is to take advantage of the numerical verification of the even Goldbach conjecture up to some large ${N_0}$ (we take ${N_0=4\times 10^{14}}$, using a verification of Richstein, although there are now much larger values of ${N_0}$, as high as ${2.6 \times 10^{18}}$, for which the conjecture has been verified). As such, instead of trying to represent an odd number ${x}$ as the sum of five primes, we can represent it as the sum of three odd primes and a natural number between ${2}$ and ${N_0}$. This effectively brings us back to the three primes problem, but with the significant additional boost that one can essentially restrict the frequency variable ${\alpha}$ to be of size ${O(1/N_0)}$. In practice, this eliminates all of the major arcs except for the principal arc around ${0}$. This is a significant simplification, in particular avoiding the need to deal with the prime number theorem in arithmetic progressions (and all the attendant theory of L-functions, Siegel zeroes, etc.).
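A toy version of this reduction is easy to set up: replace Richstein’s verification by a brute-force Goldbach check up to a small cutoff (here ${N_0 = 100}$ rather than ${4 \times 10^{14}}$), and search for three odd primes whose remainder lands in the verified range. This sketch illustrates only the bookkeeping of the reduction, not the analytic content of the paper:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [m for m in range(2, n + 1) if sieve[m]]

def goldbach_pair(m, primes, prime_set):
    """Brute-force stand-in for the numerical Goldbach verification:
    write an even m >= 4 as a sum of two primes."""
    for p in primes:
        if p > m:
            break
        if (m - p) in prime_set:
            return (p, m - p)
    return None

def five_primes(x, primes, prime_set, N0=100):
    """Toy reduction: write an odd x >= 13 as three odd primes plus an
    even number in [4, N0], the latter split by the Goldbach check."""
    odd_primes = [p for p in primes if p > 2]
    for p1 in odd_primes:
        if p1 + 3 + 3 + 4 > x:
            break
        for p2 in odd_primes:
            if p1 + p2 + 3 + 4 > x:
                break
            for p3 in odd_primes:
                r = x - p1 - p2 - p3
                if r < 4:
                    break
                # r is automatically even: odd x minus three odd primes.
                if r <= N0:
                    pair = goldbach_pair(r, primes, prime_set)
                    if pair is not None:
                        return (p1, p2, p3) + pair
    return None

primes = primes_up_to(500)
prime_set = set(primes)
rep = five_primes(137, primes, prime_set)
```

For ${x = 137}$ this produces a representation such as ${137 = 3+3+31+3+97}$; any odd ${x \geq 13}$ in the tested range is handled, mirroring how the restriction ${\alpha = O(1/N_0)}$ arises from the smallness of the remainder.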

In a similar spirit, by taking advantage of the numerical verification of the Riemann hypothesis up to some height ${T_0}$, and using the explicit formula relating the von Mangoldt function with the zeroes of the zeta function, one can safely deal with the principal major arc ${\{ \alpha = O( T_0 / x ) \}}$. For our specific application, we use the value ${T_0= 3.29 \times 10^9}$, arising from the verification of the Riemann hypothesis for the first ${10^{10}}$ zeroes by van de Lune (unpublished) and Wedeniwski. (Such verifications have since been extended further, the latest being that the first ${10^{13}}$ zeroes lie on the line.)

To make the contribution of the major arc as efficient as possible, we borrow an idea from a paper of Bourgain, and restrict one of the three primes in the three-primes problem to a somewhat shorter range than the other two (of size ${O(x/K)}$ instead of ${O(x)}$ for some scale parameter ${K}$), as this largely eliminates the “Archimedean” losses coming from trying to use Fourier methods to control convolutions on ${{\bf R}}$. In our paper, we set ${K}$ to be ${10^3}$ (basically, anything that is much larger than ${1}$ but much less than ${T_0}$ will work), but we found that an additional gain (which we ended up not using) could be obtained by averaging ${K}$ over a range of scales, say between ${10^3}$ and ${10^6}$. This sort of averaging could be a useful trick in future work on Goldbach-type problems.

It remains to treat the contribution of the “minor arc” ${T_0/x \ll |\alpha| \ll 1/N_0}$. To do this, one needs good ${L^2}$ and ${L^\infty}$ type estimates on the exponential sum ${S(x,\alpha)}$. Plancherel’s theorem gives an ${L^2}$ estimate which loses a logarithmic factor, but it turns out that on this particular minor arc one can use tools from the theory of the large sieve (such as Montgomery’s uncertainty principle) to eliminate this logarithmic loss almost completely; it turns out that the most efficient way to do this is to use an effective upper bound of Siebert on the number of prime pairs ${(p,p+h)}$ less than ${x}$ to obtain an ${L^2}$ bound that only loses a factor of ${8}$ (or of ${7}$, once one cuts out the major arc).
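The logarithmic loss in the Plancherel bound is easy to see numerically: by Parseval, ${\int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2\ d\alpha = \sum_{n \leq x} \Lambda(n)^2}$, which by the prime number theorem grows like ${x \log x}$ rather than ${x}$. A quick check (illustrative only; the constants and cutoff here are mine, not those of the paper):

```python
import math

def von_mangoldt_up_to(x):
    """Table of Lambda(n) for n <= x, sieving over prime powers."""
    lam = [0.0] * (x + 1)
    is_prime = [True] * (x + 1)
    for p in range(2, x + 1):
        if is_prime[p]:
            for m in range(2 * p, x + 1, p):
                is_prime[m] = False
            pk = p
            while pk <= x:
                lam[pk] = math.log(p)
                pk *= p
    return lam

x = 100000
lam = von_mangoldt_up_to(x)
# Parseval: the L^2 mass of S(x, .) equals the sum of Lambda(n)^2.
l2_mass = sum(v * v for v in lam)
ratio = l2_mass / x  # comparable to log x, not O(1): the logarithmic loss
```

For ${x = 10^5}$ the ratio is close to ${\log x - 1 \approx 10.5}$, confirming that the naive ${L^2}$ bound overshoots the target size ${x}$ by roughly a factor of ${\log x}$.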

For ${L^\infty}$ estimates, it turns out that existing effective versions of (1) (in particular, the bound given by Chen and Wang) are insufficient, due to the three logarithmic factors of ${\log x}$ in the bound. By using a smoothed out version ${S_\eta(x,\alpha) :=\sum_{n}\Lambda(n) e(n\alpha) \eta(n/x)}$ of the sum ${S(x,\alpha)}$, for some suitable cutoff function ${\eta}$, one can save one factor of a logarithm, obtaining a bound of the form

$\displaystyle |S_\eta(x,\alpha)| \ll (\frac{x}{\sqrt{q}}+\frac{x}{\sqrt{x/q}} + x^{4/5}) \log^2 x$

with effective constants. One can improve the constants further by restricting all summations to odd integers (which barely affects ${S_\eta(x,\alpha)}$, since ${\Lambda}$ is mostly supported on odd numbers anyway), which in practice reduces the effective constants by a factor of two or so. One can also make further improvements in the constants by using the very sharp large sieve inequality to control the “Type II” sums that arise from Vaughan’s identity, and by using integration by parts to improve the bounds on the “Type I” sums. A final gain can then be extracted by optimising the cutoff parameters ${U, V}$ appearing in Vaughan’s identity to minimise the contribution of the Type II sums (which, in practice, are the dominant term). Combining all these improvements, one ends up with bounds of the shape

$\displaystyle |S_\eta(x,\alpha)| \ll \frac{x}{q} \log^2 x + \frac{x}{\sqrt{q}} \log^2 q$

when ${q}$ is small (say ${1 < q < x^{1/3}}$) and

$\displaystyle |S_\eta(x,\alpha)| \ll \frac{x}{x/q} \log^2 x + \frac{x}{\sqrt{x/q}} \log^2(x/q)$

when ${q}$ is large (say ${x^{2/3} < q < x}$). (See the paper for more explicit versions of these estimates.) The point here is that the ${\log x}$ factors have been partially replaced by smaller logarithmic factors such as ${\log q}$ or ${\log x/q}$. Putting together all of these improvements, one can finally obtain a satisfactory bound on the minor arc. (There are still some terms with a ${\log x}$ factor in them, but we use the effective Vinogradov theorem of Liu and Wang to upper bound ${\log x}$ by ${3100}$, which ends up making the remaining terms involving ${\log x}$ manageable.)