I’ve just uploaded to the arXiv my paper “Every odd number greater than 1 is the sum of at most five primes”, submitted to Mathematics of Computation. The main result of the paper is as stated in the title, and is in the spirit of (though significantly weaker than) the even Goldbach conjecture (every even natural number is the sum of at most two primes) and odd Goldbach conjecture (every odd natural number greater than 1 is the sum of at most three primes). It also improves on a result of Ramaré that every even natural number is the sum of at most six primes. This result had previously also been established by Kaniecki under the additional assumption of the Riemann hypothesis, so one can view the main result here as an unconditional version of Kaniecki’s result.

The method used is the Hardy-Littlewood circle method, which was for instance also used to prove Vinogradov’s theorem that every sufficiently large odd number is the sum of three primes. Let’s quickly recall how this argument works. It is convenient to use a proxy for the primes, such as the von Mangoldt function ${\Lambda}$, which is mostly supported on the primes. To represent a large number ${x}$ as the sum of three primes, it suffices to obtain a good lower bound for the sum

$\displaystyle \sum_{n_1,n_2,n_3: n_1+n_2+n_3=x} \Lambda(n_1) \Lambda(n_2) \Lambda(n_3).$

By Fourier analysis, one can rewrite this sum as an integral

$\displaystyle \int_{{\bf R}/{\bf Z}} S(x,\alpha)^3 e(-x\alpha)\ d\alpha$

where

$\displaystyle S(x,\alpha) := \sum_{n \leq x} \Lambda(n) e(n\alpha)$

and ${e(\theta) :=e^{2\pi i \theta}}$. To control this integral, one then needs good bounds on ${S(x,\alpha)}$ for various values of ${\alpha}$. To do this, one first approximates ${\alpha}$ by a rational ${a/q}$ with controlled denominator ${q}$ (using a tool such as the Dirichlet approximation theorem). The analysis then broadly bifurcates into the major arc case when ${q}$ is small, and the minor arc case when ${q}$ is large. In the major arc case, the problem more or less boils down to understanding sums such as

$\displaystyle \sum_{n\leq x} \Lambda(n) e(an/q),$

which in turn is almost equivalent to understanding the prime number theorem in arithmetic progressions modulo ${q}$. In the minor arc case, the prime number theorem is not strong enough to give good bounds (unless one is using some extremely strong hypotheses, such as the generalised Riemann hypothesis), so instead one uses a rather different method, using truncated versions of divisor sum identities such as ${\Lambda(n) =\sum_{d|n} \mu(d) \log\frac{n}{d}}$ to split ${S(x,\alpha)}$ into a collection of linear and bilinear sums that are more tractable to bound, typical examples of which (after using a particularly simple truncated divisor sum identity known as Vaughan’s identity) include the “Type I sum”

$\displaystyle \sum_{d \leq U} \mu(d) \sum_{n \leq x/d} \log(n) e(\alpha dn)$

and the “Type II sum”

$\displaystyle \sum_{d > U} \sum_{w > V} \mu(d) (\sum_{b|w: b > V} \Lambda(b)) e(\alpha dw) 1_{dw \leq x}.$

After using tools such as the triangle inequality or Cauchy-Schwarz inequality to eliminate arithmetic functions such as ${\mu(d)}$ or ${\sum_{b|w: b>V}\Lambda(b)}$, one ends up controlling plain exponential sums such as ${\sum_{V < w < x/d} e(\alpha dw)}$, which can be efficiently controlled in the minor arc case.
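As a quick illustration (not from the paper), the standard estimate for such plain exponential sums is ${|\sum_{w=1}^N e(\beta w)| \leq \min(N, \frac{1}{2\|\beta\|})}$, where ${\|\beta\|}$ denotes the distance from ${\beta}$ to the nearest integer; this is what makes the minor arc case tractable once the arithmetic weights are removed. A minimal Python sketch checking this bound numerically:

```python
import cmath

def e(theta):
    # e(theta) = exp(2*pi*i*theta), as in the post
    return cmath.exp(2j * cmath.pi * theta)

def linear_sum(beta, N):
    # the plain exponential sum sum_{w=1}^{N} e(beta*w)
    return sum(e(beta * w) for w in range(1, N + 1))

def dist_to_nearest_int(beta):
    # ||beta||, the distance from beta to the nearest integer
    return abs(beta - round(beta))

# check |sum| <= min(N, 1/(2*||beta||)) at a few sample frequencies
N = 500
for beta in [0.01, 0.137, 0.25, 0.4999, 0.7321]:
    bound = min(N, 1 / (2 * dist_to_nearest_int(beta)))
    assert abs(linear_sum(beta, N)) <= bound + 1e-9
```

The point is that the bound is small (and in particular much smaller than the trivial bound ${N}$) whenever ${\beta}$ is far from an integer, which is exactly the situation on the minor arcs.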

This argument works well when ${x}$ is extremely large, but starts running into problems for moderately sized ${x}$, e.g. ${x \sim 10^{30}}$. The first issue is that of logarithmic losses in the minor arc estimates. A typical minor arc estimate takes the shape

$\displaystyle |S(x,\alpha)| \ll (\frac{x}{\sqrt{q}}+\frac{x}{\sqrt{x/q}} + x^{4/5}) \log^3 x \ \ \ \ \ (1)$

when ${\alpha}$ is close to ${a/q}$ for some ${1\leq q\leq x}$. This only improves upon the trivial estimate ${|S(x,\alpha)| \ll x}$ from the prime number theorem when ${\log^6 x \ll q \ll x/\log^6 x}$. As a consequence, it becomes necessary to obtain an accurate prime number theorem in arithmetic progressions with modulus as large as ${\log^6 x}$. However, with current technology, the error term in such theorems is quite poor (terms such as ${O(\exp(-c\sqrt{\log x}) x)}$ for some small ${c>0}$ are typical, and there is also a notorious “Siegel zero” problem), and as a consequence, the method is generally only applicable for very large ${x}$. For instance, the best explicit result of Vinogradov type currently known is due to Liu and Wang, who established that all odd numbers larger than ${e^{3100} \approx 2 \times 10^{1346}}$ are the sum of three odd primes. (However, on the assumption of the GRH, the full odd Goldbach conjecture is known to be true; this is a result of Deshouillers, Effinger, te Riele, and Zinoviev.)
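To get a feel for the numerology here, one can evaluate the right-hand side of (1) at ${x = 10^{30}}$ for various ${q}$ and compare against the trivial bound ${x}$; the computation below (which takes the implied constant in (1) to be ${1}$ purely for illustration) confirms that the gain only appears in the middle range of ${q}$:

```python
import math

def minor_arc_bound(x, q, C=1.0):
    # right-hand side of (1), with an illustrative implied constant C
    return C * (x / math.sqrt(q) + x / math.sqrt(x / q) + x ** 0.8) * math.log(x) ** 3

x = 1e30
L6 = math.log(x) ** 6  # log^6 x, roughly 1.1e11 at this value of x

# the bound only beats the trivial estimate |S| <= x in the middle range
assert minor_arc_bound(x, 10) > x    # q too small: no gain
assert minor_arc_bound(x, 1e15) < x  # log^6 x << q << x/log^6 x: gain
assert minor_arc_bound(x, 1e28) > x  # q too large: no gain
```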

In this paper, we make a number of refinements to the general scheme, each one of which is individually rather modest and not all that novel, but which when added together turn out to be enough to resolve the five primes problem (though many more ideas would still be needed to tackle the three primes problem, and as is well known the circle method is very unlikely to be the route to make progress on the two primes problem). The first refinement, which is only available in the five primes case, is to take advantage of the numerical verification of the even Goldbach conjecture up to some large ${N_0}$ (we take ${N_0=4\times 10^{14}}$, using a verification of Richstein, although there are now much larger values of ${N_0}$ – as high as ${2.6 \times 10^{18}}$ – for which the conjecture has been verified). As such, instead of trying to represent an odd number ${x}$ as the sum of five primes, we can represent it as the sum of three odd primes and a natural number between ${2}$ and ${N_0}$. This effectively brings us back to the three primes problem, but with the significant additional boost that one can essentially restrict the frequency variable ${\alpha}$ to be of size ${O(1/N_0)}$. In practice, this eliminates all of the major arcs except for the principal arc around ${0}$. This is a significant simplification, in particular avoiding the need to deal with the prime number theorem in arithmetic progressions (and all the attendant theory of L-functions, Siegel zeroes, etc.).
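The kind of numerical verification being invoked here is conceptually very simple, even though the actual verifications of Richstein and others use far more optimised code and vastly larger ranges. A toy Python version, checking the even Goldbach conjecture up to a (hypothetical, much smaller) threshold of ${10^5}$:

```python
def sieve(limit):
    # Sieve of Eratosthenes: return the list of primes up to limit
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [n for n in range(limit + 1) if is_prime[n]]

def goldbach_witness(n, primes, prime_set):
    # return a prime p with n - p also prime, or None if no such p exists
    for p in primes:
        if p > n // 2:
            break
        if (n - p) in prime_set:
            return p
    return None

N0 = 10 ** 5  # toy analogue of the verification threshold
primes = sieve(N0)
prime_set = set(primes)

# verify the even Goldbach conjecture for all even 4 <= n <= N0
assert all(goldbach_witness(n, primes, prime_set) is not None
           for n in range(4, N0 + 1, 2))
```

In practice the smallest witness ${p}$ is almost always tiny, which is why such verifications can be pushed to thresholds like ${2.6 \times 10^{18}}$ with more sophisticated implementations.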

In a similar spirit, by taking advantage of the numerical verification of the Riemann hypothesis up to some height ${T_0}$, and using the explicit formula relating the von Mangoldt function with the zeroes of the zeta function, one can safely deal with the principal major arc ${\{ \alpha = O( T_0 / x ) \}}$. For our specific application, we use the value ${T_0= 3.29 \times 10^9}$, arising from the verification of the Riemann hypothesis for the first ${10^{10}}$ zeroes by van de Lune (unpublished) and Wedeniwski. (Such verifications have since been extended further, the latest being that the first ${10^{13}}$ zeroes lie on the line.)
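For reference, the explicit formula alluded to here is the classical von Mangoldt formula: for non-integer ${x > 1}$,

$\displaystyle \sum_{n \leq x} \Lambda(n) = x - \sum_\rho \frac{x^\rho}{\rho} - \log(2\pi) - \frac{1}{2} \log(1 - x^{-2}),$

where ${\rho}$ ranges over the non-trivial zeroes of the zeta function (with the sum over zeroes interpreted symmetrically). Once all zeroes up to height ${T_0}$ are known to lie on the critical line ${\hbox{Re}(\rho) = 1/2}$, each such zero contributes only ${O(x^{1/2}/|\rho|)}$ to this sum, which is what makes the principal arc numerically tractable.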

To make the contribution of the major arc as efficient as possible, we borrow an idea from a paper of Bourgain, and restrict one of the three primes in the three-primes problem to a somewhat shorter range than the other two (of size ${O(x/K)}$ instead of ${O(x)}$, for some scale parameter ${K}$), as this largely eliminates the “Archimedean” losses coming from trying to use Fourier methods to control convolutions on ${{\bf R}}$. In our paper, we set ${K}$ to be ${10^3}$ (basically, anything that is much larger than ${1}$ but much less than ${T_0}$ will work), but we found that an additional gain (which we ended up not using) could be obtained by averaging ${K}$ over a range of scales, say between ${10^3}$ and ${10^6}$. This sort of averaging could be a useful trick in future work on Goldbach-type problems.

It remains to treat the contribution of the “minor arc” ${T_0/x \ll |\alpha| \ll 1/N_0}$. To do this, one needs good ${L^2}$ and ${L^\infty}$ type estimates on the exponential sum ${S(x,\alpha)}$. Plancherel’s theorem gives an ${L^2}$ estimate which loses a logarithmic factor, but it turns out that on this particular minor arc one can use tools from the theory of the large sieve (such as Montgomery’s uncertainty principle) to eliminate this logarithmic loss almost completely; it turns out that the most efficient way to do this is to use an effective upper bound of Siebert on the number of prime pairs ${(p,p+h)}$ less than ${x}$ to obtain an ${L^2}$ bound that only loses a factor of ${8}$ (or of ${7}$, once one cuts out the major arc).
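The ${L^2}$ identity underlying all of this is just Parseval: ${\int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2\ d\alpha = \sum_{n \leq x} \Lambda(n)^2}$. On a discrete frequency grid of ${M > x}$ points this becomes an exact identity, which makes for a quick sanity check; a small illustrative computation (not from the paper, and at a tiny value of ${x}$):

```python
import cmath
import math

def von_mangoldt(n):
    # Lambda(n) = log p if n is a power of the prime p, and 0 otherwise
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return 0.0

x, M = 200, 256  # any grid size M > x makes the discrete Parseval identity exact
Lam = [von_mangoldt(n) for n in range(x + 1)]

def S(alpha):
    # S(x, alpha) = sum_{n <= x} Lambda(n) e(n alpha)
    return sum(Lam[n] * cmath.exp(2j * cmath.pi * n * alpha)
               for n in range(1, x + 1))

# discrete mean of |S|^2 over the grid equals sum of Lambda(n)^2 exactly
lhs = sum(abs(S(k / M)) ** 2 for k in range(M)) / M
rhs = sum(L ** 2 for L in Lam)
assert abs(lhs - rhs) < 1e-6
```

The refinement described above is then about replacing the lossy pointwise bound ${\Lambda(n) \leq \log n}$ in this identity by sieve-theoretic information about prime pairs.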

For ${L^\infty}$ estimates, it turns out that existing effective versions of (1) (in particular, the bound given by Chen and Wang) are insufficient, due to the three factors of ${\log x}$ in the bound. By using a smoothed out version ${S_\eta(x,\alpha) :=\sum_{n}\Lambda(n) e(n\alpha) \eta(n/x)}$ of the sum ${S(x,\alpha)}$, for some suitable cutoff function ${\eta}$, one can save one factor of a logarithm, obtaining a bound of the form

$\displaystyle |S_\eta(x,\alpha)| \ll (\frac{x}{\sqrt{q}}+\frac{x}{\sqrt{x/q}} + x^{4/5}) \log^2 x$

with effective constants. One can improve the constants further by restricting all summations to odd integers (which barely affects ${S_\eta(x,\alpha)}$, since ${\Lambda}$ is mostly supported on odd numbers anyway), which in practice reduces the effective constants by a factor of two or so. One can also make further improvements in the constants by using the very sharp large sieve inequality to control the “Type II” sums that arise from Vaughan’s identity, and by using integration by parts to improve the bounds on the “Type I” sums. A final gain can then be extracted by optimising the cutoff parameters ${U, V}$ appearing in Vaughan’s identity to minimise the contribution of the Type II sums (which, in practice, are the dominant term). Combining all these improvements, one ends up with bounds of the shape

$\displaystyle |S_\eta(x,\alpha)| \ll \frac{x}{q} \log^2 x + \frac{x}{\sqrt{q}} \log^2 q$

when ${q}$ is small (say ${1 < q < x^{1/3}}$) and

$\displaystyle |S_\eta(x,\alpha)| \ll \frac{x}{(x/q)^2} \log^2 x + \frac{x}{\sqrt{x/q}} \log^2(x/q)$

when ${q}$ is large (say ${x^{2/3} < q < x}$). (See the paper for more explicit versions of these estimates.) The point here is that the ${\log x}$ factors have been partially replaced by smaller logarithmic factors such as ${\log q}$ or ${\log(x/q)}$. Putting together all of these improvements, one can finally obtain a satisfactory bound on the minor arc. (There are still some terms with a ${\log x}$ factor in them, but we use the effective Vinogradov theorem of Liu and Wang to upper bound ${\log x}$ by ${3100}$, which ends up making the remaining terms involving ${\log x}$ manageable.)
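To see in miniature why the smoothing helps, one can strip away the number theory entirely and compare a sharp-cutoff exponential sum with a smoothed one. The toy computation below is illustrative only: the weight ${\Lambda(n)}$ is replaced by ${1}$, and the cutoff ${\eta}$ is an arbitrary smooth bump, not the cutoff used in the paper.

```python
import math
import cmath

def eta(t):
    # an arbitrary smooth bump function supported on (0, 1)
    return math.exp(-1 / (t * (1 - t))) if 0 < t < 1 else 0.0

def e(theta):
    return cmath.exp(2j * cmath.pi * theta)

N, alpha = 1000, 0.00723

# sharp cutoff: size comparable to 1/||alpha||, i.e. fairly large here
sharp = sum(e(n * alpha) for n in range(1, N + 1))

# smooth cutoff: by Poisson summation, this decays rapidly in N*alpha
smooth = sum(eta(n / N) * e(n * alpha) for n in range(1, N + 1))

assert abs(sharp) > 20
assert abs(smooth) < abs(sharp)
```

The sharp sum exhibits the slowly decaying “Archimedean” tail that costs a logarithm when integrated over ${\alpha}$, while the smoothed sum localises much more tightly around the principal arc.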