
Two of the most famous open problems in additive prime number theory are the twin prime conjecture and the binary Goldbach conjecture. They have quite similar forms:

  • Twin prime conjecture The equation {p_1 - p_2 = 2} has infinitely many solutions with {p_1,p_2} prime.
  • Binary Goldbach conjecture The equation {p_1 + p_2 = N} has at least one solution with {p_1,p_2} prime for any given even {N \geq 4}.

In view of this similarity, it is not surprising that the partial progress on these two conjectures has tracked each other fairly closely; the twin prime conjecture is generally considered slightly easier than the binary Goldbach conjecture, but broadly speaking any progress made on one of the conjectures has also led to a comparable amount of progress on the other. (For instance, Chen’s theorem has a version for the twin prime conjecture, and a version for the binary Goldbach conjecture.) Also, the notorious parity obstruction is present in both problems, preventing a solution to either conjecture by almost all known methods (see this previous blog post for more discussion).

In this post, I would like to note a divergence from this general principle, with regards to bounded error versions of these two conjectures:

  • Twin prime with bounded error The inequalities {0 < p_1 - p_2 < H} have infinitely many solutions with {p_1,p_2} prime for some absolute constant {H}.
  • Binary Goldbach with bounded error The inequalities {N \leq p_1+p_2 \leq N+H} have at least one solution with {p_1,p_2} prime for any sufficiently large {N} and some absolute constant {H}.

The first of these statements is now a well-known theorem of Zhang, and the Polymath8b project hosted on this blog has managed to lower {H} to {H=246} unconditionally, and to {H=6} assuming the generalised Elliott-Halberstam conjecture. However, the second statement remains open; the best result that the Polymath8b project could manage in this direction is that (assuming GEH) at least one of the binary Goldbach conjecture with bounded error, or the twin prime conjecture with no error, had to be true.

All the known proofs of Zhang’s theorem proceed through sieve-theoretic means. Basically, they take as input equidistribution results that control the size of discrepancies such as

\displaystyle  \Delta(f; a\ (q)) := \sum_{x \leq n \leq 2x; n=a\ (q)} f(n) - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x} f(n) \ \ \ \ \ (1)

for various congruence classes {a\ (q)} and various arithmetic functions {f}, e.g. {f(n) = \Lambda(n+h_i)} (or more generally {f(n) = \alpha * \beta(n+h_i)} for various {\alpha,\beta}). After taking some carefully chosen linear combinations of these discrepancies, and using the trivial positivity lower bound

\displaystyle  a_n \geq 0 \hbox{ for all } n \implies \sum_n a_n \geq 0 \ \ \ \ \ (2)

one eventually obtains (for suitable {H}) a non-trivial lower bound of the form

\displaystyle  \sum_{x \leq n \leq 2x} \nu(n) 1_A(n) > 0

where {\nu} is some weight function, and {A} is the set of {n} such that there are at least two primes in the interval {[n,n+H]}. This implies at least one solution to the inequalities {0 < p_1 - p_2 < H} with {p_1,p_2 \sim x}, and Zhang’s theorem follows.

In a similar vein, one could hope to use bounds on discrepancies such as (1) (for {x} comparable to {N}), together with the trivial lower bound (2), to obtain (for sufficiently large {N}, and suitable {H}) a non-trivial lower bound of the form

\displaystyle  \sum_{n \leq N} \nu(n) 1_B(n) > 0 \ \ \ \ \ (3)

for some weight function {\nu}, where {B} is the set of {n} such that there is at least one prime in each of the intervals {[n,n+H]} and {[N-n-H,N-n]}. This would imply the binary Goldbach conjecture with bounded error.
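(As a purely illustrative aside, and not part of any actual sieve argument, here is a small Python sketch that brute-forces membership in the sets {A} and {B} for toy values of {H} and {N}; the helper names and parameter choices below are hypothetical and chosen only to make the definitions concrete.)

```python
# Illustrative brute force of the sets A and B above, for toy parameters.
# (The real arguments work with sieve weights nu(n); this only tests membership.)

def is_prime(m: int) -> bool:
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def in_A(n: int, H: int) -> bool:
    # at least two primes in the interval [n, n+H]
    return sum(is_prime(m) for m in range(n, n + H + 1)) >= 2

def in_B(n: int, N: int, H: int) -> bool:
    # at least one prime in [n, n+H] and at least one in [N-n-H, N-n]
    return (any(is_prime(m) for m in range(n, n + H + 1))
            and any(is_prime(m) for m in range(N - n - H, N - n + 1)))

H, N = 6, 10**4   # toy choices, purely for illustration
print([n for n in range(100, 140) if in_A(n, H)])     # n with two primes within distance H
print([n for n in range(100, 140) if in_B(n, N, H)])  # n that would contribute to (3) for this N
```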

However, the parity obstruction blocks such a strategy from working (for much the same reason that it blocks any bound of the form {H \leq 4} in Zhang’s theorem, as discussed in the Polymath8b paper.) The reason is as follows. The sieve-theoretic arguments are linear with respect to the {n} summation, and as such, any such sieve-theoretic argument would automatically also work in a weighted setting in which the {n} summation is weighted by some non-negative weight {\omega(n) \geq 0}. More precisely, if one could control the weighted discrepancies

\displaystyle  \Delta(f\omega; a\ (q)) = \sum_{x \leq n \leq 2x; n=a\ (q)} f(n) \omega(n) - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x} f(n) \omega(n)

to essentially the same accuracy as the unweighted discrepancies (1), then thanks to the trivial weighted version

\displaystyle  a_n \geq 0 \hbox{ for all } n \implies \sum_n a_n \omega(n) \geq 0

of (2), any sieve-theoretic argument that was capable of proving (3) would also be capable of proving the weighted estimate

\displaystyle  \sum_{n \leq N} \nu(n) 1_B(n) \omega(n) > 0. \ \ \ \ \ (4)

However, (4) may be defeated by a suitable choice of weight {\omega}, namely

\displaystyle  \omega(n) := \prod_{i=1}^H (1 + \lambda(n) \lambda(n+i)) \times \prod_{j=0}^H (1 - \lambda(n) \lambda(N-n-j))

where {n \mapsto \lambda(n)} is the Liouville function, which records the parity of the number of prime factors of a given number {n}. Since {\lambda(n)^2 = 1}, one can expand out {\omega(n)} as the sum of {1} and a finite number of other terms, each of which consists of the product of two or more translates (or reflections) of {\lambda}. But from the Möbius randomness principle (or its analogue for the Liouville function), such products of {\lambda} are widely expected to be essentially orthogonal to any arithmetic function {f(n)} arising from a single multiplicative function such as {\Lambda}, even on very short arithmetic progressions. As such, replacing {1} by {\omega(n)} in (1) should have a negligible effect on the discrepancy. On the other hand, in order for {\omega(n)} to be non-zero, {\lambda(n+i)} has to have the same sign as {\lambda(n)} for all {1 \leq i \leq H}, and {\lambda(N-n-j)} has to have the opposite sign to {\lambda(n)} for all {0 \leq j \leq H}. Since every prime has Liouville function equal to {-1}, this means that {n+i} and {N-n-j} cannot simultaneously be prime for any {0 \leq i,j \leq H}, and so {1_B(n) \omega(n)} vanishes identically, contradicting (4). This indirectly rules out any modification of the Goldston-Pintz-Yildirim/Zhang method for establishing the binary Goldbach conjecture with bounded error.
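To see the obstruction numerically rather than heuristically, one can check by brute force that the weight {\omega(n)} above really does vanish on {B}. The following Python sketch (with toy values of {N} and {H}, and naive implementations of the Liouville function and primality testing that are only meant for illustration) does exactly this.

```python
# Numerical check that the Liouville weight omega(n) defined above vanishes on B.
# Everything here is a naive brute-force implementation, for illustration only.

def liouville(n: int) -> int:
    # lambda(n) = (-1)^(number of prime factors of n, counted with multiplicity)
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

def is_prime(m: int) -> bool:
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

def omega(n: int, N: int, H: int) -> int:
    w = 1
    for i in range(1, H + 1):
        w *= 1 + liouville(n) * liouville(n + i)
    for j in range(0, H + 1):
        w *= 1 - liouville(n) * liouville(N - n - j)
    return w

def in_B(n: int, N: int, H: int) -> bool:
    return (any(is_prime(m) for m in range(n, n + H + 1))
            and any(is_prime(m) for m in range(N - n - H, N - n + 1)))

N, H = 10**4, 4   # toy parameters
violations = [n for n in range(10, N // 2) if in_B(n, N, H) and omega(n, N, H) != 0]
print(violations)  # expected output: [] -- the weight omega kills 1_B identically
```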

The above argument is not watertight, and one could envisage some ways around this problem. One of them is that the Möbius randomness principle could simply be false, in which case the parity obstruction vanishes. A good example of this is the result of Heath-Brown that shows that if there are infinitely many Siegel zeroes (which is a strong violation of the Möbius randomness principle), then the twin prime conjecture holds. Another way around the obstruction is to start controlling the discrepancy (1) for functions {f} that are combinations of more than one multiplicative function, e.g. {f(n) = \Lambda(n) \Lambda(n+2)}. However, controlling such functions looks to be at least as difficult as the twin prime conjecture (which is morally equivalent to obtaining non-trivial lower-bounds for {\sum_{x \leq n \leq 2x} \Lambda(n) \Lambda(n+2)}). A third option is not to use a sieve-theoretic argument, but to try a different method (e.g. the circle method). However, most other known methods also exhibit linearity in the “{n}” variable and I would suspect they would be vulnerable to a similar obstruction. (In any case, the circle method specifically has some other difficulties in tackling binary problems, as discussed in this previous post.)

One of the most basic methods in additive number theory is the Hardy-Littlewood circle method. This method is based on expressing a quantity of interest to additive number theory, such as the number of representations {f_3(x)} of an integer {x} as the sum of three primes {x = p_1+p_2+p_3}, as a Fourier-analytic integral over the unit circle {{\bf R}/{\bf Z}} involving exponential sums such as

\displaystyle  S(x,\alpha) := \sum_{p \leq x} e( \alpha p) \ \ \ \ \ (1)

where the sum here ranges over all primes up to {x}, and {e(x) := e^{2\pi i x}}. For instance, the expression {f_3(x)} mentioned earlier can be written as

\displaystyle  f_3(x) = \int_{{\bf R}/{\bf Z}} S(x,\alpha)^3 e(-x\alpha)\ d\alpha. \ \ \ \ \ (2)

The strategy is then to obtain sufficiently accurate bounds on exponential sums such as {S(x,\alpha)} in order to obtain non-trivial bounds on quantities such as {f_3(x)}. For instance, if one can show that {f_3(x)>0} for all odd integers {x} greater than some given threshold {x_0}, this implies that all odd integers greater than {x_0} are expressible as the sum of three primes, thus establishing all but finitely many instances of the odd Goldbach conjecture.
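Identities such as (2) are exact, and can be checked directly for small {x}: since {S(x,\alpha)} is a trigonometric polynomial, the integral can be evaluated exactly by sampling {\alpha} at sufficiently many equally spaced points. The following Python sketch (with a toy value of {x}, chosen only for illustration) compares the integral against a direct count of ordered representations {x = p_1+p_2+p_3}.

```python
# Numerically verify the identity (2) for a small x:
# the number of ordered triples of primes summing to x equals the circle integral.
import cmath
from itertools import product

def primes_up_to(x):
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p in range(2, x + 1) if sieve[p]]

x = 51                       # a small odd number, chosen only for illustration
P = primes_up_to(x)

# direct count of ordered representations x = p1 + p2 + p3
f3_direct = sum(1 for p1, p2, p3 in product(P, repeat=3) if p1 + p2 + p3 == x)

# Riemann sum for the integral; since S(x,alpha)^3 e(-x alpha) is a trigonometric
# polynomial with frequencies between 6-x and 2x, sampling at M = 3x+1 points is exact.
M = 3 * x + 1
def S(alpha):
    return sum(cmath.exp(2j * cmath.pi * alpha * p) for p in P)

f3_circle = sum(S(m / M) ** 3 * cmath.exp(-2j * cmath.pi * x * m / M)
                for m in range(M)) / M

print(f3_direct, round(f3_circle.real))   # the two counts agree
```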

Remark 1 In practice, it can be more efficient to work with smoother sums than the partial sum (1), for instance by replacing the cutoff {p \leq x} with a smoother cutoff {\chi(p/x)} for a suitable choice of cutoff function {\chi}, or by replacing the restriction of the summation to primes by a more analytically tractable weight, such as the von Mangoldt function {\Lambda(n)}. However, these improvements to the circle method are primarily technical in nature and do not have much impact on the heuristic discussion in this post, so we will not emphasise them here. One can also certainly use the circle method to study additive combinations of numbers from other sets than the set of primes, but we will restrict attention to additive combinations of primes for sake of discussion, as it is historically one of the most studied sets in additive number theory.

In many cases, it turns out that one can get fairly precise evaluations on sums such as {S(x,\alpha)} in the major arc case, when {\alpha} is close to a rational number {a/q} with small denominator {q}, by using tools such as the prime number theorem in arithmetic progressions. For instance, the prime number theorem itself tells us that

\displaystyle  S(x,0) \approx \frac{x}{\log x}

and the prime number theorem in residue classes modulo {q} suggests more generally that

\displaystyle  S(x,\frac{a}{q}) \approx \frac{\mu(q)}{\phi(q)} \frac{x}{\log x}

when {q} is small and {\alpha} is close to {a/q}, basically thanks to the elementary calculation that the phase {e(an/q)} has an average value of {\mu(q)/\phi(q)} when {n} is uniformly distributed amongst the residue classes modulo {q} that are coprime to {q}. Quantifying the precise error in these approximations can be quite challenging, though, unless one assumes powerful hypotheses such as the Generalised Riemann Hypothesis.
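The elementary calculation referred to here is that {\frac{1}{\phi(q)} \sum_{n \in ({\bf Z}/q{\bf Z})^\times} e(an/q) = \mu(q)/\phi(q)} whenever {a} is coprime to {q} (a Ramanujan sum evaluation). Here is a quick, purely illustrative Python check of this identity for small moduli.

```python
# Check: the average of e(an/q) over residues n coprime to q equals mu(q)/phi(q)
# when gcd(a,q) = 1 (equivalently, the Ramanujan sum c_q(a) equals mu(q)).
import cmath
from math import gcd

def mobius(q: int) -> int:
    result, d = 1, 2
    while d * d <= q:
        if q % d == 0:
            q //= d
            if q % d == 0:
                return 0          # q is divisible by a square
            result = -result
        d += 1
    return -result if q > 1 else result

def phi(q: int) -> int:
    return sum(1 for n in range(1, q + 1) if gcd(n, q) == 1)

for q in range(2, 20):
    a = next(b for b in range(1, q) if gcd(b, q) == 1)   # any a coprime to q
    avg = sum(cmath.exp(2j * cmath.pi * a * n / q)
              for n in range(1, q + 1) if gcd(n, q) == 1) / phi(q)
    print(q, round(avg.real, 6), round(mobius(q) / phi(q), 6))   # the two columns agree
```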

In the minor arc case when {\alpha} is not close to a rational {a/q} with small denominator, one no longer expects to have such precise control on the value of {S(x,\alpha)}, due to the “pseudorandom” fluctuations of the quantity {e(\alpha p)}. Using the standard probabilistic heuristic (supported by results such as the central limit theorem or Chernoff’s inequality) that the sum of {k} “pseudorandom” phases should fluctuate randomly and be of typical magnitude {\sim \sqrt{k}}, one expects upper bounds of the shape

\displaystyle  |S(x,\alpha)| \lessapprox \sqrt{\frac{x}{\log x}} \ \ \ \ \ (3)

for “typical” minor arc {\alpha}. Indeed, a simple application of the Plancherel identity, followed by the prime number theorem, reveals that

\displaystyle  \int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2\ d\alpha \sim \frac{x}{\log x} \ \ \ \ \ (4)

which is consistent with (though weaker than) the above heuristic. In practice, though, we are unable to rigorously establish bounds anywhere near as strong as (3); upper bounds such as {x^{4/5+o(1)}} are far more typical.
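The Plancherel computation behind (4) is simple enough to verify numerically: expanding {|S(x,\alpha)|^2} and integrating kills all the off-diagonal terms, so the integral is exactly {\pi(x)}, the number of primes up to {x}, which is {\sim x/\log x} by the prime number theorem. Here is an illustrative Python sketch for a small {x}, using the same exact-sampling trick as before.

```python
# Plancherel: the integral of |S(x,alpha)|^2 over R/Z equals pi(x) exactly,
# since the cross terms e(alpha(p-p')) with p != p' integrate to zero.
import cmath
from math import log

def primes_up_to(x):
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p in range(2, x + 1) if sieve[p]]

x = 1000
P = primes_up_to(x)

# |S|^2 is a trigonometric polynomial with frequencies in [-x, x], so sampling
# at M = 2x+1 equally spaced points evaluates the integral exactly.
M = 2 * x + 1
l2 = sum(abs(sum(cmath.exp(2j * cmath.pi * (m / M) * p) for p in P)) ** 2
         for m in range(M)) / M

print(round(l2), len(P), round(x / log(x), 1))   # integral = pi(x) ~ x/log x
```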

Because one only expects to have upper bounds on {|S(x,\alpha)|}, rather than asymptotics, in the minor arc case, one cannot realistically hope to make much use of phases such as {e(-x\alpha)} for the minor arc contribution to integrals such as (2) (at least if one is working with a single, deterministic, value of {x}, so that averaging in {x} is unavailable). In particular, from upper bound information alone, it is difficult to avoid the “conspiracy” that the magnitude {|S(x,\alpha)|^3} oscillates in sympathetic resonance with the phase {e(-x\alpha)}, thus essentially eliminating almost all of the possible gain in the bounds that could arise from exploiting cancellation from that phase. Thus, one basically has little option except to use the triangle inequality to control the portion of the integral on the minor arc region {\Omega_{minor}}:

\displaystyle  |\int_{\Omega_{minor}} |S(x,\alpha)|^3 e(-x\alpha)\ d\alpha| \leq \int_{\Omega_{minor}} |S(x,\alpha)|^3\ d\alpha.

Despite this handicap, though, it is still possible to get enough bounds on both the major and minor arc contributions of integrals such as (2) to obtain non-trivial lower bounds on quantities such as {f_3(x)}, at least when {x} is large. In particular, this sort of method can be developed to give a proof of Vinogradov’s famous theorem that every sufficiently large odd integer {x} is the sum of three primes; my own result that all odd numbers greater than {1} can be expressed as the sum of at most five primes is also proven by essentially the same method (modulo a number of minor refinements, and taking advantage of some numerical work on both the Goldbach problems and on the Riemann hypothesis). It is certainly conceivable that some further variant of the circle method (again combined with a suitable amount of numerical work, such as that of numerically establishing zero-free regions for the Generalised Riemann Hypothesis) can be used to settle the full odd Goldbach conjecture; indeed, under the assumption of the Generalised Riemann Hypothesis, this was already achieved by Deshouillers, Effinger, te Riele, and Zinoviev back in 1997. I am optimistic that an unconditional version of this result will be possible within a few years or so, though I should say that there are still significant technical challenges to doing so, and some clever new ideas will probably be needed to get either the Vinogradov-style argument or numerical verification to work unconditionally for the three-primes problem at medium-sized ranges of {x}, such as {x \sim 10^{50}}. (But the intermediate problem of representing all even natural numbers as the sum of at most four primes looks somewhat closer to being feasible, though even this would require some substantially new and non-trivial ideas beyond what is in my five-primes paper.)

However, I (like many other analytic number theorists) am considerably more skeptical that the circle method can be applied to the even Goldbach problem of representing a large even number {x} as the sum {x = p_1 + p_2} of two primes, or the similar (and marginally simpler) twin prime conjecture of finding infinitely many pairs of twin primes, i.e. finding infinitely many representations {2 = p_1 - p_2} of {2} as the difference of two primes. At first glance, the situation looks tantalisingly similar to that of the Vinogradov theorem: to settle the even Goldbach problem for large {x}, one has to find a non-trivial lower bound for the quantity

\displaystyle  f_2(x) = \int_{{\bf R}/{\bf Z}} S(x,\alpha)^2 e(-x\alpha)\ d\alpha \ \ \ \ \ (5)

for sufficiently large {x}, as this quantity {f_2(x)} is also the number of ways to represent {x} as the sum {x=p_1+p_2} of two primes {p_1,p_2}. Similarly, to settle the twin prime problem, it would suffice to obtain a lower bound for the quantity

\displaystyle  \tilde f_2(x) = \int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2 e(-2\alpha)\ d\alpha \ \ \ \ \ (6)

that goes to infinity as {x \rightarrow \infty}, as this quantity {\tilde f_2(x)} is also the number of ways to represent {2} as the difference {2 = p_1-p_2} of two primes less than or equal to {x}.

In principle, one can achieve either of these two objectives by a sufficiently fine level of control on the exponential sums {S(x,\alpha)}. Indeed, there is a trivial (and uninteresting) way to take any (hypothetical) solution of either the asymptotic even Goldbach problem or the twin prime problem and (artificially) convert it to a proof that “uses the circle method”; one simply begins with the quantity {f_2(x)} or {\tilde f_2(x)}, expresses it in terms of {S(x,\alpha)} using (5) or (6), and then uses (5) or (6) again to convert these integrals back into the combinatorial expression of counting solutions to {x=p_1+p_2} or {2=p_1-p_2}, and then uses the hypothetical solution to the given problem to obtain the required lower bounds on {f_2(x)} or {\tilde f_2(x)}.

Of course, this would not qualify as a genuine application of the circle method by any reasonable measure. One can then ask the more refined question of whether one could hope to get non-trivial lower bounds on {f_2(x)} or {\tilde f_2(x)} (or similar quantities) purely from the upper and lower bounds on {S(x,\alpha)} or similar quantities (and of various {L^p} type norms on such quantities, such as the {L^2} bound (4)). Of course, we do not yet know what the strongest possible upper and lower bounds on {S(x,\alpha)} are (otherwise we would already have made progress on major conjectures such as the Riemann hypothesis); but we can make plausible heuristic conjectures on such bounds. And this is enough to make the following heuristic conclusions:

  • (i) For “binary” problems such as computing (5), (6), the contribution of the minor arcs potentially dominates that of the major arcs (if all one is given about the minor arc sums is magnitude information), in contrast to “ternary” problems such as computing (2), in which it is the major arc contribution which is absolutely dominant.
  • (ii) Upper and lower bounds on the magnitude of {S(x,\alpha)} are not sufficient, by themselves, to obtain non-trivial bounds on (5), (6) unless these bounds are extremely tight (within a relative error of {O(1/\log x)} or better); but
  • (iii) obtaining such tight bounds is a problem of comparable difficulty to the original binary problems.

I will provide some justification for these conclusions below the fold; they are reasonably well known “folklore” to many researchers in the field, but it seems that they are rarely made explicit in the literature (in part because these arguments are, by their nature, heuristic instead of rigorous) and I have been asked about them from time to time, so I decided to try to write them down here.
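As a crude numerical illustration of point (i): the expected major arc main term for {f_2(x)} has size roughly {x/\log^2 x} (ignoring the singular series and all constants), while a triangle-inequality treatment of the minor arcs has to give up the entire {L^2} mass {\int |S(x,\alpha)|^2\, d\alpha = \pi(x) \approx x/\log x}, which is larger by a factor of about {\log x}. The following Python sketch, which is purely heuristic and drops all constants, just tabulates these two quantities.

```python
# Crude comparison for the binary problem: expected main term x/log^2 x versus
# the L^2 budget pi(x) ~ x/log x that the triangle inequality concedes to the
# minor arcs.  The singular series and all constants are deliberately ignored;
# only the growing ratio (of order log x) is the point.
from math import log

def prime_count(x: int) -> int:
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return sum(sieve)

for x in (10**4, 10**5, 10**6):
    main_term = x / log(x) ** 2   # heuristic size of the major arc contribution to f_2(x)
    l2_budget = prime_count(x)    # = integral of |S|^2, an upper bound for the minor arcs
    print(x, round(main_term), l2_budget, round(l2_budget / main_term, 1))
```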

In view of the above conclusions, it seems that the best one can hope to do by using the circle method for the twin prime or even Goldbach problems is to reformulate such problems into a statement of roughly comparable difficulty to the original problem, even if one assumes powerful conjectures such as the Generalised Riemann Hypothesis (which gives very precise control on major arc exponential sums, but not on minor arc ones). These are not rigorous conclusions – after all, we have already seen that one can always artificially insert the circle method into any viable approach on these problems – but they do strongly suggest that one needs a method other than the circle method in order to fully solve either of these two problems. I do not know what such a method would be, though I can give some heuristic objections to some of the other popular methods used in additive number theory (such as sieve methods, or more recently the use of inverse theorems); this will be done at the end of this post.


I’ve just uploaded to the arXiv my paper “Every odd number greater than 1 is the sum of at most five primes“, submitted to Mathematics of Computation. The main result of the paper is as stated in the title, and is in the spirit of (though significantly weaker than) the even Goldbach conjecture (every even natural number is the sum of at most two primes) and odd Goldbach conjecture (every odd natural number greater than 1 is the sum of at most three primes). It also improves on a result of Ramaré that every even natural number is the sum of at most six primes. This result had previously also been established by Kaniecki under the additional assumption of the Riemann hypothesis, so one can view the main result here as an unconditional version of Kaniecki’s result.

The method used is the Hardy-Littlewood circle method, which was for instance also used to prove Vinogradov’s theorem that every sufficiently large odd number is the sum of three primes. Let’s quickly recall how this argument works. It is convenient to use a proxy for the primes, such as the von Mangoldt function {\Lambda}, which is mostly supported on the primes. To represent a large number {x} as the sum of three primes, it suffices to obtain a good lower bound for the sum

\displaystyle  \sum_{n_1,n_2,n_3: n_1+n_2+n_3=x} \Lambda(n_1) \Lambda(n_2) \Lambda(n_3).

By Fourier analysis, one can rewrite this sum as an integral

\displaystyle  \int_{{\bf R}/{\bf Z}} S(x,\alpha)^3 e(-x\alpha)\ d\alpha

where

\displaystyle  S(x,\alpha) := \sum_{n \leq x} \Lambda(n) e(n\alpha)

and {e(\theta) :=e^{2\pi i \theta}}. To control this integral, one then needs good bounds on {S(x,\alpha)} for various values of {\alpha}. To do this, one first approximates {\alpha} by a rational {a/q} with controlled denominator {q} (using a tool such as the Dirichlet approximation theorem). The analysis then broadly bifurcates into the major arc case when {q} is small, and the minor arc case when {q} is large. In the major arc case, the problem more or less boils down to understanding sums such as

\displaystyle  \sum_{n\leq x} \Lambda(n) e(an/q),

which in turn is almost equivalent to understanding the prime number theorem in arithmetic progressions modulo {q}. In the minor arc case, the prime number theorem is not strong enough to give good bounds (unless one is using some extremely strong hypotheses, such as the generalised Riemann hypothesis), so instead one uses a rather different method, using truncated versions of divisor sum identities such as {\Lambda(n) =\sum_{d|n} \mu(d) \log\frac{n}{d}} to split {S(x,\alpha)} into a collection of linear and bilinear sums that are more tractable to bound, typical examples of which (after using a particularly simple truncated divisor sum identity known as Vaughan’s identity) include the “Type I sum”

\displaystyle \sum_{d \leq U} \mu(d) \sum_{n \leq x/d} \log(n) e(\alpha dn)

and the “Type II sum”

\displaystyle  \sum_{d > U} \sum_{w > V} \mu(d) (\sum_{b|w: b > V} \Lambda(b)) e(\alpha dw) 1_{dw \leq x}.

After using tools such as the triangle inequality or Cauchy-Schwarz inequality to eliminate arithmetic functions such as {\mu(d)} or {\sum_{b|w: b>V}\Lambda(b)}, one ends up controlling plain exponential sums such as {\sum_{V < w < x/d} e(\alpha dw)}, which can be efficiently controlled in the minor arc case.
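The divisor sum identity {\Lambda(n) = \sum_{d|n} \mu(d) \log\frac{n}{d}} that these truncations start from is easy to test numerically; the following short Python sketch (with naive implementations of {\mu} and {\Lambda}, for illustration only) verifies it for small {n}.

```python
# Verify the divisor sum identity Lambda(n) = sum_{d | n} mu(d) log(n/d) for small n.
from math import log, isclose

def mobius(n: int) -> int:
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

def von_mangoldt(n: int) -> float:
    # log p if n is a power of the prime p, and 0 otherwise
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return 0.0

for n in range(2, 200):
    lhs = von_mangoldt(n)
    rhs = sum(mobius(d) * log(n / d) for d in range(1, n + 1) if n % d == 0)
    assert isclose(lhs, rhs, abs_tol=1e-9), n
print("identity verified for 2 <= n < 200")
```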

This argument works well when {x} is extremely large, but starts running into problems for moderate sized {x}, e.g. {x \sim 10^{30}}. The first issue is that of logarithmic losses in the minor arc estimates. A typical minor arc estimate takes the shape

\displaystyle  |S(x,\alpha)| \ll (\frac{x}{\sqrt{q}}+\frac{x}{\sqrt{x/q}} + x^{4/5}) \log^3 x \ \ \ \ \ (1)

when {\alpha} is close to {a/q} for some {1\leq q\leq x}. This only improves upon the trivial estimate {|S(x,\alpha)| \ll x} from the prime number theorem when {\log^6 x \ll q \ll x/\log^6 x}. As a consequence, it becomes necessary to obtain an accurate prime number theorem in arithmetic progressions with modulus as large as {\log^6 x}. However, with current technology, the error terms in such theorems are quite poor (terms such as {O(\exp(-c\sqrt{\log x}) x)} for some small {c>0} are typical, and there is also a notorious “Siegel zero” problem), and as a consequence, the method is generally only applicable for very large {x}. For instance, the best explicit result of Vinogradov type known currently is due to Liu and Wang, who established that all odd numbers larger than {10^{1340}} are the sum of three odd primes. (However, on the assumption of the GRH, the full odd Goldbach conjecture is known to be true; this is a result of Deshouillers, Effinger, te Riele, and Zinoviev.)
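To get a concrete sense of the modulus threshold {\log^6 x} appearing above, here is a quick and purely illustrative computation of its size at a few values of {x}; already at {x \sim 10^{30}} one would need the prime number theorem in arithmetic progressions, with good error terms, for moduli of size about {10^{11}}, which is far beyond current unconditional technology.

```python
# Size of the modulus threshold log^6 x at a few ranges of x (purely illustrative).
from math import log

for exponent in (30, 50, 1340):
    logx = exponent * log(10)          # log of x = 10^exponent
    print(f"x = 10^{exponent}:  log^6 x is about {logx ** 6:.2e}")
# already at x ~ 10^30 the threshold is about 10^11, so one would need the prime
# number theorem in arithmetic progressions, with good error terms, for moduli
# of that size.
```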

In this paper, we make a number of refinements to the general scheme, each one of which is individually rather modest and not all that novel, but which when added together turn out to be enough to resolve the five primes problem (though many more ideas would still be needed to tackle the three primes problem, and as is well known the circle method is very unlikely to be the route to make progress on the two primes problem). The first refinement, which is only available in the five primes case, is to take advantage of the numerical verification of the even Goldbach conjecture up to some large {N_0} (we take {N_0=4\times 10^{14}}, using a verification of Richstein, although the conjecture has since been verified up to much larger values of {N_0}, as high as {2.6 \times 10^{18}}). As such, instead of trying to represent an odd number {x} as the sum of five primes, we can represent it as the sum of three odd primes and a natural number between {2} and {N_0}. This effectively brings us back to the three primes problem, but with the significant additional boost that one can essentially restrict the frequency variable {\alpha} to be of size {O(1/N_0)}. In practice, this eliminates all of the major arcs except for the principal arc around {0}. This is a significant simplification, in particular avoiding the need to deal with the prime number theorem in arithmetic progressions (and all the attendant theory of L-functions, Siegel zeroes, etc.).

In a similar spirit, by taking advantage of the numerical verification of the Riemann hypothesis up to some height {T_0}, and using the explicit formula relating the von Mangoldt function with the zeroes of the zeta function, one can safely deal with the principal major arc {\{ \alpha = O( T_0 / x ) \}}. For our specific application, we use the value {T_0= 3.29 \times 10^9}, arising from the verification of the Riemann hypothesis for the first {10^{10}} zeroes by van de Lune (unpublished) and Wedeniwski. (Such verifications have since been extended further, the latest being that the first {10^{13}} zeroes lie on the line.)

To make the contribution of the major arc as efficient as possible, we borrow an idea from a paper of Bourgain, and restrict one of the three primes in the three-primes problem to a somewhat shorter range than the other two (of size {O(x/K)} instead of {O(x)}, where we take {K} to be something like {10^3}), as this largely eliminates the “Archimedean” losses coming from trying to use Fourier methods to control convolutions on {{\bf R}}. In our paper, we set the scale parameter {K} to be {10^3} (basically, anything that is much larger than {1} but much less than {T_0} will work), but we found that an additional gain (which we ended up not using) could be obtained by averaging {K} over a range of scales, say between {10^3} and {10^6}. This sort of averaging could be a useful trick in future work on Goldbach-type problems.

It remains to treat the contribution of the “minor arc” {T_0/x \ll |\alpha| \ll 1/N_0}. To do this, one needs good {L^2} and {L^\infty} type estimates on the exponential sum {S(x,\alpha)}. Plancherel’s theorem gives an {L^2} estimate which loses a logarithmic factor, but it turns out that on this particular minor arc one can use tools from the theory of the large sieve (such as Montgomery’s uncertainty principle) to eliminate this logarithmic loss almost completely; it turns out that the most efficient way to do this is to use an effective upper bound of Siebert on the number of prime pairs {(p,p+h)} less than {x} to obtain an {L^2} bound that only loses a factor of {8} (or of {7}, once one cuts out the major arc).

For {L^\infty} estimates, it turns out that existing effective versions of (1) (in particular, the bound given by Chen and Wang) are insufficient, due to the three logarithmic factors of {\log x} in the bound. By using a smoothed out version {S_\eta(x,\alpha) :=\sum_{n}\Lambda(n) e(n\alpha) \eta(n/x)} of the sum {S(x,\alpha)}, for some suitable cutoff function {\eta}, one can save one factor of a logarithm, obtaining a bound of the form

\displaystyle  |S_\eta(x,\alpha)| \ll (\frac{x}{\sqrt{q}}+\frac{x}{\sqrt{x/q}} + x^{4/5}) \log^2 x

with effective constants. One can improve the constants further by restricting all summations to odd integers (which barely affects {S_\eta(x,\alpha)}, since {\Lambda} is mostly supported on odd numbers anyway), which in practice reduces the effective constants by a factor of two or so. One can also make further improvements in the constants by using the very sharp large sieve inequality to control the “Type II” sums that arise from Vaughan’s identity, and by using integration by parts to improve the bounds on the “Type I” sums. A final gain can then be extracted by optimising the cutoff parameters {U, V} appearing in Vaughan’s identity to minimise the contribution of the Type II sums (which, in practice, are the dominant term). Combining all these improvements, one ends up with bounds of the shape

\displaystyle  |S_\eta(x,\alpha)| \ll \frac{x}{q} \log^2 x + \frac{x}{\sqrt{q}} \log^2 q

when {q} is small (say {1 < q < x^{1/3}}) and

\displaystyle  |S_\eta(x,\alpha)| \ll \frac{x}{(x/q)^2} \log^2 x + \frac{x}{\sqrt{x/q}} \log^2(x/q)

when {q} is large (say {x^{2/3} < q < x}). (See the paper for more explicit versions of these estimates.) The point here is that the {\log x} factors have been partially replaced by smaller logarithmic factors such as {\log q} or {\log x/q}. Putting together all of these improvements, one can finally obtain a satisfactory bound on the minor arc. (There are still some terms with a {\log x} factor in them, but we use the effective Vinogradov theorem of Liu and Wang to upper bound {\log x} by {3100}, which ends up making the remaining terms involving {\log x} manageable.)
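As a minimal illustration of the smoothing device (and emphatically not the cutoff or the constants used in the actual paper), the following Python sketch forms a smoothed sum {S_\eta(x,\alpha)} with a simple compactly supported bump {\eta} and compares its magnitude with the sharp-cutoff sum at a sample frequency; the particular bump and parameters here are placeholders chosen only for illustration.

```python
# A toy smoothed exponential sum S_eta(x, alpha) with a simple bump function eta.
# The cutoff and parameters below are placeholders, not those used in the paper.
import cmath
from math import log, exp

def eta(t: float) -> float:
    # a smooth bump supported on (0, 1)
    return exp(-1.0 / (t * (1.0 - t))) if 0.0 < t < 1.0 else 0.0

def von_mangoldt(n: int) -> float:
    # log p if n is a power of the prime p, and 0 otherwise
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return 0.0

def S_sharp(x: int, alpha: float) -> complex:
    # sharp cutoff: sum of Lambda(n) e(n alpha) over n <= x
    return sum(von_mangoldt(n) * cmath.exp(2j * cmath.pi * alpha * n)
               for n in range(1, x + 1))

def S_smooth(x: int, alpha: float) -> complex:
    # smoothed cutoff: sum of Lambda(n) e(n alpha) eta(n/x)
    return sum(von_mangoldt(n) * cmath.exp(2j * cmath.pi * alpha * n) * eta(n / x)
               for n in range(1, x + 1))

x, alpha = 5000, 0.123456789   # a "generic" frequency, for illustration only
print(abs(S_sharp(x, alpha)), abs(S_smooth(x, alpha)))
```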
