
Many problems and results in analytic prime number theory can be formulated in the following general form: given a collection of (affine-)linear forms {L_1(n),\dots,L_k(n)}, none of which is a multiple of any other, find a number {n} such that a certain property {P( L_1(n),\dots,L_k(n) )} of the linear forms {L_1(n),\dots,L_k(n)} is true. For instance:

  • For the twin prime conjecture, one can use the linear forms {L_1(n) := n}, {L_2(n) := n+2}, and the property {P( L_1(n), L_2(n) )} in question is the assertion that {L_1(n)} and {L_2(n)} are both prime.
  • For the even Goldbach conjecture, the claim is similar but one uses the linear forms {L_1(n) := n}, {L_2(n) := N-n} for some even integer {N}.
  • For Chen’s theorem, we use the same linear forms {L_1(n),L_2(n)} as in the previous two cases, but now {P(L_1(n), L_2(n))} is the assertion that {L_1(n)} is prime and {L_2(n)} is an almost prime (in the sense that it has at most two prime factors, counting multiplicity).
  • In the recent results establishing bounded gaps between primes, we use the linear forms {L_i(n) = n + h_i} for some admissible tuple {h_1,\dots,h_k}, and take {P(L_1(n),\dots,L_k(n))} to be the assertion that at least two of {L_1(n),\dots,L_k(n)} are prime.

For these sorts of results, one can try a sieve-theoretic approach, which can broadly be formulated as follows:

  1. First, one chooses a carefully selected sieve weight {\nu: {\bf N} \rightarrow {\bf R}^+}, which could for instance be a non-negative function having a divisor sum form

    \displaystyle  \nu(n) := \sum_{d_1|L_1(n), \dots, d_k|L_k(n); d_1 \dots d_k \leq x^{1-\varepsilon}} \lambda_{d_1,\dots,d_k}

    for some coefficients {\lambda_{d_1,\dots,d_k}}, where {x} is a natural scale parameter. The precise choice of sieve weight is often quite a delicate matter, but will not be discussed here. (In some cases, one may work with multiple sieve weights {\nu_1, \nu_2, \dots}.) A toy numerical instance of such a divisor-sum weight is sketched just after this list.

  2. Next, one uses tools from analytic number theory (such as the Bombieri-Vinogradov theorem) to obtain upper and lower bounds for sums such as

    \displaystyle  \sum_n \nu(n) \ \ \ \ \ (1)

    or

    \displaystyle  \sum_n \nu(n) 1_{L_i(n) \hbox{ prime}} \ \ \ \ \ (2)

    or more generally of the form

    \displaystyle  \sum_n \nu(n) f(L_i(n)) \ \ \ \ \ (3)

    where {f} is some “arithmetic” function involving the prime factorisation of its argument {L_i(n)} (we will be a bit vague about what this means precisely, but a typical choice of {f} might be a Dirichlet convolution {\alpha*\beta} of two other arithmetic functions {\alpha,\beta}).

  3. Using some combinatorial arguments, one manipulates these upper and lower bounds, together with the non-negative nature of {\nu}, to conclude the existence of an {n} in the support of {\nu} (or of at least one of the sieve weights {\nu_1, \nu_2, \dots} being considered) for which {P( L_1(n), \dots, L_k(n) )} holds.
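
To make Steps 1 and 2 concrete, here is a minimal numerical sketch (not the actual weights used in the bounded-gaps literature, which are chosen far more carefully) of a GPY/Selberg-type divisor-sum weight: the coefficients {\lambda_{d_1,\dots,d_k}} arise from expanding the square of a truncated divisor sum with {\lambda_d \approx \mu(d) (\log(R/d)/\log R)^k}, and the script then evaluates toy analogues of the sums (1) and (2). The tuple, scale {x} and level {R} below are arbitrary illustrative choices.

```python
# Toy GPY/Selberg-type divisor-sum sieve weight (illustrative choices only).
from math import log
from sympy import factorint, isprime

H = (0, 2, 6)          # an admissible tuple (toy choice)
x = 10000              # scale parameter (toy choice)
R = 60                 # truncation level of the divisor sum (toy choice)
k = len(H)

def mobius(d):
    f = factorint(d)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

mu = {d: mobius(d) for d in range(1, R + 1)}   # precomputed Mobius values

def nu(n):
    """nu(n) = ( sum_{d <= R, d | (n+h_1)...(n+h_k)} mu(d) (log(R/d)/log R)^k )^2 >= 0."""
    P = 1
    for h in H:
        P *= n + h
    s = sum(mu[d] * (log(R / d) / log(R)) ** k for d in range(1, R + 1) if P % d == 0)
    return s * s

nus = {n: nu(n) for n in range(x, 2 * x)}
total = sum(nus.values())                                              # toy analogue of (1)
primed = {h: sum(v for n, v in nus.items() if isprime(n + h)) for h in H}  # toy analogues of (2)

print("sum_n nu(n)               :", round(total, 1))
for h in H:
    print(f"sum_n nu(n) 1_(n+{h} prime):", round(primed[h], 1))
```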

For instance, in the recent results on bounded gaps between primes, one selects a sieve weight {\nu} for which one has upper bounds on

\displaystyle  \sum_n \nu(n)

and lower bounds on

\displaystyle  \sum_n \nu(n) 1_{n+h_i \hbox{ prime}}

so that one can show that the expression

\displaystyle  \sum_n \nu(n) (\sum_{i=1}^k 1_{n+h_i \hbox{ prime}} - 1)

is strictly positive, which implies the existence of an {n} in the support of {\nu} such that at least two of {n+h_1,\dots,n+h_k} are prime. As another example, to prove Chen’s theorem to find {n} such that {L_1(n)} is prime and {L_2(n)} is almost prime, one uses a variety of sieve weights to produce a lower bound for

\displaystyle  S_1 := \sum_{n \leq x} 1_{L_1(n) \hbox{ prime}} 1_{L_2(n) \hbox{ rough}}

and an upper bound for

\displaystyle  S_2 := \sum_{z \leq p < x^{1/3}} \sum_{n \leq x} 1_{L_1(n) \hbox{ prime}} 1_{p|L_2(n)} 1_{L_2(n) \hbox{ rough}}

and

\displaystyle  S_3 := \sum_{n \leq x} 1_{L_1(n) \hbox{ prime}} 1_{L_2(n)=pqr \hbox{ for some } z \leq p \leq x^{1/3} < q \leq r},

where {z} is some parameter between {1} and {x^{1/3}}, and “rough” means that all prime factors are at least {z}. One can observe that if {S_1 - \frac{1}{2} S_2 - \frac{1}{2} S_3 > 0}, then there must be at least one {n} for which {L_1(n)} is prime and {L_2(n)} is almost prime, since for any rough number {m}, the quantity

\displaystyle  1 - \frac{1}{2} \sum_{z \leq p < x^{1/3}} 1_{p|m} - \frac{1}{2} \sum_{z \leq p \leq x^{1/3} < q \leq r} 1_{m = pqr}

is only positive when {m} is an almost prime (if a rough number {m \leq x} has three or more prime factors, then either it has at least two prime factors less than {x^{1/3}}, or it is of the form {pqr} for some {p \leq x^{1/3} < q \leq r}). The upper and lower bounds on {S_1,S_2,S_3} are ultimately produced via asymptotics for expressions of the form (1), (2), (3) for various divisor sums {\nu} and various arithmetic functions {f}.
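
As a quick sanity check of this combinatorial step, the following sketch brute-forces the displayed weight over squarefree rough {m} up to a toy scale; the parameters are arbitrary small choices, and the non-squarefree case (which the weight as stated does not cover) is, roughly speaking, disposed of separately in the actual argument since numbers divisible by {p^2} for some {p \geq z} are rare.

```python
# Brute-force check: the weight 1 - (1/2) sum_{z<=p<x^(1/3)} 1_{p|m}
#                               - (1/2) 1_{m = pqr, z <= p <= x^(1/3) < q <= r}
# is positive on squarefree rough m <= x only when m is an almost prime.
from sympy import factorint

X = 10000                  # toy scale ("x")
x3 = X ** (1.0 / 3.0)      # x^(1/3)
z = 5                      # toy choice of the parameter z

for m in range(2, X + 1):
    f = factorint(m)
    primes = sorted(f)
    if any(e > 1 for e in f.values()):      # restrict to squarefree m
        continue
    if primes[0] < z:                       # restrict to rough m
        continue
    small = sum(1 for p in primes if z <= p < x3)
    pqr = 1 if (len(primes) == 3 and primes[0] <= x3 < primes[1] <= primes[2]) else 0
    w = 1 - 0.5 * small - 0.5 * pqr
    if w > 0:
        # positivity of the weight should certify an almost prime
        assert len(primes) <= 2, (m, primes)
print("check passed: positive weight only occurred for almost primes")
```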

Unfortunately, there is an obstruction to sieve-theoretic techniques working for certain types of properties {P(L_1(n),\dots,L_k(n))}, which Zeb Brady and I formalised at an AIM workshop this week. To state the result, we recall the Liouville function {\lambda(n)}, defined by setting {\lambda(n) = (-1)^j} whenever {n} is the product of exactly {j} primes (counting multiplicity). Define a sign pattern to be an element {(\epsilon_1,\dots,\epsilon_k)} of the discrete cube {\{-1,+1\}^k}. Given a property {P(l_1,\dots,l_k)} of {k} natural numbers {l_1,\dots,l_k}, we say that a sign pattern {(\epsilon_1,\dots,\epsilon_k)} is forbidden by {P} if there do not exist natural numbers {l_1,\dots,l_k} obeying {P(l_1,\dots,l_k)} for which

\displaystyle  (\lambda(l_1),\dots,\lambda(l_k)) = (\epsilon_1,\dots,\epsilon_k).

Example 1 Let {P(l_1,l_2,l_3)} be the property that at least two of {l_1,l_2,l_3} are prime. Then the sign patterns {(+1,+1,+1)}, {(+1,+1,-1)}, {(+1,-1,+1)}, {(-1,+1,+1)} are forbidden, because prime numbers have a Liouville function of {-1}, so that {P(l_1,l_2,l_3)} can only occur when at least two of {\lambda(l_1),\lambda(l_2), \lambda(l_3)} are equal to {-1}.

Example 2 Let {P(l_1,l_2)} be the property that {l_1} is prime and {l_2} is almost prime. Then the only forbidden sign patterns are {(+1,+1)} and {(+1,-1)}.

Example 3 Let {P(l_1,l_2)} be the property that {l_1} and {l_2} are both prime. Then {(+1,+1), (+1,-1), (-1,+1)} are all forbidden sign patterns.
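
To see these sign patterns concretely, here is a short sketch (with a toy search bound) that computes the Liouville function by factorisation and brute-forces which sign patterns are attainable for the property of Example 1; the four patterns that never occur in the search are exactly the forbidden ones listed above, which in this case can of course also be read off directly from {\lambda(p) = -1}.

```python
# Brute-force the attainable Liouville sign patterns for the property
# "at least two of l1, l2, l3 are prime" (Example 1).
from itertools import product
from sympy import factorint, isprime

def liouville(n):
    """lambda(n) = (-1)^(number of prime factors of n, counted with multiplicity)."""
    return (-1) ** sum(factorint(n).values())

N = 60                                     # toy search bound
lam = {n: liouville(n) for n in range(2, N)}

attained = set()
for l1, l2, l3 in product(range(2, N), repeat=3):
    if sum(map(isprime, (l1, l2, l3))) >= 2:          # the property P
        attained.add((lam[l1], lam[l2], lam[l3]))

forbidden = set(product((1, -1), repeat=3)) - attained
print("forbidden sign patterns:", sorted(forbidden))
# the four patterns listed in Example 1 should appear here
```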

We then have a parity obstruction as soon as {P} has “too many” forbidden sign patterns, in the following (slightly informal) sense:

Claim 1 (Parity obstruction) Suppose {P(l_1,\dots,l_k)} is such that the convex hull of the forbidden sign patterns of {P} contains the origin. Then one cannot use the above sieve-theoretic approach to establish the existence of an {n} such that {P(L_1(n),\dots,L_k(n))} holds.

Thus for instance, the property in Example 3 is subject to the parity obstruction since {0} is a convex combination of {(+1,-1)} and {(-1,+1)}, whereas the properties in Examples 1, 2 are not. One can also check that the property “at least {j} of the {k} numbers {l_1,\dots,l_k} are prime” is subject to the parity obstruction as soon as {j \geq \frac{k}{2}+1}. Thus, the largest number of elements of a {k}-tuple that one can force to be prime by purely sieve-theoretic methods is {k/2}, rounded up.
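
The “contains the origin” condition in Claim 1 is easy to test mechanically: the origin lies in the convex hull of a finite set of sign patterns if and only if a small linear program is feasible. A minimal sketch (using scipy, with the forbidden pattern sets of Examples 1 and 3 hard-coded) follows.

```python
# Test whether the origin lies in the convex hull of a finite set of sign patterns,
# i.e. whether there exist p_e >= 0 summing to 1 with sum_e p_e * e = 0.
import numpy as np
from scipy.optimize import linprog

def origin_in_hull(patterns):
    V = np.array(patterns, dtype=float)          # m patterns, each of length k
    m, k = V.shape
    A_eq = np.vstack([V.T, np.ones((1, m))])     # k coordinate constraints plus sum-to-one
    b_eq = np.concatenate([np.zeros(k), [1.0]])
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * m)
    return res.success

# Example 1: "at least two of three are prime" -- no parity obstruction
ex1 = [(1, 1, 1), (1, 1, -1), (1, -1, 1), (-1, 1, 1)]
# Example 3: "both are prime" -- obstructed, since 0 = (1/2)(+1,-1) + (1/2)(-1,+1)
ex3 = [(1, 1), (1, -1), (-1, 1)]

print("Example 1 obstructed:", origin_in_hull(ex1))   # expected: False
print("Example 3 obstructed:", origin_in_hull(ex3))   # expected: True
```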

This claim is not precisely a theorem, because it presumes a certain “Liouville pseudorandomness conjecture” (a very close cousin of the more well known “Möbius pseudorandomness conjecture”) which is a bit difficult to formalise precisely. However, this conjecture is widely believed by analytic number theorists, see e.g. this blog post for a discussion. (Note though that there are scenarios, most notably the “Siegel zero” scenario, in which there is a severe breakdown of this pseudorandomness conjecture, and the parity obstruction then disappears. A typical instance of this is Heath-Brown’s proof of the twin prime conjecture (which would ordinarily be subject to the parity obstruction) under the hypothesis of a Siegel zero.) The obstruction also does not prevent the establishment of an {n} such that {P(L_1(n),\dots,L_k(n))} holds by introducing additional sieve axioms beyond upper and lower bounds on quantities such as (1), (2), (3). The proof of the Friedlander-Iwaniec theorem is a good example of this latter scenario.

Now we give a (slightly nonrigorous) proof of the claim.

Proof: (Nonrigorous) Suppose that the convex hull of the forbidden sign patterns contains the origin. Then we can find non-negative numbers {p_{\epsilon_1,\dots,\epsilon_k}} for sign patterns {(\epsilon_1,\dots,\epsilon_k)}, which sum to {1}, are non-zero only for forbidden sign patterns, and which have mean zero in the sense that

\displaystyle  \sum_{(\epsilon_1,\dots,\epsilon_k)} p_{\epsilon_1,\dots,\epsilon_k} \epsilon_i = 0

for all {i=1,\dots,k}. By Fourier expansion (or Lagrange interpolation), one can then write {p_{\epsilon_1,\dots,\epsilon_k}} as a polynomial

\displaystyle  p_{\epsilon_1,\dots,\epsilon_k} = \frac{1}{2^k}( 1 + Q( \epsilon_1,\dots,\epsilon_k) )

where {Q(t_1,\dots,t_k)} is a polynomial in {k} variables that is a linear combination of monomials {t_{i_1} \dots t_{i_r}} with {i_1 < \dots < i_r} and {r \geq 2} (thus {Q} has no constant or linear terms, and no monomials with repeated terms). The point is that the mean zero condition allows one to eliminate the linear terms. If we now consider the weight function

\displaystyle  w(n) := 1 + Q( \lambda(L_1(n)), \dots, \lambda(L_k(n)) )

then {w} is non-negative, is supported solely on {n} for which {(\lambda(L_1(n)),\dots,\lambda(L_k(n)))} is a forbidden pattern, and is equal to {1} plus a linear combination of monomials {\lambda(L_{i_1}(n)) \dots \lambda(L_{i_r}(n))} with {r \geq 2}.
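
The Fourier expansion step is completely mechanical; the following sketch computes the multilinear (Walsh) expansion of {2^k p_{\epsilon_1,\dots,\epsilon_k}} for the two-variable case coming from Example 3 (taking {p_{+1,-1} = p_{-1,+1} = 1/2}), recovering {Q(\epsilon_1,\epsilon_2) = -\epsilon_1 \epsilon_2} and hence, for the linear forms {L_1(n)=n}, {L_2(n)=n+2}, the weight {w(n) = 1 - \lambda(n)\lambda(n+2)} that vanishes on twin primes.

```python
# Walsh-Fourier expansion of a probability distribution p on {-1,+1}^k:
#   p(eps) = sum_S phat(S) prod_{i in S} eps_i,   phat(S) = 2^{-k} sum_eps p(eps) chi_S(eps).
# With zero linear moments, 2^k p(eps) = 1 + Q(eps) with Q containing only degree >= 2 terms.
from itertools import combinations, product

k = 2
p = {(1, -1): 0.5, (-1, 1): 0.5}   # forbidden patterns of Example 3, equally weighted

def chi(S, eps):
    out = 1
    for i in S:
        out *= eps[i]
    return out

coeffs = {}
for r in range(k + 1):
    for S in combinations(range(k), r):
        coeffs[S] = sum(p.get(eps, 0.0) * chi(S, eps)
                        for eps in product((1, -1), repeat=k)) / 2 ** k

print("constant term of 2^k p:", 2 ** k * coeffs[()])                        # should be 1
print("linear terms of 2^k p :", [2 ** k * coeffs[(i,)] for i in range(k)])  # should be 0
Q = {S: 2 ** k * c for S, c in coeffs.items() if len(S) >= 2 and abs(c) > 1e-12}
print("Q =", Q)                                                              # {(0, 1): -1.0}

# sanity check: w(eps) = 1 + Q(eps) = 2^k p(eps) is non-negative and supported on forbidden patterns
for eps in product((1, -1), repeat=k):
    w = 1 + sum(c * chi(S, eps) for S, c in Q.items())
    assert abs(w - 2 ** k * p.get(eps, 0.0)) < 1e-9 and w >= -1e-9
```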

The Liouville pseudorandomness principle then predicts that sums of the form

\displaystyle  \sum_n \nu(n) Q( \lambda(L_1(n)), \dots, \lambda(L_k(n)) )

and

\displaystyle  \sum_n \nu(n) Q( \lambda(L_1(n)), \dots, \lambda(L_k(n)) ) 1_{L_i(n) \hbox{ prime}}

or more generally

\displaystyle  \sum_n \nu(n) Q( \lambda(L_1(n)), \dots, \lambda(L_k(n)) ) f(L_i(n))

should be asymptotically negligible; intuitively, the point here is that the prime factorisation of {L_i(n)} should not influence the Liouville function of {L_j(n)}, even on the short arithmetic progressions that the divisor sum {\nu} is built out of, and so any monomial {\lambda(L_{i_1}(n)) \dots \lambda(L_{i_r}(n))} occurring in {Q( \lambda(L_1(n)), \dots, \lambda(L_k(n)) )} should exhibit strong cancellation for any of the above sums. If one accepts this principle, then all the expressions (1), (2), (3) should be essentially unchanged when {\nu(n)} is replaced by {\nu(n) w(n)}.

Suppose now for sake of contradiction that one could use sieve-theoretic methods to locate an {n} in the support of some sieve weight {\nu(n)} obeying {P( L_1(n),\dots,L_k(n))}. Then, by reweighting all sieve weights by the additional multiplicative factor of {w(n)}, the same arguments should also be able to locate {n} in the support of {\nu(n) w(n)} for which {P( L_1(n),\dots,L_k(n))} holds. But {w} is only supported on those {n} whose Liouville sign pattern is forbidden, a contradiction. \Box

Claim 1 is sharp in the following sense: if the convex hull of the forbidden sign patterns of {P} does not contain the origin, then by the Hahn-Banach theorem (in the hyperplane separation form), there exist real coefficients {c_1,\dots,c_k} such that

\displaystyle  c_1 \epsilon_1 + \dots + c_k \epsilon_k < -c

for all forbidden sign patterns {(\epsilon_1,\dots,\epsilon_k)} and some {c>0}. On the other hand, from Liouville pseudorandomness one expects that

\displaystyle  \sum_n \nu(n) (c_1 \lambda(L_1(n)) + \dots + c_k \lambda(L_k(n)))

is negligible (as compared against {\sum_n \nu(n)}) for any reasonable sieve weight {\nu}. We conclude that, for some {n} in the support of {\nu},

\displaystyle  c_1 \lambda(L_1(n)) + \dots + c_k \lambda(L_k(n)) > -c \ \ \ \ \ (4)

and hence {(\lambda(L_1(n)),\dots,\lambda(L_k(n)))} is not a forbidden sign pattern. This does not actually imply that {P(L_1(n),\dots,L_k(n))} holds, but it does show that parity considerations alone cannot rule out {P(L_1(n),\dots,L_k(n))} from holding. Thus, we do not expect a parity obstruction of the type in Claim 1 to hold when the convex hull of forbidden sign patterns does not contain the origin.
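
For instance, for the property in Example 2 the forbidden patterns are {(+1,+1)} and {(+1,-1)}, and one can take {c_1 = -1}, {c_2 = 0} and {c = 1/2}: both forbidden patterns have {c_1 \epsilon_1 + c_2 \epsilon_2 = -1 < -1/2}, and the conclusion (4) then simply asserts that {\lambda(L_1(n)) = -1} for some {n} in the support of {\nu}, which is consistent with (though of course far from implying) {L_1(n)} being prime.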

Example 4 Let {G} be a graph on {k} vertices {\{1,\dots,k\}}, and let {P(l_1,\dots,l_k)} be the property that one can find an edge {\{i,j\}} of {G} with {l_i,l_j} both prime. We claim that this property is subject to the parity problem precisely when {G} is two-colourable. Indeed, if {G} is two-colourable, then we can colour {\{1,\dots,k\}} into two colours (say, red and green) such that all edges in {G} connect a red vertex to a green vertex. If we then consider the two sign patterns in which all the red vertices have one sign and the green vertices have the opposite sign, these are two forbidden sign patterns which contain the origin in the convex hull, and so the parity problem applies. Conversely, suppose that {G} is not two-colourable, then it contains an odd cycle. Any forbidden sign pattern then must contain more {+1}s on this odd cycle than {-1}s (since otherwise two of the {-1}s are adjacent on this cycle by the pigeonhole principle, and this is not forbidden), and so by convexity any tuple in the convex hull of this sign pattern has a positive sum on this odd cycle. Hence the origin is not in the convex hull, and the parity obstruction does not apply. (See also this previous post for a similar obstruction ultimately coming from two-colourability).

Example 5 An example of a parity-obstructed property (supplied by Zeb Brady) that does not come from two-colourability: we let {P( l_{\{1,2\}}, l_{\{1,3\}}, l_{\{1,4\}}, l_{\{2,3\}}, l_{\{2,4\}}, l_{\{3,4\}} )} be the property that {l_{A_1},\dots,l_{A_r}} are prime for some collection {A_1,\dots,A_r} of pair sets that cover {\{1,\dots,4\}}. For instance, this property holds if {l_{\{1,2\}}, l_{\{3,4\}}} are both prime, or if {l_{\{1,2\}}, l_{\{1,3\}}, l_{\{1,4\}}} are all prime, but not if {l_{\{1,2\}}, l_{\{1,3\}}, l_{\{2,3\}}} are the only primes. An example of a forbidden sign pattern is the pattern where {\{1,2\}, \{2,3\}, \{1,3\}} are given the sign {-1}, and the other three pairs are given {+1}. Averaging over permutations of {1,2,3,4} we see that zero lies in the convex hull, and so this example is blocked by parity. However, there is no sign pattern such that it and its negation are both forbidden, which is another formulation of two-colourability.
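
The claims in this example are small enough to verify by brute force; a minimal sketch follows, using the observation that a sign pattern here is forbidden exactly when the pairs carrying {-1} fail to cover {\{1,2,3,4\}} (primes must sit at {-1} positions, and any collection of {-1} positions can be realised by primes).

```python
# Brute-force check of Example 5: the pair-cover property on {1,2,3,4}.
from itertools import combinations, permutations, product

pairs = list(combinations(range(1, 5), 2))           # the six index pairs, in a fixed order

def forbidden(pattern):
    covered = set()
    for pr, s in zip(pairs, pattern):
        if s == -1:
            covered.update(pr)
    return covered != {1, 2, 3, 4}

# (a) the specific pattern from the text: -1 on {1,2},{1,3},{2,3}, +1 on the other pairs
pat = tuple(-1 if set(pr) <= {1, 2, 3} else 1 for pr in pairs)
print("pattern", dict(zip(pairs, pat)), "is forbidden:", forbidden(pat))

# (b) averaging the pattern over all permutations of {1,2,3,4} gives the zero vector
def permute(pattern, sigma):
    vals = dict(zip(pairs, pattern))
    relabel = {pr: tuple(sorted(sigma[i - 1] for i in pr)) for pr in pairs}
    return tuple(vals[relabel[pr]] for pr in pairs)

orbit = [permute(pat, sigma) for sigma in permutations(range(1, 5))]
avg = [sum(q[i] for q in orbit) / len(orbit) for i in range(6)]
print("orbit average:", avg)                          # expected: all zeros

# (c) no sign pattern has both itself and its negation forbidden
both = [q for q in product((1, -1), repeat=6)
        if forbidden(q) and forbidden(tuple(-s for s in q))]
print("patterns forbidden together with their negation:", both)   # expected: []
```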

Of course, the absence of a parity obstruction does not automatically mean that the desired claim is true. For instance, given an admissible {5}-tuple {h_1,\dots,h_5}, parity obstructions do not prevent one from establishing the existence of infinitely many {n} such that at least three of {n+h_1,\dots,n+h_5} are prime; however, we are not yet able to actually establish this, even assuming strong sieve-theoretic hypotheses such as the generalised Elliott-Halberstam hypothesis. (However, the argument giving (4) does easily give the far weaker claim that there exist infinitely many {n} such that at least three of {n+h_1,\dots,n+h_5} have a Liouville function of {-1}.)

Remark 1 Another way to get past the parity problem in some cases is to take advantage of linear forms that are constant multiples of each other (which correlates the Liouville functions to each other). For instance, on GEH we can find two {E_3} numbers (products of exactly three primes) that differ by exactly {60}; a direct sieve approach using the linear forms {n,n+60} fails due to the parity obstruction, but instead one can first find {n} such that two of {n,n+4,n+10} are prime, and then among the pairs of linear forms {(15n,15n+60)}, {(6n,6n+60)}, {(10n+40,10n+100)} one can find a pair of {E_3} numbers that differ by exactly {60}. See this paper of Goldston, Graham, Pintz, and Yildirim for more examples of this type.
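
The purely combinatorial step in this remark is easy to illustrate numerically (the sieve-theoretic input, namely producing infinitely many {n} with two of {n,n+4,n+10} prime, is where GEH is needed); here is a quick sketch with a toy search bound.

```python
# From n with two of n, n+4, n+10 prime, manufacture a pair of E_3 numbers differing by 60.
from sympy import factorint, isprime

def big_omega(m):
    return sum(factorint(m).values())     # number of prime factors counted with multiplicity

found = []
for n in range(7, 2000):
    if isprime(n) and isprime(n + 4):
        pair = (15 * n, 15 * n + 60)          # 3*5*n and 3*5*(n+4)
    elif isprime(n) and isprime(n + 10):
        pair = (6 * n, 6 * n + 60)            # 2*3*n and 2*3*(n+10)
    elif isprime(n + 4) and isprime(n + 10):
        pair = (10 * n + 40, 10 * n + 100)    # 2*5*(n+4) and 2*5*(n+10)
    else:
        continue
    assert pair[1] - pair[0] == 60 and big_omega(pair[0]) == big_omega(pair[1]) == 3
    found.append(pair)

print(found[:5])   # e.g. (105, 165) = (3*5*7, 3*5*11), coming from n = 7
```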

I thank John Friedlander and Sid Graham for helpful discussions and encouragement.

Two of the most famous open problems in additive prime number theory are the twin prime conjecture and the binary Goldbach conjecture. They have quite similar forms:

  • Twin prime conjecture The equation {p_1 - p_2 = 2} has infinitely many solutions with {p_1,p_2} prime.
  • Binary Goldbach conjecture The equation {p_1 + p_2 = N} has at least one solution with {p_1,p_2} prime for any given even {N \geq 4}.

In view of this similarity, it is not surprising that the partial results on these two conjectures have tracked each other fairly closely; the twin prime conjecture is generally considered slightly easier than the binary Goldbach conjecture, but broadly speaking any progress made on one of the conjectures has also led to a comparable amount of progress on the other. (For instance, Chen’s theorem has a version for the twin prime conjecture, and a version for the binary Goldbach conjecture.) Also, the notorious parity obstruction is present in both problems, preventing a solution to either conjecture by almost all known methods (see this previous blog post for more discussion).

In this post, I would like to note a divergence from this general principle, with regards to bounded error versions of these two conjectures:

  • Twin prime with bounded error The inequalities {0 < p_1 - p_2 < H} have infinitely many solutions with {p_1,p_2} prime for some absolute constant {H}.
  • Binary Goldbach with bounded error The inequalities {N \leq p_1+p_2 \leq N+H} have at least one solution with {p_1,p_2} prime for any sufficiently large {N} and some absolute constant {H}.

The first of these statements is now a well-known theorem of Zhang, and the Polymath8b project hosted on this blog has managed to lower {H} to {H=246} unconditionally, and to {H=6} assuming the generalised Elliott-Halberstam conjecture. However, the second statement remains open; the best result that the Polymath8b project could manage in this direction is that (assuming GEH) at least one of the binary Goldbach conjecture with bounded error, or the twin prime conjecture with no error, must be true.

All the known proofs of Zhang’s theorem proceed through sieve-theoretic means. Basically, they take as input equidistribution results that control the size of discrepancies such as

\displaystyle  \Delta(f; a\ (q)) := \sum_{x \leq n \leq 2x; n=a\ (q)} f(n) - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x} f(n) \ \ \ \ \ (1)

for various congruence classes {a\ (q)} and various arithmetic functions {f}, e.g. {f(n) = \Lambda(n+h_i)} (or more generally {f(n) = \alpha * \beta(n+h_i)} for various {\alpha,\beta}). After taking some carefully chosen linear combinations of these discrepancies, and using the trivial positivity lower bound

\displaystyle  a_n \geq 0 \hbox{ for all } n \implies \sum_n a_n \geq 0 \ \ \ \ \ (2)

one eventually obtains (for suitable {H}) a non-trivial lower bound of the form

\displaystyle  \sum_{x \leq n \leq 2x} \nu(n) 1_A(n) > 0

where {\nu} is some weight function, and {A} is the set of {n} such that there are at least two primes in the interval {[n,n+H]}. This implies at least one solution to the inequalities {0 < p_1 - p_2 < H} with {p_1,p_2 \sim x}, and Zhang’s theorem follows.
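
For orientation, here is a toy numerical sketch of the discrepancy (1) with {f = \Lambda} (the von Mangoldt function, computed directly from factorisations); the scale, modulus and residue class below are arbitrary small choices, nowhere near the ranges relevant to Bombieri-Vinogradov or Zhang-type equidistribution theorems.

```python
# Toy computation of the discrepancy (1) for f = Lambda (von Mangoldt function).
from math import log
from sympy import factorint, totient

def von_mangoldt(n):
    f = factorint(n)
    return log(next(iter(f))) if len(f) == 1 else 0.0   # log p if n = p^k, else 0

x, q, a = 50000, 7, 3        # toy scale, modulus and residue class (gcd(a, q) = 1)
phi = int(totient(q))

full = sum(von_mangoldt(n) for n in range(x, 2 * x + 1))
ap   = sum(von_mangoldt(n) for n in range(x, 2 * x + 1) if n % q == a)
delta = ap - full / phi

print("sum over n = a (q) :", round(ap, 2))
print("expected main term :", round(full / phi, 2))
print("discrepancy Delta  :", round(delta, 2))
```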

In a similar vein, one could hope to use bounds on discrepancies such as (1) (for {x} comparable to {N}), together with the trivial lower bound (2), to obtain (for sufficiently large {N}, and suitable {H}) a non-trivial lower bound of the form

\displaystyle  \sum_{n \leq N} \nu(n) 1_B(n) > 0 \ \ \ \ \ (3)

for some weight function {\nu}, where {B} is the set of {n} such that there is at least one prime in each of the intervals {[n,n+H]} and {[N-n-H,N-n]}. This would imply the binary Goldbach conjecture with bounded error.

However, the parity obstruction blocks such a strategy from working (for much the same reason that it blocks any bound of the form {H \leq 4} in Zhang’s theorem, as discussed in the Polymath8b paper). The reason is as follows. The sieve-theoretic arguments are linear with respect to the {n} summation, and as such, any such sieve-theoretic argument would automatically also work in a weighted setting in which the {n} summation is weighted by some non-negative weight {\omega(n) \geq 0}. More precisely, if one could control the weighted discrepancies

\displaystyle  \Delta(f\omega; a\ (q)) = \sum_{x \leq n \leq 2x; n=a\ (q)} f(n) \omega(n) - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x} f(n) \omega(n)

to essentially the same accuracy as the unweighted discrepancies (1), then thanks to the trivial weighted version

\displaystyle  a_n \geq 0 \hbox{ for all } n \implies \sum_n a_n \omega(n) \geq 0

of (2), any sieve-theoretic argument that was capable of proving (3) would also be capable of proving the weighted estimate

\displaystyle  \sum_{n \leq N} \nu(n) 1_B(n) \omega(n) > 0. \ \ \ \ \ (4)

However, (4) may be defeated by a suitable choice of weight {\omega}, namely

\displaystyle  \omega(n) := \prod_{i=1}^H (1 + \lambda(n) \lambda(n+i)) \times \prod_{j=0}^H (1 - \lambda(n) \lambda(N-n-j))

where {n \mapsto \lambda(n)} is the Liouville function, which records the parity of the number of prime factors of a given number {n}. Since {\lambda(n)^2 = 1}, one can expand out {\omega(n)} as the sum of {1} and a finite number of other terms, each of which consists of the product of two or more translates (or reflections) of {\lambda}. But from the Möbius randomness principle (or its analogue for the Liouville function), such products of {\lambda} are widely expected to be essentially orthogonal to any arithmetic function {f(n)} arising from a single multiplicative function, such as {\Lambda}, even on very short arithmetic progressions. As such, replacing {1} by {\omega(n)} in (1) should have a negligible effect on the discrepancy. On the other hand, in order for {\omega(n)} to be non-zero, {\lambda(n+i)} has to have the same sign as {\lambda(n)}, and {\lambda(N-n-j)} has to have the opposite sign to {\lambda(n)}, for all {0 \leq i,j \leq H}; since primes have Liouville value {-1}, this means that {n+i} and {N-n-j} cannot simultaneously be prime for any {0 \leq i,j \leq H}, and so {1_B(n) \omega(n)} vanishes identically, contradicting (4). This indirectly rules out any modification of the Goldston-Pintz-Yildirim/Zhang method for establishing the binary Goldbach conjecture with bounded error.
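
A small numerical sketch of this weight (with toy values of {N} and {H}, and the Liouville function computed by factorisation) confirms that {1_B(n) \omega(n)} vanishes identically, exactly as the sign analysis above predicts.

```python
# The parity weight omega(n) annihilates 1_B(n): if omega(n) != 0 then lambda is constant
# on [n, n+H] and takes the opposite value on [N-n-H, N-n], so those two intervals
# cannot both contain a prime.
from sympy import factorint, isprime

def liouville(n):
    return (-1) ** sum(factorint(n).values())

N, H = 5000, 3            # toy choices

def omega(n):
    w = 1
    for i in range(1, H + 1):
        w *= 1 + liouville(n) * liouville(n + i)
    for j in range(0, H + 1):
        w *= 1 - liouville(n) * liouville(N - n - j)
    return w

def in_B(n):
    return (any(isprime(n + i) for i in range(0, H + 1)) and
            any(isprime(N - n - j) for j in range(0, H + 1)))

for n in range(10, N // 2):
    assert in_B(n) * omega(n) == 0
print("1_B(n) * omega(n) = 0 for every n tested")
```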

The above argument is not watertight, and one could envisage some ways around this problem. One of them is that the Möbius randomness principle could simply be false, in which case the parity obstruction vanishes. A good example of this is the result of Heath-Brown that shows that if there are infinitely many Siegel zeroes (which is a strong violation of the Möbius randomness principle), then the twin prime conjecture holds. Another way around the obstruction is to start controlling the discrepancy (1) for functions {f} that are combinations of more than one multiplicative function, e.g. {f(n) = \Lambda(n) \Lambda(n+2)}. However, controlling such functions looks to be at least as difficult as the twin prime conjecture (which is morally equivalent to obtaining non-trivial lower-bounds for {\sum_{x \leq n \leq 2x} \Lambda(n) \Lambda(n+2)}). A third option is not to use a sieve-theoretic argument, but to try a different method (e.g. the circle method). However, most other known methods also exhibit linearity in the “{n}” variable and I would suspect they would be vulnerable to a similar obstruction. (In any case, the circle method specifically has some other difficulties in tackling binary problems, as discussed in this previous post.)

One of the most basic methods in additive number theory is the Hardy-Littlewood circle method. This method is based on expressing a quantity of interest to additive number theory, such as the number of representations {f_3(x)} of an integer {x} as the sum of three primes {x = p_1+p_2+p_3}, as a Fourier-analytic integral over the unit circle {{\bf R}/{\bf Z}} involving exponential sums such as

\displaystyle  S(x,\alpha) := \sum_{p \leq x} e( \alpha p) \ \ \ \ \ (1)

where the sum here ranges over all primes up to {x}, and {e(x) := e^{2\pi i x}}. For instance, the expression {f_3(x)} mentioned earlier can be written as

\displaystyle  f_3(x) = \int_{{\bf R}/{\bf Z}} S(x,\alpha)^3 e(-x\alpha)\ d\alpha. \ \ \ \ \ (2)

The strategy is then to obtain sufficiently accurate bounds on exponential sums such as {S(x,\alpha)} in order to obtain non-trivial bounds on quantities such as {f_3(x)}. For instance, if one can show that {f_3(x)>0} for all odd integers {x} greater than some given threshold {x_0}, this implies that all odd integers greater than {x_0} are expressible as the sum of three primes, thus establishing all but finitely many instances of the odd Goldbach conjecture.
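
The identity (2) is easy to see in action in a discretised form: replacing the integral over {{\bf R}/{\bf Z}} by an average over the {M}-th roots of unity with {M > 3x} recovers the exact representation count. A minimal numerical sketch (with a toy value of {x}) follows.

```python
# Discrete form of (2): averaging S(x, a/M)^3 e(-x a/M) over a = 0,...,M-1
# with M > 3x counts ordered representations x = p1 + p2 + p3 exactly.
import numpy as np
from sympy import primerange

x = 101                          # toy odd target
primes = list(primerange(2, x + 1))
M = 3 * x + 1                    # fine enough grid: all p1+p2+p3 <= 3x < M

a = np.arange(M)
S = np.array([np.exp(2j * np.pi * a * p / M) for p in primes]).sum(axis=0)
circle_count = (S ** 3 * np.exp(-2j * np.pi * a * x / M)).sum().real / M

direct_count = sum(1 for p1 in primes for p2 in primes for p3 in primes
                   if p1 + p2 + p3 == x)

print("circle-method count:", round(circle_count))
print("direct count       :", direct_count)
```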

Remark 1 In practice, it can be more efficient to work with smoother sums than the partial sum (1), for instance by replacing the cutoff {p \leq x} with a smoother cutoff {\chi(p/x)} for a suitable choice of cutoff function {\chi}, or by replacing the restriction of the summation to primes by a more analytically tractable weight, such as the von Mangoldt function {\Lambda(n)}. However, these improvements to the circle method are primarily technical in nature and do not have much impact on the heuristic discussion in this post, so we will not emphasise them here. One can also certainly use the circle method to study additive combinations of numbers drawn from sets other than the primes, but we will restrict attention to additive combinations of primes for sake of discussion, as the primes are historically one of the most studied sets in additive number theory.

In many cases, it turns out that one can get fairly precise evaluations on sums such as {S(x,\alpha)} in the major arc case, when {\alpha} is close to a rational number {a/q} with small denominator {q}, by using tools such as the prime number theorem in arithmetic progressions. For instance, the prime number theorem itself tells us that

\displaystyle  S(x,0) \approx \frac{x}{\log x}

and the prime number theorem in residue classes modulo {q} suggests more generally that

\displaystyle  S(x,\frac{a}{q}) \approx \frac{\mu(q)}{\phi(q)} \frac{x}{\log x}

when {q} is small and {a} is coprime to {q}, basically thanks to the elementary calculation that the phase {e(an/q)} has an average value of {\mu(q)/\phi(q)} when {n} is uniformly distributed amongst the residue classes modulo {q} that are coprime to {q}. Quantifying the precise error in these approximations can be quite challenging, though, unless one assumes powerful hypotheses such as the Generalised Riemann Hypothesis.
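
Here is a toy numerical check of this major-arc prediction, with the Möbius and Euler functions computed directly; the scale and the rationals {a/q} below are arbitrary small choices, and {x/\log x} is itself only a crude proxy for {\pi(x)}, so the comparison against {\mu(q)\pi(x)/\phi(q)} is also printed.

```python
# Compare S(x, a/q) = sum_{p <= x} e(a p / q) with the major-arc prediction mu(q)/phi(q) * x/log x.
from math import log
import numpy as np
from sympy import factorint, primerange, totient

def mobius(q):
    f = factorint(q)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

x = 200000
primes = np.array(list(primerange(2, x + 1)))
pi_x = len(primes)

for q, a in [(3, 1), (4, 1), (5, 2), (6, 1)]:
    phi = int(totient(q))
    S = np.exp(2j * np.pi * a * primes / q).sum()
    crude = mobius(q) / phi * x / log(x)     # the x/log x prediction from the text
    sharp = mobius(q) / phi * pi_x           # the same prediction with pi(x) in place of x/log x
    print(f"q={q}, a={a}:  S = {S.real:9.1f}{S.imag:+.1f}i,  "
          f"mu/phi * x/log x = {crude:9.1f},  mu/phi * pi(x) = {sharp:9.1f}")
```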

In the minor arc case when {\alpha} is not close to a rational {a/q} with small denominator, one no longer expects to have such precise control on the value of {S(x,\alpha)}, due to the “pseudorandom” fluctuations of the quantity {e(\alpha p)}. Using the standard probabilistic heuristic (supported by results such as the central limit theorem or Chernoff’s inequality) that the sum of {k} “pseudorandom” phases should fluctuate randomly and be of typical magnitude {\sim \sqrt{k}}, one expects upper bounds of the shape

\displaystyle  |S(x,\alpha)| \lessapprox \sqrt{\frac{x}{\log x}} \ \ \ \ \ (3)

for “typical” minor arc {\alpha}. Indeed, a simple application of the Plancherel identity, followed by the prime number theorem, reveals that

\displaystyle  \int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2\ d\alpha \sim \frac{x}{\log x} \ \ \ \ \ (4)

which is consistent with (though weaker than) the above heuristic. In practice, though, we are unable to rigorously establish bounds anywhere near as strong as (3); upper bounds such as {x^{4/5+o(1)}} are far more typical.

Because one only expects to have upper bounds on {|S(x,\alpha)|}, rather than asymptotics, in the minor arc case, one cannot realistically hope to make much use of phases such as {e(-x\alpha)} for the minor arc contribution to integrals such as (2) (at least if one is working with a single, deterministic, value of {x}, so that averaging in {x} is unavailable). In particular, from upper bound information alone, it is difficult to avoid the “conspiracy” that the magnitude {|S(x,\alpha)|^3} oscillates in sympathetic resonance with the phase {e(-x\alpha)}, thus essentially eliminating almost all of the possible gain in the bounds that could arise from exploiting cancellation from that phase. Thus, one basically has little option except to use the triangle inequality to control the portion of the integral on the minor arc region {\Omega_{minor}}:

\displaystyle  |\int_{\Omega_{minor}} |S(x,\alpha)|^3 e(-x\alpha)\ d\alpha| \leq \int_{\Omega_{minor}} |S(x,\alpha)|^3\ d\alpha.

Despite this handicap, though, it is still possible to get enough bounds on both the major and minor arc contributions of integrals such as (2) to obtain non-trivial lower bounds on quantities such as {f(x)}, at least when {x} is large. In particular, this sort of method can be developed to give a proof of Vinogradov’s famous theorem that every sufficiently large odd integer {x} is the sum of three primes; my own result that all odd numbers greater than {1} can be expressed as the sum of at most five primes is also proven by essentially the same method (modulo a number of minor refinements, and taking advantage of some numerical work on both the Goldbach problems and on the Riemann hypothesis). It is certainly conceivable that some further variant of the circle method (again combined with a suitable amount of numerical work, such as that of numerically establishing zero-free regions for the Generalised Riemann Hypothesis) can be used to settle the full odd Goldbach conjecture; indeed, under the assumption of the Generalised Riemann Hypothesis, this was already achieved by Deshouillers, Effinger, te Riele, and Zinoviev back in 1997. I am optimistic that an unconditional version of this result will be possible within a few years or so, though I should say that there are still significant technical challenges to doing so, and some clever new ideas will probably be needed to get either the Vinogradov-style argument or numerical verification to work unconditionally for the three-primes problem at medium-sized ranges of {x}, such as {x \sim 10^{50}}. (But the intermediate problem of representing all even natural numbers as the sum of at most four primes looks somewhat closer to being feasible, though even this would require some substantially new and non-trivial ideas beyond what is in my five-primes paper.)

However, I, like many other analytic number theorists, am considerably more skeptical that the circle method can be applied to the even Goldbach problem of representing a large even number {x} as the sum {x = p_1 + p_2} of two primes, or the similar (and marginally simpler) twin prime conjecture of finding infinitely many pairs of twin primes, i.e. finding infinitely many representations {2 = p_1 - p_2} of {2} as the difference of two primes. At first glance, the situation looks tantalisingly similar to that of the Vinogradov theorem: to settle the even Goldbach problem for large {x}, one has to find a non-trivial lower bound for the quantity

\displaystyle  f_2(x) = \int_{{\bf R}/{\bf Z}} S(x,\alpha)^2 e(-x\alpha)\ d\alpha \ \ \ \ \ (5)

for sufficiently large {x}, as this quantity {f_2(x)} is also the number of ways to represent {x} as the sum {x=p_1+p_2} of two primes {p_1,p_2}. Similarly, to settle the twin prime problem, it would suffice to obtain a lower bound for the quantity

\displaystyle  \tilde f_2(x) = \int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2 e(-2\alpha)\ d\alpha \ \ \ \ \ (6)

that goes to infinity as {x \rightarrow \infty}, as this quantity {\tilde f_2(x)} is also the number of ways to represent {2} as the difference {2 = p_1-p_2} of two primes less than or equal to {x}.

In principle, one can achieve either of these two objectives by a sufficiently fine level of control on the exponential sums {S(x,\alpha)}. Indeed, there is a trivial (and uninteresting) way to take any (hypothetical) solution of either the asymptotic even Goldbach problem or the twin prime problem and (artificially) convert it to a proof that “uses the circle method”; one simply begins with the quantity {f_2(x)} or {\tilde f_2(x)}, expresses it in terms of {S(x,\alpha)} using (5) or (6), and then uses (5) or (6) again to convert these integrals back into the combinatorial expression of counting solutions to {x=p_1+p_2} or {2=p_1-p_2}, and then uses the hypothetical solution to the given problem to obtain the required lower bounds on {f_2(x)} or {\tilde f_2(x)}.

Of course, this would not qualify as a genuine application of the circle method by any reasonable measure. One can then ask the more refined question of whether one could hope to get non-trivial lower bounds on {f_2(x)} or {\tilde f_2(x)} (or similar quantities) purely from the upper and lower bounds on {S(x,\alpha)} or similar quantities (and of various {L^p} type norms on such quantities, such as the {L^2} bound (4)). Of course, we do not yet know the strongest possible upper and lower bounds on {S(x,\alpha)} (otherwise we would already have made progress on major conjectures such as the Riemann hypothesis); but we can make plausible heuristic conjectures on such bounds. And this is enough to make the following heuristic conclusions:

  • (i) For “binary” problems such as computing (5), (6), the contribution of the minor arcs potentially dominates that of the major arcs (if all one is given about the minor arc sums is magnitude information), in contrast to “ternary” problems such as computing (2), in which it is the major arc contribution which is absolutely dominant.
  • (ii) Upper and lower bounds on the magnitude of {S(x,\alpha)} are not sufficient, by themselves, to obtain non-trivial bounds on (5), (6) unless these bounds are extremely tight (within a relative error of {O(1/\log x)} or better); but
  • (iii) obtaining such tight bounds is a problem of comparable difficulty to the original binary problems.

I will provide some justification for these conclusions below the fold; they are reasonably well known “folklore” to many researchers in the field, but it seems that they are rarely made explicit in the literature (in part because these arguments are, by their nature, heuristic instead of rigorous) and I have been asked about them from time to time, so I decided to try to write them down here.

In view of the above conclusions, it seems that the best one can hope to do by using the circle method for the twin prime or even Goldbach problems is to reformulate such problems into a statement of roughly comparable difficulty to the original problem, even if one assumes powerful conjectures such as the Generalised Riemann Hypothesis (which lets one make very precise control on major arc exponential sums, but not on minor arc ones). These are not rigorous conclusions – after all, we have already seen that one can always artificially insert the circle method into any viable approach to these problems – but they do strongly suggest that one needs a method other than the circle method in order to fully solve either of these two problems. I do not know what such a method would be, though I can give some heuristic objections to some of the other popular methods used in additive number theory (such as sieve methods, or more recently the use of inverse theorems); this will be done at the end of this post.


The parity problem is a notorious problem in sieve theory: this theory was invented in order to count prime patterns of various types (e.g. twin primes), but despite superb success in obtaining upper bounds on the number of such patterns, it has proven to be somewhat disappointing in obtaining lower bounds. [Sieves can also be used to study many other things than primes, of course, but we shall focus only on primes in this post.] Even the task of reproving Euclid’s theorem – that there are infinitely many primes – seems to be extremely difficult to do by sieve theoretic means, unless one of course injects into the theory an estimate at least as strong as Euclid’s theorem (e.g. the prime number theorem). The main obstruction is the parity problem: even assuming such strong hypotheses as the Elliott-Halberstam conjecture (a sort of “super-generalised Riemann Hypothesis” for sieves), sieve theory is largely (but not completely) unable to distinguish numbers with an odd number of prime factors from numbers with an even number of prime factors. This “parity barrier” has been broken for some select patterns of primes by injecting some powerful non-sieve theory methods into the subject, but remains a formidable obstacle in general.

I’ll discuss the parity problem in more detail later in this post, but I want to first discuss how sieves work [drawing in part on some excellent unpublished lecture notes of Iwaniec]; the basic ideas are elementary and conceptually simple, but there are many details and technicalities involved in actually executing these ideas, and which I will try to suppress for sake of exposition.

