
Tamar Ziegler and I have just uploaded to the arXiv our paper “Infinite partial sumsets in the primes“. This is a short paper inspired by a recent result of Kra, Moreira, Richter, and Robertson (discussed for instance in this Quanta article from last December) showing that for any set ${A}$ of natural numbers of positive upper density, there exists a sequence ${b_1 < b_2 < b_3 < \dots}$ of natural numbers and a shift ${t}$ such that ${b_i + b_j + t \in A}$ for all ${i < j}$ (this answers a question of Erdős). In view of the “transference principle“, it is then plausible to ask whether the same result holds if ${A}$ is replaced by the primes. We can show the following results:

Theorem 1
• (i) If the Hardy-Littlewood prime tuples conjecture (or the weaker conjecture of Dickson) is true, then there exists an increasing sequence ${b_1 < b_2 < b_3 < \dots}$ of primes such that ${b_i + b_j + 1}$ is prime for all ${i < j}$.
• (ii) Unconditionally, there exist increasing sequences ${a_1 < a_2 < \dots}$ and ${b_1 < b_2 < \dots}$ of natural numbers such that ${a_i + b_j}$ is prime for all ${i < j}$.
• (iii) These conclusions fail if “prime” is replaced by “positive (relative) density subset of the primes” (even if the density is equal to 1).

We remark that it was shown by Balog that there (unconditionally) exist arbitrarily long but finite sequences ${b_1 < \dots < b_k}$ of primes such that ${b_i + b_j + 1}$ is prime for all ${i < j \leq k}$. (This result can also be recovered from the later results of Ben Green, myself, and Tamar Ziegler.) Also, it had previously been shown by Granville that on the Hardy-Littlewood prime tuples conjecture, there existed increasing sequences ${a_1 < a_2 < \dots}$ and ${b_1 < b_2 < \dots}$ of natural numbers such that ${a_i+b_j}$ is prime for all ${i,j}$.
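Balog's finite version can be illustrated by a naive computer search, which also exposes the congruence obstructions at play. The following sketch (an illustration only, bearing no relation to Balog's actual sieve-theoretic argument) greedily extends a sequence of primes, restricted to the residue class ${2 \pmod 3}$ so that the sums ${b_i + b_j + 1}$ are never divisible by ${2}$ or ${3}$:

```python
# Toy illustration of Balog's theorem: greedily build a finite sequence of
# primes b_1 < ... < b_k with b_i + b_j + 1 prime for all i < j.

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def greedy_balog(limit):
    """Greedily extend the sequence over primes p == 2 (mod 3), so that every
    sum b_i + b_j + 1 is odd and nonzero mod 3.  (Starting instead from 3, 7
    the search stalls forever: for any prime p > 3, one of p + 4, p + 8 is
    divisible by 3.)"""
    seq = []
    for p in range(5, limit):
        if is_prime(p) and p % 3 == 2:
            if all(is_prime(b + p + 1) for b in seq):
                seq.append(p)
    return seq

seq = greedy_balog(300)
print(seq)  # [5, 11, 17, 41, 251]
# Verify the defining property for all pairs i < j:
assert all(is_prime(seq[i] + seq[j] + 1)
           for i in range(len(seq)) for j in range(i + 1, len(seq)))
```

On the Dickson or Hardy-Littlewood conjectures one expects such a greedy-type construction to continue indefinitely, which is essentially the content of part (i) of the theorem.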

The conclusion of (i) is stronger than that of (ii) (which is of course consistent with the former being conditional and the latter unconditional). The conclusion (ii) also implies the well-known theorem of Maynard that for any given ${k}$, there exist infinitely many ${k}$-tuples of primes of bounded diameter, and indeed our proof of (ii) uses the same “Maynard sieve” that powers the proof of that theorem (though we use a formulation of that sieve closer to that in this blog post of mine). Indeed, the failure of (iii) basically arises from the failure of Maynard’s theorem for dense subsets of primes, simply by removing those clusters of primes that are unusually closely spaced.

Our proof of (i) was initially inspired by the topological dynamics methods used by Kra, Moreira, Richter, and Robertson, but we managed to condense it to a purely elementary argument (taking up only half a page) that makes no reference to topological dynamics and builds up the sequence ${b_1 < b_2 < \dots}$ recursively by repeated application of the prime tuples conjecture.

The proof of (ii) takes up the majority of the paper. It is easiest to phrase the argument in terms of “prime-producing tuples” – tuples ${(h_1,\dots,h_k)}$ for which there are infinitely many ${n}$ with ${n+h_1,\dots,n+h_k}$ all prime. Maynard’s theorem is equivalent to the existence of arbitrarily long prime-producing tuples; our theorem is equivalent to the stronger assertion that there exists an infinite sequence ${h_1 < h_2 < \dots}$ such that every initial segment ${(h_1,\dots,h_k)}$ is prime-producing. The main new tool for achieving this is the following cute measure-theoretic lemma of Bergelson:
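For instance, a necessary condition for a tuple to be prime-producing is that it be admissible: for each prime ${p}$, the shifts ${h_1,\dots,h_k}$ must omit at least one residue class mod ${p}$ (only primes ${p \leq k}$ can cover all classes, so only those need checking). A quick illustrative sketch:

```python
from itertools import count

def is_prime(n):
    """Trial-division primality test for small n."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def primes():
    """Generate the primes in increasing order."""
    for n in count(2):
        if is_prime(n):
            yield n

def is_admissible(h):
    """(h_1,...,h_k) is admissible if for every prime p the shifts miss at
    least one residue class mod p; only p <= k can possibly be covered."""
    k = len(h)
    for p in primes():
        if p > k:
            return True
        if len({x % p for x in h}) == p:
            return False

print(is_admissible((0, 2, 4)))  # False: 0, 2, 4 cover every residue mod 3
print(is_admissible((0, 2, 6)))  # True

# First few n for which the admissible tuple (0, 2, 6) produces primes:
hits = [n for n in range(2, 200) if all(is_prime(n + h) for h in (0, 2, 6))]
print(hits)  # [5, 11, 17, 41, 101, 107, 191]
```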

Lemma 2 (Bergelson intersectivity lemma) Let ${E_1,E_2,\dots}$ be subsets of a probability space ${(X,\mu)}$ of measure uniformly bounded away from zero, thus ${\inf_i \mu(E_i) > 0}$. Then there exists a subsequence ${E_{i_1}, E_{i_2}, \dots}$ such that

$\displaystyle \mu(E_{i_1} \cap \dots \cap E_{i_k} ) > 0$

for all ${k}$.

This lemma has a short proof, though not an entirely obvious one. Firstly, by deleting a null set from ${X}$, one can assume that all finite intersections ${E_{i_1} \cap \dots \cap E_{i_k}}$ are either of positive measure or empty. Secondly, a routine application of Fatou’s lemma shows that the maximal function ${\limsup_N \frac{1}{N} \sum_{i=1}^N 1_{E_i}}$ has a positive integral, hence must be positive at some point ${x_0}$. Thus there is a subsequence ${E_{i_1}, E_{i_2}, \dots}$ whose finite intersections all contain ${x_0}$, and hence, by the previous reduction, all have positive measure, as desired.
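The argument can be caricatured on a finite probability space, with the limsup replaced by a plain average over finitely many sets (the sets below are arbitrary toy choices for illustration):

```python
# Finite caricature of the proof of the Bergelson intersectivity lemma:
# on X = {0, ..., N-1} with uniform measure, each E_i has measure exactly 1/3.
# We locate a point x0 where the average of the indicators 1_{E_i} is at
# least its mean (a finite stand-in for the Fatou/limsup step), then pass to
# the subsequence of sets containing x0: every finite intersection along the
# subsequence contains x0, hence has positive measure.
N, M = 90, 60  # |X| and number of sets (arbitrary toy parameters)
E = [{(7 * i + j) % N for j in range(N // 3)} for i in range(M)]  # shifted "intervals"

counts = [sum(x in Ei for Ei in E) for x in range(N)]
x0 = max(range(N), key=counts.__getitem__)
assert counts[x0] * N >= sum(counts)  # the maximum is at least the mean M/3

subseq = [i for i in range(M) if x0 in E[i]]
print(len(subseq), "sets in the subsequence")

# Every finite intersection along the subsequence contains x0:
inter = set(range(N))
for i in subseq:
    inter &= E[i]
assert x0 in inter
```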

It turns out that one cannot quite combine the standard Maynard sieve with the intersectivity lemma because the events ${E_i}$ that show up (which roughly correspond to the event that ${n + h_i}$ is prime for some random number ${n}$ (with a well-chosen probability distribution) and some shift ${h_i}$) have their probability going to zero, rather than being uniformly bounded from below. To get around this, we borrow an idea from a paper of Banks, Freiberg, and Maynard, and group the shifts ${h_i}$ into various clusters ${h_{i,1},\dots,h_{i,J_1}}$, chosen in such a way that the probability that at least one of ${n+h_{i,1},\dots,n+h_{i,J_1}}$ is prime is bounded uniformly from below. One then applies the Bergelson intersectivity lemma to those events and uses many applications of the pigeonhole principle to conclude.

Joni Teräväinen and I have just uploaded to the arXiv our preprint “The Hardy–Littlewood–Chowla conjecture in the presence of a Siegel zero“. This paper is a development of the theme that certain conjectures in analytic number theory become easier if one makes the hypothesis that Siegel zeroes exist; this places one in a presumably “illusory” universe, since the widely believed Generalised Riemann Hypothesis (GRH) precludes the existence of such zeroes, yet this illusory universe seems remarkably self-consistent and notoriously impossible to eliminate from one’s analysis.

For the purposes of this paper, a Siegel zero is a zero ${\beta}$ of a Dirichlet ${L}$-function ${L(\cdot,\chi)}$ corresponding to a primitive quadratic character ${\chi}$ of some conductor ${q_\chi}$, which is close to ${1}$ in the sense that

$\displaystyle \beta = 1 - \frac{1}{\eta \log q_\chi}$

for some large ${\eta \gg 1}$ (which we will call the quality of the Siegel zero). The significance of these zeroes is that they force the Möbius function ${\mu}$ and the Liouville function ${\lambda}$ to “pretend” to be like the exceptional character ${\chi}$ for primes of magnitude comparable to ${q_\chi}$. Indeed, if we define an exceptional prime to be a prime ${p^*}$ in which ${\chi(p^*) \neq -1}$, then very few primes near ${q_\chi}$ will be exceptional; in our paper we use some elementary arguments to establish the bounds

$\displaystyle \sum_{q_\chi^{1/2+\varepsilon} < p^* \leq x} \frac{1}{p^*} \ll_\varepsilon \frac{\log x}{\eta \log q_\chi} \ \ \ \ \ (1)$

for any ${x \geq q_\chi^{1/2+\varepsilon}}$ and ${\varepsilon>0}$, where the sum is over exceptional primes in the indicated range ${q_\chi^{1/2+\varepsilon} < p^* \leq x}$; this bound is non-trivial for ${x}$ as large as ${q_\chi^{\eta^{1-\varepsilon}}}$. (See Section 1 of this blog post for some variants of this argument, which were inspired by work of Heath-Brown.) There is also a companion bound (somewhat weaker) that covers a range of ${p^*}$ a little bit below ${q_\chi^{1/2}}$.
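To illustrate the notion of an exceptional prime (purely as a toy, since no character with an actual Siegel zero is known to exist, and a genuine conductor ${q_\chi}$ would be enormous): taking ${\chi}$ to be the Legendre symbol modulo a small prime ${q}$, computed via Euler's criterion, one can list the primes ${p}$ with ${\chi(p) \neq -1}$. Here ${q = 43}$ is an arbitrary stand-in for ${q_\chi}$:

```python
# Toy illustration of "exceptional primes" for a real quadratic character.
q = 43  # arbitrary small stand-in for the conductor q_chi

def chi(n):
    """Legendre symbol (n|q) via Euler's criterion: n^((q-1)/2) mod q."""
    r = pow(n, (q - 1) // 2, q)
    return r - q if r == q - 1 else r  # map q-1 to -1

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Exceptional primes p: those with chi(p) != -1 (including p = q, where chi = 0).
exceptional = [p for p in range(2, 100) if is_prime(p) and chi(p) != -1]
print(exceptional)
```

Note that roughly half of the primes below 100 are exceptional for this toy character; the point of the bound (1) is that for a character with a genuine high-quality Siegel zero, exceptional primes would instead be very rare in the ranges indicated.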

One of the early influential results in this area was the following result of Heath-Brown, which I previously blogged about here:

Theorem 1 (Hardy-Littlewood assuming Siegel zero) Let ${h}$ be a fixed natural number. Suppose one has a Siegel zero ${\beta}$ associated to some conductor ${q_\chi}$. Then we have

$\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+h) = ({\mathfrak S} + O( \frac{1}{\log\log \eta} )) x$

for all ${q_\chi^{250} \leq x \leq q_\chi^{300}}$, where ${\Lambda}$ is the von Mangoldt function and ${{\mathfrak S}}$ is the singular series

$\displaystyle {\mathfrak S} = \prod_{p|h} \frac{p}{p-1} \prod_{p \nmid h} (1 - \frac{1}{(p-1)^2})$

In particular, Heath-Brown showed that if there are infinitely many Siegel zeroes, then there are also infinitely many twin primes, with the correct asymptotic predicted by the Hardy-Littlewood prime tuple conjecture at infinitely many scales.
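The singular series ${\mathfrak S}$ can be approximated numerically by truncating its Euler product: for ${h=2}$ one recovers the twin prime constant ${2C_2 = 1.32032\dots}$, while for odd ${h}$ the factor at ${p=2}$ vanishes, reflecting the parity obstruction that ${n}$ and ${n+h}$ cannot both be odd. A quick sketch (the truncation bound is an arbitrary choice):

```python
def primes_upto(bound):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (bound + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(bound**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, bound + 1) if sieve[p]]

def singular_series(h, bound=10**5):
    """Truncated Euler product for the Hardy-Littlewood singular series:
    prod_{p | h} p/(p-1) * prod_{p not| h} (1 - 1/(p-1)^2)."""
    S = 1.0
    for p in primes_upto(bound):
        if h % p == 0:
            S *= p / (p - 1)
        else:
            S *= 1 - 1 / (p - 1) ** 2
    return S

print(round(singular_series(2), 4))  # ~1.3203, the twin prime constant 2*C_2
print(singular_series(3))            # 0.0: the p = 2 factor vanishes
```

One also sees identities such as ${\mathfrak S}(6) = 2 {\mathfrak S}(2)$ emerge exactly, since the two truncated products differ only in the factors at ${p = 3}$.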

Very recently, Chinis established an analogous result for the Chowla conjecture (building upon earlier work of Germán and Katai):

Theorem 2 (Chowla assuming Siegel zero) Let ${h_1,\dots,h_\ell}$ be distinct fixed natural numbers. Suppose one has a Siegel zero ${\beta}$ associated to some conductor ${q_\chi}$. Then one has

$\displaystyle \sum_{n \leq x} \lambda(n+h_1) \dots \lambda(n+h_\ell) \ll \frac{x}{(\log\log \eta)^{1/2} (\log \eta)^{1/12}}$

in the range ${q_\chi^{10} \leq x \leq q_\chi^{\log\log \eta / 3}}$, where ${\lambda}$ is the Liouville function.

In our paper we unify these results and also improve the quantitative estimates and range of ${x}$:

Theorem 3 (Hardy-Littlewood-Chowla assuming Siegel zero) Let ${h_1,\dots,h_k,h'_1,\dots,h'_\ell}$ be distinct fixed natural numbers with ${k \leq 2}$. Suppose one has a Siegel zero ${\beta}$ associated to some conductor ${q_\chi}$. Then one has

$\displaystyle \sum_{n \leq x} \Lambda(n+h_1) \dots \Lambda(n+h_k) \lambda(n+h'_1) \dots \lambda(n+h'_\ell)$

$\displaystyle = ({\mathfrak S} + O_\varepsilon( \frac{1}{\log^{1/(10\max(1,k))} \eta} )) x$

for

$\displaystyle q_\chi^{10k+\frac{1}{2}+\varepsilon} \leq x \leq q_\chi^{\eta^{1/2}}$

for any fixed ${\varepsilon>0}$.

Our argument proceeds by a series of steps in which we replace ${\Lambda}$ and ${\lambda}$ by more complicated looking, but also more tractable, approximations, until the correlation is one that can be computed in a tedious but straightforward fashion by known techniques. More precisely, the steps are as follows:

• (i) Replace the Liouville function ${\lambda}$ with an approximant ${\lambda_{\mathrm{Siegel}}}$, which is a completely multiplicative function that agrees with ${\lambda}$ at small primes and agrees with ${\chi}$ at large primes.
• (ii) Replace the von Mangoldt function ${\Lambda}$ with an approximant ${\Lambda_{\mathrm{Siegel}}}$, which is the Dirichlet convolution ${\chi * \log}$ multiplied by a Selberg sieve weight ${\nu}$ to essentially restrict that convolution to almost primes.
• (iii) Replace ${\lambda_{\mathrm{Siegel}}}$ with a more complicated truncation ${\lambda_{\mathrm{Siegel}}^\sharp}$ which has the structure of a “Type I sum”, and which agrees with ${\lambda_{\mathrm{Siegel}}}$ on numbers that have a “typical” factorization.
• (iv) Replace the approximant ${\Lambda_{\mathrm{Siegel}}}$ with a more complicated approximant ${\Lambda_{\mathrm{Siegel}}^\sharp}$ which has the structure of a “Type I sum”.
• (v) Now that all terms in the correlation have been replaced with tractable Type I sums, use standard Euler product calculations and Fourier analysis, similar in spirit to the proof of the pseudorandomness of the Selberg sieve majorant for the primes in this paper of Ben Green and myself, to evaluate the correlation to high accuracy.

Steps (i), (ii) proceed mainly through estimates such as (1) and standard sieve theory bounds. Step (iii) is based primarily on estimates on the number of smooth numbers of a certain size.

The restriction ${k \leq 2}$ in our main theorem is needed only to execute step (iv) of this strategy. Roughly speaking, the Siegel approximant ${\Lambda_{\mathrm{Siegel}}}$ to ${\Lambda}$ is a twisted, sieved version of the divisor function ${\tau}$, and the types of correlation one is faced with at the start of step (iv) are a more complicated version of the divisor correlation sum

$\displaystyle \sum_{n \leq x} \tau(n+h_1) \dots \tau(n+h_k).$

For ${k=1}$ this sum can be easily controlled by the Dirichlet hyperbola method. For ${k=2}$ one needs the fact that ${\tau}$ has a level of distribution greater than ${1/2}$; in fact Kloosterman sum bounds give a level of distribution of ${2/3}$, a folklore fact that seems to have first been observed by Linnik and Selberg. We use a (notationally more complicated) version of this argument to treat the sums arising in (iv) for ${k \leq 2}$. Unfortunately for ${k > 2}$ there are no known techniques to unconditionally obtain asymptotics, even for the model sum

$\displaystyle \sum_{n \leq x} \tau(n) \tau(n+1) \tau(n+2),$

although we do at least have fairly convincing conjectures as to what the asymptotics should be. Because of this, it seems unlikely that one will be able to relax the ${k \leq 2}$ hypothesis in our main theorem at our current level of understanding of analytic number theory.
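For comparison, the ${k=1}$ case mentioned above is completely elementary: the Dirichlet hyperbola method counts the lattice points ${ab \leq x}$ in ${O(\sqrt{x})}$ operations via the identity ${\sum_{n \leq x} \tau(n) = 2\sum_{a \leq \sqrt{x}} \lfloor x/a \rfloor - \lfloor \sqrt{x} \rfloor^2}$. A sketch:

```python
import math

# Dirichlet hyperbola method for the k = 1 divisor sum: pair each divisor
# a <= sqrt(x) of n = a*b with its cofactor b, double-counting the square
# region a, b <= sqrt(x) exactly once.
def divisor_sum_hyperbola(x):
    s = math.isqrt(x)
    return 2 * sum(x // a for a in range(1, s + 1)) - s * s

def divisor_sum_naive(x):
    # sum_{n <= x} tau(n) = number of pairs (a, b) with a*b <= x
    return sum(x // a for a in range(1, x + 1))

for x in (10, 100, 10**4):
    assert divisor_sum_hyperbola(x) == divisor_sum_naive(x)
print(divisor_sum_hyperbola(100))    # 482
print(divisor_sum_hyperbola(10**12)) # instant, where the naive sum is infeasible
```

The ${k=2}$ case in the paper is far more involved, resting as noted on Kloosterman sum bounds for the level of distribution of ${\tau}$.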

Step (v) is a tedious but straightforward sieve theoretic computation, similar in many ways to the correlation estimates of Goldston and Yildirim used in their work on small gaps between primes (as discussed for instance here), and then also used by Ben Green and myself to locate arithmetic progressions in primes.