Joni Teräväinen and I have just uploaded to the arXiv our preprint “The Hardy–Littlewood–Chowla conjecture in the presence of a Siegel zero“. This paper is a development of the theme that certain conjectures in analytic number theory become easier if one makes the hypothesis that Siegel zeroes exist; this places one in a presumably “illusory” universe, since the widely believed Generalised Riemann Hypothesis (GRH) precludes the existence of such zeroes, yet this illusory universe seems remarkably self-consistent and notoriously impossible to eliminate from one’s analysis.

For the purposes of this paper, a Siegel zero is a real zero {\beta} of a Dirichlet {L}-function {L(\cdot,\chi)} associated to a primitive quadratic character {\chi} of some conductor {q_\chi}, with {\beta} close to {1} in the sense that

\displaystyle  \beta = 1 - \frac{1}{\eta \log q_\chi}

for some large {\eta \gg 1}, which we will call the quality of the Siegel zero. The significance of these zeroes is that they force the Möbius function {\mu} and the Liouville function {\lambda} to “pretend” to be like the exceptional character {\chi} for primes of magnitude comparable to {q_\chi}. Indeed, if we define an exceptional prime to be a prime {p^*} for which {\chi(p^*) \neq -1}, then very few primes near {q_\chi} will be exceptional; in our paper we use some elementary arguments to establish the bounds

\displaystyle  \sum_{q_\chi^{1/2+\varepsilon} < p^* \leq x} \frac{1}{p^*} \ll_\varepsilon \frac{\log x}{\eta \log q_\chi} \ \ \ \ \ (1)

for any {x \geq q_\chi^{1/2+\varepsilon}} and {\varepsilon>0}, where the sum is over exceptional primes in the indicated range {q_\chi^{1/2+\varepsilon} < p^* \leq x}; this bound is non-trivial for {x} as large as {q_\chi^{\eta^{1-\varepsilon}}}. (See Section 1 of this blog post for some variants of this argument, which were inspired by work of Heath-Brown.) There is also a companion bound (somewhat weaker) that covers a range of {p^*} a little bit below {q_\chi^{1/2}}.
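To make the “pretending” concrete, here is a small numerical illustration (not taken from the paper): for the quadratic character given by the Kronecker symbol {(-163/\cdot)} — the discriminant {-163} is purely an illustrative choice — every prime below {41} satisfies {\chi(p) = -1}, so {\chi} agrees with the Liouville function at all of these primes, and exceptional primes are genuinely scarce at small scales.

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd positive n, via the standard reciprocity algorithm."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:  # pull out factors of 2 using (2/n)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a        # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def chi(p, d=-163):
    """Kronecker symbol (d/p) at a prime p; d = -163 is an illustrative discriminant."""
    if p == 2:
        return 0 if d % 2 == 0 else (1 if d % 8 in (1, 7) else -1)
    return jacobi(d % p, p)

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p, is_p in enumerate(sieve) if is_p]

# Exceptional primes p <= 200, i.e. those with chi(p) != -1.
exceptional = [p for p in primes_up_to(200) if chi(p) != -1]
print(exceptional)  # the first exceptional prime is 41
```

(Of course, a genuine Siegel zero would require this behaviour to persist over a much longer range of primes than any fixed discriminant can provide; the snippet only illustrates the phenomenon being quantified by (1).)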

One of the early influential results in this area was the following result of Heath-Brown, which I previously blogged about here:

Theorem 1 (Hardy-Littlewood assuming Siegel zero) Let {h} be a fixed natural number. Suppose one has a Siegel zero {\beta} associated to some conductor {q_\chi}. Then we have

\displaystyle  \sum_{n \leq x} \Lambda(n) \Lambda(n+h) = ({\mathfrak S} + O( \frac{1}{\log\log \eta} )) x

for all {q_\chi^{250} \leq x \leq q_\chi^{300}}, where {\Lambda} is the von Mangoldt function and {{\mathfrak S}} is the singular series

\displaystyle  {\mathfrak S} = \prod_{p|h} \frac{p}{p-1} \prod_{p \nmid h} (1 - \frac{1}{(p-1)^2})

In particular, Heath-Brown showed that if there are infinitely many Siegel zeroes, then there are also infinitely many twin primes, with the correct asymptotic predicted by the Hardy-Littlewood prime tuple conjecture at infinitely many scales.
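The singular series in Theorem 1 is an absolutely convergent Euler product, so it can be approximated numerically by simple truncation. The following sketch (the cutoff {10^5} is an arbitrary illustrative choice) evaluates it; for {h = 2} one recovers twice the twin prime constant, approximately {1.3203}.

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p, is_p in enumerate(sieve) if is_p]

def singular_series(h, cutoff=10**5):
    """Truncation of prod_{p|h} p/(p-1) * prod_{p∤h} (1 - 1/(p-1)^2) at p <= cutoff."""
    s = 1.0
    for p in primes_up_to(cutoff):
        if h % p == 0:
            s *= p / (p - 1)
        else:
            s *= 1 - 1 / (p - 1) ** 2
    return s

print(singular_series(2))  # roughly 1.3203, twice the twin prime constant
```

Note that for odd {h} the factor at {p = 2} vanishes, correctly reflecting that {n} and {n+h} cannot both be (odd) primes infinitely often in that case.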

Very recently, Chinis established an analogous result for the Chowla conjecture (building upon earlier work of Germán and Kátai):

Theorem 2 (Chowla assuming Siegel zero) Let {h_1,\dots,h_\ell} be distinct fixed natural numbers. Suppose one has a Siegel zero {\beta} associated to some conductor {q_\chi}. Then one has

\displaystyle  \sum_{n \leq x} \lambda(n+h_1) \dots \lambda(n+h_\ell) \ll \frac{x}{(\log\log \eta)^{1/2} (\log \eta)^{1/12}}

in the range {q_\chi^{10} \leq x \leq q_\chi^{\log\log \eta / 3}}, where {\lambda} is the Liouville function.
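The cancellation that Theorem 2 asserts (conditionally) can at least be observed numerically at small scales. The following unconditional sketch, not taken from the paper, computes the Liouville function by a smallest-prime-factor sieve and checks that the normalized two-point correlation {\frac{1}{x} \sum_{n \leq x} \lambda(n) \lambda(n+1)} is small, consistent with the Chowla conjecture's prediction that it tends to zero.

```python
def liouville_up_to(n):
    """Return lam with lam[k] = lambda(k), via a smallest-prime-factor sieve."""
    spf = list(range(n + 1))  # spf[k] = smallest prime factor of k
    for i in range(2, int(n**0.5) + 1):
        if spf[i] == i:  # i is prime
            for j in range(i * i, n + 1, i):
                if spf[j] == j:
                    spf[j] = i
    lam = [0, 1] + [0] * (n - 1)
    for k in range(2, n + 1):
        lam[k] = -lam[k // spf[k]]  # lambda is completely multiplicative, lambda(p) = -1
    return lam

N = 10**5
lam = liouville_up_to(N + 1)
corr = sum(lam[n] * lam[n + 1] for n in range(1, N + 1)) / N
print(corr)  # small in absolute value
```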

In our paper we unify these results and also improve the quantitative estimates and range of {x}:

Theorem 3 (Hardy-Littlewood-Chowla assuming Siegel zero) Let {h_1,\dots,h_k,h'_1,\dots,h'_\ell} be distinct fixed natural numbers with {k \leq 2}. Suppose one has a Siegel zero {\beta} associated to some conductor {q_\chi}. Then one has

\displaystyle  \sum_{n \leq x} \Lambda(n+h_1) \dots \Lambda(n+h_k) \lambda(n+h'_1) \dots \lambda(n+h'_\ell)

\displaystyle = ({\mathfrak S} + O_\varepsilon( \frac{1}{\log^{1/(10\max(1,k))} \eta} )) x

for

\displaystyle  q_\chi^{10k+\frac{1}{2}+\varepsilon} \leq x \leq q_\chi^{\eta^{1/2}}

for any fixed {\varepsilon>0}.

Our argument proceeds by a series of steps in which we replace {\Lambda} and {\lambda} by more complicated looking, but also more tractable, approximations, until the correlation is one that can be computed in a tedious but straightforward fashion by known techniques. More precisely, the steps are as follows:

  • (i) Replace the Liouville function {\lambda} with an approximant {\lambda_{\mathrm{Siegel}}}, which is a completely multiplicative function that agrees with {\lambda} at small primes and agrees with {\chi} at large primes.
  • (ii) Replace the von Mangoldt function {\Lambda} with an approximant {\Lambda_{\mathrm{Siegel}}}, which is the Dirichlet convolution {\chi * \log} multiplied by a Selberg sieve weight {\nu} to essentially restrict that convolution to almost primes.
  • (iii) Replace {\lambda_{\mathrm{Siegel}}} with a more complicated truncation {\lambda_{\mathrm{Siegel}}^\sharp} which has the structure of a “Type I sum”, and which agrees with {\lambda_{\mathrm{Siegel}}} on numbers that have a “typical” factorization.
  • (iv) Replace the approximant {\Lambda_{\mathrm{Siegel}}} with a more complicated approximant {\Lambda_{\mathrm{Siegel}}^\sharp} which has the structure of a “Type I sum”.
  • (v) Now that all terms in the correlation have been replaced with tractable Type I sums, use standard Euler product calculations and Fourier analysis, similar in spirit to the proof of the pseudorandomness of the Selberg sieve majorant for the primes in this paper of Ben Green and myself, to evaluate the correlation to high accuracy.
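
The approximant in step (i) is simple enough to sketch in code. In the sketch below, the threshold {z = 100} between “small” and “large” primes and the reuse of the illustrative character {(-163/\cdot)} are purely hypothetical choices made for demonstration; the actual cutoff in the paper depends on {q_\chi}.

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd positive n."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def chi(p, d=-163):
    """Kronecker symbol (d/p) at a prime p; d = -163 is illustrative only."""
    if p == 2:
        return 0 if d % 2 == 0 else (1 if d % 8 in (1, 7) else -1)
    return jacobi(d % p, p)

def lambda_siegel(n, z=100):
    """Completely multiplicative approximant to lambda: takes the value
    lambda(p) = -1 at primes p < z and the value chi(p) at primes p >= z."""
    val = 1
    p = 2
    while p * p <= n:
        while n % p == 0:
            val *= -1 if p < z else chi(p)
            n //= p
        p += 1
    if n > 1:  # leftover prime factor
        val *= -1 if n < z else chi(n)
    return val
```

By construction {\lambda_{\mathrm{Siegel}}} agrees with {\lambda} on {z}-smooth numbers, while on numbers built from large primes it inherits the multiplicative structure of {\chi}, which is what makes it tractable in the later steps.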

Steps (i) and (ii) proceed mainly through estimates such as (1) and standard sieve theory bounds. Step (iii) relies primarily on estimates for the number of smooth numbers of a given size.

The restriction {k \leq 2} in our main theorem is needed only to execute step (iv) of this strategy. Roughly speaking, the Siegel approximant {\Lambda_{\mathrm{Siegel}}} to {\Lambda} is a twisted, sieved version of the divisor function {\tau}, and the types of correlation one is faced with at the start of step (iv) are a more complicated version of the divisor correlation sum

\displaystyle  \sum_{n \leq x} \tau(n+h_1) \dots \tau(n+h_k).

For {k=1} this sum can be easily controlled by the Dirichlet hyperbola method. For {k=2} one needs the fact that {\tau} has a level of distribution greater than {1/2}; in fact, Kloosterman sum bounds give a level of distribution of {2/3}, a folklore fact that seems to have first been observed by Linnik and Selberg. We use a (notationally more complicated) version of this argument to treat the sums arising in (iv) for {k \leq 2}. Unfortunately, for {k > 2} there are no known techniques to unconditionally obtain asymptotics, even for the model sum

\displaystyle  \sum_{n \leq x} \tau(n) \tau(n+1) \tau(n+2),

although we do at least have fairly convincing conjectures as to what the asymptotics should be. Because of this, it seems unlikely that one will be able to relax the {k \leq 2} hypothesis in our main theorem at our current level of understanding of analytic number theory.
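For readers unfamiliar with the hyperbola method invoked in the {k=1} case, here is a minimal sketch of it in its most classical form: the identity {\sum_{n \leq x} \tau(n) = 2\sum_{d \leq \sqrt{x}} \lfloor x/d \rfloor - \lfloor \sqrt{x} \rfloor^2}, which exploits the symmetry of the hyperbola {dm \leq x} to reduce the work from {O(x)} divisor evaluations to {O(\sqrt{x})} operations.

```python
import math

def tau_summatory(x):
    """Sum of tau(n) for n <= x via the Dirichlet hyperbola method, O(sqrt(x)) steps."""
    r = math.isqrt(x)
    return 2 * sum(x // d for d in range(1, r + 1)) - r * r

def tau_summatory_naive(x):
    """Direct count of lattice points d*m <= x, for comparison: O(x) steps."""
    return sum(x // d for d in range(1, x + 1))

print(tau_summatory(10**6))
```

The correlation sums actually needed in step (iv) are twisted and sieved versions of this, but the same divide-the-hyperbola idea underlies the {k=1} analysis.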

Step (v) is a tedious but straightforward sieve theoretic computation, similar in many ways to the correlation estimates of Goldston and Yildirim used in their work on small gaps between primes (as discussed for instance here), and then also used by Ben Green and myself to locate arithmetic progressions in primes.