
William Banks, Kevin Ford, and I have just uploaded to the arXiv our paper “Large prime gaps and probabilistic models“. In this paper we introduce a random model to help understand the connection between two well known conjectures regarding the primes ${{\mathcal P} := \{2,3,5,\dots\}}$, the Cramér conjecture and the Hardy-Littlewood conjecture:

Conjecture 1 (Cramér conjecture) If ${x}$ is a large number, then the largest prime gap ${G_{\mathcal P}(x) := \sup_{p_n, p_{n+1} \leq x} p_{n+1}-p_n}$ in ${[1,x]}$ is of size ${\asymp \log^2 x}$. (Granville refines this conjecture to ${\gtrsim \xi \log^2 x}$, where ${\xi := 2e^{-\gamma} = 1.1229\dots}$. Here we use the asymptotic notation ${X \gtrsim Y}$ for ${X \geq (1-o(1)) Y}$, ${X \sim Y}$ for ${X \gtrsim Y \gtrsim X}$, ${X \gg Y}$ for ${X \geq C^{-1} Y}$ for an absolute constant ${C>0}$, and ${X \asymp Y}$ for ${X \gg Y \gg X}$.)
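As a quick numerical illustration of the scale in Conjecture 1, the following sketch computes ${G_{\mathcal P}(x)}$ by direct sieving and compares it to ${\log^2 x}$ (the range ${x = 10^6}$ is an arbitrary illustrative choice):

```python
import math

def primes_up_to(x: int) -> list:
    """Sieve of Eratosthenes: all primes <= x."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, x + 1, p)))
    return [n for n in range(2, x + 1) if sieve[n]]

def largest_gap(x: int) -> int:
    """G_P(x): the largest gap p_{n+1} - p_n with both primes <= x."""
    ps = primes_up_to(x)
    return max(q - p for p, q in zip(ps, ps[1:]))

x = 10**6
G = largest_gap(x)
xi = 2 * math.exp(-0.5772156649015329)  # Granville's constant 2e^{-gamma} = 1.1229...
print(G, G / math.log(x) ** 2, xi)  # the observed ratio G / log^2 x stays below xi
```

At this range the ratio ${G_{\mathcal P}(x)/\log^2 x}$ is well below ${1}$, consistent with the numerics discussed later in this post.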

Conjecture 2 (Hardy-Littlewood conjecture) If ${\mathcal{H} := \{h_1,\dots,h_k\}}$ are fixed distinct integers, then the number of numbers ${n \in [1,x]}$ with ${n+h_1,\dots,n+h_k}$ all prime is ${({\mathfrak S}(\mathcal{H}) +o(1)) \int_2^x \frac{dt}{\log^k t}}$ as ${x \rightarrow \infty}$, where the singular series ${{\mathfrak S}(\mathcal{H})}$ is defined by the formula

$\displaystyle {\mathfrak S}(\mathcal{H}) := \prod_p \left( 1 - \frac{|{\mathcal H} \hbox{ mod } p|}{p}\right) (1-\frac{1}{p})^{-k}.$

(One can view these conjectures as modern versions of two of the classical Landau problems, namely Legendre’s conjecture and the twin prime conjecture respectively.)
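The singular series is straightforward to compute numerically; here is a minimal sketch, with the product truncated at a finite cutoff (an illustrative choice, since the infinite product converges quickly):

```python
import math

def primes_up_to(x):
    """Trial-division prime list (adequate for the small cutoffs used here)."""
    return [p for p in range(2, x + 1)
            if all(p % q for q in range(2, int(p**0.5) + 1))]

def singular_series(H, cutoff):
    """Truncation of S(H) = prod_p (1 - |H mod p|/p) (1 - 1/p)^{-k} at p <= cutoff."""
    k = len(H)
    prod = 1.0
    for p in primes_up_to(cutoff):
        residues = len({h % p for h in H})  # |H mod p|
        prod *= (1 - residues / p) * (1 - 1 / p) ** (-k)
    return prod

# Twin prime tuple {0, 2}: the product converges to twice the twin prime
# constant, 2 * 0.66016... = 1.32032...
print(singular_series([0, 2], 10**5))
# An inadmissible tuple such as {0, 1} has vanishing singular series (the
# factor at p = 2 is zero), matching the fact that n and n+1 are not both
# prime beyond 2, 3.
print(singular_series([0, 1], 10**5))
```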

A well known connection between the Hardy-Littlewood conjecture and prime gaps was made by Gallagher. Among other things, Gallagher showed that if the Hardy-Littlewood conjecture was true, then the prime gaps ${p_{n+1}-p_n}$ with ${n \leq x}$ were asymptotically distributed according to an exponential distribution of mean ${\log x}$, in the sense that

$\displaystyle | \{ n: p_n \leq x, p_{n+1}-p_n \geq \lambda \log x \}| = (e^{-\lambda}+o(1)) \frac{x}{\log x} \ \ \ \ \ (1)$

as ${x \rightarrow \infty}$ for any fixed ${\lambda \geq 0}$. Roughly speaking, the way this is established is by using the Hardy-Littlewood conjecture to control the mean values of ${\binom{|{\mathcal P} \cap (p_n, p_n + \lambda \log x)|}{k}}$ for fixed ${k,\lambda}$, where ${p_n}$ ranges over the primes in ${[1,x]}$. The relevance of these quantities arises from the Bonferroni inequalities (or “Brun pure sieve“), which can be formulated as the assertion that

$\displaystyle 1_{N=0} \leq \sum_{k=0}^K (-1)^k \binom{N}{k}$

when ${K}$ is even and

$\displaystyle 1_{N=0} \geq \sum_{k=0}^K (-1)^k \binom{N}{k}$

when ${K}$ is odd, for any natural number ${N}$; setting ${N := |{\mathcal P} \cap (p_n, p_n + \lambda \log x)|}$ and taking means, one then gets upper and lower bounds for the probability that the interval ${(p_n, p_n + \lambda \log x)}$ is free of primes. The most difficult step is to control the mean values of the singular series ${{\mathfrak S}(\mathcal{H})}$ as ${{\mathcal H}}$ ranges over ${k}$-tuples in a fixed interval such as ${[0, \lambda \log x]}$.
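The Bonferroni inequalities are elementary to verify by brute force; a small sketch (the ranges for ${N}$ and ${K}$ are arbitrary):

```python
from math import comb

def bonferroni_partial_sum(N: int, K: int) -> int:
    """Partial sum sum_{k=0}^K (-1)^k C(N,k) of the inclusion-exclusion
    expansion of the indicator 1_{N=0}."""
    return sum((-1) ** k * comb(N, k) for k in range(K + 1))

# Even K truncations overestimate 1_{N=0}; odd K truncations underestimate it.
for N in range(12):
    indicator = 1 if N == 0 else 0
    for K in range(12):
        s = bonferroni_partial_sum(N, K)
        assert (indicator <= s) if K % 2 == 0 else (indicator >= s)
print("Bonferroni bounds hold for all N, K < 12")
```

(The identity ${\sum_{k=0}^K (-1)^k \binom{N}{k} = (-1)^K \binom{N-1}{K}}$ for ${N \geq 1}$ explains why the sign of the error alternates with the parity of ${K}$.)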

Heuristically, if one extrapolates the asymptotic (1) to the regime ${\lambda \asymp \log x}$, one is then led to Cramér’s conjecture, since the right-hand side of (1) falls below ${1}$ when ${\lambda}$ is significantly larger than ${\log x}$. However, this is not a rigorous derivation of Cramér’s conjecture from the Hardy-Littlewood conjecture, since Gallagher’s computations only establish (1) for fixed choices of ${\lambda}$; this is only enough to give the far weaker bound ${G_{\mathcal P}(x) / \log x \rightarrow \infty}$, which was already known (see this previous paper for a discussion of the best known unconditional lower bounds on ${G_{\mathcal P}(x)}$). An inspection of the argument shows that if one wished to extend (1) to parameter choices ${\lambda}$ that were allowed to grow with ${x}$, then one would need as input a stronger version of the Hardy-Littlewood conjecture in which the length ${k}$ of the tuple ${{\mathcal H} = (h_1,\dots,h_k)}$, as well as the magnitudes of the shifts ${h_1,\dots,h_k}$, were also allowed to grow with ${x}$. Our initial objective in this project was then to quantify exactly what strengthening of the Hardy-Littlewood conjecture would be needed to rigorously imply Cramér’s conjecture. The precise results are technical, but roughly we show results of the following form:

Theorem 3 (Large gaps from Hardy-Littlewood, rough statement)

• If the Hardy-Littlewood conjecture is uniformly true for ${k}$-tuples of length ${k \ll \frac{\log x}{\log\log x}}$, and with shifts ${h_1,\dots,h_k}$ of size ${O( \log^2 x )}$, with a power savings in the error term, then ${G_{\mathcal P}(x) \gg \frac{\log^2 x}{\log\log x}}$.
• If the Hardy-Littlewood conjecture is “true on average” for ${k}$-tuples of length ${k \ll \frac{y}{\log x}}$ and shifts ${h_1,\dots,h_k}$ of size ${y}$ for all ${\log x \leq y \leq \log^2 x \log\log x}$, with a power savings in the error term, then ${G_{\mathcal P}(x) \gg \log^2 x}$.

In particular, we can recover Cramér’s conjecture given a sufficiently powerful version of the Hardy-Littlewood conjecture “on the average”.

Our proof of this theorem proceeds more or less along the same lines as Gallagher’s calculation, but now with ${k}$ allowed to grow slowly with ${x}$. Again, the main difficulty is to accurately estimate average values of the singular series ${{\mathfrak S}({\mathcal H})}$. Here we found it useful to switch to a probabilistic interpretation of this series. For technical reasons it is convenient to work with a truncated, unnormalised version

$\displaystyle V_{\mathcal H}(z) := \prod_{p \leq z} \left( 1 - \frac{|{\mathcal H} \hbox{ mod } p|}{p} \right)$

of the singular series, for a suitable cutoff ${z}$; it turns out that when studying prime tuples of size ${t}$, the most convenient cutoff ${z(t)}$ is the “Pólya magic cutoff“, defined as the largest prime for which

$\displaystyle \prod_{p \leq z(t)}(1-\frac{1}{p}) \geq \frac{1}{\log t} \ \ \ \ \ (2)$

(this is well defined for ${t \geq e^2}$); by Mertens’ theorem, we have ${z(t) \sim t^{1/e^\gamma}}$. One can interpret ${V_{\mathcal H}(z)}$ probabilistically as

$\displaystyle V_{\mathcal H}(z) = \mathbf{P}( {\mathcal H} \subset \mathcal{S}_z )$

where ${\mathcal{S}_z \subset {\bf Z}}$ is the randomly sifted set of integers formed by removing one residue class ${a_p \hbox{ mod } p}$ uniformly at random for each prime ${p \leq z}$. The Hardy-Littlewood conjecture can be viewed as an assertion that the primes ${{\mathcal P}}$ behave in some approximate statistical sense like the random sifted set ${\mathcal{S}_z}$, and one can prove the above theorem by using the Bonferroni inequalities both for the primes ${{\mathcal P}}$ and for the random sifted set, and comparing the two (using an even ${K}$ for the sifted set and an odd ${K}$ for the primes in order to be able to combine the two together to get a useful bound).
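One can test this probabilistic interpretation numerically: compute the Pólya magic cutoff from (2), and compare a Monte Carlo estimate of ${\mathbf{P}( {\mathcal H} \subset \mathcal{S}_z )}$ against the truncated product ${V_{\mathcal H}(z)}$. A minimal sketch (the tuple, cutoff, and trial count are illustrative choices):

```python
import math
import random

def primes_up_to(x):
    return [p for p in range(2, int(x) + 1)
            if all(p % q for q in range(2, int(p**0.5) + 1))]

def polya_cutoff(t: float) -> int:
    """Largest prime z with prod_{p <= z} (1 - 1/p) >= 1/log t (needs t >= e^2)."""
    target = 1 / math.log(t)
    prod, z = 1.0, None
    for p in primes_up_to(t):  # z(t) ~ t^{1/e^gamma} < t, so this range suffices
        prod *= 1 - 1 / p
        if prod < target:
            break
        z = p
    return z

def V(H, z):
    """Truncated, unnormalised singular series prod_{p <= z} (1 - |H mod p|/p)."""
    return math.prod(1 - len({h % p for h in H}) / p for p in primes_up_to(z))

def sift_probability(H, z, trials, rng):
    """Monte Carlo estimate of P(H in S_z): delete one uniformly random
    residue class a_p mod p for each prime p <= z, and check whether H survives."""
    ps = primes_up_to(z)
    hits = 0
    for _ in range(trials):
        if all(all(h % p != a for h in H)
               for p in ps for a in [rng.randrange(p)]):
            hits += 1
    return hits / trials

rng = random.Random(0)
H, z = [0, 2, 6], 13
print(V(H, z), sift_probability(H, z, 20000, rng))  # the two should be close
print(polya_cutoff(100))  # Mertens: z(t) is roughly t^{1/e^gamma} = t^{0.5615...}
```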

The proof of Theorem 3 ended up not using any properties of the set of primes ${{\mathcal P}}$ other than that this set obeyed some form of the Hardy-Littlewood conjectures; the theorem remains true (with suitable notational changes) if this set were replaced by any other set obeying the same form of these conjectures. In order to convince ourselves that our theorem was not vacuous due to our version of the Hardy-Littlewood conjecture being too strong to be true, we then started exploring the question of coming up with random models of ${{\mathcal P}}$ which obeyed various versions of the Hardy-Littlewood and Cramér conjectures.

This line of inquiry was started by Cramér, who introduced what we now call the Cramér random model ${{\mathcal C}}$ of the primes, in which each natural number ${n \geq 3}$ is selected for membership in ${{\mathcal C}}$ with an independent probability of ${1/\log n}$. This model matches the primes well in some respects; for instance, it almost surely obeys the “Riemann hypothesis”

$\displaystyle | {\mathcal C} \cap [1,x] | = \int_2^x \frac{dt}{\log t} + O( x^{1/2+o(1)})$

and Cramér also showed that the largest gap ${G_{\mathcal C}(x)}$ was almost surely ${\sim \log^2 x}$. On the other hand, it does not obey the Hardy-Littlewood conjecture; more precisely, it obeys a simplified variant of that conjecture in which the singular series ${{\mathfrak S}({\mathcal H})}$ is absent.
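A direct simulation of the Cramér model is straightforward; the sketch below draws a few samples of ${{\mathcal C}}$ and reports the normalised gap ${G_{\mathcal C}(x)/\log^2 x}$ (the range and sample count are arbitrary, and the almost sure convergence of this ratio to ${1}$ is slow):

```python
import math
import random

def cramer_sample(x: int, rng) -> list:
    """One draw of the Cramer model: each n in [3, x] is included
    independently with probability 1/log n."""
    return [n for n in range(3, x + 1) if rng.random() < 1 / math.log(n)]

def largest_gap(points) -> int:
    """Largest gap between consecutive elements of a sorted list."""
    return max(b - a for a, b in zip(points, points[1:]))

rng = random.Random(0)
x = 10**6
for _ in range(3):
    c = cramer_sample(x, rng)
    print(len(c), largest_gap(c) / math.log(x) ** 2)
```

The counts ${|{\mathcal C} \cap [1,x]|}$ concentrate tightly around ${\int_2^x dt/\log t}$, in line with the almost sure “Riemann hypothesis” for this model, while the normalised largest gap fluctuates noticeably from sample to sample.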

Granville proposed a refinement ${{\mathcal G}}$ to Cramér’s random model ${{\mathcal C}}$ in which one first sieves out (in each dyadic interval ${[x,2x]}$) all residue classes ${0 \hbox{ mod } p}$ for ${p \leq A}$ for a certain threshold ${A = \log^{1-o(1)} x = o(\log x)}$, and then places each surviving natural number ${n}$ in ${{\mathcal G}}$ with an independent probability ${\frac{1}{\log n} \prod_{p \leq A} (1-\frac{1}{p})^{-1}}$. One can verify that this model obeys the Hardy-Littlewood conjectures, and Granville showed that the largest gap ${G_{\mathcal G}(x)}$ in this model was almost surely ${\gtrsim \xi \log^2 x}$, leading to his conjecture that this bound also was true for the primes. (Interestingly, this conjecture is not yet borne out by numerics; calculations of prime gaps up to ${10^{18}}$, for instance, have shown that ${G_{\mathcal P}(x)}$ never exceeds ${0.9206 \log^2 x}$ in this range. This is not necessarily a conflict, however; Granville’s analysis relies on inspecting gaps in an extremely sparse region of natural numbers that are more devoid of primes than average, and this region is not well explored by existing numerics. See this previous blog post for more discussion of Granville’s argument.)
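To make Granville’s construction concrete, here is a simplified single-dyadic-block simulation. The specific threshold ${A = \log x / \log\log x}$ used below is one illustrative instance of a ${\log^{1-o(1)} x}$ threshold, not necessarily the choice made in Granville’s paper:

```python
import math
import random

def small_primes(A: float) -> list:
    return [p for p in range(2, int(A) + 1)
            if all(p % q for q in range(2, p))]

def granville_block(x: int, rng) -> list:
    """One draw of a simplified Granville model on the dyadic block [x, 2x]:
    sieve out 0 mod p for p <= A, then keep each survivor n with probability
    (1/log n) * prod_{p <= A} (1 - 1/p)^{-1}."""
    A = math.log(x) / math.log(math.log(x))  # illustrative threshold choice
    ps = small_primes(A)
    boost = math.prod(1 / (1 - 1 / p) for p in ps)
    return [n for n in range(x, 2 * x + 1)
            if all(n % p for p in ps) and rng.random() < boost / math.log(n)]

rng = random.Random(0)
x = 10**6
g = granville_block(x, rng)
print(len(g), x / math.log(x))  # density matches the prime number theorem scale
```

The boosted inclusion probability exactly compensates for the density lost to the sieving, so the model keeps the density ${1/\log n}$ predicted by the prime number theorem while acquiring the correct local (singular series) statistics for small primes.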

However, Granville’s model does not produce a power savings in the error term of the Hardy-Littlewood conjectures, mostly due to the need to truncate the singular series at the logarithmic cutoff ${A}$. After some experimentation, we were able to produce a tractable random model ${{\mathcal R}}$ for the primes which obeyed the Hardy-Littlewood conjectures with power savings, and which reproduced Granville’s gap prediction of ${\gtrsim \xi \log^2 x}$ (we also get an upper bound of ${\lesssim \xi \log^2 x \frac{\log\log x}{2 \log\log\log x}}$ for both models, though we expect the lower bound to be closer to the truth); to us, this strengthens the case for Granville’s version of Cramér’s conjecture. The model can be described as follows. We select one residue class ${a_p \hbox{ mod } p}$ uniformly at random for each prime ${p}$, and as before we let ${\mathcal{S}_z}$ be the sifted set of integers formed by deleting the residue classes ${a_p \hbox{ mod } p}$ with ${p \leq z}$. We then set

$\displaystyle {\mathcal R} := \{ n \geq e^2: n \in \mathcal{S}_{z(n)}\}$

with ${z(\cdot)}$ the Pólya magic cutoff defined in (2) (this is the cutoff that gives ${{\mathcal R}}$ a density consistent with the prime number theorem or the Riemann hypothesis). As stated above, we are able to show that almost surely one has

$\displaystyle \xi \log^2 x \lesssim G_{\mathcal R}(x) \lesssim \xi \log^2 x \frac{\log\log x}{2 \log\log\log x} \ \ \ \ \ (3)$

and that the Hardy-Littlewood conjectures hold with power savings for ${k}$ up to ${\log^c x}$ for any fixed ${c < 1}$ and for shifts ${h_1,\dots,h_k}$ of size ${O(\log^c x)}$. This is unfortunately a tiny bit weaker than what Theorem 3 requires (which more or less corresponds to the endpoint ${c=1}$), although there is a variant of Theorem 3 that can use this input to produce a lower bound on gaps in the model ${{\mathcal R}}$ (but it is weaker than the one in (3)). In fact we prove a more precise almost sure asymptotic formula for ${G_{\mathcal R}(x)}$ that involves the optimal bounds for the linear sieve (or interval sieve), in which one deletes one residue class modulo ${p}$ from an interval ${[0,y]}$ for all primes ${p}$ up to a given threshold. The lower bound in (3) relates to the case of deleting the ${0 \hbox{ mod } p}$ residue classes from ${[0,y]}$; the upper bound comes from the delicate analysis of the linear sieve by Iwaniec. Improving on either of the two bounds looks to be quite a difficult problem.
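The model ${{\mathcal R}}$ is also easy to sample, since ${z(n)}$ is nondecreasing in ${n}$ and can be grown incrementally as ${n}$ increases; the sketch below draws one sample and reports its density and normalised largest gap (the range and random seed are arbitrary illustrative choices):

```python
import math
import random

def primes_up_to(x):
    return [p for p in range(2, x + 1)
            if all(p % q for q in range(2, int(p**0.5) + 1))]

def R_sample(x: int, rng) -> list:
    """One draw of the model R on [8, x] (note 8 > e^2): keep n iff it avoids
    the random class a_p mod p for every prime p <= z(n), with z(n) the
    Polya magic cutoff."""
    ps = primes_up_to(int(x ** 0.6) + 10)  # z(x) ~ x^{0.5615}, so this suffices
    a = {p: rng.randrange(p) for p in ps}
    out, active, idx, prod = [], [], 0, 1.0
    for n in range(8, x + 1):
        target = 1 / math.log(n)
        # grow the active prime list: once p qualifies for z(n), it stays
        while idx < len(ps) and prod * (1 - 1 / ps[idx]) >= target:
            prod *= 1 - 1 / ps[idx]
            active.append(ps[idx])
            idx += 1
        if all(n % p != a[p] for p in active):
            out.append(n)
    return out

rng = random.Random(0)
x = 10**5
r = R_sample(x, rng)
gap = max(b - a for a, b in zip(r, r[1:]))
print(len(r), x / math.log(x))   # density at the prime number theorem scale
print(gap / math.log(x) ** 2)    # compare with the prediction (3)
```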

The probabilistic analysis of ${{\mathcal R}}$ is somewhat more complicated than that of ${{\mathcal C}}$ or ${{\mathcal G}}$, as there is now non-trivial coupling between the events ${n \in {\mathcal R}}$ as ${n}$ varies, although moment methods such as the second moment method are still viable and allow one to verify the Hardy-Littlewood conjectures by a lengthy but fairly straightforward calculation. To analyse large gaps, one has to understand the statistical behaviour of a random linear sieve in which one starts with an interval ${[0,y]}$ and randomly deletes a residue class ${a_p \hbox{ mod } p}$ for each prime ${p}$ up to a given threshold. For very small ${p}$ this is handled by the deterministic theory of the linear sieve as discussed above. For medium sized ${p}$, it turns out that there is good concentration of measure thanks to tools such as Bennett’s inequality or Azuma’s inequality, as one can view the sieving process as a martingale or (approximately) as a sum of independent random variables. For larger primes ${p}$, in which only a small number of survivors are expected to be sieved out by each residue class, a direct combinatorial calculation of all possible outcomes (involving the random graph that connects interval elements ${n \in [0,y]}$ to primes ${p}$ if ${n}$ falls in the random residue class ${a_p \hbox{ mod } p}$) turns out to give the best results.
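The concentration phenomenon for the random interval sieve is easy to observe numerically: the survivor count concentrates around its exact mean ${(y+1) \prod_{p \leq z} (1 - 1/p)}$. A minimal sketch (the parameters ${y}$, ${z}$, and the number of repetitions are arbitrary):

```python
import math
import random
import statistics

def primes_up_to(x):
    return [p for p in range(2, x + 1)
            if all(p % q for q in range(2, int(p**0.5) + 1))]

def random_sieve_count(y: int, z: int, rng) -> int:
    """Delete one uniformly random residue class a_p mod p from [0, y]
    for each prime p <= z; return the number of survivors."""
    alive = bytearray([1]) * (y + 1)
    for p in primes_up_to(z):
        a = rng.randrange(p)
        alive[a::p] = bytearray(len(range(a, y + 1, p)))
    return sum(alive)

rng = random.Random(0)
y, z = 10000, 100
mean_exact = (y + 1) * math.prod(1 - 1 / p for p in primes_up_to(z))
samples = [random_sieve_count(y, z, rng) for _ in range(200)]
print(statistics.mean(samples), mean_exact)  # empirical mean tracks the exact mean
print(statistics.pstdev(samples))            # fluctuations much smaller than the mean
```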