Kaisa Matomaki, Maksym Radziwill, and I have uploaded to the arXiv our paper “Correlations of the von Mangoldt and higher divisor functions I. Long shift ranges“, submitted to Proceedings of the London Mathematical Society. This paper is concerned with the estimation of correlations such as

$$\sum_{n \leq x} \Lambda(n) \Lambda(n+h) \ \ \ \ \ (1)$$

for medium-sized $h$ and large $x$, where $\Lambda$ is the von Mangoldt function; we also consider variants of this sum in which one of the von Mangoldt functions is replaced with a (higher order) divisor function, but for sake of discussion let us focus just on the sum (1). Understanding this sum is very closely related to the problem of finding pairs of primes that differ by $h$; for instance, if one could establish a lower bound

$$\sum_{n \leq x} \Lambda(n) \Lambda(n+2) \gg x,$$
then this would easily imply the twin prime conjecture.
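As a concrete illustration (my own code, not from the paper; the helper names are hypothetical), the correlation (1) can be computed by brute force for small $x$:

```python
import math

def von_mangoldt(n):
    """Lambda(n): log p if n = p^k is a prime power, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            # n was a power of p exactly when nothing else remains
            return math.log(p) if n == 1 else 0.0
    return math.log(n)  # n itself is prime

def correlation(x, h):
    """The sum of Lambda(n) * Lambda(n + h) over n <= x."""
    lam = [von_mangoldt(n) for n in range(x + h + 1)]
    return sum(lam[n] * lam[n + h] for n in range(1, x + 1))
```

For $h = 2$ this is a weighted count of prime (power) pairs; a lower bound of the shape $\gg x$ for this quantity would give infinitely many twin primes.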

The (first) Hardy-Littlewood conjecture asserts an asymptotic

$$\sum_{n \leq x} \Lambda(n) \Lambda(n+h) = \mathfrak{S}(h) x + o(x) \ \ \ \ \ (2)$$

as $x \to \infty$ for any fixed positive $h$, where the *singular series* $\mathfrak{S}(h)$ is an arithmetic factor arising from the irregularity of distribution of $\Lambda$ at small moduli, defined explicitly by

$$\mathfrak{S}(h) := 2 \Pi_2 \prod_{p | h; p > 2} \frac{p-1}{p-2}$$

when $h$ is even, and $\mathfrak{S}(h) := 0$ when $h$ is odd, where

$$\Pi_2 := \prod_{p > 2} \Big( 1 - \frac{1}{(p-1)^2} \Big) = 0.6601\dots$$

is (half of) the twin prime constant. See for instance this previous blog post for a heuristic explanation of this conjecture. From the previous discussion we see that (2) for $h = 2$ would imply the twin prime conjecture. Sieve theoretic methods are only able to provide an upper bound of the form $\sum_{n \leq x} \Lambda(n) \Lambda(n+h) \ll \mathfrak{S}(h) x$.
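The singular series is easy to compute numerically; the following sketch (my own code, with an arbitrary truncation point for the infinite product) recovers the value of the constant to several decimal places:

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

# Truncation of the product defining (half of) the twin prime constant;
# the cutoff 10**6 is an arbitrary choice, ample for 4-5 decimal places.
PI2 = 1.0
for p in primes_up_to(10**6):
    if p > 2:
        PI2 *= 1.0 - 1.0 / (p - 1) ** 2

def singular_series(h):
    """S(h) = 2 * Pi_2 * prod over odd primes p | h of (p-1)/(p-2); 0 for odd h."""
    if h % 2:
        return 0.0
    s = 2.0 * PI2
    m = h
    while m % 2 == 0:
        m //= 2
    p = 3
    while p * p <= m:
        if m % p == 0:
            s *= (p - 1) / (p - 2)
            while m % p == 0:
                m //= p
        p += 2
    if m > 1:  # leftover odd prime factor of h
        s *= (m - 1) / (m - 2)
    return s
```

For instance `singular_series(6)` is twice `singular_series(2)`, reflecting the extra congruence condition modulo $3$.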

Needless to say, apart from the trivial case of odd $h$, there are no values of $h$ for which the Hardy-Littlewood conjecture is known. However there are some results that say that this conjecture holds “on the average”: in particular, if $H = H(x)$ is a quantity depending on $x$ that is somewhat large, there are results that show that (2) holds for most (i.e. for all but $o(H)$) of the $h$ between $0$ and $H$. Ideally one would like to get $H$ as small as possible, in particular one can view the full Hardy-Littlewood conjecture as the endpoint case when $H$ is bounded.

The first results in this direction were by van der Corput and by Lavrik, who established such a result with $H = x$ (with a subsequent refinement by Balog); Wolke lowered $H$ to $x^{5/8+\varepsilon}$, and Mikawa lowered $H$ further to $x^{1/3+\varepsilon}$. The main result of this paper is a further lowering of $H$ to $x^{8/33+\varepsilon}$. In fact (as in the preceding works) we get a better error term than $o(x)$, namely an error of the shape $O_A( x / \log^A x )$ for any $A$.

Our arguments initially proceed along standard lines. One can use the Hardy-Littlewood circle method to express the correlation in (2) as an integral involving exponential sums $S(\alpha) := \sum_{n \leq x} \Lambda(n) e(\alpha n)$. The contribution of “major arc” $\alpha$ is known by a standard computation to recover the main term plus acceptable errors, so it is a matter of controlling the “minor arcs”. After averaging in $h$ and using the Plancherel identity, one is basically faced with establishing a bound of the form

$$\int_{\beta - 1/H}^{\beta + 1/H} |S(\alpha)|^2\ d\alpha \ll \frac{x}{\log^A x}$$

for any “minor arc” $\beta$. If $\beta$ is somewhat close to a low height rational $a/q$ (specifically, if it is within a suitable distance of such a rational with $q$ small), then this type of estimate is roughly of comparable strength (by another application of Plancherel) to the best available prime number theorem in short intervals on the average, namely that the prime number theorem holds for most intervals of the form $[y, y + y^{1/6+\varepsilon}]$, and we can handle this case using standard mean value theorems for Dirichlet series. So we can restrict attention to the “strongly minor arc” case where $\beta$ is far from such rationals.
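The circle-method setup can be sanity-checked discretely (my own illustration, not code from the paper; the choices of $x$, $h$ and the padding are hypothetical): padding the von Mangoldt values to length $N > x + h$ makes the circular autocorrelation computed by the DFT agree exactly with the shifted correlation.

```python
import math
import numpy as np

def von_mangoldt(n):
    """Lambda(n): log p if n = p^k is a prime power, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return math.log(n)

x, h = 1000, 2
N = x + h + 1  # zero padding avoids wraparound in the circular correlation
v = np.zeros(N)
for n in range(1, x + 1):
    v[n] = von_mangoldt(n)

# Discrete analogue of the circle-method integral of |S(alpha)|^2 e(-h alpha),
# with S(alpha) = sum_{n <= x} Lambda(n) e(alpha n), averaged over alpha = k/N.
S = np.fft.fft(v)  # S[k] = sum_n v[n] e(-2 pi i k n / N)
lhs = np.mean(np.abs(S) ** 2 * np.exp(2j * np.pi * np.arange(N) * h / N)).real

# Direct evaluation of the correlation sum.
rhs = sum(v[n] * v[n + h] for n in range(1, x + 1))
```

The two quantities agree up to floating point error, which is the discrete form of the orthogonality computation underlying the circle method.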

The next step (following some ideas we found in a paper of Zhan) is to rewrite this estimate not in terms of the exponential sums $S(\alpha)$, but rather in terms of the Dirichlet polynomial $D(s) := \sum_{n \leq x} \frac{\Lambda(n)}{n^s}$. After a certain amount of computation (including some oscillatory integral estimates arising from stationary phase), one is eventually reduced to the task of establishing a certain mean value estimate for this Dirichlet polynomial, valid for any $\varepsilon > 0$ (with $x$ sufficiently large depending on $\varepsilon$).
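Mean value theorems for Dirichlet polynomials, which are used repeatedly in this kind of argument, assert roughly that $\int_0^T |\sum_{n \leq N} a_n n^{-it}|^2\, dt = (T + O(N)) \sum_{n \leq N} |a_n|^2$, so the diagonal dominates once $T$ is much larger than the length $N$. A quick numerical check (my own sketch; the parameters $N = 50$, $T = 50000$ are arbitrary, and the integral is evaluated in closed form):

```python
import cmath
import math

def von_mangoldt(n):
    """Lambda(n): log p if n = p^k is a prime power, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return math.log(n)

N, T = 50, 50000.0
a = [von_mangoldt(n) for n in range(1, N + 1)]

# Exact evaluation of int_0^T |sum_n a_n n^{-it}|^2 dt: the diagonal terms
# contribute T * sum |a_n|^2, the off-diagonal terms are explicit
# oscillatory factors of size O(1 / |log(m/n)|).
diagonal = T * sum(c * c for c in a)
off = 0.0
for m in range(1, N + 1):
    for n in range(1, N + 1):
        if m != n and a[m - 1] and a[n - 1]:
            L = math.log(m / n)
            off += (a[m - 1] * a[n - 1]
                    * (1 - cmath.exp(-1j * T * L)) / (1j * L)).real
integral = diagonal + off
ratio = integral / diagonal  # close to 1 once T is much larger than N
```

The design point here is that the off-diagonal contribution is bounded independently of $T$, so its relative size decays like $N/T$.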

The next step, which is again standard, is the use of the Heath-Brown identity (as discussed for instance in this previous blog post) to split up $\Lambda$ into a number of components that have a Dirichlet convolution structure. Because the exponent we are shooting for is less than $1/3$, we end up with five types of components that arise, which we call “Type $d_1$“, “Type $d_2$“, “Type $d_3$“, “Type $d_4$“, and “Type II”. The “Type II” sums are Dirichlet convolutions involving a factor supported on a certain medium range and are quite easy to deal with; the “Type $d_j$” terms are Dirichlet convolutions that resemble (non-degenerate portions of) the divisor function $d_j$, formed from convolving together $j$ portions of the Riemann zeta function. The “Type $d_1$” and “Type $d_2$” terms can be estimated satisfactorily by standard moment estimates for Dirichlet polynomials; this already recovers the result of Mikawa (and our argument is in fact slightly more elementary in that no Kloosterman sum estimates are required). It is the treatment of the “Type $d_3$” and “Type $d_4$” sums that requires some new analysis, with the Type $d_3$ terms turning out to be the most delicate. After using an existing moment estimate of Jutila for Dirichlet L-functions, matters reduce to obtaining a family of estimates, a typical one of which (relating to the more difficult Type $d_3$ sums) is of the form

for “typical” ordinates $t$ of size $T$, where $M$ is a Dirichlet polynomial that is a fragment of the Riemann zeta function. The precise definition of “typical” is a little technical (because of the complicated nature of Jutila’s estimate) and will not be detailed here. Such a claim would follow easily from the Lindelof hypothesis (which would imply that such fragments are of size $x^{o(1)}$ on the critical line) but of course we would like to have an unconditional result.
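Stepping back, the Heath-Brown identity mentioned above can be verified numerically for small parameters. It states that for $n \leq x$ and any $K \geq 1$, $\Lambda(n) = \sum_{j=1}^{K} (-1)^{j-1} \binom{K}{j} \big(\mu_{\leq x^{1/K}}^{*j} * \log * 1^{*(j-1)}\big)(n)$, where $\mu_{\leq x^{1/K}}$ is the Mobius function truncated at $x^{1/K}$. The following sketch (my own, with small hypothetical parameters $x = 300$, $K = 3$) checks this identity directly:

```python
import math
from math import comb

X, K = 300, 3                  # small hypothetical parameters for the check
M = int(X ** (1.0 / K))        # Mobius factors truncated at x^{1/K}
assert (M + 1) ** K > X        # ensures the remainder term vanishes for n <= X

def von_mangoldt(n):
    """Lambda(n): log p if n = p^k is a prime power, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return math.log(n)

def mobius(n):
    res, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            res = -res
        d += 1
    return -res if n > 1 else res

def dconv(a, b):
    """Dirichlet convolution of two sequences indexed 1..X."""
    c = [0.0] * (X + 1)
    for i in range(1, X + 1):
        if a[i]:
            for j in range(1, X // i + 1):
                c[i * j] += a[i] * b[j]
    return c

one = [0.0] + [1.0] * X                                # coefficients of zeta
logv = [0.0] + [math.log(n) for n in range(1, X + 1)]  # coefficients of -zeta'
mu_t = [0.0] + [float(mobius(n)) if n <= M else 0.0 for n in range(1, X + 1)]

hb = [0.0] * (X + 1)
for j in range(1, K + 1):
    term = logv[:]
    for _ in range(j - 1):
        term = dconv(term, one)    # convolve with j-1 copies of zeta
    for _ in range(j):
        term = dconv(term, mu_t)   # and with j copies of truncated Mobius
    for n in range(1, X + 1):
        hb[n] += (-1) ** (j - 1) * comb(K, j) * term[n]

max_err = max(abs(hb[n] - von_mangoldt(n)) for n in range(1, X + 1))
```

Each summand in the identity is a convolution of smooth or short factors, which is exactly the structure that makes the Type $d_j$ / Type II decomposition possible.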

At this point, having exhausted all the Dirichlet polynomial estimates that are usefully available, we return to “physical space”. Using some further Fourier-analytic and oscillatory integral computations, we can estimate the left-hand side of (3) by an expression that is roughly of the shape

The phase can be Taylor expanded as the sum of a main monomial term and a lower order term, plus negligible errors. If we could discard the lower order term then we would get quite a good bound using the exponential sum estimates of Robert and Sargos, which control averages of exponential sums with purely monomial phases, with the averaging allowing us to exploit the hypothesis that $t$ is “typical”. Figuring out how to get rid of this lower order term caused some inefficiency in our arguments; the best we could do (after much experimentation) was to use Fourier analysis to shorten the sums, estimate a one-parameter average of exponential sums with a binomial phase by a two-parameter average of exponential sums with a monomial phase, and then use the van der Corput process followed by the estimates of Robert and Sargos. This rather complicated procedure works up to $H = x^{8/33+\varepsilon}$; it may be possible that some alternate way to proceed here could improve the exponent somewhat.
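For readers unfamiliar with the van der Corput process invoked here, its first step rests on the elementary Weyl differencing identity (my notation, not the paper's):

```latex
\Big| \sum_{m=1}^{M} e\big(\phi(m)\big) \Big|^2
  \;=\; \sum_{|\ell| < M} \ \sum_{\substack{1 \le m \le M \\ 1 \le m + \ell \le M}}
      e\big(\phi(m+\ell) - \phi(m)\big).
```

Applied to a binomial phase such as $\phi(m) = \alpha m^{k} + \beta m^{k-1}$, the differenced phase $\phi(m+\ell) - \phi(m)$ has lower degree in $m$, which is one standard way of moving such sums toward the purely monomial situation handled by the Robert-Sargos estimates; the actual exponents and normalizations in the paper are of course more delicate than this schematic.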

In a sequel to this paper, we will use a somewhat different method to reduce $H$ to a much smaller value, but only if we replace the correlations $\sum_{n \leq x} \Lambda(n) \Lambda(n+h)$ by either $\sum_{n \leq x} \Lambda(n) d_k(n+h)$ or $\sum_{n \leq x} d_k(n) d_l(n+h)$, and also we now only obtain an error term of $o(x)$ rather than $O_A( x / \log^A x )$.

## 7 comments


6 July, 2017 at 4:29 am

Anonymous

Dear Terry,

I think there’s a typo concerning the value of the twin prime constant, which is around 0.66016.

[Corrected, thanks -T.]

6 July, 2017 at 6:18 am

Anonymous

Is it possible (by these methods) to improve the new exponent $8/33$? (i.e. is it best possible under current methods?)

6 July, 2017 at 6:45 pm

Terence Tao

I think there is room for further improvement; in particular new bounds on exponential sums will certainly help (e.g. finding a version of the Robert-Sargos estimates that works in the non-monomial cases). We tried half a dozen different things while we were at MSRI together and got a whole range of exponents, of which 8/33 was the best that we could come up with, but perhaps there is some arrangement of the existing tools (or, preferably, the introduction of a new tool) that we overlooked. Conjecturally, of course, the exponent should be able to go all the way down to 0. (This is for instance the case if one assumes the exponent pairs conjecture, or even just the Lindelof hypothesis.)

7 July, 2017 at 11:21 am

David Cole

Good luck with your work on proving the twin prime conjecture! The conjecture is true! Why? Refer to the links below for details.

Note: Proving the twin prime conjecture, or the more general Polignac conjecture, or the Goldbach conjecture, depends on the distribution of prime numbers along the natural number line. And if one can prove the Riemann Hypothesis, then one will have enough knowledge of the prime distribution to prove those conjectures probabilistically. Prove the Riemann Hypothesis and the other results will follow logically.

Reference link:

https://www.quora.com/What-great-conjectures-in-mathematics-combine-additive-theory-of-numbers-with-the-multiplicative-theory-of-numbers/answer/David-Cole-146;

https://www.researchgate.net/post/Why_is_the_Riemann_Hypothesis_true2.

24 September, 2017 at 3:29 am

hxypqr

It is well known that in the article “A quadratic divisor problem” by W. Duke, J. B. Friedlander and H. Iwaniec, they prove a linear estimate of this type, based on A. Weil’s estimate on Ramanujan sums, the $\delta$-method, and an asymptotic formula.

Is it possible to refine the similar estimate using the idea of that method and your powerful estimate for medium-sized $h$ in this blog?

24 September, 2017 at 8:44 am

anonymous

Dear hxypqr,

The short answer is no; the reason why the method of DFI works for the divisor correlation $\sum_{n \leq x} d(n) d(n+h)$ is because $d$ has two smooth variables of size at worst $x^{1/2}$.

27 September, 2017 at 5:09 pm

hxypqr

Thank you. Yes, you are right. From my calculation I understand that the key point is that one does not have good control on the error term; because we are dealing with the bilinear form, the error term is out of control. In fact, if we expect the argument to remain effective, we would need a very good error term estimate; I think that is nearly as hard as proving RH.

But for the main part, the diagonal estimate and off-diagonal estimate are still effective.