There are multiple purposes to this blog post.
The first purpose is to announce the uploading to the arXiv of the paper “New equidistribution estimates of Zhang type, and bounded gaps between primes” by D.H.J. Polymath, which is the main output of the Polymath8a project on bounded gaps between primes, and to describe the main results of this paper below the fold.
The second purpose is to roll over the previous thread on all remaining Polymath8a-related matters (e.g. updates on the submission status of the paper) to a fresh thread. (Discussion of the ongoing Polymath8b project is however being kept on a separate thread, to try to reduce confusion.)
The final purpose of this post is to coordinate the writing of a retrospective article on the Polymath8 experience, which has been solicited for the Newsletter of the European Mathematical Society. I suppose that this could encompass both the Polymath8a and Polymath8b projects, even though the second one is still ongoing (but I think we will soon be entering the endgame there). I think there would be two main purposes of such a retrospective article. The first one would be to tell a story about the process of conducting mathematical research, rather than just describe the outcome of such research; this is an important aspect of the subject which is given almost no attention in most mathematical writing, and it would be good to be able to capture some sense of this process while memories are still relatively fresh. The other would be to draw some tentative conclusions with regards to what the strengths and weaknesses of a Polymath project are, and how appropriate such a format would be for mathematical problems other than bounded gaps between primes. In my opinion, the bounded gaps problem had some fairly unique features that made it particularly amenable to a Polymath project, such as (a) a high level of interest amongst the mathematical community in the problem; (b) a very focused objective (“improve $H_1$!”), which naturally provided an obvious metric to measure progress; (c) the modular nature of the project, which allowed people to focus on one aspect of the problem only and still contribute to the final goal; and (d) a very reasonable level of ambition (for instance, we did not attempt to prove the twin prime conjecture, which in my opinion would make a terrible Polymath project at our current level of mathematical technology). This is not an exhaustive list of helpful features of the problem; I would welcome other diagnoses of the project by other participants.
With these two objectives in mind, I propose a format for the retrospective article consisting of a brief introduction to the Polymath concept in general and the Polymath8 project in particular, followed by a collection of essentially independent contributions by different participants on their own experiences and thoughts, and finally a conclusion section in which we make some general remarks on the project as a whole (such as the remarks above). I’ve started a Dropbox subfolder for this article (currently in a very skeletal outline form only), and will begin writing a section on my own experiences; other participants are of course encouraged to add their own sections (it is probably best to create separate files for these, and then input them into the main file retrospective.tex, to reduce edit conflicts). If there are participants who wish to contribute but do not currently have access to the Dropbox folder, please email me and I will try to have you added; alternatively, you can supply your thoughts by email or in the comments to this post (we may have a section for shorter miscellaneous comments from more casual participants, for people who don’t wish to write a lengthy essay on the subject).
As for deadlines, the EMS Newsletter would like a submitted article by mid-April in order to make the June issue, but in the worst case, it will just be held over until the issue after that.
— 1. Description of Polymath8a results —
Let $H_1$ denote the quantity

$\displaystyle H_1 := \liminf_{n \rightarrow \infty} (p_{n+1} - p_n),$

where $p_n$ denotes the $n^{\mathrm{th}}$ prime. Thus for instance the notorious twin prime conjecture is equivalent to the claim that $H_1 = 2$. However, even establishing the finiteness of $H_1$ unconditionally was an open problem until the celebrated work of Zhang last year, who established the bound

$\displaystyle H_1 \leq 70{,}000{,}000.$
Zhang’s argument, which built upon earlier work of Goldston, Pintz, and Yildirim, can be summarised as follows. For any natural number $k_0$, define an admissible $k_0$-tuple to be a tuple $(h_1,\ldots,h_{k_0})$ of increasing integers which avoids at least one residue class modulo $p$ for each prime $p$. For instance, $(0,2,6)$ is an admissible $3$-tuple, but $(0,2,4)$ is not (it occupies every residue class modulo $3$).
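As a concrete illustration of this definition (a toy sketch of my own, not code from the project), here is a short Python admissibility test. Note that only primes $p \leq k_0$ need to be checked, since a $k_0$-tuple cannot occupy every residue class modulo a prime larger than $k_0$.

```python
from sympy import primerange

def is_admissible(h):
    """Test whether the tuple h of increasing integers is admissible,
    i.e. misses at least one residue class modulo every prime p.
    Only primes p <= len(h) can have all classes occupied, so the
    check is finite."""
    k0 = len(h)
    for p in primerange(2, k0 + 1):
        if len({x % p for x in h}) == p:   # all classes mod p are hit
            return False
    return True

print(is_admissible((0, 2, 6)))   # True: an admissible 3-tuple
print(is_admissible((0, 2, 4)))   # False: covers 0, 1, 2 modulo 3
```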
The Hardy-Littlewood prime tuples conjecture asserts that if $(h_1,\ldots,h_{k_0})$ is an admissible $k_0$-tuple, then there exist infinitely many $n$ such that $n+h_1,\ldots,n+h_{k_0}$ are simultaneously prime. This conjecture is currently out of reach for any $k_0 \geq 2$; for instance, the case when $k_0 = 2$ and the tuple is $(0,2)$ is the twin prime conjecture. However, Zhang was able to prove a weaker claim, which we call $DHL[k_0,2]$, for sufficiently large $k_0$. Specifically (following the notation of Pintz), let $DHL[k_0,2]$ denote the assertion that given any admissible $k_0$-tuple $(h_1,\ldots,h_{k_0})$, one has infinitely many $n$ such that at least two of the $n+h_1,\ldots,n+h_{k_0}$ are prime. It is easy to see that if $DHL[k_0,2]$ holds and $(h_1,\ldots,h_{k_0})$ is an admissible $k_0$-tuple, then $H_1 \leq h_{k_0} - h_1$. So to bound $H_1$, it suffices to show that $DHL[k_0,2]$ holds for some $k_0$, and then to find as narrow an admissible $k_0$-tuple as possible.
Zhang was able to obtain $DHL[k_0,2]$ for $k_0 = 3{,}500{,}000$, and then took the first $k_0$ primes larger than $k_0$ as the admissible $k_0$-tuple, observing that this tuple had diameter at most $70{,}000{,}000$. (Actually, it has diameter $59{,}874{,}594$, as observed by Trudgian.) The earliest phase of the Polymath8a project consisted of using increasingly sophisticated methods to search for narrow admissible tuples of a given cardinality; in the case of this particular $k_0$, we were able to find an admissible tuple whose diameter was $55{,}233{,}504$. On the other hand, an application of the large sieve inequalities shows that admissible $k_0$-tuples asymptotically must have diameter at least $(\frac{1}{2}+o(1)) k_0 \log k_0$ (and we conjecture that the narrowest $k_0$-tuple in fact has diameter $(1+o(1)) k_0 \log k_0$), so there is a definite limit to how much one can improve the bound on $H_1$ purely by finding ever narrower admissible tuples. (As part of the Polymath8a project, a database of narrow tuples was set up here (and is still accepting submissions), building upon previous data of Engelsma.)
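As another toy illustration (again my own sketch, not the project's actual tuple-searching code, which was far more sophisticated), Zhang's original choice of tuple is easy to reproduce: take the first $k_0$ primes exceeding $k_0$. Each such prime avoids the class $0$ modulo every prime $p \leq k_0$, and a $k_0$-tuple cannot cover all classes modulo a prime $p > k_0$, so the tuple is automatically admissible.

```python
from sympy import nextprime

def zhang_tuple(k0):
    """The first k0 primes strictly greater than k0 (Zhang's choice
    of admissible k0-tuple): each entry avoids 0 mod p for every
    prime p <= k0, so admissibility is automatic."""
    h, p = [], k0
    for _ in range(k0):
        p = nextprime(p)
        h.append(p)
    return h

h = zhang_tuple(10)
print(h)                          # [11, 13, 17, 19, 23, 29, 31, 37, 41, 43]
print("diameter:", h[-1] - h[0])  # 32 here; running this (slowly) at
                                  # k0 = 3,500,000 recovers Trudgian's figure
```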
To make further progress, one has to analyse how the result $DHL[k_0,2]$ is proven. Here, Zhang follows the arguments of Goldston, Pintz, and Yildirim, which are based on constructing a sieve function $\nu: {\bf Z} \rightarrow {\bf R}^+$, supported on (say) the interval $[x,2x]$ for a large $x$, such that the sum

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) \ \ \ \ \ (1)$

has good upper bounds, and the sums

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) 1_{n+h_i \hbox{ prime}} \ \ \ \ \ (2)$

have good lower bounds for $i = 1,\ldots,k_0$. Provided that the ratio between the lower and upper bounds is large enough, one can then easily deduce $DHL[k_0,2]$ (essentially from the pigeonhole principle).
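To spell out the pigeonhole step (a routine verification, added here for completeness): if the sums (2) collectively exceed the sum (1), i.e.

$\displaystyle \sum_{i=1}^{k_0} \sum_{x \leq n \leq 2x} \nu(n) 1_{n+h_i \hbox{ prime}} > \sum_{x \leq n \leq 2x} \nu(n),$

then, writing the left-hand side as $\sum_{x \leq n \leq 2x} \nu(n) \# \{ 1 \leq i \leq k_0: n+h_i \hbox{ prime} \}$ and using the non-negativity of $\nu$, there must exist some $n \in [x,2x]$ with $\nu(n) > 0$ for which at least two of the $n+h_1,\ldots,n+h_{k_0}$ are prime; applying this for arbitrarily large $x$ then gives $DHL[k_0,2]$.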
One then needs to find a good choice of $\nu$, which on the one hand is simple enough that the sums (1), (2) can be bounded rigorously, but on the other hand is sophisticated enough that one gets a good ratio between (2) and (1). Goldston, Pintz, and Yildirim eventually settled on a choice essentially of the form

$\displaystyle \nu(n) := \Big( \sum_{d | P(n): d \leq R} \mu(d) \log^{k_0+\ell} \frac{R}{d} \Big)^2$

for some auxiliary parameter $R$ and some $\ell > 0$, where $P(n) := (n+h_1) \cdots (n+h_{k_0})$; this is a variant of the Selberg sieve. With this choice, they were already able to establish upper bounds on $H_1$ as strong as $H_1 \leq 16$ assuming the Elliott-Halberstam conjecture, which asserts that

$\displaystyle \sum_{q \leq x^\theta} \sup_{a: (a,q)=1} \Big| \sum_{x \leq n \leq 2x: n = a\ (q)} \Lambda(n) - \frac{x}{\phi(q)} \Big| \ll_{A,\theta} x \log^{-A} x \ \ \ \ \ (3)$

for all $\theta < 1$ and $A > 0$ (here $\Lambda$ denotes the von Mangoldt function), and to obtain the weaker result

$\displaystyle \liminf_{n \rightarrow \infty} \frac{p_{n+1}-p_n}{\log p_n} = 0$

without this conjecture. Furthermore, any nontrivial progress on the Elliott-Halberstam conjecture (beyond what is provided by the Bombieri-Vinogradov theorem, which covers the case $\theta < 1/2$) would give some finite bound on $H_1$.
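To make the quantity controlled by (3) concrete, here is a small numerical experiment (a toy sketch of my own; the ranges are illustrative only) computing the discrepancy $\max_{a: (a,q)=1} |\psi(x;q,a) - x/\phi(q)|$, where $\psi(x;q,a) := \sum_{n \leq x: n = a\ (q)} \Lambda(n)$, for small $x$ and $q$; the point of Bombieri-Vinogradov and Elliott-Halberstam is that this discrepancy stays small even after summing over many moduli $q$.

```python
import math

def mangoldt_table(x):
    """Lambda(n) for 0 <= n <= x, via a smallest-prime-factor sieve."""
    spf = list(range(x + 1))                 # smallest prime factor
    for i in range(2, int(x ** 0.5) + 1):
        if spf[i] == i:                      # i is prime
            for j in range(i * i, x + 1, i):
                if spf[j] == j:
                    spf[j] = i
    lam = [0.0] * (x + 1)
    for n in range(2, x + 1):
        p, m = spf[n], n
        while m % p == 0:
            m //= p
        if m == 1:                           # n is a prime power p^k
            lam[n] = math.log(p)
    return lam

def discrepancy(x, q, lam):
    """max over residues a coprime to q of |psi(x;q,a) - x/phi(q)|."""
    psi = [0.0] * q
    for n in range(2, x + 1):
        psi[n % q] += lam[n]
    units = [a for a in range(q) if math.gcd(a, q) == 1]
    main_term = x / len(units)               # x / phi(q)
    return max(abs(psi[a] - main_term) for a in units)

x = 10 ** 5
lam = mangoldt_table(x)
for q in (3, 5, 7, 11, 13):
    print(q, round(discrepancy(x, q, lam), 1))
# the discrepancies are tiny compared to the main term x/phi(q)
```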
Even after all the recent progress on bounded gaps, we still do not have any direct progress on the Elliott-Halberstam conjecture (3) for any $\theta \geq 1/2$. However, Zhang (and independently, Motohashi and Pintz) observed that one does not need the full strength of (3) in order to obtain the conclusions of Goldston-Pintz-Yildirim. Firstly, one does not need all residue classes $a$ here, but only those classes that are the roots of a certain polynomial modulo $q$ (in practice, the polynomial $P(n) = (n+h_1) \cdots (n+h_{k_0})$). Secondly, one does not need all moduli $q$ here, but can restrict attention to smooth (or friable) moduli – moduli with no large prime factors – as the error incurred by ignoring all other moduli turns out to be exponentially small in the relevant parameters. With these caveats, Zhang was able to obtain a restricted form of (3) with $\theta$ as large as $\frac{1}{2} + \frac{1}{584}$, which he then used to obtain $DHL[k_0,2]$ for $k_0$ as small as $3{,}500{,}000$.
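Since smoothness (and the weaker notion of dense divisibility that appears in the next paragraph) are simple arithmetic conditions, they are easy to illustrate in code; the following is a minimal sketch of my own, using the definition of $y$-dense divisibility from the Polymath8a paper (every interval $[R/y, R]$ with $1 \leq R \leq n$ should contain a divisor of $n$).

```python
def is_smooth(n, y):
    """True if every prime factor of n is at most y."""
    m, p = n, 2
    while p * p <= m:
        while m % p == 0:
            if p > y:
                return False
            m //= p
        p += 1
    return m <= y                  # leftover m is 1 or a prime factor

def is_densely_divisible(n, y):
    """y-densely divisible: every interval [R/y, R], 1 <= R <= n,
    contains a divisor of n; equivalently, consecutive divisors
    d < d' of n satisfy d' <= y*d."""
    small = [d for d in range(1, int(n ** 0.5) + 1) if n % d == 0]
    divs = sorted(set(small + [n // d for d in small]))
    return all(d2 <= y * d1 for d1, d2 in zip(divs, divs[1:]))

# Smooth numbers are densely divisible, but not conversely:
print(is_smooth(2 * 3 * 5 * 7, 7), is_densely_divisible(2 * 3 * 5 * 7, 7))
print(is_smooth(8 * 31, 10), is_densely_divisible(8 * 31, 10))
```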
Actually, Zhang’s treatment of the truncation error is not optimal, and by being more careful here (and by relaxing the requirement of smooth moduli to the less stringent requirement of “densely divisible” moduli) we were able to reduce $k_0$ down to $341{,}640$. Furthermore, by replacing the monomial $\log^{k_0+\ell} \frac{R}{d}$ with a more flexible cutoff $f( \frac{\log d}{\log R} )$ and then optimising in $f$ (a computation first made in unpublished work of Conrey, and then in the paper of Farkas, Pintz, and Revesz, with the optimal $f$ turning out to come from a Bessel function), one could reduce $k_0$ to be as small as $34{,}429$ (leading to a bound on $H_1$ that ended up being $386{,}344$).
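To give the flavour of how a claimed value of $k_0$ gets certified numerically, here is a rough sketch (my own simplification: it uses the clean Bessel-zero criterion $(1+4\varpi) > \frac{j_{k_0-2}^2}{k_0(k_0-1)}$, where $j_{k_0-2}$ is the first positive zero of the Bessel function $J_{k_0-2}$, and neglects the technical correction term $\kappa$ coming from the truncated sieve, which the actual verification has to track carefully):

```python
from scipy.special import jn_zeros

def dhl_ok(k0, varpi):
    """Simplified DHL[k0,2] criterion (Bessel-zero form, with the
    truncation error kappa neglected, so this is only indicative):
        (1 + 4*varpi) > j_{k0-2}^2 / (k0 * (k0 - 1))."""
    j = jn_zeros(k0 - 2, 1)[0]      # first positive zero of J_{k0-2}
    return (1 + 4 * varpi) > j * j / (k0 * (k0 - 1))

varpi = 7 / 600 - 1e-9              # just below the Polymath8a level
print(dhl_ok(632, varpi))           # True  (borderline)
print(dhl_ok(600, varpi))           # False (k0 too small)
```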
To go beyond this, we had to unpack Zhang’s proof of (a weakened version of) the Elliott-Halberstam type bound (3). His approach follows a well-known sequence of papers by Bombieri, Fouvry, Friedlander, and Iwaniec on various restricted breakthroughs beyond the Bombieri-Vinogradov barrier, although with the key difference that Zhang did not use automorphic form techniques, which (at our current level of understanding) are almost entirely restricted to the regime where the residue class $a$ is fixed (as opposed to varying amongst the roots of a polynomial modulo $q$, which is what is needed for the current application). However, the remaining steps are familiar: first one uses the Heath-Brown identity to decompose (a variant of) the expression in (3) into some simpler bilinear and trilinear sums, which Zhang called “Type I”, “Type II”, and “Type III” (though one should caution that these are slightly different from the “Type I” and “Type II” sums arising from Vaughan-type identities). The Type I and Type II sums turn out to be treatable using a careful combination of the Cauchy-Schwarz inequality (as embodied in tools such as the dispersion method of Linnik), the Polya-Vinogradov method of completion of sums, and estimates on one-dimensional exponential sums (which are variants of Kloosterman sums) which can ultimately be handled by the Riemann hypothesis for curves over finite fields, first established by Weil (and which can, in this particular context, also be proven by the elementary method of Stepanov). The Type III sums can be treated by a variant of these methods, except that one-dimensional exponential sum estimates are insufficient; Zhang instead needed to turn to the three-dimensional exponential sum estimates of Birch and Bombieri to get an adequate amount of cancellation, and these estimates ultimately arose from the deep work of Deligne on the Riemann hypothesis for higher dimensional varieties (see this previous blog post for a discussion of these hypotheses).
In our work, we were able to improve the Cauchy-Schwarz components of these arguments in a number of ways, with the most significant gain coming from applying the “$q$-van der Corput $A$-process” of Graham and Ringrose to the Type I sums; we also have a slightly different way to handle the Type III sums (based on a recent preprint of Fouvry, Kowalski, and Michel), using correlations of hyper-Kloosterman sums (again coming from Deligne’s work), which gives significantly better results for these sums (so much so, in fact, that the Type III sums are no longer the dominant obstruction to further improvement of the numerology). Putting all these computations together, we can stretch Zhang’s improvement to Bombieri-Vinogradov by about an order of magnitude, with $\theta$ now allowed to be as large as $\frac{1}{2} + \frac{7}{300}$ rather than $\frac{1}{2} + \frac{1}{584}$. This leads to a value of $k_0$ as low as $632$, which in turn leads to the bound $H_1 \leq 4680$. These latter bounds have since been improved by Maynard and by Polymath8b, mostly through significant improvements to the sieve-theoretic part of the argument (no longer using any distributional result on the primes beyond the Bombieri-Vinogradov theorem), but the distribution result of Polymath8a is still the best result of its kind known for the primes, and may well have other applications beyond the bounded gaps problem.
Interestingly, the $q$-van der Corput $A$-process is in fact strong enough that we can still get non-trivial bounds of (weakened versions of) the form (3) even if we do not attempt to estimate the Type III sums; in particular, we can obtain a Zhang-type distribution theorem even without using Deligne’s theorems, with $\theta$ now reaching as large as $\frac{1}{2} + \frac{1}{84}$.
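For readers unfamiliar with the $A$-process, the engine underneath is Weyl differencing; in its classical form, for any complex numbers $a_n$ supported on an interval of length $N$ and any $1 \leq H \leq N$, one has

$\displaystyle \Big| \sum_n a_n \Big|^2 \leq \frac{N+H}{H} \sum_{|h| < H} \Big( 1 - \frac{|h|}{H} \Big) \sum_n a_{n+h} \overline{a_n},$

which trades a single long sum for a family of shorter correlation sums. The $q$-van der Corput variant of Graham and Ringrose shifts $n$ by multiples of a factor $q$ of the modulus, so that the differenced phase retains periodicity and the resulting correlations can again be completed and estimated by exponential sum bounds.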
— Comments —
7 February, 2014 at 9:41 am
Lior Silberman
The right parenthesis in the definition of the GPY sieve $\nu$ is missing.
[Corrected, thanks – T.]
7 February, 2014 at 10:15 am
Anonymous
Annals link http://annals.math.princeton.edu/wp-content/uploads/YitangZhang.pdf is broken.
“smooth (or friable moduli – moduli with no small prime factors” – should say no large prime factors.
[Added, thanks – T.]
7 February, 2014 at 10:46 am
Emmanuel Kowalski
I’ll start adding a file with my perspective soon. By the way, in “well-known sequence of papers by Bombieri, Friedlander and Iwaniec”, one should add Fouvry’s name.
[Added, thanks – T.]
7 February, 2014 at 1:40 pm
Mark Bennet
I would suggest that the educational aspect of a public discussion about serious open problems should be mentioned. I am well beyond my sell-by date as a serious mathematician, but this has engaged me throughout. The fact that this is near enough to a high-stakes problem to be interesting, but not obviously the key to glory, seems to have created the dynamic within which significant progress has been made. The bifurcation on 8a, which realised that a significant result could be accessible to a wide audience, reminds me a bit of the way the Prime Number Theorem is tackled in Hardy and Wright. Even before I went to university I could understand each step – the proof made sense – but how anyone ever discovered it was a mystery, only to be revealed at a deeper level of understanding.
On the other hand (I think this was from Tim Gowers) it would be sad to spike a viable doctoral thesis on which someone had been working in this way. Collaboration does not necessarily fit the academic models of credit and tenure on which much research activity is currently predicated.
10 February, 2014 at 10:00 am
arch1
Mark, is the “spiking” comment based on a belief that collaborative efforts are particularly likely to do this, or that *when* they do this, it’s particularly unfortunate, or .. (something else)? (I’m assuming that “spiking” here means roughly “anticipate the results of”; and yes I’m a layperson:-)
7 February, 2014 at 2:26 pm
Richard
May I just say that I approve of “friable”, and hope for wider use of this term, with its precision, non-ambiguity, quasi-poetic feeling in English (as with many terms used by geologists, a field with amateur British gentleman-scholar early history), and, of course, its vanguard role in Kowalski’s “French-derived insurgency”.
7 February, 2014 at 3:52 pm
Eytan Paldi
In (3), the outer summation should be over $q \leq x^{\theta}$.
[Corrected, thanks – T.]
7 February, 2014 at 5:49 pm
HUMBERTO TRIVIÑO
[Translated from Spanish] In (3), emphasize the behaviour of the indeterminate $x$ and take into account the interval or restrictions, following the suggestion of EYTAN PALDI.
10 February, 2014 at 11:38 am
Terence Tao
Here is one sign of the remarkable level of public interest in Yitang Zhang’s story and bounded gaps between primes: http://playground-sf.org/topic/
10 February, 2014 at 11:56 am
arch1
It seemed as though there was concern for a while that in Polymath8b your increasing efforts might only be getting you asymptotically closer to 2. If so, it would be interesting to hear whether you think that the collaborative approach increased not only your capability, but also your gumption.
10 February, 2014 at 12:26 pm
Terence Tao
Here are some retrospectives on previous Polymath (or Polymath-like) projects:
Tim Gowers and Michael Nielsen on Polymath1: http://www.nature.com/nature/journal/v461/n7266/full/461879a.html
Dick Lipton and Ken Regan on the Polymath-like project to analyse a claimed proof of P != NP: http://link.springer.com/chapter/10.1007%2F978-3-642-41422-0_1
Alison Pease and Ursula Martin analysing Mini-Polymath3: http://homepages.inf.ed.ac.uk/apease/papers/seventy-four.pdf
Our format would be a bit different (giving more first-person perspectives), but these prior articles might give some sense of the type of story a retrospective could give.
14 February, 2014 at 7:47 am
Eytan Paldi
In the abstract of the retrospective paper, it should be
(without the decimal point)
[Corrected, thanks – T.]
16 February, 2014 at 6:11 am
Anonymous
Terry, the year is 2013, not 2014, as you wrote in the retrospective paper
[Corrected, thanks – T.]
24 April, 2014 at 3:32 pm
rhaflimarouane
hi,
I developed a new prime-number-finding algorithm and I prove the TWIN PRIME CONJECTURE; the file is in PDF format and can be downloaded from my blog http://rhaflimarouane.wordpress.com/
thanks
19 June, 2014 at 8:37 pm
Polymath8: wrapping up | What's new
[…] the retrospective paper), superseding the previous Polymath8b thread (which was quite full) and the Polymath8a/retrospective thread (which was more or less […]
22 July, 2014 at 11:03 am
Variants of the Selberg sieve, and bounded intervals containing many primes | What's new
[…] which is the second paper to be produced from the Polymath8 project (the first one being discussed here). We’ll refer to this latter paper here as the Polymath8b paper, and the former as the […]
30 August, 2015 at 10:37 am
Polymath 8 | Euclidean Ramsey Theory
[…] There is an uploaded version of 8a. Also there is going to be an article about the experience of working on 8a and possibly 8b for the newsletter of the European Mathematical Society. The deadline is around April. For more check this post. […]