The (presumably) final article arising from the Polymath8 project has now been uploaded to the arXiv as “The ‘bounded gaps between primes’ Polymath project – a retrospective”. This article, submitted to the Newsletter of the European Mathematical Society, consists of personal contributions from ten different participants (at varying stages of career, and intensity of participation) on their own experiences with the project, and some thoughts as to what lessons to draw for any subsequent Polymath projects. (At present, I do not know of any such projects being proposed, but from recent experience I would imagine that some opportunity suitable for a Polymath approach will present itself at some point in the near future.)

This post will also serve as the latest (and probably last) of the Polymath8 threads (rolling over this previous post), to wrap up any remaining discussion about any aspect of this project.


## 108 comments


1 October, 2014 at 5:46 am

crust: typo on p. 4:

but (0, 2, 4) was not -> but (0, 2, 4) is not

[Thanks, this will be corrected in the next revision of the ms – T.]

1 October, 2014 at 6:32 am

crust: p.5: peak -> peek

p.11: didnt -> didn’t

p.11: participants some -> participants; some

p.11: Much if the -> Much of the

p.11: compelling there -> compelling; there

p.12: felt as I needed -> felt as if I needed (or: felt I needed)

p.12: and shortly after -> and shortly thereafter

p.16: was written-up -> was written up

p.17: anybody to bring its -> anybody to bring their

p.17: participation to -> participation in

A fascinating read.

[Thanks, this will be corrected in the next revision of the ms – T.]

1 October, 2014 at 8:37 am

hm: A great read indeed. So basically a polymath project, to be successful, seems to require a blog owner knowledgeable in at least parts of the background material AND ready to shelve other projects. And it seems all participants have greatly benefited from it in various ways. I hope these successes will lead to more high quality blogs coming to life and attempting other projects.

A few topics I can’t resist mentioning: the Hadwiger-Nelson problem, and the Jacobian conjecture.

1 October, 2014 at 11:13 pm

Anonymous: P. 2, l. -6: “” –> “”

P. 3, l. 2: “” –> “”

P. 3, l. 4: “” –> “”

P. 3, l. 6: “” –> “”

[Thanks, this will be corrected in the revised version of the ms. -T.]

1 October, 2014 at 11:31 pm

Anonymous: P. 15, l. 17: “often works…)” –> “often works).” [two things]

P. 15, l. -7: “Ph. Michel and” –> “P. Michel, and” [two things]

P. 15, l. -4: “Michel and I” –> “Michel, and I”

P. 16, l. 1, 6: “Ph. Michel’s” –> “P. Michel’s” [two places]

P. 16, l. 1: “Summer” –> “summer of” [two things]

P. 16, l. -9: “me –my” –> “me — my”

P. 16, l. -7: “Deshouillers-Iwaniec– but” –> “Deshouillers-Iwaniec — but”

P. 16, l. -4: “theory – the” –> “theory — the”

P. 16, l. -3: “progressions – due” –> “progressions — due”

P. 17, l. 13: “ “type III sums” ” –> “ “type III” sums ”

[Thanks, this will be corrected in the revised version of the ms. -T.]

2 October, 2014 at 8:34 am

Irkutsk: p. 7: “computationaly” -> “computationally”

[Thanks, this will be added to the next revision of the ms -T.]

4 October, 2014 at 12:42 am

Eytan Paldi: In Polymath8b paper (page 16), it seems that “” (in theorem 3.14 statement) should be “” (according to its definition in theorem 3.12).

[Corrected, thanks – T.]

4 October, 2014 at 8:18 am

Eytan Paldi: In Polymath8b paper (page 62), in the fourth line above (133) “polynomials ” should be “polyhedra “.

[Corrected, thanks – T.]

4 October, 2014 at 9:07 am

Eytan Paldi: In Polymath8b paper (page 64), in the definitions of for the polytopes , their integrands should be squared (i.e. instead of ).

[Corrected, thanks – T.]

4 October, 2014 at 10:03 am

Eytan Paldi: In Polymath8b paper (page 65), in the second line below the diagram it seems clearer to delete “each of” (which is somewhat confusing – as it may incorrectly be interpreted that each region has 8 integrals).

[Corrected, thanks – T.]

4 October, 2014 at 11:04 am

Eytan Paldi: On this page it also seems clearer to replace

“” (with still undefined meaning) by “” – which is the expression for in the previous page.

5 October, 2014 at 1:57 am

Eytan Paldi: The expression “” should be

“” – in accordance with the expression for in page 64.

The correct expression is implemented (as ““) in the code in “Notes on polytope decomposition”.

[Corrected, thanks – T.]

7 October, 2014 at 11:40 am

Terence Tao: Just heard back from Andrew Granville on the 8a submission. The referees had two final comments on the last revision, one concerning the trace weight section of the paper, and the other concerning the amount of power savings needed for exponential sums. I vaguely recall that any non-trivial amount of power savings would be sufficient to get non-trivial Type I/II sums in the range needed (M,N between and , if one wishes to avoid all use of Type III sums), after a sufficient number of applications of q-van der Corput, but I’ll double check this and make the remark more detailed accordingly. Meanwhile, if someone (Emmanuel or Philippe?) could check the other referee point, that would be great.

—

In Remark 4.3 they still write that they just need a power saving on one-variable exponential sums to get non-trivial estimates, whereas you wrote they need more than p^{1/4} saving. I asked in the referee report whether that remark was correct, and since they didn’t change anything I guess they are sure about it.

It does make sense to me (they only lose logarithmic factors except when they apply Poisson summation — then they lose a power of x the exponent of which tends to 0 with varpi and delta). But it would be desirable to provide more details, such as references to specific points in the argument, for otherwise the remark may be more confusing than helpful.

– (what is now) remark 6.2: In the 1st paragraph, F has to be a middle extension sheaf. In the second one, it should say “generically pure” instead of “punctually pure”

– Theorem 6.5: “where (…) denotes the Tate-twisted coinvariant space” is wrong, it is just the coinvariant space. If it were Tate-twisted, you wouldn’t need to multiply the trace by q.

12 October, 2014 at 12:02 pm

Terence Tao: I’ve just sent the revised version of 8a back to ANT.

9 October, 2014 at 3:03 am

Eytan Paldi: In Polymath8b paper (page 20), the proof of theorem 3.6(i) may be slightly simplified by replacing the asymptotic inequality on (in the fourth line above section 4.2) by the explicit (and stronger) inequality

(which follows directly from (27)).

And also replacing “” in the third line above section 4.2, by ““.

13 October, 2014 at 5:31 pm

Eytan Paldi: In Polymath8b paper, it seems from the proof of theorem 3.5(i) that the strict inequality in (23) can be relaxed to “” (since (23) is used in the proof only to bound by ).

14 October, 2014 at 9:44 am

Anonymous: Retrospective article:

P. 8, l. 19: “” –> “”

15 October, 2014 at 7:16 am

Eytan Paldi: In Polymath8b paper (page 34), in the second line below (80), it seems clearer to insert “on ” after “express ” (because the restriction of to is compactly supported – and therefore can be represented as the uniform limit of the compactly supported functions in (81), but not(!) itself – which is not necessarily compactly supported.)

Similarly, in the ninth line below (81), it is clearer to insert “on ” after “can be expressed”.

[Corrected, thanks – T.]

20 October, 2014 at 8:41 am

Anonymous: In Polymath8b paper (page 59), the numerical upper bounds in table 3 should be rounded up (i.e. increased by ).

[Corrected, thanks – T.]

25 October, 2014 at 7:57 am

Terence Tao: A small application of the Polymath8b methods: assuming GEH, we can get infinitely many pairs n,m of products of exactly two primes that differ by at most 2 (a previous paper of Goldston, Graham, Pintz, and Yildirim obtained a bound of 6 unconditionally). The proof uses the admissible tuple of affine-linear forms, and the results in 8b show that there are infinitely many tuples for which at least two of the elements are prime, which implies that two of are products of exactly two primes, giving the claim.

I think this is the limit of the method, that is to say one should not be able to use the method to find products of two primes that differ by exactly one, although I don’t have a formal parity problem obstruction to back up this assertion. (Of course, one of these products will be twice a prime.)
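As a quick empirical illustration of the objects in play (not the conditional argument itself, which requires GEH), the following sketch lists small pairs of semiprimes (numbers with exactly two prime factors counted with multiplicity) that differ by at most 2:

```python
def big_omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

# semiprimes (products of exactly two primes) up to a small bound
N = 200
semiprimes = [n for n in range(4, N) if big_omega(n) == 2]

# consecutive semiprimes that differ by at most 2
pairs = [(a, b) for a, b in zip(semiprimes, semiprimes[1:]) if b - a <= 2]
print(pairs[:5])  # [(4, 6), (9, 10), (14, 15), (21, 22), (25, 26)]
```

Of course, the hard part of the comment above is the infinitude of such pairs; the sketch only exhibits small examples.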

25 October, 2014 at 11:10 am

Aubrey de Grey: I must be missing something really obvious here, for which I apologise in advance, but surely at most one of 6n+2 and 6n+4 can be a product of two primes, since one of them must be a multiple of 4? Doesn’t that lead to the stronger claim? – i.e. there are infinitely many n for which either both 6n+2 and 6n+3 are products of two primes or both 6n+3 and 6n+4 are?

25 October, 2014 at 11:48 am

Terence Tao: Oops, that was silly of me – the tuple is in fact not admissible after all, since one of and is always even. But if one uses the tuple instead, this is now admissible and one can use to get the products of two primes within 2 of each other.

25 October, 2014 at 12:14 pm

Aubrey de Grey: Whew :-)

Very nice result. Can your method prove corresponding bounds for three (etc) products of two primes? Or for products of three (etc) primes?

Also, is there hope for a milder strengthening in which the difference is exactly 2 rather than at most 2?

25 October, 2014 at 12:47 pm

Terence Tao: Well, since we can get primes in an interval of length , we can certainly get semiprimes (products of exactly two primes) in an interval of length , simply by doubling all the primes in the former interval. One can probably do better than this crude estimate by choosing k-tuples more cleverly, or by arguing as in the GGPY paper and directly counting the density of semiprimes rather than the density of primes in the GPY sieve. But of course the interval is not going to be as small as 2 any more.

I think one can get products of k-primes within 2 apart on GEH by the same method, e.g. one can find infinitely many n such that two of are prime (this requires to be in a certain residue class mod , as per the Chinese remainder theorem, to make these three quantities all integers), and then two of will be the product of exactly three primes.

Getting semiprimes exactly two apart is unfortunately about the same difficulty as the twin prime conjecture (see this paper of Bombieri for a linkage between the two problems assuming EH). Actually there is a loophole: Bombieri shows that getting the _conjectured asymptotic_ for twin primes is equivalent to getting the conjectured asymptotic for twin semiprimes, assuming EH (and normalising the semiprimes by , actually for this claim I think one needs GEH rather than EH). It might be possible though to exhibit a very thin set of twin semiprimes without necessarily having any implications for twin primes. For instance, if there are only finitely many twin semiprimes, then the prime tuples conjecture fails for an infinite family of pairs of linear forms, namely the forms with prime and ; each such instance of the tuples conjecture corresponds to a sparse set of potential twin semiprimes. Unfortunately, it doesn’t look like the GPY method forces the tuples conjecture to hold for at least one of these pairs, but perhaps a future method could do something like this, even if it falls short of forcing the tuples conjecture for a specific pair such as . (For instance, if the graph connecting linear forms together by connecting with whenever prime and contains an odd cycle, then there is hope. Maybe if one replaces semiprimes by products of exactly k primes for some large k then there is a better chance of success.)

25 October, 2014 at 1:59 pm

Terence Tao: OK, thinking about it more, I have an approach to proving the claim “there exist infinitely many semiprimes that differ by exactly 12” on GEH that doesn’t obviously violate the parity obstruction. However, it doesn’t follow directly from the polymath8b sieve, but requires another sieve with a certain property, and one would have to hunt numerically for such a sieve, similar to how we had to work for many weeks to find the sieve that gave . I can replace by any multiple of , but in order to keep admissibility mod 2 and 3 I was not able to make the argument below work for just 2, i.e. I don’t have a route to twin semiprimes this way.

The argument goes like this. Suppose for contradiction that semiprimes never differed by exactly 12. Now consider the admissible tuple . If is prime, then cannot be prime, otherwise and would be semiprimes differing by exactly 12. Similarly, if is prime, then cannot be prime. Also, and cannot both be semiprime. We conclude that if is prime, then at least one of or has to have at least three prime factors counting multiplicity, i.e. or . So if we can find a sieve on such that

then we win. Now the existence of such a sieve is not directly blocked by parity, because we are allowing and to both have an even or odd number of prime factors. However, one still needs to construct . In principle one could use the polymath8b calculations to write down an explicit inequality involving a three-dimensional smooth function F with various support and marginal constraints that one would have to find a solution to, but it's going to be a bit messy.

Perhaps more feasible is to show the existence of some explicit for which one can show (on GEH) that there are infinitely many with . (The above argument is aimed at , but it may end up being better to consider larger values of .) Of course, "explicit" is key: we know for instance that the above assertion is true for at least one of from my previous blog comment, but we don't know which.

25 October, 2014 at 2:09 pm

Terence Tao: Actually, works, i.e. one can find infinitely many pairs of products of three primes that differ by exactly 60. We start with the admissible tuple and conclude that we can find infinitely many n such that two of are prime. If are prime then . If are prime then . If are prime then .

It may be possible to find a smaller pair than (3,60) by some variant of this method.
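As an empirical complement (a sketch only; it says nothing about infinitude), one can search for small pairs of products of exactly three primes differing by exactly 60:

```python
def big_omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

# numbers with exactly three prime factors (with multiplicity) up to a bound
N = 500
triple_products = {n for n in range(8, N) if big_omega(n) == 3}

# pairs of such numbers differing by exactly 60
pairs = sorted((n, n + 60) for n in triple_products if n + 60 in triple_products)
print(pairs[:3])  # [(8, 68), (18, 78), (42, 102)]
```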

26 October, 2014 at 12:43 pm

Eytan Paldi: According to the Wikipedia article, a “Chen prime” is a prime such that is either a prime or a product of two primes. Chen proved (in 1966) that there are infinitely many such primes.

26 October, 2014 at 8:40 am

James Maynard: Using gives consecutive products of exactly two primes differing by 1.

26 October, 2014 at 8:42 am

James Maynard: Sorry, this is rubbish – please ignore!

26 October, 2014 at 11:21 am

Aubrey de Grey: In addition to seeking smaller (k,h) for which (on GEH) there are infinitely many products of exactly k primes differing by exactly h, am I right in saying that the question h=1, k at most 2 [apologies for my capitulation on getting WordPress to treat angle brackets nicely] is still open? In other words, does GEH imply that there are infinitely many pairs (n,n+1) of which one is twice a prime and the other is either a prime or a semiprime? After playing with the case h=1, k=2 for a while I am reasonably convinced that it indeed suffers a parity obstruction, though I can’t crystallise it – Terry, could you possibly outline the basis for your original pessimism on that point? – but the case h=1, k at most 2 intuitively seems like it might be freer of such issues.

26 October, 2014 at 11:49 am

Aubrey de Grey: Ah, indeed this seems to be easy: using the tuple (n, 2n+1, 3n+2) we get the consecutive prime-or-semiprime pairs (2n+1, 2n+2) or (3n+2, 3n+3) or (6n+3, 6n+4) depending on which members of the tuple are prime.

26 October, 2014 at 12:03 pm

Aubrey de Grey: Gah – what James said…

26 October, 2014 at 12:19 pm

Terence Tao: A slight modification of Chen’s theorem will produce infinitely many primes such that is either a prime or a semiprime.

30 October, 2014 at 8:22 am

Eytan Paldi: If is also prime, then is a “Sophie Germain prime” (it is conjectured that there are infinitely many such primes).

30 October, 2014 at 7:40 am

Pace Nielsen: While I haven’t been able to improve the gap, surprisingly I can improve by increasing to get . Use the admissible 3-tuple .

For instance, if are simultaneously prime, then multiply the first by and the second by to get are both products of 4 primes. Similarly if are both prime, multiply the first by and the second by to get are both products of 4 primes. I’ll leave the last case for readers to work out.

On a slightly different track, would people find it interesting to construct an explicit pair that holds without GEH? If so I could give it some thought. (This would involve finding an appropriate admissible 246-tuple, which might be out of reach. But it may be interesting to see how close we can get.)

30 October, 2014 at 7:50 am

Aubrey de Grey: Nice! I’d certainly be interested. Maybe start with the EH case?

30 October, 2014 at 9:26 am

Pace Nielsen: The EH case isn’t too bad. If I didn’t make any mistakes, I believe the admissible 4-tuple will work to give

30 October, 2014 at 12:14 pm

Terence Tao: Unfortunately for EH we need to take admissible 5-tuples rather than 4-tuples (because we can only use technology if GEH is not available, and we only have for ).

Without GEH or EH, we need to work with admissible 50-tuples.

30 October, 2014 at 1:58 pm

Pace Nielsen: Oh, for the 246 vs. 50, I was thinking of the smallest diameter! A 50-tuple is much more doable.

Here is an admissible 8-tuple: (with to get admissibility).

29 October, 2014 at 12:07 am

Eytan Paldi: In Polymath8b paper (page 56), it seems clearer to insert in the second line above (125) “square-integrable symmetric” before “functions ” (since (126) holds provided that these functions are square-integrable, and (127) holds provided that these functions are also symmetric.)

[Corrected, thanks – T.]

30 October, 2014 at 7:34 am

Eytan Paldi: Is it possible (perhaps under RH) to transfer bounds on the spacing between consecutive primes to corresponding ones on the spacing between consecutive zeros of on the critical line?

30 October, 2014 at 12:23 pm

Terence Tao: I don’t think so, but Goldston and Montgomery did show that the pair correlation conjecture (which basically governs spacings between consecutive zeroes, or more precisely differences between nearby (but not necessarily consecutive) zeroes) was equivalent to an assertion about the variance in the error term in the prime number theorem in short intervals . James was able to use the bounded gaps between primes technology to produce a sparse set of intervals with primes for very small values of c, but this is well short of what is needed to get to the Goldston-Montgomery claim. Basically, the bounded gaps between primes machinery (a) produces a lower bound on primes in various sets, but this lower bound is well short of the conjectural truth, and (b) it only works for a fairly sparse set of configurations. On the other hand, the zeroes of the zeta function control asymptotics for primes (or products of primes) in generic intervals, and are not so sensitive to what happens in a sparse set of intervals.

31 October, 2014 at 6:57 am

Eytan Paldi: Thanks for the explanation!

So it seems that (in general) a claim about a (sufficiently) sparse set of primes can’t simply be reformulated as a corresponding equivalent claim about the zeros of the zeta function (i.e. the corresponding claim is generally weaker than the original one).

31 October, 2014 at 6:52 am

Terence Tao: I just realised that much of the above discussion was anticipated in this paper of Goldston, Graham, Pintz, and Yildirim and its followup. In the first paper, GGPY establish unconditionally the analogue of DHL(3,2) for semiprimes: given any three admissible linear forms, one can find infinitely many values of n such that at least two of the forms are semiprime (product of exactly two primes). In the second paper, they use that to establish the claim. The idea is to find n such that two of hold, that is to say two of “4m+1 is the product of three primes”, “5m+1 is semiprime”, and “6m+1 is the product of three primes” hold. This can be accomplished for instance by first finding l such that two of 28l+3, 385l+41, 66l+7 are semiprime, and then setting m = 77l+8. (These odd looking forms arise from the Chinese remainder theorem, as discussed in Section 2 of the second GGPY paper.) Once one has the desired m, then one of the pairs (20m+4,20m+5), (12m+2,12m+3), (30m+5,30m+6) will work.

On GEH, the same argument now gives . In fact I would conjecture that a modification of the argument gives for any , which would mean on GEH that every integer can be expressed in the form for some primes .
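The Chinese remainder bookkeeping in this argument can be checked mechanically; here is a small sketch verifying the algebraic identities behind the choice m = 77l+8 and the three candidate pairs:

```python
# With m = 77*l + 8, the forms 4m+1, 5m+1, 6m+1 line up with the
# forms 28l+3, 385l+41, 66l+7 from the first step of the argument:
for l in range(1, 1000):
    m = 77 * l + 8
    assert 4 * m + 1 == 11 * (28 * l + 3)   # 4m+1 = 11 * (semiprime candidate)
    assert 5 * m + 1 == 385 * l + 41        # 5m+1 is the semiprime candidate itself
    assert 6 * m + 1 == 7 * (66 * l + 7)    # 6m+1 = 7 * (semiprime candidate)
    # each candidate pair factors through these three forms:
    assert (20 * m + 4, 20 * m + 5) == (4 * (5 * m + 1), 5 * (4 * m + 1))
    assert (12 * m + 2, 12 * m + 3) == (2 * (6 * m + 1), 3 * (4 * m + 1))
    assert (30 * m + 5, 30 * m + 6) == (5 * (6 * m + 1), 6 * (5 * m + 1))
print("all identities check out")
```

Which of the three pairs one uses depends on which two of the three forms turn out to be semiprime.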

31 October, 2014 at 7:51 am

Terence Tao: I *think* I can get (k,h) = (2,12) on GEH by a modification of the proof of Chen’s theorem, but I have to double check the calculations which are a bit messy. Chen’s theorem tells us, for instance, that one can find infinitely many primes p such that p+6 is almost prime (the product of at most two primes). Similarly for p+6 replaced by p-6. I think that if one assumes GEH, one can get the stronger assertion that there are infinitely many primes p such that p+6 and p-6 are both almost prime. (Chen’s argument uses only the Bombieri-Vinogradov theorem, which heuristically is only “half as powerful” as GEH, so one should expect a result that is approximately “twice as strong” if one uses GEH.) If so, then by the argument from https://terrytao.wordpress.com/2014/09/30/the-bounded-gaps-between-primes-polymath-project-a-retrospective/#comment-433315 , one of the triplets (p-6,p+6), (2p, 2p+12), (2p-12, 2p) will be a pair of semiprimes differing by exactly 12.

31 October, 2014 at 8:05 am

Terence Tao: Unfortunately, the double-check revealed that my calculations were faulty (I had applied the upper and lower bounds for the linear sieve to what is now a two-dimensional sieve). It may not be as easy to obtain the claimed strengthening of Chen’s theorem on GEH after all.

31 October, 2014 at 10:49 am

Terence Tao: Now trying to modify Chen’s argument to produce infinitely many primes p such that (p+6)(p-6) has at most 4 prime factors, as this will also suffice. Calculations are a mess though.

2 November, 2014 at 8:26 pm

Aubrey de Grey: Just trying to make sure I understand what is known and what is open here, specifically in the unconditional setting (no EH or GEH). Am I right in saying that the following are known, where “(k,h)” means “infinitely many pairs [n,n+h] are both the product of exactly k primes”:

– (k,h) for all h, for all k at least 4

– (1,h) for all h that are the width of an admissible n-tuple, n at least 50

– (3,h) for selected h, of which the smallest is h=60

and that the following are open:

– (2,h) for all h, pending a report from Terry regarding his attack on (2,12)

– (3,h) for all h other than 60 and a few others that can be achieved using the same approach

Is this an accurate summary? I’m thinking that surely we know something about (2,h), since semiprimes are denser than primes, but I haven’t seen any specifics. Also, I’m somewhat confused about the weaker property “at least one pair [n,n+h] are both the product of exactly k primes”. For which [k,h], if any, is this known but the “infinitely many” case is not?

3 November, 2014 at 9:59 am

Terence Tao: Unconditionally: the GGPY paper establishes (k,1) for all k at least 4, and it is likely that their method also gives (k,h) for all h and all k at least 4, although this has not been checked yet. By using an admissible 50-tuple, one should be able to get (3,h) for some enormous value of h, but this has not been worked out. I don’t think there are any results for (2,h) or (1,h) known unconditionally.

On GEH: the GGPY method gives (k,1) for all k at least 3, and it is likely that their method also gives (k,h) for all h and all k at least 3, although this has not been checked yet. This would supersede what Pace and I worked out, which was (3,60) and (4,30) respectively (and one can probably replace 60 or 30 by multiples thereof). I have an approach for (2,12) (and more generally (2,12m) for any m), but the sieve-theoretic calculations turned out to be quite nasty and I am no longer confident I can modify the Chen argument to work here. The Chen argument, when used for instance to find primes p with p+6 almost prime, relies on the inequality

for some parameter z (e.g. ), where -rough means "no prime factor less than z". This reflects the fact that if a number has at least three prime factors and is less than , then either it has at least two factors less than or equal to , or it has one factor less than or equal to and is of the form for some . One then uses the linear sieve to lower bound the first term on the RHS and upper bound the other two terms. For the term one has to use the quadratic sieve rather than the linear sieve, but the bounds here are significantly less favorable and it doesn't look like I'm going to get to have as few as four prime factors with this method. (One may be able to get something like six or seven if one really tries, but this won't lead to any result for k=2.)

Finally, even on GEH there is no result of the form (1,h) known for any _specific_ value of h, as this is of comparable difficulty to the twin prime conjecture (which asserts (1,2)). Of course, from 8b we know that (1,h) is true for at least one of h=2, h=4, h=6, but we can't say which. (Using the tuple n,n+6,n+12 we can also say that one of (1,6) and (1,12) is true, but again we can't say which.)
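Since admissibility checks recur throughout this thread, here is a minimal checker (a sketch): a tuple of k shifts is admissible iff for every prime p the shifts miss at least one residue class mod p, and only primes p <= k can possibly be fully covered by k shifts.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def is_admissible(shifts):
    """True iff for every prime p the shifts avoid some residue class mod p."""
    k = len(shifts)
    return all(len({h % p for h in shifts}) < p for p in primes_up_to(k))

print(is_admissible((0, 2, 6)))   # True
print(is_admissible((0, 2, 4)))   # False: covers every residue class mod 3
print(is_admissible((0, 6, 12)))  # True: the tuple n, n+6, n+12 mentioned above
```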

3 November, 2014 at 8:36 pm

Pace Nielsen: In the GGPY paper linked above, there seems to be a problem in the statement of Theorem 1. My guess is that the authors meant to say that only of the forms are (rather than of them), but this is only a guess.

Using the theorem as stated, taking yields which would give exact gaps of any even size between numbers!

This also seems to necessitate a change to the constant stated in Corollary 2.

3 November, 2014 at 10:59 pm

Terence Tao: Oh, I see, it has to do with Footnote 2. The o(1) error here is with regard to the asymptotic limit , and so the theorem gives no explicit value of for any fixed value of , such as .

4 November, 2014 at 7:12 am

Pace Nielsen: Oh, good catch! I must have re-read those first few pages ten times, but never really looked at the footnote.

31 October, 2014 at 9:38 am

Luq Malik: Reblogged this on Luq Malik and commented:

If you haven’t been following the Polymath project concerning “the bounded gaps between primes” and its implications, you should get up to snuff immediately, as the results are piling up. Initiated by eminent mathematician and expositor Terence Tao, it exemplifies the extraordinary power of collaborative research.

31 October, 2014 at 4:28 pm

Eytan Paldi: On a possible improvement in theorem 3.8:

In Polymath8b paper, in the proof of theorem 3.8 (page 35), the inequality (84) is somewhat stronger than required by theorem 3.5(i) (assumption (23)). Therefore, in order to exploit this slight inefficiency, suppose that in (84) (perhaps by using symmetry) it is possible to have also

Then in (84) can be replaced by

Which implies that in (72) (and also in the key inequality assumption of theorem 3.8), may be replaced by

.

To illustrate the improvement, we have for and the condition which for is

which is verified using the lower bound 3.93586 in Table 3 (page 59). This gives the current record for (without the epsilon enlargement!)

Moreover, for we have the condition

which is almost(!) verified by the lower bound given by Ignace (in Table 2 of “Krylov method for lower bounding “) – using polynomials up to degree 25. It seems that (a new record!) can be verified by increasing very slightly the maximal degree above 25.

31 October, 2014 at 7:27 pm

Terence Tao: Yes, one can enlarge the simplex to from this observation, replacing by the quantity which I think we called . In principle could be as large as but in practice it is smaller because we are not always in the ideal case (just because the function is symmetric, this does not mean that the various components of need to be symmetric). If I recall correctly, the problem with working with in the medium dimensional setting k=49 or k=50 was that the integrals were just too hard to compute (and the Krylov method becomes complicated very quickly, with lots of piecewise polynomial functions flying around).

1 November, 2014 at 12:08 pm

Eytan Paldi: I agree that should be supported on . Otherwise, it is not difficult to show that for any of the form (82) which is not(!) supported on , there is always one (j-th) component for which (23) is not satisfied, i.e.

for some and .

6 November, 2014 at 9:38 am

Eytan Paldi: More precisely, should be supported on the intersection of and – needed to apply (in the proof of theorem 3.8) both theorems 3.5(i) and 3.6(i).

It is easy to verify that this intersection is whenever .

1 November, 2014 at 9:45 am

Terence Tao: The Polymath8b paper is now published at http://www.resmathsci.com/content/1/1/12

9 November, 2014 at 4:28 am

Eytan Paldi: In Polymath8b paper, in the proof of lemma 6.1 (page 42), the integrand is undefined (being “”) for . Therefore, in the fourth line of the proof, “” should be

““.

[Corrected, thanks – T.]

9 November, 2014 at 11:50 am

Eytan Paldi: Now “??” appears (everywhere) instead of reference numbers.

[Fixed, thanks – T.]

12 November, 2014 at 8:29 pm

Terence Tao: The Polymath8a paper is now accepted for publication at Algebra & Number Theory. I will go ahead and upload the final version of the paper source files to their web site.

13 November, 2014 at 6:35 am

Anonymous: Nice. Will the latest version also be uploaded to arXiv?

15 November, 2014 at 6:41 am

Terence Tao: I now have galley proofs for the retrospective article at https://terrytao.files.wordpress.com/2014/11/nl94_proofs.pdf . The EMS would like them back by the end of the week, so this is the last chance to have any corrections made in print.

15 November, 2014 at 1:37 pm

Eytan Paldi: The partial information (“to appear”) for Ref. [47] (Zhang’s paper) should be updated.

15 November, 2014 at 3:14 pm

Anonymous: Comments on the references:

[19], [20], [30], [38], [43]: Full stop missing at the end of the references.

[39]: En-dash instead of hyphen to indicate page range.

13 December, 2014 at 8:20 am

Eytan Paldi: In Polymath8b paper (page 33), in the fourth line below (74), it seems that “ stays away” should be “ support stays away”.

[Corrected, thanks – T.]

13 December, 2014 at 5:22 pm

Terence Tao: Galley proofs for the Polymath8a paper are now available at http://ef.msp.org/articles/proofs/ant/140204-Polymath.pdf

The corrections and queries look minor, and I think I can address them myself in a few days. This is the last chance to make any changes to the published version of 8a, so let me know if there are any such changes to make before returning the proofs by next week or so.

14 December, 2014 at 6:50 am

Anonymous: In the last line of Lemma 3.4 (and probably in other places too) the ellipses should NOT be centered as the referee claims; see page 172 in The TeXbook.

14 December, 2014 at 6:06 pm

Anonymous: For all the maps, \colon should be used instead of : in order to get the correct spacing.

2 January, 2015 at 3:01 am

Anonymous: In Polymath8b paper, it seems that the proof of theorem 3.8 (on page 33) can be slightly simplified by constructing directly from by multiplying with the cutoff function ( can be chosen to be the region given by the third line below (74)).

3 January, 2015 at 9:29 am

Terence Tao: Polymath8a has now been published at http://msp.org/ant/2014/8-9/index.xhtml

24 January, 2015 at 2:05 pm

Terence Tao: The documentary “Counting from infinity”, about Yitang Zhang (and, tangentially, about Polymath8), produced with the support of MSRI and the Simons foundation, is now finished: http://www.zalafilms.com/films/countingindex.html .

24 January, 2015 at 5:30 pm

Anonymous: Can the documentary be watched online for free (legally, of course) somewhere?

25 January, 2015 at 3:24 am

Zhang Tan: 2

n.2+1=3,5

n.2.3+1=7,13,19,25

n.2.3+5=11,17,23,29

n.2.3.5+1=31,61,91,121,151,181

n.2.3.5+7=37,67,97,127,157,187

n.2.3.5+11=41,71,101,131,161,191 row a

n.2.3.5+13=43,73,103,133,163,193 row b

n.2.3.5+17=47,77,107,137,167,197

n.2.3.5+19=49,79,109,139,169,199

n.2.3.5+23=53,83,113,143,173,203

n.2.3.5+29=59,89,119,149,179,209

This is a sieve to generate prime numbers.

On the left-hand side of = is the product of the first n primes, added to a coprime.

On the right-hand side of = are all the possible coprimes generated for the next sieve, and all possible prime numbers.

All non-prime numbers on the right-hand side of = are products of coprimes to the first n primes.

As you can see, gaps of 2 between two prime numbers are possible.

Starting with the set n.2.3+1 and n.2.3+5, the probability that a pair of numbers differing by 2 has no factor of 5 is 3/5.

Starting with the set n.2.3+1 and n.2.3+5, the probability that a pair of numbers differing by 2 has no factor of 5 or 7 is 3/5 multiplied by 5/7.

Starting with the set n.2.3+1 and n.2.3+5, the probability that a pair of numbers differing by 2 has no factor of 5, 7, or 11 is 3/5 multiplied by 5/7 multiplied by 9/11.

There must exist a twin prime in n.2.3+1 and n.2.3+5.

Starting with the next set, n.2.3.5+coprime, the probability that a pair of numbers in row a and row b differing by 2 has no factor of 7 is 5/7.

Starting with the next set, n.2.3.5+coprime, the probability that a pair of numbers in row a and row b differing by 2 has no factor of 7 or 11 is 5/7 multiplied by 9/11.

There must exist a twin prime in n.2.3.5+11 and n.2.3.5+13.

Each n.2.3.5…+coprime carries over a gap of 2 from the previous sieve.

I hope this clarifies the possibility of twin primes.

Regards

Zhang
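For the record, the construction in the comment above is the classical “wheel”: the rows are the residue classes coprime to a primorial, and every prime larger than the primes used falls in one of them. A minimal Python sketch (the function name `wheel_rows` is mine); it reproduces the rows listed, but of course does not by itself prove anything about twin primes:

```python
from math import gcd

def wheel_rows(primes, n_max):
    """The commenter's sieve: for each residue c coprime to the primorial
    P = product(primes), list the numbers n*P + c for n = 1..n_max.
    Every prime larger than max(primes) lands in one of these rows."""
    primorial = 1
    for p in primes:
        primorial *= p
    coprimes = [c for c in range(1, primorial) if gcd(c, primorial) == 1]
    return {c: [n * primorial + c for n in range(1, n_max + 1)]
            for c in coprimes}

rows = wheel_rows([2, 3, 5], 6)
print(rows[11])  # the comment's "row a": [41, 71, 101, 131, 161, 191]
print(rows[13])  # the comment's "row b": [43, 73, 103, 133, 163, 193]
```

Note that non-primes such as 91 = 7 x 13 and 121 = 11 x 11 also appear in the rows, exactly as the comment says: the wheel only removes factors of 2, 3, and 5.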

26 January, 2015 at 2:52 am

Koussay: I would like to present a new theory that may help advance research on twin primes, and push forward work related to Goldbach’s theorem:

Let I and J be two intervals whose lengths are n and m; then:

Limit [P(I) / P(J)] = n / m, as “n” tends to infinity,

where P([a, b]) is the number of prime numbers contained in the interval [a, b].

26 January, 2015 at 3:01 am

Koussay: Sorry, sorry! I repeat my new theory:

If I and J are two intervals, I = [a, a + n] and J = [b, b + kn], then:

Limit [P(J) / P(I)] = k, as “n” tends to infinity,

where P([c, d]) is the number of prime numbers contained in the interval [c, d].

27 January, 2015 at 10:40 am

Eytan Paldi: It is interesting to observe that the (very good) lower bound

was obtained using only(!) functions of the special form

Is there a simple (perhaps intuitive) explanation for such a good bound (in spite of using a very restricted class of functions in )?

27 January, 2015 at 6:16 pm

Anonymous: New article in an upcoming issue of The New Yorker:

http://www.newyorker.com/magazine/2015/02/02/pursuit-beauty

28 January, 2015 at 12:20 am

observer: Locks making job! It would be interesting to hear about the industrial experience of other mathematicians. E.g., I worked with drunkards on a car assembly line, and between each task I read an English book.

29 January, 2015 at 4:01 am

Koussay: Sorry! Correction of spelling and grammar:

New theory: for any two integers “r” and “N”, N and “r*N + 1” are relatively prime (coprime).

Proof:

Suppose “p” is one of the divisors of “N”; then “p” is also a divisor of “r*N”, so “p” cannot divide (r*N + 1).

Do you find a potential error?
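The claim above is correct, since any common divisor of N and r*N + 1 must divide their difference from r*N, namely 1. A quick empirical sanity check (my own, assuming nothing beyond the statement itself):

```python
from math import gcd

# Spot-check the claim: for any integers r >= 1 and N >= 2,
# gcd(N, r*N + 1) == 1.  (Any common divisor of N and r*N + 1
# would also divide (r*N + 1) - r*N = 1.)
for N in range(2, 200):
    for r in range(1, 50):
        assert gcd(N, r * N + 1) == 1
print("N and r*N + 1 are coprime for all tested pairs")
```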

29 January, 2015 at 4:50 am

Koussay: Generalization:

Let N, a, and b be three integers:

If “N” and “b” do not have a common divisor less than or equal to “b”, then “N” and “a*N + b” are coprime!

Proof:

Assume that “p” is a divisor of “N”; it is also a divisor of “a*N”,

so “p” cannot be a divisor of “a*N + b”, except in the cases where

“p” = “b”, or “p” divides “b”.

(( I would like to know your opinion on this new theorem ))


31 January, 2015 at 2:44 am

Koussay: I have shown that there is at least one prime between N and (3/2)N, for any N > 3. Is that important, in your opinion?

31 January, 2015 at 3:03 am

Koussay: *** Let J1 = [a, a + n] and J2 = [b, b + kn] be two intervals, and write:

P([c,d]) = the number of primes between c and d.

Then Limit{ P(J2)/P(J1) } = k, as n tends to infinity.

I will publish the proof if you confirm the importance of this new theory!

31 January, 2015 at 3:29 am

Anonymous: Please stop this gibberish now!

19 March, 2015 at 12:05 am

Anonymous: Why is the “singular series” (actually defined as a product over primes) not called the “singular product”?

19 March, 2015 at 9:39 pm

Terence Tao: Historically, the singular series was first computed by Hardy and Littlewood using their circle method as a sum over major arcs. Later, it was realised that one could factor the series into an Euler product that was simpler and more conceptually natural, and so the singular series is often now defined in its equivalent product form.
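To illustrate the product form (an example of mine, not taken from the comment above): in the Hardy–Littlewood prediction for twin primes, the singular series is nowadays usually defined directly as the Euler product

```latex
\mathfrak{S} \;=\; 2\prod_{p>2}\left(1 - \frac{1}{(p-1)^2}\right) \;\approx\; 1.3203,
```

that is, twice the twin prime constant, rather than as the corresponding major-arc sum.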

14 April, 2015 at 3:03 pm

Anonymous: In the Polymath8b paper (page 60), the current lower bound 2.00558 in the RHS of (128) should be slightly smaller (since the ratio in the LHS is 2.0055790…).

[Thanks, this has been corrected in the Dropbox version of the paper, but I don’t think I’ll try to change the arXiv or published version unless a more significant erratum is needed. -T.]

22 April, 2015 at 8:51 am

Anonymous: In the Polymath8b paper (page 33), in the proof of Theorem 3.8 it is clear that the denominator of (75) approximates the denominator of (74) since approximates in , but the approximation of the corresponding numerators is not sufficiently clear (although it follows from the continuity of the functionals in – which is equivalent to – but this is proved only later(!) in Corollary 6.4).

22 April, 2015 at 9:27 am

Terence Tao: Continuity of the in can be proven fairly easily by a number of means (for instance, it is easy to see that the are bounded on both and , and then one can apply interpolation; of course, the Cauchy–Schwarz argument used to prove Corollary 6.4 also works here, and one can also use a sufficiently general form of Schur’s test).

7 May, 2015 at 5:54 pm

Terence Tao: One of the Polymath8b bounds has just been improved… the bound on for large has been improved from to by Baker and Irving. The main idea is to replace the von Mangoldt function in the GPY argument by a slightly smaller (and occasionally negative) function with a slightly better exponent of distribution, using a sieve of Harman.

10 May, 2015 at 2:34 am

Anonymous: It is interesting to observe that the exponent of distribution

with , is determined in the Polymath8a paper by two “active” constraints:

1. Combinatorial constraint (lemma 2.7, page 11):

2. A constraint for new type I estimate (theorem 2.8(iii), page 11):

In the paper by Baker and Irving, the combinatorial constraint is no longer active, and is (effectively) replaced by the new active constraint:

1′. A constraint for type III estimate (Polymath8a paper, theorem 2.8(v)):

This (as explained in Lemma 6) gives the (slightly larger) exponent of distribution for the function , where (effectively) the constraint on is slightly relaxed to .

Note that this relaxed constraint on is no longer combinatorial! (It is determined by the non-combinatorial constraints 2 and 1′ above.)

6 July, 2015 at 3:40 pm

Aubrey de Grey: Sorry for being so late to this new party. Does the Baker/Irving result revive interest in improving the Type III estimates? There are two ideas for improving the Type III estimates that are mentioned at http://michaelnielsen.org/polymath1/index.php?title=Distribution_of_primes_in_smooth_moduli but became uninteresting after the relevant border for maximising \varpi became the Type I/combinatorial one, but I have no sense of how large the resulting improvements might turn out to be. Does the Baker/Irving work still leave a limit on the extent to which a combinatorial constraint is pushed out of the picture, or is that aspect really entirely gone (as was the hope when we were discussing a succession of Type IV, Type V etc limits)? (There’s also the approach to improving Type I mentioned in https://terrytao.wordpress.com/2014/04/14/polymath8b-x-writing-the-paper-and-chasing-down-loose-ends/#comment-306762 – two 1D vdC’s rather than the elusive 2D option.)

13 July, 2015 at 3:03 am

Anonymous: On the home page for the Polymath8 project, it seems appropriate to add a remark to the current records table that this improved bound on is by Baker and Irving (not by the Polymath8 project).

[Fair enough – I’ve added a reference. -T.]

8 May, 2015 at 4:51 am

Anonymous: It seems that there is a difficulty in the derivation of Lemma 15 (page 21) in the proof of Lemma 1 in this paper. More precisely, by replacing the “detection function” (used in the Polymath8b paper) by the function , then by using Lemma 3.4 (page 10 in the Polymath8b paper) both(!) the numerator and the denominator of the key inequality

are (apparently) multiplied by the same(!) factor

which (after cancellation) should give the same result as Theorem 3.10 (in the Polymath8b paper) without(!) the factor in Lemma 15 of this paper.

This should improve the condition on in lemma 1 to

.

8 May, 2015 at 7:51 am

Terence Tao: I believe the factor doesn’t show up in the denominator term, which does not involve or (it is basically just the sum , up to some other normalising factors).

8 May, 2015 at 9:24 am

Anonymous: Yes, it was my mistake! (I saw the estimate of , which has this factor, and forgot that the denominator should be related to the estimate.)

14 July, 2015 at 7:57 am

mixedmath: Dr. Tao, I wonder what the next Polymath project might be, or where it might come from. Perhaps Polymath8 and Polymath9 are still recent, but I do not know of any current proposals for Polymath projects. Are you aware of current proposals — or, relatedly, do you have any thoughts on another good Polymath project?

16 July, 2015 at 10:32 pm

Anonymous: For each positive integer , let be the smallest integer for which . Is it possible to find an upper bound on ?

17 July, 2015 at 12:03 am

Terence Tao: Not exactly, but we do have the result of Maynard http://arxiv.org/abs/1405.2593 that for sufficiently large x, there are intervals in of length that contain primes, which implies that for any m, there is an n such that for some . However, while this gap is bounded, it may not equal the absolute minimal value of this gap, which may potentially only occur much later in the sequence of primes.

10 June, 2016 at 5:17 am

Anonymous: In the Polymath8b paper, since the discussion in the parity problem section is (as stated) “somewhat informal and heuristic in nature”, is it possible to make it sufficiently rigorous to imply a theorem that is best possible using certain (well-defined!) standard sieve methods?

Otherwise, what could the loopholes in that discussion be, which (hopefully) still enable improvement of by a (well-defined class of) standard sieve methods?

31 October, 2016 at 8:47 am

Sultan M.: Dear Polymath8 members:

First, forgive me for my poor English.

Secondly: if we have twin primes with gap 2 of the form P - Q = 2, where P and Q are prime numbers, I have noticed that Q - 2, in all the cases I have tested, always equals a multiple of 3. The first pair my claim applies to is (5, 7), so:

7 - 5 = 2, then 5 - 2 = 3, which is the first multiple of 3.

Also:

13 - 11 = 2, then 11 - 2 = 9, which is the third multiple of 3,

199 - 197 = 2, then 197 - 2 = 195, which equals 3 × 65,

421 - 419 = 2, then 419 - 2 = 417, which equals 3 × 139,

7936141 - 7936139 = 2, then 7936139 - 2 = 7936137, which equals 3 × 2645379,

And so on.

In every case I have encountered, the same thing happened. I have also noticed that if we write the above equations in the form Q - 2 = 3 × M, where 3M is a multiple of 3, the units digit of M in all the cases I have examined is always 3, 5, or 9. For example:

19 - 17 = 2, then 17 - 2 = 15, which equals 3 × 5, so we have only one decimal place, the units digit, with value 5. For more explanation, take the following examples:

283 - 281 = 2, then 281 - 2 = 279, which is 3 × 93, so looking at the units digit of 93 we find 3,

523 - 521 = 2, then 521 - 2 = 519, which is 3 × 173; here the units digit is 3,

241 - 239 = 2, then 239 - 2 = 237, which is 3 × 79; here the units digit is 9,

5001121 - 5001119 = 2, then 5001119 - 2 = 5001117, which is 3 × 1667039; here the units digit is 9,

11034349 - 11034347 = 2, then 11034347 - 2 = 11034345, which is 3 × 3678115; here the units digit is 5, and so on.
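Both observations can be checked mechanically; a short sketch of mine (the units-digit pattern holds for Q > 5, with the pair (5, 7), where M = 1, as the lone exception):

```python
def is_prime(n):
    """Trial-division primality test (adequate for this small range)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# For every twin prime pair (Q, Q + 2) with Q > 3, Q - 2 is a multiple of 3
# (Q must be 2 mod 3, since otherwise Q or Q + 2 is divisible by 3).
# For Q > 5, writing Q - 2 = 3 * M, the units digit of M is 3, 5, or 9,
# because Q ends in 1, 7, or 9 for Q > 5.
for Q in range(5, 10_000):
    if is_prime(Q) and is_prime(Q + 2):
        assert (Q - 2) % 3 == 0
        if Q > 5:
            M = (Q - 2) // 3
            assert M % 10 in (3, 5, 9)
print("observations verified for all twin primes below 10000")
```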

31 October, 2016 at 12:22 pm

Anonymous: Note that if are primes greater than , it is easy to verify that and , which implies your observation that is divisible by .

1 November, 2016 at 10:41 am

Sultan M.: I am very thankful for your clear explanation.

30 April, 2019 at 10:57 am

Terence Tao: As this project has been inactive for many years, it is perhaps unlikely that many of the former participants are still following this thread. Nevertheless, this seems like the right place to make a small announcement: I have been informed that the Polymath8b paper “Variants of the Selberg Sieve, and bounded intervals containing many primes”, published in Research in the Mathematical Sciences, was just selected for the inaugural “Best Paper Award” by that journal (an official announcement will be made shortly). This award comes with a modest cash prize ($500), which obviously cannot be accepted by the pseudonymous DHJ Polymath. However, they can donate this prize to some charitable cause instead. The editor at RMS handling the paper, Ken Ono, suggested donating to the AMS “Who wants to be a mathematician?” contest; other suggestions are welcome.

30 April, 2019 at 2:39 pm

Robert Silverman: What about the Number Theory Foundation? Established by John Selfridge, I believe.

Number Theory Foundation

Department of Mathematics

Dartmouth College

Hanover, NH 03755

30 April, 2019 at 3:49 pm

Anonymous: I think this is a good idea. The Number Theory Foundation is likely to support the kind of mathematics that was related to this Polymath project.

7 May, 2019 at 10:25 am

Terence Tao: This sounds like a very appropriate use of the funds, especially since the NTF nowadays primarily sponsors a best paper award of its own (the Selfridge prize). I will let RMS know that this is where we would like the prize funds to be directed.

12 February, 2020 at 6:33 pm

Failing fast and failing forward – Back-to-Front Maths | Kennedy Press: […] https://terrytao.wordpress.com/2014/09/30/the-bounded-gaps-between-primes-polymath-project-a-retrosp… […]