This is the eleventh thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle H_m := \liminf_{n \to \infty} (p_{n+m} - p_n);

the previous thread may be found here.

The main focus is now on writing up the results, with a draft paper close to completion here (with the directory of source files here). Most of the sections are now written up more or less completely, with the exception of the appendix on narrow admissible tuples, which was awaiting the bounds on such tuples to stabilise. There is now also an acknowledgments section (linking to the corresponding page on the wiki, which participants should check to see if their affiliations etc. are posted correctly), and in the final remarks section there is now also some discussion about potential improvements to the bounds. I’ve also added some mention of a recent paper of Banks, Freiberg and Maynard which makes use of some of our results (in particular, that ). On the other hand, the portions of the writeup relating to potential improvements to the MPZ estimates have been commented out, as it appears that one cannot easily obtain the exponential sum estimates required to make those go through. (Perhaps, if there are significant new developments, one could incorporate them into a putative Polymath8c project, although at present I think there’s not much urgency to start over once again.)

Regarding the numerics in Section 7 of the paper, one thing which is missing at present is some links to code in case future readers wish to verify the results; alternatively one could include such code and data into the arXiv submission.
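For the admissible-tuple bounds in Appendix A, for example, verifying a claimed bound amounts to checking admissibility of an explicit k-tuple: for every prime p the tuple must avoid at least one residue class mod p, and only primes p ≤ k can possibly cover all classes. A minimal sketch of such a check (not the project's actual verification code; function names are illustrative):

```python
def primes_upto(n):
    """All primes <= n, by a simple sieve of Eratosthenes."""
    if n < 2:
        return []
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_p in enumerate(sieve) if is_p]

def is_admissible(tuple_h):
    """A k-tuple is admissible iff for every prime p it misses some residue
    class mod p; a prime p > k cannot cover all p classes with k elements."""
    k = len(tuple_h)
    for p in primes_upto(k):
        if len({h % p for h in tuple_h}) == p:
            return False
    return True

# The classical admissible 5-tuple of diameter 12, giving H(5) <= 12:
assert is_admissible([0, 2, 6, 8, 12])
# [0, 2, 4] is inadmissible: it covers all residue classes mod 3.
assert not is_admissible([0, 2, 4])
```

The same routine, run on the published tuples, would let a reader independently confirm each H(k) entry.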

It’s about time to discuss possible journals to submit the paper to. Ken Ono has invited us to submit to his new journal, “Research in the Mathematical Sciences“. Another option would be to submit to the same journal “Algebra & Number Theory” that is currently handling our Polymath8a paper (no news on the submission there, but it is a very long paper), although I think the papers are independent enough that it is not absolutely necessary to place them in the same journal. A third natural choice is “Mathematics of Computation“, though I should note that when the Polymath4 paper was submitted there, the editors required us to use our real names instead of the D.H.J. Polymath pseudonym as it would have messed up their metadata system otherwise. (But I can check with the editor there before submitting to see if there is some workaround now, perhaps their policies have changed.) At present I have no strong preferences regarding journal selection, and would welcome further thoughts and proposals. (It is perhaps best to avoid the journals that I am editor or associate editor of, namely Amer. J. Math, Forum of Mathematics, Analysis & PDE, and Dynamics and PDE, due to conflict of interest (and in the latter two cases, specialisation to a different area of mathematics)).

## 182 comments


5 June, 2014 at 12:03 pm

Sylvain JULIEN: I don’t want to make a nuisance of myself, but perhaps rigorizing the approach sketched in http://mathoverflow.net/questions/132973/would-the-following-conjectures-imply-lim-inf-n-to-inftyp-nk-p-n-ok-lo could be useful for obtaining better bounds for the quantity H_m, given that the quantity r0(n) can be defined unconditionally (i.e. without assuming Goldbach’s conjecture) and that there is some evidence that r0(n) = O(log² n), which supports the NFPR conjecture. Indeed, I managed under these conjectures to get the desired form for the asymptotics of H_m by neglecting the divergent error terms involved in the relevant expressions of p(n+k)-p(n). I don’t know if this procedure is valid though; could you please tell me more about it? Thank you in advance.

5 June, 2014 at 5:58 pm

Gergely Harcos: I think your r0(n) is only defined when n is the sum of two primes. That is, you seem to assume Goldbach’s conjecture in the first place.

6 June, 2014 at 11:24 pm

Sylvain JULIEN: Actually, r0(n) can be defined unconditionally as the smallest potential typical primality radius of n. This notion is precisely defined in http://mathoverflow.net/questions/61842/about-goldbachs-conjecture but as it is quite long to state precisely, I gave the conditional definition (hence assuming Goldbach’s conjecture) in the question at the previous link.

20 June, 2014 at 6:04 am

sylvainjulien: You might be interested in reading http://ideasfornumbertheory.wordpress.com/ where I give further information about these topics.

5 June, 2014 at 12:35 pm

Aubrey de Grey: For avoidance of confusion, and after some illumination from Pace, I just want to clear up one detail arising from the past few days of back-and-forth, namely the question of what maximum degrees for signatures and for overall polynomials have been in force in the various cases I’ve been running. My code is based on a version of Pace’s that predates the restriction to using only even numbers in signatures, and which, contrary to my prior assumption, does not automatically have maximum total degree exceeding maximum signature degree – though of course it does do so when the specified maximum total degree is odd. Thus, concerning a few of my reports from recent days:

– 4.00124 for M_50,1/25 is with max total degree 25, max signature degree 24 – so actually TWO degrees less than Pace’s original report of 4.0043, not one as I stated earlier. Note also that in the paper the value 4.0043 is said to have used max total degree 28, but actually it was 27.

– 4.00156 for M_51,1/50 is actually with max total degree 22, same as the max signature degree, contrary to my speculation on June 1st.

– 3.99402 for M_50,1/49 and 3.99342 for M_53,0 are with max total degree 27, max signature degree 26.

– 4.00238 for M_54,0 is with max total degree 23 (not 24 as it currently says in the paper), max signature degree 22, i.e. the same as Pace’s original report, but using all 195 signatures which he had been unable to do. Following clarification from Pace about exactly which three signatures he dropped back then, I now confirm his value of 4.001115 with those same maximum degrees when those signatures are dropped. The surprising value 3.9991 for M_54,0 that I previously obtained, and which uncovered a (happily harmless to the paper) bug in the code yesterday, occurs with max signature degree still 22 but max total degree also 22.

All the above are with evens-only signatures.

5 June, 2014 at 2:36 pm

Pace Nielsen: Now that the code runs so quickly, it might make sense to drop all of the nonsense I had to do with two different degrees (for max signature degree and max total degree), using even signatures, and so forth.

Instead, we could simply list the results for some total degree, and just use unrestricted signatures (besides not allowing 1’s).

5 June, 2014 at 2:57 pm

Aubrey de Grey: That would certainly be a nice simplification of the exposition, but (a) I think it is actually so interesting that using only even numbers works very nearly as well as using all signatures that it would be a shame not to mention it in the paper, and (b) we would have to redo quite a few computations (and by “we” I mean you, because my machine can’t feasibly go beyond d=19 or 20 with unrestricted signatures). Also, in practice you weren’t really using two different degrees – you were just using odd-numbered maximum total degree, which imposes a maximum signature degree one less just because even numbers can only sum to even numbers. So all in all I think I’d vote to leave things as they are.

5 June, 2014 at 4:32 pm

Terence Tao: It would be great if you or Pace would modify the relevant portion of the paper (i.e. the first two sections of h1-new.tex) to reflect all the new developments (and maybe also to provide links to supporting data and code), as I’ve kind of lost track of exactly what the state of the art of the algorithms is, and what the parameters are for the results claimed in the paper.

5 June, 2014 at 4:42 pm

Aubrey de Grey: I’m afraid I have never edited a .tex file and would be scared of messing it up… Pace, can you take this on?

5 June, 2014 at 6:49 pm

Pace Nielsen: I’d be happy to, but I’m a little lost at the new developments as well. Go ahead and email me a list of the changes that need to be made, and I’ll make them.

6 June, 2014 at 8:33 am

Aubrey de Grey: [Done.] Many thanks, Pace!

6 June, 2014 at 10:46 am

Aubrey de Grey: Pace has done the edits to Section 7 (and the bounds on a couple of subtheorems in Section 3) and tells me he will upload them imminently. The only additional change that I think could be useful would be to edit the first item in the list of possible avenues for further progress, to reflect our extrapolations regarding the limit of M for large d. I suggest something like the following (replacing all but the first line of the current paragraph):

It is natural to speculate whether k could be lowered slightly, for instance to k=49, by further numerical computations. Our computations are limited by the growth of execution time with allowed degree of polynomials, but for the range of degrees within our ability to explore we have observed that for a given k and epsilon, M always rises with d at a rate that declines very close to exponentially, allowing us to extrapolate a limit for M at large degree to within 0.001 or so with high confidence. This has not allowed us to decide whether M_49,epsilon can exceed 4, but we are confident that M_48,epsilon cannot, so the best that could be achieved in this way is H_1<=240. The best hope for deciding the k=49 case may be a refinement of the Krylov subspace approach described in Section 7.2. We additionally remark that this extrapolation approach has led us to conclude that no improvements are possible to the bounds stated in Theorems 3.9 (vii) and 3.13 (xiii) using these methods.

6 June, 2014 at 10:46 am

Pace Nielsen: The appropriate changes to h1-new.tex and sutheorems.tex have been made and uploaded to dropbox.

6 June, 2014 at 12:23 pm

Eytan Paldi: Aubrey: since the question whether M_53,0 exceeds 4 is still open (see e.g. Ignace’s comment above – in which repeated Shanks transform extrapolations indicate that it is barely above 4), I’m not sure about your conclusion that it is not possible to improve the bound in Theorem 3.9(vii).

7 June, 2014 at 10:37 am

Terence Tao: Thanks for the edits! I went through the paper and it looks close to submittable now once the final remarks on improvements are updated, and links to dropbox files are included. (Perhaps the thing to do is to create a subfolder in dropbox for all code and files, and link to that subfolder, rather than give two or three links to individual file names?)

I think we should be able to submit the paper in a few weeks, if everyone signs off on it. Since there was a weak preference for submitting to ANT, I’ll ask the editors there if they would like to handle a second large paper from us (it may well be that they would prefer not to do so to allow room for other authors too). I’ll get back to everyone on the response.

6 June, 2014 at 12:59 pm

Aubrey de Grey: Ah, damn – thank you Eytan – in my text I inadvertently switched the M_53,0 and M_49,epsilon cases. Here is what I should have written:

It is natural to speculate whether k could be lowered slightly, for instance to k=49, by further numerical computations. Our computations are limited by the growth of execution time with allowed degree of polynomials, but for the range of degrees accessible by these methods we have observed that for a given k and epsilon, M always rises with d at a rate that declines very close to exponentially, allowing us to extrapolate a limit for M at large degree to within 0.001 or so with high confidence. This has led us to conclude that it is unlikely that k can be reduced even to 49, let alone further. This same approach also strongly suggests that no improvements are possible to the bounds stated in Theorem 3.13 (xiii) using these methods. Thus, the only case of interest for which there remains substantial uncertainty is whether M_53,0 can exceed 4 with sufficiently high-degree polynomials, which would improve the bound of Theorem 3.9 (vii). However, the necessary degree is probably out of reach by these methods; the best hope for deciding the k=53 case may be a refinement of the Krylov subspace approach described in Section 7.2.

7 June, 2014 at 11:33 am

Eytan Paldi: In the fourth line below (6.25), the “dr” should be removed.

[Corrected, thanks – T.]

7 June, 2014 at 3:02 pm

Anonymous: See page 62. :-)

[Corrected, thanks – T.]

8 June, 2014 at 1:45 am

Eytan Paldi: On page 56, lines 16-17: “the natural number” may be removed.

On page 56, line 9 from below: it seems clearer to replace “optimizes” by “nearly maximizes”, and to insert “” after “quantity”.

On page 61, last line: it seems clearer to replace “(up to almost everywhere” by “(almost everywhere)”.

On page 70, line 6 above (3): “section 7.4” should be “section 7.2”, and it seems better to place the reference to footnote 8 after “further polytopes” (in the second line above (3)).

[Corrected, thanks – T.]

8 June, 2014 at 10:23 am

Andrew V. Sutherland: In the process of double-checking the bounds in Theorem 3.3 I noticed a significant typo. The bound for part (ix) H(41588) <= 474226 should be H(41588) <= 474266, as is listed in the records table for m=2 (without Deligne or EH), in the timeline (Jan 9 entry), and in Table 3 of Appendix A.

The same fix needs to be made in (7) on p.72 of Appendix A, which was copied from Theorem 3.3.

[Corrected, thanks – T.]

8 June, 2014 at 11:33 am

Andrew V. Sutherland: I have successfully verified all the (corrected) bounds in Theorem 3.3.

Assuming we are all satisfied with Theorems 3.2 and 3.3, I’m wondering if it now makes sense to remove the question marks from the corresponding k and H values in the timeline (I notice that there is no question mark on the most recently added H(k) bound for m=5).

[Question marks removed – T.]

8 June, 2014 at 11:17 am

Eytan Paldi: Page 69: it seems that the singular series notation here is not the “standard” one used by previous papers.

(See also the continuation of footnote 7 on page 70.)

[Corrected, thanks – T.]

9 June, 2014 at 1:41 am

Eytan Paldi: There is a problem with the references in the compiled pdf of the paper.

[Corrected, thanks – T.]

9 June, 2014 at 7:52 am

Eytan Paldi: Page 62: there are some “too large” vertical spaces.

9 June, 2014 at 11:14 am

Eytan Paldi: It should be remarked that the recent breakthrough using the idea of constructing a sieve in several dimensions is in some sense similar to the ideas of Roth and Baker (to construct auxiliary functions in several variables) in their breakthrough theorems.

9 June, 2014 at 4:12 pm

polymath_fan: Just out of curiosity, can anyone elaborate a bit on this recent breakthrough? I hadn’t heard about it. When was it made and how has it affected this polymath project? Where has it been discussed?

9 June, 2014 at 6:42 pm

Pace Nielsen: I think that Eytan is referring to James Maynard’s (and independently Terry Tao’s) idea to use a multidimensional sieve. This is the basis of all improvements made in the Polymath 8b project. For more information on the multidimensional sieve I would refer you to James’ paper (accepted in Annals) and the first few posts by Terry in the polymath 8b project.

9 June, 2014 at 7:33 pm

Terence Tao: Dear Eytan,

Could you be more specific about the connections to the work of Roth and Baker?

10 June, 2014 at 2:00 am

Eytan Paldi: The common basic (simple – but very rewarding!) idea in all cases is to generalize previous work (using an auxiliary function of one or two variables) by introducing a multivariate auxiliary function. In our case (Polymath8b) the univariate sieve of Polymath8a (determined by a function of a single variable) is generalized to a multivariate sieve (determined by a function of several variables).

Roth’s and Baker’s works (as clearly described in sections 7 and 2 of Baker’s book “Transcendental number theory”) use this basic idea:

Roth’s theorem (on rational approximation to algebraic numbers) generalizes previous work (based on the construction of auxiliary polynomials in two variables – by Thue, Siegel and Dyson) via the construction of a multivariate auxiliary polynomial.

Baker’s theorem (on linear forms in logarithms) generalizes previous work (based on the construction of an auxiliary function of a single complex variable – by Gelfond and Schneider) via the construction of an auxiliary function of several complex variables.

10 June, 2014 at 7:58 am

Terence Tao: I see. It’s an interesting observation, although I don’t know how to push it any further – it’s not clear, for instance, that any of the other ideas in transcendence theory transfer over to sieve theory (or vice versa). In any event, given that the sieve theory version of this idea comes from Maynard’s paper rather than the Polymath paper, our current paper might not be a suitable place to make this remark.

9 June, 2014 at 3:33 pm

Eytan Paldi: Page 29: in the last line, the definition of (as an integral over variables) implies that it is a function of (which contradicts its asymptotic property – as a functional of ).

[Clarified – T.]

9 June, 2014 at 3:39 pm

Eytan Paldi: Correction: in my comment above, the index should be .

10 June, 2014 at 12:23 am

interested non-expert: Minor correction: Zhang’s paper (ref 50) has been published: Pages 1121-1174 from Volume 179 (2014), Issue 3 (http://annals.math.princeton.edu/2014/179-3/p07)

[Added, thanks – T.]

10 June, 2014 at 12:36 am

interested non-expert: And Maynard’s text is available online at http://annals.math.princeton.edu/articles/8772 – so maybe writing “to appear, Ann. of Math.” is best.

[Added, thanks – T.]

10 June, 2014 at 12:37 pm

Eytan Paldi: Page 37: in lines 7, 9, and 12 above (5.18), it seems clearer to have the summands inside parentheses.

(also in page 38, 7 lines below (5.19)).

11 June, 2014 at 1:11 pm

James Maynard: I’ve just had a final read-through of the draft paper, which I think is in good form. Here is a short list of (minor) thoughts I had:

Page 2, comment under Theorem 1.3 regarding effective lower bounds for :

Maybe it would be better to leave this comment out, since one can obtain effective lower bounds without too much effort. (In the to-be-published version of my paper I have a remark on page 12 indicating, similarly to Pintz’s suggestion, that the bound can be made effective, and this is done more explicitly in my dense clusters paper. One can obtain explicit bounds using Cauchy-Schwarz and Proposition 4.2, the idea of which goes back to GPY and is also done explicitly in the dense clusters paper.)

Page 7, footnote:

In the version of the BFI paper I found online, I think this is labelled Conjecture 1 instead of Conjecture 0.

Page 15, remark under Theorem 3.13:

It is maybe better if we don’t have the comment about my joint preprint with Banks and Freiberg here, since I believe we are planning to revise that paper to be based on the stronger result which is only sketched at the end of the preprint, and this then wouldn’t be making use of the result of Theorem 3.13(i).

Page 25, first line:

I think we should have instead of here.

Page 25, lines 8 and 10:

I think the bounds should be something like and respectively, instead of with .

Page 39, lines 2 and 3:

Integration should be over not and the numerators for the differentials should be not .

Page 45, three lines after (6.5):

The version of my paper on the Annals website is slightly different from the arxiv version. Therefore, given that we are now citing the annals version, the reference to equation (8.17) from my paper should instead refer to (7.19).

Page 49, line 12:

I think we haven’t defined , maybe it’s better to write .

Page 51, line 7:

I think the numerator of the integrand should be instead of .

Page 55, line -8:

The reference to Lemma 7.3 in the older version of my paper should refer to Lemma 8.3 in the newer version.

Page 62, line 1:

I was slightly confused by the reference to the suggestion of the optimal choice of . I assume that we don’t believe that this choice is globally optimal for , but merely optimal given the restriction to our 60-piece decomposition?

Page 70, lines -12, -13:

I think there should be a singular series factor in the right hand sides here.

[Corrected, thanks – T.]

15 June, 2014 at 1:04 am

Stefangt: A bit of my reasoning…

for

It seems to me that there are more advanced features than f(n), which get closer and closer to the function

15 June, 2014 at 5:52 am

Eytan Paldi: From the lower bounds in table 2, it seems that for (at least) , we have

It is possible that this lower bound (which is very close to the upper bound) holds for all .

15 June, 2014 at 9:22 am

Terence Tao: From Ignace’s extended data at http://users.ugent.be/~ibogaert/KrylovMk/KrylovMk.pdf it looks like there may be a crossover between and at , although this is not watertight because the lower bounds we have for the may be off from the truth.

15 June, 2014 at 9:55 am

Eytan Paldi: Perhaps this bound indicates that one should try to show that

(maybe by using a method similar to the one by which was proved, in a previous comment, to be monotonically increasing.)

16 June, 2014 at 3:41 am

Ignace Bogaert: Yes, for fixed Krylov subspace size , the largest eigenvalue actually converges to as . Therefore, unfortunately, numerical results of this type do not give us information about the asymptotic behavior of .

16 June, 2014 at 4:58 am

Eytan Paldi: Ignace: in your table (with fixed ), the largest eigenvalue seems to monotonically increase with (without convergence to ).

Obviously, increasing with is needed to keep the resulting lower bounds sufficiently close to .

16 June, 2014 at 5:47 am

Ignace Bogaert: My statement above was not completely accurate. Rather, it should be: for fixed and , the largest eigenvalue converges to a value that quickly goes to four as a function of increasing .

@Eytan: yes, they increase first, but after a while they start decreasing again. For , this happens after , which is beyond the reach of the table.

17 June, 2014 at 3:36 am

Aubrey de Grey: Ignace: apologies for my limited understanding of the Krylov method, but is it now your belief that:

– we might well be able to decide the M_53 case by increasing r, but

– without using the Shanks transform, r=25 is currently the limit of your computational capacity, and

– the inaccuracy that that transform introduces is too great to let us trust the results, since they come out so close to M=4?

If so, I might be able to reimplement your Maple code in Mathematica and optimise it to reach sufficient r without the transform. Of course this is all somewhat peripheral at present since it is limited to epsilon=0, but it may have the side-benefit of providing more confidence concerning the behaviour of M for really large d, especially d exceeding k, which might not follow the exponential decay pattern we’ve seen at smaller d.
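For readers unfamiliar with it, the Shanks transformation discussed here accelerates a sequence whose error decays geometrically, by eliminating a single term of the form A·rⁿ from three consecutive values. A minimal generic sketch (not Ignace's Maple implementation):

```python
def shanks(a):
    """One pass of the Shanks transformation: from a[n-1], a[n], a[n+1],
    eliminate a geometric error term A*r**n and return the accelerated
    (two-shorter) sequence."""
    out = []
    for i in range(1, len(a) - 1):
        denom = a[i + 1] - 2 * a[i] + a[i - 1]
        if denom == 0:  # degenerate case: sequence locally affine
            out.append(a[i + 1])
        else:
            out.append((a[i + 1] * a[i - 1] - a[i] ** 2) / denom)
    return out

# If the error is exactly geometric, e.g. a_n = 4 - 3 * 0.5**n,
# a single pass recovers the limit 4.
seq = [4 - 3 * 0.5 ** n for n in range(6)]
assert all(abs(x - 4) < 1e-9 for x in shanks(seq))
```

When the error is only approximately geometric, as with the eigenvalue lower bounds here, the transform introduces its own error, which is the concern raised above for values so close to M = 4.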

17 June, 2014 at 4:43 am

Ignace Bogaert: Hi Aubrey: that is a pretty good description of the situation right now! I believe that increasing should allow settling this question, but I cannot estimate how large would have to be.

Judging by the scaling of the number of linearly independent symmetric polynomials as a function of the degree, I think current (parallel) hardware could realistically compute the Krylov subspace and the matrix entries for up to somewhere around 50. That leaves a lot of room for improvement upon the current .

Given the fact that you achieved such impressive speedups for Pace’s code (and the fact that I’m not a Maple expert), it seems quite likely that you could push r higher than 25. Should you decide on doing this, I’d be willing to answer any questions that you might have about the Maple code.
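The principle behind these lower bounds can be illustrated generically: for a symmetric matrix, the Rayleigh–Ritz procedure on an r-dimensional Krylov subspace yields a lower bound on the largest eigenvalue that improves monotonically as r grows. A numpy sketch of the idea only (the project's actual quadratic forms, and Ignace's Maple code, are of course different):

```python
import numpy as np

def krylov_lower_bound(A, r, seed=0):
    """Rayleigh-Ritz in an r-dimensional Krylov subspace of the symmetric
    matrix A: the largest Ritz value is a lower bound on lambda_max(A)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    v = rng.standard_normal(n)
    basis = []
    for _ in range(r):
        for b in basis:              # Gram-Schmidt against earlier vectors
            v = v - (b @ v) * b
        v = v / np.linalg.norm(v)
        basis.append(v)
        v = A @ basis[-1]            # extend the Krylov sequence v, Av, A^2 v, ...
    Q = np.column_stack(basis)
    T = Q.T @ A @ Q                  # projection of A onto the subspace
    return np.linalg.eigvalsh(T).max()

# Sanity check on a random symmetric matrix: the bound grows with r
# (nested subspaces from the same starting vector) and never exceeds
# the true largest eigenvalue.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 2
true_max = np.linalg.eigvalsh(A).max()
bounds = [krylov_lower_bound(A, r) for r in (2, 5, 10)]
assert bounds[0] <= bounds[1] + 1e-9 <= bounds[2] + 2e-9
assert bounds[2] <= true_max + 1e-9
```

This is why increasing r can only tighten the lower bound on M_53, at the cost of computing and storing a larger Krylov basis.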

17 June, 2014 at 4:57 am

Aubrey de Grey: Great. I don’t have (or know) Maple, but I had no trouble reimplementing Terry’s Maple script for validating k in the asymptotic setting in Mathematica, so I’d hope to find your code similarly straightforward. However, I don’t have your source code – there seems to be nothing in the dropbox, and your own site seems only to have compiled code. Could you please email the source code to me at aubrey@sens.org so I can have a look? (I’m very busy this week, so no promises, but I’d like to have a go at this when I can.)

16 June, 2014 at 5:47 am

Anonymous: (Almost) layman asking: if I understand all of this correctly, your current bound, without any conjectures, is 246. Relying on unproven conjectures, you are able to reach a bound of 6. In particular, you have been able to show that your current methods are insufficient to obtain a better result than 6, and in order to prove the twin prime conjecture, it is more than likely that completely new insight is needed. Is this correct?

[Yes – T.]

16 June, 2014 at 1:45 pm

Eytan Paldi: Page 10, line 10: “can be positive only if … ” seems clearer.

[Modified, thanks – T.]

17 June, 2014 at 2:50 am

Aubrey de Grey: I’ve been tinkering a bit more with choices of monomials, in the hope that some kind of explanation might emerge for why omitting all terms whose signatures contain odd numbers reduces M by such a small amount. That hasn’t happened, but one thing I found seemed interesting enough to report here (not least since it may give others an idea for said explanation): among terms with evens-only signatures, those whose power of (1-P_1) is even can be eliminated at very little cost. A second trend is that the smaller a signature’s first (and thus largest) element is, the smaller the cost of omitting terms using it, the signature {} being the cheapest of all. Taking advantage of these, it transpires that with k=50, d=25 and epsilon optimised to 37/1000 we can reach M=4.000181 using only 1061 of the 1780 evens-only-signature terms. This is achieved by deleting terms whose power of (1-P_1) is even and whose signature is {} or has a first element of at most 10. Both trends are quite approximate, so my impression is that by exploiting the relative contribution of each term more exhaustively we could probably keep M_50,epsilon above 4 with considerably fewer than 1000 terms – though probably still too many to make it reasonable to perform the Maple validation that Terry did with k=3, unfortunately!
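As a concrete illustration of the signature bookkeeping discussed in this thread, here is a sketch (not Pace's or Aubrey's actual code) that enumerates the evens-only signatures: non-increasing tuples of even exponents, with 1's excluded, of total degree at most a given bound. With maximum signature degree 22 it reproduces the count of "all 195 signatures" mentioned earlier in the thread (counting the empty signature):

```python
def even_signatures(max_deg, largest=None):
    """All signatures (non-increasing tuples of even exponents >= 2)
    of total degree at most max_deg, including the empty signature ()."""
    if largest is None:
        largest = max_deg
    out = [()]
    part = largest - largest % 2           # largest even part to try first
    while part >= 2:
        if part <= max_deg:                # recurse on the remaining degree,
            out.extend((part,) + rest      # never exceeding the current part
                       for rest in even_signatures(max_deg - part, part))
        part -= 2
    return out

sigs = even_signatures(22)
assert len(sigs) == 195   # matches the count reported for max signature degree 22
assert (22,) in sigs and (2, 2) in sigs
```

Enumerating signatures this way makes experiments like the ones above (dropping all signatures with first element at most 10, say) a simple filter over the list.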

17 June, 2014 at 2:52 am

Eytan Paldi: Page 50: in the definition, it would be clearer to have the entire argument of the expectation inside parentheses.

In the definition (last line), should be .

Remark: on page 51 (third line), the threshold may be optimized as an additional free parameter.

[Corrected, thanks – T.]

17 June, 2014 at 8:25 am

Eytan Paldi: It seems that in the definition, the argument of the expectation should also include the integral (since its integration domain is random – depending on ). Therefore, the new “)” should be moved to enclose the integral as well.

[Corrected, thanks – T.]

19 June, 2014 at 8:37 pm

Polymath8: wrapping up | What's new[…] paper, the nearly finished Polymath8b paper, and the retrospective paper), superseding the previous Polymath8b thread (which was quite full) and the Polymath8a/retrospective thread (which was more or less […]