The main purpose of this post is to roll over the discussion from the previous Polymath8 thread, which has become rather full with comments. As with the previous thread, the comments to this thread are mainly concerned with writing up the results of the Polymath8 “bounded gaps between primes” project; the latest files on this writeup may be found at this directory, with the most recently compiled PDF file (clocking in at about 90 pages so far, with a few sections still to be written!) being found here. There is also still some active discussion on improving the numerical results, with a particular focus on improving the sieving step that converts distribution estimates into weak prime tuples results. (For a discussion of the terminology, and for a general overview of the proof strategy, see this previous progress report on the Polymath8 project.) This post can also contain any other discussion pertinent to any aspect of the Polymath8 project, of course.

There are a few sections that still need to be written for the draft, mostly concerned with the Type I, Type II, and Type III estimates. However, the proofs of these estimates exist already on this blog, so I hope to transcribe them to the paper fairly shortly (say by the end of this week). Barring any unexpected developments, or a major reorganisation of the paper, it seems that the main remaining task in the writing process would be proofreading and polishing, with attention turning from the technical mathematical details to expository issues. As always, feedback from casual participants, as well as from those who have been closely involved with the project, would be very valuable in this regard. (One small comment, by the way, regarding corrections: as the draft keeps changing with time, referring to a specific portion of the paper using page numbers and line numbers can become inaccurate, so if one could use section numbers, theorem numbers, or equation numbers as references instead (e.g. “the third line after (5.35)” instead of “the twelfth line of page 54”), that would make it easier to track down specific portions of the paper.)

Also, we have set up a wiki page for listing the participants of the polymath8 project, their contact information, and grant information (if applicable). We have two lists of participants; one for those who have been making significant contributions to the project (comparable to that of a co-author of a traditional mathematical research paper), and another list for those who have made auxiliary contributions (e.g. typos, stylistic suggestions, or supplying references) that would typically merit inclusion in the Acknowledgments section of a traditional paper. It’s difficult to exactly draw the line between the two types of contributions, but we have relied in the past on self-reporting, which has worked pretty well so far. (By the time this project concludes, I may go through the comments to previous posts and see if any further names should be added to these lists that have not already been self-reported.)

## 130 comments

3 September, 2013 at 12:18 am

somedude: Wouldn’t it be easier to set up a git repo? Then you could refer to equation (31) of git version a1234.

3 September, 2013 at 4:31 am

someotherdude: When I am a world famous professor nearing forty years old with a family, I will start spending my remaining spare time learning how to use new version control software.

3 September, 2013 at 9:11 am

Terence Tao: I have used git (and also Subversion) in previous collaborations, and did weigh all three options when deciding how to set up the repository. There is a definite tradeoff here between ease of use/reliability (particularly among casual users of this sort of technology) versus powerful functionality, with Dropbox at the former end of the spectrum, git at the latter, and Subversion in between. (In previous Polymath projects, we also experimented with shared Google documents and even the Polymath wiki, but these platforms were decidedly unsatisfactory for this purpose.)

Personally, Subversion happens to be my preferred platform for most of my collaborative writing projects, being easier to set up (and easier to troubleshoot) than git, but with more version control functionality (e.g. merging, update annotation, and rollback) than Dropbox. My experience with git has been that while there are some advantages to that platform (particularly with regard to merging and with writing offline), the decentralised git model is not initially intuitive to grasp, and did cause a number of inconveniences for me when I started using it (for instance, I spent a while only updating my local repository and not pushing it out to a repository shared with my co-author, leading to a certain amount of confusion; also, when I switched to a new laptop, I had a non-trivial amount of difficulty in synchronising the repositories).

In the end, I went with Dropbox as being good enough for this particular project, as it is extremely easy to set up and very reliable, with the files in the repository easily accessible to casual participants. (It is true that older versions of files cannot be accessed in this way, but I view this as a minor inconvenience only, since as mentioned in the post above we can simply use other references than page numbers to locate various pieces of text, and there is little need for us to refer to outdated versions of the text.) Currently we have five people actively editing the Dropbox files, plus several more contributing corrections and suggestions; as with other Polymath projects, the set of participants is not known in advance, and in particular I did not know what version control platforms the participants were already comfortable with (not all mathematicians are “early adopters” of technology). As such, ease of use became a more significant concern than functionality, and this was the deciding factor in my choice to use Dropbox. (Also, the platforms are not completely exclusive; I know that one of the editors is synchronising the Dropbox folder with a local Subversion repository, for instance.)

3 September, 2013 at 8:22 am

Gergely Harcos: Two lines before Lemma 6.7, Polyá should be Pólya. In footnote 12, Weil should be Weyl, and the same typo occurs where the footnote is called.

[Corrected, thanks – T.]

3 September, 2013 at 8:24 am

Gergely Harcos: In the abstract, Revesz should be Révész.

[Corrected, thanks – T.]

3 September, 2013 at 12:45 pm

Terence Tao: A brief update on where I’m at with the editing: I’m currently working on the pre-Deligne Type I/II estimates (typei-ii.tex), and plan to turn to the post-Deligne Type I estimate (typei-advanced.tex) next, which may require some coordination with Philippe and Emmanuel, who have been working on deligne.tex. Since the three Type I/II estimates we prove (the pre-Deligne Type I, the pre-Deligne Type II, and the post-Deligne Type I estimate) all involve the same initial reductions to a certain exponential sum estimate, I have decided to perform the reductions in a unified manner, and then estimate the three resulting exponential sums separately. This part of the paper may need some polishing, particularly with regard to the motivation of why we perform various manipulations in the argument; we are of course following Zhang’s argument in many places, but we can add some further commentary (e.g. motivating the dispersion method).

I’m not going to touch the other files much for now, although I do plan to collect a number of estimates on the divisor function (which were called “crude estimates” in previous blog posts) that end up being used repeatedly throughout the paper, and place them in a lemma near the start of the paper (right now, each one basically appears next to where it is first used).

3 September, 2013 at 12:49 pm

Eytan Paldi: In chap. 8, it is not clear why the standard symbol “$\neq$” appears differently.

4 September, 2013 at 12:40 pm

Terence Tao: Hmm, this seems to be a result of multiple authors working on the paper :) Some of the authors prefer ($\not =$) to ($\neq$), though strangely both versions look identical in this blog’s LaTeX renderer. This is of course a trivial issue, but I guess we should standardise it before we finalise the paper. (On a related note is the question of whether to use American English or Commonwealth English for the text; given that (as far as I am aware) I am the only active participant originally from a Commonwealth country, I am happy to defer here to the American English standard.)

3 September, 2013 at 5:52 pm

pigh3: It is probably better to put Definition 1.6 before Lemmas 1.4 and 1.5, since they already use these notations.

[Good point; I’ve moved the definition up a bit. -T.]

4 September, 2013 at 12:01 am

Wouter Castryck: In the paragraph following the statement of Theorem 2.3: “… for selected values of H …” should be “… for selected values of k_0 …”.

[Corrected, thanks – T.]

4 September, 2013 at 9:07 am

Terence Tao: An update on a claim I had made previously, stating that the proof of the Type II estimate could be modified to give enough Type I estimates to give a proof of Zhang’s theorem that would be marginally shorter than the current “minimal proof”, albeit with a worse value of H. It turns out that the story is more complicated than this. If one adapts the Type II argument (which currently does not use the q-van der Corput A-process) to the Type I case, it turns out one only gets Type I estimates in a range of $\sigma$ that is not useful for us, because $\sigma$ needs to be at least as large as 1/10. If one uses the q-van der Corput A-process once (which is what one is currently doing for the existing Deligne-free Type I estimate), the numerology improves enough to allow $\sigma$ to exceed 1/10, but not 1/6. So one would still need some Type III estimates to close the argument. In order to raise $\sigma$ above 1/6 one would need to apply van der Corput twice instead of once; this could be done, and would lead to some non-trivial range of $\sigma$, but this argument is not unambiguously simpler than the existing argument (requiring, among other things, double dense divisibility). I will still add a remark about this in the section on Type I/II estimates, though.

5 September, 2013 at 12:28 am

Aubrey de Grey: Thank you again, Terry. (Note that the words “substitute for” are duplicated in line 4 of Remark 7.10 as currently written.) Relating this back to earlier discussions, am I correct in saying that this essentially means that one would need to elevate the Type II analysis to what was termed “Level 4” on the wiki before (or in conjunction with) adapting it to work as a Type I argument? I ask because of the two additional levels (5 and 6) applicable to Type II that were not explored yet; can we say that those levels do not meaningfully alter what you say here (e.g. because they can’t be applied without first applying Level 4, or because they won’t alter the numerology in a manner allowing sigma to exceed 1/6, or because they too would make the overall argument cease to be appreciably simpler than it already is)?

5 September, 2013 at 1:17 pm

Terence Tao: Yes, this is what one would have to do to make the Type II argument stretch to cover all cases. What I elected to do instead is to insert into the paper an older Type I estimate (what we called “Level 3” estimates, in contrast to the “Level 6” and “Level 5” Type I estimates that are the best we have in the Deligne-free and Deligne-using settings respectively). When paired with the existing Type II estimate, this gives a slightly simpler proof of Zhang’s theorem (and the older Type I estimate can also be used to motivate the more complicated Type I estimates we have).

I’ve finished with typei-ii.tex for now and will move on to typeiii.tex (basically importing what I wrote for the blog on this) before handing that section off to Philippe. While entering in the Type I and II arguments I discovered some minor errors in the previous text (there were some misplaced factors in the completion of sums step), but fortunately they did not affect the numerology.

5 September, 2013 at 2:11 pm

Terence Tao: OK, I have pasted in the Type III arguments, which ended up being easier than anticipated. I had to modify deligne.tex slightly to do so (the hyper-Kloosterman correlation estimate I needed was not quite there already). I noticed that the notation for the hyper-Kloosterman sum was a little different from what I expected (indexed by a field instead of a modulus, which makes sense in the prime power case but maybe not so much for the composite modulus case). Anyway it should be checked for notational consistency.

The Type III argument is unfortunately a lot messier than the clean sketch that Philippe provided some time ago. This is partly because we are also taking advantage of averaging in the “m” parameter, and partly because there are some complications due to the fact that we cannot always guarantee certain moduli to be coprime to each other. It may make sense to provide a sketch of a Type III argument (along the lines of the original sketch in http://blogs.ethz.ch/kowalski/2013/06/25/a-ternary-divisor-variation/ , or perhaps my comment at https://terrytao.wordpress.com/2013/06/23/the-distribution-of-primes-in-densely-divisible-moduli/#comment-236652 ). Actually, with regard to the latter, it may make sense to discuss how the Type III approach given here does not seem to give good numerology for Type IV or Type V sums.

Anyway, I’m “releasing” typeiii.tex (and deligne.tex) so that Philippe may work on it next; I’ll turn to typei-advanced.tex next. The paper is already at 124 pages and may eventually top out at 140-150 pages!

5 September, 2013 at 11:34 pm

PhM: “I noticed that the notation for the hyper-Kloosterman sum was a little different from what I expected (indexed by a field instead of a modulus, which makes sense in the prime power case but maybe not so much for the composite modulus case). Anyway it should be checked for notational consistency.”

———————————-

I think we can safely switch to dependency on p only: the dependency on the field was in anticipation of checking that the sheaf in the improved Type I estimate had no quadratic phase, for which we would have needed to use Hooley’s result over all finite extensions (in fact that could have been avoided using the geometric reformulation of that result in Katz’s Orsay lectures and spectral sequence arguments); but since we now have a simpler alternative treatment of the prime-modulus Type I+ sum, we can work with the base field F_p only and write the dependency as that of the modulus.

5 September, 2013 at 11:52 pm

PhM: “It may make sense to provide a sketch of a Type III argument (along the lines of the original sketch in…”

———

Absolutely; maybe also displaying model cases for the treatment of Type I, I+ and Type II sums might be a good option: maybe having them all together somewhere in an introductory section, at the expense of making the paper even longer.

6 September, 2013 at 2:18 pm

Terence Tao: Yes, this is a good idea. I’ll add a section giving heuristic proofs of the Type I/II and III estimates to the paper, after I have added in the advanced Type I estimate.

7 September, 2013 at 7:14 am

Eytan Paldi: It may be helpful to add (perhaps in the introduction) a diagram showing the connections and implications among the important theorems (and perhaps also an index for nonstandard notations and definitions).

[Good idea; I’ll try to add something in this direction. -T.]

7 September, 2013 at 9:47 am

Terence Tao: I added a figure (Figure 1) indicating the general logical flow of the argument, although I had to oversimplify a little bit.

4 September, 2013 at 10:07 am

Gergely Harcos: In the proofs of Proposition 8.10 and Theorem 8.17 (pages 95 and 98), Parceval should be Parseval.

[Corrected, thanks – T.]

4 September, 2013 at 11:42 am

Terence Tao: I’ve added a comment in Remark 2.9 comparing the type of estimates that Zhang (and we) prove to those proven in the earlier works of Bombieri, Fouvry, Friedlander, and Iwaniec, but I am not 100% confident in my understanding of the literature (and in particular whether there is additional work that should be cited here), or of the relevance of the theory of automorphic forms (my understanding here is that most of the previous work uses this theory, but this also largely restricts them to the regime of a fixed modulus a). Anyway, if someone who is familiar with the literature could have a look at that remark, that would be great.

5 September, 2013 at 11:35 pm

PhM: Emmanuel and I can look through, but certainly Fouvry will help on this.

8 September, 2013 at 10:27 am

Emmanuel Kowalski: Typo: “fixed modulus a” —> “fixed residue class a”.

Concerning the remark, it is true that automorphic techniques, for the moment, can not deal with a maximum over the type of residue classes that arise in the Goldston-Pintz-Yildirim method.

[Corrected, thanks – T.]

4 September, 2013 at 12:22 pm

Gergely Harcos: In the display before (7.19), should be . (Actually this was part of #5 in https://terrytao.wordpress.com/2013/07/07/the-distribution-of-primes-in-doubly-densely-divisible-moduli/#comment-238890 but it was missed somehow.)

I got busy with other things (e.g. semester starts), but hope to be able to go through the whole paper carefully when it becomes final.

[Corrected – and thanks for all your careful reading of the blog posts and texts! -T.]

4 September, 2013 at 1:52 pm

Wouter Castryck: Hi, I was wondering whether it would make sense to explicitly include an admissible 632-tuple of diameter 4680 in the paper, in addition to a reference to the k-tuples database? It’s an important proof ingredient, and it would make the article somewhat more self-contained. We could describe it as “start from the interval [a,b], sieve 1 mod 2, sieve 0 mod all primes p up to …” and so on. This would be a more compact description than just listing the numbers, and it would make it easier for the reader to verify admissibility (I think a patient person could do it by hand). But it would still be a large ugly chunk of text, so that’s the consideration to be made here.

4 September, 2013 at 3:07 pm

Terence Tao: If there were indeed a compact description, that would be nice, though my understanding is that after the first few primes the sieve no longer follows a nice pattern. Another option is to attach a plaintext file of the tuple to the arXiv submission (I’ve not done this before, but I assume it is straightforward to do – some journals also accept supplementary data files of this sort).

As “eye candy”, one could also try to represent the tuple as an image, perhaps in 23 or so rows of 210 pixels (in order to see the mod 2, 3, 5, and 7 structure), or perhaps some other arrangement would be more aesthetic. (The image I had in mind was something like http://commons.wikimedia.org/wiki/File:Primes_-_distribution_-_up_to_3_x_17_primorial.png or the images in http://matheminutes.blogspot.com/2011/08/prime-numbers-why-all-fuss.html ). But perhaps some other participants may have some more visually appealing ideas for picturing the tuple.
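As a toy illustration of this idea (my own sketch, not from the paper; the `render` function and its conventions are invented here), one could first lay the tuple out in fixed-width rows of text before producing an actual image:

```python
def render(H, width=210, mark="#", blank="."):
    """Return rows of text covering [0, max(H)], one character per
    integer, with the elements of H marked."""
    occupied = set(H)
    top = max(H)
    return ["".join(mark if n in occupied else blank
                    for n in range(start, min(start + width, top + 1)))
            for start in range(0, top + 1, width)]

# Tiny demo with the admissible 3-tuple (0, 2, 6):
for row in render([0, 2, 6], width=5):
    print(row)
```

With `width=210` each column of the grid lies in a fixed residue class mod 2, 3, 5, and 7, which is what would make the mod 2, 3, 5, 7 structure of the tuple visible.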

4 September, 2013 at 6:10 pm

Andrew Sutherland: One can give a pretty compact description by specifying the residue classes to sieve in the interval [0,4680]. For example, sieving just the 25 residue classes:

1(2) 2(3) 1(5) 1(7) 3(11) 11(13) 7(17) 7(19) 9(23) 15(29) 13(31) 6(37) 16(41) 19(43) 21(47) 46(53) 27(59) 5(61) 31(67) 33(71) 23(73) 39(83) 47(89) 59(97) 74(193)

yields an admissible 632-tuple.
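For readers who want to check this mechanically, here is a short script (my own sketch, not part of the paper; the function names are invented) that performs the sieve just described and tests admissibility:

```python
def sieve_interval(lo, hi, classes):
    """Remove every n in [lo, hi] with n ≡ r (mod p) for some (r, p)."""
    return [n for n in range(lo, hi + 1)
            if all(n % p != r % p for r, p in classes)]

def is_admissible(H):
    """H is admissible iff, for every prime p, H misses some residue
    class mod p; only primes p <= len(H) need to be checked, since a
    larger prime has more classes than H has elements."""
    k = len(H)
    is_prime = [False, False] + [True] * (k - 1)
    for i in range(2, int(k ** 0.5) + 1):
        if is_prime[i]:
            is_prime[i * i::i] = [False] * len(is_prime[i * i::i])
    return all(len({h % p for h in H}) < p
               for p in range(2, k + 1) if is_prime[p])

# The 25 sieved classes r(p) listed in the comment above.
classes = [(1, 2), (2, 3), (1, 5), (1, 7), (3, 11), (11, 13), (7, 17),
           (7, 19), (9, 23), (15, 29), (13, 31), (6, 37), (16, 41),
           (19, 43), (21, 47), (46, 53), (27, 59), (5, 61), (31, 67),
           (33, 71), (23, 73), (39, 83), (47, 89), (59, 97), (74, 193)]

H = sieve_interval(0, 4680, classes)
print(len(H), H[-1] - H[0])  # 632 survivors, diameter 4680
```

Note that 0 and 4680 both survive the sieve, so the diameter of the resulting tuple is exactly 4680.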

4 September, 2013 at 6:19 pm

Andrew Sutherland: A slightly more compact version would be: for the first 21 primes sieve the classes 1 2 1 1 3 11 7 7 9 15 13 6 16 19 21 46 27 5 31 33 23 (ordered by prime), then sieve 39(83) 47(89) 59(97) 74(193).

5 September, 2013 at 4:41 pm

Wouter Castryck: Hi, I have now included the above example at the beginning of Section 3, just because it’s the most compact description and because it matches the version in the prime tuples database (I think?). I’m not sure if it’s ok like this; it’s just a suggestion…

But Drew’s second example below is indeed more conceptual. We could include it as an example in the greedy Schinzel sieve part.

5 September, 2013 at 5:15 pm

Andrew Sutherland: The tuple in the database does not match either example, but that is easily changed. My suggestion would be to use the second example I gave (in my Sep 5 message below). I can then update the tuple listed in the database to be the translation of that tuple from [742,5246] to [0,4680].

I actually have a file with all 426 examples in it (this is the same number of tuples that Engelsma found for k=632 with diameter 4680, so it seems reasonable to think it is a complete list). We could also provide a link to this file if we wish.

4 September, 2013 at 7:03 pm

Terence Tao: Thanks! Do you know if there is any particular structure to these residue classes, in particular if they resemble a shifted Schinzel sieve in the sense that one can send many of the residue classes to zero by shifting by a non-enormous shift? Of course by the Chinese remainder theorem there is some exponentially large shift after which one is just sieving out the multiples of 25 primes in an interval of diameter 4680, but I am curious to know if the example here has any kinship with the best examples we have for larger values of k_0 and H. (But perhaps k_0 is just too small for the asymptotic structure to kick in.)

4 September, 2013 at 9:13 pm

Terence Tao: Actually, if one subtracts 1961, the tuple looks a bit like an asymmetric Hensley-Richards sieve, viz. one is now starting with [-1961,2719] and sieving out 0(2), 0(3), 0(5), 0(7), 0(11), 0(13), 1(17), 3(19), 3(23), -3(29), 5(31), 6(37), -18(41), -7(43), -13(47), -7(53), 13(59), -4(61), 13(67), -11(71), -40(83), 44(89), 38(97), 43(193). The first six residue classes already seem to sieve out all but about 900 of the elements of the interval, leaving behind numbers that are plus or minus the product of one or two primes that are at least 17, and the remaining residue classes look fairly random as far as I can tell. (But one could have sieved out 0(193) instead of 43(193).)

5 September, 2013 at 3:53 am

Andrew Sutherland: The example I gave above does not correspond to a Schinzel/greedy sieve with a small shift, but it is just one of 426 ways to get an admissible 632-tuple, and many of these *can* be obtained by starting with a shifted Schinzel sieve and then switching to a greedy sieve, as described in Section 3.5. But one has to break ties in just the right way: simply picking the smallest (or largest) class in [0,p-1] doesn’t work; however, the adjustment process described in Section 3.6.1 will find choices that do work. For example, here is another way to get an admissible 632-tuple:

Sieve the interval [746,5246] of the classes 1 mod 2 and 0 mod p for odd primes p up to 89, then sieve the classes 20(97) 96(101) 34(103) 88(107) 70(109) 0(113) 73(127) 10(131) 77(137) 70(139) 123(149) 75(157) 82(163) 144(167).

All of the sieved classes correspond to minimally occupied classes (greedy choices). It is not necessary to sieve mod 151.

5 September, 2013 at 12:59 pm

Terence Tao: Thanks for this second example! Conceptually this example would be a closer fit to the discussion of how we find tuples, so it may serve as a better example, even though more congruence classes need to be sieved out. (But we can certainly mention the fact that there are multiple solutions here, which do not bear any obvious relation to each other.)

5 September, 2013 at 3:57 am

Andrew Sutherland: One point of clarification on my second example: while the sieved classes mod primes greater than 89 are all greedy choices, the classes 0 mod 13, 17, 19, and 23 do not correspond to greedy choices. This illustrates why it is important to start with a Schinzel sieve at the small primes (say, those less than the square root of the interval size) and then switch to greedy sieving, as explained in 3.5.

9 September, 2013 at 5:02 am

Andrew Sutherland: There is a minor typo in what I wrote above: the interval should be [746,5426], not [746,5246].
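Taking this correction into account, the Schinzel-then-greedy example can also be checked mechanically; the following sketch (my own illustrative code, not from the paper) sieves the corrected interval [746,5426] with the classes listed in the Sep 5 comment:

```python
def sieve_interval(lo, hi, classes):
    """Remove every n in [lo, hi] with n ≡ r (mod p) for some (r, p)."""
    return [n for n in range(lo, hi + 1)
            if all(n % p != r % p for r, p in classes)]

# Schinzel part: sieve 1 mod 2 and 0 mod every odd prime up to 89.
odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
              43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89]
classes = [(1, 2)] + [(0, p) for p in odd_primes]
# Greedy part: the minimally occupied classes listed in the comment
# (note that 151 is indeed skipped).
classes += [(20, 97), (96, 101), (34, 103), (88, 107), (70, 109),
            (0, 113), (73, 127), (10, 131), (77, 137), (70, 139),
            (123, 149), (75, 157), (82, 163), (144, 167)]

H = sieve_interval(746, 5426, classes)
print(len(H), H[-1] - H[0])  # 632 survivors, diameter 4680
```

Both endpoints 746 and 5426 survive the sieve, so the diameter comes out to exactly 5426 − 746 = 4680, confirming that [746,5246] was indeed a typo.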

4 September, 2013 at 6:01 pm

pigh3: Proof of Lemma 2.13(iii): induction on instead of ?

[Corrected, thanks – T.]

4 September, 2013 at 10:23 pm

Gergely Harcos: Due to the newly inserted case (i) in Theorems 7.1 and 7.7, the last sentences of these theorems, as well as several in-text references to the various cases of the theorems, need to be updated ((i) became (ii), (ii) became (iii), (iii) became (iv)).

[Fixed, thanks – T.]

5 September, 2013 at 11:58 am

Eytan Paldi: In the third line below the proof of Prop. 7.12, the two terms should be inside parentheses (and perhaps also moved to the line above them).

[Corrected, thanks -T.]

5 September, 2013 at 5:25 pm

pigh3: Definition 2.18(iii), “is said to be smooth” after is not needed?

[Corrected, thanks – T.]

5 September, 2013 at 5:42 pm

pigh3: End of first line of Remark 2.22, qualitat(l)ive misspelt.

[Corrected, thanks – T.]

5 September, 2013 at 5:46 pm

pigh3: Same remark, line -9, “devisable” should be “divisible”.

[Corrected, thanks – T.]

5 September, 2013 at 8:44 pm

Anonymous: I do not know whether you have considered the matter philosophically; but when things are really pushed to their extremes, something qualitatively different would inevitably show up… (say, a gate to another field).

Yiwei LI

10 September, 2013 at 1:28 am

Anonymous: Dear everyone,

First of all, I hold a positive attitude towards the efforts of the polymath8 team to make further progress on this hard problem. I’m not an expert on this kind of problem, but that does not prevent me from following its progress, as an associate professor working in some area of applied math. Quite often, researchers keep away from an almost-solved problem for understandable reasons (I do not comment on whether this problem is an almost-solved one), while authors working on “big problems” often encounter recognition problems, also for understandable reasons. I would add that pushing things to their extremes might be a method for probing the borders of adjacent research areas. When the borders are found, a gate should not be far away… I felt free to leave my earlier comment, not to belittle the work here of course, but to share the perspective of an outsider…

My comments do not have implications for honors or anything of the sort; it’s just for fun. (I leave my name just to be accountable.)

Good will,

Yiwei LI

6 September, 2013 at 11:52 pm

Eytan Paldi: In (7.20) (and later in some related expressions) a (closing) “|” is missing.

[Corrected, thanks -T.]

7 September, 2013 at 6:34 am

Daniel Hill: There is a typo at 8.6, line 6: “Fourier tranform” for “Fourier transform”.

[Corrected, thanks – T.]

7 September, 2013 at 8:00 am

Eytan Paldi: In the line above (10.17), the second “” should be deleted.

[Corrected, thanks -T.]

8 September, 2013 at 11:40 am

Eytan Paldi: In the 14-th line below (10.21), the expression is too long for a single line.

[Fixed, thanks – T.]

8 September, 2013 at 5:13 pm

pigh3: p54, line -2, “he basic intuition” should be “the …”.

Purely nitpicking: p55, 4 lines below (4.54), maybe $1.19 \times 10^{-4}$ is better than “1.19E-4”, etc. Same in Tables 6 & 7, although I don’t feel as bad about them when they are in tables.

[Corrected, thanks – T.]

9 September, 2013 at 12:44 am

Eytan Paldi: In the second line below (10.22), a (closing) “|” is missing.

[Corrected, thanks -T.]

9 September, 2013 at 8:37 am

Terence Tao: I’ve just finished draft versions of all the Type I, II, III estimates, so it looks like we’re at a point where most of the mathematical content needed in the paper has been written up, in draft form at least. I needed one small addition to the Deligne section, specifically Theorem 8.18, which gives square root cancellation for the sum

where .

It turns out that I also need square root cancellation for the sum

Now this is an easier sum so presumably the arguments that control the first sum also control the second, but I didn’t actually put in the proof, as I thought it would be better left to one of the ell-adic experts :-).

A few small things came up when writing the proofs of the Type I, II, III estimates. Firstly, the notion of a smooth coefficient sequence wasn’t quite optimal (sometimes one wants to consider smooth sequences adapted to intervals other than [cN,CN], e.g. [M,M+N]), and I will have to think about how to set up the right definitions here. Related to this, the completion of sums lemma is also not stated in the way that is best suited for applications, and I will probably have to reformulate it.

When writing the proposition that gives iterated van der Corput estimates for exponential sums, I realised that we actually never use the van der Corput method more than once in the applications, so it is probably simpler to just give a version of the proposition that performs a single van der Corput step, and add a remark that the estimate can be iterated but gives inferior numerology.

There were also some errors in the blog post on the improved Type I estimate which scared me for a while (I had put some terms inside an absolute value sign when instead they could only go outside), but fortunately it ended up not being an issue (these terms could stay outside the absolute value sign for a few more steps until a Cauchy-Schwarz was applied, at which point they could re-enter).

I’m no longer doing major editing on any one file, but will be reviewing the whole paper looking for things to clean up or make consistent with the rest of the paper. So it should be safe for others to make small edits to the various section files as well (except perhaps for narrow.tex, optimize.tex, and deligne.tex which are being edited by others).

9 September, 2013 at 8:48 am

Philippe Michel: I will add it, but that is essentially done: to prove the bound for the product of K_f we had to show (via computing its FT) that K_f does not contain a phase of degree <= 2, and in particular does not contain a linear phase.

9 September, 2013 at 10:48 am

Philippe Michel: done

9 September, 2013 at 1:18 pm

Eytan Paldi: The lower bound may be added to Table 1.

[Added, thanks – T.]

9 September, 2013 at 4:34 pm

Eytan Paldi: Table 4 should be moved outside of Remark 2.17.

9 September, 2013 at 4:39 pm

Eytan Paldi: Table 3 should be outside of Theorem 2.6.

9 September, 2013 at 5:30 pm

Terence Tao: Hmm, I’m not sure how best to sort this out, because LaTeX uses its own algorithms to place tables; they can be overridden to some extent, but it might not be worth doing so, because the format of the paper may well change if and when it becomes published (most journals and monograph series have their own in-house style files). There is also the possibility that this sort of problem solves itself as the paper continues to evolve (but I think it is getting close to “first draft” status – most of the sections are written now, and there doesn’t seem to be much of a call for any major reorganisation of the paper).

9 September, 2013 at 5:01 pm

Eytan Paldi: In the fourth line below (4.57), the exponents in the O-terms should be

(instead of ).

[Fixed, thanks – T.]

9 September, 2013 at 6:26 pm

xfxie: Goldston-Yıldırım-Pintz -> Goldston-Pintz-Yıldırım (one in the abstract, one on page 14, two lines above (2.6))

Yildirim -> Yıldırım (Page 3, three lines below Theorem 1.2)

[Fixed, thanks – T.]

9 September, 2013 at 6:53 pm

Eytan Paldi: In (4.55), the RHS should be the maximum of its current value and , and in the line below it, should be replaced by .

(This is because is possible – as case (i) in a previous comment of mine.)

9 September, 2013 at 7:32 pm

xfxie: Changed, thanks.

10 September, 2013 at 5:50 am

Wouter Castryck: Small typo at the top of page 7, “… for any real number we write .”: that should be .

[Corrected, thanks – T.]

10 September, 2013 at 6:22 am

pigh3: Reference #49, “Perel’muter” instead of “Perel\’muter”.

[Corrected, thanks – T.]10 September, 2013 at 8:20 am

Terence Tao: Regarding where to submit this paper: I just got an email from Andrew Granville, in his role as an editor of Algebra and Number Theory, soliciting the paper, saying that the projected length of 150 or so pages is not going to be a problem. This seems like a good choice to me (A&NT is a sister journal of Analysis & PDE, where I am an editor, and is run by the low-cost publisher MSP). (For obvious reasons I would not suggest a journal that I myself edit; otherwise I would have proposed Forum of Mathematics, Sigma.)

We don’t have to decide immediately on this, since the paper is still not quite even at first draft stage (although it is getting close…), but we can certainly start a discussion on what to do with this paper now. Another possibility is Memoirs of the AMS, which is specifically focused on long monographs, but perhaps we now have the opposite problem that our paper is a little short for a monograph…

10 September, 2013 at 9:32 am

Philippe Michel: I am a member of the editorial board of ANT, but I am sufficiently “diluted” amongst the contributors to Polymath8 and amongst the rest of the editorial board of ANT that this should not cause a problem; therefore I would be happy with ANT (a great journal which accepts only great papers).

10 September, 2013 at 10:15 am

Gergely Harcos: I think that both choices would be excellent (ANT or Memoirs). I also think that the paper would be long enough for Memoirs (there have been several shorter papers there). Of course the invitation by Andrew and Philippe should be esteemed, so I also vote for ANT.

10 September, 2013 at 9:05 am

Eytan Paldi: At the end of Chapter 10, it seems clearer to add that this completes the proof of Proposition 10.2.

[Added, thanks – T.]

10 September, 2013 at 10:43 am

Eytan Paldi: In the 20th line of Section 1.1, one word in “… established derived …” should be deleted.

[Fixed, thanks – T.]

10 September, 2013 at 1:40 pm

Terence Tao: I’ve written a draft for the section (Section 5.1) arguing that the Heath-Brown identity is essentially optimal for the purposes of reducing distribution estimates for the von Mangoldt function to the verification of estimates of various “Types” (Type I, Type II, Type III, etc.). This section is a little different from the rest of the paper in that it is written in a nonrigorous and informal fashion; it turns out that formalising exactly what a “Type” of estimate is, and whether a given decomposition allows one to reduce distribution estimates into such types, is rather tricky, and I didn’t want this section to overwhelm the rest of the paper since it is not part of the main argument (but rather an explanation as to why a certain component of that argument is not expected to be improvable). I’m still a little uncertain exactly what to do with this section, so any comments or suggestions on it will be welcome.

11 September, 2013 at 11:04 am

Aubrey de Grey: One possibility might be to move that subsection to Section 11. In fact, and in contrast to my original suggestion that it be just a short concluding summary, perhaps Section 11 could be the repository for all the various remarks and discussions (some of them extensive, some probably very brief, some of them already present elsewhere in the paper, some not) about why this or that aspect of the current argument is likely to be rather refractory to further improvement. The following (at least) seem to fit, though probably not in the following order:

– why Heath-Brown is optimal for partitioning into [a sigma-dependent number of] Types.

– what can be said quantitatively about the “Type V” barrier (and maybe, for completeness, similarly for Type IV). For example, on 20th July you noted that the numerology was unfavourable if one tried using either Type I or Type III approaches, and you elaborated on July 9th and 10th for Types I and III respectively, but I believe those discussions were based on Level 5 and Level 3 arguments respectively; readers trying to identify lowest-hanging fruit may thus appreciate guidance as to whether it is worth retrying these approaches starting from higher Levels (including those in Type III that have not been explored at all yet). I think all you’ve said about this on the blog was “The numerology changes a bit if we use the latest Type III estimates [i.e. level 4] but I think the general picture is more or less the same” – apologies if I missed something.

– a summary of why efforts to improve the constraints on kappas enough to wrestle k0 down to 631 now seem to be in vain.

– a summary of the Farkas/Pintz/Revesz argument for the optimality of their version of the GPY method.

– maybe something from Drew about the realistic chances of further improvements to the headline 4680 number for k0=632.

– possibly some remarks (if you, as the one who spearheaded getting it to where it is, feel you have anything to say on it) about how work on improving the Type I constraint has reached diminishing returns.

11 September, 2013 at 11:32 am

Aubrey de Grey: Immensely tentative follow-up suggestion: another item that might fit here is whether there is any scope for further improving varpi by extending the argument that allowed it to be “split” into varpi and delta (which you first observed on June 4th). Readers may be inspired to look for ways to split it again. Is it “obvious” that there are none?

11 September, 2013 at 11:43 am

Eytan Paldi: Concerning your remark about the upper bounds on the ‘s, I can say that I have found new upper bounds (of a different type! – decaying faster for large ) for and , but the problem was that in order to use them one also needs a good upper bound on . Only very recently I found a way to represent the sum of the J-dim. integrals appearing in the bound as a 1-dim. integral (I’m trying now to see its implication for .)

11 September, 2013 at 1:37 pm

Andrew Sutherland: I can say a bit about the 4680 bound for k0=632. The short version is that I think there are reasons to believe this bound may be tight. So far we have not been able to improve any of Engelsma’s bounds with k0 < 785, and we have found exactly the same 426 distinct admissible 632-tuples of diameter 4680 that Engelsma found, using entirely different methods (although to be fair, there are some specific diameters less than 4680 where we have found *more* admissible tuples than Engelsma found — I believe diameter 4178 is the smallest such case).

I do have some thoughts on improving the lower bounds for k0 in this range; here we can certainly do better than the 4104 lower bound currently listed in Table 6, even if we can't get all the way to 4680.

My post at http://sbseminar.wordpress.com/2013/07/02/the-quest-for-narrow-admissible-tuples/#comment-24335 gives a back-of-the-envelope estimate of the feasibility of proving lower bounds using a pruned exhaustive search, which for k0=632 is probably not feasible, but that doesn't mean something more clever wouldn't work.

But I'd like to play around with this a bit more (perhaps over the coming weekend) before putting anything in the article.

11 September, 2013 at 2:10 pm

Terence Tao: These are all good ideas. I am still working my way through the proofreading of the manuscript (just ran through deligne.tex, and am now going to go through typeiii.tex – I see that exponential.tex is currently being edited, so I promise not to touch that (or to further touch deligne.tex)), but when I get to Section 11, I will try to put some stubs at least for the points you raise, and maybe move over the Heath-Brown optimality discussion there too.

12 September, 2013 at 9:31 am

PhM: I am adding some extra mild material to deligne.tex and making some cosmetic edits here and there.

Whenever someone puts a boldface comment into the tex file, is it possible to add a tag like %[TODO]?

Otherwise it is hard to see if there is something, the file being huge.
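Assuming the %[TODO] convention is adopted, the tagged comments can then be listed with a standard grep one-liner, e.g.:

```shell
# List all tagged comments across the source files, with line numbers:
grep -n '%\[TODO\]' *.tex
```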

[I added a TODO tag to the current boldface comments -T.]

11 September, 2013 at 2:18 am

Andrew Sutherland: The inclusion/exclusion computation for using has finished, yielding an improved lower bound of 35,926,668. This value can now be added to Table 5.

11 September, 2013 at 2:20 am

Andrew Sutherland: Of course I meant to write in my comment above.

12 September, 2013 at 3:40 am

Wouter Castryck: OK, that has been added.

11 September, 2013 at 9:30 am

pigh3: Farkas-Pintz-Revesz: in the abstract, Pintz and Revesz are reversed; on p. 4, Revesz is missing accents.

[Fixed, thanks – T.]

11 September, 2013 at 2:36 pm

Eytan Paldi: In the line above Prop. 10.2, perhaps “the previous section” should be replaced by “Section 8” (or “Sections 8 and 9”).

[Fixed, thanks -T.]

12 September, 2013 at 3:02 am

Eytan Paldi: It is still unchanged.

[Oops; there had been a nearby reference to “previous section” that I had fixed instead. Now both should be fixed -T.]

12 September, 2013 at 11:19 am

Eytan Paldi: In the line above Proposition 10.2 it should be “Section 8” (instead of “Section 6”), because the proof of Prop. 10.2 depends mainly on Theorem 8.18.

[Corrected, thanks – T.]

11 September, 2013 at 8:08 pm

xfxie: In the second paragraph below Theorem 2.21:

parts (ii), (iii), and (iv) –> parts (iii), (iv), and (v)

parts (i) and (iii) –> parts (ii) and (iv)

[Fixed, thanks -T.]

12 September, 2013 at 6:41 am

Aubrey de Grey: I made a number of possibly misleading typos and omissions in the paragraph of my earlier post (https://terrytao.wordpress.com/2013/09/02/polymath8-writing-the-paper-ii/#comment-244691) that referred to the Type V barrier. Since it seems (as things stand) to be conceivable that this highlights an actual way forward, I thought it was worth a correction. The paragraph should ideally have read as follows:

“On 13th July (https://terrytao.wordpress.com/2013/07/07/the-distribution-of-primes-in-doubly-densely-divisible-moduli/#comment-238574), where you derived a constraint for Type IV that is comfortably dominated (for varpi in the region of current interest) by the current Type III constraint, you also recalled that the numerology for Type V was unfavourable if one tried attacking it using either Type I or Type III approaches, something you had explained in some detail on July 9th (https://terrytao.wordpress.com/2013/07/07/the-distribution-of-primes-in-doubly-densely-divisible-moduli/#comment-237995) for Type I and July 10th (https://terrytao.wordpress.com/2013/07/07/the-distribution-of-primes-in-doubly-densely-divisible-moduli/#comment-238177) for Type III. However, I believe those discussions were based on Type I Level 6 (“Deligne-free”) and Type III Level 3 constraints. Readers trying to identify lowest-hanging fruit may thus appreciate guidance as to whether it is worth retrying these approaches starting from higher Levels (including those in Type III that have not been explored at all yet).”

12 September, 2013 at 12:47 pm

Eytan Paldi: There are several sums (e.g. in Section 7) containing indices with unspecified range. Perhaps a remark on this abbreviated notation is needed in the introduction.

[Perhaps you are referring to the summation conventions in (7.14) or (7.15)? -T.]

13 September, 2013 at 2:13 am

Eytan Paldi: For example, in (7.8) it seems (though it is not explicitly stated) that the summation index p ranges over the primes (i.e. the default convention that certain letters like n range over the integers, while others like p are understood to represent primes).

Such general standard conventions (for certain letters) may be added to the basic notation subsection in the introduction.

[OK, added something in the intro about how summing over p implicitly runs over primes – T.]

12 September, 2013 at 1:36 pm

Eytan Paldi: In the third line below Remark 8.19, the “!” seems to be a typo.

12 September, 2013 at 4:15 pm

Eytan Paldi: The new upper bounds for and are (in a somewhat simplified, approximate form):

where is the “G-ratio” function which is strictly increasing on with .

It follows that the new bounds decay exponentially with faster than the current bounds. (I’ll send the very simple details in my next comment.)

Concerning the new bound on , it seems that the resulting (1-dim.) integral (representing the sum of J-dim. integrals) may be evaluated exactly (I’m checking the details).

BTW, I suggest adding a remark about the precision of the numerical data (in particular the numerical integrals) for the current upper bounds of the ‘s.

19 September, 2013 at 11:16 pm

Eytan Paldi: Derivation of the new upper bounds on :

Remark: In the following derivation we use the known properties (lemma 4.6) of the weight function (defined by (4.32)) and the definition of the function and its properties in my comment above.

1. In order to estimate , we first estimate

Therefore

Since the function is decreasing on (0,1), integration by parts gives

2. To estimate , we estimate similarly

So

Similarly, integration by parts gives

Remark: The new bounds above can be slightly improved, by observing that the function is decreasing on – which can be used in the final integration by parts – leading to similar bounds in which the factor is replaced by . (But simple analysis shows that for small the resulting gain is small.)

3. Comparing the new bounds to the current bounds:

Since the new bounds depend strongly on the function , we need to estimate its size. We start with the simpler :

From (9.5.4) in A&S, we have

Hence

so by (9.5.18) in A&S

(1)

In order to estimate we see from its definition (and lemma 4.5) that

(2)

whenever .

The last condition is equivalent to

(2') .

It is easy to verify that the last condition is equivalent to

(2'') .

From the definition of we have

Hence (2) takes the form

(3)

whenever satisfies (2').

From the definition of we have

(4)

Using the above estimates we get

(5) , where

(5')

This gives the estimates

(6)

where

(6')

Note that grows exponentially as

In addition, the second (-dependent) factor in decays exponentially with exponent (which is faster than the decay with exponent in the current bound for ). There is a similar estimate for , but with the same decay exponent () as for the current bound.

Remark: It seems that for sufficiently large the new bounds should perform better than the current ones (in particular the new bound for ). Therefore I suggest using them as well, by taking the minimum between them and the current bounds. There is a possibility of getting by using the new bounds.

20 September, 2013 at 1:44 pm

Eytan Paldi: Corrections to my comment above:

1. In the remark at the end of part 2, it should be “replaced by ” (instead of “replaced by ”).

2. In the line below (6′) it should be “grows exponentially” (instead of “decay exponentially”).

This is because – meaning that the estimate (5), which uses the second inequality in (3) (which is appropriate for small x), is too crude for small values.

[Previous comment corrected – T.]

20 September, 2013 at 4:41 pm

Terence Tao: Thanks for the calculations! Would it be possible to provide some code (similar to the Maple code we have been using for similar calculations in the past) for what these new bounds for give? I guess the first thing to try is to simply plug in the values of from Table 4 of the paper and see if there is any improvement there. (If not, this does not necessarily mean that these new bounds do not give an improvement, since perhaps the improvement comes from some other choice of parameters – but it would make it difficult for the new bound to end up giving an ultimately better bound, particularly since one also has to keep under control…)

20 September, 2013 at 5:56 pm

xfxie: Just tried to implement the Maple code for kappa1 and kappa2; it seems the values can drop dramatically, at least for k_0=1782:

tn := sqrt(thetat)*j;

f := t->(t^(1-k0/2)*j*(tn))^2*t^(k0-2)/(k0-2)!;

gk :=int(f(t), t=0..(1-theta), numeric);

kappa1 := evalf(gk*exp(-eta*theta)/(eta*theta));

kappa2:= (k0-1) * gk *exp(-2*eta*theta)/(2*eta*theta);

If the implementation is correct, there is a solution using the modification:

http://www.cs.cmu.edu/~xfxie/project/admissible/k0/sol_varpi168_1782_1.mpl

20 September, 2013 at 7:08 pm

xfxie: Found some bugs in the implementation:

eta := j^2 / (4 * (k0-1));

bj := evalf(BesselJ(k0-1,tn));

f := t->(bj*t^(1-k0/2))^2*t^(k0-2)/(k0-2)!;

gk :=int(f(t), t=0..(1-theta), numeric)/int(f(t), t=0..1, numeric);

kappa1 := gk*exp(-eta*theta)/(eta*theta);

kappa2:= (k0-1) * gk *exp(-2*eta*theta)/(2*eta*theta);

But now the new bounds produce worse results than the old bounds…

20 September, 2013 at 7:37 pm

Eytan Paldi: It seems that there is a problem with the implementation of

– it is the G-ratio function (without numerical integration!) as done by in the computation of but now with argument (instead of used in the computation of . )

In addition, I suggest taking for the minimum between the current and new types of bounds.

21 September, 2013 at 6:15 am

xfxie: @Eytan: Tried the following code (also updated in the link above for 1782):

eta := j^2 / (4 * (k0-1));

gd := BesselJ(k0-1,j)^2;

f_tn := t -> sqrt(t)*j;

f_g := t -> t * (BesselJ(k0-2,f_tn(t))^2 - BesselJ(k0-3,f_tn(t))*BesselJ(k0-1,f_tn(t))) / gd;

gk := f_g(1-theta);

kappa1 := gk * exp(-eta*theta)/(eta*theta);

kappa2 := (k0-1) * gk *exp(-2*eta*theta)/(2*eta*theta);

But still the new bounds return worse results for kappa1, kappa2, at least at k0=1783:

New : 1.15E-04, 1.37E-04

Old: 1.58E-07, 3.24E-10
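For cross-checking across systems, here is a hedged Python/scipy transcription of the corrected Maple snippet above (the function and parameter names are mine, and scipy's `jv` and `jn_zeros` stand in for Maple's `BesselJ` and the first Bessel zero j; this is an illustrative sketch, not project code):

```python
import math
from scipy.special import jv, jn_zeros  # Bessel J and its zeros

def kappa_bounds(k0, theta):
    """New-style kappa_1, kappa_2 bounds via the G-ratio, following the
    corrected Maple code above; j is the first positive zero of J_{k0-2}."""
    j = jn_zeros(k0 - 2, 1)[0]
    eta = j ** 2 / (4 * (k0 - 1))
    gd = jv(k0 - 1, j) ** 2

    def f_g(t):
        # The G-ratio, increasing from 0 at t=0 to 1 at t=1.
        tn = math.sqrt(t) * j
        return t * (jv(k0 - 2, tn) ** 2 - jv(k0 - 3, tn) * jv(k0 - 1, tn)) / gd

    gk = f_g(1 - theta)
    kappa1 = gk * math.exp(-eta * theta) / (eta * theta)
    kappa2 = (k0 - 1) * gk * math.exp(-2 * eta * theta) / (2 * eta * theta)
    return kappa1, kappa2
```

For the project's actual parameters one would plug in the (k0, theta) values from Table 4.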

21 September, 2013 at 10:03 am

Eytan Paldi: The performance of the new bounds for should be better than the current bounds provided that and

are sufficiently large (because the exponential decay is faster for the new bounds – but initially at small the performance of the new bounds seems to be poorer wrt the current bounds.)

Therefore, I suggest finding the “threshold” for above which the new bounds are better than the current ones. (For each fixed value of there should be a minimal value of such that, for any larger value of , the new bounds are better (at least for ) than the current ones.)

21 September, 2013 at 11:16 am

Eytan Paldi: It would be interesting to compare the above bounds to the direct approximation of via the numerical evaluation of the double integrals defining these parameters. (It seems that these numerical double integrals should be numerically stable, due to the monotonicity of the integrands.) This should give us some insight into the true order of magnitude of these parameters and the crudeness of their bounds.

We may even consider replacing the current bounds by the numerical double-integral estimates (provided that we have confidence in their precision!)

20 September, 2013 at 6:51 pm

Eytan Paldi: It seems that these two new bounds should be better than the current ones for sufficiently large values (because of the faster exponential decay) but not necessarily for smaller values. So that both bounds may coexist (by taking for each choice of parameters the minimal bound). Another advantage of the new bounds is that they depend mainly on the explicit function G(x) (already in use for the current expression) without any need to evaluate numerical integrals (for which the actual numerical precision is not known in general) to a certain precision needed for the criterion.

I think that with such a combination of the two types of bounds, there is a possibility of improving some values (e.g. getting ) by updating the current optimization software (with the caveat that the new cost function is not necessarily smooth).

But I agree that real progress will also depend on an improved bound for (I’m working on it now, using a probabilistic interpretation of the J-dim. integrals in its definition; I hope to finish in a day or two).

12 September, 2013 at 7:43 pm

Terence Tao: I’ve now written something for the final “improvements” section. I eventually decided not to be excessively technical with the specific improvements that we were speculating about in previous blog comments, because it is likely that they will only lead to small gains, and such speculations could soon become obsolete if the next set of people to work on this problem introduce some new ideas.

At this stage it looks like we are close to first-draft status for the paper, with most sections more or less written up. Sections 1-5 look to be in particularly good shape, but Sections 6-11 still need some proofreading, and the notation in the Deligne sections needs to be carefully harmonised with that in the rest of the paper. But I think we are nearing the end now…

13 September, 2013 at 3:15 am

PhM: Speaking of notation: regarding the Type I/II sums, can we replace the notation with something like ? It is usual to write functions of some modulus in the form (variable in first position and the modulus after, separated by a semicolon); the proposed notation would be more in line with this. Of course in the improved Type I estimate a factor of the modulus becomes the variable, but we can adapt the notation when the moment comes.

[Notation changed for . In the improved type I estimates I suppressed the modulus entirely for . In the regular type I/II estimates I also have the exponential sums and , if you have other suggestions for the notation here I would be happy to implement them. -T.]

13 September, 2013 at 7:56 am

Aubrey de Grey: Terry, the new Section 11 is fantastic. I feel that you have judged the level of technical detail exactly right, so as to allow both specialists and non-specialists to stand back from the current proof and form clear views as to what aspect they might be best equipped to attack. Thank you.

Reading it myself, I am drawn more powerfully than ever to the Type V issue as the one which appears the “softest” (given that sub-optimal Type I and III methods were used to attack Type V back in July), to the extent that I wonder if I am just misunderstanding something fundamental with regard to the potential susceptibility of Type V numerology to such refinements in Type I and III numerology. Might it be possible to add to paragraph (vii) of Section 11 an explicit statement (even without derivation, and ignoring delta) of the exact Type IV and V constraints on (varpi, sigma) arising from those analyses, together with what Levels (equivalently, theorems or lemmas in the paper) were used as starting-points to derive them? My understanding of https://terrytao.wordpress.com/2013/07/07/the-distribution-of-primes-in-doubly-densely-divisible-moduli/#comment-238574 is that the comfortably loose Type IV constraint 14*varpi < 3*sigma arises from Type I Level 6 methods (right?), but I'm pretty sure that nowhere has there been any corresponding statement of what Type V constraint is obtained using either Type I or Type III methods (of any Level) – the relevant analyses from July 9th and 10th seem to stop just short of deriving those constraints. Without such statements, paragraph (vii) comes over (to me, anyway) as a sufficiently conspicuous open door, especially given what paragraph (x) says about Type III, as to make it almost incongruous that the project declared victory without first checking the impact on Type V of using the best Type I and/or Type III technology.

13 September, 2013 at 6:28 pm

Terence Tao: Yes, I think that the introduction of a new Type V argument that can establish estimates beyond what one can already achieve by refactoring a Type V sum into an existing sum would be a great breakthrough, not just for the immediate aim of improving H, but because it would probably have to introduce a new technique which is not currently in play. But it looks quite challenging; the problem here is morally a slightly harder version of the problem of estimating the distribution of the fifth divisor function in arithmetic progressions, whereas the state of the art is non-trivial control on only the third divisor function (and our Type III arguments are based on this state of the art).

I added a remark about how the Type III arguments don’t seem to extend well to , let alone . I also briefly mentioned the partial results on Type IV sums, although I’m not sure whether it’s worth trying to work out specific numerics here (if there is enough of a breakthrough in the subject to get Type V estimates, it is likely that other estimates will be improved as well, so recording the current state of the art on Type IV seems a bit moot).

14 September, 2013 at 6:18 am

Wouter Castryck: I have added two or three lines in the narrow admissible tuples part of Section 11, where I made reference to the fact that both Engelsma and we found exactly 426 admissible 632-tuples of width 4680 (see Drew’s post of 5 September above). @Drew, I was wondering whether this 426 is modulo translation, or if it is also modulo horizontal flipping ()?

14 September, 2013 at 7:43 am

Andrew Sutherland: The count of 426 is modulo translation (otherwise there would be infinitely many), but not flipping. None of the gap sequences are palindromic, so there are actually 213 pairs of tuples that correspond to negations of each other.

Another way to say this is that we (and independently, Engelsma) have found exactly 426 admissible 632-tuples that start with 0 and end with 4680.

I put a lexicographically sorted file of all 426 tuples (one tuple per line) at the URL below:

http://math.mit.edu/~primegaps/tuples/admissible_tuples_632_4680.txt

This location should be permanent, so we can link to this in the paper.
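For concreteness, the negation pairing Andrew describes can be made mechanical; this is a hypothetical helper (not project code) acting on tuples normalized to start at 0:

```python
def reflect(tup):
    """Negate and translate an admissible tuple normalized to start at 0,
    so the result again starts at 0 with the same diameter: equivalently,
    the gap sequence is reversed."""
    return tuple(sorted(tup[-1] - t for t in tup))

t = (0, 2, 6, 8, 12)       # toy tuple, not one of the 632-tuples
r = reflect(t)
print(r)                   # (0, 4, 6, 10, 12): distinct from t, so t pairs with r
print(reflect(r) == t)     # True: reflection is an involution
```

A tuple is its own reflection exactly when its gap sequence is palindromic, which is why 426 non-palindromic tuples split into 213 pairs.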

16 September, 2013 at 1:40 pm

Terence Tao: Thanks for the link; I put a mention of it in Theorem 3.1. We have a compact description of one of the 632-tuples; is it possible to locate a 1783-tuple with a similarly compact description? Of course, being three times as long, it may not be so particularly edifying, and perhaps it is better to just provide a link if it is messy (particularly since the Deligne-free result is of secondary importance), but on the off chance that there is a relatively nice description, it could be worth putting one in. (Also, how many tuples do we have of the minimal diameter in this case?)

16 September, 2013 at 6:06 pm

Andrew Sutherland: Here is a shortish description of an admissible 1783-tuple with diameter 14950. Sieve the interval [1714,16664] by 1 mod 2 and then 0 mod p for odd primes p up to 211, then sieve the following classes:

188(223) 0(227) 222(229) 38(233) 146(239) 33(241) 0(251) 229(257) 21(263) 78(269) 140(271) 104(277) 106(281) 53(283) 141(293) 216(307) 12(311) 17(313) 252(317) 191(337) 269(347) 32(353) 142(359) 42(379) 345(383) 165(389) 221(409).

In contrast to the example I gave for 632, many of these (9 classes) don’t correspond to greedy choices.

I have roughly 25,000 distinct admissible 1783-tuples of this length, but I really haven’t made an extensive effort to find them the way I did with k0=632 — I expect there are many more.
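The sieve description above is explicit enough to replay directly; the following Python sketch (my own transcription, with the residue classes copied from the comment, so any transcription slip would show up as a wrong count) reconstructs the tuple:

```python
def primes_upto(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if flags[p]:
            flags[p * p:: p] = [False] * len(flags[p * p:: p])
    return [p for p in range(2, n + 1) if flags[p]]

# Residue classes "r(p)" from the description: remove n ≡ r (mod p).
classes = {223: 188, 227: 0, 229: 222, 233: 38, 239: 146, 241: 33, 251: 0,
           257: 229, 263: 21, 269: 78, 271: 140, 277: 104, 281: 106, 283: 53,
           293: 141, 307: 216, 311: 12, 313: 17, 317: 252, 337: 191, 347: 269,
           353: 32, 359: 142, 379: 42, 383: 345, 389: 165, 409: 221}
small = [p for p in primes_upto(211) if p > 2]      # odd primes up to 211

tup = [n for n in range(1714, 16665)
       if n % 2 == 0                                # sieve 1 mod 2
       and all(n % p != 0 for p in small)           # sieve 0 mod odd p <= 211
       and all(n % p != r for p, r in classes.items())]
print(len(tup), tup[-1] - tup[0])
```

If the residue data is exact, the survivors should form the stated 1783-tuple of diameter 14950.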

16 September, 2013 at 6:28 pm

Terence Tao: Thanks! I added this description of the example to the paper.

14 September, 2013 at 6:43 am

Emmanuel Kowalski: I haven’t yet started reviewing the whole paper but I intend to do this carefully once it has reached a relatively stable state.

13 September, 2013 at 1:29 pm

pigh3: 2nd paragraph before Section 1.2 (p. 5), “Weyl differencing” instead of “Weil …”?

3 lines below that, “establishing a” instead of “a establishing”?

[Corrected, thanks – T.]

13 September, 2013 at 1:53 pm

andrescaicedo: Is the z at the beginning of Section 4 a typo?

[Fixed, thanks – T.]

13 September, 2013 at 5:14 pm

pigh3: Table 3, 6/700 should be 7/600.

[Corrected, thanks – T.]

13 September, 2013 at 6:26 pm

pigh3: Table 1: should be for the product of squarefree primes in I.

[Corrected, thanks – T.]

14 September, 2013 at 7:13 am

andrescaicedo: In Section 1.1 (Overview), at the very beginning, the limit is really a lim inf. Also, the “shows” immediately after should probably be “showed”. And [27] should be mentioned explicitly, maybe right before the displayed equation.

[Corrected, thanks – T.]

14 September, 2013 at 10:23 am

Aubrey de Grey: There may be a case for closing (or highlighting, if it can’t be easily closed) one further ostensible open door, relating to Remark 7.10. That Remark arises from Terry’s post https://terrytao.wordpress.com/2013/09/02/polymath8-writing-the-paper-ii/#comment-243825 in which he established, after more detailed inspection, that the initially promising-looking use of Type II methods to establish a Type I constraint that works for some positive varpi unfortunately requires either a value of sigma too low to avoid the Type III case or else a Type II “level” (namely 4) that makes the overall complexity of the proof comparable to that of the Deligne-free Type I argument. However, what is not clear at first sight is whether elevation of the Type II arguments all the way through the unexplored levels 4, 5 and 6 has any chance of translating into Type I numerology that improves on the current best Type I constraint. It might be worth mentioning something about this option: if it can be easily disposed of by means that I have overlooked then maybe a sentence in Remark 7.10 would suffice, but otherwise it arguably merits a new paragraph in Section 11.

14 September, 2013 at 12:31 pm

Aubrey de Grey: Quick addendum: I guess that, for completeness, any such comment might best also note (if only to dismiss) the possibility that the repurposing of Type II methods to the Type I setting could be performed by more elaborate but numerically superior (in terms of the varpi thereby obtained) means than the replacement specified at the beginning of Remark 7.10.

15 September, 2013 at 12:28 pm

pigh3: Line 2 of Section 4.5 should refer to Section 4.4, not 4.5.

[Corrected, thanks – T.]

15 September, 2013 at 6:05 pm

pigh3: 1. In the 2nd display above (5.8), under the summation symbol, it should be .

2. In the next display, in the middle of the convolution, should it be ?

3. 6 lines above Section 6.1, should it be ?

[Corrected, thanks – T.]

16 September, 2013 at 1:20 pm

RO'D: Katz’s name appears weirdly in [44] in the bibliography.

[Corrected, thanks – T.]

16 September, 2013 at 6:38 pm

pigh3: 1 line above (and also 5 lines below) (6.8), is it ?

[Fixed, thanks – T.]

17 September, 2013 at 5:53 pm

Eytan Paldi: The exact expression for has a probabilistic interpretation, from which I found a new upper bound for . I am now checking the implication of this bound on . I’ll send the derivation of the new bounds for tomorrow.

17 September, 2013 at 6:14 pm

pigh3: (6.15) through (6.16), all the should perhaps be ? (not the final )

Right underneath, the interval should be (−q/2, q/2].

[Corrected, thanks – T.]

18 September, 2013 at 11:34 am

Terence Tao: While working on the Deligne section of the paper, I discovered that it is possible to avoid explicitly using the deep theorem of Deligne, Laumon, and Katz which roughly speaking asserts that the Fourier transform of an admissible (Fourier) sheaf is again an admissible sheaf, with bounds on the conductor. To recall, the only place where we use this fact (other than in taking as a black box the existence of hyper-Kloosterman sheaves) is to establish square root cancellation for the sums and where

.

To do this we observed that the Fourier transform of was basically pulled back by a quadratic function, and then used Deligne-Laumon-Katz to write as the trace weight for an admissible sheaf with bounded conductor.

But if we take the Fourier transform first, we observe that the desired square root cancellation is equivalent to square root cancellation for and . So actually we don’t need to be a good trace weight, it’s enough for to be a good trace weight. Basically, we only need to perform Fourier transforms on the analytic side, not on the sheaf-theoretic side.

The way the paper is currently structured, though, it’s a bit tricky to take advantage of this simplification (and the simplification is modest from a length perspective, one can basically delete Section 8.6 and rewrite some subsequent theorems and proofs a little bit). Perhaps it is better to leave the structure of the paper as is but add a remark that one can avoid Deligne-Laumon-Katz if one really wanted to (but one still needs to construct the hyper-Kloosterman sheaf Kl_3, so one can’t escape DLK-type machinery completely).
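As a purely numerical illustration of the square root cancellation under discussion (a toy check on a quadratic phase, not the actual sums in the paper): the complete sum of a quadratic phase over Z/pZ is a classical Gauss sum, whose modulus is exactly sqrt(p) for an odd prime p. A minimal Python sketch:

```python
import cmath
import math

def quad_phase_sum(p):
    """Complete sum of the quadratic phase e_p(x^2) over Z/pZ (a Gauss sum).
    For odd prime p its modulus is exactly sqrt(p): square-root cancellation."""
    return sum(cmath.exp(2j * math.pi * (x * x % p) / p) for x in range(p))

S = quad_phase_sum(101)
print(abs(S))   # ≈ 10.0499 = sqrt(101)
```

Here p = 101 is arbitrary; the same modulus sqrt(p) holds for any odd prime.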

18 September, 2013 at 12:59 pm

Philippe Michel

That is absolutely true! In fact I had written a first, more elementary version of the proof along the lines you suggest, using Parseval to make the computation on the dual Fourier side (one can probably see this by digging through the older versions of deligne.tex in the Dropbox folder; I will try to exhume it tomorrow). However, over the weekend I chose to go for the stronger version (that the function itself is a trace function, rather than only its Fourier transform) after Friday's discussions with Paul: indeed, one could try to look for further improvements by exploiting averaging over some other parameters ($l$ for instance, which for some configurations could be long enough with respect to the factorization of $u$ to perform a Polya-Vinogradov completion on that variable, in place of the van der Corput method on the $d$ variable).

This also explains why the new proof is a bit less soft, as it makes explicit the dependence on the additional parameters of $f(x,y)$ ($a,b,c,d,e$), which are potentially going to vary. It is not clear whether this will work, or whether it would really be worth the effort, but at least part of the necessary material will be in place if needed (and, more importantly, it is geometrically fun!). I also think it is good to keep the stronger version because there might be other manipulations which do not rely on the correlation of two functions (amenable to Fourier analysis or iterations thereof) but on multiple, say quadruple-type, correlations, for which I do not see how Parseval could be applied (see the second Fouvry-Iwaniec paper in the list of references for an example).

And, as you say, we are already using serious machinery anyway. Another point is that the van der Corput method has now been fully axiomatized into a general statement (Prop 8.19, in which the initial data are the trace functions and their sheaves) which could be used by others for other problems.

18 September, 2013 at 2:29 pm

Terence Tao

Yes, I agree that the current setup (based on the sheaf interpretation of the function itself rather than of its Fourier transform) is more conceptually natural and better suited to future improvements; I've added a remark about the alternative route at Remark 8.19 and made some other small tweaks (e.g. taking advantage of symmetries to reduce the amount of calculation needed to compute the Fourier transform of K_f), but kept the overall structure unchanged.

20 September, 2013 at 11:46 am

Emmanuel Kowalski

Another reason to keep the Fourier transform is that, in fact, the existence, admissibility, and conductor bound for the Fourier transform at the level we need can be established using arguments which are really quite a bit easier than the Riemann Hypothesis, as done in Section 8 of our preprint on conductor bounds:

http://www.math.ethz.ch/~kowalski/transforms.pdf

I think that the only thing needed in Polymath8 that this write-up does not explicitly state is the stability of (geometric) irreducibility, but this can be checked using the Plancherel formula and the "diophantine criterion for irreducibility" of Katz, which we already state anyway (Lemma 5.10 in the paper; this does depend on the Riemann Hypothesis). I'll add this as an extra remark.

This does not give the full local analysis of singularities of the Fourier transform, but for most analytic applications, it will certainly be sufficient.

20 September, 2013 at 5:49 pm

pigh3

Small typo: in the 6th display of Section 9, "K;" should be "Kl".

[Corrected, thanks – T.]

22 September, 2013 at 5:32 pm

Polymath8: Writing the paper, III | What's new

[…] main purpose of this post is to roll over the discussion from the previous Polymath8 thread, which has become rather full with comments. We are still writing the paper, but it appears to […]

22 September, 2013 at 5:34 pm

Terence Tao

I'm doing the usual rollover to refresh the thread; any comments on the polymath project that are not direct responses to comments already here should be made at the new thread.

28 September, 2013 at 4:56 am

Check in with Yitang Zhang « Pink Iguana

[…] Polymath8: Writing the paper, II, here. III, here. The paper is quite large now (164 pages!) but it is fortunately rather modular, and […]
