Here’s a cute identity I discovered by accident recently. Observe that
$$\frac{d^2}{dx^2} (1+x^2)^{1/2} = \frac{1}{(1+x^2)^{3/2}}, \qquad \frac{d^4}{dx^4} (1+x^2)^{3/2} = \frac{9}{(1+x^2)^{5/2}},$$
and so one can conjecture that one has
$$\frac{d^{n+1}}{dx^{n+1}} (1+x^2)^{n/2} = 0$$
when $n$ is even, and
$$\frac{d^{n+1}}{dx^{n+1}} (1+x^2)^{n/2} = \frac{(n!!)^2}{(1+x^2)^{(n+2)/2}}$$
when $n$ is odd. This is obvious in the even case since $(1+x^2)^{n/2}$ is then a polynomial of degree $n$, but I struggled for a while with the odd case before finding a slick three-line proof. (I was first trying to prove the weaker statement that $\frac{d^{n+1}}{dx^{n+1}} (1+x^2)^{n/2}$ was non-negative, but for some strange reason I was only able to establish this by working out the derivative exactly, rather than by using more analytic methods, such as convexity arguments.) I thought other readers might like the challenge (and also I’d like to see some other proofs), so rather than post my own proof immediately, I’ll see if anyone would like to supply their own proofs or thoughts in the comments. Also I am curious to know if this identity is connected to any other existing piece of mathematics.
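Reading the conjecture as $\frac{d^{n+1}}{dx^{n+1}}(1+x^2)^{n/2} = (n!!)^2/(1+x^2)^{(n+2)/2}$ for odd $n$ (and $0$ for even $n$), it can be checked exactly in a few lines of pure Python, representing each intermediate function as $P(x)(1+x^2)^{m/2}$ with $P$ an integer-coefficient polynomial (the helper names here are mine):

```python
def diff_once(P, m):
    """Differentiate P(x)*(1+x^2)^(m/2), with P given by its integer
    coefficient list (constant term first).  Using
    d/dx [P (1+x^2)^{m/2}] = [P'(1+x^2) + m x P] (1+x^2)^{(m-2)/2},
    the derivative is again of the same form."""
    Q = [0] * (len(P) + 1)
    for i, c in enumerate(P):
        if i >= 1:            # P'(x) * (1+x^2) contribution
            Q[i - 1] += i * c
            Q[i + 1] += i * c
        Q[i + 1] += m * c     # m * x * P(x) contribution
    while len(Q) > 1 and Q[-1] == 0:
        Q.pop()
    return Q, m - 2

def nth_plus_one_derivative(n):
    """(n+1)-fold derivative of (1+x^2)^(n/2), as a pair (P, m)."""
    P, m = [1], n
    for _ in range(n + 1):
        P, m = diff_once(P, m)
    return P, m

def double_factorial(n):
    out = 1
    while n > 1:
        out, n = out * n, n - 2
    return out

for n in range(1, 10):
    P, m = nth_plus_one_derivative(n)
    if n % 2 == 0:
        assert P == [0], n  # (1+x^2)^{n/2} is a polynomial of degree n
    else:
        assert P == [double_factorial(n) ** 2] and m == -(n + 2), n
print("identity verified for n = 1..9")
```

Since the arithmetic is exact integer arithmetic, this confirms the identity rigorously for each small $n$ tested, though of course not in general.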
The lonely runner conjecture is the following open problem:
Conjecture 1 Suppose one has $n+1$ runners on the unit circle $\mathbb{R}/\mathbb{Z}$, all starting at the origin and moving at different speeds. Then for each runner, there is at least one time $t$ for which that runner is “lonely” in the sense that it is separated by a distance at least $\frac{1}{n+1}$ from all other runners.
One can normalise the speed of the lonely runner to be zero, at which point the conjecture can be reformulated (after reducing the number of runners by one) as follows:
Conjecture 2 Let $v_1, \dots, v_n$ be non-zero real numbers for some $n \geq 1$. Then there exists a real number $t$ such that the numbers $tv_1, \dots, tv_n$ are all a distance at least $\frac{1}{n+1}$ from the integers, thus $\|tv_i\|_{\mathbb{R}/\mathbb{Z}} \geq \frac{1}{n+1}$ for all $i = 1,\dots,n$, where $\|x\|_{\mathbb{R}/\mathbb{Z}}$ denotes the distance of $x$ to the nearest integer.
This conjecture has been proven for $n \leq 6$, but remains open for larger $n$. The bound $\frac{1}{n+1}$ is optimal, as can be seen by looking at the case $v_i = i$ and applying the Dirichlet approximation theorem. Note that for each non-zero $v$, the set $\{ t : \|tv\|_{\mathbb{R}/\mathbb{Z}} < \frac{1}{2n} \}$ has (Banach) density $\frac{1}{n}$, and from this and the union bound we can easily find $t$ for which
$$\|tv_i\|_{\mathbb{R}/\mathbb{Z}} \geq \frac{1}{2n} - \varepsilon \quad \text{for all } i = 1,\dots,n$$
for any $\varepsilon > 0$, but it has proven to be quite challenging to remove the factor of $2$ to increase $\frac{1}{2n}$ to $\frac{1}{n+1}$. (As far as I know, even improving $\frac{1}{2n}$ to $\frac{1+c}{2n}$ for some absolute constant $c > 0$ and sufficiently large $n$ remains open.)
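Since each $t \mapsto \|t v_i\|_{\mathbb{R}/\mathbb{Z}}$ is piecewise linear, the supremum over $t$ can be computed exactly by testing finitely many rational times: the maximum of the minimum occurs either at a kink of some $\|t v_i\|$ or at a crossing of two of them. The following sketch (helper names are mine) confirms that for the speeds $1, 2, \dots, n$ the best attainable separation is exactly $\frac{1}{n+1}$, so the bound in the conjecture cannot be improved:

```python
from fractions import Fraction
from itertools import combinations

def dist_to_int(x):
    """Distance from the rational x to the nearest integer."""
    f = x - int(x)
    if f < 0:
        f += 1
    return min(f, 1 - f)

def max_loneliness(speeds):
    """Exact value of sup_t min_i ||t v_i|| for nonzero integer speeds.
    The function t -> min_i ||t v_i|| is 1-periodic and piecewise linear,
    so its maximum occurs at a kink of some ||t v_i|| (t = k/(2 v_i)) or
    at a crossing ||t v_i|| = ||t v_j||, where t(v_i - v_j) or t(v_i + v_j)
    is an integer; it suffices to test these rational times."""
    candidates = set()
    for v in speeds:
        d = 2 * abs(v)
        candidates.update(Fraction(k, d) for k in range(d))
    for v, w in combinations(speeds, 2):
        for d in (abs(v - w), abs(v + w)):
            if d:
                candidates.update(Fraction(k, d) for k in range(d))
    return max(min(dist_to_int(t * v) for v in speeds) for t in candidates)

# Speeds 1, ..., n show that the bound 1/(n+1) cannot be improved:
for n in (2, 3, 4):
    assert max_loneliness(list(range(1, n + 1))) == Fraction(1, n + 1)
print("loneliness exactly 1/(n+1) for speeds 1..n, n = 2, 3, 4")
```

The exact rational arithmetic makes this a rigorous (if brute-force) verification for these particular speed tuples.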
The speeds $v_1,\dots,v_n$ in the above conjecture are arbitrary non-zero reals, but it has been known for some time that one can reduce without loss of generality to the case when the $v_i$ are rationals, or equivalently (by scaling) to the case where they are integers; see e.g. Section 4 of this paper of Bohman, Holzman, and Kleitman.
In this post I would like to remark on a slight refinement of this reduction, in which the speeds are integers of bounded size, where the bound depends on $n$. More precisely:
Proposition 3 In order to prove the lonely runner conjecture, it suffices to do so under the additional assumption that the $v_i$ are integers of size at most $n^{Cn^2}$, where $C$ is an (explicitly computable) absolute constant. (More precisely: if this restricted version of the lonely runner conjecture is true for all $n \leq n_0$, then the original version of the conjecture is also true for all $n \leq n_0$.)
In principle, this proposition allows one to verify the lonely runner conjecture for a given $n$ in finite time; however the number of cases to check with this proposition grows faster than exponentially in $n$, and so this is unfortunately not a feasible approach to verifying the lonely runner conjecture for more values of $n$ than currently known.
One of the key tools needed to prove this proposition is the following additive combinatorics result. Recall that a generalised arithmetic progression (or GAP) in the reals is a set of the form
$$P = \{ n_1 v_1 + \dots + n_d v_d : n_i \in \mathbb{Z}, |n_i| \leq N_i \text{ for } i = 1,\dots,d \}$$
for some $v_1,\dots,v_d \in \mathbb{R}$ and $N_1,\dots,N_d > 0$; the quantity $d$ is called the rank of the progression. If $t \geq 1$, the progression $P$ is said to be $t$-proper if the sums $n_1 v_1 + \dots + n_d v_d$ with $|n_i| \leq t N_i$ for $i = 1,\dots,d$ are all distinct. We have
Lemma 4 (Progressions lie inside proper progressions) Let $P$ be a GAP of rank $d$ in the reals, and let $t \geq 1$. Then $P$ is contained in a $t$-proper GAP $Q$ of rank at most $d$, with the dimensions of $Q$ bounded in terms of $t$, $d$, and the dimensions of $P$.
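As a concrete illustration of these definitions (with made-up generators), one can enumerate a small GAP and test $s$-properness directly:

```python
from itertools import product

def gap_sums(gens, dims, s=1):
    """All sums n_1*v_1 + ... + n_d*v_d with |n_i| <= s*N_i, for the GAP
    with generators gens = (v_1, ..., v_d) and dimensions dims = (N_1, ..., N_d)."""
    ranges = [range(-s * N, s * N + 1) for N in dims]
    return [sum(n * v for n, v in zip(ns, gens)) for ns in product(*ranges)]

def is_s_proper(gens, dims, s=1):
    """A GAP is s-proper if the sums with |n_i| <= s*N_i are pairwise distinct."""
    sums = gap_sums(gens, dims, s)
    return len(sums) == len(set(sums))

print(is_s_proper((1, 10), (2, 2)))       # True: n1 + 10*n2 with |n1| <= 2
print(is_s_proper((1, 10), (2, 2), s=3))  # False: 6 + 10*0 = -4 + 10*1
```

Note how properness degrades under dilation: the rank-2 progression generated by $1$ and $10$ with dimensions $(2,2)$ is proper, but stops being $3$-proper once the dilated ranges overlap.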
Now let $n$ be fixed, and assume inductively that the lonely runner conjecture has been proven for all smaller values of $n$, as well as for the current value of $n$ in the case that $v_1,\dots,v_n$ are integers of size at most $n^{Cn^2}$ for some sufficiently large $C$. We will show that the lonely runner conjecture holds in general for this choice of $n$.
Let $v_1,\dots,v_n$ be non-zero real numbers. Let $C_0$ be a large absolute constant to be chosen later. From the above lemma applied to the GAP generated by $v_1,\dots,v_n$, one can find a suitably proper GAP $Q$ of rank at most $n$ containing $v_1,\dots,v_n$ such that
in particular if is large enough depending on .
for some , , and . We thus have for , where is the linear map and are non-zero and lie in the box .
We now need an elementary lemma that allows us to create a “collision” between two of the $v_i$ via a linear projection, without making any of the $v_i$ collide with the origin:
Lemma 5 Let be non-zero vectors that are not all collinear with the origin. Then, after replacing one or more of the with their negatives if necessary, there exists a pair such that , and such that none of the is a scalar multiple of .
Proof: We may assume that , since the case is vacuous. Applying a generic linear projection to (which does not affect collinearity, or the property that a given is a scalar multiple of ), we may then reduce to the case .
By a rotation and relabeling, we may assume that lies on the negative -axis; by flipping signs as necessary we may then assume that all of the lie in the closed right half-plane. As the are not all collinear with the origin, one of the lies off of the -axis; by relabeling, we may assume that lies off of the axis and makes a minimal angle with the -axis. Then the angle of with the -axis is non-zero but smaller than any non-zero angle that any of the make with this axis, and so none of the are a scalar multiple of , and the claim follows.
We now return to the proof of the proposition. If the $v_i$ are all collinear with the origin, then they lie in a one-dimensional arithmetic progression, and then by rescaling we may take the $v_i$ to be integers of bounded magnitude, at which point we are done by hypothesis. Thus, we may assume that the $v_i$ are not all collinear with the origin, and so by the above lemma and relabeling we may assume that the vector produced by Lemma 5 is non-zero, and that none of the $v_i$ are scalar multiples of it.
We now define a variant of by the map
where the are real numbers that are linearly independent over $\mathbb{Q}$, whose precise value will not be of importance in our argument. This is a linear map with the property that , so that consists of at most distinct real numbers, which are non-zero since none of the are scalar multiples of , and the are linearly independent over $\mathbb{Q}$. As we are assuming inductively that the lonely runner conjecture holds for , we conclude (after deleting duplicates) that there exists at least one real number $t$ such that
We would like to “approximate” by to then conclude that there is at least one real number such that
It turns out that we can do this by a Fourier-analytic argument taking advantage of the -proper nature of . Firstly, we see from the Dirichlet approximation theorem that one has
for a set of reals of (Banach) density . Thus, by the triangle inequality, we have
for a set of reals of density .
Applying a smooth Fourier multiplier of Littlewood-Paley type, one can find a trigonometric polynomial
which takes values in $[0,1]$, equals $1$ when its argument is close to the origin, and is small otherwise. We then have
where denotes the mean value of a quasiperiodic function on the reals . We expand the left-hand side out as
From the genericity of , we see that the constraint
By Fourier expansion and writing , we may write (4) as
The support of the implies that . Because of the -properness of , we see (for large enough) that the equation
and conversely that (7) implies that (6) holds for some with . From (3) we thus have
In particular, there exists a such that
Since is bounded in magnitude by , and is bounded by , we thus have
for each , which by the size properties of implies that for all , giving the lonely runner conjecture for .
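As an aside, the trigonometric polynomial selector used in the argument above can be built in several ways; here is a sketch of one standard construction (Fejér smoothing of an interval indicator, not necessarily the exact multiplier intended above), which produces a trigonometric polynomial with values in $[0,1]$ that is close to $1$ near the origin mod $1$ and small elsewhere:

```python
import cmath
import math

def fejer_smoothed_indicator(a, N):
    """Fourier coefficients of 1_{[-a,a]} (viewed 1-periodically), damped
    by the Fejer weights (1 - |j|/N).  The resulting trigonometric
    polynomial equals the convolution of the indicator with the
    non-negative Fejer kernel, hence takes values in [0, 1]."""
    coeffs = {}
    for j in range(-N + 1, N):
        c = 2 * a if j == 0 else math.sin(2 * math.pi * j * a) / (math.pi * j)
        coeffs[j] = (1 - abs(j) / N) * c
    return coeffs

def evaluate(coeffs, t):
    """Evaluate sum_j c_j e(jt) at the point t."""
    return sum(c * cmath.exp(2j * math.pi * k * t)
               for k, c in coeffs.items()).real

phi = fejer_smoothed_indicator(0.25, 300)
print(round(evaluate(phi, 0.0), 3))  # close to 1 (well inside the interval)
print(round(evaluate(phi, 0.5), 3))  # close to 0 (far from the interval)
```

The non-negativity of the Fejér kernel is what keeps the values in $[0,1]$; sharper selectors (e.g. Selberg polynomials) trade this simplicity for better localisation.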
The (presumably) final article arising from the Polymath8 project has now been uploaded to the arXiv as “The “bounded gaps between primes” Polymath project – a retrospective“. This article, submitted to the Newsletter of the European Mathematical Society, consists of personal contributions from ten different participants (at varying stages of career, and intensity of participation) on their own experiences with the project, and some thoughts as to what lessons to draw for any subsequent Polymath projects. (At present, I do not know of any such projects being proposed, but from recent experience I would imagine that some opportunity suitable for a Polymath approach will present itself at some point in the near future.)
This post will also serve as the latest (and probably last) of the Polymath8 threads (rolling over this previous post), to wrap up any remaining discussion about any aspect of this project.
I’ve just uploaded to the arXiv the D.H.J. Polymath paper “Variants of the Selberg sieve, and bounded intervals containing many primes“, which is the second paper to be produced from the Polymath8 project (the first one being discussed here). We’ll refer to this latter paper here as the Polymath8b paper, and the former as the Polymath8a paper. As with Polymath8a, the Polymath8b paper is concerned with the smallest asymptotic prime gap
$$H_1 := \liminf_{n \to \infty} (p_{n+1} - p_n),$$
where $p_n$ denotes the $n$th prime, as well as the more general quantities
$$H_m := \liminf_{n \to \infty} (p_{n+m} - p_n).$$
In the breakthrough paper of Goldston, Pintz, and Yildirim, the bound $H_1 \leq 16$ was obtained under the strong hypothesis of the Elliott-Halberstam conjecture. An unconditional bound on $H_1$, however, remained elusive until the celebrated work of Zhang last year, who showed that
$$H_1 \leq 70{,}000{,}000.$$
The Polymath8a paper then improved this to $H_1 \leq 4680$. After that, Maynard introduced a new multidimensional Selberg sieve argument that gave the substantial improvement
$$H_1 \leq 600$$
unconditionally, and $H_1 \leq 12$ on the Elliott-Halberstam conjecture; furthermore, bounds on $H_m$ for higher $m$ were obtained for the first time, and specifically that $H_m \ll m^3 e^{4m}$ for all $m$, with the improvements $H_2 \leq 600$ and $H_m \ll m^3 e^{2m}$ on the Elliott-Halberstam conjecture. (I had independently discovered the multidimensional sieve idea, although I did not obtain Maynard’s specific numerical results, and my asymptotic bounds were a bit weaker.)
In Polymath8b, we obtain some further improvements. Unconditionally, we have $H_1 \leq 246$ and $H_m \ll m e^{(4 - \frac{28}{157})m}$, together with some explicit bounds on $H_2, H_3, H_4, H_5$; on the Elliott-Halberstam conjecture we have further numerical improvements to these bounds; and assuming the generalised Elliott-Halberstam conjecture we have the bound $H_1 \leq 6$, which is best possible from sieve-theoretic methods thanks to the parity problem obstruction.
There were a variety of methods used to establish these results. Maynard’s paper obtained a criterion for bounding $H_m$ which reduced to finding a good solution to a certain multidimensional variational problem. When the dimension parameter $k$ was relatively small, we were able to obtain good numerical solutions either by continuing the method of Maynard (using a basis of symmetric polynomials), or by using a Krylov iteration scheme. For large $k$, we refined the asymptotics and obtained near-optimal solutions of the variational problem. For the $H_1$ bounds, we extended the reach of the multidimensional Selberg sieve (particularly under the assumption of the generalised Elliott-Halberstam conjecture) by allowing the function $F$ in the multidimensional variational problem to extend to a larger region of space than was previously admissible, albeit with some tricky new constraints on $F$ (and penalties in the variational problem). This required some unusual sieve-theoretic manipulations, notably an “epsilon trick”, ultimately relying on the elementary inequality $(a+b)^2 \geq a^2 + 2ab$, that allowed one to get non-trivial lower bounds for certain sums even when no non-trivial direct estimates were available; and a way to estimate certain divisor sums in ranges beyond those usually admissible, by using the fundamental theorem of arithmetic to factorise (after restricting to the almost prime case). I hope that these sieve-theoretic tricks will be useful in future work in the subject.
With this paper, the Polymath8 project is almost complete; there is still a little bit of scope to push our methods further and get some modest improvement, for instance to the $H_1 \leq 246$ bound, but this would require a substantial amount of effort, and it is probably best to instead wait for some new breakthrough in the subject to come along. One final task we are performing is to write up a retrospective article on both the 8a and 8b experiences, an incomplete writeup of which can be found here. If anyone wishes to contribute some commentary on these projects (whether you were an active contributor, an occasional contributor, or a silent “lurker” in the online discussion), please feel free to do so in the comments to this post.
This should be the final thread (for now, at least) for the Polymath8 project (encompassing the original Polymath8a paper, the nearly finished Polymath8b paper, and the retrospective paper), superseding the previous Polymath8b thread (which was quite full) and the Polymath8a/retrospective thread (which was more or less inactive).
On Polymath8a: I talked briefly with Andrew Granville, who is handling the paper for Algebra & Number Theory, and he said that a referee report should be coming in soon. Apparently the length of the paper is a bit of an issue (not surprising, as it is 163 pages long) and there will be some suggestions to trim the size down a bit.
In view of the length issue at A&NT, I’m now leaning towards taking up Ken Ono’s offer to submit the Polymath8b paper to the new open access journal “Research in the Mathematical Sciences“. I think the paper is almost ready to be submitted (after the current participants sign off on it, of course), but it might be worth waiting on the Polymath8a referee report in case the changes suggested impact the 8b paper.
Finally, it is perhaps time to start working on the retrospective article, and collect some impressions from participants. I wrote up a quick draft of my own experiences, and also pasted in Pace Nielsen’s thoughts, as well as a contribution from an undergraduate following the project (Andrew Gibson). Hopefully we can collect a few more (either through comments on this blog, through email, or through Dropbox), and then start working on editing them together and finding some suitable concluding points to make about the Polymath8 project, and what lessons we can take from it for future projects of this type.
This is the eleventh thread for the Polymath8b project to obtain new bounds for the quantity
$$H_m := \liminf_{n \to \infty} (p_{n+m} - p_n),$$
where $p_n$ denotes the $n$th prime; the previous thread may be found here.
The main focus is now on writing up the results, with a draft paper close to completion here (with the directory of source files here). Most of the sections are now written up more or less completely, with the exception of the appendix on narrow admissible tuples, which was awaiting the bounds on such tuples to stabilise. There is now also an acknowledgments section (linking to the corresponding page on the wiki, which participants should check to see if their affiliations etc. are posted correctly), and in the final remarks section there is now also some discussion about potential improvements to the bounds. I’ve also added some mention of a recent paper of Banks, Freiberg and Maynard which makes use of some of our results (in particular, that ). On the other hand, the portions of the writeup relating to potential improvements to the MPZ estimates have been commented out, as it appears that one cannot easily obtain the exponential sum estimates required to make those go through. (Perhaps, if there are significant new developments, one could incorporate them into a putative Polymath8c project, although at present I think there’s not much urgency to start over once again.)
Regarding the numerics in Section 7 of the paper, one thing which is missing at present is some links to code in case future readers wish to verify the results; alternatively one could include such code and data into the arXiv submission.
It’s about time to discuss possible journals to submit the paper to. Ken Ono has invited us to submit to his new journal, “Research in the Mathematical Sciences“. Another option would be to submit to the same journal “Algebra & Number Theory” that is currently handling our Polymath8a paper (no news on the submission there, but it is a very long paper), although I think the papers are independent enough that it is not absolutely necessary to place them in the same journal. A third natural choice is “Mathematics of Computation“, though I should note that when the Polymath4 paper was submitted there, the editors required us to use our real names instead of the D.H.J. Polymath pseudonym as it would have messed up their metadata system otherwise. (But I can check with the editor there before submitting to see if there is some workaround now, perhaps their policies have changed.) At present I have no strong preferences regarding journal selection, and would welcome further thoughts and proposals. (It is perhaps best to avoid the journals that I am editor or associate editor of, namely Amer. J. Math, Forum of Mathematics, Analysis & PDE, and Dynamics and PDE, due to conflict of interest (and in the latter two cases, specialisation to a different area of mathematics)).
This is the tenth thread for the Polymath8b project to obtain new bounds for the quantity
$$H_m := \liminf_{n \to \infty} (p_{n+m} - p_n);$$
the previous thread may be found here.
Numerical progress on these bounds has slowed in recent months, although we have very recently lowered the unconditional bound on $H_1$ from 252 to 246 (see the wiki page for more detailed results). While there may still be scope for further improvement (particularly with respect to bounds for $H_m$ with $m \geq 2$, which we have not focused on for a while), it looks like we have reached the point of diminishing returns, and it is time to turn to the task of writing up the results.
A draft version of the paper so far may be found here (with the directory of source files here). Currently, the introduction and the sieve-theoretic portions of the paper are written up, although the sieve-theoretic arguments are surprisingly lengthy, and some simplification (or other reorganisation) may well be possible. Other portions of the paper that have not yet been written up include the asymptotic analysis of $M_k$ for large $k$ (leading in particular to results for $m = 2, 3, 4, 5$), and a description of the quadratic programming that is used to estimate $M_k$ for small and medium $k$. Also we will eventually need an appendix to summarise the material from Polymath8a that we would use to generate various narrow admissible tuples.
One issue here is that our current unconditional bounds on $H_m$ for $m = 2, 3, 4, 5$ rely on a distributional estimate on the primes which we believed to be true in Polymath8a, but never actually worked out (among other things, there were some delicate algebraic geometry issues concerning the vanishing of certain cohomology groups that were never resolved). This issue does not affect the $m=1$ calculations, which only use the Bombieri-Vinogradov theorem or else assume the generalised Elliott-Halberstam conjecture. As such, we will have to rework the computations for these $m$, given that the task of trying to attain the conjectured distributional estimate on the primes would be a significant amount of work that is rather disjoint from the rest of the Polymath8b writeup. One could simply dust off the old maple code for this (e.g. one could tweak the code here, with the constraint 1080*varpi/13 + 330*delta/13 < 1 being replaced by 600*varpi/7 + 180*delta/7 < 1), but there is also a chance that our asymptotic bounds for $M_k$ (currently given in messy detail here) could be sharpened. I plan to look at this issue fairly soon.
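To see what swapping the constraint costs, here is a quick comparison (function names are mine; this simply compares the two half-planes of admissible $(\varpi, \delta)$ parameters):

```python
def admissible_8a(varpi, delta):
    """Constraint used in the original Polymath8a computations
    (relying on the distributional estimate that was never fully
    worked out): 1080*varpi/13 + 330*delta/13 < 1."""
    return 1080 * varpi / 13 + 330 * delta / 13 < 1

def admissible_fallback(varpi, delta):
    """More conservative replacement constraint:
    600*varpi/7 + 180*delta/7 < 1."""
    return 600 * varpi / 7 + 180 * delta / 7 < 1

# Maximal varpi (at delta = 0) under each constraint:
print(13 / 1080)  # about 0.01204
print(7 / 600)    # about 0.01167 -- strictly smaller feasible region
```

The fallback constraint carves out a strictly smaller region of $(\varpi, \delta)$ space, which is why the $H_m$ computations for $m \geq 2$ need to be rerun rather than simply inherited.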
Also, there are a number of smaller observations (e.g. the parity problem barrier that prevents us from ever getting a better bound on $H_1$ than 6 by these methods) that should also go into the paper at some point; the current outline of the paper as given in the draft is not necessarily comprehensive.
This is the ninth thread for the Polymath8b project to obtain new bounds for the quantity
$$H_m := \liminf_{n \to \infty} (p_{n+m} - p_n).$$
The focus is now on bounding $H_1$ unconditionally (in particular, without resorting to the Elliott-Halberstam conjecture or its generalisations). We can bound $H_1 \leq H(k)$, with $H(k)$ the minimal diameter of an admissible $k$-tuple, whenever one can find a symmetric square-integrable function $F$ supported on the simplex $\mathcal{R}_k := \{ (t_1,\dots,t_k) \in [0,+\infty)^k : t_1 + \dots + t_k \leq 1 \}$ such that
Our strategy for establishing this has been to restrict $F$ to be a linear combination of symmetrised monomials (restricted of course to $\mathcal{R}_k$), where the degree is small; actually, it seems convenient to work with a slightly different basis in which the exponents are restricted to be even. The criterion (1) then becomes a large quadratic program with explicit but complicated rational coefficients. This approach has lowered $k$ down to 54, which led to the bound $H_1 \leq 270$.
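The rational coefficients of such a quadratic program ultimately come from integrals of monomials over the simplex, which have an exact closed form (the Dirichlet integral formula); a minimal sketch:

```python
from fractions import Fraction
from math import factorial

def simplex_monomial_integral(exponents):
    """Exact integral of t_1^{a_1} * ... * t_k^{a_k} over the simplex
    t_1 + ... + t_k <= 1, t_i >= 0, via the Dirichlet formula
    a_1! * ... * a_k! / (k + a_1 + ... + a_k)!."""
    k, s = len(exponents), sum(exponents)
    num = 1
    for a in exponents:
        num *= factorial(a)
    return Fraction(num, factorial(k + s))

print(simplex_monomial_integral((0, 0)))     # 1/2, the area of the triangle
print(simplex_monomial_integral((1, 1)))     # 1/24
print(simplex_monomial_integral((2, 0, 1)))  # 1/360
```

Working with exact rationals like this keeps the quadratic program free of rounding error, at the cost of coefficient sizes that grow quickly with the degree and dimension.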
will suffice, whenever $F$ is supported now on the enlarged region and obeys the vanishing marginal condition there. The latter is in particular obeyed when $F$ is supported on $\mathcal{R}_k$. A modification of the preceding strategy has lowered $k$ slightly further, giving the bound $H_1 \leq 252$, which is currently our best record.
However, the quadratic programs here have become extremely large and slow to run, and more efficient algorithms (or possibly more computer power) may be required to advance further.
This is the eighth thread for the Polymath8b project to obtain new bounds for the quantity
$$H_m := \liminf_{n \to \infty} (p_{n+m} - p_n).$$
The big news since the last thread is that we have managed to obtain the (sieve-theoretically) optimal bound of $H_1 \leq 6$ assuming the generalised Elliott-Halberstam conjecture (GEH), which pretty much closes off that part of the story. Unconditionally, our bound on $H_1$ is still $H_1 \leq 270$. This bound was obtained using the “vanilla” Maynard sieve, in which the cutoff $F$ was supported in the original simplex $\mathcal{R}_k$, and only Bombieri-Vinogradov was used. In principle, we can enlarge the sieve support a little bit further now; for instance, we can enlarge the simplex, but then have to shrink the $J$ integrals accordingly, provided that the relevant marginals vanish. However, we do not yet know how to numerically work with these expanded problems.
Given the substantial progress made so far, it looks like we are close to the point where we should declare victory and write up the results (though we should take one last look to see if there is any room to improve the bounds). There is actually a fair bit to write up:
- Improvements to the Maynard sieve (pushing beyond the simplex, the epsilon trick, and pushing beyond the cube);
- Asymptotic bounds for $M_k$ and hence $H_m$;
- Explicit bounds for $H_m$ (using the Polymath8a results);
- $H_1 \leq 6$ on GEH (and parity obstructions to any further improvement).
I will try to create a skeleton outline of such a paper in the Polymath8 Dropbox folder soon. It shouldn’t be nearly as big as the Polymath8a paper, but it will still be quite sizeable.
This is the seventh thread for the Polymath8b project to obtain new bounds for the quantity
$$H_m := \liminf_{n \to \infty} (p_{n+m} - p_n).$$
The current focus is on improving the upper bound on $H_1$ under the assumption of the generalised Elliott-Halberstam conjecture (GEH) from $H_1 \leq 8$ to $H_1 \leq 6$. Very recently, we have been able to exploit GEH more fully, leading to a promising new expansion of the sieve support region. The problem now reduces to the following:
Problem 1 Does there exist a (not necessarily convex) polytope with quantities , and a non-trivial square-integrable function supported on such that
- when ;
- when ;
- when ;
and such that we have the inequality
An affirmative answer to this question will imply $H_1 \leq 6$ on GEH. We are “within two percent” of this claim; we cannot quite reach the required threshold yet, but have come very close. However, we have not yet fully optimised in the above problem. In particular, an enlarged simplex
is now available, and should lead to some noticeable improvement in the numerology.
There is also a very slim chance that the twin prime conjecture is now provable on GEH. It would require an affirmative solution to the following problem:
Problem 2 Does there exist a (not necessarily convex) polytope with quantities , and a non-trivial square-integrable function supported on such that
- when ;
- when ;
and such that we have the inequality
We suspect that the answer to this question is negative, but have not formally ruled it out yet.
For the rest of this post, I will justify why positive answers to these sorts of variational problems are sufficient to get bounds on $H_1$ (or more generally $H_m$).