You are currently browsing the category archive for the ‘question’ category.
By an odd coincidence, I stumbled upon a second question in as many weeks about power series, and once again the only way I know how to prove the result is by complex methods. Once again, I am leaving it here as a challenge to any interested readers: I would be particularly interested in knowing of a proof that is not based on complex analysis (or thinly disguised versions thereof), or of a reference to previous literature where something like this identity has occurred. (I suspect, for instance, that something like this may have shown up before in free probability, based on the answer to part (ii) of the problem.)
Here is a purely algebraic form of the problem:
where we use to denote the -fold derivative of with respect to the variable .
- (i) Show that can be formally recovered from by the formula
- (ii) There is a remarkable further formal identity relating with that does not explicitly involve any infinite summation. What is this identity?
To rigorously formulate part (i) of this problem, one could work in the commutative differential ring of formal infinite series generated by polynomial combinations of and its derivatives (with no constant term). Part (ii) is a bit trickier to formulate in this abstract ring; the identity in question is easier to state if are formal power series, or (even better) convergent power series, as it involves operations such as composition or inversion that can be more easily defined in those latter settings.
To illustrate Problem 1(i), let us compute up to third order in , using to denote any quantity involving four or more factors of and its derivatives, and similarly for other exponents than . Then we have
multiplying, we have
and hence after a lot of canceling
Thus Problem 1(i) holds up to errors of at least. In principle one can continue verifying Problem 1(i) to increasingly high order in , but the computations rapidly become quite lengthy, and I do not know of a direct way to ensure that one always obtains the required cancellation at the end of the computation.
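Such order-by-order verifications are easy to mechanise. Since the displayed formulas of this post are not reproduced here, the sketch below (Python, exact rational arithmetic; the sample series f(z) = z + z^2 + 2z^3 and the truncation order N = 8 are arbitrary illustrative choices) carries out the same style of computation for a classical stand-in identity: it computes the compositional inverse of a power series coefficient by coefficient, and then confirms each coefficient against the Lagrange inversion formula.

```python
from fractions import Fraction

N = 8  # work with formal series modulo z^N

def mul(a, b):
    # product of two truncated series (coefficient lists, index = power of z)
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def compose(a, b):
    # a(b(z)) mod z^N; requires b[0] == 0
    out = [Fraction(0)] * N
    power = [Fraction(1)] + [Fraction(0)] * (N - 1)  # b(z)^0
    for k in range(N):
        for i in range(N):
            out[i] += a[k] * power[i]
        power = mul(power, b)
    return out

# a sample series f(z) = z + z^2 + 2 z^3 with f(0) = 0, f'(0) = 1
f = [Fraction(c) for c in (0, 1, 1, 2)] + [Fraction(0)] * (N - 4)

# compositional inverse g of f, solved one coefficient at a time from f(g(z)) = z
g = [Fraction(0)] * N
g[1] = Fraction(1)
for n in range(2, N):
    g[n] = -compose(f, g)[n]  # since f'(0) = 1, this correction is exact

check = compose(f, g)
assert check[1] == 1 and all(c == 0 for c in check[2:])

# independent order-by-order check against the Lagrange inversion formula:
#   n * [z^n] g  =  [w^(n-1)] (w / f(w))^n
def inv_unit(a):
    # reciprocal of a truncated series with a[0] != 0
    out = [Fraction(0)] * N
    out[0] = 1 / a[0]
    for n in range(1, N):
        out[n] = -sum(a[k] * out[n - k] for k in range(1, n + 1)) / a[0]
    return out

h = inv_unit(f[1:] + [Fraction(0)])  # h = z / f(z) as a series in z
hp = [Fraction(1)] + [Fraction(0)] * (N - 1)
for n in range(1, N):
    hp = mul(hp, h)                  # hp = h^n
    assert n * g[n] == hp[n - 1]
```

As with the hand computation above, the cancellations arrive "for free" here because the stand-in identity is known to be true; the interest of Problem 1(i) is precisely that no such a priori explanation of the cancellation is available.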
Problem 1(i) can also be posed in formal power series: if
is a formal power series with complex coefficients and no constant term, with , then one can verify that the series
makes sense as a formal power series with no constant term, thus
For instance it is not difficult to show that . If one further has , then it turns out that
as formal power series. Currently the only way I know how to show this is by first proving the claim for power series with a positive radius of convergence using the Cauchy integral formula, but even this is a bit tricky unless one has managed to guess the identity in (ii) first. (In fact, the way I discovered this problem was by first trying to solve (a variant of) the identity in (ii) by Taylor expansion in the course of attacking another problem, and obtaining the transform in Problem 1 as a consequence.)
The transform that takes to resembles both the exponential function
and Taylor’s formula
but does not seem to be directly connected to either (this is more apparent once one knows the identity in (ii)).
My colleague Tom Liggett recently posed to me the following problem about power series in one real variable . Observe that the power series
has very rapidly decaying coefficients (of order ), leading to an infinite radius of convergence; also, as the series converges to , the series decays very rapidly as approaches . The problem is whether this is essentially the only example of this type. More precisely:
Problem 1 Let be a bounded sequence of real numbers, and suppose that the power series
(which has an infinite radius of convergence) decays like as , in the sense that the function remains bounded as . Must the sequence be of the form for some constant ?
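The dichotomy being claimed can be seen numerically. In the sketch below (Python; the truncation at 80 terms and the sample point x = 10 are arbitrary choices), the sequence b_n = (-1)^n makes e^x times the series identically equal to 1, while the perturbation that merely deletes the n = 0 coefficient already makes the product grow like e^x.

```python
from fractions import Fraction
import math

def series(b, x, terms=80):
    # partial sum of sum_n b(n) x^n / n!, in exact rational arithmetic
    s, term = Fraction(0), Fraction(1)
    for n in range(terms):
        s += b(n) * term
        term = term * x / (n + 1)
    return s

x = Fraction(10)
# b_n = (-1)^n gives the series e^{-x}, so e^x times the series equals 1
flat = math.exp(10) * float(series(lambda n: (-1) ** n, x))
# deleting the n = 0 coefficient gives e^{-x} - 1, so the product is 1 - e^x
blown = math.exp(10) * float(series(lambda n: 0 if n == 0 else (-1) ** n, x))
```

Of course this only illustrates the phenomenon for one particular perturbation; the content of the problem is that every bounded coefficient sequence other than multiples of (-1)^n behaves like the second case.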
As it turns out, the problem has a very nice solution using complex analysis methods, which by coincidence I happen to be teaching right now. I am therefore posing it as a challenge to my complex analysis students and to other readers of this blog to answer the above problem by complex methods; feel free to post solutions in the comments below (and in particular, if you don’t want to be spoiled, you should probably refrain from reading the comments). In fact, the only way I currently know how to solve this problem is by complex methods; I would be interested in seeing a purely real-variable solution that is not simply a thinly disguised version of a complex-variable argument.
(To be fair to my students, the complex variable argument does require one additional tool that is not directly covered in my notes. That tool can be found here.)
Over on the polymath blog, I’ve posted (on behalf of Dinesh Thakur) a new polymath proposal, which is to explain some numerically observed identities involving the irreducible polynomials in the polynomial ring over the finite field of characteristic two, the simplest of which is
(expanded in terms of Taylor series in ). Comments on the problem should be placed in the polymath blog post; if there is enough interest, we can start a formal polymath project on it.
Let denote the Liouville function. The prime number theorem is equivalent to the estimate
as , that is to say that exhibits cancellation on large intervals such as . This result can be improved to give cancellation on shorter intervals. For instance, using the known zero density estimates for the Riemann zeta function, one can establish that
as if for some fixed ; I believe this result is due to Ramachandra (see also Exercise 21 of this previous blog post), and in fact one could obtain a better error term on the right-hand side that for instance gained an arbitrary power of . On the Riemann hypothesis (or the weaker density hypothesis), it was known that the could be lowered to .
Early this year, there was a major breakthrough by Matomaki and Radziwill, who (among other things) showed that the asymptotic (1) was in fact valid for any with that went to infinity as , thus yielding cancellation on extremely short intervals. This has many further applications; for instance, this estimate, or more precisely its extension to other “non-pretentious” bounded multiplicative functions, was a key ingredient in my recent solution of the Erdös discrepancy problem, as well as in obtaining logarithmically averaged cases of Chowla’s conjecture, such as
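The cancellation asserted by (1) on long intervals is easy to observe numerically. The sketch below (Python; the cutoff 10^5 is an arbitrary choice) computes the Liouville function by a smallest-prime-factor sieve and checks that the partial sum is a tiny fraction of the trivial bound.

```python
N = 10**5

# smallest-prime-factor sieve
spf = list(range(N + 1))
for i in range(2, int(N**0.5) + 1):
    if spf[i] == i:                      # i is prime
        for j in range(i * i, N + 1, i):
            if spf[j] == j:
                spf[j] = i

# Liouville function: (-1)^(number of prime factors counted with multiplicity)
lam = [0, 1] + [0] * (N - 1)
for n in range(2, N + 1):
    lam[n] = -lam[n // spf[n]]

S = sum(lam[1:])
print(abs(S) / N)   # much smaller than the trivial bound of 1
```

Needless to say, numerics of this kind say nothing about cancellation on the very short intervals addressed by the Matomaki-Radziwill theorem, which is a far deeper statement.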
It is of interest to twist the above estimates by phases such as the linear phase . In 1937, Davenport showed that
from which one can see that this is another averaged form of Chowla’s conjecture (stronger than the one I was able to prove with Matomaki and Radziwill, but a consequence of the unaveraged Chowla conjecture). If one inserted such a bound into the machinery I used to solve the Erdös discrepancy problem, it should lead to further averaged cases of Chowla’s conjecture, such as
though I have not fully checked the details of this implication. It should also have a number of new implications for sign patterns of the Liouville function, though we have not explored these in detail yet.
One can write (4) equivalently in the form
uniformly for all -dependent phases . In contrast, (3) is equivalent to the subcase of (6) when the linear phase coefficient is independent of . This dependency of on seems to necessitate some highly nontrivial additive combinatorial analysis of the function in order to establish (4) when is small. To date, this analysis has proven to be elusive, but I would like to record what one can do with more classical methods like Vaughan’s identity, namely:
The values of in this range are far too large to yield implications such as new cases of the Chowla conjecture, but it appears that the exponent is the limit of “classical” methods (at least as far as I was able to apply them), in the sense that one does not do any combinatorial analysis on the function , nor does one use modern equidistribution results on “Type III sums” that require deep estimates on Kloosterman-type sums. The latter may shave a little bit off of the exponent, but I don’t see how one would ever hope to go below without doing some non-trivial combinatorics on the function . UPDATE: I have come across this paper of Zhan, which uses mean-value theorems for L-functions to lower the exponent to .
Let me now sketch the proof of the proposition, omitting many of the technical details. We first remark that known estimates on sums of the Liouville function (or similar functions such as the von Mangoldt function) in short arithmetic progressions, based on zero-density estimates for Dirichlet L-functions, can handle the “major arc” case of (4) (or (6)) where is restricted to be of the form for (the exponent here being of the same numerology as the exponent in the classical result of Ramachandra, tied to the best zero density estimates currently available); for instance, a modification of the arguments in this recent paper of Koukoulopoulos would suffice. Thus we can restrict attention to “minor arc” values of (or , using the interpretation of (6)).
Next, one breaks up (or the closely related Möbius function) into Dirichlet convolutions using one of the standard identities (e.g. Vaughan’s identity or Heath-Brown’s identity), as discussed for instance in this previous post (which is focused more on the von Mangoldt function, but analogous identities exist for the Liouville and Möbius functions). The exact choice of identity is not terribly important, but the upshot is that can be decomposed into terms, each of which is either of the “Type I” form
for some coefficients that are roughly of logarithmic size on the average, and scales with and , or else of the “Type II” form
for some coefficients that are roughly of logarithmic size on the average, and scales with and . As discussed in the previous post, the exponent is a natural barrier in these identities if one is unwilling to also consider “Type III” type terms which are roughly of the shape of the third divisor function .
A Type I sum makes a contribution to that can be bounded (via Cauchy-Schwarz) in terms of an expression such as
The inner sum exhibits a lot of cancellation unless is within of an integer. (Here, “a lot” should be loosely interpreted as “gaining many powers of over the trivial bound”.) Since is significantly larger than , standard Vinogradov-type manipulations (see e.g. Lemma 13 of these previous notes) show that this bad case occurs for many only when is “major arc”, which is the case we have specifically excluded. This lets us dispose of the Type I contributions.
A Type II sum makes a contribution to roughly of the form
We can break this up into a number of sums roughly of the form
for ; note that the range is non-trivial because is much larger than . Applying the usual bilinear sum Cauchy-Schwarz methods (e.g. Theorem 14 of these notes) we conclude that there is a lot of cancellation unless one has for some . But with , is well below the threshold for the definition of major arc, so we can exclude this case and obtain the required cancellation.
A natural generalisation of Chowla’s conjecture was proposed by Elliott. For simplicity we will only consider Elliott’s conjecture for the pair correlations
for any Dirichlet character and any real number . In the language of “pretentious number theory”, as developed by Granville and Soundararajan, the hypothesis (2) asserts that the completely multiplicative function does not “pretend” to be like the completely multiplicative function for any character and real number . A condition of this form is necessary; for instance, if is precisely equal to and has period , then is equal to as and (1) clearly fails. The prime number theorem in arithmetic progressions implies that the Liouville function obeys (2), and so the Elliott conjecture contains the Chowla conjecture as a special case.
As it turns out, Elliott’s conjecture is false as stated, with the counterexample having the property that “pretends” locally to be the function for in various intervals , where and go to infinity in a certain prescribed sense. See this paper of Matomaki, Radziwill, and myself for details. However, we view this as a technicality, and continue to believe that certain “repaired” versions of Elliott’s conjecture still hold. For instance, our counterexample does not apply when is restricted to be real-valued rather than complex, and we believe that Elliott’s conjecture is valid in this setting. Returning to the complex-valued case, we still expect the asymptotic (1) provided that the condition (2) is replaced by the stronger condition
as for all fixed Dirichlet characters . In our paper we supported this claim by establishing a certain “averaged” version of this conjecture; see that paper for further details. (See also this recent paper of Frantzikinakis and Host which establishes a different averaged version of this conjecture.)
One can make a stronger “non-asymptotic” version of this corrected Elliott conjecture, in which the parameter does not go to infinity, or equivalently that the function is permitted to depend on :
Conjecture 1 (Non-asymptotic Elliott conjecture) Let , let be sufficiently large depending on , and let be sufficiently large depending on . Suppose that is a completely multiplicative function with magnitude bounded by , such that
for all Dirichlet characters of period at most . Then one has
for all natural numbers .
The -dependent factor in the constraint is necessary, as can be seen by considering the completely multiplicative function (for instance). Again, the results in my previous paper with Matomaki and Radziwill can be viewed as establishing an averaged version of this conjecture.
Meanwhile, we have the following conjecture that is the focus of the Polymath5 project:
It is instructive to compute some near-counterexamples to Conjecture 2 that illustrate the difficulty of the Erdös discrepancy problem. The first near-counterexample is that of a non-principal Dirichlet character that takes values in rather than . For this function, one has from the complete multiplicativity of that
If denotes the period of , then has mean zero on every interval of length , and thus
Thus has bounded discrepancy.
Of course, this is not a true counterexample to Conjecture 2 because can take the value . Let us now consider the following variant example, which is the simplest member of a family of examples studied by Borwein, Choi, and Coons. Let be the non-principal Dirichlet character of period (thus equals when , when , and when ), and define the completely multiplicative function by setting when and . This is about the simplest modification one can make to the previous near-counterexample to eliminate the zeroes. Now consider the sum
with for some large . Writing with coprime to and at most , we can write this sum as
Now observe that . The function has mean zero on every interval of length three, and is equal to mod , and thus
for every , and thus
Thus also has unbounded discrepancy, but only barely so (it grows logarithmically in ). These examples suggest that the main “enemy” to proving Conjecture 2 comes from completely multiplicative functions that somehow “pretend” to be like a Dirichlet character but do not vanish at the zeroes of that character. (Indeed, the special case of Conjecture 2 when is completely multiplicative is already open, and appears to be an important subcase.)
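This logarithmic growth is easy to reproduce numerically. In the sketch below (Python), the completely multiplicative function takes the value +1 at the prime 3 and agrees with the non-principal character of period 3 at all other primes, as in the Borwein-Choi-Coons example above; its partial sum at N = (3^8 - 1)/2, whose base-3 digits are all equal to 1, is exactly 8, i.e. about log_3 N.

```python
def g(n):
    # completely multiplicative: +1 at the prime 3, the period-3 character elsewhere
    while n % 3 == 0:
        n //= 3
    return 1 if n % 3 == 1 else -1

N = (3**8 - 1) // 2          # base-3 representation 11111111
S, worst = 0, 0
for n in range(1, N + 1):
    S += g(n)
    worst = max(worst, abs(S))

print(S, worst)              # the running maximum grows like log_3 N
```

(In fact, one can check that the partial sum at n is exactly the number of base-3 digits of n that are equal to 1, which makes the logarithmic growth, and its sharpness, transparent.)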
All of these conjectures remain open. However, I would like to record in this blog post the following striking connection, illustrating the power of the Elliott conjecture (particularly in its nonasymptotic formulation):
The argument relies heavily on two observations that were previously made in connection with the Polymath5 project. The first is a Fourier-analytic reduction that replaces the Erdös discrepancy problem with an averaged version for completely multiplicative functions . An application of Cauchy-Schwarz then shows that any counterexample to that version will violate the conclusion of Conjecture 1, so if one assumes that conjecture then must pretend to be like a function of the form . One then uses (a generalisation of) a second argument from Polymath5 to rule out this case, basically by reducing matters to a more complicated version of the Borwein-Choi-Coons analysis. Details are provided below the fold.
There is some hope that the Chowla and Elliott conjectures can be attacked, as the parity barrier which is so impervious to attack for the twin prime conjecture seems to be more permeable in this setting. (For instance, in my previous post I raised a possible approach, based on establishing expander properties of a certain random graph, which seems to get around the parity problem, in principle at least.)
(Update, Sep 25: fixed some treatment of error terms, following a suggestion of Andrew Granville.)
Here’s a cute identity I discovered by accident recently. Observe that
and so one can conjecture that one has
when is even, and
when is odd. This is obvious in the even case since is a polynomial of degree , but I struggled for a while with the odd case before finding a slick three-line proof. (I was first trying to prove the weaker statement that was non-negative, but for some strange reason I was only able to establish this by working out the derivative exactly, rather than by using more analytic methods, such as convexity arguments.) I thought other readers might like the challenge (and also I’d like to see some other proofs), so rather than post my own proof immediately, I’ll see if anyone would like to supply their own proofs or thoughts in the comments. Also I am curious to know if this identity is connected to any other existing piece of mathematics.
The lonely runner conjecture is the following open problem:
Conjecture 1 Suppose one has runners on the unit circle , all starting at the origin and moving at different speeds. Then for each runner, there is at least one time for which that runner is “lonely” in the sense that it is separated by a distance at least from all other runners.
One can normalise the speed of the lonely runner to be zero, at which point the conjecture can be reformulated (after replacing by ) as follows:
Conjecture 2 Let be non-zero real numbers for some . Then there exists a real number such that the numbers are all a distance at least from the integers, thus where denotes the distance of to the nearest integer.
This conjecture has been proven for , but remains open for larger . The bound is optimal, as can be seen by looking at the case and applying the Dirichlet approximation theorem. Note that for each non-zero , the set has (Banach) density for any , and from this and the union bound we can easily find for which
for any , but it has proven to be quite challenging to remove the factor of to increase to . (As far as I know, even improving to for some absolute constant and sufficiently large remains open.)
The speeds in the above conjecture are arbitrary non-zero reals, but it has been known for some time that one can reduce without loss of generality to the case when the are rationals, or equivalently (by scaling) to the case where they are integers; see e.g. Section 4 of this paper of Bohman, Holzman, and Kleitman.
In this post I would like to remark on a slight refinement of this reduction, in which the speeds are integers of bounded size, where the bound depends on . More precisely:
Proposition 3 In order to prove the lonely runner conjecture, it suffices to do so under the additional assumption that the are integers of size at most , where is an (explicitly computable) absolute constant. (More precisely: if this restricted version of the lonely runner conjecture is true for all , then the original version of the conjecture is also true for all .)
In principle, this proposition allows one to verify the lonely runner conjecture for a given in finite time; however, the number of cases to check with this proposition grows faster than exponentially in , and so this is unfortunately not a feasible approach to verifying the lonely runner conjecture for more values of than currently known.
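For integer speeds everything is periodic with period 1, so the finite-time verification is a direct search over one period. The sketch below (Python, exact rational arithmetic; the grid denominator 840 is an ad hoc choice) checks the conclusion of Conjecture 2, with the bound 1/(n+1), for two small speed sets.

```python
from fractions import Fraction

def dist_to_int(x):
    # distance from a non-negative rational x to the nearest integer
    frac = x - (x.numerator // x.denominator)
    return min(frac, 1 - frac)

def lonely_time(speeds, Q=840):
    # search t = k/Q in [0,1) for a time when every ||v t|| >= 1/(n+1)
    n = len(speeds)
    bound = Fraction(1, n + 1)
    for k in range(Q):
        t = Fraction(k, Q)
        if all(dist_to_int(v * t) >= bound for v in speeds):
            return t
    return None

print(lonely_time([1, 2, 3]))     # a time with ||t||, ||2t||, ||3t|| all >= 1/4
print(lonely_time([1, 2, 3, 4]))  # a time with all four distances >= 1/5
```

Of course, this brute-force search over a fixed grid is only an illustration; a rigorous finite verification would need the grid spacing to be controlled in terms of the speeds, which is straightforward for integer speeds since each distance function is piecewise linear with slopes bounded by the speed.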
One of the key tools needed to prove this proposition is the following additive combinatorics result. Recall that a generalised arithmetic progression (or GAP) in the reals is a set of the form
for some and ; the quantity is called the rank of the progression. If , the progression is said to be -proper if the sums with for are all distinct. We have
Lemma 4 (Progressions lie inside proper progressions) Let be a GAP of rank in the reals, and let . Then is contained in a -proper GAP of rank at most , with
Now let , and assume inductively that the lonely runner conjecture has been proven for all smaller values of , as well as for the current value of in the case that are integers of size at most for some sufficiently large . We will show that the lonely runner conjecture holds in general for this choice of .
Let be non-zero real numbers. Let be a large absolute constant to be chosen later. From the above lemma applied to the GAP , one can find a -proper GAP of rank at most containing such that
in particular if is large enough depending on .
for some , , and . We thus have for , where is the linear map and are non-zero and lie in the box .
We now need an elementary lemma that allows us to create a “collision” between two of the via a linear projection, without making any of the collide with the origin:
Lemma 5 Let be non-zero vectors that are not all collinear with the origin. Then, after replacing one or more of the with their negatives if necessary, there exists a pair such that , and such that none of the is a scalar multiple of .
Proof: We may assume that , since the case is vacuous. Applying a generic linear projection to (which does not affect collinearity, or the property that a given is a scalar multiple of ), we may then reduce to the case .
By a rotation and relabeling, we may assume that lies on the negative -axis; by flipping signs as necessary we may then assume that all of the lie in the closed right half-plane. As the are not all collinear with the origin, one of the lies off of the -axis; by relabeling, we may assume that lies off of the axis and makes a minimal angle with the -axis. Then the angle of with the -axis is non-zero but smaller than any non-zero angle that any of the make with this axis, and so none of the are a scalar multiple of , and the claim follows.
We now return to the proof of the proposition. If the are all collinear with the origin, then lie in a one-dimensional arithmetic progression , and then by rescaling we may take the to be integers of magnitude at most , at which point we are done by hypothesis. Thus, we may assume that the are not all collinear with the origin, and so by the above lemma and relabeling we may assume that is non-zero, and that none of the are scalar multiples of .
with for ; by relabeling we may assume without loss of generality that is non-zero, and furthermore that
where is a natural number and have no common factor.
We now define a variant of by the map
where the are real numbers that are linearly independent over , whose precise value will not be of importance in our argument. This is a linear map with the property that , so that consists of at most distinct real numbers, which are non-zero since none of the are scalar multiples of , and the are linearly independent over . As we are assuming inductively that the lonely runner conjecture holds for , we conclude (after deleting duplicates) that there exists at least one real number such that
We would like to “approximate” by to then conclude that there is at least one real number such that
It turns out that we can do this by a Fourier-analytic argument taking advantage of the -proper nature of . Firstly, we see from the Dirichlet approximation theorem that one has
for a set of reals of (Banach) density . Thus, by the triangle inequality, we have
for a set of reals of density .
Applying a smooth Fourier multiplier of Littlewood-Paley type, one can find a trigonometric polynomial
which takes values in , is for , and is no larger than for . We then have
where denotes the mean value of a quasiperiodic function on the reals . We expand the left-hand side out as
From the genericity of , we see that the constraint
and is the Dirichlet series
By Fourier expansion and writing , we may write (4) as
In particular, there exists a such that
Since is bounded in magnitude by , and is bounded by , we thus have
for each , which by the size properties of implies that for all , giving the lonely runner conjecture for .
The (presumably) final article arising from the Polymath8 project has now been uploaded to the arXiv as “The ‘bounded gaps between primes’ Polymath project – a retrospective”. This article, submitted to the Newsletter of the European Mathematical Society, consists of personal contributions from ten different participants (at varying stages of career, and with varying intensity of participation) on their own experiences with the project, and some thoughts as to what lessons to draw for any subsequent Polymath projects. (At present, I do not know of any such projects being proposed, but from recent experience I would imagine that some opportunity suitable for a Polymath approach will present itself at some point in the near future.)
This post will also serve as the latest (and probably last) of the Polymath8 threads (rolling over this previous post), to wrap up any remaining discussion about any aspect of this project.
I’ve just uploaded to the arXiv the D.H.J. Polymath paper “Variants of the Selberg sieve, and bounded intervals containing many primes”, which is the second paper to be produced from the Polymath8 project (the first one being discussed here). We’ll refer to this latter paper here as the Polymath8b paper, and the former as the Polymath8a paper. As with Polymath8a, the Polymath8b paper is concerned with the smallest asymptotic prime gap
where denotes the prime, as well as the more general quantities
In the breakthrough paper of Goldston, Pintz, and Yildirim, the bound H_1 ≤ 16 was obtained under the strong hypothesis of the Elliott-Halberstam conjecture. An unconditional bound on H_1, however, remained elusive until the celebrated work of Zhang last year, who showed that H_1 ≤ 70,000,000.
The Polymath8a paper then improved this to H_1 ≤ 4680. After that, Maynard introduced a new multidimensional Selberg sieve argument that gave the substantial improvement H_1 ≤ 600 unconditionally, and H_1 ≤ 12 on the Elliott-Halberstam conjecture; furthermore, bounds on H_m for higher m were obtained for the first time, and specifically that H_m ≪ m^3 e^{4m} for all m, with further improvements on the Elliott-Halberstam conjecture. (I had independently discovered the multidimensional sieve idea, although I did not obtain Maynard’s specific numerical results, and my asymptotic bounds were a bit weaker.)
In Polymath8b, we obtain some further improvements. Unconditionally, we have H_1 ≤ 246, together with some explicit bounds on H_m for higher m; on the Elliott-Halberstam conjecture we have some further numerical improvements to these bounds; and assuming the generalised Elliott-Halberstam conjecture we have the bound H_1 ≤ 6, which is best possible from sieve-theoretic methods thanks to the parity problem obstruction.
There were a variety of methods used to establish these results. Maynard’s paper obtained a criterion for bounding which reduced to finding a good solution to a certain multidimensional variational problem. When the dimension parameter was relatively small (e.g. ), we were able to obtain good numerical solutions either by continuing the method of Maynard (using a basis of symmetric polynomials), or by using a Krylov iteration scheme. For large , we refined the asymptotics and obtained near-optimal solutions of the variational problem. For the bounds, we extended the reach of the multidimensional Selberg sieve (particularly under the assumption of the generalised Elliott-Halberstam conjecture) by allowing the function in the multidimensional variational problem to extend to a larger region of space than was previously admissible, albeit with some tricky new constraints on (and penalties in the variational problem). This required some unusual sieve-theoretic manipulations, notably an “epsilon trick”, ultimately relying on the elementary inequality , that allowed one to get non-trivial lower bounds for sums such as even if the sum had no non-trivial estimates available; and a way to estimate divisor sums such as even if was permitted to be comparable to or even exceed , by using the fundamental theorem of arithmetic to factorise (after restricting to the case when is almost prime). I hope that these sieve-theoretic tricks will be useful in future work in the subject.
With this paper, the Polymath8 project is almost complete; there is still a little bit of scope to push our methods further and get some modest improvement for instance to the bound, but this would require a substantial amount of effort, and it is probably best to instead wait for some new breakthrough in the subject to come along. One final task we are performing is to write up a retrospective article on both the 8a and 8b experiences, an incomplete writeup of which can be found here. If anyone wishes to contribute some commentary on these projects (whether you were an active contributor, an occasional contributor, or a silent “lurker” in the online discussion), please feel free to do so in the comments to this post.
This should be the final thread (for now, at least) for the Polymath8 project (encompassing the original Polymath8a paper, the nearly finished Polymath8b paper, and the retrospective paper), superseding the previous Polymath8b thread (which was quite full) and the Polymath8a/retrospective thread (which was more or less inactive).
On Polymath8a: I talked briefly with Andrew Granville, who is handling the paper for Algebra & Number Theory, and he said that a referee report should be coming in soon. Apparently the length of the paper is a bit of an issue (not surprising, as it is 163 pages long) and there will be some suggestions to trim the size down a bit.
In view of the length issue at A&NT, I’m now leaning towards taking up Ken Ono’s offer to submit the Polymath8b paper to the new open access journal “Research in the Mathematical Sciences“. I think the paper is almost ready to be submitted (after the current participants sign off on it, of course), but it might be worth waiting on the Polymath8a referee report in case the changes suggested impact the 8b paper.
Finally, it is perhaps time to start working on the retrospective article, and collect some impressions from participants. I wrote up a quick draft of my own experiences, and also pasted in Pace Nielsen’s thoughts, as well as a contribution from an undergraduate following the project (Andrew Gibson). Hopefully we can collect a few more (either through comments on this blog, through email, or through Dropbox), and then start working on editing them together and finding some suitable concluding points to make about the Polymath8 project, and what lessons we can take from it for future projects of this type.