
Kaisa Matomäki, Maksym Radziwill, and I just uploaded to the arXiv our paper “Fourier uniformity of bounded multiplicative functions in short intervals on average“. This paper is the outcome of our attempts during the MSRI program in analytic number theory last year to attack the local Fourier uniformity conjecture for the Liouville function . This conjecture generalises a landmark result of Matomäki and Radziwill, who show (among other things) that one has the asymptotic

whenever and goes to infinity as . Informally, this says that the Liouville function has small mean for almost all short intervals . The remarkable thing about this theorem is that there is no lower bound on how goes to infinity with ; one can take for instance . This lack of lower bound was crucial when I applied this result (or more precisely, a generalisation of this result to arbitrary non-pretentious bounded multiplicative functions) a few years ago to solve the Erdős discrepancy problem, as well as a logarithmically averaged two-point Chowla conjecture; for instance, it implies that

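As a quick numerical illustration of the short-interval statement (this is not from the paper, and every parameter choice below is ad hoc), one can compute the Liouville function on a sample of short intervals and observe that most of the resulting means are small. A minimal sketch in Python, using sympy for factorisation:

```python
# Toy illustration (not from the paper): sample short intervals [x, x+H] with x of size
# roughly X, compute the mean of the Liouville function on each, and observe that most
# of these means are small.  All sizes here are ad hoc and far from the asymptotic regime.
import random
import sympy

def liouville(n: int) -> int:
    # lambda(n) = (-1)^Omega(n), with Omega counting prime factors with multiplicity
    return (-1) ** sum(sympy.factorint(n).values())

X, H, samples = 10**6, 100, 200
means = []
for _ in range(samples):
    x = random.randint(X, 2 * X)
    means.append(sum(liouville(n) for n in range(x, x + H)) / H)

frac_small = sum(1 for m in means if abs(m) < 0.2) / samples
print(f"fraction of sampled short intervals with |mean| < 0.2: {frac_small:.2f}")
```
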
The local Fourier uniformity conjecture asserts the stronger asymptotic

under the same hypotheses on and . As I worked out in a previous paper, this conjecture would imply a logarithmically averaged three-point Chowla conjecture, implying for instance that

This particular bound also follows from some slightly different arguments of Joni Teräväinen and myself, but the implication would also work for other non-pretentious bounded multiplicative functions, whereas the arguments of Joni and myself rely more heavily on the specific properties of the Liouville function (in particular that for all primes ).

There is also a higher order version of the local Fourier uniformity conjecture in which the linear phase is replaced with a polynomial phase such as , or more generally a nilsequence ; as shown in my previous paper, this conjecture implies (and is in fact equivalent to, after logarithmic averaging) a logarithmically averaged version of the full Chowla conjecture (not just the two-point or three-point versions), as well as a logarithmically averaged version of the Sarnak conjecture.

The main result of the current paper is to obtain some cases of the local Fourier uniformity conjecture:

Theorem 1. The asymptotic (2) is true when for a fixed .

Previously this was known for by the work of Zhan (who in fact proved the stronger pointwise assertion for in this case). In a previous paper with Kaisa and Maksym, we also proved a weak version

of (2) for any growing arbitrarily slowly with ; this is stronger than (1) (and is in fact proven by a variant of the method) but significantly weaker than (2), because in the latter the worst-case is permitted to depend on the parameter, whereas in (3) must remain independent of .

Unfortunately, the restriction is not strong enough to give applications to Chowla-type conjectures (one would need something more like for this). However, it can still be used to control some sums that had not previously been manageable. For instance, a quick application of the circle method lets one use the above theorem to derive the asymptotic

whenever for a fixed , where is the von Mangoldt function. Amusingly, the seemingly simpler question of establishing the expected asymptotic for

is only known in the range (from the work of Zaccagnini). Thus we have a rare example of a number theory sum that becomes *easier* to control when one inserts a Liouville function!

We now give an informal description of the strategy of proof of the theorem (though for numerous technical reasons, the actual proof deviates in some respects from the description given here). If (2) failed, then for many values of we would have the lower bound

for some frequency . We informally describe this correlation between and by writing

for (informally, one should view this as asserting that “behaves like” a constant multiple of ). For sake of discussion, suppose we have this relationship for *all* , not just *many*.

As mentioned before, the main difficulty here is to understand how varies with . As it turns out, the multiplicativity properties of the Liouville function place a significant constraint on this dependence. Indeed, if we let be a fairly small prime (e.g. of size for some ), we can use the identity for the Liouville function to conclude (at least heuristically) from (4) that

for . (In practice, we will have this sort of claim for *many* primes rather than *all* primes , after using tools such as the Turán-Kubilius inequality, but we ignore this distinction for this informal argument.)

Now let and be primes comparable to some fixed range such that

and

on essentially the same range of (two nearby intervals of length ). This suggests that the frequencies and should be close to each other modulo ; in particular, one should expect the relationship

Comparing this with (5) one is led to the expectation that should depend inversely on in some sense (for instance one can check that

would solve (6) if ; by Taylor expansion, this would correspond to a global approximation of the form ). One now has a problem of an additive combinatorial flavour (or of a “local to global” flavour), namely to leverage the relation (6) to obtain global control on that resembles (7).
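
To record the heuristic chain just described in one place, here is a hedged sketch in ad hoc notation (the frequency attached to each interval and the parameter playing the role of the "global" frequency below are illustrative stand-ins of my own, not the paper's symbols):

```latex
% Hedged sketch in ad hoc notation.  Suppose that on each short interval one has a
% local correlation with a linear phase,
\lambda(n) \approx c_x\, e(\alpha_x n), \qquad x \le n \le x+H.
% Writing n = p m for a small prime p and using complete multiplicativity,
% \lambda(pm) = -\lambda(m), one expects
\lambda(m) \approx -c_x\, e(p\,\alpha_x m), \qquad x/p \le m \le (x+H)/p.
% If two primes p, q of comparable size give essentially the same interval,
% i.e. x/p \approx x'/q, then comparing the two phases suggests
p\,\alpha_x \approx q\,\alpha_{x'} \pmod{1} \quad (\text{up to small errors}),
% and the ansatz \alpha_x = T/x (up to normalising constants) is consistent with this:
p \cdot \frac{T}{x} \;\approx\; q \cdot \frac{T}{x'} \quad\text{when } \frac{x}{p} \approx \frac{x'}{q}.
% By Taylor expansion of T\log n around n = x, such a frequency corresponds to the
% global behaviour of \lambda(n) "pretending" to be n^{iT}.
```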

A key obstacle in solving (6) efficiently is the fact that one only knows that and are close modulo , rather than close on the real line. One can start resolving this problem by the Chinese remainder theorem, using the fact that we have the freedom to shift (say) by an arbitrary integer. After doing so, one can arrange matters so that one in fact has the relationship

whenever and obey (5). (This may force to become extremely large, on the order of , but this will not concern us.)

Now suppose that we have and primes such that

For every prime , we can find an such that is within of both and . Applying (8) twice we obtain

and

and thus by the triangle inequality we have

for all ; hence by the Chinese remainder theorem

In practice, in the regime that we are considering, the modulus is so huge we can effectively ignore it (in the spirit of the Lefschetz principle); so let us pretend that we in fact have

whenever and obey (9).

Now let be an integer to be chosen later, and suppose we have primes such that the difference

is small but non-zero. If is chosen so that

(where one is somewhat loose about what means) then one can find real numbers such that

for , with the convention that . We then have

which telescopes to

and thus

and hence

In particular, for each , we expect to be able to write

for some . This quantity can vary with ; but from (10) and a short calculation we see that

whenever obey (9) for some .

Now imagine a “graph” in which the vertices are elements of , and two elements are joined by an edge if (9) holds for some . Because of exponential sum estimates on , this graph turns out to essentially be an “expander” in the sense that any two vertices can be connected (in multiple ways) by fairly short paths in this graph (if one is allowed to modify one of or by ). As a consequence, we can assume that this quantity is essentially constant in (cf. the application of the ergodic theorem in this previous blog post); thus we now have

for most and some . By Taylor expansion, this implies that

on for most , thus

But this can be shown to contradict the Matomäki-Radziwill theorem (because the multiplicative function is known to be non-pretentious).

Joni Teräväinen and I have just uploaded to the arXiv our paper “The structure of correlations of multiplicative functions at almost all scales, with applications to the Chowla and Elliott conjectures“. This is a sequel to our previous paper that studied logarithmic correlations of the form

where were bounded multiplicative functions, were fixed shifts, was a quantity going off to infinity, and was a generalised limit functional. Our main technical result asserted that these correlations were necessarily the uniform limit of periodic functions . Furthermore, if (weakly) pretended to be a Dirichlet character , then the could be chosen to be –isotypic in the sense that whenever are integers with coprime to the periods of and ; otherwise, if did not weakly pretend to be any Dirichlet character , then vanished completely. This was then used to verify several cases of the logarithmically averaged Elliott and Chowla conjectures.

The purpose of this paper was to investigate the extent to which the methods could be extended to non-logarithmically averaged settings. For our main technical result, we now considered the unweighted averages

where is an additional parameter. Our main result was now as follows. If did not weakly pretend to be a twisted Dirichlet character , then converged to zero on (doubly logarithmic) average as . If instead did pretend to be such a twisted Dirichlet character, then converged on (doubly logarithmic) average to a limit of -isotypic functions . Thus, roughly speaking, one has the approximation

for most .

Informally, this says that at almost all scales (where “almost all” means “outside of a set of logarithmic density zero”), the non-logarithmic averages behave much like their logarithmic counterparts except for a possible additional twisting by an Archimedean character (which interacts with the Archimedean parameter in much the same way that the Dirichlet character interacts with the non-Archimedean parameter ). One consequence of this is that most of the recent results on the logarithmically averaged Chowla and Elliott conjectures can now be extended to their non-logarithmically averaged counterparts, so long as one excludes a set of exceptional scales of logarithmic density zero. For instance, the Chowla conjecture

is now established for either odd or equal to , so long as one excludes an exceptional set of scales.

In the logarithmically averaged setup, the main idea was to combine two very different pieces of information on . The first, coming from recent results in ergodic theory, was to show that was well approximated in some sense by a nilsequence. The second was to use the “entropy decrement argument” to obtain an approximate isotopy property of the form

for “most” primes and integers . Combining the two facts, one eventually finds that only the almost periodic components of the nilsequence are relevant.

In the current situation, each is approximated by a nilsequence, but the nilsequence can vary with (although there is some useful “Lipschitz continuity” of this nilsequence with respect to the parameter). Meanwhile, the entropy decrement argument gives an approximation basically of the form

for “most” . The arguments then proceed largely as in the logarithmically averaged case. A key lemma to handle the dependence on the new parameter is the following cohomological statement: if one has a map that is a quasimorphism in the sense that for all and some small , then there exists a real number such that for all small . This is achieved by applying a standard “cocycle averaging argument” to the cocycle .
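
The statement is of Hyers–Ulam type; here is a hedged version in generic notation (the exact domains, ranges and quantitative dependencies in the paper differ, and the symbols below are mine):

```latex
% Generic Hyers--Ulam type statement (not the paper's exact formulation).
\text{If } g : \mathbb{Z} \to \mathbb{R} \text{ obeys } |g(x+y) - g(x) - g(y)| \le \varepsilon
\ \text{ for all } x, y,
% then, telescoping g(2x) = 2 g(x) + O(\varepsilon) over dyadic scales, the limit
T := \lim_{k \to \infty} \frac{g(2^k)}{2^k}
% exists and satisfies
|g(x) - T x| \le \varepsilon \qquad \text{for all } x,
% i.e. the "cocycle" (x,y) \mapsto g(x+y) - g(x) - g(y) can be averaged away, leaving a
% genuine homomorphism x \mapsto Tx within \varepsilon of g.
```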

It would of course be desirable to not have the set of exceptional scales. We only know of one (implausible) scenario in which we can do this, namely when one has far fewer (in particular, subexponentially many) sign patterns for (say) the Liouville function than predicted by the Chowla conjecture. In this scenario (roughly analogous to the “Siegel zero” scenario in multiplicative number theory), the entropy of the Liouville sign patterns is so small that the entropy decrement argument becomes powerful enough to control all scales rather than almost all scales. On the other hand, this scenario seems to be self-defeating, in that it allows one to establish a large number of cases of the Chowla conjecture, and the full Chowla conjecture is inconsistent with having unusually few sign patterns. Still it hints that future work in this direction may need to split into “low entropy” and “high entropy” cases, in analogy to how many arguments in multiplicative number theory have to split into the “Siegel zero” and “no Siegel zero” cases.

This is a sequel to this previous blog post, in which we discussed the effect of the heat flow evolution

on the zeroes of a time-dependent family of polynomials , with a particular focus on the case when the polynomials had real zeroes. Here (inspired by some discussions I had during a recent conference on the Riemann hypothesis in Bristol) we record the analogous theory in which the polynomials instead have zeroes on a circle , with the heat flow slightly adjusted to compensate for this. As we shall discuss shortly, a key example of this situation arises when is the numerator of the zeta function of a curve.

More precisely, let be a natural number. We will say that a polynomial

of degree (so that ) obeys the *functional equation* if the are all real and

for all , thus

and

for all non-zero . This means that the zeroes of (counting multiplicity) lie in and are symmetric with respect to complex conjugation and inversion across the circle . We say that this polynomial *obeys the Riemann hypothesis* if all of its zeroes actually lie on the circle . For instance, in the case, the polynomial obeys the Riemann hypothesis if and only if .
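
For concreteness, here is one hedged way to write the setup in ad hoc notation (the coefficient normalisation below is an assumption on my part; the post's own symbols are not reproduced here):

```latex
% Hedged reconstruction in ad hoc notation:
P(z) = \sum_{j=0}^{2n} a_j z^j, \qquad a_{2n} \neq 0, \quad a_j \in \mathbb{R},
% with the functional equation taken to mean
a_{2n-j} = q^{\,j-n}\, a_j \quad (0 \le j \le 2n),
\qquad\text{equivalently}\qquad
z^{2n}\, P(q/z) = q^{n}\, P(z) \quad (z \neq 0),
% so the zeroes are symmetric under z \mapsto \bar z and under z \mapsto q/z.
% In the degree-2 case P(z) = z^2 + a z + q, the zeroes lie on |z| = \sqrt{q}
% precisely when |a| \le 2\sqrt{q}.
```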

Such polynomials arise in number theory as follows: if is a projective curve of genus over a finite field , then, as famously proven by Weil, the associated local zeta function (as defined for instance in this previous blog post) is known to take the form

where is a degree polynomial obeying both the functional equation and the Riemann hypothesis. In the case that is an elliptic curve, then and takes the form , where is the number of -points of minus . The Riemann hypothesis in this case is a famous result of Hasse.

Another key example of such polynomials arises from rescaled characteristic polynomials

of matrices in the compact symplectic group . These polynomials obey both the functional equation and the Riemann hypothesis. The Sato-Tate conjecture (in higher genus) asserts, roughly speaking, that “typical” polynomials arising from the number theoretic situation above are distributed like the rescaled characteristic polynomials (1), where is drawn uniformly from with Haar measure.

Given a polynomial of degree with coefficients

we can evolve it in time by the formula

thus for . Informally, as one increases , this evolution accentuates the effect of the extreme monomials, particularly, and at the expense of the intermediate monomials such as , and conversely as one decreases . This family of polynomials obeys the heat-type equation

In view of the results of Marcus, Spielman, and Srivastava, it is also very likely that one can interpret this flow in terms of expected characteristic polynomials involving conjugation over the compact symplectic group , and that it should also be tied to some sort of “” version of Brownian motion on this group, but we have not attempted to work this connection out in detail.
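
To get a feel for the flow numerically, here is a small Python sketch. The coefficient weights exp(t(j-n)^2) used below are an assumption on my part, chosen only to match the qualitative description above (extreme monomials accentuated as t increases, intermediate ones as t decreases) and to preserve the symmetry of the functional equation; the post's exact flow may be normalised differently, but this toy version already exhibits the convergence of the zeroes to equally spaced points on the circle described later in the post.

```python
# Toy numerical sketch (not the post's exact flow): evolve the coefficients of a degree-2n
# polynomial by the weights exp(t*(j-n)^2) -- an assumed form, chosen only to accentuate the
# extreme monomials as t grows and to preserve the symmetry of the functional equation --
# and track the zeroes with numpy.
import numpy as np

q = 5.0
# ad hoc degree-4 example obeying a_{2n-j} = q^{j-n} a_j (coefficients of z^0, ..., z^4)
a = np.array([q**2, 3.0 * q, 2.0, 3.0, 1.0])

def zeros_at_time(a, t):
    two_n = len(a) - 1
    j = np.arange(len(a))
    coeffs = a * np.exp(t * (j - two_n / 2) ** 2)   # assumed heat-type weights
    return np.roots(coeffs[::-1])                   # numpy wants the highest degree first

for t in [0.0, 0.5, 2.0, 6.0]:
    z = zeros_at_time(a, t)
    print(f"t = {t:3.1f}   |z|/sqrt(q) = {np.round(np.abs(z) / np.sqrt(q), 3)}"
          f"   phases = {np.round(np.sort(np.angle(z)), 3)}")
# As t increases the moduli approach sqrt(q) and the phases approach equal spacing,
# consistent with the limiting behaviour of the zeroes described later in the post.
```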

It is clear that if obeys the functional equation, then so does for any other time . Now we investigate the evolution of the zeroes. Suppose that at some time the zeroes of are distinct; then

From the inverse function theorem we see that for times sufficiently close to , the zeroes of continue to be distinct (and vary smoothly in ), with

Differentiating this at any not equal to any of the , we obtain

and

and

Inserting these formulae into (2) (expanding as ) and canceling some terms, we conclude that

for sufficiently close to , and not equal to . Extracting the residue at , we conclude that

which we can rearrange as

If we make the change of variables (noting that one can make depend smoothly on for sufficiently close to ), this becomes

Intuitively, this equation asserts that the phases repel each other if they are real (and attract each other if their difference is imaginary). If obeys the Riemann hypothesis, so that the are all real at time , then the Picard uniqueness theorem (applied to and its complex conjugate) shows that the are also real for sufficiently close to . If we then define the entropy functional

then the above equation becomes a gradient flow

which implies in particular that is non-increasing in time. This shows that as one evolves time forward from , there is a uniform lower bound on the separation between the phases , and hence the equation can be solved indefinitely; in particular, obeys the Riemann hypothesis for all if it does so at time . Our argument here assumed that the zeroes of were simple, but this assumption can be removed by the usual limiting argument.

For any polynomial obeying the functional equation, the rescaled polynomials converge locally uniformly to as . By Rouché’s theorem, we conclude that the zeroes of converge to the equally spaced points on the circle . Together with the symmetry properties of the zeroes, this implies in particular that obeys the Riemann hypothesis for all sufficiently large positive . In the opposite direction, when , the polynomials converge locally uniformly to , so if , of the zeroes converge to the origin and the other converge to infinity. In particular, fails the Riemann hypothesis for sufficiently large negative . Thus (if ), there must exist a real number , which we call the *de Bruijn-Newman constant* of the original polynomial , such that obeys the Riemann hypothesis for and fails the Riemann hypothesis for . The situation is a bit more complicated if vanishes; if is the first natural number such that (or equivalently, ) does not vanish, then by the above arguments one finds in the limit that of the zeroes go to the origin, go to infinity, and the remaining zeroes converge to the equally spaced points . In this case the de Bruijn-Newman constant remains finite except in the degenerate case , in which case .

For instance, consider the case when and for some real with . Then the quadratic polynomial

has zeroes

and one easily checks that these zeroes lie on the circle when , and are on the real axis otherwise. Thus in this case we have (with if ). Note how as increases to , the zeroes repel each other and eventually converge to , while as decreases to , the zeroes collide and then separate on the real axis, with one zero going to the origin and the other to infinity.
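
In ad hoc notation (assuming the quadratic in question is z^2 + a z + q with a real, which is the shape forced by the functional equation in degree two), the computation behind this example is:

```latex
% Degree-2 worked example in ad hoc notation: P(z) = z^2 + a z + q with a real.
z_{\pm} = \frac{-a \pm \sqrt{a^2 - 4q}}{2}.
% If a^2 \le 4q, the zeroes are a complex conjugate pair with z_+ z_- = q, hence
% |z_\pm| = \sqrt{q} and both zeroes lie on the circle.  If a^2 > 4q, the zeroes are
% real with product q, so as |a| grows one zero moves toward the origin and the other
% toward infinity, and the Riemann hypothesis fails.
```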

The arguments in my paper with Brad Rodgers (discussed in this previous post) indicate that for a “typical” polynomial of degree that obeys the Riemann hypothesis, the expected time to relaxation to equilibrium (in which the zeroes are equally spaced) should be comparable to , basically because the average spacing is and hence by (3) the typical velocity of the zeroes should be comparable to , and the diameter of the unit circle is comparable to , thus requiring time comparable to to reach equilibrium. Taking contrapositives, this suggests that the de Bruijn-Newman constant should typically take on values comparable to (since typically one would not expect the initial configuration of zeroes to be close to evenly spaced). I have not attempted to formalise or prove this claim, but presumably one could do some numerics (perhaps using some of the examples of given previously) to explore this further.

This is the eighth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant , continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

Significant progress has been made since the last update; by implementing the “barrier” method to establish zero free regions for by leveraging the extensive existing numerical verification of the Riemann hypothesis (which establishes zero free regions for ), we have been able to improve our upper bound on from 0.48 to 0.28. Furthermore, there appears to be a bit of room to improve the bounds further by tweaking the parameters used in the argument (we are currently using ); the most recent idea is to try to use exponential sum estimates to improve the bounds on the derivative of the approximation to that is used in the barrier method, which currently is the most computationally intensive step of the argument.

This is the seventh “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant , continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

The most recent news is that we appear to have completed the verification that is free of zeroes when and , which implies that . For very large (for instance when the quantity is at least ) this can be done analytically; for medium values of (say when is between and ) this can be done by numerically evaluating a fast approximation to and using the argument principle in a rectangle; and most recently it appears that we can also handle small values of , in part due to some new, and significantly faster, numerical ways to evaluate in this range.
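
As an aside, the "argument principle in a rectangle" step can be illustrated in a few lines of code (this is purely illustrative and not the project's actual verification code; the test function below is a made-up polynomial): the number of zeroes inside the rectangle equals the winding number of the function around the rectangle's boundary.

```python
# Illustration of zero counting in a rectangle via the argument principle: estimate the
# winding number from the accumulated change of arg f along a fine boundary mesh.
import numpy as np

def count_zeros_in_rectangle(f, x0, x1, y0, y1, pts_per_side=2000):
    """Winding-number zero count for an analytic f with no zeroes on the boundary."""
    corners = [x0 + 1j * y0, x1 + 1j * y0, x1 + 1j * y1, x0 + 1j * y1, x0 + 1j * y0]
    boundary = np.concatenate([
        np.linspace(corners[i], corners[i + 1], pts_per_side, endpoint=False)
        for i in range(4)
    ])
    vals = f(boundary)
    # total change of argument along the closed contour, unwrapped to avoid branch jumps
    dtheta = np.diff(np.unwrap(np.angle(np.append(vals, vals[:1]))))
    return int(round(dtheta.sum() / (2 * np.pi)))

# made-up test: (z - (1+1j)) * (z - (2+0.5j)) * (z + 3) has exactly two zeroes in [0,3]x[0,2]
f = lambda z: (z - (1 + 1j)) * (z - (2 + 0.5j)) * (z + 3)
print(count_zeros_in_rectangle(f, 0.0, 3.0, 0.0, 2.0))   # expect 2
```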

One obvious thing to do now is to experiment with lowering the parameters and and see what happens. However, there are two other potential ways to bound which may also be numerically feasible. One approach is based on trying to exclude zeroes of in a region of the form , and for some moderately large (this acts as a “barrier” to prevent zeroes from flowing into the region at time , assuming that they were not already there at time ). This requires significantly less numerical verification in the aspect, but more numerical verification in the aspect, so it is not yet clear whether this is a net win.

Another, rather different approach, is to study the evolution of statistics such as over time. One has fairly good control on such quantities at time zero, and their time derivative looks somewhat manageable, so one may be able to still have good control on this quantity at later times . However for this approach to work, one needs an effective version of the Riemann-von Mangoldt formula for , which at present is only available asymptotically (or at time ). This approach may be able to avoid almost all numerical computation, except for numerical verification of the Riemann hypothesis, for which we can appeal to existing literature.

Participants are also welcome to add any further summaries of the situation in the comments below.

This is the sixth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant , continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

The last two threads have been focused primarily on the test problem of showing that whenever . We have been able to prove this for most regimes of , or equivalently for most regimes of the natural number parameter . In many of these regimes, a certain explicit approximation to was used, together with a non-zero normalising factor ; see the wiki for definitions. The explicit upper bound

has been proven for certain explicit expressions (see here) depending on . In particular, if satisfies the inequality

then is non-vanishing thanks to the triangle inequality. (In principle we have an even more accurate approximation available, but it is looking like we will not need it for this test problem at least.)
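
Schematically (in hypothetical notation of my own, not the project's), the criterion being used here is just the following triangle-inequality observation:

```latex
% Generic shape of the non-vanishing criterion, in hypothetical notation: suppose the
% quantity of interest, after dividing by a non-zero normalising factor B, splits as
\frac{F}{B} = A + E, \qquad |E| \le E^{*},
% with A an explicit approximation and E^{*} an explicit error bound.  Then
|A| > E^{*} \;\Longrightarrow\; \left|\frac{F}{B}\right| \ge |A| - |E| \ge |A| - E^{*} > 0,
% so F is non-vanishing wherever the displayed inequality between the approximation
% and the error bound holds.
```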

We have explicit upper bounds on , , ; see this wiki page for details. They are tabulated in the range here. For , the upper bound for is monotone decreasing, and is in particular bounded by , while and are known to be bounded by and respectively (see here).

Meanwhile, the quantity can be lower bounded by

for certain explicit coefficients and an explicit complex number . Using the triangle inequality to lower bound this by

we can obtain a lower bound of for , which settles the test problem in this regime. One can get more efficient lower bounds by multiplying both Dirichlet series by a suitable Euler product mollifier; we have found for to be good choices to get a variety of further lower bounds depending only on ; see this table and this wiki page. Comparing this against our tabulated upper bounds for the error terms, we can handle the range .

In the range , we have been able to obtain a suitable lower bound (where exceeds the upper bound for ) by numerically evaluating at a mesh of points for each choice of , with the mesh spacing being adaptive and determined by and an upper bound for the derivative of ; the data is available here.
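
The adaptive-mesh idea is simple enough to sketch in a few lines (illustrative only, with a made-up test function; the project's actual code and bounds live in the linked data): if we know an upper bound D for the derivative, then a value |f(x)| = L + s certifies |f| >= L on the forward interval [x, x + s/D], and we take that as the next step size.

```python
# Illustrative sketch of adaptive-mesh certification (not the project's code).
import numpy as np

def certify_lower_bound(f, a, b, L, deriv_bound, min_step=1e-6):
    """Try to certify |f| >= L on [a, b], given |f'| <= deriv_bound there."""
    x = a
    while x < b:
        slack = abs(f(x)) - L
        if slack <= deriv_bound * min_step:
            return False                       # mesh would need to be finer than min_step
        x += slack / deriv_bound               # |f| can drop by at most deriv_bound * step
    return True

# made-up test function whose modulus stays above 0.5 on [0, 10], with |f'| <= 4
f = lambda x: 2.0 + np.cos(3.0 * x) + 0.3j * np.sin(x)
print(certify_lower_bound(f, 0.0, 10.0, L=0.5, deriv_bound=4.0))   # expect True
```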

This leaves the final range (roughly corresponding to ). Here we can numerically evaluate to high accuracy at a fine mesh (see the data here), but to fill in the mesh we need good upper bounds on . It seems that we can get reasonable estimates using some contour shifting from the original definition of (see here). We are close to finishing off this remaining region and thus solving the toy problem.

Beyond this, we need to figure out how to show that for as well. General theory lets one do this for , leaving the region . The analytic theory that handles and should also handle this region; for presumably the argument principle will become relevant.

The full argument also needs to be streamlined and organised; right now it sprawls over many wiki pages and github code files. (A very preliminary writeup attempt has begun here). We should also see if there is much hope of extending the methods to push much beyond the bound of that we would get from the above calculations. This would also be a good time to start discussing whether to move to the writing phase of the project, or whether there are still fruitful research directions for the project to explore.

Participants are also welcome to add any further summaries of the situation in the comments below.

This is the fifth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant , continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

We have almost finished off the test problem of showing that whenever . We have two useful approximations for , which we have denoted and , and a normalising quantity that is asymptotically equal to the above expressions; see the wiki page for definitions. In practice, the approximation seems to be accurate within about one or two significant figures, whilst the approximation is accurate to about three or four. We have an effective upper bound

where the expressions are quite small in practice ( is typically about two orders of magnitude smaller than the main term once is moderately large, and the error terms are even smaller). See this page for details. In principle we could also obtain an effective upper bound for (the term would be replaced by something smaller).

The ratio takes the form of a difference of two Dirichlet series, where is a phase whose value is explicit but perhaps not terribly important, and the coefficients are explicit and relatively simple ( is , and is approximately ). To bound this away from zero, we have found it advantageous to mollify this difference by multiplying by an Euler product to cancel much of the initial oscillation; also one can take advantage of the fact that the are real and the are (approximately) real. See this page for details. The upshot is that we seem to be getting good lower bounds for the size of this difference of Dirichlet series starting from about or so. The error terms are already quite small by this stage, so we should soon be able to rigorously keep from vanishing at this point. We also have a scheme for lower bounding the difference of Dirichlet series below this range, though it is not clear at present how far we can continue this before the error terms become unmanageable. For very small we may have to explore some faster ways to compute the expression , which is still difficult to compute directly with high accuracy. One will also need to bound the somewhat unwieldy expressions by something more manageable. For instance, right now these quantities depend on the continuous variable ; it would be preferable to have a quantity that depends only on the parameter , as this could be computed numerically for all in the remaining range of interest quite quickly.

As before, any other mathematical discussion related to the project is also welcome here, for instance any summaries of previous discussion that was not covered in this post.

This is the fourth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant , continuing https://terrytao.wordpress.com/2018/01/24/polymath-proposal-upper-bounding-the-de-bruijn-newman-constant/. Progress will be summarised at this Polymath wiki page.

We are getting closer to finishing off the following test problem: can one show that whenever , ? This would morally show that . A wiki page for this problem has now been created here. We have obtained a number of approximations to (see wiki page), though numerical evidence indicates that the approximations are all very close to each other. (Many of these approximations come with a correction term , but thus far it seems that we may be able to avoid having to use this refinement to the approximations.) The effective approximation also comes with an effective error bound

for some explicit (but somewhat messy) error terms : see this wiki page for details. The original approximations can be considered deprecated at this point in favour of the (slightly more complicated) approximation ; the approximation is a simplified version of which is not quite as accurate but might be useful for testing purposes.

It is convenient to normalise everything by an explicit non-zero factor . Asymptotically, converges to 1; numerically, it appears that its magnitude (and also its real part) stays roughly between 0.4 and 3 in the range , and we seem to be able to keep it (or at least the toy counterpart ) away from zero starting from about (here it seems that there is a useful trick of multiplying by Euler-type factors like to cancel off some of the oscillation). The bounds on the error also seem to be of size about 0.1 or better in these ranges. So we seem to be on track to be able to rigorously eliminate zeroes starting from about or so. We have not discussed too much what to do with the small values of ; at some point our effective error bounds will become unusable, and we may have to find some faster ways to compute .

In addition to this main direction of inquiry, there have been additional discussions on the dynamics of zeroes, and some numerical investigations of the behaviour of Lehmer pairs under heat flow. Contributors are welcome to summarise any findings from these discussions from previous threads (or on any other related topic, e.g. improvements in the code) in the comments below.

Kevin Ford, Sergei Konyagin, James Maynard, Carl Pomerance, and I have uploaded to the arXiv our paper “Long gaps in sieved sets”, submitted to J. Europ. Math. Soc.

This paper originated from the MSRI program in analytic number theory last year, and was centred around variants of the question of finding large gaps between primes. As discussed for instance in this previous post, it is now known that within the set of primes , one can find infinitely many adjacent elements whose gap obeys a lower bound of the form

where denotes the -fold iterated logarithm. This compares with the trivial bound of that one can obtain from the prime number theorem and the pigeonhole principle. Several years ago, Pomerance posed the question of whether analogous improvements to the trivial bound can be obtained for such sets as

Here there is the obvious initial issue that this set is not even known to be infinite (this is the fourth Landau problem), but let us assume for the sake of discussion that this set is indeed infinite, so that we have an infinite number of gaps to speak of. Standard sieve theory techniques give upper bounds for the density of that are comparable (up to an absolute constant) to the prime number theorem bounds for , so again we can obtain a trivial bound of for the gaps of . In this paper we improve this to

for an absolute constant ; this is not as strong as the corresponding bound for , but still improves over the trivial bound. In fact we can handle more general “sifted sets” than just . Recall from the sieve of Eratosthenes that the elements of in, say, the interval can be obtained by removing from one residue class modulo for each prime up to , namely the class mod . In a similar vein, the elements of in can be obtained by removing, for each prime up to , zero, one, or two residue classes modulo , depending on whether is a quadratic residue modulo . On the average, one residue class will be removed (this is a very basic case of the Chebotarev density theorem), so this sieving system is “one-dimensional on the average”. Roughly speaking, our arguments apply to any other set of numbers arising from a sieving system that is one-dimensional on average. (One can consider other dimensions also, but unfortunately our methods seem to give results that are worse than a trivial bound when the dimension is less than or greater than one.)
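
A tiny numerical check of the "one class on average" statement (not from the paper; the cut-offs below are arbitrary): for an odd prime p, the removed classes are the solutions of n^2 + 1 = 0 mod p, of which there are two when p = 1 mod 4 and none when p = 3 mod 4, plus the single class n = 1 mod 2 for p = 2.

```python
# Illustration only (ad hoc cut-offs), using sympy for the prime ranges.
import sympy

def removed_classes(p):
    if p == 2:
        return [1]                               # n^2 + 1 is even exactly when n is odd
    return [n for n in range(p) if (n * n + 1) % p == 0]

primes = list(sympy.primerange(2, 200))
avg = sum(len(removed_classes(p)) for p in primes) / len(primes)
print(f"average number of removed classes over primes below 200: {avg:.2f}")   # close to 1

# sieve a short interval by these classes for small primes and count the survivors
x, N, z = 10**6, 5000, 100
small_primes = list(sympy.primerange(2, z))
survivors = [n for n in range(x, x + N)
             if all(n % p not in removed_classes(p) for p in small_primes)]
print(f"{len(survivors)} survivors out of {N}")
```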

The standard “Erdős-Rankin” method for constructing long gaps between primes proceeds by trying to line up some residue classes modulo small primes so that they collectively occupy a long interval. Key tools in doing so are the smooth number estimates of de Bruijn and others, which among other things assert that if one removes from an interval such as all the residue classes mod for between and for some fixed , then the set of survivors has exceptionally small density (roughly of the order of , with the precise density given by the Dickman function), in marked contrast to the situation in which one randomly removes one residue class for each such prime , in which the density is more like . One generally exploits this phenomenon to sieve out almost all the elements of a long interval using some of the primes available, and then to use the remaining primes to cover up the remaining elements that have not already been sifted out. In the more recent work on this problem, advanced combinatorial tools such as hypergraph covering lemmas are used for the latter task.
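
The contrast between structured and random removal of classes can also be seen numerically on a toy scale (again not from the paper, and the sizes below are far too small to show the true asymptotics): removing the class 0 mod p for every prime p in (y, x] leaves exactly the y-smooth numbers in [1, x], which are far scarcer than what one random class per prime leaves behind.

```python
# Toy comparison (ad hoc sizes): structured removal (class 0 mod p) versus one random
# class mod p, for every prime p in (y, x], applied to the integers in [1, x].
import random
import sympy

x, y = 200_000, 20
primes = list(sympy.primerange(y + 1, x + 1))

def survivors_after_removal(classes):
    """Count n in [1, x] avoiding n % p == classes[p] for every prime p."""
    removed = bytearray(x + 1)
    for p, c in classes.items():
        start = c if c > 0 else p        # stay within [1, x]
        for n in range(start, x + 1, p):
            removed[n] = 1
    return sum(1 for n in range(1, x + 1) if not removed[n])

structured = survivors_after_removal({p: 0 for p in primes})   # survivors = y-smooth numbers
random.seed(0)
randomised = survivors_after_removal({p: random.randrange(p) for p in primes})

print(f"survivors removing 0 mod p (y-smooth numbers): {structured}")
print(f"survivors removing a random class mod p:       {randomised}")
```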

In the case of , there does not appear to be any analogue of smooth numbers, in the sense that there is no obvious way to arrange the residue classes so that they have significantly fewer survivors than a random arrangement. Instead we adopt the following semi-random strategy to cover an interval by residue classes. Firstly, we randomly remove residue classes for primes up to some intermediate threshold (smaller than by a logarithmic factor), leaving behind a preliminary sifted set . Then, for each prime between and another intermediate threshold , we remove a residue class mod that maximises (or nearly maximises) its intersection with . This ends up reducing the number of survivors to be significantly below what one would achieve if one selects residue classes randomly, particularly if one also uses the hypergraph covering lemma from our previous paper. Finally, we cover each of the remaining survivors by a residue class from a remaining available prime.

This is the second “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant , continuing this previous thread. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

We now have the following proposition (see this page for a proof sketch) that looks like it can give a numerically feasible approach to bound :

Proposition 1. Suppose that one has parameters obeying the following properties:

- All the zeroes of with are real.
- There are no zeroes with in the region .
- There are no zeroes with and .
Then one has .

The first hypothesis is already known for up to about (we should find out exactly what we can reach here). Preliminary calculations suggest that we can obtain the third item provided that . The second hypothesis requires good numerical calculation for , to which we now turn.

The initial definition of is given by the formula

where

This formula has proven numerically computable to acceptable error up until about the first hundred zeroes of , but degrades after that, and so other exact or approximate formulae for are needed. One possible exact formula that could be useful is

where

and

and can be chosen arbitrarily. We are still trying to see if this can be implemented numerically to give better accuracy than the previous formula.

It seems particularly promising to develop a generalisation of the Riemann-Siegel approximate functional equation for . Preliminary computations suggest in particular that we have the approximation

where

Some very preliminary numerics suggest that this formula is reasonably accurate even for moderate values of , though further numerical verification is needed. As a proof of concept, one could take this approximation as exact for the purposes of seeing what ranges of one can feasibly compute with (and for extremely large values of , we will presumably have to introduce some version of the Odlyzko-Schönhage algorithm). Of course, to obtain a rigorous result, we will eventually need a rigorous version of this formula with explicit error bounds. It may also be necessary to add more terms to the approximation to reduce the size of the error.

Sujit Nair has kindly summarised for me the current state of affairs with the numerics as follows:

—

- We need a real milestone and work backward to set up intermediate goals. This will definitely help bring in focus!
- So far, we have some utilities to compute zeroes of with a nonlinear solver which uses roots of as an initial condition. The solver is a wrapper around MINPACK’s implementation of Powell’s method. There is some room for optimization. For example, we aren’t providing the solver with the analytical Jacobian, which would speed up the computation and increase accuracy.
- We have some results in the output folder which contains the first 1000 roots of for some small values of , etc. They need some more organization and visualization.

We have a decent initial start but we have some ways to go. Moving forward, here is my proposition for some areas of focus. We should expand and prioritize after some open discussion.

- *Short term:* Optimize the existing framework and aim to have the first million zeros of (for a reasonable range of ) and the corresponding plots. With better engineering practice and discipline, I am confident we can get to the few-tens-of-millions range. Some things which will help include parallelization, iterative approaches (using zeroes of to compute zeroes of ), etc. (see the sketch after this list).
- *Medium term:* We need to explore better ways to represent the zeros and compute them. An analogy is the computation of Riemann zeroes up to height , which is done by computing the sign changes of (page 119 of Edwards) and by exploiting the speed-up from the Riemann-Siegel formulation (over Euler-Maclaurin). For larger values of , I am not sure the root-solver-based approach is going to work to understand the gaps between zeroes.
- *Long term:* We also need a better understanding of the errors involved in the computation — truncation, hardware/software, etc.
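
To illustrate the kind of continuation-in-t workflow mentioned in the summary above (this is a minimal stand-in, not the project's code: the function H below is made up, and the real computation targets a much more complicated special function), one can feed the zero found at the previous value of t into SciPy's MINPACK-backed solver (method='hybr', Powell's hybrid method):

```python
# Hedged sketch of zero continuation in t (hypothetical test function, not the project's).
import numpy as np
from scipy.optimize import root

def H(t, z):
    # made-up stand-in: at t = 0 its zeroes are at z = +/- 2i, and they drift with t
    return z ** 2 + 4.0 + t * z + 0.5 * t

def find_zero(t, z_guess):
    def F(v):                      # split into real/imaginary parts for the real solver
        w = H(t, v[0] + 1j * v[1])
        return [w.real, w.imag]
    sol = root(F, [z_guess.real, z_guess.imag], method="hybr")
    return sol.x[0] + 1j * sol.x[1]

z = 2j                              # a zero at t = 0
for t in np.linspace(0.0, 1.0, 11):
    z = find_zero(t, z)             # previous zero as the initial condition
    print(f"t = {t:.1f}   zero ~ {z:.4f}")
```

The optimisation mentioned in the summary (supplying an analytic Jacobian) would correspond to passing a jac argument to the same root call.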
