You are currently browsing the category archive for the ‘math.NT’ category.

This is the seventh thread for the Polymath8b project to obtain new bounds for the quantity

either for small values of (in particular ) or asymptotically as . The previous thread may be found here. The currently best known bounds on can be found at the wiki page.

The current focus is on improving the upper bound on under the assumption of the generalised Elliott-Halberstam conjecture (GEH) from to . Very recently, we have been able to exploit GEH more fully, leading to a promising new expansion of the sieve support region. The problem now reduces to the following:

Problem 1. Does there exist a (not necessarily convex) polytope with quantities , and a non-trivial square-integrable function supported on such that

- when ;
- when ;
- when ;
and such that we have the inequality

An affirmative answer to this question will imply on GEH. We are “within two percent” of this claim; we cannot quite reach yet, but have got as far as . However, we have not yet fully optimised in the above problem. In particular, the simplex

is now available, and should lead to some noticeable improvement in the numerology.

There is also a *very* slim chance that the twin prime conjecture is now provable on GEH. It would require an affirmative solution to the following problem:

Problem 2. Does there exist a (not necessarily convex) polytope with quantities , and a non-trivial square-integrable function supported on such that

- when ;
- when ;
and such that we have the inequality

We suspect that the answer to this question is negative, but have not formally ruled it out yet.

For the rest of this post, I will justify why positive answers to these sorts of variational problems are sufficient to get bounds on (or more generally ).

This is the fourth thread for the Polymath8b project to obtain new bounds for the quantity

either for small values of (in particular ) or asymptotically as . The previous thread may be found here. The currently best known bounds on are:

- (Maynard) Assuming the Elliott-Halberstam conjecture, .
- (Polymath8b, tentative) . Assuming Elliott-Halberstam, .
- (Polymath8b, tentative) . Assuming Elliott-Halberstam, .
- (Polymath8b, tentative) . (Presumably a comparable bound also holds for on Elliott-Halberstam, but this has not been computed.)
- (Polymath8b) for sufficiently large . Assuming Elliott-Halberstam, for sufficiently large .

While the bound on the Elliott-Halberstam conjecture has not improved since the start of the Polymath8b project, there is reason to hope that it will soon fall, hopefully to . This is because we have begun to exploit more fully the fact that when using “multidimensional Selberg-GPY” sieves of the form

with

where , it is not necessary for the smooth function to be supported on the simplex

but can in fact be allowed to range on larger sets. First of all, may instead be supported on the slightly larger polytope

However, it turns out that more is true: given a sufficiently general version of the Elliott-Halberstam conjecture at the given value of , one may work with functions supported on more general domains , so long as the sumset is contained in the non-convex region

and also provided that the restriction

More precisely, if is a smooth function, not identically zero, with the above properties for some , and the ratio

is larger than , then the claim holds (assuming ), and in particular .

I’ll explain why one can do this below the fold. Taking this for granted, we can rewrite this criterion in terms of the mixed derivative , the upshot being that if one can find a smooth function supported on that obeys the vanishing marginal conditions

and

then holds. (To equate these two formulations, it is convenient to assume that is a downset, in the sense that whenever , the entire box lies in , but one can easily enlarge to be a downset without destroying the containment of in the non-convex region (1).) One initially requires to be smooth, but a limiting argument allows one to relax to bounded measurable . (To approximate a rough by a smooth while retaining the required moment conditions, one can first apply a slight dilation and translation so that the marginals of are supported on a slightly smaller version of the simplex , and then convolve by a smooth approximation to the identity to make smooth, while keeping the marginals supported on .)

We are now exploring various choices of to work with, including the prism

and the symmetric region

By suitably subdividing these regions into polytopes, and working with piecewise polynomial functions that are polynomial of a specified degree on each subpolytope, one can phrase the problem of optimising (4) as a quadratic program, which we have managed to work with for . Extending this program to , there is a decent chance that we will be able to obtain on EH.

We have also been able to numerically optimise quite accurately for medium values of (e.g. ), which has led to improved values of without EH. For large , we now also have the asymptotic with explicit error terms (details here) which have allowed us to slightly improve the numerology, and also to get explicit numerology for the first time.

Mertens’ theorems are a set of classical estimates concerning the asymptotic distribution of the prime numbers:

Theorem 1 (Mertens’ theorems). In the asymptotic limit , we have where is the Euler-Mascheroni constant, defined by requiring that

The third theorem (3) is usually stated in exponentiated form

but in the logarithmic form (3) we see that it is strictly stronger than (2), in view of the asymptotic .

Remarkably, these theorems can be proven without the assistance of the prime number theorem

which was proven about two decades after Mertens’ work. (But one can certainly use versions of the prime number theorem with good error term, together with summation by parts, to obtain good estimates on the various errors in Mertens’ theorems.) Roughly speaking, the reason for this is that Mertens’ theorems only require control on the Riemann zeta function in the neighbourhood of the pole at , whereas (as discussed in this previous post) the prime number theorem requires control on the zeta function on (a neighbourhood of) the line . Specifically, Mertens’ theorem is ultimately deduced from the Euler product formula

valid in the region (which is ultimately a Fourier-Dirichlet transform of the fundamental theorem of arithmetic), and the following crude asymptotics:

Proposition 2 (Simple pole). For sufficiently close to with , we have

*Proof:* For as in the proposition, we have for any natural number and , and hence

Summing in and using the identity , we obtain the first claim. Similarly, we have

and by summing in and using the identity (the derivative of the previous identity) we obtain the claim.
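The simple pole is easy to test numerically (this is just a sanity check, not part of the argument): as s -> 1+, the difference zeta(s) - 1/(s-1) tends to the Euler-Mascheroni constant, which is the refinement alluded to later in this post. A short Python sketch, approximating zeta(s) by a partial sum with an Euler-Maclaurin tail correction:

```python
def zeta_minus_pole(s, N=100000):
    """Approximate zeta(s) - 1/(s-1) for s slightly above 1, using a
    partial sum plus an Euler-Maclaurin correction for the tail."""
    partial = sum(n ** (-s) for n in range(1, N + 1))
    # Tail: sum_{n>N} n^{-s} ~ N^{1-s}/(s-1) - N^{-s}/2 + O(s*N^{-s-1})
    tail = N ** (1 - s) / (s - 1) - N ** (-s) / 2
    return partial + tail - 1 / (s - 1)

# As s -> 1+, this tends to gamma = 0.5772...
print(zeta_minus_pole(1.01))  # close to 0.5772
```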

The first two of Mertens’ theorems (1), (2) are relatively easy to prove, and imply the third theorem (3) except with replaced by an unspecified absolute constant. To get the specific constant requires a little bit of additional effort. From (4), one might expect that the appearance of arises from the refinement

that one can obtain to (6). However, it turns out that the connection is not so much with the zeta function, but with the Gamma function, and specifically with the identity (which is of course related to (7) through the functional equation for zeta, but can be proven without any reference to zeta functions). More specifically, we have the following asymptotic for the exponential integral:

Proposition 3 (Exponential integral asymptotics). For sufficiently small , one has

A routine integration by parts shows that this asymptotic is equivalent to the identity

which is the identity mentioned previously.

*Proof:* We start by using the identity to express the harmonic series as

or on summing the geometric series

Since , we thus have

making the change of variables , this becomes

As , converges pointwise to and is pointwise dominated by . Taking limits as using dominated convergence, we conclude that

or equivalently

The claim then follows by bounding the portion of the integral on the left-hand side.

Below the fold I would like to record how Proposition 2 and Proposition 3 imply Theorem 1; the computations are utterly standard, and can be found in most analytic number theory texts, but I wanted to write them down for my own benefit (I always keep forgetting, in particular, how the third of Mertens’ theorems is proven).

This is the third thread for the Polymath8b project to obtain new bounds for the quantity

either for small values of (in particular ) or asymptotically as . The previous thread may be found here. The currently best known bounds on are:

- (Maynard) Assuming the Elliott-Halberstam conjecture, .
- (Polymath8b, tentative) . Assuming Elliott-Halberstam, .
- (Polymath8b, tentative) . Assuming Elliott-Halberstam, .
- (Polymath8b) for sufficiently large . Assuming Elliott-Halberstam, for sufficiently large .

Much of the current focus of the Polymath8b project is on the quantity

where ranges over square-integrable functions on the simplex

with being the quadratic forms

and

It was shown by Maynard that one has whenever , where is the minimal diameter of an admissible -tuple. As discussed in the previous post, we have slight improvements to this implication, but they are currently difficult to implement, due to the need to perform high-dimensional integration. The quantity does seem however to be close to the theoretical limit of what the Selberg sieve method can achieve for implications of this type (at the Bombieri-Vinogradov level of distribution, at least); it seems of interest to explore more general sieves, although we have not yet made much progress in this direction.

The best asymptotic bounds for we have are

which we prove below the fold. The upper bound holds for all ; the lower bound is only valid for sufficiently large , and gives the upper bound on Elliott-Halberstam.

For small , the upper bound is quite competitive; for instance, it provides the upper bound in the best values

and

we have for and . The situation is a little less clear for medium values of ; for instance, we have

and so it is not yet clear whether (which would imply ). See this wiki page for some further upper and lower bounds on .

The best lower bounds are not obtained through the asymptotic analysis, but rather through quadratic programming (extending the original method of Maynard). This has given significant numerical improvements to our best bounds (in particular lowering the bound from to ), but we have not yet been able to combine this method with the other potential improvements (enlarging the simplex, using MPZ distributional estimates, and exploiting upper bounds on two-point correlations) due to the computational difficulty involved.
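To illustrate the quadratic programming method in miniature (a toy computation, not the actual high-dimensional programs used in the project): for k = 2, restricting to the linear symmetric family F(t1,t2) = a + b(t1+t2) on the simplex t1+t2 <= 1 turns the Rayleigh quotient 2*J(F)/I(F) into a ratio of two explicit quadratic forms in (a,b), whose maximum is the top root of a 2x2 generalized eigenvalue problem. The coefficients below were worked out by exact integration by hand:

```python
import math

# I(F)  = double integral of F^2 over the simplex
#       = (1/2)a^2 + (2/3)ab + (1/4)b^2
# 2J(F) = 2 * int_0^1 (int_0^{1-t2} F dt1)^2 dt2   (by symmetry J_1 = J_2)
#       = (2/3)a^2 + (5/6)ab + (4/15)b^2
A = [[2/3, 5/12], [5/12, 4/15]]   # numerator form (off-diagonal halved)
B = [[1/2, 1/3], [1/3, 1/4]]      # denominator form

# Maximising x^T A x / x^T B x means solving det(A - lam*B) = 0,
# a quadratic p2*lam^2 + p1*lam + p0 = 0; take the larger root.
p2 = B[0][0] * B[1][1] - B[0][1] ** 2
p1 = 2 * A[0][1] * B[0][1] - A[0][0] * B[1][1] - A[1][1] * B[0][0]
p0 = A[0][0] * A[1][1] - A[0][1] ** 2
lam = (-p1 + math.sqrt(p1 * p1 - 4 * p2 * p0)) / (2 * p2)
print(lam)  # ~ 1.383
```

Since any admissible test function gives a lower bound for the supremum, the value 1.383... is a rigorous lower bound for the k = 2 quantity; enlarging the polynomial family, as in the actual computations, pushes this up further.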

This is the second thread for the Polymath8b project to obtain new bounds for the quantity

either for small values of (in particular ) or asymptotically as . The previous thread may be found here. The currently best known bounds on are:

- (Maynard) .
- (Polymath8b, tentative) .
- (Polymath8b, tentative) for sufficiently large .
- (Maynard) Assuming the Elliott-Halberstam conjecture, , , and .

Following the strategy of Maynard, the bounds on proceed by combining four ingredients:

- Distribution estimates or for the primes (or related objects);
- Bounds for the minimal diameter of an admissible -tuple;
- Lower bounds for the optimal value to a certain variational problem;
- Sieve-theoretic arguments to convert the previous three ingredients into a bound on .

Accordingly, the most natural routes to improve the bounds on are to improve one or more of the above four ingredients.

Ingredient 1 was studied intensively in Polymath8a. The following results are known or conjectured (see the Polymath8a paper for notation and proofs):

- (Bombieri-Vinogradov) is true for all .
- (Polymath8a) is true for .
- (Polymath8a, tentative) is true for .
- (Elliott-Halberstam conjecture) is true for all .

Ingredient 2 was also studied intensively in Polymath8a, and is more or less a solved problem for the values of of interest (with exact values of for , and quite good upper bounds for for , available at this page). So the main focus currently is on improving Ingredients 3 and 4.

For Ingredient 3, the basic variational problem is to understand the quantity

for bounded measurable functions, not identically zero, on the simplex

with being the quadratic forms

and

Equivalently, one has

where is the positive semi-definite bounded self-adjoint operator

so is the operator norm of . Another interpretation of is that the probability that a rook moving randomly in the unit cube stays in the simplex for moves is asymptotically .

We now have a fairly good asymptotic understanding of , with the bounds

holding for sufficiently large . There is however still room to tighten the bounds on for small ; I’ll summarise some of the ideas discussed so far below the fold.

For Ingredient 4, the basic tool is this:

Thus, for instance, it is known that and , and this together with the Bombieri-Vinogradov inequality gives . This result is proven in Maynard’s paper and an alternate proof is also given in the previous blog post.

We have a number of ways to relax the hypotheses of this result, which we also summarise below the fold.

For each natural number , let denote the quantity

where denotes the prime. In other words, is the least quantity such that there are infinitely many intervals of length that contain or more primes. Thus, for instance, the twin prime conjecture is equivalent to the assertion that , and the prime tuples conjecture would imply that is equal to the diameter of the narrowest admissible tuple of cardinality (thus we conjecturally have , , , , , and so forth; see this web page for further continuation of this sequence).

In 2004, Goldston, Pintz, and Yildirim established the bound conditional on the Elliott-Halberstam conjecture, which remains unproven. However, no unconditional finiteness of was obtained (although they famously obtained the non-trivial bound ), and even on the Elliott-Halberstam conjecture no finiteness result on the higher was obtained either (although they were able to show on this conjecture). In the recent breakthrough of Zhang, the unconditional bound was obtained, by establishing a weak partial version of the Elliott-Halberstam conjecture; by refining these methods, the Polymath8 project (which I suppose we could retroactively call the Polymath8a project) then lowered this bound to .

With the very recent preprint of James Maynard, we have the following further substantial improvements:

Theorem 1 (Maynard’s theorem). Unconditionally, we have the following bounds:

- .
- for an absolute constant and any .
If one assumes the Elliott-Halberstam conjecture, we have the following improved bounds:

- .
- .
- for an absolute constant and any .

The final conclusion on Elliott-Halberstam is not explicitly stated in Maynard’s paper, but follows easily from his methods, as I will describe below the fold. (At around the same time as Maynard’s work, I had also begun a similar set of calculations concerning , but was only able to obtain the slightly weaker bound unconditionally.) In the converse direction, the prime tuples conjecture implies that should be comparable to . Granville has also obtained the slightly weaker explicit bound for any by a slight modification of Maynard’s argument.

The arguments of Maynard avoid using the difficult partial results on (weakened forms of) the Elliott-Halberstam conjecture that were established by Zhang and then refined by Polymath8; instead, the main input is the classical Bombieri-Vinogradov theorem, combined with a sieve that is closer in spirit to an older sieve of Goldston and Yildirim than to the sieve used later by Goldston, Pintz, and Yildirim, on which almost all subsequent work is based.

The aim of the Polymath8b project is to obtain improved bounds on , and higher values of , either conditional on the Elliott-Halberstam conjecture or unconditional. The likeliest routes for doing this are by optimising Maynard’s arguments and/or combining them with some of the results from the Polymath8a project. This post is intended to be the first research thread for that purpose. To start the ball rolling, I am going to give below a presentation of Maynard’s results, with some minor technical differences (most significantly, I am using the Goldston-Pintz-Yildirim variant of the Selberg sieve, rather than the traditional “elementary Selberg sieve” that is used by Maynard (and also in the Polymath8 project), although it seems that the numerology obtained by both sieves is essentially the same). An alternate exposition of Maynard’s work has just been completed also by Andrew Granville.

I’ve just uploaded to the arXiv my article “Algebraic combinatorial geometry: the polynomial method in arithmetic combinatorics, incidence combinatorics, and number theory“, submitted to the new journal “EMS surveys in the mathematical sciences“. This is the first draft of a survey article on the polynomial method – a technique in combinatorics and number theory for controlling a relevant set of points by comparing it with the zero set of a suitably chosen polynomial, and then using tools from algebraic geometry (e.g. Bezout’s theorem) on that zero set. As such, the method combines algebraic geometry with combinatorial geometry, and could be viewed as the philosophy of a combined field which I dub “algebraic combinatorial geometry”. There is also an important extension of this method when one is working over the reals, in which methods from algebraic topology (e.g. the ham sandwich theorem and its generalisation to polynomials), and not just algebraic geometry, also come into play.

The polynomial method has been used independently many times in mathematics; for instance, it plays a key role in the proof of Baker’s theorem in transcendence theory, or Stepanov’s method in giving an elementary proof of the Riemann hypothesis for curves over finite fields; in combinatorics, the Nullstellensatz of Alon is another relatively early use of the polynomial method. More recently, it underlies Dvir’s proof of the Kakeya conjecture over finite fields and Guth and Katz’s near-complete solution to the Erdos distance problem in the plane, and can be used to give a short proof of the Szemeredi-Trotter theorem. One of the aims of this survey is to try to present all of these disparate applications of the polynomial method in a somewhat unified context; my hope is that there will eventually be a systematic foundation for algebraic combinatorial geometry which naturally contains all of these different instances of the polynomial method (and also suggests new instances to explore); but the field is unfortunately not at that stage of maturity yet.

This is something of a first draft, so comments and suggestions are even more welcome than usual. (For instance, I have already had my attention drawn to some additional uses of the polynomial method in the literature that I was not previously aware of.)

Define a *partition* of to be a finite or infinite multiset of real numbers in the interval (that is, an unordered set of real numbers in , possibly with multiplicity) whose total sum is : . For instance, is a partition of . Such partitions arise naturally when trying to decompose a large object into smaller ones, for instance:

- (Prime factorisation) Given a natural number , one can decompose it into prime factors (counting multiplicity), and then the multiset
is a partition of .

- (Cycle decomposition) Given a permutation on labels , one can decompose into cycles , and then the multiset
is a partition of .

- (Normalisation) Given a multiset of positive real numbers whose sum is finite and non-zero, the multiset
is a partition of .
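The first of these examples is easy to experiment with; presumably the multiset attached to n is {log p / log n : p a prime factor of n, with multiplicity}, which is the natural normalisation making the parts sum to 1. A short sketch using trial division (adequate for small n):

```python
import math

def prime_partition(n):
    """Return the partition of 1 attached to n: the multiset of
    log p / log n over the prime factors p of n, with multiplicity."""
    parts, m, d = [], n, 2
    while d * d <= m:
        while m % d == 0:
            parts.append(math.log(d) / math.log(n))
            m //= d
        d += 1
    if m > 1:  # leftover prime factor
        parts.append(math.log(m) / math.log(n))
    return sorted(parts, reverse=True)

p = prime_partition(120)   # 120 = 2^3 * 3 * 5, so five parts
print(p, sum(p))           # the parts sum to 1
```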

In the spirit of the universality phenomenon, one can ask what is the natural distribution for what a “typical” partition should look like; thus one seeks a natural probability distribution on the space of all partitions, analogous to (say) the gaussian distributions on the real line, or GUE distributions on point processes on the line, and so forth. It turns out that there is one natural such distribution which is related to all three examples above, known as the *Poisson-Dirichlet distribution*. To describe this distribution, we first have to deal with the problem that it is not immediately obvious how to cleanly parameterise the space of partitions, given that the cardinality of the partition can be finite or infinite, that multiplicity is allowed, and that we would like to identify two partitions that are permutations of each other.

One way to proceed is to view a random partition as a type of point process on the interval , with the constraint that , in which case one can study statistics such as the counting functions

(where the cardinality here counts multiplicity). This can certainly be done, although in the case of the Poisson-Dirichlet process, the formulae for the joint distribution of such counting functions are moderately complicated. Another way to proceed is to order the elements of in decreasing order

with the convention that one pads the sequence by an infinite number of zeroes if is finite; this identifies the space of partitions with an infinite dimensional simplex

However, it turns out that the process of ordering the elements is not “smooth” (basically because functions such as and are not smooth) and the formulae for the joint distribution in the case of the Poisson-Dirichlet process are again complicated.

It turns out that there is a better (or at least “smoother”) way to enumerate the elements of a partition than the ordered method, although it is random rather than deterministic. This procedure (which I learned from this paper of Donnelly and Grimmett) works as follows.

- Given a partition , let be an element of chosen at random, with each element having a probability of being chosen as (so if occurs with multiplicity , the net probability that is chosen as is actually ). Note that this is well-defined since the elements of sum to .
- Now suppose is chosen. If is empty, we set all equal to zero and stop. Otherwise, let be an element of chosen at random, with each element having a probability of being chosen as . (For instance, if occurred with some multiplicity in , then can equal with probability .)
- Now suppose are both chosen. If is empty, we set all equal to zero and stop. Otherwise, let be an element of , with each element having a probability of being chosen as .
- We continue this process indefinitely to create elements .

We denote the random sequence formed from a partition in the above manner as the *random normalised enumeration* of ; this is a random variable in the infinite unit cube , and can be defined recursively by the formula

with drawn randomly from , with each element chosen with probability , except when in which case we instead have

Note that one can recover from any of its random normalised enumerations by the formula

with the convention that one discards any zero elements on the right-hand side. Thus can be viewed as a (stochastic) parameterisation of the space of partitions by the unit cube , which is a simpler domain to work with than the infinite-dimensional simplex mentioned earlier.

Note that this random enumeration procedure can also be adapted to the three models described earlier:

- Given a natural number , one can randomly enumerate its prime factors by letting each prime factor of be equal to with probability , then once is chosen, let each remaining prime factor of be equal to with probability , and so forth.
- Given a permutation , one can randomly enumerate its cycles by letting each cycle in be equal to with probability , and once is chosen, let each remaining cycle be equal to with probability , and so forth. Alternatively, one can traverse the elements of in random order, then let be the first cycle one encounters when performing this traversal, let be the next cycle (not equal to ) one encounters when performing this traversal, and so forth.
- Given a multiset of positive real numbers whose sum is finite, we can randomly enumerate the elements of this sequence by letting each have a probability of being set equal to , and then once is chosen, let each remaining have a probability of being set equal to , and so forth.

We then have the following result:

Proposition 1 (Existence of the Poisson-Dirichlet process). There exists a random partition whose random enumeration has the uniform distribution on , thus are independently and identically distributed copies of the uniform distribution on .

A random partition with this property will be called the *Poisson-Dirichlet process*. This process, first introduced by Kingman, can be described explicitly using (1) as

where are iid copies of the uniform distribution on , although it is not immediately obvious from this definition that is indeed uniformly distributed on . We prove this proposition below the fold.
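For readers who want to experiment, here is a minimal Monte Carlo sketch, assuming the standard stick-breaking description of the size-biased (GEM) presentation of this process: at each step one breaks off a Uniform(0,1) fraction of the remaining stick, and the ranked version of the resulting sequence is the Poisson-Dirichlet process. The function name and parameters below are of course just illustrative choices:

```python
import random

def gem_sample(rng, depth=60):
    """One draw (truncated at `depth` parts) from the stick-breaking
    construction: break off a Uniform(0,1) fraction of what remains."""
    parts, remaining = [], 1.0
    for _ in range(depth):
        u = rng.random()
        parts.append(u * remaining)
        remaining *= 1 - u
    return parts  # the discarded remainder is typically of size ~ e^{-depth}

rng = random.Random(0)
samples = [gem_sample(rng) for _ in range(20000)]
# The first (size-biased) part is itself uniform on [0,1], so its mean is 1/2.
mean_first = sum(s[0] for s in samples) / len(samples)
print(mean_first)  # ~ 0.5
```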

An equivalent definition of a Poisson-Dirichlet process is a random partition with the property that

where is a random element of with each having a probability of being equal to , is a uniform variable on that is independent of , and denotes equality of distribution. This can be viewed as a sort of stochastic self-similarity property of : if one randomly removes one element from and rescales, one gets a new copy of .

It turns out that each of the three ways to generate partitions listed above can lead to the Poisson-Dirichlet process, either directly or in a suitable limit. We begin with the third way, namely by normalising a Poisson process to have sum :

Proposition 2 (Poisson-Dirichlet processes via Poisson processes). Let , and let be a Poisson process on with intensity function . Then the sum is almost surely finite, and the normalisation is a Poisson-Dirichlet process.

Again, we prove this proposition below the fold. Now we turn to the second way (a topic, incidentally, that was briefly touched upon in this previous blog post):

Proposition 3 (Large cycles of a typical permutation). For each natural number , let be a permutation drawn uniformly at random from . Then the random partition converges in the limit to a Poisson-Dirichlet process in the following sense: given any fixed sequence of intervals (independent of ), the joint discrete random variable converges in distribution to .
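Proposition 3 can be probed numerically. For instance, for each j > n/2 a uniformly random permutation of n labels has a cycle of length exactly j with probability exactly 1/j (there can be at most one such cycle), so the chance of having some cycle longer than n/2 tends to log 2 as n grows. A quick simulation, with the sample sizes chosen just for illustration:

```python
import random

def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a list (perm[i] = image of i)."""
    seen, lengths = [False] * len(perm), []
    for i in range(len(perm)):
        if not seen[i]:
            j, c = i, 0
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                c += 1
            lengths.append(c)
    return lengths

rng = random.Random(1)
n, trials = 200, 4000
hits = 0
for _ in range(trials):
    perm = list(range(n))
    rng.shuffle(perm)  # uniformly random permutation (Fisher-Yates)
    if max(cycle_lengths(perm)) > n // 2:
        hits += 1
print(hits / trials)  # ~ log 2 = 0.693 (for n = 200, exactly H_200 - H_100)
```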

Finally, we turn to the first way:

Proposition 4 (Large prime factors of a typical number). Let , and let be a random natural number chosen according to one of the following three rules:

- (Uniform distribution) is drawn uniformly at random from the natural numbers in .
- (Shifted uniform distribution) is drawn uniformly at random from the natural numbers in .
- (Zeta distribution) Each natural number has a probability of being equal to , where and .
Then converges as to a Poisson-Dirichlet process in the same fashion as in Proposition 3.

The process was first studied by Billingsley (and also later by Knuth-Trabb Pardo and by Vershik), but the formulae were initially rather complicated; the proposition above is due to Donnelly and Grimmett, although the third case of the proposition is substantially easier and appears in the earlier work of Lloyd. We prove the proposition below the fold.

The previous two propositions suggest an interesting analogy between large random integers and large random permutations; see this ICM article of Vershik and this non-technical article of Granville (which, incidentally, was once adapted into a play) for further discussion.

As a sample application, consider the problem of estimating the number of integers up to which are not divisible by any prime larger than (i.e. they are -smooth), where is a fixed real number. This is essentially (modulo some inessential technicalities concerning the distinction between the intervals and ) the probability that avoids , which by the above theorem converges to the probability that avoids . Below the fold we will show that this function is given by the Dickman function, defined by setting for and for , thus recovering the classical result of Dickman that .
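The Dickman function is easy to compute numerically from its standard delay-differential characterisation, rho(u) = 1 for 0 <= u <= 1 and u rho'(u) = -rho(u - 1) for u > 1; a sketch using a trapezoid rule on the equivalent integral equation:

```python
import math

def dickman_rho(u, h=1e-4):
    """Numerically solve rho(u) = 1 on [0,1],  u*rho'(u) = -rho(u-1) for u > 1,
    by applying the trapezoid rule to rho(u) = rho(u-h) - int_{u-h}^u rho(t-1)/t dt."""
    n = int(round(u / h))
    one = int(round(1 / h))
    grid = [1.0] * (n + 1)          # rho = 1 on [0,1]
    for i in range(one + 1, n + 1):
        t0, t1 = (i - 1) * h, i * h
        f0 = grid[i - 1 - one] / t0  # rho(t0 - 1)/t0, already computed
        f1 = grid[i - one] / t1      # rho(t1 - 1)/t1
        grid[i] = grid[i - 1] - h * (f0 + f1) / 2
    return grid[n]

# Classical value: rho(2) = 1 - log 2, the asymptotic density of
# integers up to x with no prime factor exceeding sqrt(x).
print(dickman_rho(2.0))  # ~ 0.3069
```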

I thank Andrew Granville and Anatoly Vershik for showing me the nice link between prime factors and the Poisson-Dirichlet process. The material here is standard, and (like many of the other notes on this blog) was primarily written for my own benefit, but it may be of interest to some readers. In preparing this article I found this exposition by Kingman to be helpful.

Note: this article will emphasise the computations rather than rigour, and in particular will rely on informal use of infinitesimals to avoid dealing with stochastic calculus or other technicalities. We adopt the convention that we will neglect higher order terms in infinitesimal calculations, e.g. if is infinitesimal then we will abbreviate simply as .

As in all previous posts in this series, we adopt the following asymptotic notation: is a parameter going off to infinity, and all quantities may depend on unless explicitly declared to be “fixed”. The asymptotic notation is then defined relative to this parameter. A quantity is said to be *of polynomial size* if one has , and *bounded* if . We also write for , and for .

The purpose of this (rather technical) post is both to roll over the polymath8 research thread from this previous post, and also to record the details of the latest improvement to the Type I estimates (based on exploiting additional averaging and using Deligne’s proof of the Weil conjectures), which leads to a slight improvement in the numerology.

In order to obtain this new Type I estimate, we need to strengthen the previously used properties of “dense divisibility” or “double dense divisibility” as follows.

Definition 1 (Multiple dense divisibility). Let . For each natural number , we define a notion of -tuply -dense divisibility recursively as follows:

- Every natural number is -tuply -densely divisible.
- If and is a natural number, we say that is -tuply -densely divisible if, whenever are natural numbers with , and , one can find a factorisation with such that is -tuply -densely divisible and is -tuply -densely divisible.
We let denote the set of -tuply -densely divisible numbers. We abbreviate “-tuply densely divisible” as “densely divisible”, “-tuply densely divisible” as “doubly densely divisible”, and so forth; we also abbreviate as .

Given any finitely supported sequence and any primitive residue class , we define the discrepancy

We now recall the key concept of a coefficient sequence, with some slight tweaks in the definitions that are technically convenient for this post.

Definition 2. A *coefficient sequence* is a finitely supported sequence that obeys the bounds for all , where is the divisor function.

- (i) A coefficient sequence is said to be *located at scale* for some if it is supported on an interval of the form for some .
- (ii) A coefficient sequence located at scale for some is said to *obey the Siegel-Walfisz theorem* if one has
- (iii) A coefficient sequence is said to be *smooth at scale* for some if it takes the form for some smooth function supported on an interval of size and obeying the derivative bounds for all fixed (note that the implied constant in the notation may depend on ).

Note that we allow sequences to be smooth at scale without being located at scale ; for instance if one arbitrarily translates a sequence that is both smooth and located at scale , it will remain smooth at this scale but may not necessarily be located at this scale any more. Note also that we allow the smoothness scale of a coefficient sequence to be less than one. This is to allow for the following convenient rescaling property: if is smooth at scale , , and is an integer, then is smooth at scale , even if is less than one.

Now we adapt the Type I estimate to the -tuply densely divisible setting.

Definition 3 (Type I estimates) Let , , and be fixed quantities, and let be a fixed natural number. We let be an arbitrary bounded subset of , let , and let be a primitive congruence class. We say that holds if, whenever are quantities with for some fixed , and are coefficient sequences located at scales respectively, with obeying a Siegel-Walfisz theorem, we have

for any fixed . Here, as in previous posts, denotes the square-free natural numbers whose prime factors lie in .

The main theorem of this post is then

Theorem 4 (Improved Type I estimate) We have whenever and

In practice, the first condition here is dominant. Except for weakening double dense divisibility to quadruple dense divisibility, this improves upon the previous Type I estimate that established under the stricter hypothesis

As in previous posts, Type I estimates (when combined with existing Type II and Type III estimates) lead to distribution results of Motohashi-Pintz-Zhang type. For any fixed and , we let denote the assertion that

for any fixed , any bounded , and any primitive , where is the von Mangoldt function.

*Proof:* Setting sufficiently close to , we see from the above theorem that holds whenever

and

The second condition is implied by the first and can be deleted.

From this previous post we know that (which we define analogously to from previous sections) holds whenever

while holds with sufficiently close to whenever

Again, these conditions are implied by (8). The claim then follows from the Heath-Brown identity and dyadic decomposition as in this previous post.

As before, we let denote the claim that given any admissible -tuple , there are infinitely many translates of that contain at least two primes.

This follows from the Pintz sieve, as discussed below the fold. Combining this with the best known prime tuples, we obtain that there are infinitely many prime gaps of size at most , improving slightly over the previous record of .

[Note: the content of this post is standard number theoretic material that can be found in many textbooks (I am relying principally here on Iwaniec and Kowalski); I am not claiming any new progress on any version of the Riemann hypothesis here, but am simply arranging existing facts together.]

The Riemann hypothesis is arguably the most important and famous unsolved problem in number theory. It is usually phrased in terms of the Riemann zeta function , defined by

for and extended meromorphically to other values of , and asserts that the only zeroes of in the critical strip lie on the critical line .

One of the main reasons that the Riemann hypothesis is so important to number theory is that the zeroes of the zeta function in the critical strip control the distribution of the primes. To see the connection, let us perform the following formal manipulations (ignoring for now the important analytic issues of convergence of series, interchanging sums, branches of the logarithm, etc., in order to focus on the intuition). The starting point is the fundamental theorem of arithmetic, which asserts that every natural number has a unique factorisation into primes. Taking logarithms, we obtain the identity

for any natural number , where is the von Mangoldt function, thus when is a power of a prime and zero otherwise. If we then perform a “Dirichlet-Fourier transform” by viewing both sides of (1) as coefficients of a Dirichlet series, we conclude that

formally at least. Writing , the right-hand side factors as

whereas the left-hand side is (formally, at least) equal to . We conclude the identity

(formally, at least). If we integrate this, we are formally led to the identity

or equivalently to the exponential identity

which allows one to reconstruct the Riemann zeta function from the von Mangoldt function. (It is an instructive exercise in enumerative combinatorics to try to prove this identity directly, at the level of formal Dirichlet series, using the fundamental theorem of arithmetic of course.) Now, as has a simple pole at and zeroes at various places on the critical strip, we expect a Weierstrass factorisation which formally (ignoring normalisation issues) takes the form

(where we will be intentionally vague about what is hiding in the terms) and so we expect an expansion of the form

and hence on integrating in we formally have

and thus we have the heuristic approximation

Comparing this with (3), we are led to a heuristic form of the *explicit formula*

When trying to make this heuristic rigorous, it turns out (due to the rough nature of both sides of (4)) that one has to interpret the explicit formula in some suitably weak sense, for instance by testing (4) against the indicator function to obtain the formula

which can in fact be made into a rigorous statement after some truncation (the von Mangoldt explicit formula). From this formula we now see how helpful the Riemann hypothesis will be to control the distribution of the primes; indeed, if the Riemann hypothesis holds, so that for all zeroes , it is not difficult to use (a suitably rigorous version of) the explicit formula to conclude that

as , giving a near-optimal “square root cancellation” for the sum . Conversely, if one can somehow establish a bound of the form

for any fixed , then the explicit formula can be used to then deduce that all zeroes of have real part at most , which leads to the following remarkable amplification phenomenon (analogous, as we will see later, to the tensor power trick): any bound of the form

can be automatically amplified to the stronger bound

with both bounds being equivalent to the Riemann hypothesis. Of course, the Riemann hypothesis for the Riemann zeta function remains open; but partial progress on this hypothesis (in the form of zero-free regions for the zeta function) leads to partial versions of the asymptotic (6). For instance, it is known that there are no zeroes of the zeta function on the line , and this can be shown by some analysis (either complex analysis or Fourier analysis) to be equivalent to the prime number theorem

see e.g. this previous blog post for more discussion.
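The exponential identity and the square root cancellation heuristic above can both be sanity-checked numerically; the following is only an illustrative sketch (the truncation points and names are my own choices), and of course no substitute for the analytic arguments.

```python
import math

def mangoldt(n):
    """Von Mangoldt function: log p if n is a power of a prime p, else 0."""
    if n < 2:
        return 0.0
    p = 2
    while p * p <= n:
        if n % p == 0:          # p is the least prime factor of n
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
        p += 1
    return math.log(n)          # no factor up to sqrt(n), so n is prime

# exponential identity: zeta(s) = exp( sum_{n>=2} Lambda(n)/log(n) * n^{-s} )
s, N = 2.0, 5000
zeta_from_mangoldt = math.exp(sum(mangoldt(n) / math.log(n) * n ** (-s)
                                  for n in range(2, N)))
print(zeta_from_mangoldt)       # close to zeta(2) = pi^2/6 = 1.6449...

# psi(x) = sum_{n<=x} Lambda(n) should be x plus a roughly sqrt(x)-sized error
x = 10 ** 4
psi = sum(mangoldt(n) for n in range(2, x + 1))
print(psi - x)                  # small compared with x
```

The first computation truncates a convergent sum, so the agreement with zeta(2) is limited only by the tail; the second is an empirical glimpse of the cancellation that the Riemann hypothesis would guarantee for all x.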

The main engine powering the above observations was the fundamental theorem of arithmetic, and so one can expect to establish similar assertions in other contexts where some version of the fundamental theorem of arithmetic is available. One of the simplest such variants is to continue working on the natural numbers, but to “twist” them by a Dirichlet character . The analogue of the Riemann zeta function is then the Dirichlet -function. The identity (1), which encoded the fundamental theorem of arithmetic, can be twisted by to obtain

and essentially the same manipulations as before eventually lead to the exponential identity

which is a twisted version of (2), as well as twisted explicit formula, which heuristically takes the form

for non-principal , where now ranges over the zeroes of in the critical strip, rather than the zeroes of ; a more accurate formulation, following (5), would be

(See e.g. Davenport’s book for a more rigorous discussion which emphasises the analogy between the Riemann zeta function and the Dirichlet -function.) If we assume the generalised Riemann hypothesis, which asserts that all zeroes of in the critical strip also lie on the critical line, then we obtain the bound

for any non-principal Dirichlet character , again demonstrating a near-optimal square root cancellation for this sum. Again, we have the amplification property that the above bound is implied by the apparently weaker bound

(where denotes a quantity that goes to zero as for any fixed ). Next, one can consider other number systems than the natural numbers and integers . For instance, one can replace the integers with rings of integers in other number fields (i.e. finite extensions of ), such as the quadratic extensions of the rationals for various square-free integers , in which case the ring of integers would be the ring of quadratic integers for a suitable generator (it turns out that one can take if , and if ). Here, it is not immediately obvious what the analogue of the natural numbers is in this setting, since rings such as do not come with a natural ordering. However, we can adopt an algebraic viewpoint to see the correct generalisation, observing that every natural number generates a principal ideal in the integers, and conversely every non-trivial ideal in the integers is associated to precisely one natural number in this fashion, namely the norm of that ideal. So one can identify the natural numbers with the ideals of . Furthermore, with this identification, the prime numbers correspond to the prime ideals, since if is prime, and are integers, then if and only if one of or is true. Finally, even in number systems (such as ) in which the classical version of the fundamental theorem of arithmetic fails (e.g. ), we have *the fundamental theorem of arithmetic for ideals*: every ideal in a Dedekind domain (which includes the ring of integers in a number field as a key example) is uniquely representable (up to permutation) as the product of a finite number of prime ideals (although these ideals might not necessarily be principal). For instance, in , the principal ideal factors as the product of four prime (but non-principal) ideals , , , . (Note that the first two ideals are actually equal to each other.) Because we still have the fundamental theorem of arithmetic, we can develop analogues of the previous observations relating the Riemann hypothesis to the distribution of primes.
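For concreteness, the classical instance of this phenomenon (presumably the one intended above, since it matches the description of four non-principal prime ideal factors with the first two equal) is the ideal generated by 6 in the ring of integers of the field obtained by adjoining a square root of -5:

```latex
% In \mathbb{Z}[\sqrt{-5}], unique factorisation of elements fails,
% 6 = 2 \cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5}),
% but the ideal (6) factors uniquely into prime ideals:
\[
  (6) = (2,\, 1+\sqrt{-5}) \, (2,\, 1-\sqrt{-5}) \, (3,\, 1+\sqrt{-5}) \, (3,\, 1-\sqrt{-5}),
\]
% with the first two factors equal, since 1-\sqrt{-5} = 2 - (1+\sqrt{-5}).
```

None of these four prime ideals is principal, which is exactly why the two element-level factorisations of 6 cannot be reconciled without passing to ideals.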
The analogue of the Riemann zeta function is now the Dedekind zeta function

where the summation is over all non-trivial ideals in . One can also define a von Mangoldt function , defined as when is a power of a prime ideal , and zero otherwise; then the fundamental theorem of arithmetic for ideals can be encoded in an analogue of (1) (or (7)),

which leads as before to an exponential identity

and an explicit formula of the heuristic form

in analogy with (5) or (10). Again, a suitable Riemann hypothesis for the Dedekind zeta function leads to good asymptotics for the distribution of prime ideals, giving a bound of the form

where is the conductor of (which, in the case of number fields, is the absolute value of the discriminant of ) and is the degree of the extension of over . As before, we have the amplification phenomenon that the above near-optimal square root cancellation bound is implied by the weaker bound

where denotes a quantity that goes to zero as (holding fixed). See e.g. Chapter 5 of Iwaniec-Kowalski for details.

As was the case with the Dirichlet -functions, one can twist the Dedekind zeta function example by characters, in this case the Hecke characters; we will not do this here, but see e.g. Section 3 of Iwaniec-Kowalski for details.

Very analogous considerations hold if we move from number fields to function fields. The simplest case is the function field associated to the affine line and a finite field of some order . The polynomial functions on the affine line are just the usual polynomial ring , which then play the role of the integers (or ) in previous examples. This ring happens to be a unique factorisation domain, so the situation is closely analogous to the classical setting of the Riemann zeta function. The analogue of the natural numbers are the monic polynomials (since every non-trivial principal ideal is generated by precisely one monic polynomial), and the analogue of the prime numbers are the irreducible monic polynomials. The norm of a polynomial is the order of , which can be computed explicitly as

Because of this, we will normalise things slightly differently here and use in place of in what follows. The (local) zeta function is then defined as

where ranges over monic polynomials, and the von Mangoldt function is defined to equal when is a power of a monic irreducible polynomial , and zero otherwise. Note that because is always a power of , the zeta function here is in fact periodic with period . Because of this, it is customary to make a change of variables , so that

and is the renormalised zeta function

We have the analogue of (1) (or (7) or (11)):

which leads as before to an exponential identity

analogous to (2), (8), or (12). It also leads to the explicit formula

where are the zeroes of the original zeta function (counting each residue class of the period just once), or equivalently

where are the reciprocals of the roots of the normalised zeta function (or to put it another way, are the factors of this zeta function). Again, to make proper sense of this heuristic we need to sum, obtaining

As it turns out, in the function field setting, the zeta functions are always rational (this is part of the Weil conjectures), and the above heuristic formula is basically exact up to a constant factor, thus

for an explicit integer (independent of ) arising from any potential pole of at . In the case of the affine line , the situation is particularly simple, because the zeta function is easy to compute. Indeed, since there are exactly monic polynomials of a given degree , we see from (14) that

so in fact there are no zeroes whatsoever, and no pole at either, so we have an exact prime number theorem for this function field:
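Assuming the normalisation used earlier in this post, the computation and the resulting exact prime number theorem should read as follows (reconstructed from the surrounding text, so treat the notation as my own):

```latex
% There are q^n monic polynomials of degree n in \mathbf{F}_q[t], giving a
% geometric series with no zeroes and no pole at T = 1:
\[
  Z_{\mathbf{A}^1}(T) \;=\; \sum_{n \ge 0} q^n T^n \;=\; \frac{1}{1-qT},
  \qquad\text{hence}\qquad
  \sum_{\deg f = n} \Lambda(f) \;=\; q^n \quad\text{for all } n \ge 1.
\]
```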

Among other things, this tells us that the number of irreducible monic polynomials of degree is .
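By Möbius inversion, that count can be made completely explicit, and the exact prime number theorem just stated can be verified by machine; the function names below are mine.

```python
def mobius(n):
    """Moebius function via trial factorisation."""
    result = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0        # n is divisible by p^2
            result = -result
        p += 1
    return -result if n > 1 else result

def num_irreducible(q, n):
    # Gauss's formula: N_n = (1/n) * sum_{d | n} mu(n/d) * q^d
    return sum(mobius(n // d) * q ** d for d in range(1, n + 1) if n % d == 0) // n

# exact prime number theorem for F_q[t]: sum_{d | n} d * N_d = q^n
q, n = 3, 6
print(sum(d * num_irreducible(q, d) for d in range(1, n + 1) if n % d == 0) == q ** n)  # True
```

The leading term of Gauss's formula is q^n/n, matching the asymptotic count stated above, with the Möbius-weighted lower-degree terms playing the role of the (here trivial) error term.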

We can transition from an algebraic perspective to a geometric one, by viewing a given monic polynomial through its roots, which are a finite set of points in the algebraic closure of the finite field (or more suggestively, as points on the affine line ). The number of such points (counting multiplicity) is the degree of , and from the factor theorem, the set of points determines the monic polynomial (or, if one removes the monic hypothesis, it determines the polynomial projectively). These points have an action of the Galois group . It is a classical fact that this Galois group is in fact a cyclic group generated by a single element, the (geometric) Frobenius map , which fixes the elements of the original finite field but permutes the other elements of . Thus the roots of a given polynomial split into orbits of the Frobenius map. One can check that the roots consist of a single such orbit (counting multiplicity) if and only if is irreducible; thus the fundamental theorem of arithmetic can be viewed geometrically as the orbit decomposition of any Frobenius-invariant finite set of points in the affine line.
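One can watch this orbit decomposition happen in a small example. Below, the field with nine elements is modelled (my choice of model) as F_3 with a square root of -1 adjoined, i.e. pairs (a, b) representing a + b*i with i^2 = -1, and the orbits of the Frobenius map x -> x^3 are enumerated.

```python
# F_9 modeled as F_3[i] with i^2 = -1 (x^2 + 1 is irreducible mod 3)
def mul(u, v, p=3):
    a, b = u
    c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def frob(u, p=3):
    # Frobenius x -> x^p, by repeated multiplication
    r = (1, 0)
    for _ in range(p):
        r = mul(r, u)
    return r

elements = [(a, b) for a in range(3) for b in range(3)]
orbits = set()
for x in elements:
    orbit = {x}
    y = frob(x)
    while y not in orbit:
        orbit.add(y)
        y = frob(y)
    orbits.add(frozenset(orbit))
sizes = sorted(len(o) for o in orbits)
print(sizes)  # [1, 1, 1, 2, 2, 2]: the copy of F_3 is fixed, the rest pair up
```

The three fixed points correspond to the three monic linear polynomials over F_3, and the three 2-orbits to the three monic irreducible quadratics, in accordance with the partition into closed points described in the next paragraph.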

Now consider the degree finite field extension of (it is a classical fact that there is exactly one such extension up to isomorphism for each ); this is a subfield of of order . (Here we are performing a standard abuse of notation by overloading the subscripts in the notation; thus denotes the field of order , while denotes the extension of of order , so that we in fact have if we use one subscript convention on the left-hand side and the other subscript convention on the right-hand side. We hope this overloading will not cause confusion.) Each point in this extension (or, more suggestively, the affine line over this extension) has a minimal polynomial – an irreducible monic polynomial whose roots consist of the Frobenius orbit of . Since the Frobenius action is periodic of period on , the degree of this minimal polynomial must divide . Conversely, every monic irreducible polynomial of degree dividing produces distinct zeroes that lie in (here we use the classical fact that finite fields are perfect) and hence in . We have thus partitioned into Frobenius orbits (also known as *closed points*), with each monic irreducible polynomial of degree dividing contributing an orbit of size . From this we conclude a geometric interpretation of the left-hand side of (18):

The identity (18) is thus equivalent to the thoroughly boring fact that the number of -points on the affine line is equal to . However, things become much more interesting if one then replaces the affine line by a more general (geometrically) irreducible curve defined over ; for instance one could take to be an elliptic curve

for some suitable , although the discussion here applies to more general curves as well (though to avoid some minor technicalities, we will assume that the curve is projective with a finite number of -rational points removed). The analogue of is then the coordinate ring of (for instance, in the case of the elliptic curve (20) it would be ), with polynomials in this ring producing a set of roots in the curve that is again invariant with respect to the Frobenius action (acting on the and coordinates separately). In general, we do not expect unique factorisation in this coordinate ring (this is basically because Bezout’s theorem suggests that the zero set of a polynomial on will almost never consist of a single (closed) point). Of course, we can use the algebraic formalism of ideals to get around this, setting up a zeta function

and a von Mangoldt function as before, where would now run over the non-trivial ideals of the coordinate ring. However, it is more instructive to use the geometric viewpoint, using the ideal-variety dictionary from algebraic geometry to convert algebraic objects involving ideals into geometric objects involving varieties. In this dictionary, a non-trivial ideal would correspond to a proper subvariety (or more precisely, a subscheme, but let us ignore the distinction between varieties and schemes here) of the curve ; as the curve is irreducible and one-dimensional, this subvariety must be zero-dimensional and is thus a (multi-)set of points in , or equivalently an effective divisor of ; this generalises the concept of the set of roots of a polynomial (which corresponds to the case of a principal ideal). Furthermore, this divisor has to be *rational* in the sense that it is Frobenius-invariant. The prime ideals correspond to those divisors (or sets of points) which are irreducible, that is to say the individual Frobenius orbits, also known as closed points of . With this dictionary, the zeta function becomes

where the sum is over effective rational divisors of (with being the degree of an effective divisor ), or equivalently

The analogue of (19), which gives a geometric interpretation to sums of the von Mangoldt function, becomes

thus this sum is simply counting the number of -points of . The analogue of the exponential identity (16) (or (2), (8), or (12)) is then

and the analogue of the explicit formula (17) (or (5), (10) or (13)) is

where runs over the (reciprocal) zeroes of (counting multiplicity), and is an integer independent of . (As it turns out, equals when is a projective curve, and more generally equals when is a projective curve with rational points deleted.)

To evaluate , one needs to count the number of effective divisors of a given degree on the curve . Fortunately, there is a tool that is particularly well-designed for this task, namely the Riemann-Roch theorem. By using this theorem, one can show (when is projective) that is in fact a rational function, with a finite number of zeroes, and a simple pole at both and , with similar results when one deletes some rational points from ; see e.g. Chapter 11 of Iwaniec-Kowalski for details. Thus the sum in (22) is finite. For instance, for the affine elliptic curve (20) (which is a projective curve with one point removed), it turns out that we have

for two complex numbers depending on and .

The Riemann hypothesis for (untwisted) curves – which is the deepest and most difficult aspect of the Weil conjectures for these curves – asserts that the zeroes of lie on the critical line, or equivalently that all the roots in (22) have modulus , so that (22) then gives the asymptotic

where the implied constant depends only on the genus of (and on the number of points removed from ). For instance, for elliptic curves we have the *Hasse bound*

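The Hasse bound is easy to test by brute force. The curve below is an arbitrary nonsingular choice of mine, and the count is affine (omitting the point at infinity), so the bound compares the count against p rather than p + 1.

```python
import math

def affine_points(a, b, p):
    """Count pairs (x, y) in F_p^2 with y^2 = x^3 + a*x + b."""
    sq_counts = [0] * p                   # sq_counts[r] = #{y : y^2 = r mod p}
    for y in range(p):
        sq_counts[y * y % p] += 1
    return sum(sq_counts[(x * x * x + a * x + b) % p] for x in range(p))

# an arbitrary nonsingular example: 4*a^3 + 27*b^2 is nonzero mod p
p, a, b = 101, 2, 3
N = affine_points(a, b, p)
print(N, abs(N - p) <= 2 * math.sqrt(p))  # the affine count is within 2*sqrt(p) of p
```

Tabulating the squares first reduces the count to a single pass over x, which is the standard trick for naive point counting.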
As before, we have an important amplification phenomenon: if we can establish a weaker estimate, e.g.

then we can automatically deduce the stronger bound (23). This amplification is not a mere curiosity; most of the *proofs* of the Riemann hypothesis for curves proceed via this fact. For instance, by using the elementary method of Stepanov to bound points in curves (discussed for instance in this previous post), one can establish the preliminary bound (24) for large , which then amplifies to the optimal bound (23) for all (and in particular for ). Again, see Chapter 11 of Iwaniec-Kowalski for details. The ability to convert a bound with -dependent losses over the optimal bound (such as (24)) into an essentially optimal bound with no -dependent losses (such as (23)) is important in analytic number theory, since in many applications (e.g. in those arising from sieve theory) one wishes to sum over large ranges of .

Much as the Riemann zeta function can be twisted by a Dirichlet character to form a Dirichlet -function, one can twist the zeta function on curves by various additive and multiplicative characters. For instance, suppose one has an affine plane curve and an additive character , thus for all . Given a rational effective divisor , the sum is Frobenius-invariant and thus lies in . By abuse of notation, we may thus define on such divisors by

and observe that is multiplicative in the sense that for rational effective divisors . One can then define for any non-trivial ideal by replacing that ideal with the associated rational effective divisor; for instance, if is a polynomial in the coordinate ring of , with zeroes at , then is . Again, we have the multiplicativity property . If we then form the twisted normalised zeta function

then by twisting the previous analysis, we eventually arrive at the exponential identity

in analogy with (21) (or (2), (8), (12), or (16)), where the *companion sums* are defined by

where the trace of an element in the plane is defined by the formula

In particular, is the exponential sum

which is an important type of sum in analytic number theory, containing for instance the Kloosterman sum

as a special case, where . (NOTE: the sign conventions for the companion sum are not consistent across the literature, sometimes it is which is referred to as the companion sum.)
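The Kloosterman sum just defined can be computed directly for a small prime; the sketch below (parameter choices mine) also checks numerically that the sum is real, and that it satisfies the square root cancellation bound discussed later in the post.

```python
import cmath
import math

def kloosterman(a, b, p):
    """S(a, b; p) = sum over x in F_p^* of e((a*x + b*x^{-1})/p),
    where e(t) = exp(2*pi*i*t) and p is prime."""
    return sum(cmath.exp(2j * math.pi * (a * x + b * pow(x, p - 2, p)) / p)
               for x in range(1, p))

p = 97
S = kloosterman(1, 2, p)
# the pairing x <-> -x conjugates each term, so S is real;
# the Weil bound then gives |S| <= 2*sqrt(p)
print(abs(S.imag) < 1e-8, abs(S) <= 2 * math.sqrt(p))
```

The modular inverse is computed via Fermat's little theorem (three-argument pow), which is why the prime hypothesis on p is used in the code and not just in the theory.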

If is non-principal (and is non-linear), one can show (by a suitably twisted version of the Riemann-Roch theorem) that is a rational function of , with no pole at , and one then gets an explicit formula of the form

for the companion sums, where are the reciprocals of the zeroes of , in analogy to (22) (or (5), (10), (13), or (17)). For instance, in the case of Kloosterman sums, there is an identity of the form

for all and some complex numbers depending on , where we have abbreviated as . As before, the Riemann hypothesis for then gives a square root cancellation bound of the form

for the companion sums (and in particular gives the very explicit Weil bound for the Kloosterman sum), but again there is the amplification phenomenon that this sort of bound can be deduced from the apparently weaker bound

As before, most of the known proofs of the Riemann hypothesis for these twisted zeta functions proceed by first establishing this weaker bound (e.g. one could again use Stepanov’s method here for this goal) and then amplifying to the full bound (28); see Chapter 11 of Iwaniec-Kowalski for further details.

One can also twist the zeta function on a curve by a multiplicative character by similar arguments, except that instead of forming the sum of all the components of an effective divisor , one takes the product, and similarly one replaces the trace

by the norm

Again, see Chapter 11 of Iwaniec-Kowalski for details.

Deligne famously extended the above theory to higher-dimensional varieties than curves, and also to the closely related context of *-adic sheaves* on curves, giving rise to two separate proofs of the Weil conjectures in full generality. (Very roughly speaking, the former context can be obtained from the latter context by a sort of Fubini theorem type argument that expresses sums on higher-dimensional varieties as iterated sums on curves of various expressions related to -adic sheaves.) In this higher-dimensional setting, the zeta function formalism is still present, but is much more difficult to use, in large part due to the much less tractable nature of divisors in higher dimensions (they are now combinations of codimension one subvarieties or subschemes, rather than combinations of points). To get around this difficulty, one has to change perspective yet again, from an algebraic or geometric perspective to an -adic cohomological perspective. (I could imagine that once one is sufficiently expert in the subject, all these perspectives merge back together into a unified viewpoint, but I am certainly not yet at that stage of understanding.) In particular, the zeta function, while still present, plays a significantly less prominent role in the analysis (at least if one is willing to take Deligne’s theorems as a black box); the explicit formula is now obtained via a different route, namely the Grothendieck-Lefschetz fixed point formula. I have written some notes on this material below the fold (based in part on some lectures of Philippe Michel, as well as the text of Iwaniec-Kowalski and also this book of Katz), but I should caution that my understanding here is still rather sketchy and possibly inaccurate in places.
