
This is the third thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} are:

  • (Maynard) Assuming the Elliott-Halberstam conjecture, {H_1 \leq 12}.
  • (Polymath8b, tentative) {H_1 \leq 330}. Assuming Elliott-Halberstam, {H_2 \leq 330}.
  • (Polymath8b, tentative) {H_2 \leq 484{,}126}. Assuming Elliott-Halberstam, {H_4 \leq 493{,}408}.
  • (Polymath8b) {H_m \leq \exp( 3.817 m )} for sufficiently large {m}. Assuming Elliott-Halberstam, {H_m \ll e^{2m} m \log m} for sufficiently large {m}.

Much of the current focus of the Polymath8b project is on the quantity

\displaystyle M_k = M_k({\cal R}_k) := \sup_F \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}

where {F} ranges over square-integrable functions on the simplex

\displaystyle {\cal R}_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\ldots+t_k \leq 1 \}

with {I_k, J_k^{(m)}} being the quadratic forms

\displaystyle I_k(F) := \int_{{\cal R}_k} F(t_1,\ldots,t_k)^2\ dt_1 \ldots dt_k

and

\displaystyle J_k^{(m)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_k)\ dt_m)^2\ dt_1 \ldots dt_{m-1} dt_{m+1} \ldots dt_k.

It was shown by Maynard that one has {H_m \leq H(k)} whenever {M_k > 4m}, where {H(k)} is the diameter of the narrowest admissible {k}-tuple. As discussed in the previous post, we have slight improvements to this implication, but they are currently difficult to implement, due to the need to perform high-dimensional integration. The quantity {M_k} does seem however to be close to the theoretical limit of what the Selberg sieve method can achieve for implications of this type (at the Bombieri-Vinogradov level of distribution, at least); it seems of interest to explore more general sieves, although we have not yet made much progress in this direction.
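
As a quick sanity check on these definitions, one can evaluate the Rayleigh quotient {\sum_{m=1}^k J_k^{(m)}(F)/I_k(F)} for the trivial trial function {F \equiv 1}: the Dirichlet integral {\int_{{\cal R}_n} (1-t_1-\ldots-t_n)^a\ dt = a!/(a+n)!} gives the exact value {2k/(k+1)}, and hence the (very crude) bound {M_k \geq 2k/(k+1)}. The following Monte Carlo sketch, which is purely illustrative and plays no role in the project's actual computations, confirms the value numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio_constant_F(k, samples=200_000):
    """Monte Carlo value of sum_m J_k^(m)(F) / I_k(F) for the trial F = 1."""
    # I_k(1) = vol(R_k): fraction of uniform cube points landing in the simplex
    I = np.mean(rng.random((samples, k)).sum(axis=1) <= 1)
    # J_k^(m)(1) = integral over R_{k-1} of (1 - t_1 - ... - t_{k-1})^2
    s = rng.random((samples, k - 1)).sum(axis=1)
    J = np.mean(np.where(s <= 1, (1 - s) ** 2, 0.0))
    return k * J / I   # all k values of m contribute equally by symmetry

for k in (3, 4, 5):
    print(k, ratio_constant_F(k), 2 * k / (k + 1))   # estimate vs exact value
```

Since {2k/(k+1) < 2 \leq 4m}, this trivial choice of {F} never clears the thresholds of interest, which is why nontrivial trial functions are needed.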

The best asymptotic bounds for {M_k} we have are

\displaystyle \log k - \log\log\log k + O(1) \leq M_k \leq \frac{k}{k-1} \log k \ \ \ \ \ (1)

 

which we prove below the fold. The upper bound holds for all {k > 1}; the lower bound is only valid for sufficiently large {k}, and gives the upper bound {H_m \ll e^{2m} m \log m} on Elliott-Halberstam.

For small {k}, the upper bound is quite competitive; for instance, it provides the upper bound in the best known values

\displaystyle 1.845 \leq M_4 \leq 1.848

and

\displaystyle 2.001162 \leq M_5 \leq 2.011797

we have for {M_4} and {M_5}. The situation is a little less clear for medium values of {k}; for instance, we have

\displaystyle 3.95608 \leq M_{59} \leq 4.148

and so it is not yet clear whether {M_{59} > 4} (which would imply {H_1 \leq 300}). See this wiki page for some further upper and lower bounds on {M_k}.

The best lower bounds are not obtained through the asymptotic analysis, but rather through quadratic programming (extending the original method of Maynard). This has given significant numerical improvements to our best bounds (in particular lowering the {H_1} bound from {600} to {330}), but we have not yet been able to combine this method with the other potential improvements (enlarging the simplex, using MPZ distributional estimates, and exploiting upper bounds on two-point correlations) due to the computational difficulty involved.
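
To give the flavour of these quadratic programming computations, here is a deliberately crude sketch (an illustrative reconstruction, not the code actually used in the project): restrict {F} to polynomials in the single variable {u := 1-t_1-\ldots-t_k}. The Dirichlet integral {\int_{{\cal R}_n} (1-t_1-\ldots-t_n)^a\ dt = a!/(a+n)!} then gives closed-form Gram matrices for {I_k} and {\sum_m J_k^{(m)}}, and the optimal Rayleigh quotient on this subspace is a generalised eigenvalue, hence a rigorous (but typically weak) lower bound on {M_k}:

```python
import numpy as np
from math import factorial

def mk_lower_bound(k, d=8):
    """Lower bound on M_k from trial functions F(t) = P(1 - t_1 - ... - t_k),
    with P a polynomial of degree <= d.  All integrals are exact via the
    Dirichlet formula  int_{R_n} (1 - sum t_i)^a dt = a!/(a+n)! ."""
    I = np.empty((d + 1, d + 1))
    J = np.empty((d + 1, d + 1))
    for a in range(d + 1):
        for b in range(d + 1):
            # Gram matrix of I_k on the basis u^a, where u := 1 - t_1 - ... - t_k
            I[a, b] = factorial(a + b) / factorial(a + b + k)
            # Gram matrix of sum_m J_k^(m): the inner dt_m integral of u^a is
            # v^{a+1}/(a+1), with v := 1 - sum_{i != m} t_i
            J[a, b] = k * factorial(a + b + 2) / (
                (a + 1) * (b + 1) * factorial(a + b + 1 + k))
    D = np.diag(1 / np.sqrt(np.diag(I)))   # diagonal rescaling to tame conditioning
    I, J = D @ I @ D, D @ J @ D
    # optimal Rayleigh quotient on this subspace: the largest generalised
    # eigenvalue of J x = lambda I x, always a valid lower bound on M_k
    return max(np.linalg.eigvals(np.linalg.solve(I, J)).real)

for k in (4, 5, 59):
    print(k, mk_lower_bound(k))
```

The actual computations use a much richer basis of symmetric polynomials in {t_1,\ldots,t_k}, but the reduction to a finite-dimensional generalised eigenvalue problem is the same.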


This is the second thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} are:

  • (Maynard) {H_1 \leq 600}.
  • (Polymath8b, tentative) {H_2 \leq 484,276}.
  • (Polymath8b, tentative) {H_m \leq \exp( 3.817 m )} for sufficiently large {m}.
  • (Maynard) Assuming the Elliott-Halberstam conjecture, {H_1 \leq 12}, {H_2 \leq 600}, and {H_m \ll m^3 e^{2m}}.

Following the strategy of Maynard, the bounds on {H_m} proceed by combining four ingredients:

  1. Distribution estimates {EH[\theta]} or {MPZ[\varpi,\delta]} for the primes (or related objects);
  2. Bounds for the minimal diameter {H(k)} of an admissible {k}-tuple;
  3. Lower bounds for the optimal value {M_k} to a certain variational problem;
  4. Sieve-theoretic arguments to convert the previous three ingredients into a bound on {H_m}.

Accordingly, the most natural routes to improve the bounds on {H_m} are to improve one or more of the above four ingredients.

Ingredient 1 was studied intensively in Polymath8a. The following results are known or conjectured (see the Polymath8a paper for notation and proofs):

  • (Bombieri-Vinogradov) {EH[\theta]} is true for all {0 < \theta < 1/2}.
  • (Polymath8a) {MPZ[\varpi,\delta]} is true for {\frac{600}{7} \varpi + \frac{180}{7}\delta < 1}.
  • (Polymath8a, tentative) {MPZ[\varpi,\delta]} is true for {\frac{1080}{13} \varpi + \frac{330}{13} \delta < 1}.
  • (Elliott-Halberstam conjecture) {EH[\theta]} is true for all {0 < \theta < 1}.

Ingredient 2 was also studied intensively in Polymath8a, and is more or less a solved problem for the values of {k} of interest (with exact values of {H(k)} for {k \leq 342}, and quite good upper bounds for {H(k)} for {k < 5000}, available at this page). So the main focus currently is on improving Ingredients 3 and 4.

For Ingredient 3, the basic variational problem is to understand the quantity

\displaystyle  M_k({\cal R}_k) := \sup_F \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}

where {F: {\cal R}_k \rightarrow {\bf R}} ranges over bounded measurable functions, not identically zero, on the simplex

\displaystyle  {\cal R}_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\ldots+t_k \leq 1 \}

with {I_k, J_k^{(m)}} being the quadratic forms

\displaystyle  I_k(F) := \int_{{\cal R}_k} F(t_1,\ldots,t_k)^2\ dt_1 \ldots dt_k

and

\displaystyle  J_k^{(m)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_k)\ dt_m)^2 dt_1 \ldots dt_{m-1} dt_{m+1} \ldots dt_k.

Equivalently, one has

\displaystyle  M_k({\cal R}_k) := \sup_F \frac{\int_{{\cal R}_k} F {\cal L}_k F}{\int_{{\cal R}_k} F^2}

where {{\cal L}_k: L^2({\cal R}_k) \rightarrow L^2({\cal R}_k)} is the positive semi-definite bounded self-adjoint operator

\displaystyle  {\cal L}_k F(t_1,\ldots,t_k) = \sum_{m=1}^k \int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_{m-1},s,t_{m+1},\ldots,t_k)\ ds,

so {M_k} is the operator norm of {{\cal L}_k}. Another interpretation of {M_k({\cal R}_k)} is that the probability that a rook moving randomly in the unit cube {[0,1]^k} stays in the simplex {{\cal R}_k} for {n} moves is asymptotically {(M_k({\cal R}_k)/k + o(1))^n}.
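
This interpretation can be probed empirically. The sketch below assumes the natural reading of the rook dynamics (each move re-randomises one uniformly chosen coordinate in {[0,1]}), and uses a simple resampling scheme to keep the walker population stable; the per-move survival rate should then approach {M_k/k}:

```python
import numpy as np

rng = np.random.default_rng(1)

def survival_rate(k, n_moves=40, pop=100_000, burn_in=10):
    """Per-move survival probability of a random rook confined to R_k.

    A move re-randomises one uniformly chosen coordinate in [0,1]; a walker
    "dies" once its coordinates sum to more than 1.  Survivors are resampled
    back to a fixed population ("go with the winners") so that, after burn-in,
    the per-move survival rate approximates M_k / k."""
    t = rng.random((pop, k))
    t = t[t.sum(axis=1) <= 1]                    # uniform start inside the simplex
    rates = []
    for _ in range(n_moves):
        m = rng.integers(k, size=len(t))
        t[np.arange(len(t)), m] = rng.random(len(t))
        alive = t.sum(axis=1) <= 1
        rates.append(alive.mean())
        t = t[alive][rng.integers(alive.sum(), size=pop)]
    return np.mean(rates[burn_in:])

k = 3
print(k * survival_rate(k))   # rough Monte Carlo estimate of M_3
```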

We now have a fairly good asymptotic understanding of {M_k({\cal R}_k)}, with the bounds

\displaystyle  \log k - 2 \log\log k -2 \leq M_k({\cal R}_k) \leq \log k + \log\log k + 2

holding for sufficiently large {k}. There is however still room to tighten the bounds on {M_k({\cal R}_k)} for small {k}; I’ll summarise some of the ideas discussed so far below the fold.

For Ingredient 4, the basic tool is this:

Theorem 1 (Maynard) If {EH[\theta]} is true and {M_k({\cal R}_k) > \frac{2m}{\theta}}, then {H_m \leq H(k)}.

Thus, for instance, it is known that {M_{105} > 4} and {H(105)=600}, and this together with the Bombieri-Vinogradov theorem gives {H_1\leq 600}. This result is proven in Maynard's paper, and an alternate proof is also given in the previous blog post.
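
In code, the deduction in Theorem 1 is a one-line search over a table of known quantities. The sketch below uses only the data quoted in this post ({\theta = 1/2} from Bombieri-Vinogradov, together with {M_{105} > 4} and {H(105)=600}); the value {4.0002} is merely a placeholder for the proven lower bound on {M_{105}}:

```python
def h_m_bound(m, theta, mk_lower, h_of_k):
    """Theorem 1: if M_k > 2m/theta then H_m <= H(k); return the best such H(k).

    mk_lower: {k: proven lower bound on M_k};  h_of_k: {k: H(k)}."""
    ok = [k for k, mk in mk_lower.items() if mk > 2 * m / theta and k in h_of_k]
    return min(h_of_k[k] for k in ok) if ok else None

mk_lower = {105: 4.0002}    # placeholder for the proven bound M_105 > 4
h_of_k = {105: 600}         # narrowest admissible 105-tuple has diameter 600
print(h_m_bound(1, 0.5, mk_lower, h_of_k))   # 600, recovering H_1 <= 600
```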

We have a number of ways to relax the hypotheses of this result, which we also summarise below the fold.


For each natural number {m}, let {H_m} denote the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

where {p_n} denotes the {n^{th}} prime. In other words, {H_m} is the least quantity such that there are infinitely many intervals of length {H_m} that contain {m+1} or more primes. Thus, for instance, the twin prime conjecture is equivalent to the assertion that {H_1 = 2}, and the prime tuples conjecture would imply that {H_m} is equal to the diameter of the narrowest admissible tuple of cardinality {m+1} (thus we conjecturally have {H_1 = 2}, {H_2 = 6}, {H_3 = 8}, {H_4 = 12}, {H_5 = 16}, and so forth; see this web page for further continuation of this sequence).
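
For readers who wish to experiment with admissible tuples: a tuple is admissible if and only if it omits at least one residue class modulo every prime {p}, and since {k} elements can never cover all residue classes of a prime {p > k}, only the primes {p \leq k} need to be checked. A direct transcription into code:

```python
def is_admissible(tup):
    """True iff the tuple omits at least one residue class modulo every prime.
    Primes p > len(tup) can never be fully covered, so only p <= k are checked."""
    k = len(tup)
    for p in range(2, k + 1):
        if all(p % d for d in range(2, p)):        # p is prime
            if len({h % p for h in tup}) == p:     # all residues mod p are hit
                return False
    return True

print(is_admissible((0, 2, 6)))    # True: diameter 6, matching conjectural H_2 = 6
print(is_admissible((0, 2, 4)))    # False: covers all residue classes mod 3
```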

In 2004, Goldston, Pintz, and Yildirim established the bound {H_1 \leq 16} conditional on the Elliott-Halberstam conjecture, which remains unproven. However, no unconditional finiteness of {H_1} was obtained (although they famously obtained the non-trivial bound {p_{n+1}-p_n = o(\log p_n)}), and even on the Elliott-Halberstam conjecture no finiteness result on the higher {H_m} was obtained either (although they were able to show {p_{n+2}-p_n=o(\log p_n)} on this conjecture). In the recent breakthrough of Zhang, the unconditional bound {H_1 \leq 70,000,000} was obtained, by establishing a weak partial version of the Elliott-Halberstam conjecture; by refining these methods, the Polymath8 project (which I suppose we could retroactively call the Polymath8a project) then lowered this bound to {H_1 \leq 4,680}.

With the very recent preprint of James Maynard, we have the following further substantial improvements:

Theorem 1 (Maynard’s theorem) Unconditionally, we have the following bounds:

  • {H_1 \leq 600}.
  • {H_m \leq C m^3 e^{4m}} for an absolute constant {C} and any {m \geq 1}.

If one assumes the Elliott-Halberstam conjecture, we have the following improved bounds:

  • {H_1 \leq 12}.
  • {H_2 \leq 600}.
  • {H_m \leq C m^3 e^{2m}} for an absolute constant {C} and any {m \geq 1}.

The final conclusion {H_m \leq C m^3 e^{2m}} on Elliott-Halberstam is not explicitly stated in Maynard’s paper, but follows easily from his methods, as I will describe below the fold. (At around the same time as Maynard’s work, I had also begun a similar set of calculations concerning {H_m}, but was only able to obtain the slightly weaker bound {H_m \leq C \exp( C m )} unconditionally.) In the converse direction, the prime tuples conjecture implies that {H_m} should be comparable to {m \log m}. Granville has also obtained the slightly weaker explicit bound {H_m \leq e^{8m+5}} for any {m \geq 1} by a slight modification of Maynard’s argument.

The arguments of Maynard avoid using the difficult partial results on (weakened forms of) the Elliott-Halberstam conjecture that were established by Zhang and then refined by Polymath8; instead, the main input is the classical Bombieri-Vinogradov theorem, combined with a sieve that is closer in spirit to an older sieve of Goldston and Yildirim than to the sieve used later by Goldston, Pintz, and Yildirim, on which almost all subsequent work is based.

The aim of the Polymath8b project is to obtain improved bounds on {H_1, H_2}, and higher values of {H_m}, either conditional on the Elliott-Halberstam conjecture or unconditional. The likeliest routes for doing this are by optimising Maynard’s arguments and/or combining them with some of the results from the Polymath8a project. This post is intended to be the first research thread for that purpose. To start the ball rolling, I am going to give below a presentation of Maynard’s results, with some minor technical differences (most significantly, I am using the Goldston-Pintz-Yildirim variant of the Selberg sieve, rather than the traditional “elementary Selberg sieve” that is used by Maynard (and also in the Polymath8 project), although it seems that the numerology obtained by both sieves is essentially the same). An alternate exposition of Maynard’s work has just been completed also by Andrew Granville.


I’ve just uploaded to the arXiv my article “Algebraic combinatorial geometry: the polynomial method in arithmetic combinatorics, incidence combinatorics, and number theory“, submitted to the new journal “EMS surveys in the mathematical sciences“.  This is the first draft of a survey article on the polynomial method – a technique in combinatorics and number theory for controlling a relevant set of points by comparing it with the zero set of a suitably chosen polynomial, and then using tools from algebraic geometry (e.g. Bezout’s theorem) on that zero set. As such, the method combines algebraic geometry with combinatorial geometry, and could be viewed as the philosophy of a combined field which I dub “algebraic combinatorial geometry”.  There is also an important extension of this method when one is working over the reals, in which methods from algebraic topology (e.g. the ham sandwich theorem and its generalisation to polynomials), and not just algebraic geometry, also come into play.

The polynomial method has been used independently many times in mathematics; for instance, it plays a key role in the proof of Baker’s theorem in transcendence theory, or Stepanov’s method in giving an elementary proof of the Riemann hypothesis for curves over finite fields; in combinatorics, the nullstellensatz of Alon is also another relatively early use of the polynomial method.  More recently, it underlies Dvir’s proof of the Kakeya conjecture over finite fields and Guth and Katz’s near-complete solution to the Erdos distance problem in the plane, and can be used to give a short proof of the Szemeredi-Trotter theorem.  One of the aims of this survey is to try to present all of these disparate applications of the polynomial method in a somewhat unified context; my hope is that there will eventually be a systematic foundation for algebraic combinatorial geometry which naturally contains all of these different instances of the polynomial method (and also suggests new instances to explore); but the field is unfortunately not at that stage of maturity yet.

This is something of a first draft, so comments and suggestions are even more welcome than usual.  (For instance, I have already had my attention drawn to some additional uses of the polynomial method in the literature that I was not previously aware of.)

Define a partition of {1} to be a finite or infinite multiset {\Sigma} of real numbers in the interval {I := (0,1]} (that is, an unordered set of real numbers in {I}, possibly with multiplicity) whose total sum is {1}: {\sum_{t \in \Sigma}t = 1}. For instance, {\{1/2,1/4,1/8,1/16,\ldots\}} is a partition of {1}. Such partitions arise naturally when trying to decompose a large object into smaller ones, for instance:

  1. (Prime factorisation) Given a natural number {n}, one can decompose it into prime factors {n = p_1 \ldots p_k} (counting multiplicity), and then the multiset

    \displaystyle  \Sigma_{PF}(n) := \{ \frac{\log p_1}{\log n}, \ldots,\frac{\log p_k}{\log n} \}

    is a partition of {1}.

  2. (Cycle decomposition) Given a permutation {\sigma \in S_n} on {n} labels {\{1,\ldots,n\}}, one can decompose {\sigma} into cycles {C_1,\ldots,C_k}, and then the multiset

    \displaystyle  \Sigma_{CD}(\sigma) := \{ \frac{|C_1|}{n}, \ldots, \frac{|C_k|}{n} \}

    is a partition of {1}.

  3. (Normalisation) Given a multiset {\Gamma} of positive real numbers whose sum {S := \sum_{x\in \Gamma}x} is finite and non-zero, the multiset

    \displaystyle  \Sigma_N( \Gamma) := \frac{1}{S} \cdot \Gamma = \{ \frac{x}{S}: x \in \Gamma \}

    is a partition of {1}.

In the spirit of the universality phenomenon, one can ask what a “typical” partition should look like; thus one seeks a natural probability distribution on the space of all partitions, analogous to (say) the gaussian distributions on the real line, or GUE distributions on point processes on the line, and so forth. It turns out that there is one natural such distribution which is related to all three examples above, known as the Poisson-Dirichlet distribution. To describe this distribution, we first have to deal with the problem that it is not immediately obvious how to cleanly parameterise the space of partitions, given that the cardinality of the partition can be finite or infinite, that multiplicity is allowed, and that we would like to identify two partitions that are permutations of each other.

One way to proceed is to view a random partition {\Sigma} as a type of point process on the interval {I}, with the constraint that {\sum_{x \in \Sigma} x = 1}, in which case one can study statistics such as the counting functions

\displaystyle  N_{[a,b]} := |\Sigma \cap [a,b]| = \sum_{x \in\Sigma} 1_{[a,b]}(x)

(where the cardinality here counts multiplicity). This can certainly be done, although in the case of the Poisson-Dirichlet process, the formulae for the joint distribution of such counting functions are moderately complicated. Another way to proceed is to order the elements of {\Sigma} in decreasing order

\displaystyle  t_1 \geq t_2 \geq t_3 \geq \ldots \geq 0,

with the convention that one pads the sequence {t_n} by an infinite number of zeroes if {\Sigma} is finite; this identifies the space of partitions with an infinite dimensional simplex

\displaystyle  \{ (t_1,t_2,\ldots) \in [0,1]^{\bf N}: t_1 \geq t_2 \geq \ldots; \sum_{n=1}^\infty t_n = 1 \}.

However, it turns out that the process of ordering the elements is not “smooth” (basically because functions such as {(x,y) \mapsto \max(x,y)} and {(x,y) \mapsto \min(x,y)} are not smooth) and the formulae for the joint distribution in the case of the Poisson-Dirichlet process are again complicated.

It turns out that there is a better (or at least “smoother”) way to enumerate the elements of a partition {\Sigma} than the ordered method, although it is random rather than deterministic. This procedure (which I learned from this paper of Donnelly and Grimmett) works as follows.

  1. Given a partition {\Sigma}, let {u_1} be an element of {\Sigma} chosen at random, with each element {t\in \Sigma} having a probability {t} of being chosen as {u_1} (so if {t \in \Sigma} occurs with multiplicity {m}, the net probability that {t} is chosen as {u_1} is actually {mt}). Note that this is well-defined since the elements of {\Sigma} sum to {1}.
  2. Now suppose {u_1} is chosen. If {\Sigma \backslash \{u_1\}} is empty, we set {u_2,u_3,\ldots} all equal to zero and stop. Otherwise, let {u_2} be an element of {\frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})} chosen at random, with each element {t \in \frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})} having a probability {t} of being chosen as {u_2}. (For instance, if {u_1} occurred with some multiplicity {m>1} in {\Sigma}, then {u_2} can equal {\frac{u_1}{1-u_1}} with probability {(m-1)u_1/(1-u_1)}.)
  3. Now suppose {u_1,u_2} are both chosen. If {\Sigma \backslash \{u_1,u_2\}} is empty, we set {u_3, u_4, \ldots} all equal to zero and stop. Otherwise, let {u_3} be an element of {\frac{1}{1-u_1-u_2} \cdot (\Sigma\backslash \{u_1,u_2\})}, with each element {t \in \frac{1}{1-u_1-u_2} \cdot (\Sigma\backslash \{u_1,u_2\})} having a probability {t} of being chosen as {u_3}.
  4. We continue this process indefinitely to create elements {u_1,u_2,u_3,\ldots \in [0,1]}.

We denote the random sequence {Enum(\Sigma) := (u_1,u_2,\ldots) \in [0,1]^{\bf N}} formed from a partition {\Sigma} in the above manner as the random normalised enumeration of {\Sigma}; this is a random variable in the infinite unit cube {[0,1]^{\bf N}}, and can be defined recursively by the formula

\displaystyle  Enum(\Sigma) = (u_1, Enum(\frac{1}{1-u_1} \cdot (\Sigma\backslash \{u_1\})))

with {u_1} drawn randomly from {\Sigma}, with each element {t \in \Sigma} chosen with probability {t}, except when {\Sigma =\{1\}} in which case we instead have

\displaystyle  Enum(\{1\}) = (1, 0,0,\ldots).

Note that one can recover {\Sigma} from any of its random normalised enumerations {Enum(\Sigma) := (u_1,u_2,\ldots)} by the formula

\displaystyle  \Sigma = \{ u_1, (1-u_1) u_2,(1-u_1)(1-u_2)u_3,\ldots\} \ \ \ \ \ (1)

with the convention that one discards any zero elements on the right-hand side. Thus {Enum} can be viewed as a (stochastic) parameterisation of the space of partitions by the unit cube {[0,1]^{\bf N}}, which is a simpler domain to work with than the infinite-dimensional simplex mentioned earlier.
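
For finite partitions (represented simply as lists of positive reals summing to {1}), both the enumeration procedure and its inverse (1) are a few lines of code; the following sketch is illustrative only:

```python
import random

def enum(sigma, rng=random):
    """Random normalised enumeration of a finite partition of 1,
    given as a list of positive reals summing to 1 (size-biased picking)."""
    sigma = list(sigma)
    u, scale = [], 1.0
    while sigma:
        t = rng.choices(sigma, weights=sigma)[0]   # pick t with probability t/scale
        sigma.remove(t)
        u.append(t / scale)
        scale -= t                                 # mass remaining after deleting t
    return u

def reconstruct(u):
    """Invert enum via formula (1): {u1, (1-u1)u2, (1-u1)(1-u2)u3, ...}."""
    out, rem = [], 1.0
    for x in u:
        out.append(rem * x)
        rem *= 1 - x
    return [t for t in out if t > 0]

sigma = [0.5, 0.25, 0.125, 0.125]
print(sorted(reconstruct(enum(sigma))), sorted(sigma))   # equal up to rounding
```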

Note that this random enumeration procedure can also be adapted to the three models described earlier:

  1. Given a natural number {n}, one can randomly enumerate its prime factors {n =p'_1 p'_2 \ldots p'_k} by letting each prime factor {p} of {n} be equal to {p'_1} with probability {\frac{\log p}{\log n}}, then once {p'_1} is chosen, let each remaining prime factor {p} of {n/p'_1} be equal to {p'_2} with probability {\frac{\log p}{\log n/p'_1}}, and so forth.
  2. Given a permutation {\sigma\in S_n}, one can randomly enumerate its cycles {C'_1,\ldots,C'_k} by letting each cycle {C} in {\sigma} be equal to {C'_1} with probability {\frac{|C|}{n}}, and once {C'_1} is chosen, let each remaining cycle {C} be equal to {C'_2} with probability {\frac{|C|}{n-|C'_1|}}, and so forth. Alternatively, one can traverse the elements of {\{1,\ldots,n\}} in random order, then let {C'_1} be the first cycle one encounters when performing this traversal, let {C'_2} be the next cycle (not equal to {C'_1}) one encounters when performing this traversal, and so forth.
  3. Given a multiset {\Gamma} of positive real numbers whose sum {S := \sum_{x\in\Gamma} x} is finite, we can randomly enumerate the elements {x'_1,x'_2,\ldots} of this multiset by letting each {x \in \Gamma} have a {\frac{x}{S}} probability of being set equal to {x'_1}, and then once {x'_1} is chosen, let each remaining {x \in \Gamma\backslash \{x'_1\}} have a {\frac{x}{S-x'_1}} probability of being set equal to {x'_2}, and so forth.

We then have the following result:

Proposition 1 (Existence of the Poisson-Dirichlet process) There exists a random partition {\Sigma} whose random enumeration {Enum(\Sigma) = (u_1,u_2,\ldots)} has the uniform distribution on {[0,1]^{\bf N}}, thus {u_1,u_2,\ldots} are independently and identically distributed copies of the uniform distribution on {[0,1]}.

A random partition {\Sigma} with this property will be called the Poisson-Dirichlet process. This process, first introduced by Kingman, can be described explicitly using (1) as

\displaystyle  \Sigma = \{ u_1, (1-u_1) u_2,(1-u_1)(1-u_2)u_3,\ldots\},

where {u_1,u_2,\ldots} are iid copies of the uniform distribution on {[0,1]}, although it is not immediately obvious from this definition that {Enum(\Sigma)} is indeed uniformly distributed on {[0,1]^{\bf N}}. We prove this proposition below the fold.

An equivalent definition of a Poisson-Dirichlet process is a random partition {\Sigma} with the property that

\displaystyle  (u_1, \frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})) \equiv (U, \Sigma) \ \ \ \ \ (2)

where {u_1} is a random element of {\Sigma} with each {t \in\Sigma} having a probability {t} of being equal to {u_1}, {U} is a uniform variable on {[0,1]} that is independent of {\Sigma}, and {\equiv} denotes equality of distribution. This can be viewed as a sort of stochastic self-similarity property of {\Sigma}: if one randomly removes one element from {\Sigma} and rescales, one gets a new copy of {\Sigma}.

It turns out that each of the three ways to generate partitions listed above can lead to the Poisson-Dirichlet process, either directly or in a suitable limit. We begin with the third way, namely by normalising a Poisson process to have sum {1}:

Proposition 2 (Poisson-Dirichlet processes via Poisson processes) Let {a>0}, and let {\Gamma_a} be a Poisson process on {(0,+\infty)} with intensity function {t \mapsto \frac{1}{t} e^{-at}}. Then the sum {S :=\sum_{x \in \Gamma_a} x} is almost surely finite, and the normalisation {\Sigma_N(\Gamma_a) = \frac{1}{S} \cdot \Gamma_a} is a Poisson-Dirichlet process.

Again, we prove this proposition below the fold. Now we turn to the second way (a topic, incidentally, that was briefly touched upon in this previous blog post):

Proposition 3 (Large cycles of a typical permutation) For each natural number {n}, let {\sigma} be a permutation drawn uniformly at random from {S_n}. Then the random partition {\Sigma_{CD}(\sigma)} converges in the limit {n \rightarrow\infty} to a Poisson-Dirichlet process {\Sigma} in the following sense: given any fixed sequence of intervals {[a_1,b_1],\ldots,[a_k,b_k] \subset I} (independent of {n}), the joint discrete random variable {(N_{[a_1,b_1]}(\Sigma_{CD}(\sigma)),\ldots,N_{[a_k,b_k]}(\Sigma_{CD}(\sigma)))} converges in distribution to {(N_{[a_1,b_1]}(\Sigma),\ldots,N_{[a_k,b_k]}(\Sigma))}.
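
One can probe this convergence numerically. The following illustrative sketch compares the mean of {N_{[1/2,1]}} for the cycle partition of a random permutation against the same statistic for a stick-breaking sample of the Poisson-Dirichlet process; both means should be close to {\int_{1/2}^1 \frac{dx}{x} = \log 2}, the classical expected number of cycles of length at least {n/2}:

```python
import numpy as np

rng = np.random.default_rng(2)

def perm_count(n, a=0.5):
    """N_{[a,1]} for the cycle partition of a uniform random permutation of S_n."""
    p, seen, count = rng.permutation(n), np.zeros(n, dtype=bool), 0
    for i in range(n):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:                 # walk once around the cycle of i
                seen[j], j, length = True, p[j], length + 1
            count += (length >= a * n)
    return count

def pd_count(a=0.5):
    """N_{[a,1]} for a Poisson-Dirichlet sample, via the stick-breaking (1)."""
    rem, count = 1.0, 0
    while rem > a:                 # once rem <= a, no later element can reach a
        u = rng.random()
        count += (rem * u >= a)
        rem *= 1 - u
    return count

print(np.mean([perm_count(2000) for _ in range(1000)]))
print(np.mean([pd_count() for _ in range(20000)]))
print(np.log(2))   # the common limiting mean
```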

Finally, we turn to the first way:

Proposition 4 (Large prime factors of a typical number) Let {x > 0}, and let {N_x} be a random natural number chosen according to one of the following three rules:

  1. (Uniform distribution) {N_x} is drawn uniformly at random from the natural numbers in {[1,x]}.
  2. (Shifted uniform distribution) {N_x} is drawn uniformly at random from the natural numbers in {[x,2x]}.
  3. (Zeta distribution) Each natural number {n} has a probability {\frac{1}{\zeta(s)}\frac{1}{n^s}} of being equal to {N_x}, where {s := 1 +\frac{1}{\log x}} and {\zeta(s):=\sum_{n=1}^\infty \frac{1}{n^s}}.

Then {\Sigma_{PF}(N_x)} converges as {x \rightarrow \infty} to a Poisson-Dirichlet process {\Sigma} in the same fashion as in Proposition 3.

The process {\Sigma_{PF}(N_x)} was first studied by Billingsley (and also later by Knuth-Trabb Pardo and by Vershik), but the formulae were initially rather complicated; the proposition above is due to Donnelly and Grimmett, although the third case of the proposition is substantially easier and appears in the earlier work of Lloyd. We prove the proposition below the fold.

The previous two propositions suggest an interesting analogy between large random integers and large random permutations; see this ICM article of Vershik and this non-technical article of Granville (which, incidentally, was once adapted into a play) for further discussion.

As a sample application, consider the problem of estimating the number {\pi(x,x^{1/u})} of integers up to {x} which are not divisible by any prime larger than {x^{1/u}} (i.e. they are {x^{1/u}}-smooth), where {u>0} is a fixed real number. This is essentially (modulo some inessential technicalities concerning the distinction between the intervals {[x,2x]} and {[1,x]}) the probability that {\Sigma_{PF}(N_x)} avoids {[1/u,1]}, which by the above proposition converges to the probability that {\Sigma} avoids {[1/u,1]}. Below the fold we will show that this limiting probability is given by the Dickman function {\rho(u)}, defined by setting {\rho(u)=1} for {u \leq 1} and {u\rho'(u) = -\rho(u-1)} for {u > 1}, thus recovering the classical result of Dickman that {\pi(x,x^{1/u}) = (\rho(u)+o(1))x}.
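
As an illustrative numerical check (taking for granted the identification, proven below the fold, of {\rho(u)} with the probability that the Poisson-Dirichlet process avoids {[1/u,1]}), one can integrate the delay equation by Euler's method and compare against a direct stick-breaking simulation; at {u=2} both should match the exact value {\rho(2) = 1-\log 2 \approx 0.3069}:

```python
import numpy as np

rng = np.random.default_rng(3)

def rho(u, h=1e-4):
    """Dickman's function via Euler's method on u rho'(u) = -rho(u-1),
    with rho = 1 on [0, 1]."""
    grid = np.arange(0.0, u + h, h)
    r = np.ones_like(grid)
    for i in range(len(grid) - 1):
        t = grid[i]
        if t >= 1:
            # rho(t - 1) is read off the already-computed part of the grid
            r[i + 1] = r[i] - h * np.interp(t - 1, grid, r) / t
    return r[-1]

def pd_avoids(a, trials=200_000):
    """Monte Carlo probability that a Poisson-Dirichlet sample, generated by
    stick-breaking as in (1), has every element below a."""
    hits = 0
    for _ in range(trials):
        rem, ok = 1.0, True
        while rem >= a and ok:
            x = rem * rng.random()
            ok = x < a
            rem -= x
        hits += ok
    return hits / trials

u = 2.0
print(rho(u), pd_avoids(1 / u), 1 - np.log(2))   # all approximately 0.3069
```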

I thank Andrew Granville and Anatoly Vershik for showing me the nice link between prime factors and the Poisson-Dirichlet process. The material here is standard, and (like many of the other notes on this blog) was primarily written for my own benefit, but it may be of interest to some readers. In preparing this article I found this exposition by Kingman to be helpful.

Note: this article will emphasise the computations rather than rigour, and in particular will rely on informal use of infinitesimals to avoid dealing with stochastic calculus or other technicalities. We adopt the convention that we will neglect higher order terms in infinitesimal calculations, e.g. if {dt} is infinitesimal then we will abbreviate {dt + o(dt)} simply as {dt}.


As in all previous posts in this series, we adopt the following asymptotic notation: {x} is a parameter going off to infinity, and all quantities may depend on {x} unless explicitly declared to be “fixed”. The asymptotic notation {O(), o(), \ll} is then defined relative to this parameter. A quantity {q} is said to be of polynomial size if one has {q = O(x^{O(1)})}, and bounded if {q=O(1)}. We also write {X \lessapprox Y} for {X \ll x^{o(1)} Y}, and {X \sim Y} for {X \ll Y \ll X}.

The purpose of this (rather technical) post is both to roll over the polymath8 research thread from this previous post, and also to record the details of the latest improvement to the Type I estimates (based on exploiting additional averaging and using Deligne’s proof of the Weil conjectures), which leads to a slight improvement in the numerology.

In order to obtain this new Type I estimate, we need to strengthen the previously used properties of “dense divisibility” or “double dense divisibility” as follows.

Definition 1 (Multiple dense divisibility) Let {y \geq 1}. For each natural number {k \geq 0}, we define a notion of {k}-tuply {y}-dense divisibility recursively as follows:

  • Every natural number {n} is {0}-tuply {y}-densely divisible.
  • If {k \geq 1} and {n} is a natural number, we say that {n} is {k}-tuply {y}-densely divisible if, whenever {i,j \geq 0} are natural numbers with {i+j=k-1}, and {1 \leq R \leq n}, one can find a factorisation {n = qr} with {y^{-1} R \leq r \leq R} such that {q} is {i}-tuply {y}-densely divisible and {r} is {j}-tuply {y}-densely divisible.

We let {{\mathcal D}^{(k)}_y} denote the set of {k}-tuply {y}-densely divisible numbers. We abbreviate “{1}-tuply densely divisible” as “densely divisible”, “{2}-tuply densely divisible” as “doubly densely divisible”, and so forth; we also abbreviate {{\mathcal D}^{(1)}_y} as {{\mathcal D}_y}.
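
Definition 1 can be tested mechanically for small {n}. The sketch below relies on the easily verified observation that the "for all {R}" clause holds precisely when the divisors {r} of {n} with {r} being {j}-tuply and {n/r} being {i}-tuply {y}-densely divisible form a chain that starts at {1}, has consecutive ratios at most {y}, and whose largest element is within a factor {y} of {n}:

```python
from functools import lru_cache

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

@lru_cache(maxsize=None)
def densely_divisible(n, k, y):
    """Is n k-tuply y-densely divisible (Definition 1)?  Brute force for small n."""
    if k == 0:
        return True
    for i in range(k):                 # every split i + j = k - 1 must work
        j = k - 1 - i
        good = [r for r in divisors(n)
                if densely_divisible(r, j, y) and densely_divisible(n // r, i, y)]
        # the "for all R" clause <=> good divisors form a y-dense chain from 1 to ~n
        if not (good and good[0] == 1 and n <= y * good[-1]
                and all(b <= y * a for a, b in zip(good, good[1:]))):
            return False
    return True

print(densely_divisible(720, 2, 4))   # smooth numbers have densely packed divisors
print(densely_divisible(101, 1, 4))   # False: a prime has the divisor gap 1 -> 101
```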

Given any finitely supported sequence {\alpha: {\bf N} \rightarrow {\bf C}} and any primitive residue class {a\ (q)}, we define the discrepancy

\displaystyle \Delta(\alpha; a \ (q)) := \sum_{n: n = a\ (q)} \alpha(n) - \frac{1}{\phi(q)} \sum_{n: (n,q)=1} \alpha(n).

We now recall the key concept of a coefficient sequence, with some slight tweaks in the definitions that are technically convenient for this post.

Definition 2 A coefficient sequence is a finitely supported sequence {\alpha: {\bf N} \rightarrow {\bf R}} that obeys the bounds

\displaystyle  |\alpha(n)| \ll \tau^{O(1)}(n) \log^{O(1)}(x) \ \ \ \ \ (1)

for all {n}, where {\tau} is the divisor function.

  • (i) A coefficient sequence {\alpha} is said to be located at scale {N} for some {N \geq 1} if it is supported on an interval of the form {[cN, CN]} for some {1 \ll c < C \ll 1}.
  • (ii) A coefficient sequence {\alpha} located at scale {N} for some {N \geq 1} is said to obey the Siegel-Walfisz theorem if one has

    \displaystyle  | \Delta(\alpha 1_{(\cdot,q)=1}; a\ (r)) | \ll \tau(qr)^{O(1)} N \log^{-A} x \ \ \ \ \ (2)

    for any {q,r \geq 1}, any fixed {A}, and any primitive residue class {a\ (r)}.

  • (iii) A coefficient sequence {\alpha} is said to be smooth at scale {N} for some {N > 0} if it takes the form {\alpha(n) = \psi(n/N)} for some smooth function {\psi: {\bf R} \rightarrow {\bf C}} supported on an interval of size {O(1)} and obeying the derivative bounds

    \displaystyle  |\psi^{(j)}(t)| \lesssim \log^{O(1)} x \ \ \ \ \ (3)

    for all fixed {j \geq 0} (note that the implied constant in the {O()} notation may depend on {j}).

Note that we allow sequences to be smooth at scale {N} without being located at scale {N}; for instance if one arbitrarily translates a sequence that is both smooth and located at scale {N}, it will remain smooth at this scale but may not necessarily be located at this scale any more. Note also that we allow the smoothness scale {N} of a coefficient sequence to be less than one. This is to allow for the following convenient rescaling property: if {n \mapsto \psi(n)} is smooth at scale {N}, {q \geq 1}, and {a} is an integer, then {n \mapsto \psi(qn+a)} is smooth at scale {N/q}, even if {N/q} is less than one.

Now we adapt the Type I estimate to the {k}-tuply densely divisible setting.

Definition 3 (Type I estimates) Let {0 < \varpi < 1/4}, {0 < \delta < 1/4+\varpi}, and {0 < \sigma < 1/2} be fixed quantities, and let {k \geq 1} be a fixed natural number. We let {I} be an arbitrary bounded subset of {{\bf R}}, let {P_I := \prod_{p \in I} p}, and let {a\ (P_I)} be a primitive congruence class. We say that {Type^{(k)}_I[\varpi,\delta,\sigma]} holds if, whenever {M, N \gg 1} are quantities with

\displaystyle  M N \sim x \ \ \ \ \ (4)

and

\displaystyle  x^{1/2-\sigma} \lessapprox N \lessapprox x^{1/2-2\varpi-c} \ \ \ \ \ (5)

for some fixed {c>0}, and {\alpha,\beta} are coefficient sequences located at scales {M,N} respectively, with {\beta} obeying a Siegel-Walfisz theorem, we have

\displaystyle  \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}^{(k)}: q \leq x^{1/2+2\varpi}} |\Delta(\alpha * \beta; a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (6)

for any fixed {A>0}. Here, as in previous posts, {{\mathcal S}_I} denotes the square-free natural numbers whose prime factors lie in {I}.

The main theorem of this post is then

Theorem 4 (Improved Type I estimate) We have {Type^{(4)}_I[\varpi,\delta,\sigma]} whenever

\displaystyle  \frac{160}{3} \varpi + 16 \delta + \frac{34}{9} \sigma < 1

and

\displaystyle  64\varpi + 18\delta + 2\sigma < 1.

In practice, the first condition here is dominant. Except for weakening double dense divisibility to quadruple dense divisibility, this improves upon the previous Type I estimate that established {Type^{(2)}_I[\varpi,\delta,\sigma]} under the stricter hypothesis

\displaystyle  56 \varpi + 16 \delta + 4 \sigma < 1.

As in previous posts, Type I estimates (when combined with existing Type II and Type III estimates) lead to distribution results of Motohashi-Pintz-Zhang type. For any fixed {\varpi, \delta > 0} and {k \geq 1}, we let {MPZ^{(k)}[\varpi,\delta]} denote the assertion that

\displaystyle  \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}^{(k)}: q \leq x^{1/2+2\varpi}} |\Delta(\Lambda 1_{[x,2x]}; a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (7)

for any fixed {A > 0}, any bounded {I}, and any primitive {a\ (P_I)}, where {\Lambda} is the von Mangoldt function.

Corollary 5 We have {MPZ^{(4)}[\varpi,\delta]} whenever

\displaystyle  \frac{600}{7} \varpi + \frac{180}{7} \delta < 1 \ \ \ \ \ (8)

Proof: Setting {\sigma} sufficiently close to {1/10}, we see from the above theorem that {Type^{(4)}_{I}[\varpi,\delta,\sigma]} holds whenever

\displaystyle  \frac{600}{7} \varpi + \frac{180}{7} \delta < 1

and

\displaystyle  80 \varpi + \frac{45}{2} \delta < 1.

The second condition is implied by the first (since {80 \leq \frac{600}{7}} and {\frac{45}{2} \leq \frac{180}{7}}) and can be deleted.

From this previous post we know that {Type^{(4)}_{II}[\varpi,\delta]} (which we define analogously to {Type'_{II}[\varpi,\delta], Type''_{II}[\varpi,\delta]} from previous sections) holds whenever

\displaystyle  68 \varpi + 14 \delta < 1

while {Type^{(4)}_{III}[\varpi,\delta,\sigma]} holds with {\sigma} sufficiently close to {1/10} whenever

\displaystyle  70 \varpi + 5 \delta < 1.

Again, these conditions are implied by (8). The claim then follows from the Heath-Brown identity and dyadic decomposition as in this previous post. \Box

As before, we let {DHL[k_0,2]} denote the claim that given any admissible {k_0}-tuple {{\mathcal H}}, there are infinitely many translates of {{\mathcal H}} that contain at least two primes.

Corollary 6 We have {DHL[k_0,2]} with {k_0 = 632}.

This follows from the Pintz sieve, as discussed below the fold. Combining this with the best known prime tuples, we obtain that there are infinitely many prime gaps of size at most {4,680}, improving slightly over the previous record of {5,414}.


[Note: the content of this post is standard number theoretic material that can be found in many textbooks (I am relying principally here on Iwaniec and Kowalski); I am not claiming any new progress on any version of the Riemann hypothesis here, but am simply arranging existing facts together.]

The Riemann hypothesis is arguably the most important and famous unsolved problem in number theory. It is usually phrased in terms of the Riemann zeta function {\zeta}, defined by

\displaystyle  \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}

for {\hbox{Re}(s)>1} and extended meromorphically to other values of {s}, and asserts that the only zeroes of {\zeta} in the critical strip {\{ s: 0 \leq \hbox{Re}(s) \leq 1 \}} lie on the critical line {\{ s: \hbox{Re}(s)=\frac{1}{2} \}}.

One of the main reasons that the Riemann hypothesis is so important to number theory is that the zeroes of the zeta function in the critical strip control the distribution of the primes. To see the connection, let us perform the following formal manipulations (ignoring for now the important analytic issues of convergence of series, interchanging sums, branches of the logarithm, etc., in order to focus on the intuition). The starting point is the fundamental theorem of arithmetic, which asserts that every natural number {n} has a unique factorisation {n = p_1^{a_1} \ldots p_k^{a_k}} into primes. Taking logarithms, we obtain the identity

\displaystyle  \log n = \sum_{d|n} \Lambda(d) \ \ \ \ \ (1)

for any natural number {n}, where {\Lambda} is the von Mangoldt function, thus {\Lambda(n) = \log p} when {n} is a power of a prime {p} and zero otherwise. If we then perform a “Dirichlet-Fourier transform” by viewing both sides of (1) as coefficients of a Dirichlet series, we conclude that

\displaystyle  \sum_{n=1}^\infty \frac{\log n}{n^s} = \sum_{n=1}^\infty \sum_{d|n} \frac{\Lambda(d)}{n^s},

formally at least. Writing {n=dm}, the right-hand side factors as

\displaystyle (\sum_{d=1}^\infty \frac{\Lambda(d)}{d^s}) (\sum_{m=1}^\infty \frac{1}{m^s}) = \zeta(s) \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s}

whereas the left-hand side is (formally, at least) equal to {-\zeta'(s)}. We conclude the identity

\displaystyle  \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)},

(formally, at least). If we integrate this, we are formally led to the identity

\displaystyle  \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} = \log \zeta(s)

or equivalently to the exponential identity

\displaystyle  \zeta(s) = \exp( \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} ) \ \ \ \ \ (2)

which allows one to reconstruct the Riemann zeta function from the von Mangoldt function. (It is an instructive exercise in enumerative combinatorics to try to prove this identity directly, at the level of formal Dirichlet series, using the fundamental theorem of arithmetic of course.) Now, as {\zeta} has a simple pole at {s=1} and zeroes at various places {s=\rho} on the critical strip, we expect a Weierstrass factorisation which formally (ignoring normalisation issues) takes the form

\displaystyle  \zeta(s) = \frac{1}{s-1} \times \prod_\rho (s-\rho) \times \ldots

(where we will be intentionally vague about what is hiding in the {\ldots} terms) and so we expect an expansion of the form

\displaystyle  \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} = - \log(s-1) + \sum_\rho \log(s-\rho) + \ldots. \ \ \ \ \ (3)

Note that

\displaystyle  \frac{1}{s-\rho} = \int_1^\infty t^{\rho-s} \frac{dt}{t}

and hence on integrating in {s} we formally have

\displaystyle  \log(s-\rho) = -\int_1^\infty t^{\rho-s-1} \frac{dt}{\log t}

and thus we have the heuristic approximation

\displaystyle  \log(s-\rho) \approx - \sum_{n=1}^\infty \frac{n^{\rho-s-1}}{\log n}.

Comparing this with (3), we are led to a heuristic form of the explicit formula

\displaystyle  \Lambda(n) \approx 1 - \sum_\rho n^{\rho-1}. \ \ \ \ \ (4)

When trying to make this heuristic rigorous, it turns out (due to the rough nature of both sides of (4)) that one has to interpret the explicit formula in some suitably weak sense, for instance by testing (4) against the indicator function {1_{[0,x]}(n)} to obtain the formula

\displaystyle  \sum_{n \leq x} \Lambda(n) \approx x - \sum_\rho \frac{x^{\rho}}{\rho} \ \ \ \ \ (5)

which can in fact be made into a rigorous statement after some truncation (the von Mangoldt explicit formula). From this formula we now see how helpful the Riemann hypothesis will be to control the distribution of the primes; indeed, if the Riemann hypothesis holds, so that {\hbox{Re}(\rho) = 1/2} for all zeroes {\rho}, it is not difficult to use (a suitably rigorous version of) the explicit formula to conclude that

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x ) \ \ \ \ \ (6)

as {x \rightarrow \infty}, giving a near-optimal “square root cancellation” for the sum {\sum_{n \leq x} \Lambda(n)-1}. Conversely, if one can somehow establish a bound of the form

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+\epsilon} )

for any fixed {\epsilon}, then the explicit formula can be used to then deduce that all zeroes {\rho} of {\zeta} have real part at most {1/2+\epsilon}, which leads to the following remarkable amplification phenomenon (analogous, as we will see later, to the tensor power trick): any bound of the form

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+o(1)} )

can be automatically amplified to the stronger bound

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x )

with both bounds being equivalent to the Riemann hypothesis. Of course, the Riemann hypothesis for the Riemann zeta function remains open; but partial progress on this hypothesis (in the form of zero-free regions for the zeta function) leads to partial versions of the asymptotic (6). For instance, it is known that there are no zeroes of the zeta function on the line {\hbox{Re}(s)=1}, and this can be shown by some analysis (either complex analysis or Fourier analysis) to be equivalent to the prime number theorem

\displaystyle  \sum_{n \leq x} \Lambda(n) =x + o(x);

see e.g. this previous blog post for more discussion.
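
These assertions are easy to probe numerically for small {x} (which, of course, proves nothing about the asymptotics): the sketch below sieves out {\Lambda(n)}, computes {\sum_{n\leq x} \Lambda(n) - x}, and compares it against the conjectural {x^{1/2}\log^2 x} scale:

```python
import numpy as np

def mangoldt(x):
    """Lambda(n) for 0 <= n <= x: log p at prime powers p^j, else 0."""
    lam = np.zeros(x + 1)
    composite = np.zeros(x + 1, dtype=bool)
    for p in range(2, x + 1):
        if not composite[p]:
            composite[p * p::p] = True      # ordinary sieve of Eratosthenes
            q = p
            while q <= x:
                lam[q] = np.log(p)          # Lambda(p^j) = log p
                q *= p
    return lam

psi = np.cumsum(mangoldt(10**6))            # psi[x] = sum_{n <= x} Lambda(n)
for x in (10**4, 10**5, 10**6):
    print(x, psi[x] - x, np.sqrt(x) * np.log(x) ** 2)
```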

The main engine powering the above observations was the fundamental theorem of arithmetic, and so one can expect to establish similar assertions in other contexts where some version of the fundamental theorem of arithmetic is available. One of the simplest such variants is to continue working on the natural numbers, but “twist” them by a Dirichlet character {\chi: {\bf Z} \rightarrow {\bf C}}. The analogue of the Riemann zeta function is then the Dirichlet {L}-function

\displaystyle  L(s,\chi) := \sum_{n=1}^\infty \frac{\chi(n)}{n^s}.

The identity (1), which encoded the fundamental theorem of arithmetic, can be twisted by {\chi} to obtain

\displaystyle  \chi(n) \log n = \sum_{d|n} \chi(d) \Lambda(d) \chi(\frac{n}{d}) \ \ \ \ \ (7)

and essentially the same manipulations as before eventually lead to the exponential identity

\displaystyle  L(s,\chi) = \exp( \sum_{n=1}^\infty \frac{\chi(n) \Lambda(n)}{\log n} n^{-s} ), \ \ \ \ \ (8)

which is a twisted version of (2), as well as a twisted explicit formula, which heuristically takes the form

\displaystyle  \chi(n) \Lambda(n) \approx - \sum_\rho n^{\rho-1} \ \ \ \ \ (9)

for non-principal {\chi}, where {\rho} now ranges over the zeroes of {L(s,\chi)} in the critical strip, rather than the zeroes of {\zeta(s)}; a more accurate formulation, following (5), would be

\displaystyle  \sum_{n \leq x} \chi(n) \Lambda(n) \approx - \sum_\rho \frac{x^{\rho}}{\rho}. \ \ \ \ \ (10)

(See e.g. Davenport’s book for a more rigorous discussion which emphasises the analogy between the Riemann zeta function and the Dirichlet {L}-function.) If we assume the generalised Riemann hypothesis, which asserts that all zeroes of {L(s,\chi)} in the critical strip also lie on the critical line, then we obtain the bound

\displaystyle  \sum_{n \leq x} \chi(n) \Lambda(n) = O( x^{1/2} \log(x) \log(xq) )

for any non-principal Dirichlet character {\chi}, again demonstrating a near-optimal square root cancellation for this sum. Again, we have the amplification property that the above bound is implied by the apparently weaker bound

\displaystyle  \sum_{n \leq x} \chi(n) \Lambda(n) = O( x^{1/2+o(1)} )

(where {o(1)} denotes a quantity that goes to zero as {x \rightarrow \infty} for any fixed {q}).

Next, one can consider other number systems than the natural numbers {{\bf N}} and integers {{\bf Z}}. For instance, one can replace the integers {{\bf Z}} with rings {{\mathcal O}_K} of integers in other number fields {K} (i.e. finite extensions of {{\bf Q}}), such as the quadratic extensions {K = {\bf Q}[\sqrt{D}]} of the rationals for various square-free integers {D}, in which case the ring of integers would be the ring of quadratic integers {{\mathcal O}_K = {\bf Z}[\omega]} for a suitable generator {\omega} (it turns out that one can take {\omega = \sqrt{D}} if {D=2,3\hbox{ mod } 4}, and {\omega = \frac{1+\sqrt{D}}{2}} if {D=1 \hbox{ mod } 4}).

Here, it is not immediately obvious what the analogue of the natural numbers {{\bf N}} is in this setting, since rings such as {{\bf Z}[\omega]} do not come with a natural ordering. However, we can adopt an algebraic viewpoint to see the correct generalisation, observing that every natural number {n} generates a principal ideal {(n) = \{ an: a \in {\bf Z} \}} in the integers, and conversely every non-trivial ideal {{\mathfrak n}} in the integers is associated to precisely one natural number {n} in this fashion, namely the norm {N({\mathfrak n}) := |{\bf Z} / {\mathfrak n}|} of that ideal. So one can identify the natural numbers with the ideals of {{\bf Z}}. Furthermore, with this identification, the prime numbers correspond to the prime ideals, since if {p} is prime, and {a,b} are integers, then {ab \in (p)} if and only if one of {a \in (p)} or {b \in (p)} is true.

Finally, even in number systems (such as {{\bf Z}[\sqrt{-5}]}) in which the classical version of the fundamental theorem of arithmetic fails (e.g. {6 = 2 \times 3 = (1-\sqrt{-5})(1+\sqrt{-5})}), we have the fundamental theorem of arithmetic for ideals: every ideal {\mathfrak{n}} in a Dedekind domain (which includes the ring {{\mathcal O}_K} of integers in a number field as a key example) is uniquely representable (up to permutation) as the product of a finite number of prime ideals {\mathfrak{p}} (although these ideals might not necessarily be principal). For instance, in {{\bf Z}[\sqrt{-5}]}, the principal ideal {(6)} factors as the product of four prime (but non-principal) ideals {(2, 1+\sqrt{-5})}, {(2, 1-\sqrt{-5})}, {(3, 1+\sqrt{-5})}, {(3, 1-\sqrt{-5})}. (Note that the first two ideals {(2,1+\sqrt{-5})}, {(2,1-\sqrt{-5})} are actually equal to each other.)

Because we still have the fundamental theorem of arithmetic, we can develop analogues of the previous observations relating the Riemann hypothesis to the distribution of primes. The analogue of the Riemann zeta function is now the Dedekind zeta function

\displaystyle  \zeta_K(s) := \sum_{{\mathfrak n}} \frac{1}{N({\mathfrak n})^s}

where the summation is over all non-trivial ideals in {{\mathcal O}_K}. One can also define a von Mangoldt function {\Lambda_K({\mathfrak n})}, defined as {\log N( {\mathfrak p})} when {{\mathfrak n}} is a power of a prime ideal {{\mathfrak p}}, and zero otherwise; then the fundamental theorem of arithmetic for ideals can be encoded in an analogue of (1) (or (7)),

\displaystyle  \log N({\mathfrak n}) = \sum_{{\mathfrak d}|{\mathfrak n}} \Lambda_K({\mathfrak d}) \ \ \ \ \ (11)

which leads as before to an exponential identity

\displaystyle  \zeta_K(s) = \exp( \sum_{{\mathfrak n}} \frac{\Lambda_K({\mathfrak n})}{\log N({\mathfrak n})} N({\mathfrak n})^{-s} ) \ \ \ \ \ (12)

and an explicit formula of the heuristic form

\displaystyle  \Lambda({\mathfrak n}) \approx 1 - \sum_\rho N({\mathfrak n})^{\rho-1}

or, a little more accurately,

\displaystyle  \sum_{N({\mathfrak n}) \leq x} \Lambda({\mathfrak n}) \approx x - \sum_\rho \frac{x^{\rho}}{\rho} \ \ \ \ \ (13)

in analogy with (5) or (10). Again, a suitable Riemann hypothesis for the Dedekind zeta function leads to good asymptotics for the distribution of prime ideals, giving a bound of the form

\displaystyle  \sum_{N({\mathfrak n}) \leq x} \Lambda({\mathfrak n}) = x + O( \sqrt{x} \log(x) (d+\log(Dx)) )

where {D} is the conductor of {K} (which, in the case of number fields, is the absolute value of the discriminant of {K}) and {d = \hbox{dim}_{\bf Q}(K)} is the degree of the extension of {K} over {{\bf Q}}. As before, we have the amplification phenomenon that the above near-optimal square root cancellation bound is implied by the weaker bound

\displaystyle  \sum_{N({\mathfrak n}) \leq x} \Lambda({\mathfrak n}) = x + O( x^{1/2+o(1)} )

where {o(1)} denotes a quantity that goes to zero as {x \rightarrow \infty} (holding {K} fixed). See e.g. Chapter 5 of Iwaniec-Kowalski for details.

As was the case with the Dirichlet {L}-functions, one can twist the Dedekind zeta function example by characters, in this case the Hecke characters; we will not do this here, but see e.g. Section 3 of Iwaniec-Kowalski for details.

Very analogous considerations hold if we move from number fields to function fields. The simplest case is the function field associated to the affine line {{\mathbb A}^1} and a finite field {{\mathbb F} = {\mathbb F}_q} of some order {q}. The polynomial functions on the affine line {{\mathbb A}^1/{\mathbb F}} are just the usual polynomial ring {{\mathbb F}[t]}, which then play the role of the integers {{\bf Z}} (or {{\mathcal O}_K}) in previous examples. This ring happens to be a unique factorisation domain, so the situation is closely analogous to the classical setting of the Riemann zeta function. The analogue of the natural numbers are the monic polynomials (since every non-trivial principal ideal is generated by precisely one monic polynomial), and the analogue of the prime numbers are the irreducible monic polynomials. The norm {N(f)} of a polynomial is the order of {{\mathbb F}[t] / (f)}, which can be computed explicitly as

\displaystyle  N(f) = q^{\hbox{deg}(f)}.

Because of this, we will normalise things slightly differently here and use {\hbox{deg}(f)} in place of {\log N(f)} in what follows. The (local) zeta function {\zeta_{{\mathbb A}^1/{\mathbb F}}(s)} is then defined as

\displaystyle  \zeta_{{\mathbb A}^1/{\mathbb F}}(s) = \sum_f \frac{1}{N(f)^s}

where {f} ranges over monic polynomials, and the von Mangoldt function {\Lambda_{{\mathbb A}^1/{\mathbb F}}(f)} is defined to equal {\hbox{deg}(g)} when {f} is a power of a monic irreducible polynomial {g}, and zero otherwise. Note that because {N(f)} is always a power of {q}, the zeta function here is in fact periodic with period {2\pi i / \log q}. Because of this, it is customary to make a change of variables {T := q^{-s}}, so that

\displaystyle  \zeta_{{\mathbb A}^1/{\mathbb F}}(s) = Z( {\mathbb A}^1/{\mathbb F}, T )

and {Z} is the renormalised zeta function

\displaystyle  Z( {\mathbb A}^1/{\mathbb F}, T ) = \sum_f T^{\hbox{deg}(f)}. \ \ \ \ \ (14)

We have the analogue of (1) (or (7) or (11)):

\displaystyle  \hbox{deg}(f) = \sum_{g|f} \Lambda_{{\mathbb A}^1/{\mathbb F}}(g), \ \ \ \ \ (15)

which leads as before to an exponential identity

\displaystyle  Z( {\mathbb A}^1/{\mathbb F}, T ) = \exp( \sum_f \frac{\Lambda_{{\mathbb A}^1/{\mathbb F}}(f)}{\hbox{deg}(f)} T^{\hbox{deg}(f)} ) \ \ \ \ \ (16)

analogous to (2), (8), or (12). It also leads to the explicit formula

\displaystyle  \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx 1 - \sum_\rho N(f)^{\rho-1}

where {\rho} are the zeroes of the original zeta function {\zeta_{{\mathbb A}^1/{\mathbb F}}(s)} (counting each residue class of the period {2\pi i/\log q} just once), or equivalently

\displaystyle  \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx 1 - q^{-\hbox{deg}(f)} \sum_\alpha \alpha^{\hbox{deg}(f)},

where {\alpha} are the reciprocals of the roots of the normalised zeta function {Z( {\mathbb A}^1/{\mathbb F}, T )} (or to put it another way, {1-\alpha T} are the factors of this zeta function). Again, to make proper sense of this heuristic we need to sum, obtaining

\displaystyle  \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx q^n - \sum_\alpha \alpha^n.

As it turns out, in the function field setting, the zeta functions are always rational (this is part of the Weil conjectures), and the above heuristic formula is basically exact up to a constant factor, thus

\displaystyle  \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = q^n - \sum_\alpha \alpha^n + c \ \ \ \ \ (17)

for an explicit integer {c} (independent of {n}) arising from any potential pole of {Z} at {T=1}. In the case of the affine line {{\mathbb A}^1}, the situation is particularly simple, because the zeta function {Z( {\mathbb A}^1/{\mathbb F}, T)} is easy to compute. Indeed, since there are exactly {q^n} monic polynomials of a given degree {n}, we see from (14) that

\displaystyle  Z( {\mathbb A}^1/{\mathbb F}, T ) = \sum_{n=0}^\infty q^n T^n = \frac{1}{1-qT}

so in fact there are no zeroes whatsoever, and no pole at {T=1} either, so we have an exact prime number theorem for this function field:

\displaystyle  \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = q^n \ \ \ \ \ (18)

Among other things, this tells us that the number of irreducible monic polynomials of degree {n} is {q^n/n + O(q^{n/2})}.
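
Both facts can be checked symbolically from the classical Gauss formula, which counts the monic irreducible polynomials of degree {d} over {{\mathbb F}_q} as {\frac{1}{d}\sum_{e|d} \mu(e) q^{d/e}}: each monic irreducible {g} of degree {d | n} contributes exactly one prime power {g^{n/d}} of degree {n}, with {\Lambda_{{\mathbb A}^1/{\mathbb F}}(g^{n/d}) = d}, so (18) amounts to the identity {\sum_{d|n} d \cdot \frac{1}{d}\sum_{e|d}\mu(e)q^{d/e} = q^n}. A self-contained check:

```python
def mobius(n):
    """Moebius function by trial factorisation (adequate for tiny n)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0                   # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def irreducible_count(q, d):
    """Monic irreducibles of degree d over F_q (Gauss' necklace formula)."""
    return sum(mobius(e) * q ** (d // e) for e in divisors(d)) // d

for q in (2, 3, 5):
    for n in range(1, 9):
        # identity (18): the sum of Lambda over monic f of degree n equals q^n
        assert sum(d * irreducible_count(q, d) for d in divisors(n)) == q ** n
print("identity (18) checked for q = 2, 3, 5 and n <= 8")
```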

We can transition from an algebraic perspective to a geometric one, by viewing a given monic polynomial {f \in {\mathbb F}[t]} through its roots, which are a finite set of points in the algebraic closure {\overline{{\mathbb F}}} of the finite field {{\mathbb F}} (or more suggestively, as points on the affine line {{\mathbb A}^1( \overline{{\mathbb F}} )}). The number of such points (counting multiplicity) is the degree of {f}, and from the factor theorem, the set of points determines the monic polynomial {f} (or, if one removes the monic hypothesis, it determines the polynomial {f} projectively). These points have an action of the Galois group {\hbox{Gal}( \overline{{\mathbb F}} / {\mathbb F} )}. It is a classical fact that this Galois group is in fact a cyclic group generated by a single element, the (geometric) Frobenius map {\hbox{Frob}: x \mapsto x^q}, which fixes the elements of the original finite field {{\mathbb F}} but permutes the other elements of {\overline{{\mathbb F}}}. Thus the roots of a given polynomial {f} split into orbits of the Frobenius map. One can check that the roots consist of a single such orbit (counting multiplicity) if and only if {f} is irreducible; thus the fundamental theorem of arithmetic can be viewed geometrically as the orbit decomposition of any Frobenius-invariant finite set of points in the affine line.

Now consider the degree {n} finite field extension {{\mathbb F}_n} of {{\mathbb F}} (it is a classical fact that there is exactly one such extension up to isomorphism for each {n}); this is a subfield of {\overline{{\mathbb F}}} of order {q^n}. (Here we are performing a standard abuse of notation by overloading the subscripts in the {{\mathbb F}} notation; thus {{\mathbb F}_q} denotes the field of order {q}, while {{\mathbb F}_n} denotes the extension of {{\mathbb F} = {\mathbb F}_q} of order {n}, so that we in fact have {{\mathbb F}_n = {\mathbb F}_{q^n}} if we use one subscript convention on the left-hand side and the other subscript convention on the right-hand side. We hope this overloading will not cause confusion.) Each point {x} in this extension (or, more suggestively, the affine line {{\mathbb A}^1({\mathbb F}_n)} over this extension) has a minimal polynomial – an irreducible monic polynomial whose roots consist of the Frobenius orbit of {x}. Since the Frobenius action is periodic of period {n} on {{\mathbb F}_n}, the degree of this minimal polynomial must divide {n}. Conversely, every monic irreducible polynomial of degree {d} dividing {n} produces {d} distinct zeroes that lie in {{\mathbb F}_d} (here we use the classical fact that finite fields are perfect) and hence in {{\mathbb F}_n}. We have thus partitioned {{\mathbb A}^1({\mathbb F}_n)} into Frobenius orbits (also known as closed points), with each monic irreducible polynomial {f} of degree {d} dividing {n} contributing an orbit of size {d = \hbox{deg}(f) = \Lambda(f^{n/d})}. From this we conclude a geometric interpretation of the left-hand side of (18):

\displaystyle  \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = \sum_{x \in {\mathbb A}^1({\mathbb F}_n)} 1. \ \ \ \ \ (19)

The identity (18) thus is equivalent to the thoroughly boring fact that the number of {{\mathbb F}_n}-points on the affine line {{\mathbb A}^1} is equal to {q^n}. However, things become much more interesting if one then replaces the affine line {{\mathbb A}^1} by a more general (geometrically) irreducible curve {C} defined over {{\mathbb F}}; for instance one could take {C} to be an elliptic curve

\displaystyle  E = \{ (x,y): y^2 = x^3 + ax + b \} \ \ \ \ \ (20)

for some suitable {a,b \in {\mathbb F}}, although the discussion here applies to more general curves as well (though to avoid some minor technicalities, we will assume that the curve is projective with a finite number of {{\mathbb F}}-rational points removed). The analogue of {{\mathbb F}[t]} is then the coordinate ring of {C} (for instance, in the case of the elliptic curve (20) it would be {{\mathbb F}[x,y] / (y^2-x^3-ax-b)}), with polynomials in this ring producing a set of roots in the curve {C( \overline{\mathbb F})} that is again invariant with respect to the Frobenius action (acting on the {x} and {y} coordinates separately). In general, we do not expect unique factorisation in this coordinate ring (this is basically because Bezout’s theorem suggests that the zero set of a polynomial on {C} will almost never consist of a single (closed) point). Of course, we can use the algebraic formalism of ideals to get around this, setting up a zeta function

\displaystyle  \zeta_{C/{\mathbb F}}(s) = \sum_{{\mathfrak f}} \frac{1}{N({\mathfrak f})^s}

and a von Mangoldt function {\Lambda_{C/{\mathbb F}}({\mathfrak f})} as before, where {{\mathfrak f}} would now run over the non-trivial ideals of the coordinate ring. However, it is more instructive to use the geometric viewpoint, using the ideal-variety dictionary from algebraic geometry to convert algebraic objects involving ideals into geometric objects involving varieties. In this dictionary, a non-trivial ideal would correspond to a proper subvariety (or more precisely, a subscheme, but let us ignore the distinction between varieties and schemes here) of the curve {C}; as the curve is irreducible and one-dimensional, this subvariety must be zero-dimensional and is thus a (multi-)set of points {\{x_1,\ldots,x_k\}} in {C}, or equivalently an effective divisor {[x_1] + \ldots + [x_k]} of {C}; this generalises the concept of the set of roots of a polynomial (which corresponds to the case of a principal ideal). Furthermore, this divisor has to be rational in the sense that it is Frobenius-invariant. The prime ideals correspond to those divisors (or sets of points) which are irreducible, that is to say the individual Frobenius orbits, also known as closed points of {C}. With this dictionary, the zeta function becomes

\displaystyle  \zeta_{C/{\mathbb F}}(s) = \sum_{D \geq 0} \frac{1}{q^{s \hbox{deg}(D)}}

where the sum is over effective rational divisors {D} of {C} (with {k} being the degree of an effective divisor {[x_1] + \ldots + [x_k]}), or equivalently

\displaystyle  Z( C/{\mathbb F}, T ) = \sum_{D \geq 0} T^{\hbox{deg}(D)}.

The analogue of (19), which gives a geometric interpretation to sums of the von Mangoldt function, becomes

\displaystyle  \sum_{N({\mathfrak f}) = q^n} \Lambda_{C/{\mathbb F}}({\mathfrak f}) = \sum_{x \in C({\mathbb F}_n)} 1

\displaystyle  = |C({\mathbb F}_n)|,

thus this sum is simply counting the number of {{\mathbb F}_n}-points of {C}. The analogue of the exponential identity (16) (or (2), (8), or (12)) is then

\displaystyle  Z( C/{\mathbb F}, T ) = \exp( \sum_{n \geq 1} \frac{|C({\mathbb F}_n)|}{n} T^n ) \ \ \ \ \ (21)

and the analogue of the explicit formula (17) (or (5), (10) or (13)) is

\displaystyle  |C({\mathbb F}_n)| = q^n - \sum_\alpha \alpha^n + c \ \ \ \ \ (22)

where {\alpha} runs over the (reciprocal) zeroes of {Z( C/{\mathbb F}, T )} (counting multiplicity), and {c} is an integer independent of {n}. (As it turns out, {c} equals {1} when {C} is a projective curve, and more generally equals {1-k} when {C} is a projective curve with {k} rational points deleted.)
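
As a quick consistency check, consider the affine line {C = {\mathbb A}^1}, which is the projective line with one rational point removed, so that {c = 0} in (22). An effective rational divisor of degree {n} on {{\mathbb A}^1} is the same thing as a monic polynomial of degree {n}, of which there are exactly {q^n}, so that

\displaystyle  Z( {\mathbb A}^1/{\mathbb F}, T ) = \sum_{n \geq 0} q^n T^n = \frac{1}{1-qT}.

This rational function has no zeroes, and (21) reduces to the elementary identity {\exp( \sum_{n \geq 1} \frac{(qT)^n}{n} ) = \frac{1}{1-qT}}, while (22) reduces (with an empty sum over {\alpha}) to {|{\mathbb A}^1({\mathbb F}_n)| = q^n}, as it must.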

To evaluate {Z(C/{\mathbb F},T)}, one needs to count the number of effective divisors of a given degree on the curve {C}. Fortunately, there is a tool that is particularly well-designed for this task, namely the Riemann-Roch theorem. By using this theorem, one can show (when {C} is projective) that {Z(C/{\mathbb F},T)} is in fact a rational function, with a finite number of zeroes, and simple poles at {T=1} and {T=1/q}, with similar results when one deletes some rational points from {C}; see e.g. Chapter 11 of Iwaniec-Kowalski for details. Thus the sum in (22) is finite. For instance, for the affine elliptic curve (20) (which is a projective curve with one point removed), it turns out that we have

\displaystyle  |E({\mathbb F}_n)| = q^n - \alpha^n - \beta^n

for two complex numbers {\alpha,\beta} depending on {E} and {q}.
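
This identity is easy to test numerically: the {n=1} count determines {\alpha+\beta}, and since {\alpha\beta = q} (a standard consequence of the functional equation for {Z}), one can then predict the {n=2} count from {\alpha^2+\beta^2 = (\alpha+\beta)^2 - 2q}. The following brute-force Python sketch does this for the arbitrarily chosen illustrative curve {y^2 = x^3 + x + 1} over {{\mathbb F}_7}, modelling {{\mathbb F}_{49}} as {{\mathbb F}_7(\sqrt{r})} for a quadratic non-residue {r}:

p, a, b = 7, 1, 1   # illustrative curve y^2 = x^3 + x + 1 over F_7
r = next(t for t in range(2, p) if pow(t, (p - 1) // 2, p) == p - 1)  # non-residue

def mul(z, w):
    # multiplication in F_{p^2} = F_p(sqrt(r)); elements are pairs (u, v) = u + v*sqrt(r)
    (u1, v1), (u2, v2) = z, w
    return ((u1 * u2 + r * v1 * v2) % p, (u1 * v2 + v1 * u2) % p)

def count_points(field):
    # affine points of y^2 = x^3 + a*x + b with coordinates in the given field
    total = 0
    for x in field:
        x3 = mul(mul(x, x), x)
        rhs = ((x3[0] + a * x[0] + b) % p, (x3[1] + a * x[1]) % p)
        total += sum(1 for y in field if mul(y, y) == rhs)
    return total

Fp  = [(u, 0) for u in range(p)]
Fp2 = [(u, v) for u in range(p) for v in range(p)]
s1 = p - count_points(Fp)              # alpha + beta, read off from the n=1 count
predicted = p * p - (s1 * s1 - 2 * p)  # since alpha^2 + beta^2 = s1^2 - 2*alpha*beta
print(count_points(Fp2), predicted)    # both print 54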

The Riemann hypothesis for (untwisted) curves – which is the deepest and most difficult aspect of the Weil conjectures for these curves – asserts that the zeroes of {\zeta_{C/{\mathbb F}}} lie on the critical line, or equivalently that all the roots {\alpha} in (22) have modulus {\sqrt{q}}, so that (22) then gives the asymptotic

\displaystyle  |C({\mathbb F}_n)| = q^n + O( q^{n/2} ) \ \ \ \ \ (23)

where the implied constant depends only on the genus of {C} (and on the number of points removed from {C}). For instance, for elliptic curves we have the Hasse bound

\displaystyle  \left| |E({\mathbb F}_n)| - q^n \right| \leq 2 q^{n/2}.

As before, we have an important amplification phenomenon: if we can establish a weaker estimate, e.g.

\displaystyle  |C({\mathbb F}_n)| = q^n + O( q^{n/2 + O(1)} ), \ \ \ \ \ (24)

then we can automatically deduce the stronger bound (23). This amplification is not a mere curiosity; most of the proofs of the Riemann hypothesis for curves proceed via this fact. For instance, by using the elementary method of Stepanov to bound points in curves (discussed for instance in this previous post), one can establish the preliminary bound (24) for large {n}, which then amplifies to the optimal bound (23) for all {n} (and in particular for {n=1}). Again, see Chapter 11 of Iwaniec-Kowalski for details. The ability to convert a bound with {q}-dependent losses over the optimal bound (such as (24)) into an essentially optimal bound with no {q}-dependent losses (such as (23)) is important in analytic number theory, since in many applications (e.g. in those arising from sieve theory) one wishes to sum over large ranges of {q}.
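
To spell out the amplification step, which is the standard argument: from (22) and (24), the power sums {s_n := \sum_\alpha \alpha^n} satisfy {|s_n| \ll q^{n/2 + O(1)}} for all {n}. On the other hand, the generating function

\displaystyle  \sum_{n \geq 1} s_n T^n = \sum_\alpha \frac{\alpha T}{1 - \alpha T}

has radius of convergence exactly {1/\max_\alpha |\alpha|}, since the poles at the points {1/\alpha} cannot cancel each other; hence {\limsup_{n \rightarrow \infty} |s_n|^{1/n} = \max_\alpha |\alpha|}. Combining the two facts, we conclude that {\max_\alpha |\alpha| \leq \lim_{n \rightarrow \infty} q^{1/2 + O(1)/n} = q^{1/2}}, and (23) follows since the number of {\alpha} is bounded in terms of the genus (and the number of deleted points) only.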

Much as the Riemann zeta function can be twisted by a Dirichlet character to form a Dirichlet {L}-function, one can twist the zeta function on curves by various additive and multiplicative characters. For instance, suppose one has an affine plane curve {C \subset {\mathbb A}^2} and an additive character {\psi: {\mathbb F}^2 \rightarrow {\bf C}^\times}, thus {\psi(x+y) = \psi(x) \psi(y)} for all {x,y \in {\mathbb F}^2}. Given a rational effective divisor {D = [x_1] + \ldots + [x_k]}, the sum {x_1+\ldots+x_k} is Frobenius-invariant and thus lies in {{\mathbb F}^2}. By abuse of notation, we may thus define {\psi} on such divisors by

\displaystyle  \psi( [x_1] + \ldots + [x_k] ) := \psi( x_1 + \ldots + x_k )

and observe that {\psi} is multiplicative in the sense that {\psi(D_1 + D_2) = \psi(D_1) \psi(D_2)} for rational effective divisors {D_1,D_2}. One can then define {\psi({\mathfrak f})} for any non-trivial ideal {{\mathfrak f}} by replacing that ideal with the associated rational effective divisor; for instance, if {f} is a polynomial in the coordinate ring of {C}, with zeroes at {x_1,\ldots,x_k \in C}, then {\psi((f))} is {\psi( x_1+\ldots+x_k )}. Again, we have the multiplicativity property {\psi({\mathfrak f} {\mathfrak g}) = \psi({\mathfrak f}) \psi({\mathfrak g})}. If we then form the twisted normalised zeta function

\displaystyle  Z( C/{\mathbb F}, \psi, T ) = \sum_{D \geq 0} \psi(D) T^{\hbox{deg}(D)}

then by twisting the previous analysis, we eventually arrive at the exponential identity

\displaystyle  Z( C/{\mathbb F}, \psi, T ) = \exp( \sum_{n \geq 1} \frac{S_n(C/{\mathbb F}, \psi)}{n} T^n ) \ \ \ \ \ (25)

in analogy with (21) (or (2), (8), (12), or (16)), where the companion sums {S_n(C/{\mathbb F}, \psi)} are defined by

\displaystyle  S_n(C/{\mathbb F},\psi) = \sum_{x \in C({\mathbb F}_n)} \psi( \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) )

where the trace {\hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x)} of an element {x} in the plane {{\mathbb A}^2( {\mathbb F}_n )} is defined by the formula

\displaystyle  \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) = x + \hbox{Frob}(x) + \ldots + \hbox{Frob}^{n-1}(x).

In particular, {S_1} is the exponential sum

\displaystyle  S_1(C/{\mathbb F},\psi) = \sum_{x \in C({\mathbb F})} \psi(x)

which is an important type of sum in analytic number theory, containing for instance the Kloosterman sum

\displaystyle  K(a,b;p) := \sum_{x \in {\mathbb F}_p^\times} e_p( ax + \frac{b}{x})

as a special case, where {a,b \in {\mathbb F}_p^\times}. (NOTE: the sign conventions for the companion sum {S_n} are not consistent across the literature, sometimes it is {-S_n} which is referred to as the companion sum.)
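
For concreteness, here is a short Python computation checking the Weil bound {|K(a,b;p)| \leq 2\sqrt{p}} (stated below) across all {a} for a single illustrative prime {p = 101}, with {e_p(t) := e^{2\pi i t/p}}:

import cmath, math

def kloosterman(a, b, p):
    # K(a,b;p) = sum over x in F_p^* of e_p(a*x + b/x); these sums are real
    e_p = lambda t: cmath.exp(2j * math.pi * t / p)
    return sum(e_p((a * x + b * pow(x, -1, p)) % p) for x in range(1, p)).real

p = 101  # illustrative prime
worst = max(abs(kloosterman(a, 1, p)) for a in range(1, p))
print(worst, 2 * math.sqrt(p))  # the first number stays below the second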

If {\psi} is non-principal (and {C} is non-linear), one can show (by a suitably twisted version of the Riemann-Roch theorem) that {Z} is a rational function of {T}, with no pole at {T=q^{-1}}, and one then gets an explicit formula of the form

\displaystyle  S_n(C/{\mathbb F},\psi) = -\sum_\alpha \alpha^n + c \ \ \ \ \ (26)

for the companion sums, where {\alpha} runs over the reciprocals of the zeroes of {Z( C/{\mathbb F}, \psi, T )} (counting multiplicity), in analogy to (22) (or (5), (10), (13), or (17)). For instance, in the case of Kloosterman sums, there is an identity of the form

\displaystyle  \sum_{x \in {\mathbb F}_{p^n}^\times} e_p( \hbox{Tr}( ax + \frac{b}{x} ) ) = -\alpha^n - \beta^n \ \ \ \ \ (27)

for all {n} and some complex numbers {\alpha,\beta} depending on {a,b,p}, where we have abbreviated {\hbox{Tr}_{{\mathbb F}_{p^n}/{\mathbb F}_p}} as {\hbox{Tr}}. As before, the Riemann hypothesis for {Z} then gives a square root cancellation bound of the form

\displaystyle  S_n(C/{\mathbb F},\psi) = O( q^{n/2} ) \ \ \ \ \ (28)

for the companion sums (and in particular gives the very explicit Weil bound {|K(a,b;p)| \leq 2\sqrt{p}} for the Kloosterman sum), but again there is the amplification phenomenon that this sort of bound can be deduced from the apparently weaker bound

\displaystyle  S_n(C/{\mathbb F},\psi) = O( q^{n/2 + O(1)} ).

As before, most of the known proofs of the Riemann hypothesis for these twisted zeta functions proceed by first establishing this weaker bound (e.g. one could again use Stepanov’s method here for this goal) and then amplifying to the full bound (28); see Chapter 11 of Iwaniec-Kowalski for further details.
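
One can also test the identity (27) numerically. Taking {n=1} gives {S_1 = K(a,b;p) = -\alpha-\beta}, and since {\alpha\beta = p} (a standard fact for Kloosterman sums), the {n=2} case of (27) predicts {S_2 = 2p - S_1^2}. Here is a brute-force check in Python, modelling {{\mathbb F}_{p^2}} as {{\mathbb F}_p(\sqrt{r})} for a non-residue {r} as in the elliptic curve example above (so that the Frobenius negates {\sqrt{r}} and {\hbox{Tr}(u+v\sqrt{r}) = 2u}), with illustrative parameters {p=11}, {a=b=1}:

import cmath, math

p, a, b = 11, 1, 1
r = next(t for t in range(2, p) if pow(t, (p - 1) // 2, p) == p - 1)  # non-residue
e_p = lambda t: cmath.exp(2j * math.pi * t / p)

def inv(z):
    # inverse of u + v*sqrt(r) in F_{p^2}, computed via the conjugate
    u, v = z
    d = pow((u * u - r * v * v) % p, -1, p)
    return ((u * d) % p, (-v * d) % p)

def trace(z):
    # Frobenius sends sqrt(r) to -sqrt(r), so Tr(u + v*sqrt(r)) = 2u
    return (2 * z[0]) % p

# Tr is F_p-linear, so Tr(a*x + b/x) = a*Tr(x) + b*Tr(1/x)
S1 = sum(e_p((a * x + b * pow(x, -1, p)) % p) for x in range(1, p)).real
S2 = sum(e_p((a * trace(z) + b * trace(inv(z))) % p)
         for z in [(u, v) for u in range(p) for v in range(p)] if z != (0, 0)).real
print(round(S2, 6), round(2 * p - S1 * S1, 6))  # the two values agree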

One can also twist the zeta function on a curve by a multiplicative character {\chi: {\mathbb F}^\times \rightarrow {\bf C}^\times} by similar arguments, except that instead of forming the sum {x_1+\ldots+x_k} of all the components of an effective divisor {[x_1]+\ldots+[x_k]}, one takes the product {x_1 \ldots x_k}, and similarly one replaces the trace

\displaystyle  \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) = x + \hbox{Frob}(x) + \ldots + \hbox{Frob}^{n-1}(x)

by the norm

\displaystyle  \hbox{Norm}_{{\mathbb F}_n/{\mathbb F}}(x) = x \cdot \hbox{Frob}(x) \cdot \ldots \cdot \hbox{Frob}^{n-1}(x).

Again, see Chapter 11 of Iwaniec-Kowalski for details.

Deligne famously extended the above theory to higher-dimensional varieties than curves, and also to the closely related context of {\ell}-adic sheaves on curves, giving rise to two separate proofs of the Weil conjectures in full generality. (Very roughly speaking, the former context can be obtained from the latter context by a sort of Fubini theorem type argument that expresses sums on higher-dimensional varieties as iterated sums on curves of various expressions related to {\ell}-adic sheaves.) In this higher-dimensional setting, the zeta function formalism is still present, but is much more difficult to use, in large part due to the much less tractable nature of divisors in higher dimensions (they are now combinations of codimension one subvarieties or subschemes, rather than combinations of points). To get around this difficulty, one has to change perspective yet again, from an algebraic or geometric perspective to an {\ell}-adic cohomological perspective. (I could imagine that once one is sufficiently expert in the subject, all these perspectives merge back together into a unified viewpoint, but I am certainly not yet at that stage of understanding.) In particular, the zeta function, while still present, plays a significantly less prominent role in the analysis (at least if one is willing to take Deligne’s theorems as a black box); the explicit formula is now obtained via a different route, namely the Grothendieck-Lefschetz fixed point formula. I have written some notes on this material below the fold (based in part on some lectures of Philippe Michel, as well as the text of Iwaniec-Kowalski and also this book of Katz), but I should caution that my understanding here is still rather sketchy and possibly inaccurate in places.

Read the rest of this entry »

As in previous posts, we use the following asymptotic notation: {x} is a parameter going off to infinity, and all quantities may depend on {x} unless explicitly declared to be “fixed”. The asymptotic notation {O(), o(), \ll} is then defined relative to this parameter. A quantity {q} is said to be of polynomial size if one has {q = O(x^{O(1)})}, and bounded if {q=O(1)}. We also write {X \lessapprox Y} for {X \ll x^{o(1)} Y}, and {X \sim Y} for {X \ll Y \ll X}.

The purpose of this post is to collect together all the various refinements to the second half of Zhang’s paper that have been obtained as part of the polymath8 project and present them as a coherent argument. In order to state the main result, we need to recall some definitions. If {I} is a bounded subset of {{\bf R}}, let {{\mathcal S}_I} denote the square-free numbers whose prime factors lie in {I}, and let {P_I := \prod_{p \in I} p} denote the product of the primes {p} in {I}. Note by the Chinese remainder theorem that the set {({\bf Z}/P_I{\bf Z})^\times} of primitive congruence classes {a\ (P_I)} modulo {P_I} can be identified with the tuples {(a_q\ (q))_{q \in {\mathcal S}_I}} of primitive congruence classes {a_q\ (q)} modulo {q} for each {q \in {\mathcal S}_I} which obey the Chinese remainder theorem

\displaystyle  (a_{qr}\ (qr)) = (a_q\ (q)) \cap (a_r\ (r))

for all coprime {q,r \in {\mathcal S}_I}, since one can identify {a\ (P_I)} with the tuple {(a\ (q))_{q \in {\mathcal S}_I}} for each {a \in ({\bf Z}/P_I{\bf Z})^\times}.

If {y > 1} and {n} is a natural number, we say that {n} is {y}-densely divisible if, for every {1 \leq R \leq n}, one can find a factor of {n} in the interval {[y^{-1} R, R]}. We say that {n} is doubly {y}-densely divisible if, for every {1 \leq R \leq n}, one can find a factor {m} of {n} in the interval {[y^{-1} R, R]} such that {m} is itself {y}-densely divisible. We let {{\mathcal D}_y^2} denote the set of doubly {y}-densely divisible natural numbers, and {{\mathcal D}_y} the set of {y}-densely divisible numbers.
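
Since {1} and {n} are always factors of {n}, a natural number {n} is {y}-densely divisible precisely when consecutive divisors of {n} never jump by more than a factor of {y}, which makes the definition easy to implement directly. A small illustrative Python sketch:

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def densely_divisible(n, y):
    # equivalent to: every interval [R/y, R] with 1 <= R <= n contains a divisor
    divs = divisors(n)
    return all(b <= y * a for a, b in zip(divs, divs[1:]))

def doubly_densely_divisible(n, y):
    # the factor found at each scale must itself be y-densely divisible
    ddivs = [d for d in divisors(n) if densely_divisible(d, y)]
    return y * ddivs[-1] >= n and all(b <= y * a for a, b in zip(ddivs, ddivs[1:]))

print(densely_divisible(60, 2), densely_divisible(59, 2))  # True False
print(doubly_densely_divisible(60, 2))                     # True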

Given any finitely supported sequence {\alpha: {\bf N} \rightarrow {\bf C}} and any primitive residue class {a\ (q)}, we define the discrepancy

\displaystyle \Delta(\alpha; a \ (q)) := \sum_{n: n = a\ (q)} \alpha(n) - \frac{1}{\phi(q)} \sum_{n: (n,q)=1} \alpha(n).
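
For orientation, here is a naive Python rendering of this discrepancy, applied to the truncated von Mangoldt sequence {\Lambda 1_{[x,2x]}} that appears below; the parameter values are unrealistically small and purely illustrative:

import math

def mangoldt(n):
    # Lambda(n) = log p if n is a power of the prime p, and 0 otherwise
    if n < 2:
        return 0.0
    d = 2
    while d * d <= n:
        if n % d == 0:
            m = n
            while m % d == 0:
                m //= d
            return math.log(d) if m == 1 else 0.0
        d += 1
    return math.log(n)  # n is prime

def discrepancy(x, q, a):
    ns = range(x, 2 * x + 1)
    along_class = sum(mangoldt(n) for n in ns if n % q == a % q)
    coprime_sum = sum(mangoldt(n) for n in ns if math.gcd(n, q) == 1)
    phi = sum(1 for t in range(1, q + 1) if math.gcd(t, q) == 1)
    return along_class - coprime_sum / phi

print(discrepancy(10**4, 21, 2))  # one illustrative value; the estimates concern x -> infinity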

For any fixed {\varpi, \delta > 0}, we let {MPZ''[\varpi,\delta]} denote the assertion that

\displaystyle  \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}^2: q \leq x^{1/2+2\varpi}} |\Delta(\Lambda 1_{[x,2x]}; a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (1)

for any fixed {A > 0}, any bounded {I}, and any primitive {a\ (P_I)}, where {\Lambda} is the von Mangoldt function. Importantly, we do not require {I} or {a} to be fixed, in particular {I} could grow polynomially in {x}, and {a} could grow exponentially in {x}, but the implied constant in (1) would still need to be fixed (so it has to be uniform in {I} and {a}). (In previous formulations of these estimates, the system of congruence classes {a\ (q)} was also required to obey a controlled multiplicity hypothesis, but we no longer need this hypothesis in our arguments.) In this post we will record the proof of the following result, which is currently the best distribution result produced by the ongoing polymath8 project to optimise Zhang’s theorem on bounded gaps between primes:

Theorem 1 We have {MPZ''[\varpi,\delta]} whenever {\frac{280}{3} \varpi + \frac{80}{3} \delta < 1}.

This improves upon the previous constraint of {148 \varpi + 33 \delta < 1} (see this previous post), although that latter statement was stronger in that it only required single dense divisibility rather than double dense divisibility. However, thanks to the efficiency of the sieving step of our argument, the upgrade of the single dense divisibility hypothesis to double dense divisibility costs almost nothing with respect to the {k_0} parameter (which, using this constraint, gives a value of {k_0=720} as verified in these comments, which then implies a value of {H = 5,414}).

This estimate is deduced from three sub-estimates, which require a bit more notation to state. We need a fixed quantity {A_0>0}.

Definition 2 A coefficient sequence is a finitely supported sequence {\alpha: {\bf N} \rightarrow {\bf R}} that obeys the bounds

\displaystyle  |\alpha(n)| \ll \tau^{O(1)}(n) \log^{O(1)}(x) \ \ \ \ \ (2)

for all {n}, where {\tau} is the divisor function.

  • (i) A coefficient sequence {\alpha} is said to be at scale {N} for some {N \geq 1} if it is supported on an interval of the form {[(1-O(\log^{-A_0} x)) N, (1+O(\log^{-A_0} x)) N]}.
  • (ii) A coefficient sequence {\alpha} at scale {N} is said to obey the Siegel-Walfisz theorem if one has

    \displaystyle  | \Delta(\alpha 1_{(\cdot,q)=1}; a\ (r)) | \ll \tau(qr)^{O(1)} N \log^{-A} x \ \ \ \ \ (3)

    for any {q,r \geq 1}, any fixed {A}, and any primitive residue class {a\ (r)}.

  • (iii) A coefficient sequence {\alpha} at scale {N} (relative to this choice of {A_0}) is said to be smooth if it takes the form {\alpha(n) = \psi(n/N)} for some smooth function {\psi: {\bf R} \rightarrow {\bf C}} supported on {[1-O(\log^{-A_0} x), 1+O(\log^{-A_0} x)]} obeying the derivative bounds

    \displaystyle  \psi^{(j)}(t) = O( \log^{j A_0} x ) \ \ \ \ \ (4)

    for all fixed {j \geq 0} (note that the implied constant in the {O()} notation may depend on {j}).

Definition 3 (Type I, Type II, Type III estimates) Let {0 < \varpi < 1/4}, {0 < \delta < 1/4+\varpi}, and {0 < \sigma < 1/2} be fixed quantities. We let {I} be an arbitrary bounded subset of {{\bf R}}, and {a\ (P_I)} a primitive congruence class.

  • (i) We say that {Type''_I[\varpi,\delta,\sigma]} holds if, whenever {M, N \gg 1} are quantities with

    \displaystyle  M N \sim x \ \ \ \ \ (5)

    and

    \displaystyle  x^{1/2-\sigma} \lessapprox N \lessapprox x^{1/2-2\varpi-c} \ \ \ \ \ (6)

    for some fixed {c>0}, and {\alpha,\beta} are coefficient sequences at scales {M,N} respectively, with {\beta} obeying a Siegel-Walfisz theorem, we have

    \displaystyle  \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}^2: q \leq x^{1/2+2\varpi}} |\Delta(\alpha * \beta; a\ (q))| \ll x \log^{-A} x. \ \ \ \ \ (7)

  • (ii) We say that {Type''_{II}[\varpi,\delta]} holds if the conclusion (7) of {Type''_I[\varpi,\delta,\sigma]} holds under the same hypotheses as before, except that (6) is replaced with

    \displaystyle  x^{1/2-2\varpi-c} \lessapprox N \lessapprox x^{1/2} \ \ \ \ \ (8)

    for some sufficiently small fixed {c>0}.

  • (iii) We say that {Type''_{III}[\varpi,\delta,\sigma]} holds if, whenever {M, N_1,N_2,N_3 \gg 1} are quantities with

    \displaystyle  M N_1 N_2 N_3 \sim x

    and

    \displaystyle  N_1 N_2, N_1 N_3, N_2 N_3 \gtrapprox x^{1/2+\sigma} \ \ \ \ \ (9)

    and

    \displaystyle  x^{2\sigma} \lessapprox N_1,N_2,N_3 \lessapprox x^{1/2-\sigma} \ \ \ \ \ (10)

    and {\alpha,\psi_1,\psi_2,\psi_3} are coefficient sequences at scales {M,N_1,N_2,N_3} respectively, with {\psi_1,\psi_2,\psi_3} smooth, we have

    \displaystyle  \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}^2: q \leq x^{1/2+2\varpi}} |\Delta(\alpha * \psi_1 * \psi_2 * \psi_3; a\ (q))| \ll x \log^{-A} x. \ \ \ \ \ (11)

Theorem 1 is then a consequence of the following four statements.

Theorem 4 (Type I estimate) {Type''_I[\varpi,\delta,\sigma]} holds whenever {\varpi,\delta,\sigma > 0} are fixed quantities such that

\displaystyle  56 \varpi + 16 \delta + 4\sigma < 1.

Theorem 5 (Type II estimate) {Type''_{II}[\varpi,\delta]} holds whenever {\varpi,\delta > 0} are fixed quantities such that

\displaystyle  68 \varpi + 14 \delta < 1.

Theorem 6 (Type III estimate) {Type''_{III}[\varpi,\delta,\sigma]} holds whenever {0 < \varpi < 1/4}, {0 < \delta < 1/4+\varpi}, and {\sigma > 0} are fixed quantities such that

\displaystyle  \sigma > \frac{1}{18} + \frac{28}{9} \varpi + \frac{2}{9} \delta \ \ \ \ \ (12)

and

\displaystyle  \varpi < \frac{1}{12}. \ \ \ \ \ (13)

In particular, if

\displaystyle  70 \varpi + 5 \delta < 1,

then all values of {\sigma} that are sufficiently close to {1/10} are admissible.

Lemma 7 (Combinatorial lemma) Let {0 < \varpi < 1/4}, {0 < \delta < 1/4+\varpi}, and {1/10 < \sigma < 1/2} be such that {Type''_I[\varpi,\delta,\sigma]}, {Type''_{II}[\varpi,\delta]}, and {Type''_{III}[\varpi,\delta,\sigma]} simultaneously hold. Then {MPZ''[\varpi,\delta]} holds.

Indeed, if {\frac{280}{3} \varpi + \frac{80}{3} \delta < 1}, one checks that the hypotheses for Theorems 4, 5, 6 are obeyed for {\sigma} sufficiently close to {1/10}: the binding constraint is the Type I condition, which at {\sigma = 1/10} becomes {56 \varpi + 16 \delta < 3/5}, or equivalently {\frac{280}{3} \varpi + \frac{80}{3} \delta < 1}, while the Type II and Type III conditions then hold with room to spare. The claim then follows from Lemma 7.
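
As a sanity check of this arithmetic, one can verify numerically that the interval of admissible {\sigma} is nonempty throughout the claimed range of {(\varpi,\delta)}; a quick Python sketch (not, of course, part of the proof):

def exists_sigma(varpi, delta):
    # sigma must exceed the Lemma 7 and Type III lower bounds, and stay
    # below the Type I upper bound; the Type II condition does not involve sigma
    lo = max(1/10, 1/18 + 28/9 * varpi + 2/9 * delta)
    hi = min(1/2, (1 - 56 * varpi - 16 * delta) / 4)
    return lo < hi and 68 * varpi + 14 * delta < 1 and varpi < 1/12

grid = [k / 10000 for k in range(1, 400)]
print(all(exists_sigma(w, d) for w in grid for d in grid
          if 280/3 * w + 80/3 * d < 1))  # True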

The proofs of Theorems 4, 5, 6 will be given below the fold, while the proof of Lemma 7 follows from the arguments in this previous post. We remark that in our current arguments, the double dense divisibility is only fully used in the Type I estimates; the Type II and Type III estimates are also valid just with single dense divisibility.

Remark 1 Theorem 6 is vacuously true for {\sigma > 1/6}, as the condition (10) cannot be satisfied in this case. If we use this trivial case of Theorem 6, while keeping the full strength of Theorems 4 and 5, we obtain Theorem 1 in the regime

\displaystyle  168 \varpi + 48 \delta < 1.

Read the rest of this entry »

For any {H \geq 2}, let {B[H]} denote the assertion that there are infinitely many pairs of consecutive primes {p_n, p_{n+1}} whose difference {p_{n+1}-p_n} is at most {H}, or equivalently that

\displaystyle  \liminf_{n \rightarrow \infty} (p_{n+1} - p_n) \leq H;

thus for instance {B[2]} is the notorious twin prime conjecture. While this conjecture remains unsolved, we have the following recent breakthrough result of Zhang, building upon earlier work of Goldston-Pintz-Yildirim, Bombieri, Fouvry, Friedlander, and Iwaniec, and others:

Theorem 1 (Zhang’s theorem) {B[H]} is true for some finite {H}.

In fact, Zhang’s paper shows that {B[H]} is true with {H = 70,000,000}.

About a month ago, the Polymath8 project was launched with the objective of reading through Zhang’s paper, clarifying the arguments, and then making them more efficient, in order to improve the value of {H}. This project is still ongoing, but we have made significant progress; currently, we have confirmed that {B[H]} holds for {H} as low as {12,006}, and provisionally for {H} as low as {6,966} subject to certain lengthy arguments being checked. For several reasons, our methods (which are largely based on Zhang’s original argument structure, though with numerous refinements and improvements) will not be able to attain the twin prime conjecture {B[2]}, but there is still scope to lower the value of {H} a bit further than what we have currently.

The precise arguments here are quite technical, and are discussed at length on other posts on this blog. In this post, I would like to give a “high level” summary of how Zhang’s argument works, and give some impressions of the improvements we have made so far; these would already be familiar to the active participants of the Polymath8 project, but perhaps may be of value to people who are following this project on a more casual basis.

While Zhang’s argument (and our refinements of it) is quite lengthy, it is fortunately also very modular, that is to say it can be broken up into several independent components that can be understood and optimised more or less separately from each other (although we have on occasion needed to modify the formulation of one component in order to better suit the needs of another). At the top level, Zhang’s argument looks like this:

  1. Statements of the form {B[H]} are deduced from weakened versions of the Hardy-Littlewood prime tuples conjecture, which we have denoted {DHL[k_0,2]} (the {DHL} stands for “Dickson-Hardy-Littlewood”), by locating suitable narrow admissible tuples (see below). Zhang’s paper establishes for the first time an unconditional proof of {DHL[k_0,2]} for some finite {k_0}; in his initial paper, {k_0} was {3,500,000}, but we have lowered this value to {1,466} (and provisionally to {902}). Any reduction in the value of {k_0} leads directly to reductions in the value of {H}; a web site to collect the best known values of {H} in terms of {k_0} has recently been set up here (and is accepting submissions for anyone who finds narrower admissible tuples than are currently known).
  2. Next, by adapting sieve-theoretic arguments of Goldston, Pintz, and Yildirim, the Dickson-Hardy-Littlewood type assertions {DHL[k_0,2]} are deduced in turn from weakened versions of the Elliott-Halberstam conjecture that we have denoted {MPZ[\varpi,\delta]} (the {MPZ} stands for “Motohashi-Pintz-Zhang”). More recently, we have replaced the conjecture {MPZ[\varpi,\delta]} by a slightly stronger conjecture {MPZ'[\varpi,\delta]} to significantly improve the efficiency of this step (using some recent ideas of Pintz). Roughly speaking, these statements assert that the primes are more or less evenly distributed along many arithmetic progressions, including those that have relatively large spacing. A crucial technical fact here is that in contrast to the older Elliott-Halberstam conjecture, the Motohashi-Pintz-Zhang estimates only require one to control progressions whose spacings {q} have a lot of small prime factors (the original {MPZ[\varpi,\delta]} conjecture requires the spacing {q} to be smooth, but the newer variant {MPZ'[\varpi,\delta]} has relaxed this to “densely divisible” as this turns out to be more efficient). The {\varpi} parameter is more important than the technical parameter {\delta}; we would like {\varpi} to be as large as possible, as any increase in this parameter should lead to a reduced value of {k_0}. In Zhang’s original paper, {\varpi} was taken to be {1/1168}; we have now increased this to be almost as large as {1/148} (and provisionally {1/108}).
  3. By a certain amount of combinatorial manipulation (combined with a useful decomposition of the von Mangoldt function due to Heath-Brown), estimates such as {MPZ[\varpi,\delta]} can be deduced from three sub-estimates, the “Type I” estimate {Type_I[\varpi,\delta,\sigma]}, the “Type II” estimate {Type_{II}[\varpi,\delta]}, and the “Type III” estimate {Type_{III}[\varpi,\delta,\sigma]}, which all involve the distribution of certain Dirichlet convolutions in arithmetic progressions. Here {1/10 < \sigma < 1/2} is an adjustable parameter that demarcates the border between the Type I and Type III estimates; raising {\sigma} makes it easier to prove Type III estimates but harder to prove Type I estimates, and lowering {\sigma} of course has the opposite effect. There is a combinatorial lemma that asserts that as long as one can find some {\sigma} between {1/10} and {1/2} for which all three estimates {Type_I[\varpi,\delta,\sigma]}, {Type_{II}[\varpi,\delta]}, {Type_{III}[\varpi,\delta,\sigma]} hold, one can prove {MPZ[\varpi,\delta]}. (The condition {\sigma > 1/10} arises from the combinatorics, and appears to be rather essential; in fact, it is currently a major obstacle to further improvement of {\varpi} and hence {k_0} and {H}.)
  4. The Type I estimates {Type_I[\varpi,\delta,\sigma]} are asserting good distribution properties of convolutions of the form {\alpha * \beta}, where {\alpha,\beta} are moderately long sequences which have controlled magnitude and length but are otherwise arbitrary. Estimates that are roughly of this type first appeared in a series of papers by Bombieri, Fouvry, Friedlander, Iwaniec, and other authors, and Zhang’s arguments here broadly follow those of previous authors, but with several new twists that take advantage of the many factors of the spacing {q}. In particular, the dispersion method of Linnik is used (which one can think of as a clever application of the Cauchy-Schwarz inequality) to ultimately reduce matters (after more Cauchy-Schwarz, as well as treatment of several error terms) to estimation of incomplete Kloosterman-type sums such as

    \displaystyle  \sum_{n \leq N} e_d( \frac{c}{n} ).

    Zhang’s argument uses classical estimates on this Kloosterman sum (dating back to the work of Weil), but we have improved this using the “{q}-van der Corput {A}-process” introduced by Heath-Brown and Ringrose. (A small numerical illustration of an incomplete sum of this shape appears just after this list.)

  5. The Type II estimates {Type_{II}[\varpi,\delta]} are similar to the Type I estimates, but cover a small hole in the coverage of the Type I estimates which comes up when the two sequences {\alpha,\beta} are almost equal in length. It turns out that one can modify the Type I argument to cover this case also. In practice, these estimates give less stringent conditions on {\varpi,\delta} than the other two estimates, and so as a first approximation one can ignore the need to treat these estimates, although recently our Type I and Type III estimates have become so strong that it has become necessary to tighten the Type II estimates as well.
  6. The Type III estimates {Type_{III}[\varpi,\delta,\sigma]} are an averaged variant of the classical problem of understanding the distribution of the ternary divisor function {\tau_3(n) := \sum_{abc=n} 1} in arithmetic progressions. There are various ways to attack this problem, but most of them ultimately boil down (after the use of standard devices such as Cauchy-Schwarz and completion of sums) to the task of controlling certain higher-dimensional Kloosterman-type sums such as

    \displaystyle  \sum_{t,t' \in ({\bf Z}/d{\bf Z})^\times} \sum_{l \in {\bf Z}/d{\bf Z}: (l,d)=(l+k,d)=1} e_d( \frac{t}{l} - \frac{t'}{l+k} + \frac{m}{t} - \frac{m'}{t'} ).

    In principle, any such sum can be controlled by invoking Deligne’s proof of the Weil conjectures in arbitrary dimension (which, roughly speaking, establishes the analogue of the Riemann hypothesis for arbitrary varieties over finite fields), although in the higher dimensional setting some algebraic geometry is needed to ensure that one gets the full “square root cancellation” for these exponential sums. (For the particular sum above, the necessary details were worked out by Birch and Bombieri.) As such, this part of the argument is by far the least elementary component of the whole. Zhang’s original argument cleverly exploited some additional cancellation in the above exponential sums that goes beyond the naive square root cancellation heuristic; more recently, an alternate argument of Fouvry, Kowalski, Michel, and Nelson uses bounds on a slightly different higher-dimensional Kloosterman-type sum to obtain results that give better values of {\varpi,\delta,\sigma}. We have also been able to improve upon these estimates by exploiting some additional averaging that was left unused by the previous arguments.
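
As promised in step 4, here is a small numerical illustration of an incomplete Kloosterman-type sum: we compute {\sum_{n \leq N, (n,d)=1} e_d(c/n)} for an illustrative prime modulus and compare it against the square-root cancellation scale {\sqrt{d} \log d} that completion of sums plus the Weil bound would suggest. The parameters are arbitrary:

import cmath, math

def incomplete_kloosterman(c, d, N):
    e_d = lambda t: cmath.exp(2j * math.pi * t / d)
    return sum(e_d(c * pow(n, -1, d) % d)
               for n in range(1, N + 1) if math.gcd(n, d) == 1)

d, c = 1009, 7  # illustrative prime modulus
for N in (100, 300, 600, 1000):
    print(N, abs(incomplete_kloosterman(c, d, N)), math.sqrt(d) * math.log(d))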

As of this time of writing, our understanding of the first three stages of Zhang’s argument (getting from {DHL[k_0,2]} to {B[H]}, getting from {MPZ[\varpi,\delta]} or {MPZ'[\varpi,\delta]} to {DHL[k_0,2]}, and getting to {MPZ[\varpi,\delta]} or {MPZ'[\varpi,\delta]} from Type I, Type II, and Type III estimates) is quite satisfactory, with the implications here being about as efficient as one could hope for with current methods, although one could still hope to get some small improvements in parameters by wringing out some of the last few inefficiencies. The remaining major sources of improvement to the parameters are gains in the Type I, II, and III estimates; we are currently in the process of making such improvements, but it will still take some time before they are fully optimised.

Below the fold I will discuss (mostly at an informal, non-rigorous level) the six steps above in a little more detail (full details can of course be found in the other polymath8 posts on this blog). This post will also serve as a new research thread, as the previous threads were getting quite lengthy.

Read the rest of this entry »

As in previous posts, we use the following asymptotic notation: {x} is a parameter going off to infinity, and all quantities may depend on {x} unless explicitly declared to be “fixed”. The asymptotic notation {O(), o(), \ll} is then defined relative to this parameter. A quantity {q} is said to be of polynomial size if one has {q = O(x^{O(1)})}, and bounded if {q=O(1)}. We also write {X \lessapprox Y} for {X \ll x^{o(1)} Y}, and {X \sim Y} for {X \ll Y \ll X}.

The purpose of this post is to collect together all the various refinements to the second half of Zhang’s paper that have been obtained as part of the polymath8 project and present them as a coherent argument (though not fully self-contained, as we will need some lemmas from previous posts).

In order to state the main result, we need to recall some definitions.

Definition 1 (Singleton congruence class system) Let {I \subset {\bf R}}, and let {{\mathcal S}_I} denote the square-free numbers whose prime factors lie in {I}. A singleton congruence class system on {I} is a collection {{\mathcal C} = (\{a_q\})_{q \in {\mathcal S}_I}} of primitive residue classes {a_q \in ({\bf Z}/q{\bf Z})^\times} for each {q \in {\mathcal S}_I}, obeying the Chinese remainder theorem property

\displaystyle  a_{qr}\ (qr) = (a_q\ (q)) \cap (a_r\ (r)) \ \ \ \ \ (1)

whenever {q,r \in {\mathcal S}_I} are coprime. We say that such a system {{\mathcal C}} has controlled multiplicity if the quantity

\displaystyle  \tau_{\mathcal C}(n) := |\{ q \in {\mathcal S}_I: n = a_q\ (q) \}|

obeys the estimate

\displaystyle  \sum_{C^{-1} x \leq n \leq Cx: n = a\ (r)} \tau_{\mathcal C}(n)^2 \ll \frac{x}{r} \tau(r)^{O(1)} \log^{O(1)} x + x^{o(1)} \ \ \ \ \ (2)

for any fixed {C>1} and any congruence class {a\ (r)} with {r \in {\mathcal S}_I}. Here {\tau} is the divisor function.

Next we need a relaxation of the concept of {y}-smoothness.

Definition 2 (Dense divisibility) Let {y \geq 1}. A positive integer {q} is said to be {y}-densely divisible if, for every {1 \leq R \leq q}, there exists a factor of {q} in the interval {[y^{-1} R, R]}. We let {{\mathcal D}_y} denote the set of {y}-densely divisible positive integers.

Now we present a strengthened version {MPZ'[\varpi,\delta]} of the Motohashi-Pintz-Zhang conjecture {MPZ[\varpi,\delta]}, which depends on parameters {0 < \varpi < 1/4} and {0 < \delta < 1/4}.

Conjecture 3 ({MPZ'[\varpi,\delta]}) Let {I \subset {\bf R}}, and let {(\{a_q\})_{q \in {\mathcal S}_I}} be a congruence class system with controlled multiplicity. Then

\displaystyle  \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}: q< x^{1/2+2\varpi}} |\Delta(\Lambda 1_{[x,2x]}; a_q)| \ll x \log^{-A} x \ \ \ \ \ (3)

for any fixed {A>0}, where {\Lambda} is the von Mangoldt function.

The difference between this conjecture and the weaker conjecture {MPZ[\varpi,\delta]} is that the modulus {q} is constrained to be {x^\delta}-densely divisible rather than {x^\delta}-smooth (note that {I} is no longer constrained to lie in {[1,x^\delta]}). This relaxation of the smoothness condition improves the Goldston-Pintz-Yildirim type sieving needed to deduce {DHL[k_0,2]} from {MPZ'[\varpi,\delta]}; see this previous post.

The main result we will establish is

Theorem 4 {MPZ'[\varpi,\delta]} holds for any {\varpi,\delta>0} with

\displaystyle  148\varpi+33\delta < 1. \ \ \ \ \ (4)

This improves upon previous constraints of {87\varpi + 17 \delta < \frac{1}{4}} (see this blog comment) and {207 \varpi + 43 \delta < \frac{1}{4}} (see Theorem 13 of this previous post), which were also only established for {MPZ[\varpi,\delta]} instead of {MPZ'[\varpi,\delta]}. Inserting Theorem 4 into the Pintz sieve from this previous post gives {DHL[k_0,2]} for {k_0 = 1467} (see this blog comment), which when inserted in turn into newly set up tables of narrow prime tuples gives infinitely many prime gaps of separation at most {H = 12,012}.

Read the rest of this entry »
