
Tamar Ziegler and I have just uploaded to the arXiv our paper “Narrow progressions in the primes“, submitted to the special issue “Analytic Number Theory” in honor of the 60th birthday of Helmut Maier. The results here are vaguely reminiscent of the recent progress on bounded gaps in the primes, but use different methods.

About a decade ago, Ben Green and I showed that the primes contained arbitrarily long arithmetic progressions: given any ${k}$, one could find a progression ${n, n+r, \dots, n+(k-1)r}$ with ${r>0}$ consisting entirely of primes. In fact we showed the same statement was true if the primes were replaced by any subset of the primes of positive relative density.

A little while later, Tamar Ziegler and I obtained the following generalisation: given any ${k}$ and any polynomials ${P_1,\dots,P_k: {\bf Z} \rightarrow {\bf Z}}$ with ${P_1(0)=\dots=P_k(0)}$, one could find a “polynomial progression” ${n+P_1(r),\dots,n+P_k(r)}$ with ${r>0}$ consisting entirely of primes. Furthermore, we could make this progression somewhat “narrow” by taking ${r = n^{o(1)}}$ (where ${o(1)}$ denotes a quantity that goes to zero as ${n}$ goes to infinity). Again, the same statement also applies if the primes were replaced by a subset of positive relative density. My previous result with Ben corresponds to the linear case ${P_i(r) = (i-1)r}$.

In this paper we were able to make the progressions a bit narrower still: given any ${k}$ and any polynomials ${P_1,\dots,P_k: {\bf Z} \rightarrow {\bf Z}}$ with ${P_1(0)=\dots=P_k(0)}$, one could find a “polynomial progression” ${n+P_1(r),\dots,n+P_k(r)}$ with ${r>0}$ consisting entirely of primes, and such that ${r \leq \log^L n}$, where ${L}$ depends only on ${k}$ and ${P_1,\dots,P_k}$ (in fact it depends only on ${k}$ and the degrees of ${P_1,\dots,P_k}$). The result is still true if the primes are replaced by a subset of positive density ${\delta}$, but unfortunately in our arguments we must then let ${L}$ depend on ${\delta}$. However, in the linear case ${P_i(r) = (i-1)r}$, we were able to make ${L}$ independent of ${\delta}$ (although it is still somewhat large, of the order of ${k 2^k}$).
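As a toy illustration of the kind of pattern involved (a finite search of course says nothing about the theorem itself, which concerns arbitrarily large ${n}$), one can brute-force a small instance of the progression ${n, n+r, n+r^2}$ in the primes; the function names below are ad hoc:

```python
def is_prime(m):
    """Trial division; adequate for small illustrative searches."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def find_polynomial_progression(polys, n_max, r_max):
    """First (n, r) with r >= 2 such that n + P(r) is prime for every P."""
    for n in range(2, n_max):
        for r in range(2, r_max):  # r >= 2 avoids the degenerate case r = r^2
            if all(is_prime(n + P(r)) for P in polys):
                return n, r
    return None

# P_1(r) = 0, P_2(r) = r, P_3(r) = r^2 gives the pattern n, n+r, n+r^2.
print(find_polynomial_progression([lambda r: 0, lambda r: r, lambda r: r * r], 200, 20))
# → (2, 3): the progression 2, 5, 11
```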

The polylogarithmic factor is somewhat necessary: using an upper bound sieve, one can easily construct a subset of the primes of density, say, ${90\%}$, whose arithmetic progressions ${n,n+r,\dots,n+(k-1)r}$ of length ${k}$ all obey the lower bound ${r \gg \log^{k-1} n}$. On the other hand, the prime tuples conjecture predicts that if one works with the actual primes rather than dense subsets of the primes, then one should have infinitely many length ${k}$ arithmetic progressions of bounded width for any fixed ${k}$. The ${k=2}$ case of this is precisely the celebrated theorem of Yitang Zhang that was the focus of the recently concluded Polymath8 project here. The higher ${k}$ case is conjecturally true, but appears to be out of reach of known methods. (Using the multidimensional Selberg sieve of Maynard, one can get ${m}$ primes inside an interval of length ${O( \exp(O(m)) )}$, but this is such a sparse set of primes that one would not expect to find even a progression of length three within such an interval.)

The argument in the previous paper was unable to obtain a polylogarithmic bound on the width of the progressions, due to the reliance on a certain technical “correlation condition” on a certain Selberg sieve weight ${\nu}$. This correlation condition required one to control arbitrarily long correlations of ${\nu}$, which was not compatible with a bounded value of ${L}$ (particularly if one wanted to keep ${L}$ independent of ${\delta}$).

However, thanks to recent advances in this area by Conlon, Fox, and Zhao (who introduced a very nice “densification” technique), it is now possible (in principle, at least) to delete this correlation condition from the arguments. Conlon-Fox-Zhao did this for my original theorem with Ben; and in the current paper we apply the densification method to our previous argument to similarly remove the correlation condition. This method does not fully eliminate the need to control arbitrarily long correlations, but allows most of the factors in such a long correlation to be bounded, rather than merely controlled by an unbounded weight such as ${\nu}$. This turns out to be significantly easier to control, although in the non-linear case we still unfortunately had to make ${L}$ large compared to ${\delta}$ due to a certain “clearing denominators” step arising from the complicated nature of the Gowers-type uniformity norms that we were using to control polynomial averages. We believe though that this is an artefact of our method, and one should be able to prove our theorem with an ${L}$ that is uniform in ${\delta}$.

Here is a simple instance of the densification trick in action. Suppose that one wishes to establish an estimate of the form

$\displaystyle {\bf E}_n {\bf E}_r f(n) g(n+r) h(n+r^2) = o(1) \ \ \ \ \ (1)$

for some real-valued functions ${f,g,h}$ which are bounded in magnitude by a weight function ${\nu}$, but which are not expected to be bounded; this average will naturally arise when trying to locate the pattern ${(n,n+r,n+r^2)}$ in a set such as the primes. Here I will be vague as to exactly what range the parameters ${n,r}$ are being averaged over. Suppose that the factor ${g}$ (say) has enough uniformity that one can already show a smallness bound

$\displaystyle {\bf E}_n {\bf E}_r F(n) g(n+r) H(n+r^2) = o(1) \ \ \ \ \ (2)$

whenever ${F, H}$ are bounded functions. (One should think of ${F,H}$ as being like the indicator functions of “dense” sets, in contrast to ${f,h}$ which are like the normalised indicator functions of “sparse” sets). The bound (2) cannot be directly applied to control (1) because of the unbounded (or “sparse”) nature of ${f}$ and ${h}$. However one can “densify” ${f}$ and ${h}$ as follows. Since ${f}$ is bounded in magnitude by ${\nu}$, we can bound the left-hand side of (1) as

$\displaystyle {\bf E}_n \nu(n) | {\bf E}_r g(n+r) h(n+r^2) |.$

The weight function ${\nu}$ will be normalised so that ${{\bf E}_n \nu(n) = O(1)}$, so by the Cauchy-Schwarz inequality it suffices to show that

$\displaystyle {\bf E}_n \nu(n) | {\bf E}_r g(n+r) h(n+r^2) |^2 = o(1).$

The left-hand side expands as

$\displaystyle {\bf E}_n {\bf E}_r {\bf E}_s \nu(n) g(n+r) h(n+r^2) g(n+s) h(n+s^2).$

Now, it turns out that after an enormous (but finite) number of applications of the Cauchy-Schwarz inequality to steadily eliminate the ${g,h}$ factors, as well as a certain “polynomial forms condition” hypothesis on ${\nu}$, one can show that

$\displaystyle {\bf E}_n {\bf E}_r {\bf E}_s (\nu-1)(n) g(n+r) h(n+r^2) g(n+s) h(n+s^2) = o(1).$

(Because of the polynomial shifts, this requires a method known as “PET induction”, but let me skip over this point here.) In view of this estimate, we now just need to show that

$\displaystyle {\bf E}_n {\bf E}_r {\bf E}_s g(n+r) h(n+r^2) g(n+s) h(n+s^2) = o(1).$

Now we can reverse the previous steps. First, we collapse back to

$\displaystyle {\bf E}_n | {\bf E}_r g(n+r) h(n+r^2) |^2 = o(1).$

One can bound ${|{\bf E}_r g(n+r) h(n+r^2)|}$ by ${{\bf E}_r \nu(n+r) \nu(n+r^2)}$, which can be shown to be “bounded on average” in a suitable sense (e.g. bounded ${L^4}$ norm) via the aforementioned polynomial forms condition. Because of this and the Hölder inequality, the above estimate is equivalent to

$\displaystyle {\bf E}_n | {\bf E}_r g(n+r) h(n+r^2) | = o(1).$

By setting ${F}$ to be the signum of ${{\bf E}_r g(n+r) h(n+r^2)}$, this is equivalent to

$\displaystyle {\bf E}_n {\bf E}_r F(n) g(n+r) h(n+r^2) = o(1).$

This is halfway between (1) and (2); the sparsely supported function ${f}$ has been replaced by its “densification” ${F}$, but we have not yet densified ${h}$ to ${H}$. However, one can shift ${n}$ by ${r^2}$ and repeat the above arguments to achieve a similar densification of ${h}$, at which point one has reduced (1) to (2).

Kevin Ford, Ben Green, Sergei Konyagin, and I have just posted to the arXiv our preprint “Large gaps between consecutive prime numbers“. This paper concerns the “opposite” problem to that considered by the recently concluded Polymath8 project, which was concerned with very small values of the prime gap ${p_{n+1}-p_n}$. Here, we wish to consider the largest prime gap ${G(X) = p_{n+1}-p_n}$ that one can find in the interval ${[X] = \{1,\dots,X\}}$ as ${X}$ goes to infinity.

Finding lower bounds on ${G(X)}$ is more or less equivalent to locating long strings of consecutive composite numbers that are not too large compared to the length of the string. A classic (and quite well known) construction here starts with the observation that for any natural number ${n}$, the consecutive numbers ${n!+2, n!+3,\dots,n!+n}$ are all composite, because each ${n!+i}$, ${i=2,\dots,n}$ is divisible by some prime ${p \leq n}$, while being strictly larger than that prime ${p}$. From this and Stirling’s formula, it is not difficult to obtain the bound

$\displaystyle G(X) \gg \frac{\log X}{\log\log X}. \ \ \ \ \ (1)$
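The factorial construction is easy to check directly (an illustrative computation, not from the paper):

```python
import math

def composite_run(n):
    """Return the run n!+2, ..., n!+n, pairing each entry with its obvious divisor:
    n!+i is divisible by i (and strictly exceeds i), hence composite."""
    f = math.factorial(n)
    return [(f + i, i) for i in range(2, n + 1)]

run = composite_run(8)  # 7 consecutive composites starting at 8! + 2 = 40322
assert all(value % divisor == 0 for value, divisor in run)
print(run[0])  # → (40322, 2)
```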

A more efficient bound comes from the prime number theorem: there are only ${(1+o(1)) \frac{X}{\log X}}$ primes up to ${X}$, so just from the pigeonhole principle one can locate a string of consecutive composite numbers up to ${X}$ of length at least ${(1-o(1)) \log X}$, thus

$\displaystyle G(X) \gtrsim \log X \ \ \ \ \ (2)$

where we use ${X \gtrsim Y}$ or ${Y \lesssim X}$ as shorthand for ${X \geq (1-o(1)) Y}$ or ${Y \leq (1+o(1)) X}$.
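The pigeonhole bound (2) is easy to probe numerically; in fact the maximal gap below ${X}$ already exceeds ${\log X}$ comfortably in small ranges (a small sieve computation, purely illustrative):

```python
import math

def max_prime_gap(X):
    """Largest gap between consecutive primes up to X, via the sieve of Eratosthenes."""
    sieve = [True] * (X + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(X ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(range(p * p, X + 1, p))
    primes = [i for i, flag in enumerate(sieve) if flag]
    return max(q - p for p, q in zip(primes, primes[1:]))

X = 10 ** 6
# The observed G(X) is an order of magnitude above log X ≈ 13.8 at this height.
print(max_prime_gap(X), math.log(X))
```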

What about upper bounds? The Cramér random model predicts that the primes up to ${X}$ are distributed like a random subset of ${\{1,\dots,X\}}$ of density ${1/\log X}$. Using this model, Cramér arrived at the conjecture

$\displaystyle G(X) \ll \log^2 X.$

In fact, if one makes the extremely optimistic assumption that the random model perfectly describes the behaviour of the primes, one would arrive at the even more precise prediction

$\displaystyle G(X) \sim \log^2 X.$

However, it is no longer widely believed that this optimistic version of the conjecture is true, due to some additional irregularities in the primes coming from the basic fact that large primes cannot be divisible by very small primes. Using the Maier matrix method to capture some of this irregularity, Granville was led to the conjecture that

$\displaystyle G(X) \gtrsim 2e^{-\gamma} \log^2 X$

(note that ${2e^{-\gamma} = 1.1229\dots}$ is slightly larger than ${1}$). For comparison, the known upper bounds on ${G(X)}$ are quite weak; unconditionally one has ${G(X) \ll X^{0.525}}$ by the work of Baker, Harman, and Pintz, and even on the Riemann hypothesis one only gets down to ${G(X) \ll X^{1/2} \log X}$, as shown by Cramér (a slight improvement is also possible if one additionally assumes the pair correlation conjecture; see this article of Heath-Brown and the references therein).

This conjecture remains out of reach of current methods. In 1931, Westzynthius managed to improve the bound (2) slightly to

$\displaystyle G(X) \gg \frac{\log\log\log X}{\log\log\log\log X} \log X ,$

which Erdős in 1935 improved to

$\displaystyle G(X) \gg \frac{\log\log X}{(\log\log\log X)^2} \log X$

and Rankin in 1938 improved slightly further to

$\displaystyle G(X) \gtrsim c \frac{\log\log X (\log\log\log\log X)}{(\log\log\log X)^2} \log X \ \ \ \ \ (3)$

with ${c=1/3}$. Remarkably, this rather strange bound then proved extremely difficult to improve upon; until recently, the only improvements were to the constant ${c}$, which was raised to ${c=\frac{1}{2} e^\gamma}$ in 1963 by Schönhage, to ${c= e^\gamma}$ in 1963 by Rankin, to ${c = 1.31256 e^\gamma}$ by Maier and Pomerance, and finally to ${c = 2e^\gamma}$ in 1997 by Pintz.

Erdős listed the problem of making ${c}$ arbitrarily large as one of his favourite open problems, even offering (“somewhat rashly”, in his words) a cash prize for the solution. Our main result answers this question in the affirmative:

Theorem 1 The bound (3) holds for arbitrarily large ${c>0}$.

In principle, we thus have a bound of the form

$\displaystyle G(X) \geq f(X) \frac{\log\log X (\log\log\log\log X)}{(\log\log\log X)^2} \log X$

for some ${f(X)}$ that grows to infinity. Unfortunately, due to various sources of ineffectivity in our methods, we cannot provide any explicit rate of growth on ${f(X)}$ at all.

We decided to announce this result the old-fashioned way, as part of a research lecture; more precisely, Ben Green announced the result in his ICM lecture this Tuesday. (The ICM staff have very efficiently put up video of his talks (and most of the other plenary and prize talks) online; Ben’s talk is here, with the announcement beginning at about 0:48. Note a slight typo in his slides, in that the exponent of ${\log\log\log X}$ in the denominator is ${3}$ instead of ${2}$.) Ben’s lecture slides may be found here.

By coincidence, an independent proof of this theorem has also been obtained very recently by James Maynard.

I discuss our proof method below the fold.

The 2014 Fields medallists have just been announced as (in alphabetical order of surname) Artur Avila, Manjul Bhargava, Martin Hairer, and Maryam Mirzakhani (see also these nice video profiles for the winners, which is a new initiative of the IMU and the Simons foundation). This time four years ago, I wrote a blog post discussing one result from each of the 2010 medallists; I thought I would try to repeat the exercise here, although the work of the medallists this time around is a little bit further away from my own direct area of expertise than last time, and so my discussion will unfortunately be a bit superficial (and possibly not completely accurate) in places. As before, I am picking these results based on my own idiosyncratic tastes, and they should not be viewed as necessarily being the “best” work of these medallists. (See also the press releases for Avila, Bhargava, Hairer, and Mirzakhani.)

Artur Avila works in dynamical systems and in the study of Schrödinger operators. The work of Avila that I am most familiar with is his solution with Svetlana Jitomirskaya of the “ten martini problem” of Kac, so named (according to Barry Simon) because Kac offered ten martinis for its solution. The problem involves perhaps the simplest example of a Schrödinger operator with non-trivial spectral properties, namely the almost Mathieu operator ${H^{\lambda,\alpha}_\omega: \ell^2({\bf Z}) \rightarrow \ell^2({\bf Z})}$ defined for parameters ${\alpha,\omega \in {\bf R}/{\bf Z}}$ and ${\lambda>0}$ by a discrete one-dimensional Schrödinger operator with cosine potential:

$\displaystyle (H^{\lambda,\alpha}_\omega u)_n := u_{n+1} + u_{n-1} + 2\lambda (\cos 2\pi(\omega+n\alpha)) u_n.$

This is a bounded self-adjoint operator and thus has a spectrum ${\sigma( H^{\lambda,\alpha}_\omega )}$ that is a compact subset of the real line; it arises in a number of physical contexts, most notably in the theory of the integer quantum Hall effect, though I will not discuss these applications here. Remarkably, the structure of this spectrum depends crucially on the Diophantine properties of the frequency ${\alpha}$. For instance, if ${\alpha = p/q}$ is a rational number, then the operator is periodic with period ${q}$, and then basic (discrete) Floquet theory tells us that the spectrum is simply the union of ${q}$ (possibly touching) intervals. But for irrational ${\alpha}$ (in which case the spectrum is independent of the phase ${\omega}$), the situation is much more fractal in nature, for instance in the critical case ${\lambda=1}$ the spectrum (as a function of ${\alpha}$) gives rise to the Hofstadter butterfly. The “ten martini problem” asserts that for every irrational ${\alpha}$ and every choice of coupling constant ${\lambda > 0}$, the spectrum is homeomorphic to a Cantor set. Prior to the work of Avila and Jitomirskaya, there were a number of partial results on this problem, notably the result of Puig establishing Cantor spectrum for a full measure set of parameters ${(\lambda,\alpha)}$, as well as results requiring a perturbative hypothesis, such as ${\lambda}$ being very small or very large. The result was also already known for ${\alpha}$ being either very close to rational (i.e. a Liouville number) or very far from rational (a Diophantine number), although the analyses for these two cases failed to meet in the middle, leaving some cases untreated. 
The argument uses a wide variety of existing techniques, both perturbative and non-perturbative, to attack this problem, as well as an amusing argument by contradiction: they assume (in certain regimes) that the spectrum fails to be a Cantor set, and use this hypothesis to obtain additional Lipschitz control on the spectrum (as a function of the frequency ${\alpha}$), which they can then use (after much effort) to improve existing arguments and conclude that the spectrum was in fact Cantor after all!
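One can get a rough numerical feel for this spectrum by truncating the operator to an ${N \times N}$ symmetric tridiagonal matrix (Dirichlet boundary conditions) and counting eigenvalues with a Sturm sequence. This is only a finite-volume caricature of the true spectrum, and has nothing to do with the actual proof; the code and names below are ad hoc:

```python
import math

def eig_count_below(diag, x):
    """Number of eigenvalues less than x for the symmetric tridiagonal matrix
    with the given diagonal and all off-diagonal entries 1 (Sturm/inertia count:
    the pivots of the LDL^T factorisation of A - xI are computed by recurrence,
    and the number of negative pivots equals the number of eigenvalues below x)."""
    count = 0
    q = diag[0] - x
    if q < 0:
        count += 1
    for d in diag[1:]:
        if q == 0.0:
            q = 1e-300  # standard tiny perturbation to avoid division by zero
        q = (d - x) - 1.0 / q
        if q < 0:
            count += 1
    return count

# Truncated almost Mathieu operator with lambda = 1 and a golden-ratio frequency.
N, lam, alpha, omega = 200, 1.0, (math.sqrt(5) - 1) / 2, 0.0
diag = [2 * lam * math.cos(2 * math.pi * (omega + n * alpha)) for n in range(N)]
# By Gershgorin's theorem, all eigenvalues lie in [-2*lam - 2, 2*lam + 2]:
print(eig_count_below(diag, 5.0), eig_count_below(diag, -5.0))  # → 200 0
```

Bisecting in ${x}$ recovers approximate band edges, and repeating the computation over many rational ${\alpha}$ reproduces a crude Hofstadter butterfly.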

Manjul Bhargava produces amazingly beautiful mathematics, though most of it is outside of my own area of expertise. One part of his work that touches on an area of my own interest (namely, random matrix theory) is his ongoing work with many co-authors on modeling (both conjecturally and rigorously) the statistics of various key number-theoretic features of elliptic curves (such as their rank, their Selmer group, or their Tate-Shafarevich groups). For instance, with Kane, Lenstra, Poonen, and Rains, Manjul has proposed a very general random matrix model that predicts all of these statistics (for instance, predicting that the ${p}$-component of the Tate-Shafarevich group is distributed like the cokernel of a certain random ${p}$-adic matrix, very much in the spirit of the Cohen-Lenstra heuristics discussed in this previous post). But what is even more impressive is that Manjul and his coauthors have been able to verify several non-trivial fragments of this model (e.g. showing that certain moments have the predicted asymptotics), giving for the first time non-trivial upper and lower bounds for various statistics, for instance obtaining lower bounds on how often an elliptic curve has rank ${0}$ or rank ${1}$, leading most recently (in combination with existing work of Gross-Zagier and of Kolyvagin, among others) to his amazing result with Skinner and Zhang that at least ${66\%}$ of all elliptic curves over ${{\bf Q}}$ (ordered by height) obey the Birch and Swinnerton-Dyer conjecture. Previously it was not even known that a positive proportion of curves obeyed the conjecture. 
This is still a fair ways from resolving the conjecture fully (in particular, the situation with the presumably small number of curves of rank ${2}$ and higher is still very poorly understood, and the theory of Gross-Zagier and Kolyvagin that this work relies on, which was initially only available for ${{\bf Q}}$, has only been extended to totally real number fields thus far, by the work of Zhang), but it certainly does provide hope that the conjecture could be within reach in a statistical sense at least.

Martin Hairer works at the interface between probability and partial differential equations, and in particular in the theory of stochastic differential equations (SDEs). The result of his that is closest to my own interests is his remarkable demonstration with Jonathan Mattingly of a unique invariant measure for the two-dimensional stochastically forced Navier-Stokes equation

$\displaystyle \partial_t u + (u \cdot \nabla u) = \nu \Delta u - \nabla p + \xi$

$\displaystyle \nabla \cdot u = 0$

on the two-torus ${({\bf R}/{\bf Z})^2}$, where ${\xi}$ is a Gaussian field that forces a fixed set of frequencies. It is expected that for any reasonable choice of initial data, the solution to this equation should asymptotically be distributed according to Kolmogorov’s power law, as discussed in this previous post. This is still far from established rigorously (although there are some results in this direction for dyadic models, see e.g. this paper of Cheskidov, Shvydkoy, and Friedlander). However, Hairer and Mattingly were able to show that there was a unique probability distribution to which almost every choice of initial data would converge asymptotically; by the ergodic theorem, this is equivalent to demonstrating the existence and uniqueness of an invariant measure for the flow. Existence can be established using standard methods, but uniqueness is much more difficult. One of the standard routes to uniqueness is to establish a “strong Feller property” that enforces some continuity on the transition operators; among other things, this would mean that two ergodic probability measures with intersecting supports would in fact have a non-trivial common component, contradicting the ergodic theorem (which forces different ergodic measures to be mutually singular). Since all ergodic measures for Navier-Stokes can be seen to contain the origin in their support, this would give uniqueness. Unfortunately, the strong Feller property is unlikely to hold in the infinite-dimensional phase space for Navier-Stokes; but Hairer and Mattingly develop a clean abstract substitute for this property, which they call the asymptotic strong Feller property, which is again a regularity property on the transition operator; this in turn is then demonstrated by a careful application of Malliavin calculus.

Maryam Mirzakhani has mostly focused on the geometry and dynamics of Teichmüller-type moduli spaces, such as the moduli space of Riemann surfaces with a fixed genus and a fixed number of cusps (or with a fixed number of boundaries that are geodesics of a prescribed length). These spaces have an incredibly rich structure, ranging from geometric structure (such as the Kähler geometry given by the Weil-Petersson metric), to dynamical structure (through the action of the mapping class group on this and related spaces), to algebraic structure (viewing these spaces as algebraic varieties), and are thus connected to many other objects of interest in geometry and dynamics. For instance, by developing a new recursive formula for the Weil-Petersson volume of this space, Mirzakhani was able to asymptotically count the number of simple prime geodesics of length up to some threshold ${L}$ in a hyperbolic surface (or more precisely, she obtained asymptotics for the number of such geodesics in a given orbit of the mapping class group); the answer turns out to be polynomial in ${L}$, in contrast to the much larger class of non-simple prime geodesics, whose asymptotics are exponential in ${L}$ (the “prime number theorem for geodesics”, developed in a classic series of works by Delsarte, Huber, Selberg, and Margulis); she also used this formula to establish a new proof of a conjecture of Witten on intersection numbers that was first proven by Kontsevich. More recently, in two lengthy papers with Eskin and with Eskin-Mohammadi, Mirzakhani established rigidity theorems for the action of ${SL_2({\bf R})}$ on such moduli spaces that are close analogues of Ratner’s celebrated rigidity theorems for unipotently generated groups (discussed in this previous blog post). 
Ratner’s theorems are already notoriously difficult to prove, and rely very much on the polynomial stability properties of unipotent flows; in this even more complicated setting, the unipotent flows are no longer tractable, and Mirzakhani instead uses a recent “exponential drift” method of Benoist and Quint as a substitute. Ratner’s theorems are incredibly useful for all sorts of problems connected to homogeneous dynamics, and the analogous theorems established by Mirzakhani, Eskin, and Mohammadi have a similarly broad range of applications, for instance in counting periodic billiard trajectories in rational polygons.

I’ve just uploaded to the arXiv the D.H.J. Polymath paper “Variants of the Selberg sieve, and bounded intervals containing many primes“, which is the second paper to be produced from the Polymath8 project (the first one being discussed here). We’ll refer to this latter paper here as the Polymath8b paper, and the former as the Polymath8a paper. As with Polymath8a, the Polymath8b paper is concerned with the smallest asymptotic prime gap

$\displaystyle H_1 := \liminf_{n \rightarrow \infty}(p_{n+1}-p_n),$

where ${p_n}$ denotes the ${n^{th}}$ prime, as well as the more general quantities

$\displaystyle H_m := \liminf_{n \rightarrow \infty}(p_{n+m}-p_n).$
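For a concrete feel for these quantities, one can tabulate the smallest values of ${p_{n+m}-p_n}$ among the primes in a finite range (a finite computation of course says nothing about the liminf itself); the code below is purely illustrative:

```python
def primes_up_to(X):
    """Sieve of Eratosthenes."""
    sieve = [True] * (X + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(X ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(range(p * p, X + 1, p))
    return [i for i, flag in enumerate(sieve) if flag]

primes = primes_up_to(100000)

def min_m_gap(m):
    """Smallest p_{n+m} - p_n among the tabulated primes (skipping p_1 = 2)."""
    return min(primes[i + m] - primes[i] for i in range(1, len(primes) - m))

print(min_m_gap(1), min_m_gap(2))  # → 2 4  (twin primes; the triple 3, 5, 7)
```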

In the breakthrough paper of Goldston, Pintz, and Yildirim, the bound ${H_1 \leq 16}$ was obtained under the strong hypothesis of the Elliott-Halberstam conjecture. An unconditional bound on ${H_1}$, however, remained elusive until the celebrated work of Zhang last year, who showed that

$\displaystyle H_1 \leq 70{,}000{,}000.$

The Polymath8a paper then improved this to ${H_1 \leq 4{,}680}$. After that, Maynard introduced a new multidimensional Selberg sieve argument that gave the substantial improvement

$\displaystyle H_1 \leq 600$

unconditionally, and ${H_1 \leq 12}$ on the Elliott-Halberstam conjecture; furthermore, bounds on ${H_m}$ for higher ${m}$ were obtained for the first time, and specifically that ${H_m \ll m^3 e^{4m}}$ for all ${m \geq 1}$, with the improvements ${H_2 \leq 600}$ and ${H_m \ll m^3 e^{2m}}$ on the Elliott-Halberstam conjecture. (I had independently discovered the multidimensional sieve idea, although I did not obtain Maynard’s specific numerical results, and my asymptotic bounds were a bit weaker.)

In Polymath8b, we obtain some further improvements. Unconditionally, we have ${H_1 \leq 246}$ and ${H_m \ll m e^{(4 - \frac{28}{157}) m}}$, together with some explicit bounds on ${H_2,H_3,H_4,H_5}$; on the Elliott-Halberstam conjecture we have ${H_m \ll m e^{2m}}$ and some numerical improvements to the ${H_2,H_3,H_4,H_5}$ bounds; and assuming the generalised Elliott-Halberstam conjecture we have the bound ${H_1 \leq 6}$, which is best possible from sieve-theoretic methods thanks to the parity problem obstruction.

There were a variety of methods used to establish these results. Maynard’s paper obtained a criterion for bounding ${H_m}$ which reduced to finding a good solution to a certain multidimensional variational problem. When the dimension parameter ${k}$ was relatively small (e.g. ${k \leq 100}$), we were able to obtain good numerical solutions either by continuing the method of Maynard (using a basis of symmetric polynomials), or by using a Krylov iteration scheme. For large ${k}$, we refined the asymptotics and obtained near-optimal solutions of the variational problem. For the ${H_1}$ bounds, we extended the reach of the multidimensional Selberg sieve (particularly under the assumption of the generalised Elliott-Halberstam conjecture) by allowing the function ${F}$ in the multidimensional variational problem to extend to a larger region of space than was previously admissible, albeit with some tricky new constraints on ${F}$ (and penalties in the variational problem). This required some unusual sieve-theoretic manipulations, notably an “epsilon trick”, ultimately relying on the elementary inequality ${(a+b)^2 \geq a^2 + 2ab}$, that allowed one to get non-trivial lower bounds for sums such as ${\sum_n (a(n)+b(n))^2}$ even if the sum ${\sum_n b(n)^2}$ had no non-trivial estimates available; and a way to estimate divisor sums such as ${\sum_{n\leq x} \sum_{d|n} \lambda_d}$ even if ${d}$ was permitted to be comparable to or even exceed ${x}$, by using the fundamental theorem of arithmetic to factorise ${n}$ (after restricting to the case when ${n}$ is almost prime). I hope that these sieve-theoretic tricks will be useful in future work in the subject.

With this paper, the Polymath8 project is almost complete; there is still a little bit of scope to push our methods further and get some modest improvement for instance to the ${H_1 \leq 246}$ bound, but this would require a substantial amount of effort, and it is probably best to instead wait for some new breakthrough in the subject to come along. One final task we are performing is to write up a retrospective article on both the 8a and 8b experiences, an incomplete writeup of which can be found here. If anyone wishes to contribute some commentary on these projects (whether you were an active contributor, an occasional contributor, or a silent “lurker” in the online discussion), please feel free to do so in the comments to this post.

Two of the most famous open problems in additive prime number theory are the twin prime conjecture and the binary Goldbach conjecture. They have quite similar forms:

• Twin prime conjecture The equation ${p_1 - p_2 = 2}$ has infinitely many solutions with ${p_1,p_2}$ prime.
• Binary Goldbach conjecture The equation ${p_1 + p_2 = N}$ has at least one solution with ${p_1,p_2}$ prime for any given even ${N \geq 4}$.
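Both statements are easy to probe numerically in small ranges, which is evidence rather than proof; the names below are ad hoc:

```python
def is_prime(m):
    """Trial division; adequate for small ranges."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

# Twin prime solutions p_1 - p_2 = 2 with p_2 < 1000.
twins = [(p + 2, p) for p in range(2, 1000) if is_prime(p) and is_prime(p + 2)]

def goldbach_pair(N):
    """First decomposition N = p_1 + p_2 with both entries prime, or None."""
    for p in range(2, N // 2 + 1):
        if is_prime(p) and is_prime(N - p):
            return p, N - p
    return None

assert all(goldbach_pair(N) is not None for N in range(4, 1000, 2))
print(len(twins), goldbach_pair(100))  # e.g. 100 = 3 + 97
```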

In view of this similarity, it is not surprising that the partial results on these two conjectures have tracked each other fairly closely; the twin prime conjecture is generally considered slightly easier than the binary Goldbach conjecture, but broadly speaking any progress made on one of the conjectures has also led to a comparable amount of progress on the other. (For instance, Chen’s theorem has a version for the twin prime conjecture, and a version for the binary Goldbach conjecture.) Also, the notorious parity obstruction is present in both problems, preventing a solution to either conjecture by almost all known methods (see this previous blog post for more discussion).

In this post, I would like to note a divergence from this general principle, with regards to bounded error versions of these two conjectures:

• Twin prime with bounded error The inequalities ${0 < p_1 - p_2 < H}$ have infinitely many solutions with ${p_1,p_2}$ prime for some absolute constant ${H}$.
• Binary Goldbach with bounded error The inequalities ${N \leq p_1+p_2 \leq N+H}$ have at least one solution with ${p_1,p_2}$ prime for any sufficiently large ${N}$ and some absolute constant ${H}$.

The first of these statements is now a well-known theorem of Zhang, and the Polymath8b project hosted on this blog has managed to lower ${H}$ to ${H=246}$ unconditionally, and to ${H=6}$ assuming the generalised Elliott-Halberstam conjecture. However, the second statement remains open; the best result that the Polymath8b project could manage in this direction is that (assuming GEH) at least one of the binary Goldbach conjecture with bounded error, or the twin prime conjecture with no error, had to be true.

All the known proofs of Zhang’s theorem proceed through sieve-theoretic means. Basically, they take as input equidistribution results that control the size of discrepancies such as

$\displaystyle \Delta(f; a\ (q)) := \sum_{x \leq n \leq 2x; n=a\ (q)} f(n) - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x} f(n) \ \ \ \ \ (1)$

for various congruence classes ${a\ (q)}$ and various arithmetic functions ${f}$, e.g. ${f(n) = \Lambda(n+h_i)}$ (or more generally ${f(n) = \alpha * \beta(n+h_i)}$ for various ${\alpha,\beta}$). After taking some carefully chosen linear combinations of these discrepancies, and using the trivial positivity lower bound

$\displaystyle a_n \geq 0 \hbox{ for all } n \implies \sum_n a_n \geq 0 \ \ \ \ \ (2)$

one eventually obtains (for suitable ${H}$) a non-trivial lower bound of the form

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) 1_A(n) > 0$

where ${\nu}$ is some weight function, and ${A}$ is the set of ${n}$ such that there are at least two primes in the interval ${[n,n+H]}$. This implies at least one solution to the inequalities ${0 < p_1 - p_2 < H}$ with ${p_1,p_2 \sim x}$, and Zhang’s theorem follows.
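As a concrete (and much simplified) illustration of the discrepancies (1), here is a direct computation with ${f}$ taken to be the indicator function of the primes rather than a weighted sum like ${\Lambda}$; the function names are ad hoc:

```python
from math import gcd

def is_prime(m):
    """Trial division; adequate for this small range."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def discrepancy(x, q, a):
    """Delta(f; a (q)) over x <= n <= 2x, with f the prime indicator."""
    ns = range(x, 2 * x + 1)
    total = sum(1 for n in ns if is_prime(n))
    in_class = sum(1 for n in ns if n % q == a and is_prime(n))
    phi_q = sum(1 for b in range(1, q + 1) if gcd(b, q) == 1)
    return in_class - total / phi_q

# Primes in [10^4, 2*10^4] split nearly evenly between the residue classes
# 1 and 3 mod 4, so both discrepancies are small compared to the class counts.
print(discrepancy(10000, 4, 1), discrepancy(10000, 4, 3))
```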

In a similar vein, one could hope to use bounds on discrepancies such as (1) (for ${x}$ comparable to ${N}$), together with the trivial lower bound (2), to obtain (for sufficiently large ${N}$, and suitable ${H}$) a non-trivial lower bound of the form

$\displaystyle \sum_{n \leq N} \nu(n) 1_B(n) > 0 \ \ \ \ \ (3)$

for some weight function ${\nu}$, where ${B}$ is the set of ${n}$ such that there is at least one prime in each of the intervals ${[n,n+H]}$ and ${[N-n-H,N-n]}$. This would imply the binary Goldbach conjecture with bounded error.

However, the parity obstruction blocks such a strategy from working (for much the same reason that it blocks any bound of the form ${H \leq 4}$ in Zhang’s theorem, as discussed in the Polymath8b paper.) The reason is as follows. The sieve-theoretic arguments are linear with respect to the ${n}$ summation, and as such, any such sieve-theoretic argument would automatically also work in a weighted setting in which the ${n}$ summation is weighted by some non-negative weight ${\omega(n) \geq 0}$. More precisely, if one could control the weighted discrepancies

$\displaystyle \Delta(f\omega; a\ (q)) = \sum_{x \leq n \leq 2x; n=a\ (q)} f(n) \omega(n) - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x} f(n) \omega(n)$

to essentially the same accuracy as the unweighted discrepancies (1), then thanks to the trivial weighted version

$\displaystyle a_n \geq 0 \hbox{ for all } n \implies \sum_n a_n \omega(n) \geq 0$

of (2), any sieve-theoretic argument that was capable of proving (3) would also be capable of proving the weighted estimate

$\displaystyle \sum_{n \leq N} \nu(n) 1_B(n) \omega(n) > 0. \ \ \ \ \ (4)$

However, (4) may be defeated by a suitable choice of weight ${\omega}$, namely

$\displaystyle \omega(n) := \prod_{i=1}^H (1 + \lambda(n) \lambda(n+i)) \times \prod_{j=0}^H (1 - \lambda(n) \lambda(N-n-j))$

where ${n \mapsto \lambda(n)}$ is the Liouville function, which records the parity of the number of prime factors of a given number ${n}$ (thus ${\lambda(n)}$ is ${+1}$ or ${-1}$ according to whether that number is even or odd). Since ${\lambda(n)^2 = 1}$, one can expand out ${\omega(n)}$ as the sum of ${1}$ and a finite number of other terms, each of which consists of the product of two or more translates (or reflections) of ${\lambda}$. But from the Möbius randomness principle (or its analogue for the Liouville function), such products of ${\lambda}$ are widely expected to be essentially orthogonal to any arithmetic function ${f(n)}$ arising from a single multiplicative function such as ${\Lambda}$, even on very short arithmetic progressions. As such, replacing ${1}$ by ${\omega(n)}$ in (1) should have a negligible effect on the discrepancy. On the other hand, in order for ${\omega(n)}$ to be non-zero, ${\lambda(n+i)}$ has to have the same sign as ${\lambda(n)}$ for ${1 \leq i \leq H}$, and hence the opposite sign to ${\lambda(N-n-j)}$ for ${0 \leq j \leq H}$. Since primes ${p}$ have ${\lambda(p) = -1}$, this means that ${n+i}$ and ${N-n-j}$ cannot simultaneously be prime for any ${0 \leq i,j \leq H}$, and so ${1_B(n) \omega(n)}$ vanishes identically, contradicting (4). This indirectly rules out any modification of the Goldston-Pintz-Yildirim/Zhang method for establishing the binary Goldbach conjecture with bounded error.
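The sign computation here can be verified mechanically. The following purely illustrative Python sketch (with toy values of ${N}$ and ${H}$, and ad hoc function names) confirms that ${1_B(n)\omega(n)}$ vanishes identically:

```python
def big_omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    return count + (1 if n > 1 else 0)

def liouville(n):
    return (-1) ** big_omega(n)

def is_prime(n):
    return n > 1 and big_omega(n) == 1

N, H = 2000, 4  # toy values, for illustration only

def omega_weight(n):
    """The weight omega(n) from the displayed formula above."""
    w = 1
    for i in range(1, H + 1):
        w *= 1 + liouville(n) * liouville(n + i)
    for j in range(0, H + 1):
        w *= 1 - liouville(n) * liouville(N - n - j)
    return w

def in_B(n):
    """At least one prime in each of [n, n+H] and [N-n-H, N-n]."""
    return any(is_prime(n + i) for i in range(H + 1)) and \
           any(is_prime(N - n - j) for j in range(H + 1))

# 1_B(n) * omega(n) vanishes identically, as the sign argument predicts
for n in range(H + 1, N - 2 * H):
    assert omega_weight(n) == 0 or not in_B(n)
```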

The above argument is not watertight, and one could envisage some ways around this problem. One of them is that the Möbius randomness principle could simply be false, in which case the parity obstruction vanishes. A good example of this is the result of Heath-Brown that shows that if there are infinitely many Siegel zeroes (which is a strong violation of the Möbius randomness principle), then the twin prime conjecture holds. Another way around the obstruction is to start controlling the discrepancy (1) for functions ${f}$ that are combinations of more than one multiplicative function, e.g. ${f(n) = \Lambda(n) \Lambda(n+2)}$. However, controlling such functions looks to be at least as difficult as the twin prime conjecture (which is morally equivalent to obtaining non-trivial lower bounds for ${\sum_{x \leq n \leq 2x} \Lambda(n) \Lambda(n+2)}$). A third option is not to use a sieve-theoretic argument, but to try a different method (e.g. the circle method). However, most other known methods also exhibit linearity in the "${n}$" variable and I would suspect they would be vulnerable to a similar obstruction. (In any case, the circle method specifically has some other difficulties in tackling binary problems, as discussed in this previous post.)

Let ${\bar{{\bf Q}}}$ be the algebraic closure of ${{\bf Q}}$, that is to say the field of algebraic numbers. We fix an embedding of ${\bar{{\bf Q}}}$ into ${{\bf C}}$, giving rise to a complex absolute value ${z \mapsto |z|}$ for algebraic numbers ${z \in \bar{{\bf Q}}}$.

Let ${\alpha \in \bar{{\bf Q}}}$ be of degree ${D > 1}$, so that ${\alpha}$ is irrational. A classical theorem of Liouville gives the quantitative bound

$\displaystyle |\alpha - \frac{p}{q}| \geq c \frac{1}{|q|^D} \ \ \ \ \ (1)$

on how well the irrational number ${\alpha}$ can be approximated by rational numbers ${p/q}$, where ${c>0}$ depends on ${\alpha,D}$ but not on ${p,q}$. Indeed, if one lets ${\alpha = \alpha_1, \alpha_2, \dots, \alpha_D}$ be the Galois conjugates of ${\alpha}$, then the quantity ${\prod_{i=1}^D |q \alpha_i - p|}$ is a non-zero natural number divided by a constant (namely the leading coefficient of the minimal polynomial of ${\alpha}$), and so we have the trivial lower bound

$\displaystyle \prod_{i=1}^D |q \alpha_i - p| \geq c$

from which the bound (1) easily follows. A well known corollary of the bound (1) is that Liouville numbers are automatically transcendental.
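For a concrete instance of (1), take ${\alpha = \sqrt{2}}$, so ${D = 2}$; since ${|2q^2 - p^2| \geq 1}$ whenever ${p/q \neq \sqrt{2}}$, one can take ${c = 1/(2\sqrt{2}+1)}$ for the nearest approximations ${p/q}$. The quick Python check below (purely illustrative) verifies this bound against the best numerator ${p}$ for each denominator ${q}$:

```python
from math import sqrt

# Liouville's bound for alpha = sqrt(2) (degree D = 2):
# |sqrt(2) - p/q| >= c / q^2 with c = 1/(2*sqrt(2) + 1),
# since |2q^2 - p^2| >= 1 and sqrt(2) + p/q <= 2*sqrt(2) + 1 here.
alpha = sqrt(2)
c = 1 / (2 * sqrt(2) + 1)  # roughly 0.26
for q in range(1, 100001):
    p = round(q * alpha)  # best possible numerator for this denominator
    # q^2 |alpha - p/q| = q |q*alpha - p| stays bounded away from zero
    assert q * abs(q * alpha - p) >= c
```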

The famous theorem of Thue, Siegel and Roth improves the bound (1) to

$\displaystyle |\alpha - \frac{p}{q}| \geq c \frac{1}{|q|^{2+\epsilon}} \ \ \ \ \ (2)$

for any ${\epsilon>0}$ and rationals ${\frac{p}{q}}$, where ${c>0}$ depends on ${\alpha,\epsilon}$ but not on ${p,q}$. Apart from the ${\epsilon}$ in the exponent and the implied constant, this bound is optimal, as can be seen from Dirichlet's theorem. This theorem is a good example of the ineffectivity phenomenon that affects a large portion of modern number theory: the constant ${c>0}$ is known to exist, but there is no explicit lower bound for it in terms of the coefficients of the polynomial defining ${\alpha}$ (in contrast to (1), for which an effective bound may be easily established). This is ultimately due to the reliance on the "dueling conspiracy" (or "repulsion phenomenon") strategy. We do not as yet have a good way to rule out one counterexample to (2), in which ${\frac{p}{q}}$ is far closer to ${\alpha}$ than ${\frac{1}{|q|^{2+\epsilon}}}$; however we can rule out two such counterexamples, by playing them off of each other.
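Dirichlet's theorem itself has a short pigeonhole proof which one can run as code. The sketch below (illustrative, with ad hoc names) finds, for a given irrational ${\alpha}$ and cutoff ${N}$, a denominator ${q \leq N}$ with ${|\alpha - p/q| \leq 1/(qN) \leq 1/q^2}$:

```python
def dirichlet_approx(alpha, N):
    """Pigeonhole proof of Dirichlet's theorem: returns (p, q) with
    1 <= q <= N and |alpha - p/q| <= 1/(q*N) <= 1/q^2."""
    fracs = sorted(((i * alpha) % 1.0, i) for i in range(N + 1))
    # the N+1 fractional parts cut the unit circle into gaps summing to 1,
    # so some gap (including the wraparound one) is at most 1/(N+1) < 1/N
    pairs = list(zip(fracs, fracs[1:])) + \
            [(fracs[-1], (fracs[0][0] + 1.0, fracs[0][1]))]
    gap, i1, i2 = min((f2 - f1, i1, i2) for (f1, i1), (f2, i2) in pairs)
    q = abs(i2 - i1)          # then |q*alpha - p| equals this small gap
    p = round(q * alpha)
    return p, q

alpha = 2 ** (1 / 3)
p, q = dirichlet_approx(alpha, 1000)
assert 1 <= q <= 1000 and abs(alpha - p / q) <= 1 / q ** 2
```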

A powerful strengthening of the Thue-Siegel-Roth theorem is given by the subspace theorem, first proven by Schmidt and then generalised further by several authors. To motivate the theorem, first observe that the Thue-Siegel-Roth theorem may be rephrased as a bound of the form

$\displaystyle | \alpha p - \beta q | \times | \alpha' p - \beta' q | \geq c (1 + |p| + |q|)^{-\epsilon} \ \ \ \ \ (3)$

for any algebraic numbers ${\alpha,\beta,\alpha',\beta'}$ with ${(\alpha,\beta)}$ and ${(\alpha',\beta')}$ linearly independent (over the algebraic numbers), and any ${(p,q) \in {\bf Z}^2}$ and ${\epsilon>0}$, with the exception when ${\alpha,\beta}$ or ${\alpha',\beta'}$ are rationally dependent (i.e. one is a rational multiple of the other), in which case one has to remove some lines (i.e. subspaces in ${{\bf Q}^2}$) of rational slope from the space ${{\bf Z}^2}$ of pairs ${(p,q)}$ to which the bound (3) does not apply (namely, those lines for which the left-hand side vanishes). Here ${c>0}$ can depend on ${\alpha,\beta,\alpha',\beta',\epsilon}$ but not on ${p,q}$. More generally, we have

Theorem 1 (Schmidt subspace theorem) Let ${d}$ be a natural number. Let ${L_1,\dots,L_d: \bar{{\bf Q}}^d \rightarrow \bar{{\bf Q}}}$ be linearly independent linear forms. Then for any ${\epsilon>0}$, one has the bound

$\displaystyle \prod_{i=1}^d |L_i(x)| \geq c (1 + \|x\| )^{-\epsilon}$

for all ${x \in {\bf Z}^d}$, outside of a finite number of proper subspaces of ${{\bf Q}^d}$, where

$\displaystyle \| (x_1,\dots,x_d) \| := \max( |x_1|, \dots, |x_d| )$

and ${c>0}$ depends on ${\epsilon, d}$ and the coefficients of the linear forms ${L_1,\dots,L_d}$, but is independent of ${x}$.

Being a generalisation of the Thue-Siegel-Roth theorem, it is unsurprising that the known proofs of the subspace theorem are also ineffective with regards to the constant ${c}$. (However, the number of exceptional subspaces may be bounded effectively; cf. the situation with the Skolem-Mahler-Lech theorem, discussed in this previous blog post.) Once again, the lower bound here is basically sharp except for the ${\epsilon}$ factor and the implied constant: given any ${\delta_1,\dots,\delta_d > 0}$ with ${\delta_1 \dots \delta_d = 1}$, a simple volume packing argument (the same one used to prove the Dirichlet approximation theorem) shows that for any sufficiently large ${N \geq 1}$, one can find integers ${x_1,\dots,x_d \in [-N,N]}$, not all zero, such that

$\displaystyle |L_i(x)| \ll \delta_i$

for all ${i=1,\dots,d}$. Thus one can get ${\prod_{i=1}^d |L_i(x)|}$ comparable to ${1}$ in many different ways.
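In the simplest case ${d = 2}$, ${L_1(x,y) = x - \sqrt{2} y}$, ${L_2(x,y) = x + \sqrt{2} y}$, the product of the forms is the integer ${x^2 - 2y^2}$, so away from the trivial zero it has magnitude at least ${1}$, and the Pell solutions achieve exactly ${1}$ infinitely often. A quick exhaustive check (purely illustrative):

```python
# For L1(x,y) = x - sqrt(2)*y and L2(x,y) = x + sqrt(2)*y,
# the product L1*L2 = x^2 - 2y^2 is a nonzero integer for (x,y) != (0,0),
# so prod |L_i| >= 1; the Pell solutions achieve exactly 1.
pell = [(x, y) for x in range(1, 201) for y in range(1, 201)
        if abs(x * x - 2 * y * y) == 1]
assert pell == [(1, 1), (3, 2), (7, 5), (17, 12), (41, 29), (99, 70)]
```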

There are important generalisations of the subspace theorem to other number fields than the rationals (and to other valuations than the Archimedean valuation ${z \mapsto |z|}$); we will develop one such generalisation below.

The subspace theorem is one of many finiteness theorems in Diophantine geometry; in this case, it is the number of exceptional subspaces which is finite. It turns out that finiteness theorems are very compatible with the language of nonstandard analysis. (See this previous blog post for a review of the basics of nonstandard analysis, and in particular for the nonstandard interpretation of asymptotic notation such as ${\ll}$ and ${o()}$.) The reason for this is that a standard set ${X}$ is finite if and only if it contains no strictly nonstandard elements (that is to say, elements of ${{}^* X \backslash X}$). This makes for a clean formulation of finiteness theorems in the nonstandard setting. For instance, the standard form of Bezout’s theorem asserts that if ${P(x,y), Q(x,y)}$ are coprime polynomials over some field, then the curves ${\{ (x,y): P(x,y) = 0\}}$ and ${\{ (x,y): Q(x,y)=0\}}$ intersect in only finitely many points. The nonstandard version of this is then

Theorem 2 (Bezout’s theorem, nonstandard form) Let ${P(x,y), Q(x,y)}$ be standard coprime polynomials. Then there are no strictly nonstandard solutions to ${P(x,y)=Q(x,y)=0}$.

Now we reformulate Theorem 1 in nonstandard language. We need a definition:

Definition 3 (General position) Let ${K \subset L}$ be nested fields. A point ${x = (x_1,\dots,x_d)}$ in ${L^d}$ is said to be in ${K}$-general position if it is not contained in any hyperplane of ${L^d}$ definable over ${K}$, or equivalently if one has

$\displaystyle a_1 x_1 + \dots + a_d x_d = 0 \iff a_1=\dots = a_d = 0$

for any ${a_1,\dots,a_d \in K}$.

Theorem 4 (Schmidt subspace theorem, nonstandard version) Let ${d}$ be a standard natural number. Let ${L_1,\dots,L_d: \bar{{\bf Q}}^d \rightarrow \bar{{\bf Q}}}$ be linearly independent standard linear forms. Let ${x \in {}^* {\bf Z}^d}$ be a tuple of nonstandard integers which is in ${{\bf Q}}$-general position (in particular, this forces ${x}$ to be strictly nonstandard). Then one has

$\displaystyle \prod_{i=1}^d |L_i(x)| \gg \|x\|^{-o(1)},$

where we extend ${L_i}$ from ${\bar{{\bf Q}}}$ to ${{}^* \bar{{\bf Q}}}$ (and also similarly extend ${\| \|}$ from ${{\bf Z}^d}$ to ${{}^* {\bf Z}^d}$) in the usual fashion.

Observe that (as is usual when translating to nonstandard analysis) some of the epsilons and quantifiers that are present in the standard version become hidden in the nonstandard framework, being moved inside concepts such as “strictly nonstandard” or “general position”. We remark that as ${x}$ is in ${{\bf Q}}$-general position, it is also in ${\bar{{\bf Q}}}$-general position (as an easy Galois-theoretic argument shows), and the requirement that the ${L_1,\dots,L_d}$ are linearly independent is thus equivalent to ${L_1(x),\dots,L_d(x)}$ being ${\bar{{\bf Q}}}$-linearly independent.

Exercise 1 Verify that Theorem 1 and Theorem 4 are equivalent. (Hint: there are only countably many proper subspaces of ${{\bf Q}^d}$.)

We will not prove the subspace theorem here, but instead focus on a particular application of the subspace theorem, namely to counting integer points on curves. In this paper of Corvaja and Zannier, the subspace theorem was used to give a new proof of the following basic result of Siegel:

Theorem 5 (Siegel’s theorem on integer points) Let ${P \in {\bf Q}[x,y]}$ be an irreducible polynomial of two variables, such that the affine plane curve ${C := \{ (x,y): P(x,y)=0\}}$ either has genus at least one, or has at least three points on the line at infinity, or both. Then ${C}$ has only finitely many integer points ${(x,y) \in {\bf Z}^2}$.

This is a finiteness theorem, and as such may be easily converted to a nonstandard form:

Theorem 6 (Siegel’s theorem, nonstandard form) Let ${P \in {\bf Q}[x,y]}$ be a standard irreducible polynomial of two variables, such that the affine plane curve ${C := \{ (x,y): P(x,y)=0\}}$ either has genus at least one, or has at least three points on the line at infinity, or both. Then ${C}$ does not contain any strictly nonstandard integer points ${(x_*,y_*) \in {}^* {\bf Z}^2 \backslash {\bf Z}^2}$.

Note that Siegel's theorem can fail for genus zero curves that meet the line at infinity in just one or two points; the key examples here are the graphs ${\{ (x,y): y - f(x) = 0\}}$ for a polynomial ${f \in {\bf Z}[x]}$, and the Pell equation curves ${\{ (x,y): x^2 - dy^2 = 1 \}}$. Siegel's theorem can be compared with the more difficult theorem of Faltings, which establishes finiteness of rational points (not just integer points), but now needs the stricter requirement that the curve ${C}$ has genus at least two (to avoid the additional counterexample of elliptic curves of positive rank, which have infinitely many rational points).
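As a quick illustration of the Pell counterexample: the map ${(x,y) \mapsto (3x+4y, 2x+3y)}$ (multiplication by the fundamental unit ${3+2\sqrt{2}}$ of ${{\bf Z}[\sqrt{2}]}$) preserves the equation ${x^2-2y^2=1}$, so this genus zero curve has infinitely many integer points:

```python
# The Pell curve x^2 - 2y^2 = 1 (genus 0, two points at infinity) has
# infinitely many integer points: iterating the unit map keeps us on the curve.
x, y = 3, 2
for _ in range(20):
    x, y = 3 * x + 4 * y, 2 * x + 3 * y
    assert x * x - 2 * y * y == 1
```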

The standard proofs of Siegel’s theorem rely on a combination of the Thue-Siegel-Roth theorem and a number of results on abelian varieties (notably the Mordell-Weil theorem). The Corvaja-Zannier argument rebalances the difficulty of the argument by replacing the Thue-Siegel-Roth theorem by the more powerful subspace theorem (in fact, they need one of the stronger versions of this theorem alluded to earlier), while greatly reducing the reliance on results on abelian varieties. Indeed, for curves with three or more points at infinity, no theory from abelian varieties is needed at all, while for the remaining cases, one mainly needs the existence of the Abel-Jacobi embedding, together with a relatively elementary theorem of Chevalley-Weil which is used in the proof of the Mordell-Weil theorem, but is significantly easier to prove.

The Corvaja-Zannier argument (together with several further applications of the subspace theorem) is presented nicely in this Bourbaki expose of Bilu. To establish the theorem in full generality requires a certain amount of algebraic number theory machinery, such as the theory of valuations on number fields, or of relative discriminants between such number fields. However, the basic ideas can be presented without much of this machinery by focusing on simple special cases of Siegel’s theorem. For instance, we can handle irreducible cubics that meet the line at infinity at exactly three points ${[1,\alpha_1,0], [1,\alpha_2,0], [1,\alpha_3,0]}$:

Theorem 7 (Siegel’s theorem with three points at infinity) Siegel’s theorem holds when the irreducible polynomial ${P(x,y)}$ takes the form

$\displaystyle P(x,y) = (y - \alpha_1 x) (y - \alpha_2 x) (y - \alpha_3 x) + Q(x,y)$

for some quadratic polynomial ${Q \in {\bf Q}[x,y]}$ and some distinct algebraic numbers ${\alpha_1,\alpha_2,\alpha_3}$.

Proof: We use the nonstandard formalism. Suppose for sake of contradiction that we can find a strictly nonstandard integer point ${(x_*,y_*) \in {}^* {\bf Z}^2 \backslash {\bf Z}^2}$ on a curve ${C := \{ (x,y): P(x,y)=0\}}$ of the indicated form. As this point is infinitesimally close to the line at infinity, ${y_*/x_*}$ must be infinitesimally close to one of ${\alpha_1,\alpha_2,\alpha_3}$; without loss of generality we may assume that ${y_*/x_*}$ is infinitesimally close to ${\alpha_1}$.

We now use a version of the polynomial method, to find some polynomials of controlled degree that vanish to high order on the “arm” of the cubic curve ${C}$ that asymptotes to ${[1,\alpha_1,0]}$. More precisely, let ${D \geq 3}$ be a large integer (actually ${D=3}$ will already suffice here), and consider the ${\bar{{\bf Q}}}$-vector space ${V}$ of polynomials ${R(x,y) \in \bar{{\bf Q}}[x,y]}$ of degree at most ${D}$, and of degree at most ${2}$ in the ${y}$ variable; this space has dimension ${3D}$. Also, as one traverses the arm ${y/x \rightarrow \alpha_1}$ of ${C}$, any polynomial ${R}$ in ${V}$ grows at a rate of at most ${D}$, that is to say ${R}$ has a pole of order at most ${D}$ at the point at infinity ${[1,\alpha_1,0]}$. By performing Laurent expansions around this point (which is a non-singular point of ${C}$, as the ${\alpha_i}$ are assumed to be distinct), we may thus find a basis ${R_1, \dots, R_{3D}}$ of ${V}$, with the property that ${R_j}$ has a pole of order at most ${D+1-j}$ at ${[1,\alpha_1,0]}$ for each ${j=1,\dots,3D}$.

From the control of the pole at ${[1,\alpha_1,0]}$, we have

$\displaystyle |R_j(x_*,y_*)| \ll (|x_*|+|y_*|)^{D+1-j}$

for all ${j=1,\dots,3D}$. The exponents here become negative for ${j > D+1}$, and on multiplying them all together we see that

$\displaystyle \prod_{j=1}^{3D} |R_j(x_*,y_*)| \ll (|x_*|+|y_*|)^{3D(D+1) - \frac{3D(3D+1)}{2}}.$

This exponent is negative for ${D}$ large enough (or just take ${D=3}$). If we expand

$\displaystyle R_j(x_*,y_*) = \sum_{a+b \leq D; b \leq 2} \alpha_{j,a,b} x_*^a y_*^b$

for some algebraic numbers ${\alpha_{j,a,b}}$, then we thus have

$\displaystyle \prod_{j=1}^{3D} |\sum_{a+b \leq D; b \leq 2} \alpha_{j,a,b} x_*^a y_*^b| \ll (|x_*|+|y_*|)^{-\epsilon}$

for some standard ${\epsilon>0}$. Note that the ${3D}$-dimensional vectors ${(\alpha_{j,a,b})_{a+b \leq D; b \leq 2}}$ are linearly independent in ${{\bf C}^{3D}}$, because the ${R_j}$ are linearly independent in ${V}$. Applying the Schmidt subspace theorem in the contrapositive, we conclude that the ${3D}$-tuple ${( x_*^a y_*^b )_{a+b \leq D; b \leq 2} \in {}^* {\bf Z}^{3D}}$ is not in ${{\bf Q}}$-general position. That is to say, one has a non-trivial constraint of the form

$\displaystyle \sum_{a+b \leq D; b \leq 2} c_{a,b} x_*^a y_*^b = 0 \ \ \ \ \ (4)$

for some standard rational coefficients ${c_{a,b}}$, not all zero. But, as ${P}$ is irreducible and cubic in ${y}$, it has no common factor with the standard polynomial ${\sum_{a+b \leq D; b \leq 2} c_{a,b} x^a y^b}$, so by Bezout’s theorem (Theorem 2) the constraint (4) only has standard solutions, contradicting the strictly nonstandard nature of ${(x_*,y_*)}$. $\Box$

Exercise 2 Rewrite the above argument so that it makes no reference to nonstandard analysis. (In this case, the rewriting is quite straightforward; however, there will be a subsequent argument in which the standard version is significantly messier than the nonstandard counterpart, which is the reason why I am working with the nonstandard formalism in this blog post.)

A similar argument works for higher degree curves that meet the line at infinity in three or more points, though if the curve has singularities at infinity then it becomes convenient to rely on the Riemann-Roch theorem to control the dimension of the analogue of the space ${V}$. Note that when there are only two or fewer points at infinity, though, one cannot get the negative exponent of ${-\epsilon}$ needed to usefully apply the subspace theorem. To deal with this case we require some additional tricks. For simplicity we focus on the case of Mordell curves, although it will be convenient to work with more general number fields ${{\bf Q} \subset K \subset \bar{{\bf Q}}}$ than the rationals:

Theorem 8 (Siegel’s theorem for Mordell curves) Let ${k}$ be a non-zero integer. Then there are only finitely many integer solutions ${(x,y) \in {\bf Z}^2}$ to ${y^2 - x^3 = k}$. More generally, for any number field ${K}$, and any nonzero ${k \in K}$, there are only finitely many algebraic integer solutions ${(x,y) \in {\mathcal O}_K^2}$ to ${y^2-x^3=k}$, where ${{\mathcal O}_K}$ is the ring of algebraic integers in ${K}$.

Again, we will establish the nonstandard version. We need some additional notation:

Definition 9

• We define an almost rational integer to be a nonstandard ${x \in {}^* {\bf Q}}$ such that ${Mx \in {}^* {\bf Z}}$ for some standard positive integer ${M}$, and write ${{\bf Q} {}^* {\bf Z}}$ for the ${{\bf Q}}$-algebra of almost rational integers.
• If ${K}$ is a standard number field, we define an almost ${K}$-integer to be a nonstandard ${x \in {}^* K}$ such that ${Mx \in {}^* {\mathcal O}_K}$ for some standard positive integer ${M}$, and write ${K {}^* {\bf Z} = K {}^* {\mathcal O}_K}$ for the ${K}$-algebra of almost ${K}$-integers.
• We define an almost algebraic integer to be a nonstandard ${x \in {}^* \bar{{\bf Q}}}$ such that ${Mx}$ is a nonstandard algebraic integer for some standard positive integer ${M}$, and write ${\bar{{\bf Q}} {}^* {\bf Z}}$ for the ${\bar{{\bf Q}}}$-algebra of almost algebraic integers.
Theorem 10 (Siegel for Mordell, nonstandard version) Let ${k}$ be a non-zero standard algebraic number. Then the curve ${\{ (x,y): y^2 - x^3 = k \}}$ does not contain any strictly nonstandard almost algebraic integer point.

Another way of phrasing this theorem is that if ${x,y}$ are strictly nonstandard almost algebraic integers, then ${y^2-x^3}$ is either strictly nonstandard or zero.

Exercise 3 Verify that Theorem 8 and Theorem 10 are equivalent.

Due to all the ineffectivity, our proof does not supply any bound on the solutions ${x,y}$ in terms of ${k}$, even if one removes all references to nonstandard analysis. It is a conjecture of Hall (a special case of the notorious ABC conjecture) that one has the bound ${|x| \ll_\epsilon |k|^{2+\epsilon}}$ for all ${\epsilon>0}$ (or equivalently ${|y| \ll_\epsilon |k|^{3+\epsilon}}$), but even the weaker conjecture that ${x,y}$ are of polynomial size in ${k}$ is open. (The best known bounds are of exponential nature, and are proven using a version of Baker’s method: see for instance this text of Sprindzuk.)
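Theorem 8 can at least be explored numerically for specific ${k}$. The brute-force search below (purely illustrative; the function name and search bound are my own) is consistent with the classical fact that ${(3,\pm 5)}$ are the only integer points of ${y^2 - x^3 = -2}$:

```python
from math import isqrt

def mordell_integer_points(k, X):
    """All (x, y) in Z^2 with y^2 - x^3 = k and |x| <= X."""
    pts = []
    for x in range(-X, X + 1):
        t = x ** 3 + k
        if t >= 0:
            y = isqrt(t)
            if y * y == t:
                pts.extend([(x, y)] + ([(x, -y)] if y else []))
    return pts

# k = -2: a brute-force search finds only the classical points (3, +-5)
assert mordell_integer_points(-2, 1000) == [(3, 5), (3, -5)]
```

Of course, such a search can never certify finiteness on its own; that is exactly what Theorem 8 (ineffectively) provides.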

A direct repetition of the arguments used to prove Theorem 7 will not work here, because the Mordell curve ${\{ (x,y): y^2 - x^3 = k \}}$ only hits the line at infinity at one point, ${[0,1,0]}$. To get around this we will exploit the fact that the Mordell curve is an elliptic curve and thus has a group law on it. We will then divide all the integer points on this curve by two; as elliptic curves have four 2-torsion points, this will end up placing us in a situation like Theorem 7, with four points at infinity. However, there is an obstruction: it is not obvious that dividing an integer point on the Mordell curve by two will produce another integer point. It turns out that this is essentially true (after enlarging the ring of integers slightly) thanks to a general principle of Chevalley and Weil, which can be worked out explicitly in the case of division by two on Mordell curves by relatively elementary means (relying mostly on unique factorisation of ideals of algebraic integers). We give the details below the fold.

Let ${V}$ be a quasiprojective variety defined over a finite field ${{\bf F}_q}$, thus for instance ${V}$ could be an affine variety

$\displaystyle V = \{ x \in {\bf A}^d: P_1(x) = \dots = P_m(x) = 0\} \ \ \ \ \ (1)$

where ${{\bf A}^d}$ is ${d}$-dimensional affine space and ${P_1,\dots,P_m: {\bf A}^d \rightarrow {\bf A}}$ are a finite collection of polynomials with coefficients in ${{\bf F}_q}$. Then one can define the set ${V[{\bf F}_q]}$ of ${{\bf F}_q}$-rational points, and more generally the set ${V[{\bf F}_{q^n}]}$ of ${{\bf F}_{q^n}}$-rational points for any ${n \geq 1}$, since ${{\bf F}_{q^n}}$ can be viewed as a field extension of ${{\bf F}_q}$. Thus for instance in the affine case (1) we have

$\displaystyle V[{\bf F}_{q^n}] := \{ x \in {\bf F}_{q^n}^d: P_1(x) = \dots = P_m(x) = 0\}.$

The Weil conjectures are concerned with understanding the number

$\displaystyle S_n := |V[{\bf F}_{q^n}]| \ \ \ \ \ (2)$

of ${{\bf F}_{q^n}}$-rational points of a variety ${V}$. The first of these conjectures was proven by Dwork, and can be phrased as follows.

Theorem 1 (Rationality of the zeta function) Let ${V}$ be a quasiprojective variety defined over a finite field ${{\bf F}_q}$, and let ${S_n}$ be given by (2). Then there exist a finite number of algebraic integers ${\alpha_1,\dots,\alpha_k, \beta_1,\dots,\beta_{k'} \in O_{\overline{{\bf Q}}}}$ (known as characteristic values of ${V}$), such that

$\displaystyle S_n = \alpha_1^n + \dots + \alpha_k^n - \beta_1^n - \dots - \beta_{k'}^n$

for all ${n \geq 1}$.

After cancelling, we may of course assume that ${\alpha_i \neq \beta_j}$ for any ${i=1,\dots,k}$ and ${j=1,\dots,k'}$, and then it is easy to see (as we will see below) that the ${\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'}}$ become uniquely determined up to permutations of the ${\alpha_1,\dots,\alpha_k}$ and ${\beta_1,\dots,\beta_{k'}}$. Since ${S_n}$ is a rational integer (i.e. an element of ${{\bf Z}}$) rather than merely an algebraic integer (i.e. an element of the ring of integers ${O_{\overline{{\bf Q}}}}$ of the algebraic closure ${\overline{{\bf Q}}}$ of ${{\bf Q}}$), we conclude from this uniqueness that the set of characteristic values is invariant with respect to the Galois group ${Gal(\overline{{\bf Q}} / {\bf Q} )}$. To emphasise this Galois invariance, we will not fix a specific embedding ${\iota_\infty: \overline{{\bf Q}} \rightarrow {\bf C}}$ of the algebraic numbers into the complex field ${{\bf C} = {\bf C}_\infty}$, but work with all such embeddings simultaneously. (Thus, for instance, ${\overline{{\bf Q}}}$ contains three cube roots of ${2}$, but which of these is assigned to the complex numbers ${2^{1/3}}$, ${e^{2\pi i/3} 2^{1/3}}$, ${e^{4\pi i/3} 2^{1/3}}$ will depend on the choice of embedding ${\iota_\infty}$.)
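For a toy example of Theorem 1 that can be checked by hand (or by the short brute-force count below, which is purely illustrative), take ${V}$ to be the affine circle ${x^2+y^2=1}$ over ${{\bf F}_3}$; one can check that ${S_n = 3^n - (-1)^n}$, so the characteristic values are ${\alpha_1 = 3}$ and ${\beta_1 = -1}$.

```python
# Count points on x^2 + y^2 = 1 over F_3 and over
# F_9 = F_3[t]/(t^2 + 1)  (valid since -1 is not a square mod 3).
S1 = sum(1 for x in range(3) for y in range(3) if (x * x + y * y) % 3 == 1)

F9 = [(a, b) for a in range(3) for b in range(3)]  # the element a + b*t

def mul(u, v):
    a, b = u
    c, d = v
    # (a + b t)(c + d t) with t^2 = -1
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def add(u, v):
    return ((u[0] + v[0]) % 3, (u[1] + v[1]) % 3)

S2 = sum(1 for x in F9 for y in F9 if add(mul(x, x), mul(y, y)) == (1, 0))

# matches S_n = 3^n - (-1)^n, i.e. characteristic values alpha = 3, beta = -1
assert (S1, S2) == (4, 8)
```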

An equivalent way of phrasing Dwork’s theorem is that the (${T}$-form of the) zeta function

$\displaystyle \zeta_V(T) := \exp( \sum_{n=1}^\infty \frac{S_n}{n} T^n )$

associated to ${V}$ (which is well defined as a formal power series in ${T}$, at least) is equal to a rational function of ${T}$ (with the ${\alpha_1,\dots,\alpha_k}$ and ${\beta_1,\dots,\beta_{k'}}$ being the poles and zeroes of ${\zeta_V}$ respectively). Here, we use the formal exponential

$\displaystyle \exp(X) := 1 + X + \frac{X^2}{2!} + \frac{X^3}{3!} + \dots.$

Equivalently, the (${s}$-form of the) zeta-function ${s \mapsto \zeta_V(q^{-s})}$ is a meromorphic function on the complex numbers ${{\bf C}}$ which is also periodic with period ${2\pi i/\log q}$, and which has only finitely many poles and zeroes up to this periodicity.
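To see the rationality concretely, consider the toy sequence ${S_n = 3^n - (-1)^n}$ (which arises, for instance, from the affine circle ${x^2+y^2=1}$ over ${{\bf F}_3}$); then ${\zeta_V(T)}$ should be the rational function ${(1+T)/(1-3T)}$. The illustrative sketch below verifies this with exact arithmetic, using the recursion ${m c_m = \sum_{j=1}^m S_j c_{m-j}}$ for the coefficients ${c_m}$ of ${\zeta_V}$, which comes from differentiating ${\zeta_V = \exp(\sum_n S_n T^n/n)}$:

```python
from fractions import Fraction

# S_n = 3^n - (-1)^n; then zeta_V(T) = exp(sum_n S_n T^n / n)
# should equal the rational function (1+T)/(1-3T).
M = 10
S = [0] + [3 ** n - (-1) ** n for n in range(1, M + 1)]
c = [Fraction(1)]  # c_0 = 1
for m in range(1, M + 1):
    # from zeta' = zeta * sum_n S_n T^(n-1):  m*c_m = sum_{j=1}^m S_j c_{m-j}
    c.append(sum(S[j] * c[m - j] for j in range(1, m + 1)) / m)

# coefficients of (1+T)/(1-3T): 1, then 4*3^(m-1) for m >= 1
assert c[0] == 1 and all(c[m] == 4 * 3 ** (m - 1) for m in range(1, M + 1))
```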

Dwork’s argument relies primarily on ${p}$-adic analysis – an analogue of complex analysis, but over an algebraically complete (and metrically complete) extension ${{\bf C}_p}$ of the ${p}$-adic field ${{\bf Q}_p}$, rather than over the Archimedean complex numbers ${{\bf C}}$. The argument is quite effective, and in particular gives explicit upper bounds for the number ${k+k'}$ of characteristic values in terms of the complexity of the variety ${V}$; for instance, in the affine case (1) with ${V}$ of degree ${D}$, Bombieri used Dwork’s methods (in combination with Deligne’s theorem below) to obtain the bound ${k+k' \leq (4D+9)^{2d+1}}$, and a subsequent paper of Hooley established the slightly weaker bound ${k+k' \leq (11D+11)^{d+m+2}}$ purely from Dwork’s methods (a similar bound had also been pointed out in unpublished work of Dwork). In particular, one has bounds that are uniform in the field ${{\bf F}_q}$, which is an important fact for many analytic number theory applications.

These ${p}$-adic arguments stand in contrast with Deligne’s resolution of the last (and deepest) of the Weil conjectures:

Theorem 2 (Riemann hypothesis) Let ${V}$ be a quasiprojective variety defined over a finite field ${{\bf F}_q}$, and let ${\lambda \in \overline{{\bf Q}}}$ be a characteristic value of ${V}$. Then there exists a natural number ${w}$ such that ${|\iota_\infty(\lambda)|_\infty = q^{w/2}}$ for every embedding ${\iota_\infty: \overline{{\bf Q}} \rightarrow {\bf C}}$, where ${| |_\infty}$ denotes the usual absolute value on the complex numbers ${{\bf C} = {\bf C}_\infty}$. (Informally: ${\lambda}$ and all of its Galois conjugates have complex magnitude ${q^{w/2}}$.)

To put it another way that closely resembles the classical Riemann hypothesis, all the zeroes and poles of the ${s}$-form ${s \mapsto \zeta_V(q^{-s})}$ lie on the critical lines ${\{ s \in {\bf C}: \hbox{Re}(s) = \frac{w}{2} \}}$ for ${w=0,1,2,\dots}$. (See this previous blog post for further comparison of various instantiations of the Riemann hypothesis.) Whereas Dwork uses ${p}$-adic analysis, Deligne uses the essentially orthogonal technique of ${\ell}$-adic cohomology to establish his theorem. However, ${\ell}$-adic methods can be used (via the Grothendieck-Lefschetz trace formula) to establish rationality, and conversely, in this paper of Kedlaya ${p}$-adic methods are used to establish the Riemann hypothesis. As pointed out by Kedlaya, the ${\ell}$-adic methods are tied to the intrinsic geometry of ${V}$ (such as the structure of sheaves and covers over ${V}$), while the ${p}$-adic methods are more tied to the extrinsic geometry of ${V}$ (how ${V}$ sits inside its ambient affine or projective space).

In this post, I would like to record my notes on Dwork’s proof of Theorem 1, drawing heavily on the expositions of Serre, Hooley, Koblitz, and others.

The basic strategy is to control the rational integers ${S_n}$ both in an "Archimedean" sense (embedding the rational integers inside the complex numbers ${{\bf C}_\infty}$ with the usual norm ${| |_\infty}$) as well as in the "${p}$-adic" sense, with ${p}$ the characteristic of ${{\bf F}_q}$ (embedding the integers now in the "complexification" ${{\bf C}_p}$ of the ${p}$-adic numbers ${{\bf Q}_p}$, which is equipped with a norm ${| |_p}$ that we will recall later). (This is in contrast to the methods of ${\ell}$-adic cohomology, in which one primarily works over an ${\ell}$-adic field ${{\bf Q}_\ell}$ with ${\ell \neq p,\infty}$.) The Archimedean control is trivial:

Proposition 3 (Archimedean control of ${S_n}$) With ${S_n}$ as above, and any embedding ${\iota_\infty: \overline{{\bf Q}} \rightarrow {\bf C}}$, we have

$\displaystyle |\iota_\infty(S_n)|_\infty \leq C q^{A n}$

for all ${n}$ and some ${C, A >0}$ independent of ${n}$.

Proof: Since ${S_n}$ is a rational integer, ${|\iota_\infty(S_n)|_\infty}$ is just ${|S_n|_\infty}$. By decomposing ${V}$ into affine pieces, we may assume that ${V}$ is of the affine form (1); then we trivially have ${|S_n|_\infty \leq q^{nd}}$, and the claim follows. $\Box$

Another way of thinking about this Archimedean control is that it guarantees that the zeta function ${T \mapsto \zeta_V(T)}$ can be defined holomorphically on the open disk in ${{\bf C}_\infty}$ of radius ${q^{-A}}$ centred at the origin.

The ${p}$-adic control is significantly more difficult, and is the main component of Dwork’s argument:

Proposition 4 (${p}$-adic control of ${S_n}$) With ${S_n}$ as above, and using an embedding ${\iota_p: \overline{{\bf Q}} \rightarrow {\bf C}_p}$ (defined later) with ${p}$ the characteristic of ${{\bf F}_q}$, we can find for any real ${A > 0}$ a finite number of elements ${\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'} \in {\bf C}_p}$ such that

$\displaystyle |\iota_p(S_n) - (\alpha_1^n + \dots + \alpha_k^n - \beta_1^n - \dots - \beta_{k'}^n)|_p \leq q^{-An}$

for all ${n}$.

Another way of thinking about this ${p}$-adic control is that it guarantees that the zeta function ${T \mapsto \zeta_V(T)}$ can be defined meromorphically on the entire ${p}$-adic complex field ${{\bf C}_p}$.

Proposition 4 is ostensibly much weaker than Theorem 1 because of (a) the error term of ${p}$-adic magnitude at most ${q^{-An}}$; (b) the fact that the number ${k+k'}$ of potential characteristic values here may go to infinity as ${A \rightarrow \infty}$; and (c) the potential characteristic values ${\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'}}$ only exist inside the complexified ${p}$-adics ${{\bf C}_p}$, rather than in the algebraic integers ${O_{\overline{{\bf Q}}}}$. However, it turns out that by combining the ${p}$-adic control on ${S_n}$ in Proposition 4 with the trivial control on ${S_n}$ in Proposition 3, one can obtain Theorem 1 by an elementary argument that does not use any further properties of ${S_n}$ (other than the obvious fact that the ${S_n}$ are rational integers), with the ${A}$ in Proposition 4 chosen to exceed the ${A}$ in Proposition 3. We give this argument (essentially due to Borel) below the fold.

The proof of Proposition 4 can be split into two pieces. The first piece, which can be viewed as the number-theoretic component of the proof, uses external descriptions of ${V}$ such as (1) to obtain the following decomposition of ${S_n}$:

Proposition 5 (Decomposition of ${S_n}$) With ${\iota_p}$ and ${S_n}$ as above, we can decompose ${\iota_p(S_n)}$ as a finite linear combination (over the integers) of sequences ${S'_n \in {\bf C}_p}$, such that for each such sequence ${n \mapsto S'_n}$, the zeta functions

$\displaystyle \zeta'(T) := \exp( \sum_{n=1}^\infty \frac{S'_n}{n} T^n ) = \sum_{n=0}^\infty c_n T^n$

are entire in ${{\bf C}_p}$, by which we mean that

$\displaystyle |c_n|_p^{1/n} \rightarrow 0$

as ${n \rightarrow \infty}$.

This proposition will ultimately be a consequence of the properties of the Teichmuller lifting ${\tau: \overline{{\bf F}_p}^\times \rightarrow {\bf C}_p^\times}$.
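Though not needed at this level of generality, the prime-field case of the Teichmuller lift can be computed quite concretely: working in ${{\bf Z}/p^m{\bf Z}}$ (as a finite-precision stand-in for a piece of ${{\bf C}_p}$), the sketch below, which is purely illustrative and not part of the argument, finds the unique ${(p-1)^{st}}$ root of unity congruent to a given ${a}$ mod ${p}$ by iterating the Frobenius map ${t \mapsto t^p}$ to a fixed point:

```python
def teichmuller(a, p, m):
    """Teichmuller representative of a mod p, computed in Z/p^m Z.

    The iteration t -> t^p mod p^m stabilises after at most m steps
    (each step gains one more p-adic digit of agreement); the limit t
    satisfies t^p = t mod p^m, hence t^(p-1) = 1 for t coprime to p.
    """
    mod = p ** m
    t = a % mod
    while pow(t, p, mod) != t:
        t = pow(t, p, mod)
    return t

p, m = 7, 10
for a in range(1, p):
    t = teichmuller(a, p, m)
    assert t % p == a                   # the lift reduces back to a mod p
    assert pow(t, p - 1, p ** m) == 1   # a (p-1)-st root of unity mod p^m
```

The same Frobenius-iteration idea underlies the lift ${\tau: \overline{{\bf F}_p}^\times \rightarrow {\bf C}_p^\times}$ used in the proposition, though there one must work in extensions rather than in ${{\bf Z}/p^m{\bf Z}}$.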

The second piece, which can be viewed as the “${p}$-adic complex analytic” component of the proof, relates the ${p}$-adic entire nature of a zeta function with control on the associated sequence ${S'_n}$, and can be interpreted (after some manipulation) as a ${p}$-adic version of the Weierstrass preparation theorem:

Proposition 6 (${p}$-adic Weierstrass preparation theorem) Let ${S'_n}$ be a sequence in ${{\bf C}_p}$, such that the zeta function

$\displaystyle \zeta'(T) := \exp( \sum_{n=1}^\infty \frac{S'_n}{n} T^n )$

is entire in ${{\bf C}_p}$. Then for any real ${A > 0}$, there exist a finite number of elements ${\beta_1,\dots,\beta_{k'} \in {\bf C}_p}$ such that

$\displaystyle |S'_n + \beta_1^n + \dots + \beta_{k'}^n|_p \leq q^{-An}$

for all ${n}$.

Clearly, the combination of Proposition 5 and Proposition 6 (and the non-Archimedean nature of the ${||_p}$ norm) implies Proposition 4.

Let ${{\bf F}_q}$ be a finite field of order ${q}$, a power of a prime ${p}$, and let ${C}$ be an absolutely irreducible smooth projective curve defined over ${{\bf F}_q}$ (and hence over the algebraic closure ${k := \overline{{\bf F}_q}}$ of that field). For instance, ${C}$ could be the projective elliptic curve

$\displaystyle C = \{ [x,y,z]: y^2 z = x^3 + ax z^2 + b z^3 \}$

in the projective plane ${{\bf P}^2 = \{ [x,y,z]: (x,y,z) \neq (0,0,0) \}}$, where ${a,b \in {\bf F}_q}$ are coefficients for which the discriminant ${-16(4a^3+27b^2)}$ is non-vanishing; this is the projective version of the affine elliptic curve

$\displaystyle \{ (x,y): y^2 = x^3 + ax + b \}.$

To each such curve ${C}$ one can associate a genus ${g}$, which we will define later; for instance, elliptic curves have genus ${1}$. We can also count the cardinality ${|C({\bf F}_q)|}$ of the set ${C({\bf F}_q)}$ of ${{\bf F}_q}$-points of ${C}$. The Hasse-Weil bound relates the two:

Theorem 1 (Hasse-Weil bound) ${||C({\bf F}_q)| - q - 1| \leq 2g\sqrt{q}}$.

The usual proofs of this bound proceed by first establishing a trace formula of the form

$\displaystyle |C({\bf F}_{p^n})| = p^n - \sum_{i=1}^{2g} \alpha_i^n + 1 \ \ \ \ \ (1)$

for some complex numbers ${\alpha_1,\dots,\alpha_{2g}}$ independent of ${n}$; this is in fact a special case of the Lefschetz-Grothendieck trace formula, and can be interpreted as an assertion that the zeta function associated to the curve ${C}$ is rational. The task is then to establish a bound ${|\alpha_i| \leq \sqrt{p}}$ for all ${i=1,\dots,2g}$; this (or more precisely, the slightly stronger assertion ${|\alpha_i| = \sqrt{p}}$) is the Riemann hypothesis for such curves. This can be done either by passing to the Jacobian variety of ${C}$ and using a certain duality available on the cohomology of such varieties, known as the Rosati involution; alternatively, one can pass to the product surface ${C \times C}$ and apply the Riemann-Roch theorem for that surface.
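As a numerical sanity check on Theorem 1 in the genus one case, one can count points by brute force; the sketch below (with arbitrarily chosen curve parameters) verifies ${||C({\bf F}_p)| - p - 1| \leq 2\sqrt{p}}$ for smooth Weierstrass curves over small prime fields:

```python
import math

def count_points(p, a, b):
    """Number of projective points of y^2 = x^3 + a x + b over F_p:
    the affine solutions, plus the single point at infinity [0,1,0]."""
    n = 1  # point at infinity
    for x in range(p):
        for y in range(p):
            if (y * y - (x ** 3 + a * x + b)) % p == 0:
                n += 1
    return n

for p in [5, 7, 11, 13]:
    for a in range(p):
        for b in range(p):
            if (4 * a ** 3 + 27 * b ** 2) % p == 0:
                continue  # singular curve: Hasse bound does not apply
            N = count_points(p, a, b)
            assert abs(N - p - 1) <= 2 * math.sqrt(p)  # Hasse bound, g = 1
```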

In 1969, Stepanov introduced an elementary method (a version of what is now known as the polynomial method) to count (or at least to upper bound) the quantity ${|C({\bf F}_q)|}$. The method was initially restricted to hyperelliptic curves, but was soon extended to general curves. In particular, Bombieri used this method to give a short proof of the following weaker version of the Hasse-Weil bound:

Theorem 2 (Weak Hasse-Weil bound) If ${q}$ is a perfect square, and ${q \geq (g+1)^4}$, then ${|C({\bf F}_q)| \leq q + (2g+1) \sqrt{q} + 1}$.

In fact, the bound on ${|C({\bf F}_q)|}$ can be sharpened a little bit further, as we will soon see.

Theorem 2 is only an upper bound on ${|C({\bf F}_q)|}$, but there is a Galois-theoretic trick to convert (a slight generalisation of) this upper bound to a matching lower bound, and if one then uses the trace formula (1) (and the “tensor power trick” of sending ${n}$ to infinity to control the weights ${\alpha_i}$) one can then recover the full Hasse-Weil bound. We discuss these steps below the fold.

I’ve discussed Bombieri’s proof of Theorem 2 in this previous post (in the special case of hyperelliptic curves), but now wish to present the full proof, with some minor simplifications from Bombieri’s original presentation; it is mostly elementary, with the deepest fact from algebraic geometry needed being Riemann’s inequality (a weak form of the Riemann-Roch theorem).

The first step is to reinterpret ${|C({\bf F}_q)|}$ as the number of points of intersection between two curves ${C_1,C_2}$ in the surface ${C \times C}$. Indeed, if we define the Frobenius endomorphism ${\hbox{Frob}_q}$ on any projective space by

$\displaystyle \hbox{Frob}_q( [x_0,\dots,x_n] ) := [x_0^q, \dots, x_n^q]$

then this map preserves the curve ${C}$, and the fixed points of this map are precisely the ${{\bf F}_q}$ points of ${C}$:

$\displaystyle C({\bf F}_q) = \{ z \in C: \hbox{Frob}_q(z) = z \}.$
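This fixed-point description is easy to test in a small example. The sketch below (with hypothetical parameters) realises ${{\bf F}_{p^2}}$ as ${{\bf F}_p[t]/(t^2-r)}$ for a quadratic non-residue ${r}$, and checks the analogous identity for the field itself: the fixed points of the ${p}$-power Frobenius on ${{\bf F}_{p^2}}$ are exactly the ${p}$ elements of the prime subfield.

```python
p, r = 7, 3  # 3 is a quadratic non-residue mod 7, so t^2 - 3 is irreducible

def mul(u, v):
    """Multiply u = (u0, u1), v = (v0, v1), representing u0 + u1*t
    in F_p[t]/(t^2 - r)."""
    return ((u[0] * v[0] + r * u[1] * v[1]) % p,
            (u[0] * v[1] + u[1] * v[0]) % p)

def power(u, e):
    """u^e by repeated squaring."""
    acc = (1, 0)
    while e:
        if e & 1:
            acc = mul(acc, u)
        u = mul(u, u)
        e >>= 1
    return acc

fixed = [(c0, c1) for c0 in range(p) for c1 in range(p)
         if power((c0, c1), p) == (c0, c1)]
assert fixed == [(c0, 0) for c0 in range(p)]  # exactly the prime subfield
```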

Thus one can interpret ${|C({\bf F}_q)|}$ as the number of points of intersection between the diagonal curve

$\displaystyle \{ (z,z): z \in C \}$

and the Frobenius graph

$\displaystyle \{ (z, \hbox{Frob}_q(z)): z \in C \}$

which are copies of ${C}$ inside ${C \times C}$. But we can use the additional hypothesis that ${q}$ is a perfect square to write this more symmetrically, by taking advantage of the fact that the Frobenius map has a square root

$\displaystyle \hbox{Frob}_q = \hbox{Frob}_{\sqrt{q}}^2$

with ${\hbox{Frob}_{\sqrt{q}}}$ also preserving ${C}$. One can then also interpret ${|C({\bf F}_q)|}$ as the number of points of intersection between the curve

$\displaystyle C_1 := \{ (z, \hbox{Frob}_{\sqrt{q}}(z)): z \in C \} \ \ \ \ \ (2)$

and its transpose

$\displaystyle C_2 := \{ (\hbox{Frob}_{\sqrt{q}}(w), w): w \in C \}.$

Let ${k(C \times C)}$ be the field of rational functions on ${C \times C}$ (with coefficients in ${k}$), and define ${k(C_1)}$, ${k(C_2)}$, and ${k(C_1 \cap C_2)}$ analogously (although ${C_1 \cap C_2}$ is likely to be disconnected, so ${k(C_1 \cap C_2)}$ will just be a ring rather than a field). We then (morally) have the commuting square

$\displaystyle \begin{array}{ccccc} && k(C \times C) && \\ & \swarrow & & \searrow & \\ k(C_1) & & & & k(C_2) \\ & \searrow & & \swarrow & \\ && k(C_1 \cap C_2) && \end{array},$

if we ignore the issue that a rational function on, say, ${C \times C}$, might blow up on all of ${C_1}$ and thus not have a well-defined restriction to ${C_1}$. We use ${\pi_1: k(C \times C) \rightarrow k(C_1)}$ and ${\pi_2: k(C \times C) \rightarrow k(C_2)}$ to denote the restriction maps. Furthermore, we have obvious isomorphisms ${\iota_1: k(C_1) \rightarrow k(C)}$, ${\iota_2: k(C_2) \rightarrow k(C)}$ coming from composing with the graphing maps ${z \mapsto (z, \hbox{Frob}_{\sqrt{q}}(z))}$ and ${w \mapsto (\hbox{Frob}_{\sqrt{q}}(w), w)}$.

The idea now is to find a rational function ${f \in k(C \times C)}$ on the surface ${C \times C}$ of controlled degree which vanishes when restricted to ${C_2}$, but is non-vanishing (and not blowing up) when restricted to ${C_1}$. On ${C_1}$, we thus get a non-zero rational function ${f \downharpoonright_{C_1}}$ of controlled degree which vanishes on ${C_1 \cap C_2}$ – which then lets us bound the cardinality of ${C_1 \cap C_2}$ in terms of the degree of ${f \downharpoonright_{C_1}}$. (In Bombieri’s original argument, one required vanishing to high order, but in our presentation, we have factored out a ${\hbox{Frob}_{\sqrt{q}}}$ term which removes this high order vanishing condition.)

To find this ${f}$, we will use linear algebra. Namely, we will locate a finite-dimensional subspace ${V}$ of ${k(C \times C)}$ (consisting of certain “controlled degree” rational functions) which projects injectively to ${k(C_1)}$, but whose projection to ${k(C_2)}$ has strictly smaller dimension than ${V}$ itself. The rank-nullity theorem then forces the existence of a non-zero element ${f}$ of ${V}$ whose projection to ${k(C_2)}$ vanishes, but whose projection to ${k(C_1)}$ is non-zero.

Now we build ${V}$. Pick an ${{\bf F}_q}$ point ${P_\infty}$ of ${C}$, which we will think of as being a point at infinity. (For the purposes of proving Theorem 2, we may clearly assume that ${C({\bf F}_q)}$ is non-empty.) Thus ${P_\infty}$ is fixed by ${\hbox{Frob}_q}$. To simplify the exposition, we will also assume that ${P_\infty}$ is fixed by the square root ${\hbox{Frob}_{\sqrt{q}}}$ of ${\hbox{Frob}_q}$; in the opposite case, when ${\hbox{Frob}_{\sqrt{q}}}$ acts on ${P_\infty}$ with order two, the argument is essentially the same, but all references to ${P_\infty}$ in the second factor of ${C \times C}$ need to be replaced by ${\hbox{Frob}_{\sqrt{q}} P_\infty}$ (we leave the details to the interested reader).

For any natural number ${n}$, define ${R_n}$ to be the set of rational functions ${f \in k(C)}$ which are allowed to have a pole of order up to ${n}$ at ${P_\infty}$, but have no other poles on ${C}$; note that as we are assuming ${C}$ to be smooth, it is unambiguous what a pole is (and what order it will have). (In the fancier language of divisors and Cech cohomology, we have ${R_n = H^0( C, {\mathcal O}_C(-n P_\infty) )}$.) The space ${R_n}$ is clearly a vector space over ${k}$; one can view it intuitively as the space of “polynomials” on ${C}$ of “degree” at most ${n}$. When ${n=0}$, ${R_0}$ consists just of the constant functions. Indeed, if ${f \in R_0}$, then the image ${f(C)}$ of ${f}$ avoids ${\infty}$ and so lies in the affine line ${k = {\mathbf P}^1 \backslash \{\infty\}}$; but as ${C}$ is projective, the image ${f(C)}$ needs to be compact (hence closed) in ${{\mathbf P}^1}$, and must therefore be a point, giving the claim.

For higher ${n \geq 1}$, we have the easy relations

$\displaystyle \hbox{dim}(R_{n-1}) \leq \hbox{dim}(R_n) \leq \hbox{dim}(R_{n-1})+1. \ \ \ \ \ (3)$

The former inequality just comes from the trivial inclusion ${R_{n-1} \subset R_n}$. For the latter, observe that if two functions ${f, g}$ lie in ${R_n}$, so that they each have a pole of order at most ${n}$ at ${P_\infty}$, then some linear combination of these functions must have a pole of order at most ${n-1}$ at ${P_\infty}$; thus ${R_{n-1}}$ has codimension at most one in ${R_n}$, giving the claim.
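In the model case ${C = {\mathbf P}^1}$ with ${P_\infty = \infty}$, the spaces ${R_n}$ are literally the polynomials of degree at most ${n}$, and the codimension-one step in (3) is just cancellation of leading coefficients; a toy sketch (with made-up polynomials, coefficients listed from the constant term upward):

```python
def degree(f):
    """Degree of a polynomial given as a coefficient list [c0, c1, ...];
    the zero polynomial gets degree -1."""
    return max((i for i, c in enumerate(f) if c != 0), default=-1)

# Two elements of R_3 (cubics: pole of order 3 at infinity),
# with leading coefficients 2 and 5 respectively:
f = [1, 0, 4, 2]
g = [7, 3, 0, 5]
# The cross-combination 5*f - 2*g kills the leading term, landing in R_2:
h = [5 * fc - 2 * gc for fc, gc in zip(f, g)]
assert degree(h) <= 2
```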

From (3) and induction we see that each of the ${R_n}$ are finite dimensional, with the trivial upper bound

$\displaystyle \hbox{dim}(R_n) \leq n+1. \ \ \ \ \ (4)$

Riemann’s inequality complements this with the lower bound

$\displaystyle \hbox{dim}(R_n) \geq n+1-g, \ \ \ \ \ (5)$

thus one has ${\hbox{dim}(R_n) = \hbox{dim}(R_{n-1})+1}$ for all but at most ${g}$ exceptions (in fact, exactly ${g}$ exceptions as it turns out). This is a consequence of the Riemann-Roch theorem; it can be proven from abstract nonsense (the snake lemma) if one defines the genus ${g}$ in a non-standard fashion (as the dimension of the first Cech cohomology ${H^1(C)}$ of the structure sheaf ${{\mathcal O}_C}$ of ${C}$), but to obtain this inequality with a standard definition of ${g}$ (e.g. as the dimension of the zeroth Cech cohomology ${H^0(C, \Omega_C^1)}$ of the line bundle of differentials) requires the more non-trivial tool of Serre duality.
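For instance, for an elliptic curve (${g=1}$) in Weierstrass form with ${P_\infty}$ the point at infinity, the dimensions of the ${R_n}$ can be written down explicitly (a standard worked example, recorded here for orientation rather than needed for the argument):

```latex
% On y^2 = x^3 + ax + b, the coordinate x has a double pole and y a
% triple pole at P_infty, giving
R_0 = \langle 1 \rangle,\quad
R_1 = \langle 1 \rangle,\quad
R_2 = \langle 1, x \rangle,\quad
R_3 = \langle 1, x, y \rangle,\quad
R_4 = \langle 1, x, y, x^2 \rangle,\quad \dots
% Thus \dim(R_n) = n for all n \ge 1, and the single index with
% \dim(R_n) = \dim(R_{n-1}) is n = 1, matching the count of g = 1
% exceptions predicted by (5).
```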

At any rate, now that we have these vector spaces ${R_n}$, we will define ${V \subset k(C \times C)}$ to be a tensor product space

$\displaystyle V = R_\ell \otimes R_m$

for some natural numbers ${\ell, m \geq 0}$ which we will optimise later. That is to say, ${V}$ is spanned by functions of the form ${(z,w) \mapsto f(z) g(w)}$ with ${f \in R_\ell}$ and ${g \in R_m}$. This is clearly a linear subspace of ${k(C \times C)}$ of dimension ${\hbox{dim}(R_\ell) \hbox{dim}(R_m)}$, and hence by Riemann’s inequality we have

$\displaystyle \hbox{dim}(V) \geq (\ell+1-g) (m+1-g) \ \ \ \ \ (6)$

if

$\displaystyle \ell,m \geq g-1. \ \ \ \ \ (7)$

Observe that ${\iota_1 \circ \pi_1}$ maps a tensor product ${(z,w) \mapsto f(z) g(w)}$ to a function ${z \mapsto f(z) g(\hbox{Frob}_{\sqrt{q}} z)}$. If ${f \in R_\ell}$ and ${g \in R_m}$, then we see that the function ${z \mapsto f(z) g(\hbox{Frob}_{\sqrt{q}} z)}$ has a pole of order at most ${\ell+m\sqrt{q}}$ at ${P_\infty}$. We conclude that

$\displaystyle \iota_1 \circ \pi_1( V ) \subset R_{\ell + m\sqrt{q}} \ \ \ \ \ (8)$

and in particular by (4)

$\displaystyle \hbox{dim}(\pi_1(V)) \leq \ell + m \sqrt{q} + 1 \ \ \ \ \ (9)$

and similarly

$\displaystyle \hbox{dim}(\pi_2(V)) \leq \ell \sqrt{q} + m + 1. \ \ \ \ \ (10)$

We will choose ${m}$ to be a bit bigger than ${\ell}$, to make the ${\pi_2}$ image of ${V}$ smaller than that of ${\pi_1}$. From (6), (10) we see that if we have the inequality

$\displaystyle (\ell+1-g) (m+1-g) > \ell \sqrt{q}+m + 1 \ \ \ \ \ (11)$

(together with (7)) then ${\pi_2}$ cannot be injective.

On the other hand, we have the following basic fact:

Lemma 3 (Injectivity) If

$\displaystyle \ell < \sqrt{q}, \ \ \ \ \ (12)$

then ${\pi_1: V \rightarrow \pi_1(V)}$ is injective.

Proof: From (3), we can find a linear basis ${f_1,\dots,f_a}$ of ${R_\ell}$ such that each of the ${f_i}$ has a distinct order ${d_i}$ of pole at ${P_\infty}$ (somewhere between ${0}$ and ${\ell}$ inclusive). Similarly, we may find a linear basis ${g_1,\dots,g_b}$ of ${R_m}$ such that each of the ${g_j}$ has a distinct order ${e_j}$ of pole at ${P_\infty}$ (somewhere between ${0}$ and ${m}$ inclusive). The functions ${z \mapsto f_i(z) g_j(\hbox{Frob}_{\sqrt{q}} z)}$ then span ${\iota_1(\pi_1(V))}$, and the order of pole at ${P_\infty}$ is ${d_i + \sqrt{q} e_j}$. But since ${\ell < \sqrt{q}}$, these orders are all distinct, and so these functions must be linearly independent. The claim follows. $\Box$
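The heart of the lemma is that when ${\ell < \sqrt{q}}$, the pole order ${d_i + \sqrt{q} e_j}$ determines the pair ${(d_i, e_j)}$ uniquely, which is just uniqueness of base-${\sqrt{q}}$ digits; a quick mechanical check with some hypothetical values of ${s = \sqrt{q}}$:

```python
# If 0 <= d < s, then d + s*e recovers (d, e) uniquely: d and e are the
# two low base-s digits.  So pole orders d_i + s*e_j are all distinct.
for s in [3, 5, 7, 9]:
    ell, m = s - 1, 2 * s          # any ranges with ell < s will do
    orders = [d + s * e for d in range(ell + 1) for e in range(m + 1)]
    assert len(orders) == len(set(orders))  # all pole orders distinct
```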

This gives us the following bound:

Proposition 4 Let ${\ell,m}$ be natural numbers such that (7), (11), (12) hold. Then ${|C({\bf F}_q)| \leq \ell + m \sqrt{q}}$.

Proof: As ${\pi_2}$ is not injective, we can find a non-zero ${f \in V}$ with ${\pi_2(f)}$ vanishing. By the above lemma, the function ${\iota_1(\pi_1(f))}$ is then non-zero, but it must also vanish on ${\iota_1(C_1 \cap C_2)}$, which has cardinality ${|C({\bf F}_q)|}$. On the other hand, by (8), ${\iota_1(\pi_1(f))}$ has a pole of order at most ${\ell+m\sqrt{q}}$ at ${P_\infty}$ and no other poles. Since a non-zero rational function on a projective curve has equally many zeroes and poles (counted with multiplicity), the claim follows. $\Box$

If ${q \geq (g+1)^4}$, we may make the explicit choice

$\displaystyle m := \sqrt{q}+2g; \quad \ell := \lfloor \frac{g}{g+1} \sqrt{q} \rfloor + g + 1$

and a brief calculation then gives Theorem 2. In some cases one can optimise things a bit further. For instance, in the genus zero case ${g=0}$ (e.g. if ${C}$ is just the projective line ${{\mathbf P}^1}$) one may take ${\ell=1, m = \sqrt{q}}$ and conclude the absolutely sharp bound ${|C({\bf F}_q)| \leq q+1}$ in this case; in the case of the projective line ${{\mathbf P}^1}$, the function ${f}$ is in fact the very concrete function ${f(z,w) := z - w^{\sqrt{q}}}$.
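In the genus zero case, the hypotheses of Proposition 4 for the choice ${\ell=1}$, ${m=\sqrt{q}}$ can be verified mechanically; the sketch below (treating ${q}$ simply as a square integer, not necessarily a prime power) checks (7), (11), (12) and the resulting bound ${\ell + m\sqrt{q} = q+1}$:

```python
g = 0
for s in range(2, 50):            # s plays the role of sqrt(q)
    q = s * s
    ell, m = 1, s
    assert ell >= g - 1 and m >= g - 1                    # condition (7)
    assert (ell + 1 - g) * (m + 1 - g) > ell * s + m + 1  # (11): 2(s+1) > 2s+1
    assert ell < s                                        # condition (12)
    assert ell + m * s == q + 1  # Proposition 4 bound: |C(F_q)| <= q + 1
```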

Remark 1 When ${q = p^{2n+1}}$ is not a perfect square, one can try to run the above argument using the factorisation ${\hbox{Frob}_q = \hbox{Frob}_{p^n} \hbox{Frob}_{p^{n+1}}}$ instead of ${\hbox{Frob}_q = \hbox{Frob}_{\sqrt{q}} \hbox{Frob}_{\sqrt{q}}}$. This gives a weaker version of the above bound, of the shape ${|C({\bf F}_q)| \leq q + O( \sqrt{p} \sqrt{q} )}$. In the hyperelliptic case at least, one can erase this loss by working with a variant of the argument in which one requires ${f}$ to vanish to high order at ${C_1}$, rather than just to first order; see this survey article of mine for details.

This is the eighth thread for the Polymath8b project to obtain new bounds for the quantity

$\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),$

either for small values of ${m}$ (in particular ${m=1,2}$) or asymptotically as ${m \rightarrow \infty}$. The previous thread may be found here. The currently best known bounds on ${H_m}$ can be found at the wiki page.

The big news since the last thread is that we have managed to obtain the (sieve-theoretically) optimal bound of ${H_1 \leq 6}$ assuming the generalised Elliott-Halberstam conjecture (GEH), which pretty much closes off that part of the story. Unconditionally, our bound on ${H_1}$ is still ${H_1 \leq 270}$. This bound was obtained using the “vanilla” Maynard sieve, in which the cutoff ${F}$ was supported in the original simplex ${\{ t_1+\dots+t_k \leq 1\}}$, and only Bombieri-Vinogradov was used. In principle, we can enlarge the sieve support a little bit further now; for instance, we can enlarge to ${\{ t_1+\dots+t_k \leq \frac{k}{k-1} \}}$, but then have to shrink the J integrals to ${\{t_1+\dots+t_{k-1} \leq 1-\epsilon\}}$, provided that the marginals vanish for ${\{ t_1+\dots+t_{k-1} \geq 1+\epsilon \}}$. However, we do not yet know how to numerically work with these expanded problems.

Given the substantial progress made so far, it looks like we are close to the point where we should declare victory and write up the results (though we should take one last look to see if there is any room to improve the ${H_1 \leq 270}$ bound). There is actually a fair bit to write up:

• Improvements to the Maynard sieve (pushing beyond the simplex, the epsilon trick, and pushing beyond the cube);
• Asymptotic bounds for ${M_k}$ and hence ${H_m}$;
• Explicit bounds for ${H_m, m \geq 2}$ (using the Polymath8a results);
• ${H_1 \leq 270}$;
• ${H_1 \leq 6}$ on GEH (and parity obstructions to any further improvement).

I will try to create a skeleton outline of such a paper in the Polymath8 Dropbox folder soon. It shouldn’t be nearly as big as the Polymath8a paper, but it will still be quite sizeable.

There are multiple purposes to this blog post.

The first purpose is to announce the uploading of the paper “New equidistribution estimates of Zhang type, and bounded gaps between primes” by D.H.J. Polymath, which is the main output of the Polymath8a project on bounded gaps between primes, to the arXiv, and to describe the main results of this paper below the fold.

The second purpose is to roll over the previous thread on all remaining Polymath8a-related matters (e.g. updates on the submission status of the paper) to a fresh thread. (Discussion of the ongoing Polymath8b project is however being kept on a separate thread, to try to reduce confusion.)

The final purpose of this post is to coordinate the writing of a retrospective article on the Polymath8 experience, which has been solicited for the Newsletter of the European Mathematical Society. I suppose that this could encompass both the Polymath8a and Polymath8b projects, even though the second one is still ongoing (but I think we will soon be entering the endgame there). I think there would be two main purposes of such a retrospective article. The first one would be to tell a story about the process of conducting mathematical research, rather than just describe the outcome of such research; this is an important aspect of the subject which is given almost no attention in most mathematical writing, and it would be good to be able to capture some sense of this process while memories are still relatively fresh. The other would be to draw some tentative conclusions with regards to what the strengths and weaknesses of a Polymath project are, and how appropriate such a format would be for other mathematical problems than bounded gaps between primes. In my opinion, the bounded gaps problem had some fairly unique features that made it particularly amenable to a Polymath project, such as (a) a high level of interest amongst the mathematical community in the problem; (b) a very focused objective (“improve ${H}$!”), which naturally provided an obvious metric to measure progress; (c) the modular nature of the project, which allowed for people to focus on one aspect of the problem only, and still make contributions to the final goal; and (d) a very reasonable level of ambition (for instance, we did not attempt to prove the twin prime conjecture, which in my opinion would make a terrible Polymath project at our current level of mathematical technology). This is not an exhaustive list of helpful features of the problem; I would welcome other diagnoses of the project by other participants.

With these two objectives in mind, I propose a format for the retrospective article consisting of a brief introduction to the polymath concept in general and the polymath8 project in particular, followed by a collection of essentially independent contributions by different participants on their own experiences and thoughts. Finally we could have a conclusion section in which we make some general remarks on the polymath project (such as the remarks above). I’ve started a dropbox subfolder for this article (currently in a very skeletal outline form only), and will begin writing a section on my own experiences; other participants are of course encouraged to add their own sections (it is probably best to create separate files for these, and then input them into the main file retrospective.tex, to reduce edit conflicts). If there are participants who wish to contribute but do not currently have access to the Dropbox folder, please email me and I will try to have you added (or else you can supply your thoughts by email, or in the comments to this post; we may have a section for shorter miscellaneous comments from more casual participants, for people who don’t wish to write a lengthy essay on the subject).

As for deadlines, the EMS Newsletter would like a submitted article by mid-April in order to make the June issue, but in the worst case, it will just be held over until the issue after that.