Kevin Ford, Ben Green, Sergei Konyagin, and myself have just posted to the arXiv our preprint “Large gaps between consecutive prime numbers“. This paper concerns the “opposite” problem to that considered by the recently concluded Polymath8 project, which was concerned with very small values of the prime gap $p_{n+1} - p_n$. Here, we wish to consider the largest prime gap $G(X) = \max_{p_{n+1} \leq X} (p_{n+1} - p_n)$ that one can find in the interval $[1, X]$ as $X$ goes to infinity.
Finding lower bounds on $G(X)$ is more or less equivalent to locating long strings of consecutive composite numbers that are not too large compared to the length of the string. A classic (and quite well known) construction here starts with the observation that for any natural number $n$, the consecutive numbers $n!+2, n!+3, \dots, n!+n$ are all composite, because each $n!+i$, $2 \leq i \leq n$, is divisible by some prime $p \leq n$, while being strictly larger than that prime $p$. From this and Stirling’s formula, it is not difficult to obtain the bound

$\displaystyle G(X) \gg \frac{\log X}{\log\log X}. \ \ \ \ \ (1)$
A more efficient bound comes from the prime number theorem: there are only $(1+o(1)) \frac{X}{\log X}$ primes up to $X$, so just from the pigeonhole principle one can locate a string of consecutive composite numbers up to $X$ of length at least $(1-o(1)) \log X$, thus

$\displaystyle G(X) \geq (1-o(1)) \log X \ \ \ \ \ (2)$
where we use $o(1)$ or $O(1)$ as shorthand for a quantity tending to zero as $X \rightarrow \infty$, or a quantity bounded in magnitude, respectively.
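Both of these lower-bound constructions are easy to check numerically. A minimal sketch (the helper names `primes_up_to` and `max_gap` are introduced here purely for illustration):

```python
from math import log, factorial

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i in range(n + 1) if sieve[i]]

def max_gap(x):
    # G(x): the largest gap between consecutive primes up to x
    ps = primes_up_to(x)
    return max(q - p for p, q in zip(ps, ps[1:]))

# the factorial construction: each of n!+2, ..., n!+n is divisible by some i <= n
n = 10
assert all((factorial(n) + i) % i == 0 for i in range(2, n + 1))

# the pigeonhole bound: the observed maximal gap already exceeds log x
x = 10 ** 6
print(max_gap(x), log(x))
```

At $x = 10^6$ the observed maximal gap is already several times larger than $\log x$, consistent with the conjectural picture below.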
What about upper bounds? The Cramér random model predicts that the primes up to $X$ are distributed like a random subset of $\{1,\dots,X\}$ of density $\frac{1}{\log X}$. Using this model, Cramér arrived at the conjecture

$\displaystyle G(X) = O(\log^2 X).$
In fact, if one makes the extremely optimistic assumption that the random model perfectly describes the behaviour of the primes, one would arrive at the even more precise prediction

$\displaystyle \frac{G(X)}{\log^2 X} \rightarrow 1 \hbox{ as } X \rightarrow \infty.$
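One can play with the random model directly: drawing a few samples of “Cramér primes” and comparing the largest gap to $\log^2 X$ (a quick Monte Carlo illustration only; `cramer_max_gap` is a name invented for this snippet):

```python
import random
from math import log

def cramer_max_gap(x, seed=0):
    # Cramér model: each integer n in [3, x] is "prime" independently
    # with probability 1/log n; return the largest gap in the sample
    rng = random.Random(seed)
    last, biggest = 2, 0
    for n in range(3, x):
        if rng.random() < 1.0 / log(n):
            biggest = max(biggest, n - last)
            last = n
    return biggest

x = 10 ** 6
ratios = [cramer_max_gap(x, seed) / log(x) ** 2 for seed in range(5)]
print(ratios)  # ratios of order 1, in line with the log^2 X prediction
```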
However, it is no longer widely believed that this optimistic version of the conjecture is true, due to some additional irregularities in the primes coming from the basic fact that large primes cannot be divisible by very small primes. Using the Maier matrix method to capture some of this irregularity, Granville was led to the conjecture that

$\displaystyle \limsup_{X \rightarrow \infty} \frac{G(X)}{\log^2 X} \geq 2e^{-\gamma}$
(note that $2e^{-\gamma} = 1.1229\dots$ is slightly larger than $1$). For comparison, the known upper bounds on $G(X)$ are quite weak; unconditionally one has $G(X) \ll X^{0.525}$ by the work of Baker, Harman, and Pintz, and even on the Riemann hypothesis one only gets down to $G(X) \ll X^{1/2} \log X$, as shown by Cramér (a slight improvement is also possible if one additionally assumes the pair correlation conjecture; see this article of Heath-Brown and the references therein).
This conjecture remains out of reach of current methods. In 1931, Westzynthius managed to improve the bound (2) slightly to

$\displaystyle G(X) \gg \frac{\log\log\log X}{\log\log\log\log X} \log X,$
which Erdős in 1935 improved to

$\displaystyle G(X) \gg \frac{\log\log X}{(\log\log\log X)^2} \log X,$
and Rankin in 1938 improved slightly further to

$\displaystyle G(X) \geq (c + o(1)) \frac{\log\log X \log\log\log\log X}{(\log\log\log X)^2} \log X \ \ \ \ \ (3)$

with $c = \frac{1}{3}$. Remarkably, this rather strange bound then proved extremely difficult to advance further on; until recently, the only improvements were to the constant $c$, which was raised to $\frac{1}{2} e^{\gamma}$ in 1963 by Schönhage, to $e^{\gamma}$ in 1963 by Rankin, to $1.31256 e^{\gamma}$ by Maier and Pomerance, and finally to $2 e^{\gamma}$ in 1997 by Pintz.
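To get a feel for how glacial the gain of the Rankin-type bound over $\log X$ is, one can evaluate the ratio of the two at a few astronomically large values $X = 10^k$ (ignoring the constant $c$; `rankin_over_log` is a throwaway name):

```python
from math import log

def rankin_over_log(k):
    # ratio of (log x * loglog x * logloglogloq x / (logloglog x)^2) to log x,
    # evaluated at x = 10^k without ever forming the huge number 10**k
    l1 = k * log(10)        # log x
    l2 = log(l1)            # loglog x
    l3 = log(l2)            # logloglog x
    l4 = log(l3)            # loglogloglog x
    return l2 * l4 / l3 ** 2

for k in (10, 100, 1000, 10 ** 6):
    print(k, rankin_over_log(k))
```

The ratio only crosses $1$ around $X \approx 10^{100}$, and even at $X = 10^{1000000}$ the gain is roughly a factor of two.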
Erdős listed the problem of making $c$ arbitrarily large one of his favourite open problems, even offering (“somewhat rashly”, in his words) a $10,000 USD prize for the solution. Our main result answers this question in the affirmative:
Theorem 1 The bound (3) holds for arbitrarily large $c > 0$.
In principle, we thus have a bound of the form

$\displaystyle G(X) \geq f(X) \frac{\log\log X \log\log\log\log X}{(\log\log\log X)^2} \log X$

for some $f(X)$ that grows to infinity. Unfortunately, due to various sources of ineffectivity in our methods, we cannot provide any explicit rate of growth on $f(X)$ at all.
We decided to announce this result the old-fashioned way, as part of a research lecture; more precisely, Ben Green announced the result in his ICM lecture this Tuesday. (The ICM staff have very efficiently put up video of his talks (and most of the other plenary and prize talks) online; Ben’s talk is here, with the announcement beginning at about 0:48. Note a slight typo in his slides, in that the exponent of $\log\log\log X$ in the denominator is $3$ instead of $2$.) Ben’s lecture slides may be found here.
By coincidence, an independent proof of this theorem has also been obtained very recently by James Maynard, whose preprint should appear on the arXiv shortly.
I discuss our proof method below the fold.
The 2014 Fields medallists have just been announced as (in alphabetical order of surname) Artur Avila, Manjul Bhargava, Martin Hairer, and Maryam Mirzakhani (see also these nice video profiles for the winners, which is a new initiative of the IMU and the Simons foundation). This time four years ago, I wrote a blog post discussing one result from each of the 2010 medallists; I thought I would try to repeat the exercise here, although the work of the medallists this time around is a little bit further away from my own direct area of expertise than last time, and so my discussion will unfortunately be a bit superficial (and possibly not completely accurate) in places. As before, I am picking these results based on my own idiosyncratic tastes, and they should not be viewed as necessarily being the “best” work of these medallists. (See also the press releases for Avila, Bhargava, Hairer, and Mirzakhani.)
Artur Avila works in dynamical systems and in the study of Schrödinger operators. The work of Avila that I am most familiar with is his solution with Svetlana Jitomirskaya of the ten martini problem of Kac, the solution to which (according to Barry Simon) he offered ten martinis for, hence the name. The problem involves perhaps the simplest example of a Schrödinger operator with non-trivial spectral properties, namely the almost Mathieu operator $H = H^{\lambda,\alpha,\theta}$ defined for parameters $\alpha, \theta \in {\bf R}/{\bf Z}$ and $\lambda > 0$ by a discrete one-dimensional Schrödinger operator with cosine potential:

$\displaystyle (Hu)(n) := u(n+1) + u(n-1) + 2\lambda \cos(2\pi(\theta + n\alpha)) u(n).$
This is a bounded self-adjoint operator and thus has a spectrum that is a compact subset of the real line; it arises in a number of physical contexts, most notably in the theory of the integer quantum Hall effect, though I will not discuss these applications here. Remarkably, the structure of this spectrum depends crucially on the Diophantine properties of the frequency $\alpha$. For instance, if $\alpha = p/q$ is a rational number, then the operator is periodic with period $q$, and then basic (discrete) Floquet theory tells us that the spectrum is simply the union of $q$ (possibly touching) intervals. But for irrational $\alpha$ (in which case the spectrum is independent of the phase $\theta$), the situation is much more fractal in nature, for instance in the critical case $\lambda = 1$ the spectrum (as a function of $\alpha$) gives rise to the Hofstadter butterfly. The “ten martini problem” asserts that for every irrational $\alpha$ and every choice of coupling constant $\lambda > 0$, the spectrum is homeomorphic to a Cantor set. Prior to the work of Avila and Jitomirskaya, there were a number of partial results on this problem, notably the result of Puig establishing Cantor spectrum for a full measure set of parameters, as well as results requiring a perturbative hypothesis, such as $\lambda$ being very small or very large. The result was also already known for $\alpha$ being either very close to rational (i.e. a Liouville number) or very far from rational (a Diophantine number), although the analyses for these two cases failed to meet in the middle, leaving some cases untreated. The argument uses a wide variety of existing techniques, both perturbative and non-perturbative, to attack this problem, as well as an amusing argument by contradiction: they assume (in certain regimes) that the spectrum fails to be a Cantor set, and use this hypothesis to obtain additional Lipschitz control on the spectrum (as a function of the frequency $\alpha$), which they can then use (after much effort) to improve existing arguments and conclude that the spectrum was in fact Cantor after all!
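One can get a numerical glimpse of this Cantor structure. The sketch below truncates the operator to a large finite tridiagonal matrix and uses a standard Sturm-sequence (LDLᵀ inertia) count to locate intervals free of eigenvalues; the truncation size and grid resolution are of course extremely crude compared to what the actual theorem asserts:

```python
from math import cos, pi, sqrt

def sturm_count(d, e, x):
    # number of eigenvalues below x of the symmetric tridiagonal matrix with
    # diagonal d and off-diagonal e, via the standard LDL^T inertia recurrence
    count, q = 0, 1.0
    for i in range(len(d)):
        q = (d[i] - x) - ((e[i - 1] ** 2) / q if i > 0 else 0.0)
        if q == 0.0:
            q = -1e-300
        if q < 0.0:
            count += 1
    return count

# finite truncation of the almost Mathieu operator at critical coupling lam = 1,
# with golden-ratio frequency alpha
N, lam, alpha, theta = 2000, 1.0, (sqrt(5) - 1) / 2, 0.0
d = [2.0 * lam * cos(2.0 * pi * (theta + n * alpha)) for n in range(N)]
e = [1.0] * (N - 1)

# sample the eigenvalue counting function on a grid over [-4, 4]; grid intervals
# on which the count stays constant contain no eigenvalues, i.e. approximate
# spectral gaps, and a large proportion of intervals turn out to be gaps
counts = [sturm_count(d, e, -4.0 + 8.0 * k / 400) for k in range(401)]
gap_bins = sum(1 for a, b in zip(counts, counts[1:]) if a == b)
print(gap_bins)
```

The large number of empty bins reflects the fact that at $\lambda = 1$ the spectrum has Lebesgue measure zero; this is of course only suggestive of, not a substitute for, the Cantor set statement.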
Manjul Bhargava produces amazingly beautiful mathematics, though most of it is outside of my own area of expertise. One part of his work that touches on an area of my own interest (namely, random matrix theory) is his ongoing work with many co-authors on modeling (both conjecturally and rigorously) the statistics of various key number-theoretic features of elliptic curves (such as their rank, their Selmer group, or their Tate-Shafarevich groups). For instance, with Kane, Lenstra, Poonen, and Rains, Manjul has proposed a very general random matrix model that predicts all of these statistics (for instance, predicting that the $p$-component of the Tate-Shafarevich group is distributed like the cokernel of a certain random $p$-adic matrix, very much in the spirit of the Cohen-Lenstra heuristics discussed in this previous post). But what is even more impressive is that Manjul and his coauthors have been able to verify several non-trivial fragments of this model (e.g. showing that certain moments have the predicted asymptotics), giving for the first time non-trivial upper and lower bounds for various statistics, for instance obtaining lower bounds on how often an elliptic curve has rank $0$ or rank $1$, leading most recently (in combination with existing work of Gross-Zagier and of Kolyvagin, among others) to his amazing result with Skinner and Zhang that at least $66\%$ of all elliptic curves over ${\bf Q}$ (ordered by height) obey the Birch and Swinnerton-Dyer conjecture. Previously it was not even known that a positive proportion of curves obeyed the conjecture. This is still a fair ways from resolving the conjecture fully (in particular, the situation with the presumably small number of curves of rank $2$ and higher is still very poorly understood, and the theory of Gross-Zagier and Kolyvagin that this work relies on is not available for other number fields than ${\bf Q}$), but it certainly does provide hope that the conjecture could be within reach in a statistical sense at least.
Martin Hairer works at the interface between probability and partial differential equations, and in particular in the theory of stochastic differential equations (SDEs). The result of his that is closest to my own interests is his remarkable demonstration with Jonathan Mattingly of unique invariant measure for the two-dimensional stochastically forced Navier-Stokes equation

$\displaystyle \partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p + \xi, \qquad \nabla \cdot u = 0$

on the two-torus $({\bf R}/{\bf Z})^2$, where $\xi$ is a Gaussian field that forces a fixed set of frequencies. It is expected that for any reasonable choice of initial data, the solution to this equation should asymptotically be distributed according to Kolmogorov’s power law, as discussed in this previous post. This is still far from established rigorously (although there are some results in this direction for dyadic models, see e.g. this paper of Cheskidov, Shvydkoy, and Friedlander). However, Hairer and Mattingly were able to show that there was a unique probability distribution to which almost every initial data would converge asymptotically; by the ergodic theorem, this is equivalent to demonstrating the existence and uniqueness of an invariant measure for the flow. Existence can be established using standard methods, but uniqueness is much more difficult. One of the standard routes to uniqueness is to establish a “strong Feller property” that enforces some continuity on the transition operators; among other things, this would mean that two ergodic probability measures with intersecting supports would in fact have a non-trivial common component, contradicting the ergodic theorem (which forces different ergodic measures to be mutually singular). Since all ergodic measures for Navier-Stokes can be seen to contain the origin in their support, this would give uniqueness. Unfortunately, the strong Feller property is unlikely to hold in the infinite-dimensional phase space for Navier-Stokes; but Hairer and Mattingly develop a clean abstract substitute for this property, which they call the asymptotic strong Feller property, which is again a regularity property on the transition operator; this in turn is then demonstrated by a careful application of Malliavin calculus.
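The stochastic Navier-Stokes dynamics are far beyond a blog snippet, but the phenomenon being established, namely every reasonable initial condition forgetting its past and settling into the same invariant measure, can be seen in miniature for a much simpler SDE, the Ornstein-Uhlenbeck process $dX = -X\,dt + \sqrt{2}\,dW$ (a toy illustration only; nothing here resembles the actual Hairer-Mattingly argument):

```python
import random
from math import sqrt

def ou_sample(x0, t, dt=0.01, rng=None):
    # Euler-Maruyama discretisation of dX = -X dt + sqrt(2) dW; the unique
    # invariant measure of this toy SDE is the standard Gaussian
    rng = rng or random.Random(0)
    x = x0
    for _ in range(int(t / dt)):
        x += -x * dt + sqrt(2 * dt) * rng.gauss(0.0, 1.0)
    return x

rng = random.Random(1)
samples_a = [ou_sample(+10.0, 10.0, rng=rng) for _ in range(500)]
samples_b = [ou_sample(-10.0, 10.0, rng=rng) for _ in range(500)]
mean_a = sum(samples_a) / len(samples_a)
mean_b = sum(samples_b) / len(samples_b)
print(mean_a, mean_b)  # both near 0: very different starts converge to the same law
```

The hard part of the Navier-Stokes result is precisely that no such explicit contraction to equilibrium is available there, which is what the asymptotic strong Feller property is designed to replace.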
Maryam Mirzakhani has mostly focused on the geometry and dynamics of Teichmüller-type moduli spaces, such as the moduli space of Riemann surfaces with a fixed genus and a fixed number of cusps (or with a fixed number of boundaries that are geodesics of a prescribed length). These spaces have an incredibly rich structure, ranging from geometric structure (such as the Kähler geometry given by the Weil-Petersson metric), to dynamical structure (through the action of the mapping class group on this and related spaces), to algebraic structure (viewing these spaces as algebraic varieties), and are thus connected to many other objects of interest in geometry and dynamics. For instance, by developing a new recursive formula for the Weil-Petersson volume of this space, Mirzakhani was able to asymptotically count the number of simple prime geodesics of length up to some threshold $L$ in a hyperbolic surface (or more precisely, she obtained asymptotics for the number of such geodesics in a given orbit of the mapping class group); the answer turns out to be polynomial in $L$, in contrast to the much larger class of non-simple prime geodesics, whose asymptotics are exponential in $L$ (the “prime number theorem for geodesics”, developed in a classic series of works by Delsarte, Huber, Selberg, and Margulis); she also used this formula to establish a new proof of a conjecture of Witten on intersection numbers that was first proven by Kontsevich. More recently, in two lengthy papers with Eskin and with Eskin-Mohammadi, Mirzakhani established rigidity theorems for the action of $SL_2({\bf R})$ on such moduli spaces that are close analogues of Ratner’s celebrated rigidity theorems for unipotently generated groups (discussed in this previous blog post).
Ratner’s theorems are already notoriously difficult to prove, and rely very much on the polynomial stability properties of unipotent flows; in this even more complicated setting, the unipotent flows are no longer tractable, and Mirzakhani instead uses a recent “exponential drift” method of Benoist and Quint as a substitute. Ratner’s theorems are incredibly useful for all sorts of problems connected to homogeneous dynamics, and the analogous theorems established by Mirzakhani, Eskin, and Mohammadi have a similarly broad range of applications, for instance in counting periodic billiard trajectories in rational polygons.
I’ve just uploaded to the arXiv the D.H.J. Polymath paper “Variants of the Selberg sieve, and bounded intervals containing many primes“, which is the second paper to be produced from the Polymath8 project (the first one being discussed here). We’ll refer to this latter paper here as the Polymath8b paper, and the former as the Polymath8a paper. As with Polymath8a, the Polymath8b paper is concerned with the smallest asymptotic prime gap

$\displaystyle H_1 := \liminf_{n \rightarrow \infty} (p_{n+1} - p_n),$

where $p_n$ denotes the $n^{th}$ prime, as well as the more general quantities

$\displaystyle H_m := \liminf_{n \rightarrow \infty} (p_{n+m} - p_n).$
In the breakthrough paper of Goldston, Pintz, and Yildirim, the bound $H_1 \leq 16$ was obtained under the strong hypothesis of the Elliott-Halberstam conjecture. An unconditional bound on $H_1$, however, remained elusive until the celebrated work of Zhang last year, who showed that

$\displaystyle H_1 \leq 70{,}000{,}000.$
The Polymath8a paper then improved this to $H_1 \leq 4680$. After that, Maynard introduced a new multidimensional Selberg sieve argument that gave the substantial improvement

$\displaystyle H_1 \leq 600$

unconditionally, and $H_1 \leq 12$ on the Elliott-Halberstam conjecture; furthermore, bounds on $H_m$ for higher $m$ were obtained for the first time, and specifically that $H_m \ll m^3 e^{4m}$ for all $m$, with the improvements $H_2 \leq 600$ and $H_m \ll m^3 e^{2m}$ on the Elliott-Halberstam conjecture. (I had independently discovered the multidimensional sieve idea, although I did not obtain Maynard’s specific numerical results, and my asymptotic bounds were a bit weaker.)
In Polymath8b, we obtain some further improvements. Unconditionally, we have $H_1 \leq 246$ and $H_m \ll m e^{(4 - \frac{28}{157}) m}$, together with some explicit bounds on $H_2, H_3, H_4, H_5$; on the Elliott-Halberstam conjecture we have $H_1 \leq 12$ and some numerical improvements to the $H_m$ bounds; and assuming the generalised Elliott-Halberstam conjecture we have the bound $H_1 \leq 6$, which is best possible from sieve-theoretic methods thanks to the parity problem obstruction.
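None of these liminf bounds can be verified by finite computation, of course, but the phenomenon they quantify, namely gaps far below the average persisting among ever-sparser primes, is easy to observe (the `primes_up_to` helper below is ad hoc to this snippet):

```python
def primes_up_to(n):
    # sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i in range(n + 1) if sieve[i]]

ps = primes_up_to(10 ** 6)
gaps = [q - p for p, q in zip(ps, ps[1:])]
tail = gaps[-10000:]  # gaps between the last 10000 primes below 10^6
avg = sum(tail) / len(tail)
twins = sum(g == 2 for g in tail)
print(avg, twins)  # the average gap is near log(10^6) ~ 13.8, yet gaps of 2 keep occurring
```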
There were a variety of methods used to establish these results. Maynard’s paper obtained a criterion for bounding $H_m$ which reduced to finding a good solution to a certain multidimensional variational problem. When the dimension parameter $k$ was relatively small (e.g. $k \leq 50$), we were able to obtain good numerical solutions both by continuing the method of Maynard (using a basis of symmetric polynomials), or by using a Krylov iteration scheme. For large $k$, we refined the asymptotics and obtained near-optimal solutions of the variational problem. For the $H_1$ bounds, we extended the reach of the multidimensional Selberg sieve (particularly under the assumption of the generalised Elliott-Halberstam conjecture) by allowing the function $F$ in the multidimensional variational problem to extend to a larger region of space than was previously admissible, albeit with some tricky new constraints on $F$ (and penalties in the variational problem). This required some unusual sieve-theoretic manipulations, notably an “epsilon trick”, ultimately relying on the elementary inequality $(a+b)^2 \geq a^2 + 2ab$, that allowed one to get non-trivial lower bounds for sums such as $\sum_n (a(n)+b(n))^2$ even if the sum $\sum_n b(n)^2$ had no non-trivial estimates available; and a way to estimate divisor sums such as $\sum_{n \leq x} \sum_{d|n} \lambda_d$ even if $d$ was permitted to be comparable to or even exceed $\sqrt{x}$, by using the fundamental theorem of arithmetic to factorise $n$ (after restricting to the case when $n$ is almost prime). I hope that these sieve-theoretic tricks will be useful in future work in the subject.
With this paper, the Polymath8 project is almost complete; there is still a little bit of scope to push our methods further and get some modest improvement for instance to the $H_1 \leq 246$ bound, but this would require a substantial amount of effort, and it is probably best to instead wait for some new breakthrough in the subject to come along. One final task we are performing is to write up a retrospective article on both the 8a and 8b experiences, an incomplete writeup of which can be found here. If anyone wishes to contribute some commentary on these projects (whether you were an active contributor, an occasional contributor, or a silent “lurker” in the online discussion), please feel free to do so in the comments to this post.
Two of the most famous open problems in additive prime number theory are the twin prime conjecture and the binary Goldbach conjecture. They have quite similar forms:
- Twin prime conjecture The equation $p - q = 2$ has infinitely many solutions with $p, q$ prime.
- Binary Goldbach conjecture The equation $p + q = N$ has at least one solution with $p, q$ prime for any given even $N \geq 4$.
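Both conjectures are trivially testable far beyond any range where one expects trouble; a brute-force check of the binary Goldbach conjecture in a modest range (a quick sketch, with `goldbach_check` an invented helper name):

```python
def goldbach_check(limit):
    # verify that every even N in [4, limit] is a sum of two primes;
    # returns the first counterexample found, or None if there is none
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    for N in range(4, limit + 1, 2):
        if not any(sieve[p] and sieve[N - p] for p in range(2, N // 2 + 1)):
            return N
    return None

print(goldbach_check(10000))  # prints None: no counterexample up to 10000
```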
In view of this similarity, it is not surprising that partial progress on these two conjectures has tracked fairly closely in parallel; the twin prime conjecture is generally considered slightly easier than the binary Goldbach conjecture, but broadly speaking any progress made on one of the conjectures has also led to a comparable amount of progress on the other. (For instance, Chen’s theorem has a version for the twin prime conjecture, and a version for the binary Goldbach conjecture.) Also, the notorious parity obstruction is present in both problems, preventing a solution to either conjecture by almost all known methods (see this previous blog post for more discussion).
In this post, I would like to note a divergence from this general principle, with regards to bounded error versions of these two conjectures:
- Twin prime with bounded error The inequalities $0 < p - q \leq H$ have infinitely many solutions with $p, q$ prime for some absolute constant $H$.
- Binary Goldbach with bounded error The inequalities $N \leq p + q \leq N + H$ have at least one solution with $p, q$ prime for any sufficiently large $N$ and some absolute constant $H$.
The first of these statements is now a well-known theorem of Zhang, and the Polymath8b project hosted on this blog has managed to lower $H$ to $246$ unconditionally, and to $6$ assuming the generalised Elliott-Halberstam conjecture. However, the second statement remains open; the best result that the Polymath8b project could manage in this direction is that (assuming GEH) at least one of the binary Goldbach conjecture with bounded error, or the twin prime conjecture with no error, had to be true.
All the known proofs of Zhang’s theorem proceed through sieve-theoretic means. Basically, they take as input equidistribution results that control the size of discrepancies such as

$\displaystyle \sum_{n \leq x: n = a\ (q)} f(n) - \frac{1}{\phi(q)} \sum_{n \leq x: (n,q)=1} f(n) \ \ \ \ \ (1)$

for various congruence classes $a\ (q)$ and various arithmetic functions $f$, e.g. $f(n) = \Lambda(n)$ (or more generally $f(n) = \Lambda(n+h)$ for various bounded $h$). After taking some carefully chosen linear combinations of these discrepancies, and using the trivial positivity lower bound

$\displaystyle f(n) \geq 0 \ \ \ \ \ (2)$
one eventually obtains (for suitable $H$) a non-trivial lower bound of the form

$\displaystyle \sum_{n \leq x} \nu(n) 1_A(n) > 0$

where $\nu$ is some weight function, and $A$ is the set of $n$ such that there are at least two primes in the interval $[n, n+H]$. This implies at least one solution to the inequalities $0 < p - q \leq H$ with $p, q \leq x + H$, and Zhang’s theorem follows.
In a similar vein, one could hope to use bounds on discrepancies such as (1) (for $f(n) = \Lambda(N - n)$ with $N$ comparable to $x$), together with the trivial lower bound (2), to obtain (for sufficiently large $N$, and suitable $H$) a non-trivial lower bound of the form

$\displaystyle \sum_{n \leq N} \nu(n) 1_B(n) > 0 \ \ \ \ \ (3)$

for some weight function $\nu$, where $B$ is the set of $n$ such that there is at least one prime in each of the intervals $[n, n+H]$ and $[N-n-H, N-n]$. This would imply the binary Goldbach conjecture with bounded error.
However, the parity obstruction blocks such a strategy from working (for much the same reason that it blocks any bound of the form $H_1 \leq 4$ in Zhang’s theorem, as discussed in the Polymath8b paper.) The reason is as follows. The sieve-theoretic arguments are linear with respect to the $n$ summation, and as such, any such sieve-theoretic argument would automatically also work in a weighted setting in which the $n$ summation is weighted by some non-negative weight $\omega(n) \geq 0$. More precisely, if one could control the weighted discrepancies
$\displaystyle \sum_{n \leq x: n = a\ (q)} f(n) \omega(n) - \frac{1}{\phi(q)} \sum_{n \leq x: (n,q)=1} f(n) \omega(n)$

to essentially the same accuracy as the unweighted discrepancies (1), then thanks to the trivial weighted version

$\displaystyle f(n) \omega(n) \geq 0$

of (2), any sieve-theoretic argument that was capable of proving (3) would also be capable of proving the weighted estimate

$\displaystyle \sum_{n \leq N} \nu(n) \omega(n) 1_B(n) > 0. \ \ \ \ \ (4)$
However, (4) may be defeated by a suitable choice of weight $\omega$, namely

$\displaystyle \omega(n) := \prod_{0 \leq i, j \leq H} \left( 1 - \lambda(n+i) \lambda(N-n-j) \right)$

where $\lambda$ is the Liouville function, which counts the parity of the number of prime factors of a given number $n$: thus $\lambda(n) = +1$ when this number of factors is even, and $\lambda(n) = -1$ when it is odd. Since $\lambda(n)^2 = 1$, one can expand out $\omega(n)$ as the sum of $1$ and a finite number of other terms, each of which consists of the product of two or more translates (or reflections) of $\lambda$. But from the Möbius randomness principle (or its analogue for the Liouville function), such products of $\lambda$ are widely expected to be essentially orthogonal to any arithmetic function $f(n)$ that is arising from a single multiplicative function such as $\Lambda$, even on very short arithmetic progressions. As such, replacing $f(n)$ by $f(n) \omega(n)$ in (1) should have a negligible effect on the discrepancy. On the other hand, in order for $\omega(n)$ to be non-zero, each $\lambda(n+i)$ has to have the same sign as $-\lambda(N-n-j)$ and hence the opposite sign to $\lambda(N-n-j)$; in particular $n+i$ and $N-n-j$ cannot simultaneously be prime for any $0 \leq i, j \leq H$, and so $\nu \omega 1_B$ vanishes identically, contradicting (4). This indirectly rules out any modification of the Goldston-Pintz-Yildirim/Zhang method for establishing the binary Goldbach conjecture with bounded error.
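The parity obstruction thus turns on correlation sums such as $\sum_n \lambda(n) \lambda(N-n)$ being small, which is easy to check empirically (the smallest-prime-factor sieve below is a standard construction; this is of course only suggestive of the randomness principle, not evidence for it in any rigorous sense):

```python
def liouville_up_to(n):
    # lam[k] = (-1)^Omega(k), computed via a smallest-prime-factor sieve
    spf = list(range(n + 1))
    for p in range(2, int(n ** 0.5) + 1):
        if spf[p] == p:  # p is prime
            for m in range(p * p, n + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (n + 1)
    lam[1] = 1
    for k in range(2, n + 1):
        lam[k] = -lam[k // spf[k]]  # strip one prime factor, flip the sign
    return lam

N = 100000  # an even "Goldbach" target
lam = liouville_up_to(N)
corr = sum(lam[n] * lam[N - n] for n in range(2, N - 1))
print(corr / N)  # a tiny number, consistent with the Mobius randomness principle
```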
The above argument is not watertight, and one could envisage some ways around this problem. One of them is that the Möbius randomness principle could simply be false, in which case the parity obstruction vanishes. A good example of this is the result of Heath-Brown that shows that if there are infinitely many Siegel zeroes (which is a strong violation of the Möbius randomness principle), then the twin prime conjecture holds. Another way around the obstruction is to start controlling the discrepancy (1) for functions $f$ that are combinations of more than one multiplicative function, e.g. $f(n) = \Lambda(n) \lambda(n)$. However, controlling such functions looks to be at least as difficult as the twin prime conjecture (which is morally equivalent to obtaining non-trivial lower-bounds for $\sum_{n \leq x} \Lambda(n) \Lambda(n+2)$). A third option is not to use a sieve-theoretic argument, but to try a different method (e.g. the circle method). However, most other known methods also exhibit linearity in the “$f$” variable and I would suspect they would be vulnerable to a similar obstruction. (In any case, the circle method specifically has some other difficulties in tackling binary problems, as discussed in this previous post.)
Let $\overline{{\bf Q}}$ be the algebraic closure of ${\bf Q}$, that is to say the field of algebraic numbers. We fix an embedding of $\overline{{\bf Q}}$ into ${\bf C}$, giving rise to a complex absolute value $|\alpha|$ for algebraic numbers $\alpha \in \overline{{\bf Q}}$.
Let $\alpha \in \overline{{\bf Q}}$ be of degree $d \geq 2$, so that $\alpha$ is irrational. A classical theorem of Liouville gives the quantitative bound

$\displaystyle \left|\alpha - \frac{p}{q}\right| \geq \frac{c}{q^d} \ \ \ \ \ (1)$

for the extent to which $\alpha$ fails to be approximated by rational numbers $\frac{p}{q}$, where $c > 0$ depends on $\alpha$ but not on $\frac{p}{q}$. Indeed, if one lets $\alpha = \alpha_1, \alpha_2, \dots, \alpha_d$ be the Galois conjugates of $\alpha$, then the quantity $\prod_{i=1}^d |q \alpha_i - p|$ is a non-zero natural number divided by a constant, and so we have the trivial lower bound

$\displaystyle \prod_{i=1}^d \left|\alpha_i - \frac{p}{q}\right| \gg \frac{1}{q^d}$

from which the bound (1) easily follows. A well known corollary of the bound (1) is that Liouville numbers are automatically transcendental.
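For a concrete feel, one can test the $d = 3$ case numerically with $\alpha = 2^{1/3}$, checking that $q^3 |\alpha - p/q|$ stays bounded away from zero over the best rational approximations in a range (a quick floating-point sketch only; the true effective constant would come from the argument above):

```python
# alpha = 2^(1/3) has degree d = 3, so Liouville predicts q^3 |alpha - p/q| >= c > 0
alpha = 2.0 ** (1.0 / 3.0)
vals = []
for q in range(1, 5000):
    p = round(q * alpha)  # the nearest integer gives the best p for this q
    vals.append(q ** 3 * abs(alpha - p / q))
print(min(vals))  # bounded away from zero, in accordance with (1)
```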
The famous theorem of Thue, Siegel and Roth improves the bound (1) to

$\displaystyle \left|\alpha - \frac{p}{q}\right| \geq \frac{c}{q^{2+\epsilon}} \ \ \ \ \ (2)$

for any $\epsilon > 0$ and rationals $\frac{p}{q}$, where $c > 0$ depends on $\alpha$ and $\epsilon$ but not on $p, q$. Apart from the $\epsilon$ in the exponent and the implied constant, this bound is optimal, as can be seen from Dirichlet’s theorem. This theorem is a good example of the ineffectivity phenomenon that affects a large portion of modern number theory: the constant $c$ is known to be positive, but there is no explicit bound for it in terms of $\epsilon$ and the coefficients of the polynomial defining $\alpha$ (in contrast to (1), for which an effective bound may be easily established). This is ultimately due to the reliance on the “dueling conspiracy” (or “repulsion phenomenon”) strategy. We do not as yet have a good way to rule out one counterexample to (2), in which $\frac{p}{q}$ is far closer to $\alpha$ than $\frac{c}{q^{2+\epsilon}}$; however we can rule out two such counterexamples, by playing them off of each other.
A powerful strengthening of the Thue-Siegel-Roth theorem is given by the subspace theorem, first proven by Schmidt and then generalised further by several authors. To motivate the theorem, first observe that the Thue-Siegel-Roth theorem may be rephrased as a bound of the form

$\displaystyle |\alpha p + \beta q| \times |\gamma p + \delta q| \gg \frac{1}{(|p| + |q|)^{\epsilon}} \ \ \ \ \ (3)$

for any algebraic numbers $\alpha, \beta, \gamma, \delta$ with $(\alpha, \beta)$ and $(\gamma, \delta)$ linearly independent (over the algebraic numbers), and any integers $p, q$ and $\epsilon > 0$, with the exception when $\alpha, \beta$ or $\gamma, \delta$ are rationally dependent (i.e. one is a rational multiple of the other), in which case one has to remove some lines (i.e. subspaces in ${\bf Q}^2$) of rational slope from the space of pairs $(p, q)$ to which the bound (3) does not apply (namely, those lines for which the left-hand side vanishes). Here the implied constant can depend on $\alpha, \beta, \gamma, \delta, \epsilon$ but not on $p, q$. More generally, we have
Theorem 1 (Schmidt subspace theorem) Let $n$ be a natural number. Let $L_1, \dots, L_n: {\bf R}^n \rightarrow {\bf R}$ be linearly independent linear forms with algebraic coefficients. Then for any $\epsilon > 0$, one has the bound

$\displaystyle \prod_{i=1}^n |L_i(x)| \gg \frac{1}{\|x\|^{\epsilon}}$

for all $x \in {\bf Z}^n$, outside of a finite number of proper subspaces of ${\bf Q}^n$, where

$\displaystyle \|x\| := \max( |x_1|, \dots, |x_n| )$

and the implied constant depends on $n$, $\epsilon$ and the $L_1, \dots, L_n$, but is independent of $x$.
Being a generalisation of the Thue-Siegel-Roth theorem, it is unsurprising that the known proofs of the subspace theorem are also ineffective with regards to the implied constant. (However, the number of exceptional subspaces may be bounded effectively; cf. the situation with the Skolem-Mahler-Lech theorem, discussed in this previous blog post.) Once again, the lower bound here is basically sharp except for the $\|x\|^{\epsilon}$ factor and the implied constant: given any $A_1, \dots, A_n > 0$ with $A_1 \cdots A_n = 1$, a simple volume packing argument (the same one used to prove the Dirichlet approximation theorem) shows that for any sufficiently large $X$, one can find integers $x_1, \dots, x_n$ with $0 < \|x\| \leq X$, not all zero, such that

$\displaystyle |L_i(x)| \ll A_i$

for all $1 \leq i \leq n$. Thus one can get $\prod_{i=1}^n |L_i(x)|$ comparable to $1$ in many different ways.
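In one dimension, the volume packing argument is just the classical Dirichlet approximation theorem, which is easy to certify by brute force (a small sketch; `dirichlet_approx` is a name introduced here):

```python
from math import sqrt

def dirichlet_approx(alpha, Q):
    # brute-force witness for Dirichlet's theorem: some 1 <= q <= Q
    # satisfies |q*alpha - p| < 1/Q with p the nearest integer
    err, q = min((abs(q * alpha - round(q * alpha)), q) for q in range(1, Q + 1))
    return q, round(q * alpha), err

alpha = sqrt(2)
for Q in (10, 100, 1000):
    q, p, err = dirichlet_approx(alpha, Q)
    print(Q, q, p, err)  # err < 1/Q in every row, as the theorem guarantees
```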
There are important generalisations of the subspace theorem to other number fields than the rationals (and to other valuations than the Archimedean valuation $|\cdot|$); we will develop one such generalisation below.
The subspace theorem is one of many finiteness theorems in Diophantine geometry; in this case, it is the number of exceptional subspaces which is finite. It turns out that finiteness theorems are very compatible with the language of nonstandard analysis. (See this previous blog post for a review of the basics of nonstandard analysis, and in particular for the nonstandard interpretation of asymptotic notation such as $\ll$ and $o(1)$.) The reason for this is that a standard set $A$ is finite if and only if it contains no strictly nonstandard elements (that is to say, elements of ${}^* A \backslash A$). This makes for a clean formulation of finiteness theorems in the nonstandard setting. For instance, the standard form of Bezout’s theorem asserts that if $P, Q$ are coprime polynomials over some field, then the curves $\{ P = 0 \}$ and $\{ Q = 0 \}$ intersect in only finitely many points. The nonstandard version of this is then
Theorem 2 (Bezout’s theorem, nonstandard form) Let $P(x,y), Q(x,y)$ be standard coprime polynomials. Then there are no strictly nonstandard solutions to $P(x,y) = Q(x,y) = 0$.
Now we reformulate Theorem 1 in nonstandard language. We need a definition:
Definition 3 (General position) Let $K \subset L$ be nested fields. A point $x = (x_1, \dots, x_n)$ in $L^n$ is said to be in $K$-general position if it is not contained in any hyperplane of $L^n$ definable over $K$, or equivalently if one has

$\displaystyle c_1 x_1 + \dots + c_n x_n \neq 0$

for any $c_1, \dots, c_n \in K$, not all zero.
Theorem 4 (Schmidt subspace theorem, nonstandard version) Let $n$ be a standard natural number. Let $L_1, \dots, L_n: {\bf R}^n \rightarrow {\bf R}$ be linearly independent standard linear forms with algebraic coefficients. Let $x \in {}^* {\bf Z}^n$ be a tuple of nonstandard integers which is in ${\bf Q}$-general position (in particular, this forces $x$ to be strictly nonstandard). Then one has

$\displaystyle \prod_{i=1}^n |L_i(x)| \gg \frac{1}{\|x\|^{o(1)}},$

where we extend $L_1, \dots, L_n$ from ${\bf R}^n$ to ${}^* {\bf R}^n$ (and also similarly extend $\|x\|$ from ${\bf R}^n$ to ${}^* {\bf R}^n$) in the usual fashion.
Observe that (as is usual when translating to nonstandard analysis) some of the epsilons and quantifiers that are present in the standard version become hidden in the nonstandard framework, being moved inside concepts such as “strictly nonstandard” or “general position”. We remark that as $x$ is in ${\bf Q}$-general position, it is also in $\overline{{\bf Q}}$-general position (as an easy Galois-theoretic argument shows), and the requirement that the $L_1, \dots, L_n$ are linearly independent is thus equivalent to $L_1, \dots, L_n$ being $\overline{{\bf Q}}$-linearly independent.
Exercise 1 Verify that Theorem 1 and Theorem 4 are equivalent. (Hint: there are only countably many proper subspaces of ${\bf Q}^n$.)
We will not prove the subspace theorem here, but instead focus on a particular application of the subspace theorem, namely to counting integer points on curves. In this paper of Corvaja and Zannier, the subspace theorem was used to give a new proof of the following basic result of Siegel:
Theorem 5 (Siegel’s theorem on integer points) Let $P(x,y)$ be an irreducible polynomial of two variables with rational coefficients, such that the affine plane curve $C := \{ (x,y): P(x,y) = 0 \}$ either has genus at least one, or has at least three points on the line at infinity, or both. Then $C$ has only finitely many integer points $(x,y) \in {\bf Z}^2$.
This is a finiteness theorem, and as such may be easily converted to a nonstandard form:
Theorem 6 (Siegel’s theorem, nonstandard form) Let $P(x,y)$ be a standard irreducible polynomial of two variables with rational coefficients, such that the affine plane curve $C := \{ (x,y): P(x,y) = 0 \}$ either has genus at least one, or has at least three points on the line at infinity, or both. Then $C$ does not contain any strictly nonstandard integer points $(x,y) \in {}^* {\bf Z}^2$.
Note that Siegel’s theorem can fail for genus zero curves that only meet the line at infinity at just one or two points; the key examples here are the graphs $\{ (x,y): y = P(x) \}$ for a polynomial $P$, and the Pell equation curves $\{ (x,y): x^2 - d y^2 = 1 \}$ for a non-square natural number $d$. Siegel’s theorem can be compared with the more difficult theorem of Faltings, which establishes finiteness of rational points (not just integer points), but now needs the stricter requirement that the curve has genus at least two (to avoid the additional counterexample of elliptic curves of positive rank, which have infinitely many rational points).
The standard proofs of Siegel’s theorem rely on a combination of the Thue-Siegel-Roth theorem and a number of results on abelian varieties (notably the Mordell-Weil theorem). The Corvaja-Zannier argument rebalances the difficulty of the argument by replacing the Thue-Siegel-Roth theorem by the more powerful subspace theorem (in fact, they need one of the stronger versions of this theorem alluded to earlier), while greatly reducing the reliance on results on abelian varieties. Indeed, for curves with three or more points at infinity, no theory from abelian varieties is needed at all, while for the remaining cases, one mainly needs the existence of the Abel-Jacobi embedding, together with a relatively elementary theorem of Chevalley-Weil which is used in the proof of the Mordell-Weil theorem, but is significantly easier to prove.
The Corvaja-Zannier argument (together with several further applications of the subspace theorem) is presented nicely in this Bourbaki exposé of Bilu. To establish the theorem in full generality requires a certain amount of algebraic number theory machinery, such as the theory of valuations on number fields, or of relative discriminants between such number fields. However, the basic ideas can be presented without much of this machinery by focusing on simple special cases of Siegel’s theorem. For instance, we can handle irreducible cubics that meet the line at infinity at exactly three distinct points $[1 : \alpha_1 : 0], [1 : \alpha_2 : 0], [1 : \alpha_3 : 0]$:
Theorem 7 (Siegel’s theorem with three points at infinity) Siegel’s theorem holds when the irreducible polynomial takes the form
for some quadratic polynomial and some distinct algebraic numbers .
Proof: We use the nonstandard formalism. Suppose for sake of contradiction that we can find a strictly nonstandard integer point on a curve of the indicated form. As this point is infinitesimally close to the line at infinity, must be infinitesimally close to one of ; without loss of generality we may assume that is infinitesimally close to .
We now use a version of the polynomial method, to find some polynomials of controlled degree that vanish to high order on the “arm” of the cubic curve that asymptotes to . More precisely, let be a large integer (actually will already suffice here), and consider the -vector space of polynomials of degree at most , and of degree at most in the variable; this space has dimension . Also, as one traverses the arm of , any polynomial in grows at a rate of at most , that is to say has a pole of order at most at the point at infinity . By performing Laurent expansions around this point (which is a non-singular point of , as the are assumed to be distinct), we may thus find a basis of , with the property that has a pole of order at most at for each .
From the control of the pole at , we have
for all . The exponents here become negative for , and on multiplying them all together we see that
This exponent is negative for large enough (or just take ). If we expand
for some algebraic numbers , then we thus have
for some standard . Note that the -dimensional vectors are linearly independent in , because the are linearly independent in . Applying the Schmidt subspace theorem in the contrapositive, we conclude that the -tuple is not in -general position. That is to say, one has a non-trivial constraint of the form
for some standard rational coefficients , not all zero. But, as is irreducible and cubic in , it has no common factor with the standard polynomial , so by Bezout’s theorem (Theorem 2) the constraint (4) only has standard solutions, contradicting the strictly nonstandard nature of .
Exercise 2 Rewrite the above argument so that it makes no reference to nonstandard analysis. (In this case, the rewriting is quite straightforward; however, there will be a subsequent argument in which the standard version is significantly messier than the nonstandard counterpart, which is the reason why I am working with the nonstandard formalism in this blog post.)
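For a concrete feel for Theorem 7 (the equation below is my own illustrative choice, not from the argument above): the Thue equation x^3 - 2y^3 = 1 is an irreducible cubic whose projectivisation meets the line at infinity at the three distinct points corresponding to the cube roots of 2, so the theorem applies and predicts finitely many integer points. Indeed, it is classical that (1, 0) and (-1, -1) are the only integer solutions, and a brute-force search over a box is consistent with this:

```python
def thue_solutions(c, bound):
    """All integer points on x^3 - 2*y^3 = c with |x|, |y| <= bound."""
    return sorted((x, y)
                  for x in range(-bound, bound + 1)
                  for y in range(-bound, bound + 1)
                  if x**3 - 2 * y**3 == c)

print(thue_solutions(1, 60))  # [(-1, -1), (1, 0)]
```

Of course, no finite search proves finiteness; the point of Siegel's theorem (or here, Thue's theorem) is precisely that the list above is complete.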
A similar argument works for higher degree curves that meet the line at infinity in three or more points, though if the curve has singularities at infinity then it becomes convenient to rely on the Riemann-Roch theorem to control the dimension of the analogue of the space . Note that when there are only two or fewer points at infinity, though, one cannot get the negative exponent of needed to usefully apply the subspace theorem. To deal with this case we require some additional tricks. For simplicity we focus on the case of Mordell curves, although it will be convenient to work with more general number fields than the rationals:
Theorem 8 (Siegel’s theorem for Mordell curves) Let be a non-zero integer. Then there are only finitely many integer solutions to . More generally, for any number field , and any nonzero , there are only finitely many algebraic integer solutions to , where is the ring of algebraic integers in .
Again, we will establish the nonstandard version. We need some additional notation:
Definition 9
We define an almost rational integer to be a nonstandard such that for some standard positive integer , and write for the -algebra of almost rational integers. If is a standard number field, we define an almost -integer to be a nonstandard such that for some standard positive integer , and write for the -algebra of almost -integers. We define an almost algebraic integer to be a nonstandard such that is a nonstandard algebraic integer for some standard positive integer , and write for the -algebra of almost algebraic integers.
Theorem 10 (Siegel for Mordell, nonstandard version) Let be a non-zero standard algebraic number. Then the curve does not contain any strictly nonstandard almost algebraic integer point.
Another way of phrasing this theorem is that if are strictly nonstandard almost algebraic integers, then is either strictly nonstandard or zero.
Exercise 3 Verify that Theorem 8 and Theorem 10 are equivalent.
Due to all the ineffectivity, our proof does not supply any bound on the solutions in terms of , even if one removes all references to nonstandard analysis. It is a conjecture of Hall (a special case of the notorious ABC conjecture) that one has the bound for all (or equivalently ), but even the weaker conjecture that are of polynomial size in is open. (The best known bounds are of exponential nature, and are proven using a version of Baker’s method: see for instance this text of Sprindzuk.)
A direct repetition of the arguments used to prove Theorem 7 will not work here, because the Mordell curve only hits the line at infinity at one point, . To get around this we will exploit the fact that the Mordell curve is an elliptic curve and thus has a group law on it. We will then divide all the integer points on this curve by two; as elliptic curves have four 2-torsion points, this will end up placing us in a situation like Theorem 7, with four points at infinity. However, there is an obstruction: it is not obvious that dividing an integer point on the Mordell curve by two will produce another integer point. However, this is essentially true (after enlarging the ring of integers slightly) thanks to a general principle of Chevalley and Weil, which can be worked out explicitly in the case of division by two on Mordell curves by relatively elementary means (relying mostly on unique factorisation of ideals of algebraic integers). We give the details below the fold.
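By way of illustration (my own numerics, not part of the Chevalley-Weil argument): brute force confirms the expected scarcity of integer points on small Mordell curves. For y^2 = x^3 - 2, a classical result going back to Fermat asserts that (3, +-5) are the only integer points, and for y^2 = x^3 + 1 the full list is (-1, 0), (0, +-1), (2, +-3); a finite search agrees, though of course it proves nothing about points beyond the search range:

```python
from math import isqrt

def mordell_points(k, xbound):
    """Integer points (x, y) on the Mordell curve y^2 = x^3 + k with |x| <= xbound."""
    pts = []
    for x in range(-xbound, xbound + 1):
        rhs = x**3 + k
        if rhs < 0:
            continue
        y = isqrt(rhs)  # exact integer square root, no floating point
        if y * y == rhs:
            pts.extend([(x, 0)] if y == 0 else [(x, -y), (x, y)])
    return sorted(pts)

print(mordell_points(-2, 1000))  # [(3, -5), (3, 5)]
print(mordell_points(1, 1000))   # [(-1, 0), (0, -1), (0, 1), (2, -3), (2, 3)]
```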
Let be a quasiprojective variety defined over a finite field , thus for instance could be an affine variety
where is -dimensional affine space and are a finite collection of polynomials with coefficients in . Then one can define the set of -rational points, and more generally the set of -rational points for any , since can be viewed as a field extension of . Thus for instance in the affine case (1) we have
The Weil conjectures are concerned with understanding the number
of -rational points of a variety . The first of these conjectures was proven by Dwork, and can be phrased as follows.
Theorem 1 (Rationality of the zeta function) Let be a quasiprojective variety defined over a finite field , and let be given by (2). Then there exist a finite number of algebraic integers (known as characteristic values of ), such that
for all .
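Theorem 1 can be illustrated numerically in a simple case (the curve and recurrence below are a standard supersingular example; this is my own addition, not from the post). For the projective curve y^2 + y = x^3 over F_2 one has N_n = 2^n + 1 - alpha^n - beta^n, where alpha + beta = 0 and alpha*beta = 2, so the integer sums s_n = alpha^n + beta^n obey s_n = -2*s_{n-2} with s_0 = 2, s_1 = 0. The sketch below implements GF(2^n) arithmetic via polynomial bitmasks (the particular irreducible polynomials are a choice of the sketch) and checks the formula for n up to 4:

```python
# Irreducible polynomials over F_2, encoded as bitmasks, defining GF(2^n).
IRRED = {1: 0b10, 2: 0b111, 3: 0b1011, 4: 0b10011}

def gf_mul(a, b, n):
    """Multiply two elements of GF(2^n), with elements encoded as bitmasks."""
    mod, r = IRRED[n], 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:  # reduce when the degree reaches n
            a ^= mod
    return r

def count_points(n):
    """Projective points of y^2 + y = x^3 over F_{2^n} (affine points plus infinity)."""
    q = 1 << n
    total = 1  # the point at infinity
    for x in range(q):
        x3 = gf_mul(gf_mul(x, x, n), x, n)
        total += sum(1 for y in range(q) if gf_mul(y, y, n) ^ y == x3)
    return total

# s_n = alpha^n + beta^n from the recurrence s_n = -2*s_{n-2}, s_0 = 2, s_1 = 0.
s = [2, 0, -4, 0, 8]
for n in range(1, 5):
    assert count_points(n) == 2**n + 1 - s[n]
print([count_points(n) for n in range(1, 5)])  # [3, 9, 9, 9]
```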
After cancelling, we may of course assume that for any and , and then it is easy to see (as we will see below) that the become uniquely determined up to permutations of the and . These values are known as the characteristic values of . Since is a rational integer (i.e. an element of ) rather than merely an algebraic integer (i.e. an element of the ring of integers of the algebraic closure of ), we conclude from the above-mentioned uniqueness that the set of characteristic values is invariant with respect to the Galois group . To emphasise this Galois invariance, we will not fix a specific embedding of the algebraic numbers into the complex field , but work with all such embeddings simultaneously. (Thus, for instance, contains three cube roots of , but which of these is assigned to the complex numbers , , will depend on the choice of embedding .)
An equivalent way of phrasing Dwork’s theorem is that the (-form of the) zeta function
associated to (which is well defined as a formal power series in , at least) is equal to a rational function of (with the and being the poles and zeroes of respectively). Here, we use the formal exponential
Equivalently, the (-form of the) zeta-function is a meromorphic function on the complex numbers which is also periodic with period , and which has only finitely many poles and zeroes up to this periodicity.
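In the very simplest cases one can check this rationality directly from the formal exponential. For the affine line over F_2 one has N_n = 2^n, so log Z = sum_n (2T)^n / n and Z(T) = 1/(1 - 2T); for the projective line, N_n = 2^n + 1 and Z(T) = 1/((1 - T)(1 - 2T)). A small sketch with exact truncated power series (my own illustration, using only standard library rationals):

```python
from fractions import Fraction

ORDER = 12  # all power series are truncated at T^ORDER

def mul(f, g):
    """Multiply two truncated power series given as coefficient lists."""
    h = [Fraction(0)] * ORDER
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < ORDER:
                h[i + j] += a * b
    return h

def exp_series(f):
    """exp(f) for a truncated series with f[0] == 0, via g' = f' * g."""
    assert f[0] == 0
    g = [Fraction(0)] * ORDER
    g[0] = Fraction(1)
    for n in range(1, ORDER):
        g[n] = sum(Fraction(k) * f[k] * g[n - k] for k in range(1, n + 1)) / n
    return g

def geom(c):
    """Series of 1/(1 - c*T)."""
    return [Fraction(c) ** n for n in range(ORDER)]

# Affine line over F_2: N_n = 2^n, so Z(T) = 1/(1 - 2T).
log_z = [Fraction(0)] + [Fraction(2**n, n) for n in range(1, ORDER)]
assert exp_series(log_z) == geom(2)

# Projective line over F_2: N_n = 2^n + 1, so Z(T) = 1/((1 - T)(1 - 2T)).
log_z = [Fraction(0)] + [Fraction(2**n + 1, n) for n in range(1, ORDER)]
assert exp_series(log_z) == mul(geom(1), geom(2))
```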
Dwork’s argument relies primarily on -adic analysis – an analogue of complex analysis, but over an algebraically closed (and metrically complete) extension of the -adic field , rather than over the Archimedean complex numbers . The argument is quite effective, and in particular gives explicit upper bounds for the number of characteristic values in terms of the complexity of the variety ; for instance, in the affine case (1) with of degree , Bombieri used Dwork’s methods (in combination with Deligne’s theorem below) to obtain the bound , and a subsequent paper of Hooley established the slightly weaker bound purely from Dwork’s methods (a similar bound had also been pointed out in unpublished work of Dwork). In particular, one has bounds that are uniform in the field , which is an important fact for many analytic number theory applications.
These -adic arguments stand in contrast with Deligne’s resolution of the last (and deepest) of the Weil conjectures:
Theorem 2 (Riemann hypothesis) Let be a quasiprojective variety defined over a finite field , and let be a characteristic value of . Then there exists a natural number such that for every embedding , where denotes the usual absolute value on the complex numbers . (Informally: and all of its Galois conjugates have complex magnitude .)
To put it another way that closely resembles the classical Riemann hypothesis, all the zeroes and poles of the -form lie on the critical lines for . (See this previous blog post for further comparison of various instantiations of the Riemann hypothesis.) Whereas Dwork uses -adic analysis, Deligne uses the essentially orthogonal technique of ℓ-adic cohomology to establish his theorem. However, ℓ-adic methods can be used (via the Grothendieck-Lefschetz trace formula) to establish rationality, and conversely, in this paper of Kedlaya, p-adic methods are used to establish the Riemann hypothesis. As pointed out by Kedlaya, the ℓ-adic methods are tied to the intrinsic geometry of (such as the structure of sheaves and covers over ), while the -adic methods are more tied to the extrinsic geometry of (how sits inside its ambient affine or projective space).
In this post, I would like to record my notes on Dwork’s proof of Theorem 1, drawing heavily on the expositions of Serre, Hooley, Koblitz, and others.
The basic strategy is to control the rational integers both in an “Archimedean” sense (embedding the rational integers inside the complex numbers with the usual norm ) as well as in the “-adic” sense, with the characteristic of (embedding the integers now in the “complexification” of the -adic numbers , which is equipped with a norm that we will recall later). (This is in contrast to the methods of ℓ-adic cohomology, in which one primarily works over an -adic field with .) The Archimedean control is trivial:
Proposition 3 (Archimedean control of ) With as above, and any embedding , we have
for all and some independent of .
Proof: Since is a rational integer, is just . By decomposing into affine pieces, we may assume that is of the affine form (1), then we trivially have , and the claim follows.
Another way of thinking about this Archimedean control is that it guarantees that the zeta function can be defined holomorphically on the open disk in of radius centred at the origin.
The -adic control is significantly more difficult, and is the main component of Dwork’s argument:
Proposition 4 (-adic control of ) With as above, and using an embedding (defined later) with the characteristic of , we can find for any real a finite number of elements such that
for all .
Another way of thinking about this -adic control is that it guarantees that the zeta function can be defined meromorphically on the entire -adic complex field .
Proposition 4 is ostensibly much weaker than Theorem 1 because of (a) the error term of -adic magnitude at most ; (b) the fact that the number of potential characteristic values here may go to infinity as ; and (c) the potential characteristic values only exist inside the complexified -adics , rather than in the algebraic integers . However, it turns out that by combining -adic control on in Proposition 4 with the trivial control on in Proposition 3, one can obtain Theorem 1 by an elementary argument that does not use any further properties of (other than the obvious fact that the are rational integers), with the in Proposition 4 chosen to exceed the in Proposition 3. We give this argument (essentially due to Borel) below the fold.
The proof of Proposition 4 can be split into two pieces. The first piece, which can be viewed as the number-theoretic component of the proof, uses external descriptions of such as (1) to obtain the following decomposition of :
Proposition 5 (Decomposition of ) With and as above, we can decompose as a finite linear combination (over the integers) of sequences , such that for each such sequence , the zeta functions
are entire in , by which we mean that
as .
This proposition will ultimately be a consequence of the properties of the Teichmüller lifting .
The second piece, which can be viewed as the “-adic complex analytic” component of the proof, relates the -adic entire nature of a zeta function with control on the associated sequence , and can be interpreted (after some manipulation) as a -adic version of the Weierstrass preparation theorem:
Proposition 6 (-adic Weierstrass preparation theorem) Let be a sequence in , such that the zeta function
is entire in . Then for any real , there exist a finite number of elements such that
for all and some .
Clearly, the combination of Proposition 5 and Proposition 6 (and the non-Archimedean nature of the norm) imply Proposition 4.
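The non-Archimedean nature of the norm invoked here is the ultrametric inequality |a + b|_p <= max(|a|_p, |b|_p), which already holds for the usual p-adic absolute value on the rationals, |x|_p = p^(-v_p(x)). A quick sketch verifying it on a grid of rationals (my own illustration):

```python
from fractions import Fraction
from itertools import product

def vp(x, p):
    """p-adic valuation of a nonzero rational x."""
    x = Fraction(x)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def norm_p(x, p):
    """p-adic absolute value |x|_p = p^(-v_p(x)), as an exact Fraction."""
    return Fraction(0) if x == 0 else Fraction(p) ** (-vp(x, p))

# Spot-check the ultrametric inequality |a+b|_p <= max(|a|_p, |b|_p).
samples = [Fraction(n, d) for n, d in product(range(-6, 7), range(1, 7)) if n]
for p in (2, 3, 5):
    for a, b in product(samples, repeat=2):
        if a + b != 0:
            assert norm_p(a + b, p) <= max(norm_p(a, p), norm_p(b, p))
```

Note that this ultrametric property is exactly what allows the error terms in Proposition 4 to be combined without loss, in contrast to the Archimedean triangle inequality.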
Let be a finite field of order , and let be an absolutely irreducible smooth projective curve defined over (and hence over the algebraic closure of that field). For instance, could be the projective elliptic curve
in the projective plane , where are coefficients whose discriminant is non-vanishing, which is the projective version of the affine elliptic curve
To each such curve one can associate a genus , which we will define later; for instance, elliptic curves have genus . We can also count the cardinality of the set of -points of . The Hasse-Weil bound relates the two:
The usual proofs of this bound proceed by first establishing a trace formula of the form
for some complex numbers independent of ; this is in fact a special case of the Lefschetz-Grothendieck trace formula, and can be interpreted as an assertion that the zeta function associated to the curve is rational. The task is then to establish a bound for all ; this (or more precisely, the slightly stronger assertion ) is the Riemann hypothesis for such curves. This can be done either by passing to the Jacobian variety of and using a certain duality available on the cohomology of such varieties, known as Rosati involution; alternatively, one can pass to the product surface and apply the Riemann-Roch theorem for that surface.
In 1969, Stepanov introduced an elementary method (a version of what is now known as the polynomial method) to count (or at least to upper bound) the quantity . The method was initially restricted to hyperelliptic curves, but was soon extended to general curves. In particular, Bombieri used this method to give a short proof of the following weaker version of the Hasse-Weil bound:
Theorem 2 (Weak Hasse-Weil bound) If is a perfect square, and , then .
In fact, the bound on can be sharpened a little bit further, as we will soon see.
Theorem 2 is only an upper bound on , but there is a Galois-theoretic trick to convert (a slight generalisation of) this upper bound to a matching lower bound, and if one then uses the trace formula (1) (and the “tensor power trick” of sending to infinity to control the weights ) one can then recover the full Hasse-Weil bound. We discuss these steps below the fold.
I’ve discussed Bombieri’s proof of Theorem 2 in this previous post (in the special case of hyperelliptic curves), but now wish to present the full proof, with some minor simplifications from Bombieri’s original presentation; it is mostly elementary, with the deepest fact from algebraic geometry needed being Riemann’s inequality (a weak form of the Riemann-Roch theorem).
The first step is to reinterpret as the number of points of intersection between two curves in the surface . Indeed, if we define the Frobenius endomorphism on any projective space by
then this map preserves the curve , and the fixed points of this map are precisely the points of :
Thus one can interpret as the number of points of intersection between the diagonal curve
and the Frobenius graph
which are copies of inside . But we can use the additional hypothesis that is a perfect square to write this more symmetrically, by taking advantage of the fact that the Frobenius map has a square root
with also preserving . One can then also interpret as the number of points of intersection between the curve
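The identification of rational points with Frobenius fixed points can be made concrete in a toy case (my own example, not from the text): realise F_9 as F_3[t]/(t^2 + 1) and take the smooth curve y^2 = x^3 + x. The points of the curve over F_9 fixed by the Frobenius map (x, y) -> (x^3, y^3) are exactly those with both coordinates in the prime field F_3:

```python
P = 3  # F_9 = F_3[t]/(t^2 + 1); elements are pairs (a, b) representing a + b*t

def add(u, v):
    return ((u[0] + v[0]) % P, (u[1] + v[1]) % P)

def mul(u, v):
    a, b = u
    c, d = v
    return ((a * c - b * d) % P, (a * d + b * c) % P)  # uses t^2 = -1

def frob(u):
    return mul(mul(u, u), u)  # the Frobenius map x -> x^3

F9 = [(a, b) for a in range(P) for b in range(P)]
curve = [(x, y) for x in F9 for y in F9
         if mul(y, y) == add(mul(mul(x, x), x), x)]  # y^2 = x^3 + x

fixed = sorted(pt for pt in curve if (frob(pt[0]), frob(pt[1])) == pt)
rational = sorted(pt for pt in curve if pt[0][1] == 0 and pt[1][1] == 0)
assert fixed == rational
print(len(fixed))  # prints 3: the affine F_3-points of the curve
```

This is just the statement that x^3 = x holds in F_9 precisely when x lies in F_3, but it is the mechanism underlying the interpretation of the point count as an intersection number.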
Let be the field of rational functions on (with coefficients in ), and define , , and analogously (although is likely to be disconnected, so will just be a ring rather than a field). We then (morally) have the commuting square
if we ignore the issue that a rational function on, say, , might blow up on all of and thus not have a well-defined restriction to . We use and to denote the restriction maps. Furthermore, we have obvious isomorphisms , coming from composing with the graphing maps and .
The idea now is to find a rational function on the surface of controlled degree which vanishes when restricted to , but is non-vanishing (and not blowing up) when restricted to . On , we thus get a non-zero rational function of controlled degree which vanishes on – which then lets us bound the cardinality of in terms of the degree of . (In Bombieri’s original argument, one required vanishing to high order on the side, but in our presentation, we have factored out a term which removes this high order vanishing condition.)
To find this , we will use linear algebra. Namely, we will locate a finite-dimensional subspace of (consisting of certain “controlled degree” rational functions) which projects injectively to , but whose projection to has strictly smaller dimension than itself. The rank-nullity theorem then forces the existence of a non-zero element of whose projection to vanishes, but whose projection to is non-zero.
Now we build . Pick a point of , which we will think of as being a point at infinity. (For the purposes of proving Theorem 2, we may clearly assume that is non-empty.) Thus is fixed by . To simplify the exposition, we will also assume that is fixed by the square root of ; in the opposite case, when has order two acting on , the argument is essentially the same, but all references to in the second factor of need to be replaced by (we leave the details to the interested reader).
For any natural number , define to be the set of rational functions which are allowed to have a pole of order up to at , but have no other poles on ; note that as we are assuming to be smooth, it is unambiguous what a pole is (and what order it will have). (In the fancier language of divisors and Cech cohomology, we have .) The space is clearly a vector space over ; one can view intuitively as the space of “polynomials” on of “degree” at most . When , consists just of the constant functions. Indeed, if , then the image of avoids and so lies in the affine line ; but as is projective, the image needs to be compact (hence closed) in , and must therefore be a point, giving the claim.
For higher , we have the easy relations
The former inequality just comes from the trivial inclusion . For the latter, observe that if two functions lie in , so that they each have a pole of order at most at , then some linear combination of these functions must have a pole of order at most at ; thus has codimension at most one in , giving the claim.
From (3) and induction we see that each of the are finite dimensional, with the trivial upper bound
Riemann’s inequality complements this with the lower bound
thus one has for all but at most exceptions (in fact, exactly exceptions as it turns out). This is a consequence of the Riemann-Roch theorem; it can be proven from abstract nonsense (the snake lemma) if one defines the genus in a non-standard fashion (as the dimension of the first Cech cohomology of the structure sheaf of ), but to obtain this inequality with a standard definition of (e.g. as the dimension of the zeroth Cech cohomology of the line bundle of differentials) requires the more non-trivial tool of Serre duality.
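To see what Riemann's inequality says in the simplest case (a standard genus zero example, added here for illustration): on the projective line X = P^1 with P the point at infinity, a rational function with a pole of order at most n at P and no other poles is exactly a polynomial of degree at most n, so

```latex
L(nP) \;=\; \operatorname{span}_k(1, x, x^2, \ldots, x^n), \qquad
\dim L(nP) \;=\; n + 1 \;=\; n + 1 - g \quad (g = 0),
```

and Riemann's inequality holds with equality for every n; for curves of positive genus the inequality can be strict for small n, which is the source of the exceptional values just mentioned.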
At any rate, now that we have these vector spaces , we will define to be a tensor product space
for some natural numbers which we will optimise in later. That is to say, is spanned by functions of the form with and . This is clearly a linear subspace of of dimension , and hence by Riemann's inequality we have
Observe that maps a tensor product to a function . If and , then we see that the function has a pole of order at most at . We conclude that
and in particular by (4)
We will choose to be a bit bigger than , to make the image of smaller than that of . From (6), (10) we see that if we have the inequality
(together with (7)) then cannot be injective.
On the other hand, we have the following basic fact:
Proof: From (3), we can find a linear basis of such that each of the has a distinct order of pole at (somewhere between and inclusive). Similarly, we may find a linear basis of such that each of the has a distinct order of pole at (somewhere between and inclusive). The functions then span , and the order of pole at is . But since , these orders are all distinct, and so these functions must be linearly independent. The claim follows.
This gives us the following bound:
Proposition 4 Let be natural numbers such that (7), (11), (12) hold. Then .
Proof: As is not injective, we can find with vanishing. By the above lemma, the function is then non-zero, but it must also vanish on , which has cardinality . On the other hand, by (8), has a pole of order at most at and no other poles. Since the number of poles and zeroes of a rational function on a projective curve must add up to zero, the claim follows.
If , we may make the explicit choice
and a brief calculation then gives Theorem 2. In some cases one can optimise things a bit further. For instance, in the genus zero case (e.g. if is just the projective line ) one may take and conclude the absolutely sharp bound in this case; in the case of the projective line , the function is in fact the very concrete function .
Remark 1 When is not a perfect square, one can try to run the above argument using the factorisation instead of . This gives a weaker version of the above bound, of the shape . In the hyperelliptic case at least, one can erase this loss by working with a variant of the argument in which one requires to vanish to high order at , rather than just to first order; see this survey article of mine for details.
This is the eighth thread for the Polymath8b project to obtain new bounds for the quantity
either for small values of (in particular ) or asymptotically as . The previous thread may be found here. The currently best known bounds on can be found at the wiki page.
The big news since the last thread is that we have managed to obtain the (sieve-theoretically) optimal bound of assuming the generalised Elliott-Halberstam conjecture (GEH), which pretty much closes off that part of the story. Unconditionally, our bound on is still . This bound was obtained using the “vanilla” Maynard sieve, in which the cutoff was supported in the original simplex , and only Bombieri-Vinogradov was used. In principle, we can enlarge the sieve support a little bit further now; for instance, we can enlarge to , but then have to shrink the J integrals to , provided that the marginals vanish for . However, we do not yet know how to numerically work with these expanded problems.
Given the substantial progress made so far, it looks like we are close to the point where we should declare victory and write up the results (though we should take one last look to see if there is any room to improve the bounds). There is actually a fair bit to write up:
- Improvements to the Maynard sieve (pushing beyond the simplex, the epsilon trick, and pushing beyond the cube);
- Asymptotic bounds for and hence ;
- Explicit bounds for (using the Polymath8a results)
- ;
- on GEH (and parity obstructions to any further improvement).
I will try to create a skeleton outline of such a paper in the Polymath8 Dropbox folder soon. It shouldn’t be nearly as big as the Polymath8a paper, but it will still be quite sizeable.
There are multiple purposes to this blog post.
The first purpose is to announce the uploading of the paper “New equidistribution estimates of Zhang type, and bounded gaps between primes” by D.H.J. Polymath, which is the main output of the Polymath8a project on bounded gaps between primes, to the arXiv, and to describe the main results of this paper below the fold.
The second purpose is to roll over the previous thread on all remaining Polymath8a-related matters (e.g. updates on the submission status of the paper) to a fresh thread. (Discussion of the ongoing Polymath8b project is however being kept on a separate thread, to try to reduce confusion.)
The final purpose of this post is to coordinate the writing of a retrospective article on the Polymath8 experience, which has been solicited for the Newsletter of the European Mathematical Society. I suppose that this could encompass both the Polymath8a and Polymath8b projects, even though the second one is still ongoing (but I think we will soon be entering the endgame there). I think there would be two main purposes of such a retrospective article. The first one would be to tell a story about the process of conducting mathematical research, rather than just describe the outcome of such research; this is an important aspect of the subject which is given almost no attention in most mathematical writing, and it would be good to be able to capture some sense of this process while memories are still relatively fresh. The other would be to draw some tentative conclusions with regards to what the strengths and weaknesses of a Polymath project are, and how appropriate such a format would be for other mathematical problems than bounded gaps between primes. In my opinion, the bounded gaps problem had some fairly unique features that made it particularly amenable to a Polymath project, such as (a) a high level of interest amongst the mathematical community in the problem; (b) a very focused objective (“improve !”), which naturally provided an obvious metric to measure progress; (c) the modular nature of the project, which allowed for people to focus on one aspect of the problem only, and still make contributions to the final goal; and (d) a very reasonable level of ambition (for instance, we did not attempt to prove the twin prime conjecture, which in my opinion would make a terrible Polymath project at our current level of mathematical technology). This is not an exhaustive list of helpful features of the problem; I would welcome other diagnoses of the project by other participants.
With these two objectives in mind, I propose a format for the retrospective article consisting of a brief introduction to the polymath concept in general and the polymath8 project in particular, followed by a collection of essentially independent contributions by different participants on their own experiences and thoughts. Finally we could have a conclusion section in which we make some general remarks on the polymath project (such as the remarks above). I’ve started a dropbox subfolder for this article (currently in a very skeletal outline form only), and will begin writing a section on my own experiences; other participants are of course encouraged to add their own sections (it is probably best to create separate files for these, and then input them into the main file retrospective.tex, to reduce edit conflicts). If there are participants who wish to contribute but do not currently have access to the Dropbox folder, please email me and I will try to have you added (or else you can supply your thoughts by email, or in the comments to this post; we may have a section for shorter miscellaneous comments from more casual participants, for people who don’t wish to write a lengthy essay on the subject).
As for deadlines, the EMS Newsletter would like a submitted article by mid-April in order to make the June issue, but in the worst case, it will just be held over until the issue after that.
This is the seventh thread for the Polymath8b project to obtain new bounds for the quantity
either for small values of (in particular ) or asymptotically as . The previous thread may be found here. The currently best known bounds on can be found at the wiki page.
The current focus is on improving the upper bound on under the assumption of the generalised Elliott-Halberstam conjecture (GEH) from to . Very recently, we have been able to exploit GEH more fully, leading to a promising new expansion of the sieve support region. The problem now reduces to the following:
Problem 1 Does there exist a (not necessarily convex) polytope with quantities , and a non-trivial square-integrable function supported on such that
- when ;
- when ;
- when ;
and such that we have the inequality
An affirmative answer to this question will imply on GEH. We are “within two percent” of this claim; we cannot quite reach yet, but have got as far as . However, we have not yet fully optimised in the above problem. In particular, the simplex
is now available, and should lead to some noticeable improvement in the numerology.
There is also a very slim chance that the twin prime conjecture is now provable on GEH. It would require an affirmative solution to the following problem:
Problem 2 Does there exist a (not necessarily convex) polytope with quantities , and a non-trivial square-integrable function supported on such that
- when ;
- when ;
and such that we have the inequality
We suspect that the answer to this question is negative, but have not formally ruled it out yet.
For the rest of this post, I will justify why positive answers to these sorts of variational problems are sufficient to get bounds on (or more generally ).