Many problems in non-multiplicative prime number theory can be recast as sieving problems. Consider for instance the problem of counting the number of pairs of twin primes contained in for some large ; note that the claim that for arbitrarily large is equivalent to the twin prime conjecture. One can obtain this count by any of the following variants of the sieve of Eratosthenes:
- Let be the set of natural numbers in . For each prime , let be the union of the residue classes and . Then is the cardinality of the sifted set .
- Let be the set of primes in . For each prime , let be the residue class . Then is the cardinality of the sifted set .
- Let be the set of primes in . For each prime , let be the residue class . Then is the cardinality of the sifted set .
- Let be the set . For each prime , let be the residue class . Then is the cardinality of the sifted set .
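As a concrete illustration of the first formulation, here is a short Python sketch (the function names are ad hoc and the code is purely illustrative): it sifts the integers up to x by the classes 0 and -2 modulo every prime up to sqrt(x+2), and the survivors exceeding sqrt(x+2) are exactly the lower members n of twin prime pairs (n, n+2).

```python
def primes_up_to(m):
    """Sieve of Eratosthenes: all primes up to m."""
    flags = [True] * (m + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(m**0.5) + 1):
        if flags[i]:
            for j in range(i * i, m + 1, i):
                flags[j] = False
    return [i for i, keep in enumerate(flags) if keep]

def twin_sifted_set(x):
    """Sift {1,...,x} by the classes 0 mod p and -2 mod p for each prime
    p up to sqrt(x+2); any survivor n > sqrt(x+2) has n and n+2 both prime."""
    sifting_primes = primes_up_to(int((x + 2) ** 0.5))
    return [n for n in range(1, x + 1)
            if all(n % p != 0 and (n + 2) % p != 0 for p in sifting_primes)]
```

For x = 100 this produces [11, 17, 29, 41, 59, 71], matching the six twin prime pairs (11,13), (17,19), (29,31), (41,43), (59,61), (71,73) whose first member lies between sqrt(102) and 100.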
Exercise 1 Develop similar sifting formulations of the other three Landau problems.
In view of these sieving interpretations of number-theoretic problems, it becomes natural to try to estimate the size of sifted sets for various finite sets of integers, and subsets of integers indexed by primes dividing some squarefree natural number (which, in the above examples, would be the product of all primes up to ). As we see in the above examples, the sets in applications are typically the union of one or more residue classes modulo , but we will work at a more abstract level of generality here by treating as more or less arbitrary sets of integers, without caring too much about the arithmetic structure of such sets.
It turns out to be conceptually more natural to replace sets by functions, and to consider the more general task of estimating sifted sums
for some finitely supported sequence of non-negative numbers; the previous combinatorial sifting problem then corresponds to the indicator function case . (One could also use other index sets here than the integers if desired; for much of sieve theory the index set and its subsets are treated as abstract sets, so the exact arithmetic structure of these sets is not of primary importance.)
Continuing with twin primes as a running example, we thus have the following sample sieving problem:
where , is the product of all the primes strictly less than (we omit itself for minor technical reasons), and is the union of the residue classes . Obtain upper and lower bounds on which are as strong as possible in the asymptotic regime where goes to infinity and the sifting level grows with (ideally we would like to grow as fast as ).
From the preceding discussion we know that the number of twin prime pairs in is equal to , if is not a perfect square; one also easily sees that the number of twin prime pairs in is at least , again if is not a perfect square. Thus we see that a sufficiently good answer to Problem 2 would resolve the twin prime conjecture, particularly if we can get the sifting level to be as large as .
We return now to the general problem of estimating (1). We may expand
where (with the convention that ). We thus arrive at the Legendre sieve identity
Specialising to the case of an indicator function , we recover the inclusion-exclusion formula
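In the model case where each sifted class is 0 mod p, the Legendre identity specialises to counting integers coprime to a given set of primes, and can be checked directly. The following Python sketch (illustrative only, not from the text) sums mu(d) * floor(N/d) over the squarefree products d of the sifting primes:

```python
from itertools import combinations

def legendre_count(N, primes):
    """Count n in {1,...,N} divisible by none of the given primes, via the
    Legendre/inclusion-exclusion identity: sum over squarefree products d
    of the sifting primes of mu(d) * floor(N/d)."""
    total = 0
    for k in range(len(primes) + 1):
        for subset in combinations(primes, k):
            d = 1
            for p in subset:
                d *= p
            total += (-1) ** k * (N // d)  # mu(d) = (-1)^k for k prime factors
    return total
```

For instance `legendre_count(30, [2, 3, 5])` returns 8, matching the eight integers up to 30 that are coprime to 30.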
Such exact sieving formulae are already satisfactory for controlling sifted sets or sifted sums when the amount of sieving is relatively small compared to the size of . For instance, let us return to the running example in Problem 2 for some . Observe that each in this example consists of residue classes modulo , where is defined to equal when and when is odd. By the Chinese remainder theorem, this implies that for each , consists of residue classes modulo . Using the basic bound
Also, the number of divisors of is at most . From the Legendre sieve (3), we thus conclude that
We can factorise the main term to obtain
coming from the equidistribution of residues principle (Section 3 of Supplement 4), bearing in mind (from the modified Cramér model, see Section 1 of Supplement 4) that we expect this heuristic to become inaccurate when becomes very large. We can simplify the right-hand side of (7) by recalling the twin prime constant
(see equation (7) from Supplement 4); note that
so from Mertens’ third theorem (Theorem 42 from Notes 1) one has
when with . This is somewhat encouraging for the purposes of getting a sufficiently good answer to Problem 2 to resolve the twin prime conjecture, but note that is currently far too small: one needs to get as large as before one is counting twin primes, and currently can only get as large as .
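As a quick numerical sanity check on the twin prime constant (the code is an ad hoc illustration, not taken from the notes), one can compute the partial product of (1 - 1/(p-1)^2) over odd primes up to a cutoff and compare with the known value 0.66016...:

```python
def primes_up_to(m):
    """Sieve of Eratosthenes: all primes up to m."""
    flags = [True] * (m + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(m**0.5) + 1):
        if flags[i]:
            for j in range(i * i, m + 1, i):
                flags[j] = False
    return [i for i, keep in enumerate(flags) if keep]

def twin_prime_constant_partial(z):
    """Partial product over odd primes p <= z of (1 - 1/(p-1)^2),
    which converges to the twin prime constant as z -> infinity."""
    product = 1.0
    for p in primes_up_to(z):
        if p > 2:
            product *= 1.0 - 1.0 / (p - 1) ** 2
    return product
```

Already at z = 10^5 the partial product agrees with 0.6601618... to several decimal places, reflecting the rapid convergence of the product.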
The problem is that the number of terms in the Legendre sieve (3) basically grows exponentially in , and so the error terms in (4) accumulate to an unacceptable extent once is significantly larger than . An alternative way to phrase this problem is that the estimate (4) is only expected to be truly useful in the regime ; on the other hand, the moduli appearing in (3) can be as large as , which grows exponentially in by the prime number theorem.
To resolve this problem, it is thus natural to try to truncate the Legendre sieve, in such a way that one only uses information about the sums for a relatively small number of divisors of , such as those which are below a certain threshold . This leads to the following general sieving problem:
Problem 3 (General sieving problem) Let be a squarefree natural number, and let be a set of divisors of . For each prime dividing , let be a set of integers, and define for all (with the convention that ). Suppose that is an (unknown) finitely supported sequence of non-negative reals, whose sums
are known for all . What are the best upper and lower bounds one can conclude on the quantity (1)?
Here is a simple example of this type of problem (corresponding to the case , , , , and ):
and give counterexamples to show that these bounds cannot be improved in general, even when is an indicator function sequence.
Problem 3 is an example of a linear programming problem. By using linear programming duality (as encapsulated by results such as the Hahn-Banach theorem, the separating hyperplane theorem, or the Farkas lemma), we can rephrase the above problem in terms of upper and lower bound sieves:
Theorem 5 (Dual sieve problem) Let be as in Problem 3. We assume that Problem 3 is feasible, in the sense that there exists at least one finitely supported sequence of non-negative reals obeying the constraints in that problem. Define a (normalised) upper bound sieve to be a function of the form
for some coefficients , and obeying the pointwise upper bound
for all . Thus for instance and are (trivially) upper bound sieves and lower bound sieves respectively.
- (i) The supremal value of the quantity (1), subject to the constraints in Problem 3, is equal to the infimal value of the quantity , as ranges over all upper bound sieves.
- (ii) The infimal value of the quantity (1), subject to the constraints in Problem 3, is equal to the supremal value of the quantity , as ranges over all lower bound sieves.
Proof: We prove part (i) only, and leave part (ii) as an exercise. Let be the supremal value of the quantity (1) given the constraints in Problem 3, and let be the infimal value of . We need to show that .
We first establish the easy inequality . If the sequence obeys the constraints in Problem 3, and is an upper bound sieve, then
and hence (by the non-negativity of and )
taking suprema in and infima in we conclude that .
Now suppose for contradiction that , thus for some real number . We will argue using the hyperplane separation theorem; one can also proceed using one of the other duality results mentioned above. (See this previous blog post for some discussion of the connections between these various forms of linear duality.) Consider the affine functional
on the vector space of finitely supported sequences of reals. On the one hand, since , this functional is positive for every sequence obeying the constraints in Problem 3. Next, let be the space of affine functionals of the form
for some real numbers , some non-negative function which is a finite linear combination of the for , and some non-negative . This is a closed convex cone in a finite-dimensional vector space ; note also that lies in . Suppose first that , thus we have a representation of the form
for any finitely supported sequence . Comparing coefficients, we conclude that
for any (i.e., is an upper bound sieve), and also
and thus , a contradiction. Thus lies outside of . But then by the hyperplane separation theorem, we can find an affine functional on that is non-negative on and negative on . By duality, such an affine functional takes the form for some finitely supported sequence and (indeed, can be supported on a finite set consisting of a single representative for each atom of the finite -algebra generated by the ). Since is non-negative on the cone , we see (on testing against multiples of the functionals or ) that the and are non-negative, and that for all ; thus is feasible for Problem 3. Since is negative on , we see that
and thus , giving the desired contradiction.
Exercise 6 Prove part (ii) of the above theorem.
Exercise 7 Show that the infima and suprema in the above theorem are actually attained (so one can replace “infimal” and “supremal” by “minimal” and “maximal” if desired).
Exercise 8 What are the optimal upper and lower bound sieves for Exercise 4?
In the case when consists of all the divisors of , we see that the Legendre sieve is both the optimal upper bound sieve and the optimal lower bound sieve, regardless of what the quantities are. However, in most cases of interest, will only be some strict subset of the divisors of , and there will be a gap between the optimal upper and lower bounds.
Observe that a sequence of real numbers will form an upper bound sieve if one has the inequalities
for all ; we will refer to such sequences as upper bound sieve coefficients. (Conversely, if the sets are in “general position” in the sense that every set of the form for is non-empty, we see that every upper bound sieve arises from a sequence of upper bound sieve coefficients.) Similarly, a sequence of real numbers will form a lower bound sieve if one has the inequalities
for all with ; we will refer to such sequences as lower bound sieve coefficients.
where is the number of prime factors of , is a sequence of upper bound sieve coefficients for even , and a sequence of lower bound sieve coefficients for odd . Deduce the Bonferroni inequalities
when is odd, whenever one is in the situation of Problem 3 (and contains all with ). The resulting upper and lower bound sieves are sometimes known as Brun pure sieves. The Legendre sieve can be viewed as the limiting case when .
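The Bonferroni inequalities are easy to verify numerically in the model problem of sifting {1,...,N} by the classes 0 mod p. The following sketch (ad hoc code, not from the text) truncates the Legendre sum to squarefree d with at most r prime factors, giving an upper bound for even r and a lower bound for odd r:

```python
from itertools import combinations

def brun_truncated(N, primes, r):
    """Truncated Legendre sum: sum of mu(d) * floor(N/d) over squarefree d
    built from at most r of the given primes.  By the Bonferroni
    inequalities this upper bounds the sifted count for even r and
    lower bounds it for odd r."""
    total = 0
    for k in range(r + 1):
        for subset in combinations(primes, k):
            d = 1
            for p in subset:
                d *= p
            total += (-1) ** k * (N // d)
    return total
```

Taking r equal to the number of sifting primes recovers the exact Legendre count, in accordance with the remark that the Legendre sieve is the limiting case.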
In many applications the sums in (9) take the form
for some quantity independent of , some multiplicative function with , and some remainder term whose effect is expected to be negligible on average if is restricted to be small, e.g. less than a threshold ; note for instance that (5) is of this form if for some fixed (note from the divisor bound, Lemma 23 of Notes 1, that if ). We are thus led to the following idealisation of the sieving problem, in which the remainder terms are ignored:
Thus, for instance, the trivial upper bound sieve and the trivial lower bound sieve show that (14) can equal and (15) can equal . Of course, one hopes to do better than these trivial bounds in many situations; usually one can improve the upper bound quite substantially, but improving the lower bound is significantly more difficult, particularly when is large compared with .
If the remainder terms in (13) are indeed negligible on average for , then one expects the upper and lower bounds in Problem 3 to essentially be the optimal bounds in (14) and (15) respectively, multiplied by the normalisation factor . Thus Problem 10 serves as a good model problem for Problem 3, in which all the arithmetic content of the original sieving problem has been abstracted into two parameters and a multiplicative function . In many applications, will be approximately on the average for some fixed , known as the sieve dimension; for instance, in the twin prime sieving problem discussed above, the sieve dimension is . The larger one makes the level of distribution compared to , the more choices one has for the upper and lower bound sieves; it is thus of interest to obtain equidistribution estimates such as (13) for as large as possible. When the sequence is of arithmetic origin (for instance, if it is the von Mangoldt function ), then estimates such as the Bombieri-Vinogradov theorem, Theorem 17 from Notes 3, turn out to be particularly useful in this regard; in other contexts, the required equidistribution estimates might come from other sources, such as homogeneous dynamics, or the theory of expander graphs (the latter arises in the recent theory of the affine sieve, discussed in this previous blog post). However, the sieve-theoretic tools developed in this post are not particularly sensitive to how a certain level of distribution is attained, and are generally content to use sieve axioms such as (13) as “black boxes”.
In some applications one needs to modify Problem 10 in various technical ways (e.g. in altering the product , the set , or the definition of an upper or lower sieve coefficient sequence), but to simplify the exposition we will focus on the above problem without such alterations.
Exercise 11 Let be as in Problem 10, and set .
- (i) Show that the quantity (14) is always at least when is a sequence of upper bound sieve coefficients. Similarly, show that the quantity (15) is always at most when is a sequence of lower bound sieve coefficients. (Hint: compute the expected value of when is a random factor of chosen according to a certain probability distribution depending on .)
- (ii) Show that (14) and (15) can both attain the value of when . (Hint: translate the Legendre sieve to this setting.)
The problem of finding good sequences of upper and lower bound sieve coefficients in order to solve problems such as Problem 10 is one of the core objectives of sieve theory, and has been intensively studied. This is more of an optimisation problem than a genuinely number-theoretic problem; however, the optimisation problem is sufficiently complicated that it has not been solved exactly or even asymptotically, except in a few special cases. (It can be reduced to an optimisation problem involving multilinear integrals of certain unknown functions of several variables, but this problem is rather difficult to analyse further; see these lecture notes of Selberg for further discussion.) But while we do not yet have a definitive solution to this problem in general, we do have a number of good general-purpose upper and lower bound sieve coefficients that give fairly good values for (14), (15), often coming within a constant factor of the idealised value , and which work well for sifting levels as large as a small power of the level of distribution . Unfortunately, we also know of an important limitation to the sieve, known as the parity problem, that prevents one from taking as large as while still obtaining non-trivial lower bounds; as a consequence, sieve theory is not able, on its own, to sift out primes for such purposes as establishing the twin prime conjecture. However, it is still possible to use these sieves, in conjunction with additional tools, to produce various types of primes or prime patterns in some cases; examples of this include the theorem of Ben Green and myself in which an upper bound sieve is used to demonstrate the existence of primes in arbitrarily long arithmetic progressions, or the more recent theorem of Zhang in which (among other things) an upper bound sieve was used to demonstrate the existence of infinitely many pairs of primes whose difference is bounded.
In such arguments, the upper bound sieve was used not so much to count the primes or prime patterns directly, but to serve instead as a sort of “container” to efficiently envelop such prime patterns; when used in such a manner, the upper bound sieves are sometimes known as enveloping sieves. If the original sequence was supported on primes, then the enveloping sieve can be viewed as a “smoothed out indicator function” that is concentrated on almost primes, which in this context refers to numbers with no small prime factors.
In a somewhat different direction, it can be possible in some cases to break the parity barrier by assuming additional equidistribution axioms on the sequence beyond just (13), in particular controlling certain bilinear sums involving rather than just linear sums of the . This approach was in particular pursued by Friedlander and Iwaniec, leading to their theorem that there are infinitely many primes of the form .
The study of sieves is an immense topic; see for instance the recent 527-page text by Friedlander and Iwaniec. We will limit attention to two sieves which give good general-purpose results, if not necessarily the most optimal ones:
- (i) The beta sieve (or Rosser-Iwaniec sieve), which is a modification of the classical combinatorial sieve of Brun. (A collection of sieve coefficients is called combinatorial if its coefficients lie in .) The beta sieve is a family of upper and lower bound combinatorial sieves, and is particularly useful for efficiently sieving out all primes up to a parameter from a set of integers of size , in the regime where is moderately large, leading to what is sometimes known as the fundamental lemma of sieve theory.
- (ii) The Selberg upper bound sieve, which is a general-purpose sieve that can serve both as an upper bound sieve for classical sieving problems, as well as an enveloping sieve for sets such as the primes. (One can also convert the Selberg upper bound sieve into a lower bound sieve in a number of ways, but we will only touch upon this briefly.) A key advantage of the Selberg sieve is that, due to the “quadratic” nature of the sieve, the difficult optimisation problem in Problem 10 is replaced with a much more tractable quadratic optimisation problem, which can often be solved for exactly.
Remark 12 It is possible to compose two sieves together, for instance by using the observation that the product of two upper bound sieves is again an upper bound sieve, or that the product of an upper bound sieve and a lower bound sieve is a lower bound sieve. Such a composition of sieves is useful in some applications, for instance if one wants to apply the fundamental lemma as a “preliminary sieve” to sieve out small primes, but then use a more precise sieve like the Selberg sieve to sieve out medium primes. We will see an example of this in later notes, when we discuss the linear beta-sieve.
We will also briefly present the (arithmetic) large sieve, which gives a rather different approach to Problem 3 in the case that each consists of some number (typically a large number) of residue classes modulo , and is powered by the (analytic) large sieve inequality of the preceding section. As an application of these methods, we will utilise the Selberg upper bound sieve as an enveloping sieve to establish Zhang's theorem on bounded gaps between primes. Finally, we give an informal discussion of the parity barrier which gives some heuristic limitations on what sieve theory is able to accomplish with regards to counting prime patterns such as twin primes.
These notes are only an introduction to the vast topic of sieve theory; more detailed discussion can be found in the Friedlander-Iwaniec text, in these lecture notes of Selberg, and in many further texts.
This post is a continuation of the previous post on sieve theory, which is an ongoing part of the Polymath8 project. As the previous post was getting somewhat full, we are rolling the thread over to the current post.
In this post we will record a new truncation of the elementary Selberg sieve discussed in this previous post (and also analysed in the context of bounded prime gaps by Graham-Goldston-Pintz-Yildirim and Motohashi-Pintz) that was recently worked out by Janos Pintz, who has kindly given permission to share this new idea with the Polymath8 project. This new sieve decouples the parameter that was present in our previous analysis of Zhang's argument into two parameters, a quantity that used to measure smoothness in the modulus, but now measures a weaker notion of "dense divisibility" which is what is really needed in the Elliott-Halberstam type estimates, and a second quantity which still measures smoothness but is allowed to be substantially larger than . Through this decoupling, it appears that the type losses in the sieve-theoretic part of the argument can be almost completely eliminated (they basically decay exponentially in and have only mild dependence on , whereas the Elliott-Halberstam analysis is sensitive only to , allowing one to set far smaller than previously by keeping large). This should lead to noticeable gains in the quantity in our analysis.
To describe this new truncation we need to review some notation. As in all previous posts (in particular, the first post in this series), we have an asymptotic parameter going off to infinity, and all quantities here are implicitly understood to be allowed to depend on (or to range in a set that depends on ) unless they are explicitly declared to be fixed. We use the usual asymptotic notation relative to this parameter . To be able to ignore local factors (such as the singular series ), we also use the “-trick” (as discussed in the first post in this series): we introduce a parameter that grows very slowly with , and set .
For any fixed natural number , define an admissible -tuple to be a fixed tuple of distinct integers which avoids at least one residue class modulo for each prime . Our objective is to obtain the following conjecture for as small a value of the parameter as possible:
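Admissibility is a finite check: for primes p > k, the k distinct shifts occupy at most k < p residue classes and so automatically miss one, so only primes p <= k need to be tested. A small illustrative Python sketch (names ad hoc):

```python
def is_admissible(H):
    """Check that the tuple H of distinct integers omits at least one
    residue class modulo p for every prime p <= len(H).  Primes p > len(H)
    need no check, since len(H) residues cannot cover all p classes."""
    k = len(H)
    for p in range(2, k + 1):
        if all(p % q for q in range(2, int(p**0.5) + 1)):  # p is prime
            if len({h % p for h in H}) == p:
                return False  # H covers every class mod p
    return True
```

For example (0, 2) and (0, 2, 6) are admissible, while (0, 2, 4) is not, since it covers all three residue classes modulo 3.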
Conjecture 1 () Let be a fixed admissible -tuple. Then there exist infinitely many translates of that contain at least two primes.
The twin prime conjecture asserts that holds for as small as , but currently we are only able to establish this result for (see this comment). However, with the new truncated sieve of Pintz described in this post, we expect to be able to lower this threshold somewhat.
In previous posts, we deduced from a technical variant of the Elliott-Halberstam conjecture for certain choices of parameters , . We will use the following formulation of :
and is the set of congruence classes
and is the polynomial
The conjecture is currently known to hold whenever (see this comment and this confirmation). Actually, we can prove a stronger result than in this regime in a couple ways. Firstly, the congruence classes can be replaced by a more general system of congruence classes obeying a certain controlled multiplicity axiom; see this post. Secondly, and more importantly for this post, the requirement that the modulus lies in can be relaxed; see below.
To connect the two conjectures, the previously best known implication was the following (see Theorem 2 from this post):
where is the first positive zero of the Bessel function , and are the quantities
Then implies .
Actually there have been some slight improvements to the quantities ; see the comments to this previous post. However, the main error remains roughly of the order , which limits one from taking too small.
To improve beyond this, the first basic observation is that the smoothness condition , which implies that all prime divisors of are less than , can be relaxed in the proof of . Indeed, if one inspects the proof of this proposition (described in these three previous posts), one sees that the key property of needed is not so much the smoothness, but a weaker condition which we will call (for lack of a better term) dense divisibility:
Definition 4 Let . A positive integer is said to be -densely divisible if for every , one can find a factor of in the interval . We let denote the set of positive integers that are -densely divisible.
Certainly every integer which is -smooth (i.e. has all prime factors at most ) is also -densely divisible, as can be seen from the greedy algorithm; but the property of being -densely divisible is strictly weaker than -smoothness, which is a fact we shall exploit shortly.
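To see the definition in action, note that a failure of the condition can only occur for a threshold just below a divisor of n, so y-dense divisibility is equivalent to requiring that consecutive divisors of n never jump by more than a factor of y. A brute-force Python check (illustrative only):

```python
def is_densely_divisible(n, y):
    """Check y-dense divisibility of n: every interval [R/y, R] with
    1 <= R <= n contains a divisor of n.  Equivalently (failures can only
    occur just below a divisor), consecutive divisors of n never jump by
    more than a factor of y."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return all(b <= y * a for a, b in zip(divisors, divisors[1:]))
```

For instance 60 is 2-densely divisible; 202 = 2 * 101 is not, since its divisors jump from 2 to 101; and 2^10 * 101 is 2-densely divisible despite having the large prime factor 101, a simple instance of dense divisibility without smoothness.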
We now define to be the same statement as , but with the condition replaced by the weaker condition . The arguments in previous posts then also establish whenever .
The main result of this post is then the following implication, essentially due to Pintz:
Then implies .
This theorem has rather messy constants, but we can isolate some special cases which are a bit easier to compute with. Setting , we see that vanishes (and the argument below will show that we only need rather than ), and we obtain the following slight improvement of Theorem 3:
Then implies .
This is a little better than Theorem 3, because the error has size about , which compares favorably with the error in Theorem 3 which is about . This should already give a “cheap” improvement to our current threshold , though it will fall short of what one would get if one fully optimised over all parameters in the above theorem.
Returning to the full strength of Theorem 5, let us obtain a crude upper bound for that is a little simpler to understand. Extending the summation to infinity and using the Taylor series for the exponential, we have
We can crudely bound
and then optimise in to obtain
Because of the factor in the integrand for and , we expect the ratio to be of the order of , although one will need some theoretical or numerical estimates on Bessel functions to make this heuristic more precise. Setting to be something like , we get a good bound here as long as , which at current values of is a mild condition.
Pintz’s argument uses the elementary Selberg sieve, discussed in this previous post, but with a more efficient estimation of the quantity , in particular avoiding the truncation to moduli between and which was the main source of inefficiency in that previous post. The basic idea is to “linearise” the effect of the truncation of the sieve, so that this contribution can be dealt with by the union bound (basically, bounding the contribution of each large prime one at a time). This mostly avoids the more complicated combinatorial analysis that arose in the analytic Selberg sieve, as seen in this previous post.
This post is a continuation of the previous post on sieve theory, which is an ongoing part of the Polymath8 project to improve the various parameters in Zhang’s proof that bounded gaps between primes occur infinitely often. Given that the comments on that page are getting quite lengthy, this is also a good opportunity to “roll over” that thread.
We will continue the notation from the previous post, including the concept of an admissible tuple, the use of an asymptotic parameter going to infinity, and a quantity depending on that goes to infinity sufficiently slowly with , and (the -trick).
The objective of this portion of the Polymath8 project is to make as efficient as possible the connection between two types of results, which we call and . Let us first state , which has an integer parameter :
Conjecture 1 () Let be a fixed admissible -tuple. Then there are infinitely many translates of which contain at least two primes.
Zhang was the first to prove a result of this type with . Since then the value of has been lowered substantially; at the time of writing, the current record is .
There are two basic ways known currently to attain this conjecture. The first is to use the Elliott-Halberstam conjecture for some :
Conjecture 2 () One has
for all fixed . Here we use the abbreviation for .
Here of course is the von Mangoldt function and the Euler totient function. It is conjectured that holds for all , but this is currently only known for , an important result known as the Bombieri-Vinogradov theorem.
In a breakthrough paper, Goldston, Pintz, and Yildirim established an implication of the form
for any , where depends on . This deduction was very recently optimised by Farkas, Pintz, and Revesz and also independently in the comments to the previous blog post, leading to the following implication:
where is the first positive zero of the Bessel function . Then implies .
Implications of the form of Theorem 3 were modified by Motohashi and Pintz, whose modification replaces by an easier conjecture for some and , at the cost of degrading the sufficient condition (2) slightly. In our notation, this conjecture takes the following form for each choice of parameters :
and is the set of congruence classes
and is the polynomial
This is a weakened version of the Elliott-Halberstam conjecture:
In particular, since is conjecturally true for all , we conjecture to be true for all and .
then the hypothesis (applied to and and then subtracting) tells us that
for any fixed . From the Chinese remainder theorem and the Siegel-Walfisz theorem we have
for any coprime to (and in particular for ). Since , where is the number of prime divisors of , we can thus bound the left-hand side of (3) by
The contribution of the second term is by standard estimates (see Proposition 8 below). Using the very crude bound
and standard estimates we also have
and the claim now follows from the Cauchy-Schwarz inequality.
In practice, the conjecture is easier to prove than due to the restriction of the residue classes to , and also the restriction of the modulus to -smooth numbers. Zhang proved for any . More recently, our Polymath8 group has analysed Zhang’s argument (using in part a corrected version of the analysis of a recent preprint of Pintz) to obtain whenever are such that
The works of Motohashi and Pintz, and later of Zhang, implicitly describe arguments that allow one to deduce from provided that is sufficiently large depending on . The best implication of this sort that we have been able to verify thus far is the following result, established in the previous post:
where is the quantity
Then implies .
This complicated version of is roughly of size . It is unlikely to be optimal; the work of Motohashi-Pintz and Pintz suggests that it can essentially be improved to , but currently we are unable to verify this claim. One of the aims of this post is to encourage further discussion as to how to improve the term in results such as Theorem 6.
We remark that as (5) is an open condition, it is unaffected by infinitesimal modifications to , and so we do not ascribe much importance to such modifications (e.g. replacing by for some arbitrarily small ).
The known deductions of from claims such as or rely on the following elementary observation of Goldston, Pintz, and Yildirim (essentially a weighted pigeonhole principle), which we have placed in “-tricked form”:
Lemma 7 (Criterion for DHL) Let . Suppose that for each fixed admissible -tuple and each congruence class such that is coprime to for all , one can find a non-negative weight function , fixed quantities , a quantity , and a fixed positive power of such that one has the upper bound
holds. Then holds. Here is defined to equal when is prime and otherwise.
By (8), this expression is positive for all sufficiently large . On the other hand, (9) can only be positive if at least one summand is positive, which can only happen when contains at least two primes for some with . Letting we obtain as claimed.
In practice, the quantity (referred to as the sieve level) is a power of such as or , and reflects the strength of the distribution hypothesis or that is available; the quantity will also be a key parameter in the definition of the sieve weight . The factor reflects the order of magnitude of the expected density of in the residue class ; it could be absorbed into the sieve weight by dividing that weight by , but it is convenient to not enforce such a normalisation so as not to clutter up the formulae. In practice, will be some combination of and .
Once one has decided to rely on Lemma 7, the next main task is to select a good weight for which the ratio is as small as possible (and for which the sieve level is as large as possible). To ensure non-negativity, we use the Selberg sieve
where takes the form
for some weights vanishing for that are to be chosen, where is an interval and is the polynomial . If the distribution hypothesis is , one takes and ; if the distribution hypothesis is instead , one takes and .
is used for some additional parameter to be optimised over. More generally, one can take
for some suitable (in particular, sufficiently smooth) cutoff function . We will refer to this choice of sieve weights as the “analytic Selberg sieve”; this is the choice used in the analysis in the previous post.
for a sufficiently smooth function , where
for is a -variant of the Euler totient function, and
for is a -variant of the function . (The derivative on the cutoff is convenient for computations, as will be made clearer later in this post.) This choice of weights may seem somewhat arbitrary, but it arises naturally when considering how to optimise the quadratic form
(which arises naturally in the estimation of in (6)) subject to a fixed value of (which morally is associated to the estimation of in (7)); this is discussed in any sieve theory text as part of the general theory of the Selberg sieve, e.g. Friedlander-Iwaniec.
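To make the shape of these sieve weights concrete, here is a small numerical sketch (with an arbitrary toy level R = 50 and the simple cutoff F(x) = x, rather than any normalisation used in the post) of a Selberg sieve weight. It exhibits the two features that make the Selberg sieve useful: the weight is a square and hence automatically non-negative, and it equals 1 on integers with no divisor in (1, R], so it majorises the indicator of numbers free of small prime factors.

```python
from math import log

def prime_factors(n: int) -> list:
    """Distinct prime factors of n."""
    fs, d = [], 2
    while d * d <= n:
        if n % d == 0:
            fs.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

def squarefree_divisors(n: int) -> list:
    """All squarefree divisors of n (products of distinct prime factors)."""
    divs = [1]
    for p in prime_factors(n):
        divs += [d * p for d in divs]
    return divs

R = 50.0  # toy sieve level, chosen arbitrarily for illustration

def lam(d: int) -> float:
    """Toy Selberg weight lambda_d = mu(d) * F(log(R/d)/log R) with F(x) = x.
    Note lam(1) = 1, which forces nu below to equal 1 on unsifted integers."""
    if d > R:
        return 0.0
    mu = (-1) ** len(prime_factors(d))  # Mobius function on squarefree d
    return mu * log(R / d) / log(R)

def nu(n: int) -> float:
    """Selberg sieve weight: the square of a divisor sum, hence >= 0."""
    return sum(lam(d) for d in squarefree_divisors(n)) ** 2

# nu is non-negative everywhere, and equals 1 at primes exceeding R
# (the only squarefree divisor d <= R of such a prime is d = 1):
print(nu(101), nu(2 * 3 * 5 * 7))
```

The optimisation problem described above amounts to choosing the cutoff (here crudely set to F(x) = x) so as to minimise the total mass of nu subject to this majorisation property.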
The use of the elementary Selberg sieve for the bounded prime gaps problem was studied by Motohashi and Pintz. Their arguments give an alternate derivation of from for sufficiently large, although unfortunately we were not able to confirm some of their calculations regarding the precise dependence of on , and in particular we have not yet been able to improve upon the specific criterion in Theorem 6 using the elementary sieve. However it is quite plausible that such improvements could become available with additional arguments.
Below the fold we describe how the elementary Selberg sieve can be used to reprove Theorem 3, and discuss how it could potentially be used to improve upon Theorem 6. (The elementary Selberg sieve and the analytic Selberg sieve are in any event closely related; see the appendix of this paper of mine with Ben Green for some further discussion.) For the purposes of polymath8, either developing the elementary Selberg sieve or continuing the analysis of the analytic Selberg sieve from the previous post would be a relevant topic of conversation in the comments to this post.
In this, the final lecture notes of this course, we discuss one of the motivating applications of the theory developed thus far, namely to count solutions to linear equations in primes (or in dense subsets of primes ). Unfortunately, the most famous linear equations in primes (the twin prime equation and the even Goldbach equation ) remain out of reach of this technology, because the relevant affine linear forms involved are commensurate, and thus have infinite complexity with respect to the Gowers norms; but most other systems of equations, in particular that of arithmetic progressions for (or equivalently, for ), as well as the odd Goldbach equation , are tractable.
To illustrate the main ideas, we will focus on the following result of Green:
Theorem 1 (Roth’s theorem in the primes) Let be a subset of the primes whose upper relative density is positive. Then contains infinitely many arithmetic progressions of length three.
This should be compared with Roth’s theorem in the integers (Notes 2), which is the same statement but with the primes replaced by the integers (or natural numbers ). Indeed, Roth’s theorem for the primes is proven by transferring Roth’s theorem for the integers to the prime setting; the latter theorem is used as a “black box”. The key difficulty in performing this transference is that the primes have zero density inside the integers; indeed, from the prime number theorem we have $\pi(N) \sim N/\log N = o(N)$.
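The zero-density claim is easy to check numerically: the following quick sketch sieves out the prime counting function pi(N) and compares it with N/log N, showing the ratio pi(N) log N / N creeping towards 1 (and, in particular, pi(N)/N tending to 0).

```python
from math import log

def prime_count(N: int) -> int:
    """pi(N) via the sieve of Eratosthenes."""
    sieve = bytearray([1]) * (N + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, N + 1, p)))
    return sum(sieve)

# pi(N) * log(N) / N approaches 1 only very slowly, while pi(N)/N -> 0:
for N in (10 ** 4, 10 ** 5, 10 ** 6):
    print(N, prime_count(N), round(prime_count(N) * log(N) / N, 4))
```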
There are a number of generalisations of this transference technique. In a paper of Green and myself, we extended the above theorem to progressions of longer length (thus transferring Szemerédi’s theorem to the primes). In a series of papers (culminating in a paper to appear shortly) of Green, myself, and also Ziegler, related methods are also used to obtain an asymptotic for the number of solutions in the primes to any system of linear equations of bounded complexity. This latter result uses the full power of higher order Fourier analysis, in particular relying heavily on the inverse conjecture for the Gowers norms; in contrast, Roth’s theorem and Szemerédi’s theorem in the primes are “softer” results that do not need this conjecture.
To transfer results from the integers to the primes, there are three basic steps:
- A general transference principle, that transfers certain types of additive combinatorial results from dense subsets of the integers to dense subsets of a suitably “pseudorandom set” of integers (or more precisely, to the integers weighted by a suitably “pseudorandom measure”);
- An application of sieve theory to show that the primes (or more precisely, an affine modification of the primes) lie inside a suitably pseudorandom set of integers (or more precisely, have significant mass with respect to a suitably pseudorandom measure).
- If one is seeking asymptotics for patterns in the primes, and not simply lower bounds, one also needs to control correlations between the primes (or proxies for the primes, such as the Möbius function) with various objects that arise from higher order Fourier analysis, such as nilsequences.
The first step can be accomplished in a number of ways. For progressions of length three (and more generally, for controlling linear patterns of complexity at most one), transference can be accomplished by Fourier-analytic methods. For more complicated patterns, one can use techniques inspired by ergodic theory; more recently, simplified and more efficient methods based on duality (the Hahn-Banach theorem) have also been used. No number theory is used in this step. (In the case of transference to genuinely random sets, rather than pseudorandom sets, similar ideas appeared earlier in the graph theory setting; see this paper of Kohayakawa, Luczak, and Rodl.)
The second step is accomplished by fairly standard sieve theory methods (e.g. the Selberg sieve, or the slight variants of this sieve used by Goldston and Yildirim). Remarkably, very little of the formidable apparatus of modern analytic number theory is needed for this step; for instance, the only fact about the Riemann zeta function that is truly needed is that it has a simple pole at $s=1$, and no knowledge of L-functions is needed.
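That single fact about the zeta function can itself be verified numerically: approximating zeta(s) for real s > 1 by a partial sum plus an integral tail, one sees (s - 1) zeta(s) tending to 1 as s decreases to 1, consistent with a simple pole at s = 1 of residue 1. A rough sketch (the tail approximation and cutoff are my own crude choices):

```python
def zeta(s: float, N: int = 100_000) -> float:
    """Approximate zeta(s), s > 1, by a partial sum plus the integral tail
    sum_{n >= N} n^{-s} ~ integral_N^infinity x^{-s} dx = N^{1-s}/(s-1)."""
    return sum(n ** -s for n in range(1, N)) + N ** (1 - s) / (s - 1)

# (s - 1) * zeta(s) tends to the residue 1 as s decreases towards the pole:
for s in (2.0, 1.5, 1.1, 1.01):
    print(s, (s - 1) * zeta(s))
```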
The third step does draw more significantly on analytic number theory techniques and results (most notably, the method of Vinogradov to compute oscillatory sums over the primes, and also the Siegel-Walfisz theorem that gives a good error term on the prime number theorem in arithmetic progressions). As these techniques are somewhat orthogonal to the main topic of this course, we shall only touch briefly on this aspect of the transference strategy.
I’ve just uploaded to the arXiv the short note “A remark on primality testing and the binary expansion“, submitted to the Journal of the Australian Mathematical Society. In this note I establish the following result: for any sufficiently large integer n, there exists an n-bit prime p such that the numbers for are all composite. In particular, if one flips any one of the bits in the binary expansion of the prime p, one obtains a composite number. As a consequence, one obtains the (rather plausible) assertion that in order to (deterministically) test whether an n-bit integer is prime or not, one needs (in the worst case) to read all n bits of the prime. (This question was posed to me by my colleague here at UCLA, Yiannis Moschovakis.)
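The bit-flip property described above can be searched for directly at small bit-lengths. The following brute-force sketch is entirely my own toy code (the function names and the choice n = 7 are illustrative, and it checks only the bit-flip consequence stated above, not the full result of the note); it finds the smallest such n-bit prime, if one exists at that bit-length.

```python
def is_prime(m: int) -> bool:
    """Trial division; fine for the tiny search below."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

def bit_flips_composite(p: int, n: int) -> bool:
    """True if flipping any single one of the n bits of p yields a composite."""
    return all(not is_prime(p ^ (1 << j)) for j in range(n))

def find_bit_flip_isolated_prime(n: int):
    """Smallest n-bit prime all of whose single-bit flips are composite,
    or None if no such prime exists at this bit-length."""
    for p in range(1 << (n - 1), 1 << n):
        if is_prime(p) and bit_flips_composite(p, n):
            return p
    return None

# 127 = 1111111 qualifies: its flips 126, 125, 123, 119, 111, 95, 63
# are all composite.
print(find_bit_flip_isolated_prime(7))
```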
Primes p of the form mentioned in the above result are somewhat rare at first; the first such prime is . But in fact, the argument in my note shows that the set of such primes actually has positive relative density inside the set of all primes. (Amusingly, this means that one can apply a theorem of Ben Green and myself and conclude that there are arbitrarily long arithmetic progressions of such primes, although I doubt that there is any particular significance or application to this conclusion.)
The same remark applies to other bases; thus, for instance, there exist infinitely many prime numbers with the property that if one changes any one of the base 10 digits of that number, one obtains a composite number.
(Presumably the first such number can be located by computer search, though I did not attempt to do so.) [Update, Feb 25: see comments.]
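Such a search is straightforward to sketch. The code below is my own illustration; it adopts the convention (an assumption on my part, not something stated in the post) that the changed leading digit must remain nonzero, so that the digit count is preserved. It finds the first prime below a million with the stated property.

```python
LIMIT = 1_000_000

# Sieve of Eratosthenes up to LIMIT for fast primality lookups.
sieve = bytearray([1]) * LIMIT
sieve[0:2] = b"\x00\x00"
for p in range(2, int(LIMIT ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p :: p] = bytearray(len(range(p * p, LIMIT, p)))

def digit_changes_composite(p: int) -> bool:
    """True if changing any single decimal digit of p (keeping the leading
    digit nonzero, so the digit count is unchanged) yields a composite."""
    s = str(p)
    for i, old in enumerate(s):
        for new in "0123456789":
            if new == old or (i == 0 and new == "0"):
                continue
            if sieve[int(s[:i] + new + s[i + 1:])]:
                return False
    return True

# Brute-force search for the first prime below LIMIT with the property.
first = next(p for p in range(2, LIMIT)
             if sieve[p] and digit_changes_composite(p))
print(first)
```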