The twin prime conjecture, still unsolved, asserts that there are infinitely many primes $p$ such that $p+2$ is also prime. A more precise form of this conjecture is (a special case of) the Hardy-Littlewood prime tuples conjecture, which asserts that

$$\sum_{n \leq x} \Lambda(n) \Lambda(n+2) = (2\Pi_2 + o(1))\, x \quad (1)$$

as $x \to \infty$, where $\Lambda$ is the von Mangoldt function and

$$\Pi_2 := \prod_{p > 2} \Big( 1 - \frac{1}{(p-1)^2} \Big) = 0.6601\dots$$

is the twin prime constant. Because $\Lambda$ is almost entirely supported on the primes, it is not difficult to see that (1) implies the twin prime conjecture.
One can give a heuristic justification of the asymptotic (1) (and hence the twin prime conjecture) via sieve theoretic methods. Recall that the von Mangoldt function can be decomposed as a Dirichlet convolution

$$\Lambda(n) = \sum_{d|n} \mu(d) \log \frac{n}{d}$$

where $\mu$ is the Möbius function. Because of this, we can rewrite the left-hand side of (1) as

$$\sum_{d \leq x} \mu(d) \sum_{n \leq x: d|n} \log\frac{n}{d}\, \Lambda(n+2). \quad (2)$$

To compute this double sum, it is thus natural to consider sums such as

$$\sum_{n \leq x: d|n} \log\frac{n}{d}\, \Lambda(n+2)$$

or (to simplify things by removing the logarithm)

$$\sum_{n \leq x: d|n} \Lambda(n+2).$$
The prime number theorem in arithmetic progressions suggests that one has an asymptotic of the form

$$\sum_{n \leq x: d|n} \Lambda(n+2) \approx \frac{g(d)}{d}\, x \quad (3)$$

where $g$ is the multiplicative function with $g(d) = 0$ for $d$ even and

$$g(d) := \frac{d}{\phi(d)}$$

for $d$ odd, where $\phi$ is the Euler totient function. Summing by parts, one then expects

$$\sum_{n \leq x: d|n} \log\frac{n}{d}\, \Lambda(n+2) \approx \frac{g(d)}{d}\, x \log\frac{x}{d}$$

and so we heuristically have

$$\sum_{n \leq x} \Lambda(n)\Lambda(n+2) \approx x \sum_{d \leq x} \frac{\mu(d) g(d)}{d} \log\frac{x}{d}.$$
The Dirichlet series

$$\sum_{d=1}^\infty \frac{\mu(d) g(d)}{d^s}$$

has an Euler product factorisation

$$\sum_{d=1}^\infty \frac{\mu(d) g(d)}{d^s} = \prod_p \Big( 1 - \frac{g(p)}{p^s} \Big)$$

for $\mathrm{Re}(s) > 1$; comparing this with the Euler product factorisation

$$\zeta(s) = \prod_p \Big( 1 - \frac{1}{p^s} \Big)^{-1}$$

for the Riemann zeta function, and recalling that $\zeta$ has a simple pole of residue $1$ at $s = 1$, we see that

$$\sum_{d=1}^\infty \frac{\mu(d) g(d)}{d^s}$$

has a simple zero at $s = 1$ with first derivative

$$\prod_p \frac{1 - g(p)/p}{1 - 1/p} = 2\Pi_2.$$

From this and standard multiplicative number theory manipulations, one can calculate the asymptotic

$$\sum_{d \leq x} \frac{\mu(d) g(d)}{d} \log\frac{x}{d} = 2\Pi_2 + o(1),$$

which concludes the heuristic justification of (1).
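As a quick numerical sanity check of this heuristic (and not part of the argument), one can compare the left-hand side of (1) against $2\Pi_2 x$ for a modest cutoff. The following Python sketch does this with an arbitrary cutoff of $10^6$, computing $\Lambda$ by a simple sieve and truncating the Euler product for $\Pi_2$; convergence is slow, so only rough agreement should be expected.

import math

N = 10**6
lam = [0.0] * (N + 3)            # lam[n] will hold Lambda(n)
is_comp = [False] * (N + 3)
primes = []
for p in range(2, N + 3):
    if not is_comp[p]:
        primes.append(p)
        pk, logp = p, math.log(p)
        while pk <= N + 2:       # Lambda(p^k) = log p on prime powers
            lam[pk] = logp
            pk *= p
        for m in range(p * p, N + 3, p):
            is_comp[m] = True

S = sum(lam[n] * lam[n + 2] for n in range(1, N + 1))

Pi2 = 1.0                        # truncated twin prime constant
for p in primes:
    if p > 2:
        Pi2 *= 1 - 1 / (p - 1) ** 2

print(S / N, 2 * Pi2)            # both should be roughly 1.32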
What prevents us from making the above heuristic argument rigorous, and thus proving (1) and the twin prime conjecture? Note that the variable $d$ in (2) ranges to be as large as $x$. On the other hand, the prime number theorem in arithmetic progressions (3) is not expected to hold for $d$ anywhere that large (for instance, the left-hand side of (3) vanishes as soon as $d$ exceeds $x$). The best unconditional result known of the type (3) is the Siegel-Walfisz theorem, which allows $d$ to be as large as $\log^{O(1)} x$. Even the powerful generalised Riemann hypothesis (GRH) only lets one prove an estimate of the form (3) for $d$ up to about $x^{1/2-\epsilon}$.
However, because of the averaging effect of the summation in $d$ in (2), we don’t need the asymptotic (3) to be true for all $d$ in a particular range; having it true for almost all $d$ in that range would suffice. Here the situation is much better; the celebrated Bombieri-Vinogradov theorem (sometimes known as “GRH on the average”) implies, roughly speaking, that the approximation (3) is valid for almost all $d \leq x^{1/2-\epsilon}$ for any fixed $\epsilon > 0$. While this is not enough to control (2) or (1), the Bombieri-Vinogradov theorem can at least be used to control variants of (1) such as

$$\sum_{n \leq x} \Lambda(n)\, \nu(n+2)$$

for various sieve weights $\nu(n) := \sum_{d|n} \lambda_d$ whose associated divisor function $\nu$ is supposed to approximate the von Mangoldt function $\Lambda$, although that theorem only lets one do this when the weights $\lambda_d$ are supported on the range $d \leq x^{1/2-\epsilon}$. This is still enough to obtain some partial results towards (1); for instance, by selecting weights according to the Selberg sieve, one can use the Bombieri-Vinogradov theorem to establish the upper bound

$$\sum_{n \leq x} \Lambda(n) \Lambda(n+2) \leq (4 + o(1))\, 2\Pi_2\, x \quad (4)$$

which is off from (1) by a factor of about $4$. See for instance this blog post for details.
It has been difficult to improve upon the Bombieri-Vinogradov theorem in its full generality, although there are various improvements to certain restricted versions of the Bombieri-Vinogradov theorem, for instance in the famous work of Zhang on bounded gaps between primes. Nevertheless, it is believed that the Elliott-Halberstam conjecture (EH) holds, which roughly speaking would mean that (3) now holds for almost all $d \leq x^{1-\epsilon}$ for any fixed $\epsilon > 0$. (Unfortunately, the $\epsilon$ factor cannot be removed, as investigated in a series of papers by Friedlander and Granville, and also Hildebrand and Maier.) This comes tantalisingly close to having enough distribution to control all of (1). Unfortunately, it still falls short. Using this conjecture in place of the Bombieri-Vinogradov theorem leads to various improvements to sieve theoretic bounds; for instance, the factor of $4$ in (4) can now be improved to $2$.
In two papers from the 1970s (which can be found online here and here respectively, the latter starting on page 255 of the pdf), Bombieri developed what is now known as the Bombieri asymptotic sieve to clarify the situation more precisely. First, he showed that on the Elliott-Halberstam conjecture, while one still could not establish the asymptotic (1), one could prove the generalised asymptotic

$$\sum_{n \leq x} \Lambda_k(n) \Lambda(n+2) = (2\Pi_2 + o(1))\, k\, x \log^{k-1} x \quad (5)$$

for all natural numbers $k \geq 2$, where the generalised von Mangoldt functions $\Lambda_k$ are defined by the formula

$$\Lambda_k(n) := \sum_{d|n} \mu(d) \log^k \frac{n}{d}. \quad (6)$$

These functions behave like the von Mangoldt function, but are concentrated on $k$-almost primes (numbers with at most $k$ prime factors) rather than primes. The right-hand side of (5) corresponds to what one would expect if one ran the same heuristics used to justify (1). Sadly, the $k = 1$ case of (5), which is just (1), is just barely excluded from Bombieri’s analysis.
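For readers who like to experiment, here is a small self-contained Python sketch of the definition (6), computing $\Lambda_k$ directly as the Dirichlet convolution of $\mu$ with $\log^k$ and checking numerically that it vanishes outside the $k$-almost primes; the helper functions and the ranges tested are arbitrary choices made for illustration.

import math

def factorint(n):
    """Prime factorisation of n as a dict {prime: exponent}, by trial division."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def mobius(n):
    f = factorint(n)
    if any(e > 1 for e in f.values()):
        return 0
    return -1 if len(f) % 2 else 1

def Lambda_k(n, k):
    """Generalised von Mangoldt function: sum_{d|n} mu(d) log^k(n/d)."""
    return sum(mobius(d) * math.log(n // d) ** k
               for d in range(1, n + 1) if n % d == 0)

k = 2
for n in range(2, 200):
    if abs(Lambda_k(n, k)) > 1e-9:
        # the support is contained in the k-almost primes
        assert sum(factorint(n).values()) <= k
print(Lambda_k(6, 2), 2 * math.log(2) * math.log(3))   # both equal 2 log 2 log 3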
More generally, on the assumption of EH, the Bombieri asymptotic sieve provides the asymptotic

$$\sum_{n \leq x} \Lambda_{(k_1,\dots,k_r)}(n)\, \Lambda(n+2) = (2\Pi_2 + o(1)) \frac{\prod_{i=1}^r k_i!}{(k-1)!}\, x \log^{k-1} x \quad (7)$$

for any fixed $r$ and any tuple $(k_1,\dots,k_r)$ of natural numbers other than $(1,\dots,1)$, where $k := k_1 + \cdots + k_r$ and

$$\Lambda_{(k_1,\dots,k_r)} := \Lambda_{k_1} * \cdots * \Lambda_{k_r}$$

is a further generalisation of the von Mangoldt function (now concentrated on $k$-almost primes). By combining these asymptotics with some elementary identities involving the $\Lambda_{(k_1,\dots,k_r)}$, together with the Weierstrass approximation theorem, Bombieri was able to control a wide family of sums including (1), except for one undetermined scalar $\delta_x \in [0,2]$. Namely, he was able to show (again on EH) that for any fixed $r \geq 1$ and any continuous function $g_r$ on the simplex $\Delta_r := \{ (t_1,\dots,t_r) \in [0,1]^r : t_1 + \cdots + t_r = 1 \}$ that had suitable vanishing at the boundary, the sum

$$\sum_{n \leq x: n = p_1 \cdots p_r} g_r\Big( \frac{\log p_1}{\log n}, \dots, \frac{\log p_r}{\log n} \Big) \Lambda(n+2)$$

was equal to

$$(\delta_x + o(1))\, 2\Pi_2\, \frac{x}{\log x} \int_{\Delta_r} g_r\, d\mu_r$$

when $r$ was odd, and

$$(2 - \delta_x + o(1))\, 2\Pi_2\, \frac{x}{\log x} \int_{\Delta_r} g_r\, d\mu_r \quad (8)$$

when $r$ was even, where the integral on $\Delta_r$ is with respect to the measure $d\mu_r := \frac{dt_1 \cdots dt_{r-1}}{t_1 \cdots t_{r-1} t_r}$, with $t_r := 1 - t_1 - \cdots - t_{r-1}$ (this is Dirac measure in the case $r = 1$). In particular, we have

$$\sum_{p \leq x} \Lambda(p+2) = (\delta_x + o(1))\, 2\Pi_2\, \frac{x}{\log x},$$

and the twin prime conjecture would be proved if one could show that $\delta_x$ is bounded away from zero, while (1) is equivalent to the assertion that $\delta_x$ is equal to $1$. Unfortunately, no additional bound beyond the inequalities

$$0 \leq \delta_x \leq 2$$

provided by the Bombieri asymptotic sieve is known, even if one assumes all other major conjectures in number theory than the prime tuples conjecture and its variants (e.g. GRH, GEH, GUE, abc, Chowla, …).
To put it another way, the Bombieri asymptotic sieve is able (on EH) to compute asymptotics for sums

$$\sum_{n \leq x} f(n)\, \Lambda(n+2) \quad (9)$$

without needing to know the unknown scalar $\delta_x$, when $f$ is a function supported on almost primes of the form

$$f(p_1 \cdots p_r) := g_r\Big( \frac{\log p_1}{\log n}, \dots, \frac{\log p_r}{\log n} \Big)$$

for $1 \leq r \leq r_*$ and some fixed $r_*$, with $f$ vanishing elsewhere and for some continuous (symmetric) functions $g_1,\dots,g_{r_*}$ obeying some vanishing at the boundary, so long as the parity condition

$$\sum_{r \text{ odd}} \int_{\Delta_r} g_r\, d\mu_r = \sum_{r \text{ even}} \int_{\Delta_r} g_r\, d\mu_r$$

is obeyed (informally: $f$ gives the same weight to products of an odd number of primes as to products of an even number of primes, or to put it another way, $f$ is asymptotically orthogonal to the Möbius function $\mu$). But when $f$ violates the parity condition, the asymptotic involves the unknown $\delta_x$. This scalar $\delta_x$ thus embodies the “parity problem” for the twin prime conjecture (discussed in these previous blog posts).
Because the obstruction to the parity problem is only one-dimensional (on EH), one can replace any parity-violating weight (such as $\Lambda(n)$) with any other parity-violating weight and obtain a logically equivalent estimate. For instance, to prove the twin prime conjecture on EH, it would suffice to show that

$$\sum_{n \leq x: n = p_1 p_2 p_3} \Lambda(n+2) \gg \frac{x}{\log x}$$

for some fixed $\epsilon > 0$ (with $p_1, p_2, p_3$ ranging over primes at least $x^\epsilon$), or equivalently that there are $\gg \frac{x}{\log^2 x}$ solutions to the equation

$$p - 2 = p_1 p_2 p_3$$

in primes with $p_1, p_2, p_3 \geq x^\epsilon$ and $p \leq x$. (In some cases, this sort of reduction can also be made using other sieves than the Bombieri asymptotic sieve, as was observed by Ng.) As another example, the Bombieri asymptotic sieve can be used to show that the asymptotic (1) is equivalent to the asymptotic

$$\sum_{n \leq x} \mu(n) 1_R(n)\, \Lambda(n+2) = o\Big( \frac{x}{\log x} \Big)$$

where $R$ is the set of numbers that are rough in the sense that they have no prime factors less than $x^\epsilon$ for some fixed $\epsilon > 0$ (the function $\mu 1_R$ clearly correlates with $\mu$ and so must violate the parity condition). One can replace $1_R$ with similar sieve weights (e.g. a Selberg sieve) that concentrate on almost primes if desired.
As it turns out, if one is willing to strengthen the assumption of the Elliott-Halberstam (EH) conjecture to the assumption of the generalised Elliott-Halberstam (GEH) conjecture (as formulated for instance in Claim 2.6 of the Polymath8b paper), one can also swap the $\Lambda(n+2)$ factor in the above asymptotics with other parity-violating weights and obtain a logically equivalent estimate, as the Bombieri asymptotic sieve also applies to weights such as $\mu(n) 1_R(n)$ under the assumption of GEH. For instance, on GEH one can use two such applications of the Bombieri asymptotic sieve to show that the twin prime conjecture would follow if one could show that there are $\gg \frac{x}{\log^2 x}$ solutions to the equation

$$p_1 p_2 = p_3 p_4 p_5 + 2$$

in primes with $p_1,\dots,p_5 \geq x^\epsilon$ and $p_1 p_2 \leq x$, for some fixed $\epsilon > 0$.
Similarly, on GEH the asymptotic (1) is equivalent to the asymptotic

$$\sum_{n \leq x} \mu(n) 1_R(n)\, \mu(n+2) 1_R(n+2) = o\Big( \frac{x}{\log^2 x} \Big)$$

for some fixed $\epsilon > 0$, and similarly with $1_R$ replaced by other sieves. This form of the quantitative twin primes conjecture is appealingly similar to the (special case)

$$\sum_{n \leq x} \lambda(n) \lambda(n+2) = o(x)$$

of the Chowla conjecture, for which there has been some recent progress (discussed for instance in these recent posts). Informally, the Bombieri asymptotic sieve lets us (on GEH) view the twin prime conjecture as a sort of Chowla conjecture restricted to almost primes. Unfortunately, the recent progress on the Chowla conjecture relies heavily on the multiplicativity of $\lambda$ at small primes, which is completely destroyed by inserting a weight such as $\mu(n) 1_R(n)$, so this does not yet yield a viable path towards the twin prime conjecture even assuming GEH. Still, the similarity is striking, and one can hope that further ways to attack the Chowla conjecture may emerge that could impact the twin prime conjecture. (Alternatively, if one assumes a sufficiently optimistic version of the GEH, one could perhaps relax the notion of “almost prime” to the extent that one could start usefully using multiplicativity at smallish primes, though this seems rather wishful at present, particularly since the most optimistic versions of GEH are known to be false.)
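To make the comparison concrete, the two-point Chowla sum is easy to examine numerically. The following Python sketch (with an arbitrary cutoff of $10^6$) computes $\sum_{n \leq x} \lambda(n)\lambda(n+2)$ for the Liouville function $\lambda$ via a sieve for the number of prime factors; the observed smallness is consistent with the Chowla conjecture, though of course it proves nothing.

N = 10**6
omega = [0] * (N + 3)          # Omega(n): prime factors counted with multiplicity
for p in range(2, N + 3):
    if omega[p] == 0:          # p has not been marked yet, so p is prime
        pk = p
        while pk <= N + 2:
            for m in range(pk, N + 3, pk):
                omega[m] += 1
            pk *= p

liouville = [(-1) ** omega[n] for n in range(N + 3)]
S = sum(liouville[n] * liouville[n + 2] for n in range(1, N + 1))
print(S / N)                   # Chowla predicts o(1); this is already small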
The Bombieri asymptotic sieve is already well explained in the original two papers of Bombieri; there is also a slightly different treatment of the sieve by Friedlander and Iwaniec, as well as a simplified version in the book of Friedlander and Iwaniec (in which the distribution hypothesis is strengthened in order to shorten the arguments). I've decided though to write up my own notes on the sieve below the fold; this is primarily for my own benefit, but may be useful to some readers also. I largely follow the treatment of Bombieri, with the one idiosyncratic twist of replacing the usual "elementary" Selberg sieve with the "analytic" Selberg sieve used in particular in many of the breakthrough works in small gaps between primes; I prefer working with the latter due to its Fourier-analytic flavour.
— 1. Controlling generalised von Mangoldt sums —
To prove (5), we shall first generalise it, by replacing the sequence $\Lambda(n+2)$ by a more general sequence $a_n$ obeying the following axioms:

- (i) (Non-negativity) One has $a_n \geq 0$ for all $n$.
- (ii) (Crude size bound) One has $a_n \ll \tau(n)^{O(1)} \log^{O(1)} n$ for all $n$, where $\tau$ is the divisor function.
- (iii) (Size) We have $\sum_{n \leq x} a_n = (C + o(1))\, x$ for some constant $C > 0$.
- (iv) (Elliott-Halberstam type conjecture) For any $\epsilon, A > 0$, one has

$$\sum_{d \leq x^{1-\epsilon}} \Big| \sum_{n \leq x: d|n} a_n - C \frac{g(d)}{d} x \Big| \ll_{\epsilon,A} \frac{x}{\log^A x}$$

where $g$ is a multiplicative function with $g(p^j) = 1 + O(1/p)$ for all primes $p$ and $j \geq 1$.
These axioms are a little bit stronger than what is actually needed to make the Bombieri asymptotic sieve work, but we will not attempt to work with the weakest possible axioms here.
We introduce the function

$$G(s) := \prod_p \frac{1 - g(p)/p^s}{1 - 1/p^s}$$

which is analytic for $\mathrm{Re}(s) > 0$; in particular it can be evaluated at $s = 1$ to yield

$$G(1) = \prod_p \frac{1 - g(p)/p}{1 - 1/p}.$$

There are two model examples of data to keep in mind. The first, discussed in the introduction, is when $a_n = \Lambda(n+2)$; then $C = 1$ and $g$ is as in the introduction; one of course needs EH to justify axiom (iv) in this case. The other is when $a_n = 1$, in which case $C = 1$ and $g(d) = 1$ for all $d$. We will later take advantage of the second example to avoid doing some (routine, but messy) main term computations.
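As a numerical consistency check on the first model example (and on the formula for $G(1)$ as reconstructed above), the following Python sketch evaluates the Euler product for $G(1)$ with the twin-prime choice of $g$; it should approach the constant $2\Pi_2 = 1.3203\dots$ appearing in the introduction. The truncation point is arbitrary.

G1 = (1 - 0 / 2) / (1 - 1 / 2)          # p = 2 factor, using g(2) = 0
LIM = 10**6
is_comp = [False] * (LIM + 1)
for p in range(2, LIM + 1):
    if not is_comp[p]:
        if p > 2:
            g = p / (p - 1)             # g(p) = p/(p-1) for odd p
            G1 *= (1 - g / p) / (1 - 1 / p)
        for m in range(p * p, LIM + 1, p):
            is_comp[m] = True
print(G1)                               # approaches 2*Pi_2 = 1.3203...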
The main result of this section is then

Theorem 1 Let $a_n$, $g$, $C$, $G$ be as above. Let $(k_1,\dots,k_r)$ be a tuple of natural numbers (independent of $x$) that is not equal to $(1,\dots,1)$. Then one has the asymptotic

$$\sum_{n \leq x} \Lambda_{(k_1,\dots,k_r)}(n)\, a_n = \Big( C G(1) \frac{\prod_{i=1}^r k_i!}{(k-1)!} + o(1) \Big) x \log^{k-1} x$$

as $x \to \infty$, where $k := k_1 + \cdots + k_r$.

Note that this recovers (5) (on EH) as a special case.
We now begin the proof of this theorem. Henceforth we allow implied constants in the $O()$ or $o()$ notation to depend on $k_1,\dots,k_r$ and $g$, $C$.
It will be convenient to replace the range $n \leq x$ by a shorter range by the following standard localisation trick. Let $\eta$ be a large quantity depending on $k_1,\dots,k_r$ to be chosen later, and let $I$ denote the interval $(x - x \log^{-\eta} x,\, x]$. We will show the estimate

$$\sum_{n \in I} \Lambda_{(k_1,\dots,k_r)}(n)\, a_n = \Big( C G(1) \frac{\prod_{i=1}^r k_i!}{(k-1)!} + o(1) \Big) |I| \log^{k-1} x \quad (10)$$

from which the original claim follows by a routine summation argument. Observe from axiom (iv) and the triangle inequality that

$$\sum_{d \leq x^{1-\epsilon}} \Big| \sum_{n \in I: d|n} a_n - C \frac{g(d)}{d} |I| \Big| \ll_{\epsilon,A} \frac{x}{\log^A x}$$

for any $\epsilon, A > 0$.
Write $L$ for the logarithm function $L(n) := \log n$, thus $\Lambda_k = \mu * L^k$ for any $k \geq 1$. Without loss of generality we may assume that $k_r > 1$; we then factor $\Lambda_{(k_1,\dots,k_r)} = \nu * L^{k_r}$, where

$$\nu := \mu * \Lambda_{k_1} * \cdots * \Lambda_{k_{r-1}}.$$

This function is just $\mu$ when $r = 1$. When $r > 1$ the function is more complicated, but we at least have the following crude bound:

Lemma 2 One has the pointwise bound $|\nu| \leq \tau_r\, L^{k - k_r}$, where $\tau_r$ denotes the $r$-fold divisor function.

Proof: We induct on $r$. The case $r = 1$ is obvious, so suppose $r > 1$ and the claim has already been proven for $r - 1$. Since $\nu = (\mu * \Lambda_{k_1} * \cdots * \Lambda_{k_{r-2}}) * \Lambda_{k_{r-1}}$, we see from the induction hypothesis and the triangle inequality that

$$|\nu| \leq (\tau_{r-1} L^{k - k_{r-1} - k_r}) * \Lambda_{k_{r-1}}.$$

Since $0 \leq \Lambda_{k_{r-1}} \leq L^{k_{r-1}}$ (as follows from $1 * \Lambda_{k_{r-1}} = L^{k_{r-1}}$) by Möbius inversion, the claim follows.
We can write

$$\Lambda_{(k_1,\dots,k_r)}(n) = \sum_{d|n} \nu(d) \log^{k_r} \frac{n}{d}.$$

In the region $n \in I$, we have $\log^{k_r}\frac{n}{d} = \log^{k_r}\frac{x}{d} + O( \log^{k_r - 1 - \eta} x )$. Thus

$$\Lambda_{(k_1,\dots,k_r)}(n) = \sum_{d|n} \nu(d) \log^{k_r} \frac{x}{d} + O\Big( \log^{k_r - 1 - \eta} x \sum_{d|n} |\nu(d)| \Big)$$

for $n \in I$. The contribution of the error term to (10) is easily seen to be negligible if $\eta$ is large enough, so we may freely replace $\log^{k_r}\frac{n}{d}$ with $\log^{k_r}\frac{x}{d}$ with little difficulty.
If we insert this replacement directly into the left-hand side of (10) and rearrange, we get

$$\sum_{d \leq x} \nu(d) \log^{k_r}\frac{x}{d} \sum_{n \in I: d|n} a_n.$$

We can’t quite control this using axiom (iv) because the range of $d$ is a bit too big, as explained in the introduction. So let us introduce a truncated function

$$\Lambda^\sharp_{(k_1,\dots,k_r)}(n) := \sum_{d|n} \nu(d) \log^{k_r}\frac{x}{d}\, \chi\Big( \frac{\log d}{\log x} \Big) \quad (11)$$

where $\epsilon > 0$ is a small quantity to be chosen later, and $\chi$ is a smooth function that equals $1$ on $(-\infty, 1 - 2\epsilon]$ and equals $0$ on $[1 - \epsilon, \infty)$. Suppose one could establish the following two estimates for any fixed $\epsilon > 0$:

$$\sum_{n \in I} |\Lambda_{(k_1,\dots,k_r)}(n) - \Lambda^\sharp_{(k_1,\dots,k_r)}(n)|\, a_n \ll (\epsilon + o(1))\, |I| \log^{k-1} x \quad (12)$$

and

$$\sum_{n \in I} \Lambda^\sharp_{(k_1,\dots,k_r)}(n)\, a_n = C G(1) Q_\epsilon\, |I| \log^{k-1} x + o( |I| \log^{k-1} x ) \quad (13)$$

where $Q_\epsilon$ is a quantity that depends on $\epsilon$ but not on $x$. Then on combining the two estimates we would have

$$\sum_{n \in I} \Lambda_{(k_1,\dots,k_r)}(n)\, a_n = C G(1) Q_\epsilon\, |I| \log^{k-1} x + O( \epsilon\, |I| \log^{k-1} x ) + o( |I| \log^{k-1} x ). \quad (14)$$
One could in principle compute $Q_\epsilon$ explicitly from the proof of (13), but one can avoid doing so by the following comparison trick. In the special case $a_n = 1$, standard multiplicative number theory (noting that the Dirichlet series $\sum_n \Lambda_{(k_1,\dots,k_r)}(n) n^{-s}$ has a pole of order $k$ at $s = 1$, with top Laurent coefficient $\prod_{i=1}^r k_i!$) gives the asymptotic

$$\sum_{n \in I} \Lambda_{(k_1,\dots,k_r)}(n) = \Big( \frac{\prod_{i=1}^r k_i!}{(k-1)!} + o(1) \Big) |I| \log^{k-1} x,$$

which when compared with (14) for $a_n = 1$ (recalling that $C = G(1) = 1$ in this case) gives the formula

$$Q_\epsilon = \frac{\prod_{i=1}^r k_i!}{(k-1)!} + O(\epsilon).$$

Inserting this back into (14) and recalling that $\epsilon$ can be made arbitrarily small, we obtain (10).
As it turns out, the estimate (13) is easy to establish, but the estimate (12) is not, roughly speaking because the typical number $n$ in $I$ has too many divisors $d$ in the range $[x^{1-\epsilon}, x]$, each of which gives a contribution to the error term. (In the book of Friedlander and Iwaniec, the estimate (12) is established anyway, but only after assuming a stronger version of (iv), roughly speaking in which the modulus $d$ is allowed to be as large as $x \log^{-O(1)} x$.) To resolve this issue, we will insert a preliminary sieve $\nu_\epsilon$ that will remove most of the potential divisors $d$ in the range $[x^{1-\epsilon}, x]$ (leaving only about $O(1)$ such divisors on the average for typical $n$), making the analogue of (12) easier to prove (at the cost of making the analogue of (13) more difficult). Namely, if one can find a function $\nu_\epsilon$ for which one has the estimates

$$\sum_{n \in I} |\Lambda_{(k_1,\dots,k_r)}(n) - \Lambda_{(k_1,\dots,k_r)}(n) \nu_\epsilon(n)|\, a_n \ll (\epsilon + o(1))\, |I| \log^{k-1} x, \quad (15)$$

$$\sum_{n \in I} |\Lambda_{(k_1,\dots,k_r)}(n) - \Lambda^\sharp_{(k_1,\dots,k_r)}(n)|\, \nu_\epsilon(n)\, a_n \ll (\epsilon + o(1))\, |I| \log^{k-1} x \quad (16)$$

and

$$\sum_{n \in I} \Lambda^\sharp_{(k_1,\dots,k_r)}(n)\, \nu_\epsilon(n)\, a_n = C G(1) Q'_\epsilon\, |I| \log^{k-1} x + o( |I| \log^{k-1} x ) \quad (17)$$

for some quantity $Q'_\epsilon$ that depends on $\epsilon$ but not on $x$, then by repeating the previous arguments we will again be able to establish (10).
The key estimate is (16). As we shall see, when comparing $\Lambda_{(k_1,\dots,k_r)} \nu_\epsilon$ with $\Lambda^\sharp_{(k_1,\dots,k_r)} \nu_\epsilon$, the weight $\nu_\epsilon$ will cost us a factor of $\frac{1}{\epsilon}$, but the $\log^{k_r}\frac{x}{d}$ term in the definitions of $\Lambda_{(k_1,\dots,k_r)}$ and $\Lambda^\sharp_{(k_1,\dots,k_r)}$ will recover a factor of $\epsilon^{k_r}$, which will give the desired bound since we are assuming $k_r > 1$.
One has some flexibility in how to select the weight $\nu_\epsilon$: basically any standard sieve that uses divisors of size at most $x^\epsilon$ to localise (at least approximately) to numbers that are rough in the sense that they have no (or at least very few) factors less than $x^{\epsilon/2}$, will do. We will use the analytic Selberg sieve choice

$$\nu_\epsilon(n) := \Big( \sum_{d|n} \mu(d)\, \psi\Big( \frac{\log d}{\epsilon \log x} \Big) \Big)^2 \quad (18)$$

where $\psi$ is a smooth function supported on $[-1,1]$ that equals $1$ on $[-1/2, 1/2]$.
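The following Python sketch illustrates the shape of the weight (18). The particular smooth bump $\psi$ used here is just one convenient choice obeying the stated support conditions, and the sieve level $R$ is a toy value; both are assumptions made for illustration. The sketch exhibits the two features used repeatedly below: $\nu_\epsilon(n) = 1$ when $n$ has no prime factor up to $R$ (every divisor $d > 1$ then falls outside the support of $\psi$), while the average of $\nu_\epsilon$ over all $n$ is small, of size comparable to $1/\log R$.

import math

def psi(t):
    """Smooth bump: equals 1 on [-1/2, 1/2], vanishes outside [-1, 1]."""
    t = abs(t)
    if t <= 0.5:
        return 1.0
    if t >= 1.0:
        return 0.0
    f = lambda x: math.exp(-1.0 / x) if x > 0 else 0.0
    u = 2.0 * (1.0 - t)                  # glue with a standard smooth step
    return f(u) / (f(u) + f(1.0 - u))

def mobius(n):
    m, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            m = -m
        p += 1
    return -m if n > 1 else m

def nu(n, R):
    """Analytic Selberg sieve weight (sum_{d|n} mu(d) psi(log d / log R))^2."""
    s = sum(mobius(d) * psi(math.log(d) / math.log(R))
            for d in range(1, n + 1) if n % d == 0)
    return s * s

R = 30.0
print(nu(31 * 37, R))                    # equals 1: no prime factor <= R
print(nu(2 * 3 * 5 * 7, R))              # small: many small prime factors
print(sum(nu(n, R) for n in range(1, 2000)) / 2000)   # mean of size ~ 1/log R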
It remains to establish the bounds (15), (16), (17). To warm up and introduce the various methods needed, we begin with the standard bound

$$\sum_{n \in I} \nu_\epsilon(n)\, a_n = \Big( C G(1) \int_0^\infty \psi'(t)^2\, dt + o(1) \Big) \frac{|I|}{\epsilon \log x} \quad (19)$$

where $\psi'$ denotes the derivative of $\psi$. Note the loss of $\frac{1}{\epsilon}$ that had previously been pointed out. In the arguments that follow I will be a little brief with the details, as they are standard (see e.g. this previous post).
We now prove (19). The left-hand side can be expanded as

$$\sum_{d_1, d_2} \mu(d_1) \mu(d_2)\, \psi\Big( \frac{\log d_1}{\epsilon \log x} \Big) \psi\Big( \frac{\log d_2}{\epsilon \log x} \Big) \sum_{n \in I: [d_1,d_2] | n} a_n$$

where $[d_1,d_2]$ denotes the least common multiple of $d_1$ and $d_2$. From the support of $\psi$ we see that the summand is only non-vanishing when $[d_1,d_2] \leq x^{2\epsilon}$. We now use axiom (iv) and split the left-hand side into a main term

$$\sum_{d_1, d_2} \mu(d_1) \mu(d_2)\, \psi\Big( \frac{\log d_1}{\epsilon \log x} \Big) \psi\Big( \frac{\log d_2}{\epsilon \log x} \Big)\, C \frac{g([d_1,d_2])}{[d_1,d_2]} |I|$$

and an error term that is at most

$$\sum_{d \leq x^{2\epsilon}} \tau(d)^{O(1)} \Big| \sum_{n \in I: d|n} a_n - C \frac{g(d)}{d} |I| \Big|. \quad (20)$$

From axiom (ii) and elementary multiplicative number theory, we have the bound

$$\sum_{d \leq x^{2\epsilon}} \tau(d)^{O(1)} \Big| \sum_{n \in I: d|n} a_n - C \frac{g(d)}{d} |I| \Big| \ll |I| \log^{O(1)} x$$

so from axiom (iv) and Cauchy-Schwarz we see that the error term (20) is acceptable. Thus it will suffice to establish the bound

$$\sum_{d_1, d_2} \mu(d_1) \mu(d_2)\, \psi\Big( \frac{\log d_1}{\epsilon \log x} \Big) \psi\Big( \frac{\log d_2}{\epsilon \log x} \Big) \frac{g([d_1,d_2])}{[d_1,d_2]} = \Big( G(1) \int_0^\infty \psi'(t)^2\, dt + o(1) \Big) \frac{1}{\epsilon \log x}. \quad (21)$$
The summand here is almost, but not quite, multiplicative in $d_1, d_2$. To make it genuinely multiplicative, we perform a (shifted) Fourier expansion

$$\psi(t) = \int_{\mathbf{R}} e^{-(1+i\tau)t}\, \Psi(\tau)\, d\tau \quad (22)$$

for some rapidly decreasing function $\Psi$ (essentially the Fourier transform of $e^t \psi(t)$). Thus

$$\psi\Big( \frac{\log d}{\epsilon \log x} \Big) = \int_{\mathbf{R}} \frac{1}{d^{\frac{1+i\tau}{\epsilon \log x}}}\, \Psi(\tau)\, d\tau,$$

and so the left-hand side of (21) can be rearranged using Fubini’s theorem as

$$\int_{\mathbf{R}} \int_{\mathbf{R}} \Psi(\tau_1) \Psi(\tau_2) \sum_{d_1, d_2} \frac{\mu(d_1)\mu(d_2)}{d_1^{s_1} d_2^{s_2}} \frac{g([d_1,d_2])}{[d_1,d_2]}\, d\tau_1 d\tau_2 \quad (23)$$

where $s_j := \frac{1+i\tau_j}{\epsilon \log x}$ for $j = 1, 2$.
We can factorise the summand as an Euler product:

$$\sum_{d_1, d_2} \frac{\mu(d_1)\mu(d_2)}{d_1^{s_1} d_2^{s_2}} \frac{g([d_1,d_2])}{[d_1,d_2]} = \prod_p \Big( 1 - \frac{g(p)}{p^{1+s_1}} - \frac{g(p)}{p^{1+s_2}} + \frac{g(p)}{p^{1+s_1+s_2}} \Big).$$

Taking absolute values and using Mertens’ theorem leads to the crude bound

$$\Big| \sum_{d_1, d_2} \frac{\mu(d_1)\mu(d_2)}{d_1^{s_1} d_2^{s_2}} \frac{g([d_1,d_2])}{[d_1,d_2]} \Big| \ll \log^{O(1)} x$$

which when combined with the rapid decrease of $\Psi$, allows us to restrict the region of integration in (23) to the square

$$\{ |\tau_1|, |\tau_2| \leq \sqrt{\log x} \}$$
(say) with negligible error. Next, we use the Euler product
for $\mathrm{Re}(s) > 1$ to factorise
where
For $s_1, s_2$ with nonnegative real part, one has
and so by the Weierstrass $M$-test,
is continuous at
. Since
we thus have
Also, since $\zeta$ has a pole of order $1$ at $s = 1$ with residue $1$, we have
and thus
The quantity (23) can thus be written, up to errors of $o\big( \frac{1}{\epsilon \log x} \big)$, as

$$\frac{G(1)}{\epsilon \log x} \int \int \Psi(\tau_1)\Psi(\tau_2) \frac{(1+i\tau_1)(1+i\tau_2)}{2 + i\tau_1 + i\tau_2}\, d\tau_1 d\tau_2.$$

Using the rapid decrease of $\Psi$, we may remove the restriction on $\tau_1, \tau_2$, and it will now suffice to prove the identity

$$\int_{\mathbf{R}} \int_{\mathbf{R}} \Psi(\tau_1)\Psi(\tau_2) \frac{(1+i\tau_1)(1+i\tau_2)}{2 + i\tau_1 + i\tau_2}\, d\tau_1 d\tau_2 = \int_0^\infty \psi'(t)^2\, dt.$$

But on differentiating and then squaring (22) we have

$$\psi'(t)^2 = \int_{\mathbf{R}} \int_{\mathbf{R}} (1+i\tau_1)(1+i\tau_2)\, e^{-(2+i\tau_1+i\tau_2)t}\, \Psi(\tau_1)\Psi(\tau_2)\, d\tau_1 d\tau_2$$

and the claim follows by integrating in $t$ from zero to infinity (noting that $\psi'$ vanishes for $t \geq 1$).
We have the following variant of (19):
for any $d \leq x^{1-\epsilon}$. We also have the variant
If in addition $d$ has no prime factors less than $x^{\delta}$ for some fixed $\delta > 0$, one has
Roughly speaking, the above estimates assert that $\nu_\epsilon$ is concentrated on those numbers $n$ with no prime factors much less than $x^\epsilon$, but factors $d$ without such small prime divisors occur with about the same relative density as they do in the integers.
Proof: The left-hand side of (24) can be expanded as
If we define
then the previous expression can be written as
while one has
which gives (25) from Axiom (iv). To prove (24), it now suffices to show that
Arguing as before, the left-hand side is
where
From Mertens’ theorem we have
when , so the contribution of the terms where
can be absorbed into the
error (after increasing that error slightly). For the remaining contributions, we see that
where if
does not divide
, and
if divides
times for some
. In the latter case, Taylor expansion gives the bounds
and the claim (28) follows. When and
we have
and (27) follows by repeating the previous calculations. Finally, (26) is proven similarly to (24) (using in place of
).
Now we can prove (15), (16), (17). We begin with (15). Using the Leibniz rule $L(f * g) = (Lf) * g + f * (Lg)$ applied to the identity $\delta = \mu * 1$ (with $\delta$ the Dirichlet convolution identity), and using $L\delta = 0$ and Möbius inversion (and the associativity and commutativity of Dirichlet convolution) we see that

$$L\mu = -\mu * \Lambda. \quad (29)$$

Next, by applying the Leibniz rule to $\Lambda_k = \mu * L^k$ for some $k \geq 1$ and using (29) we see that

$$L \Lambda_k = (L\mu) * L^k + \mu * L^{k+1} = -\Lambda * \Lambda_k + \Lambda_{k+1}$$

and hence we have the recursive identity

$$\Lambda_{k+1} = L \Lambda_k + \Lambda * \Lambda_k. \quad (30)$$
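The recursion (30) is easy to test numerically from the definitions alone. The following self-contained Python sketch (with arbitrary small ranges) verifies $\Lambda_{k+1} = L\Lambda_k + \Lambda * \Lambda_k$ directly, computing $\Lambda_k$ as the convolution $\mu * L^k$:

import math

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    m, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            m = -m
        p += 1
    return -m if n > 1 else m

def Lam(n, k):
    """Lambda_k(n) = sum_{d|n} mu(d) log^k(n/d); k = 1 is von Mangoldt."""
    return sum(mobius(d) * math.log(n // d) ** k for d in divisors(n))

for n in range(1, 300):
    for k in range(1, 4):
        lhs = Lam(n, k + 1)
        rhs = math.log(n) * Lam(n, k) + \
              sum(Lam(d, 1) * Lam(n // d, k) for d in divisors(n))
        assert abs(lhs - rhs) < 1e-6
print("recursion (30) verified for n < 300, k <= 3")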
In particular, from induction we see that $\Lambda_k$ is supported on numbers with at most $k$ distinct prime factors, and hence $\Lambda_{(k_1,\dots,k_r)}$ is supported on numbers with at most $k$ distinct prime factors. In particular, from (18) we see that $\nu_\epsilon \ll 1$ on the support of $\Lambda_{(k_1,\dots,k_r)}$. Thus it will suffice to show that
If $n \in I$ and $\Lambda_{(k_1,\dots,k_r)}(n) \neq 0$, then $n$ has at most $k$ distinct prime factors
, with
. If we factor $n = n_1 n_2$, where $n_1$ is the contribution of those prime factors with
, and $n_2$ is the contribution of those prime factors with
, then at least one of the following two statements holds:
- (a) $n_1$ (and hence $n$) is divisible by a square number of size at least
.
- (b)
.
The contribution of case (a) is easily seen to be acceptable by axiom (ii). For case (b), we observe from (30) and induction that
and so it will suffice to show that
where ranges over numbers bounded by
with at most
distinct prime factors, the smallest of which is at most
, and
consists of those numbers with no prime factor less than or equal to
. Applying (26) (with
replaced by
) gives the bound
so by (25) it suffices to show that
subject to the same constraints on $d$ as before. The contribution of those $d$ with $j$ distinct prime factors can be bounded by
applying Mertens’ theorem and summing over $j$, one obtains the claim.
Now we show (16). As discussed previously in this section, we can replace $\log^{k_r}\frac{n}{d}$ by $\log^{k_r}\frac{x}{d}$ with negligible error. Comparing this with (16) and (11), we see that it suffices to show that
From the support of the cutoff $\chi$, the summand on the left-hand side is only non-zero when $d \geq x^{1-2\epsilon}$, which makes $\log^{k_r}\frac{x}{d} \ll \epsilon^{k_r} \log^{k_r} x$, where we use the crucial hypothesis $k_r > 1$ to gain enough powers of $\epsilon$ to make the argument here work. Applying Lemma 2, we reduce to showing that
We can make the change of variables $d \mapsto n/d$ to flip the sum
and then swap the sums to reduce to showing that
By Lemma 3, it suffices to show that
To prove this, we use the Rankin trick, bounding the implied weight by
. We can then bound the left-hand side by the Euler product
which can be bounded by
and the claim follows from Mertens’ theorem.
Finally, we show (17). By (11), the left-hand side expands as
We let $\delta$ be a small constant to be chosen later. We divide the outer sum into two ranges, depending on whether $d$ only has prime factors greater than $x^{\delta}$ or not. In the former case, we can apply (27) to write this contribution as
plus a negligible error, where the $d$ summation is implicitly restricted to numbers with all prime factors greater than $x^{\delta}$. The main term is messy, but it is of the required form $C G(1) Q'_\epsilon\, |I| \log^{k-1} x$ up to an acceptable error, so there is no need to compute it any further. It remains to consider those $d$ that have at least one prime factor less than $x^{\delta}$. Here we use (24) instead of (27) as well as Lemma 3 to dominate this contribution by
up to negligible errors, where $d$ is now restricted to have at least one prime factor less than $x^{\delta}$. This forces at least one of the factors
to be at most
. A routine application of Rankin’s trick shows that
and so the total contribution of this case is acceptable. Since $\delta$ can be made arbitrarily small, (17) follows.
— 2. Weierstrass approximation —
Having proved Theorem 1, we now take linear combinations of this theorem, combined with the Weierstrass approximation theorem, to give the asymptotics (7), (8) described in the introduction.
Let $a_n$, $g$, $C$, $G$ be as in that theorem. It will be convenient to normalise the weights $\Lambda_{(k_1,\dots,k_r)}$ by $\frac{1}{\log^{k-1} n}$ to make their mean value comparable to $1$. From Theorem 1 and summation by parts we have

$$\sum_{n \leq x} \frac{\Lambda_{(k_1,\dots,k_r)}(n)}{\log^{k-1} n}\, a_n = \Big( C G(1) \frac{\prod_{i=1}^r k_i!}{(k-1)!} + o(1) \Big) x \quad (31)$$

whenever $(k_1,\dots,k_r)$ does not consist entirely of ones.
We now take a closer look at what happens when $(k_1,\dots,k_r)$ does consist entirely of ones. Let $1^r$ denote the $r$-tuple $(1,\dots,1)$. Convolving the $k = 1$ case of (30) with $r - 1$ copies of $\Lambda$ for some $r \geq 1$ and using the Leibniz rule, we see that

$$L \Lambda_{(1^r)} = r \big( \Lambda_{(1^{r-1},2)} - \Lambda_{(1^{r+1})} \big)$$

and hence

$$\Lambda_{(1^{r+1})} = \Lambda_{(1^{r-1},2)} - \frac{1}{r} L \Lambda_{(1^r)}.$$

Multiplying by $a_n$ and summing over $n \leq x$, and using (31) to control the $\Lambda_{(1^{r-1},2)}$ term, one has

$$\sum_{n \leq x} \Lambda_{(1^{r+1})}(n)\, a_n = \Big( \frac{2 C G(1)}{r!} + o(1) \Big) x \log^r x - \frac{1}{r} \sum_{n \leq x} \log n\, \Lambda_{(1^r)}(n)\, a_n.$$

If we define $\delta_x$ (up to an error of $o(1)$) by the formula

$$\sum_{n \leq x} \Lambda(n)\, a_n = (\delta_x + o(1))\, C G(1)\, x,$$

then an induction then shows that

$$\sum_{n \leq x} \Lambda_{(1^r)}(n)\, a_n = (\delta_x + o(1)) \frac{C G(1)}{(r-1)!}\, x \log^{r-1} x$$

for odd $r$, and

$$\sum_{n \leq x} \Lambda_{(1^r)}(n)\, a_n = (2 - \delta_x + o(1)) \frac{C G(1)}{(r-1)!}\, x \log^{r-1} x$$

for even $r$. In particular, after adjusting $\delta_x$ by $o(1)$ if necessary, we have

$$0 \leq \delta_x \leq 2$$

since the left-hand sides are non-negative.
If we now define the comparison sequence $b_n := C G(1)$, standard multiplicative number theory shows that the above estimates also hold when $a_n$ is replaced by $b_n$ (with $\delta_x$ replaced by $1$); thus

$$\sum_{n \leq x} \Lambda_{(1^r)}(n)\, b_n = (1 + o(1)) \frac{C G(1)}{(r-1)!}\, x \log^{r-1} x$$

for both odd and even $r$. The bound (31) also holds for $b_n$ when $(k_1,\dots,k_r)$ does not consist entirely of ones, and hence

$$\sum_{n \leq x} \Lambda_{(k_1,\dots,k_r)}(n)\, (a_n - b_n) = \Big( (\delta_x - 1)(-1)^{r+1}\, 1_{(k_1,\dots,k_r) = 1^r}\, C G(1) + o(1) \Big) \frac{x \log^{k-1} x}{(k-1)!}$$

for any fixed $(k_1,\dots,k_r)$ (which may or may not consist entirely of ones).
Next, from induction (on $k$), the Leibniz rule, and (30), we see that for any $r \geq 1$ and $k_1,\dots,k_r \geq 1$, the function
is a finite linear combination of functions of the form $\Lambda_{(k'_1,\dots,k'_{r'})}$ for tuples $(k'_1,\dots,k'_{r'})$ that may possibly consist entirely of ones. We thus have
whenever $f$ is one of these functions (32). Specialising to the case
, we thus have
where
. The contribution of those $n$ that are powers of primes can be easily seen to be negligible, leading to
where now the $p_i$ range over distinct primes. The contribution of the case where two of the primes $p_i$ agree can also be seen to be negligible, as can the error when replacing $\log n$ with $\log x$, and then by symmetry
By linearity, this implies that
for any polynomial $P$ that vanishes on the coordinate hyperplanes $\{ t_i = 0 \}$. The right-hand side can also be evaluated by Mertens’ theorem as
when $r$ is odd and
when $r$ is even. Using the Weierstrass approximation theorem, we then have
for any continuous function $g_r$ that is compactly supported in the interior of the simplex $\Delta_r$. Computing the right-hand side using Mertens’ theorem as before, we obtain the claimed asymptotics (7), (8).
Remark 4 The Bombieri asymptotic sieve has to use the full power of EH (or GEH); there are constructions due to Ford that show that if one only has a distributional hypothesis up to $x^\theta$ for some fixed constant $\theta < 1$, then the asymptotics of sums such as (5), or more generally (9), are not determined by a single scalar parameter $\delta_x$, but can also vary in other ways as well. Thus the Bombieri asymptotic sieve really is asymptotic; in order to get $o(1)$ type error terms one needs the level $x^\theta$ of distribution to be asymptotically equal to $x$ as $x \to \infty$. Related to this, the quantitative decay of the $o(1)$ error terms in the Bombieri asymptotic sieve is extremely poor; in particular, they depend on the dependence of the implied constant in axiom (iv) on the parameters $\epsilon, A$, for which there is no consensus on what one should conjecturally expect.
17 July, 2016 at 9:19 am
sylvainjulien
Would a weakened form of Cramer’s conjecture like $g_{n}=\log^{O(1)}p_{n}$ be strong enough to imply the twin prime conjecture?
19 July, 2016 at 3:33 am
sylvainjulien
By the way, this link may give further support to Cramer’s conjecture: http://math.stackexchange.com/questions/1859480/growth-of-pinr-0n-pin-r-0n-with-r-0n-inf-r-ge-0-n-r-n?noredirect=1#comment3818092_1859480
17 July, 2016 at 11:43 am
Bhupinder Singh Anand
1. What seems to prevent a non-heuristic determination of the limiting behaviour of prime counting functions is that the usual approximations of
for finite values of
are apparently derived from real-valued functions which are asymptotic to
, such as
,
and Riemann’s function
.
2. The degree of approximation for finite values of
is thus determined only heuristically, by conjecturing upon an error term in the asymptotic relation that can be seen to yield a closer approximation than others to the actual values of
.
3. Moreover, currently, conventional approaches to evaluating prime counting functions for finite
may also subscribe to the belief:
(i) either—explicitly (see here)—that whether or not a prime
divides an integer
is not independent of whether or not a prime
divides the integer
;
(ii) or—implicitly (since the twin-prime problem is yet open)—that a proof to the contrary must imply that if
is the probability that
is a prime, then
.
4. If so, then conventional approaches seem to conflate the two probabilities:
(i) The probability
of selecting a number that has the property of being prime from a given set
of numbers;
Example 1: I have a bag containing
numbers in which there are twice as many composites as primes. What is the probability that the first number you blindly pick from it is a prime? This is the basis for setting odds in games such as roulette.
(ii) The probability
of determining a proper factor of a given number
.
Example 2: I give you a
-digit combination lock along with a
-digit number
. The lock only opens if you set the combination to a proper factor of
which is greater than
. What is the probability that the first combination you try will open the lock? This is the basis for RSA encryption, which provides the cryptosystem used by many banks for securing their communications.
5. In case 4(i), if the precise proportion of primes to non-primes in
is definable, then clearly
too is definable.
However if
is the set
of all integers, and we cannot define a precise ratio of primes to composites in
, but only an order of magnitude such as
, then equally obviously
cannot be defined in
(see Chapter 2, p.9, Theorem 2.1, here).
6. In case 4(ii) it follows that
, since the argument can be used to show that whether or not a prime
divides a given integer
is independent of whether or not a prime
divides
.
7. We thus have that
, with a binomial standard deviation. Hence, even though we cannot define the probability
of selecting a number from the set
of all natural numbers that has the property of being prime,
can be treated as the de facto probability that a given
is prime.
8. Further, by considering the asymptotic density of the set of all integers that are not divisible by the first
primes
we can show that, for any
, the expected number of such integers in any interval of length
is
.
9. We can then show that a non-heuristic approximation—with a binomial standard deviation—for the number of primes less than or equal to
is given for all
by
for some constant
.
10. We can show, similarly, that the expected number of Dirichlet and twin primes in the interval (
) can be estimated similarly; and conclude that the number of such primes
is, in each case, cumulatively approximated non-heuristically by a function that
.
11. The method can, moreover, be generalised to apply to a large class of prime counting functions.
2 August, 2016 at 2:46 pm
primework123
FYI: The Riemann prime-counting function involving the zeros of the Riemann zeta function is the real deal at determining the distribution of primes (number and placement) along the natural number line.
Please visit the following links for all reasons and details:
https://www.researchgate.net/post/What_is_the_most_efficient_way_of_predicting_prime_numbers_accurately
https://www.researchgate.net/post/What_is_the_correct_proof_of_the_famous_and_important_Polignac_Conjecture
18 July, 2016 at 4:56 am
Zak
Breaking the parity barrier has usually been the consequence of controlling bilinear sums, done for instance in Friedlander and Iwaniec’s work on primes of the form $x^2 + y^4$. From the viewpoint that sieve theory is a linear programming problem, I wonder if one could impose additional linear constraints in order to overcome the parity problem. For instance we could impose the condition that
is small, where $\lambda$ is the Liouville function.
18 July, 2016 at 8:17 am
Jhon Manugal
Do you have this
and also this? I don't know enough to correct you
18 July, 2016 at 12:08 pm
Lior Silberman
In the second equation, the summand is missing a factor of
18 July, 2016 at 8:42 am
Anonymous
Dear Terry,
Happy birthday to you
But the greatest birthday present would be for you to solve the twin prime conjecture, as I expect you will. That would also be a gift for all mathematicians. Time goes by quickly; it waits for no one. A person grows old very fast without doing great things.
18 July, 2016 at 1:21 pm
Anonymous
It is still not clear from this post what seems to be the most promising approach for a new advance in the twin prime conjecture. Is a breakthrough in the parity problem really necessary?
18 July, 2016 at 4:34 pm
Terence Tao
Well, the twin prime sum $\sum_{n \leq x} \Lambda(n)\Lambda(n+2)$ is certainly parity sensitive (viewing one of the factors as the sequence to be sieved and the other factor as the sieve), so any nontrivial estimate on it has to break the parity barrier one way or another, and this to my mind is one of the major reasons why progress on the twin prime conjecture will be interesting (though of course the accessibility and history of the conjecture is also appealing). It is not quite the weakest point in the parity barrier to breach though – given the recent advances on the Chowla conjecture, I would imagine that that will fall first before the twin prime conjecture does, since one has the additional tool of multiplicativity at small primes at one’s disposal in that case. Since (as discussed in the post above) the twin prime conjecture is basically equivalent (on GEH) to Chowla restricted to almost twin primes, one can hope to approach the twin prime conjecture by first proving Chowla, and then somehow removing, or at least greatly reducing, the reliance on multiplicativity at small primes in that proof. We already have the Chowla conjecture in the log-averaged setting, but that proof is currently extremely reliant on this multiplicativity, but perhaps there will be other proofs in the future that are less reliant.
It may also be possible that the twin prime conjecture could be proven by non-analytic means, in a way that does not lead to significantly new estimates on the sum $\sum_{n \leq x} \Lambda(n)\Lambda(n+2)$ (though this sum will of course have to go to infinity as $x \to \infty$ if the twin prime conjecture holds). This is currently the situation in the function field setting, where results of Hall and of Pollack show that there are infinitely many twin irreducible polynomials over any given field of odd order; this relies on some criteria for polynomial irreducibility that are only available to a very sparse set of polynomials, and which are not expected to have analogues in the integer setting. Nevertheless the parity barrier in the function field setting is still very much in effect, and the situation there is not that much better actually than in the integer case (except in the large $q$ limit, where much more is known).
18 July, 2016 at 8:05 pm
Bhupinder Singh Anand
1. Yes, instead of estimating
heuristically by analytic considerations, one could also estimate the number
of twin primes
non-heuristically as
, where
is the de facto probability that a given
is prime.
2. One way of approaching this would be to define an integer
as a
integer if, and only if,
and
for all
, where
is defined for all
by:
3. Note that if
is a
integer, then both
and
are not divisible by any of the first
primes
.
4. The asymptotic density of
integers over the set of natural numbers is then
.
5. Further, if
is a
integer, then
is a prime and either
is also a prime, or
.
6. If we define
as the number of
integers
, the expected number of
integers in any interval
is given—with a binomial standard deviation—by
.
7. Since
is at most one less than the number of twin-primes in the interval
, it follows that:
8. Now, the expected number of
integers in the interval
is given by:
9. We conclude that the number
of twin primes
is given by the cumulative non-heuristic approximation:
18 July, 2016 at 11:07 pm
Anonymous
It seems natural to expect that a combination of sieve methods with the abc conjecture (which is sufficiently strong for many famous problems in additive number theory) should give some progress on the twin prime conjecture. Is it possible? (or perhaps a new version of the abc conjecture is needed for such a progress?)
19 July, 2016 at 9:03 am
Terence Tao
The abc conjecture constrains the behaviour of numbers with many repeated prime factors (such as powerful numbers or very smooth numbers), since it is those numbers for which the radical is going to be small enough that the abc conjecture carries nontrivial content. This is why that conjecture is useful for problems such as Fermat’s last theorem. The twin prime conjecture, by contrast, seems to relate only to the behaviour of numbers with very few factors (i.e. primes and almost primes). So there does not appear to be any obvious mechanism in which the abc conjecture could be deployed to improve upon the known results on the twin prime conjecture.
23 August, 2016 at 4:50 pm
Will Sawin
I think in the function field case one can now get many twin primes and not just a few, using symmetry tricks. For fixed $q>104$, there are a large number of pairs of monic irreducible polynomials of degree $n$ that differ by $1$, for all sufficiently large $n$ relatively prime to $q-1$. The point is that a pair of polynomials that differ by any constant can often be transformed into a pair that differ by $1$, so bounded gaps implies twin primes. See http://web.stanford.edu/~rjlo/papers/11-BoundedGaps.pdf thm 1.4 which doesn’t state that there are many such pairs, but it follows from the argument.
23 August, 2016 at 10:53 pm
Anonymous
It would be interesting to see if there are also Zhang type (i.e. above $x^{1/2}$) estimates for levels of distribution for such number (or function) fields and their dependence on some parameters of each particular field.
19 July, 2016 at 10:31 am
Lior Silberman
Erratum (repeated): the second displayed equation after equation (3) is missing a factor of
in the summand.
[Corrected, thanks – T.]
19 July, 2016 at 3:31 pm
primework123
On the Proof Sketch of the famous Polignac Conjecture (https://en.wikipedia.org/wiki/Polignac%27s_conjecture):
We should consider three laws which govern the general behaviour of all prime numbers or their distribution (placement and number) along the natural number line:
Prime Work:
(1) There are infinitely many more positive integers (even or odd) than there are prime numbers, or prime numbers have a zero density relative to the positive integers according to the Prime Number Theorem (PNT), and
(2) prime numbers generate the positive even integers so efficiently that gaps between two consecutive prime numbers increase without bound.
(3) Prime Parity Law (PPL):
Pi(e = m*g = 1 + p_2n) = 2 * Pi(g = 1 + p_n) = 2n where Pi(*) is the prime counting function, and p_n > 2, p_2n are odd prime numbers; 2 < m ≤ 3;
and as g goes to infinity, m goes to 2.
From our understanding of the work of primes, we create an exceptional set, E, of consecutive odd prime numbers, p_n and p_n+1, whose prime gap, |p_n, p_n+1 | is some positive even integer, 2i. And we let P be the set of all odd prime numbers.
For example we construct the exceptional set, E_i:
E_i = {p_n, p_n+1 : p_n, p_n+1 ∈ P and |p_n – p_n+1| = 2i for any positive integer, i}.
And if we assume 0 ≤ |E_i| < ∞, then we must have the following probability calculation:
Prob( |p_n – p_n+1| = 2i | for all odd primes, p_n, p_n+1 ∉ E_i) = 0.
If we cannot verify this result, then we have a contradiction! And Polignac conjecture is true! :-) Bonne chance!
Reference link: https://www.researchgate.net/profile/David_Cole24
26 July, 2016 at 5:34 pm
primework123
Updated Link:
https://www.researchgate.net/post/What_is_the_correct_proof_of_the_famous_and_important_Polignac_Conjecture
27 July, 2016 at 6:29 am
primework123
(2) Prime numbers generate the positive even integers so efficiently that gaps between two consecutive prime numbers increase without bound if and only if the Goldbach Conjecture and the Polignac Conjecture are true.
Reference link: https://www.researchgate.net/post/What_is_the_correct_proof_of_the_famous_and_important_Polignac_Conjecture
27 July, 2016 at 6:34 pm
primework123
An update:
primework123
(2) Prime numbers generate the positive even integers so efficiently according to the Prime Number Theorem that gaps between two consecutive prime numbers increase without bound if and only if the Goldbach Conjecture and the Polignac Conjecture are true.
Reference link:
https://www.researchgate.net/post/What_is_the_correct_proof_of_the_famous_and_important_Polignac_Conjecture
28 July, 2016 at 7:21 am
primework123
Hmm. For the sake of more clarity we should have:
(2) … the gaps between two consecutive prime numbers increase in size without bound …
20 July, 2016 at 12:24 am
Maths student
Dear Prof. Tao,
just as an idea: It would be kind of beneficial to a lot of people if the lectures you give would be available via the internet in the form of a video; for, this would allow for consuming mathematics while eating, something which I find rather difficult because I can’t keep my head in a fixed position, making it impossible to read the stuff on the screen (or alternatively, I have to enlarge stuff, but then it looks bad and one has to scroll all the time).
It would be a great joy to see lectures of yours on topics that I have a chance of understanding!
w/br
22 August, 2016 at 7:32 pm
Anonymous
A number of Prof. Tao’s lectures are on youtube.
26 July, 2016 at 2:20 pm
Gil Kalai
Apropos the twin prime conjecture, are all the difficulties for showing infinitely many primes of gap two apply for any other *fixed* gap? (In other words, is there a possibility that showing infinitely many pairs of primes of gap 23,101 (precisely) might be somehow easier?)
26 July, 2016 at 7:35 pm
Anonymous
Only even gaps (unlike 23,101) should be considered.
26 July, 2016 at 10:12 pm
Gil Kalai
(pppps, Yes, I meant 23,102)
28 July, 2016 at 2:30 am
Bhupinder Singh Anand
“… is there a possibility that showing infinitely many pairs of primes of gap 23,102 (precisely) might be somehow easier?”
1. On the contrary. Although the reasoning should, prima facie, be similar, estimating the number
of twin(2) primes
non-heuristically as
—where
is the de facto probability that a given
is a prime—is obviously less complicated than estimating the number
of twin(23,102) primes
non-heuristically as
, where
is the de facto probability that a given
is composite.
2. In the case of a twin(2) prime, one would define an integer
as a
integer if, and only if,
and
for all
, where
is defined for all
by:
3. Note that if
is a
integer, then both
and
are not divisible by any of the first
primes
.
4. The asymptotic density of
integers over the set of natural numbers is then
.
5. Further, if
is a
integer, then either both
and
are primes, or
.
6. If we define
as the number of
integers
, the expected number of
integers in any interval
is given—with a binomial standard deviation—by
.
7. Since
is at most one less than the number of twin(2) primes in the interval
, it follows that:
8. Now, the expected number of
integers in the interval
is given by:
9. We conclude that the number
of twin(2) primes
is given by the cumulative non-heuristic approximation:
10. However, in the case of a twin(23,102) prime, in order to argue similarly one would need to define, and consider only, integers
as
integers if, and only if,
and
for all
, but with the added qualification that, for each
, we must have
for some
, where
is defined for all
by:
29 July, 2016 at 6:15 am
gowers
The parity problem strikes again …
27 July, 2016 at 1:45 am
Anonymous
According to the Wikipedia article on Polignac’s conjecture, the first Hardy-Littlewood conjecture gives (for each even $k$) an explicit asymptotic estimate for the number of prime gaps of size $k$ below $x$. This estimate implies that each odd prime factor $p$ of $k$ increases the conjectured density (of prime gaps of size $k$) compared to the density of twin primes by the factor $\frac{p-1}{p-2}$. Therefore any prime gap with many distinct odd prime factors seems to be more frequent (and perhaps easier for such a proof).
27 July, 2016 at 8:02 am
Gil Kalai
My very vague intuition is also in this direction. So e.g. now that we know that there are infinitely many pairs of gap at most 600 maybe we can prove that there are infinitely many pairs of gap 600! (Here ! is “factorial”) But specifically I wonder also if the infamous obstructions for twin primes apply also for 600! gap.
Here is another obvious question that maybe is not hopeless now: are there infinitely many consecutive primes p and q such that q-p is between 600 and 600! (Again, ! is factorial!)
27 July, 2016 at 8:25 am
poqv
Yes the obstruction still applies. The “obstruction” otherwise known as the parity barrier is encapsulated in the following problem: A sieve is not able to show that n (n + 2) is infinitely often a product of two primes, and it’s not able to show that this is infinitely often a product of three primes. Showing either of these statements would break the parity barrier. However sieves can show that n (n + 2) is infinitely often the product of either two or three primes. To break through this barrier you need additional techniques that break the parity barrier (such as analytic input from bilinear forms). Now sieves are also able to show results of the type “at least one of the objects in this finite set is prime (or other desirable property)”, but can’t tell you which element exactly, for example there are also results on Artin’s conjecture on primitive roots which say that out of any set of three non-squares at least one satisfies Artin’s conjecture.
30 July, 2016 at 12:01 am
Bhupinder Singh Anand
One way to avoid the ‘parity’ barrier when estimating prime counting functions may be to recognise the following, essentially different, instances that involve defining the probability of an integer:
(i) The probability
of selecting an integer that has the property
from a given set
of integers;
Example 1: If
is the set of natural numbers, what is the probability of selecting an integer
that has the property of being a prime?
Since we cannot define a precise ratio of primes to composites in
, but only an order of magnitude such as
, the probability
of selecting an integer that is a prime obviously cannot be defined in
.
(ii) The probability
that an integer, in a given set
of integers, has the property
;
Example 2: If
is the set of positive integers, what is the probability that an integer
is even? This is the basis for setting odds in games such as roulette; or for determining the probability of the spin state of a particle.
Since the ratio of odd to even numbers in
is
, the probability
that an integer
has the property of being even—which obviously cannot depend upon the probability
of selecting an integer
that has the property of being even—must be
, even though
.
(iii) The probability
of determining that a given integer
has the property
.
Example 3: I give you a
-digit combination lock along with a
-digit integer
. The lock only opens if you set the combination to a proper factor of
which is greater than
. What is the probability that the first combination you try will open the lock? This is the basis for RSA encryption, which provides the cryptosystem used by many banks for securing their communications.
By considering the behaviour of
, which is defined for all
by:
appropriately, this example admits that, whether or not a prime
divides a given integer
, is indeed independent of whether or not a prime
divides
; whence we have that
.
Whilst the ‘parity’ barrier is encountered in the first case, the last appears to circumvent it if we define prime counting functions in terms of the residues
, which are best expressed in a 2-dimensional representation of Eratosthenes Sieve.
30 July, 2016 at 3:19 am
Anonymous
It seems that like other famous problems (e.g. FLT or the Poincare conjecture) if there is a solution for the twin prime conjecture, it may follow as a particular case of a solution to a more general problem (whose formulation is still unknown!) which allows the use of additional (and perhaps more general) mathematical methods.
1 August, 2016 at 1:19 pm
primework123
FYI: On A Simpler Proof of Fermat’s Last Theorem,
https://www.researchgate.net/publication/300080369_A_Simpler_Proof_of_Fermat%27s_Last_Theorem
1 August, 2016 at 5:01 pm
Bhupinder Singh Anand
Well yes, in the sense that we can define a Generalised Prime Counting Function,
, which estimates the number of integers
such that there are
values that cannot occur amongst the residues
for
, where
is defined for all
by:
Thus
yields an estimate for the number
of primes
; and
an estimate for the number of
integers
, which approximates the number
of twin primes
.
Note that:
2 August, 2016 at 10:27 am
Bhupinder Singh Anand
At a slightly deeper level, the ‘general’ result—which admits non-heuristic estimations of prime-counting functions such as
,
, and even of Dirichlet primes in an arithmetical progression—is that the prime divisors of an integer are independent.
The argument runs as follows:
(a) The probability
that the spin of an
-faced cryptex wheel—with faces numbered
—will yield the value
, is
by the probability model for such an event as definable over the probability space
.
(b) The probability
that the simultaneous spin of one
-faced cryptex wheel, and one
-faced cryptex wheel, will yield the values
and
, respectively, is
by the probability model for such a simultaneous event as defined over the probability space
.
(c) If
and
are co-prime, the compound probability
of correctly determining that
divides
and
divides
from the simultaneous spin of one
-faced cryptex wheel and one
-faced cryptex wheel, is the product of the probability of correctly determining that
divides
from the spin of an
-faced cryptex wheel, and the probability of correctly determining that
divides
from the spin of a
-faced cryptex wheel.
Reason: If
and
are co-prime, and
, then the
integers
are all incongruent and form a complete system of residues. It follows that
—whence
divides
—and also
—whence
divides
—if, and only if
.
Note: The assumption that
and
be co-prime is necessary, since the above would not follow if
and
were not co-prime. For instance, let
. The probability that the spin of an
-faced cryptex wheel will then yield
—and allow us to correctly conclude that
divides
—is
, and the probability that the spin of a
-faced cryptex wheel will then yield
—and allow us to correctly conclude that
divides
—is
; but the probability of correctly determining both that
divides
, and that
divides
, from a simultaneous spin of the two cryptex wheels is
, and not
.
(d) If
and
are two unequal primes, the probability of determining whether
divides
is thus independent of the probability of determining whether
divides
.
The significance of (d) for non-heuristic estimation of prime-counting functions is that;
(e) The non-heuristic expected number of ‘prime’ combinations—i.e., where a
does not appear, and so the combination corresponds 1-1 to a prime
—which occur in a set of
simultaneous spins of the
cryptex wheels with
faces, respectively, is a non-heuristic estimate of the number
of primes
; whence
, with a binomial standard deviation.
Moreover, even though we cannot define the probability
of selecting an integer from the set
of all natural numbers that has the property of being prime, it follows that:
(f) The non-heuristic probability that an integer
has the property of being a prime is
.
The wider significance of (d) is that it also allows computing the complexity of Integer Factorisation:
(g) Since any given integer
is a prime if, and only if, it is not divisible by any prime
, and since
may be the square of a prime, it follows that we necessarily require at least one logical operation for each prime
in order to logically determine whether
is a prime divisor of
.
(h) Since the number of such primes is of the order
, and the prime divisors of an integer are mutually independent, the number of computations required by any deterministic algorithm that always computes a prime factor of
cannot be polynomial-time—i.e. of order
for any
—in the length of the input
.
3 August, 2016 at 1:03 pm
Anonymous
If true, this short proof must be from “the book” !
6 August, 2016 at 3:08 am
Jhon Hard
It is absolutely true. I hope that Professor Tao gives us his evaluation as soon as possible.
11 August, 2016 at 8:25 am
MDC
I was more interested in the sifting density had by eye cue balls. The parity problem is kind of “odd” but even as well and your “idiosyncratic twist” with a “Fourier-analytic flavour” is predictable but leaves slit tongues in the mouth of my gift stalking horse. While undergraduates pay more and half of empirical studies are getting invalidated because people can’t learn elementary regression diagnostics from the math departments, I guess prime number theory is fine, as long as you keep the discontinuity-challenged away from Wall Street. But seriously, a trillion zeros and I start falling asleep. However, as Gil can attest, you guys are still making more progress than the quantum computer crowd doing exponential computations on a psi-ontic wave function. Zoom, bank, spill… clowns.
14 August, 2016 at 2:58 am
Jeffrey Helkenberg
I would like to suggest that the twin prime problem has been incorrectly addressed, and that to prove there are an infinite number of them requires thinking along different lines. For example, there are 9 classes of twin primes, given by Sloane’s A224855, A224856, A224859, A224854, A224860, A224862, A224864, A224865 and A224857. To prove that A224854 [Numbers n such that 90*n+11 and 90*n+13 are twin prime] contains infinitely many n one must produce an algorithm capable of reducing a sequence m [0,1,2,3,4,5,6,7,8,9…] to a smaller sequence n [0,1,2,3,5,7,9…] in such a way that the algorithm does not contain a state dependency (the algorithm does not rely in any way on terms from the output to generate further results). What I mean is that such an algorithm cannot have an internal reference, as we find with traditional sieves.
There are 24 sequence generator functions required to produce a list n of twin primes beneath some given limit. I will not attempt to reproduce those generator functions here; rather, I will refer you to some code. I am not a mathematician, I am an algorithm designer. As such, I cannot express the content of my algorithms in “your” language. I know that you will instantly see the relationships and will immediately understand the implications of this work.
There is one other sequence that is germane to this discussion, one that I feel will be far more illustrative that the twin prime sequence. Please refer to A255491 for a better understanding of the sieve referenced in the following link.
https://github.com/superobserver/elder-sieve
22 August, 2016 at 2:28 pm
Romain Viguier
Well done!
24 August, 2016 at 8:10 am
Jeffrey Helkenberg
Imagine listening to Beethoven’s 5th Symphony and then trying to write music for a single instrument to recreate what you heard. I feel that most attempts at making sense of the prime distribution amounts to little more than that, to wit, “That dog won’t hunt.” It is a valid approach to separate the twin prime conjecture into 9 separate problems, as the twin primes demonstrably reduce to 9 separate sequences. And as with reverse-engineering a symphony, it gets a lot easier to make sense of things once you realize there is more than one instrument at work. The same is true of the prime distribution; with Eratosthenes we have sq.rt limit functions necessary to reduce a list of numbers to a list of primes. This tells us nothing about the prime distribution as an object unto itself, rather it tells us about the functions that are “discovered along the way.” Perhaps most importantly the SoE is not a method to generate primes it is a method to generate composites while leaving a residue of primes. Well if you segregate the primes into 24 classes distributing composites becomes trivial. So as not to confuse the issue, let’s remain in base-10 and look at sequence A142312. This sequence is a counterpart to A181732. Obviously, being a list of primes, we can state that it is non-trivial to generate. However, as opposed to SoE, it is “insanely trivial” to produce the list of composites. There is no base-10 composite sequence associated with A142312, but there is a composite sequence for A181732, namely A255491. Below I offer you an alternative sieve method for generating the base-10 equivalent to A255491. Now, I am no mathematician, but I think that this is a radical improvement over SoE in terms of having no internal references and therefore it allows for OoOE (unlike SoE which requires strictly sequential processing). Regarding the python code below you can of course change the limit to suit your curiosity. Note: The 24 classes reduce to digital root and last digit preserving sequences, indicating that simple algebraic relations underpin the allowable p*q relationships.
import sys

limit = 1
limit2 = 2
#composites for 19,19
var_19 = [19 + (90*x) for x in xrange(0, limit)]
var_19a = [y * (19 + (90*z)) for y in var_19 for z in xrange(0, limit2)]
print var_19a
#composites for 91,91
var_91 = [91 + (90*x) for x in xrange(0, limit)]
var_91a = [y * (91 + (90*z)) for y in var_91 for z in xrange(0, limit2)]
print var_91a
#composites for 37,73
var_37 = [37 + (90*x) for x in xrange(0, limit)]
var_73 = [y * (73 + (90*z)) for y in var_37 for z in xrange(0, limit2)]
var_73a = [73 + (90*x) for x in xrange(0, limit)]
var_37a = [y * (37 + (90*z)) for y in var_73a for z in xrange(0, limit2)]
print var_73
print var_37a
#composites for 11,41
var_11 = [11 + (90*x) for x in xrange(0, limit)]
var_41 = [y * (41 + (90*z)) for y in var_11 for z in xrange(0, limit2)]
var_41a = [41 + (90*x) for x in xrange(0, limit)]
var_11a = [y * (11 + (90*z)) for y in var_41a for z in xrange(0, limit2)]
print var_41
print var_11a
#composites for 47,23
var_47 = [47 + (90*x) for x in xrange(0, limit)]
var_23 = [y * (23 + (90*z)) for y in var_47 for z in xrange(0, limit2)]
var_23a = [23 + (90*x) for x in xrange(0, limit)]
var_47a = [y * (47 + (90*z)) for y in var_23a for z in xrange(0, limit2)]
print var_23
print var_47a
#composites for 83,77
var_83 = [83 + (90*x) for x in xrange(0, limit)]
var_77 = [y * (77 + (90*z)) for y in var_83 for z in xrange(0, limit2)]
var_77a = [77 + (90*x) for x in xrange(0, limit)]
var_83a = [y * (83 + (90*z)) for y in var_77a for z in xrange(0, limit2)]
print var_77
print var_83a
#composites for 13,7
var_13 = [13 + (90*x) for x in xrange(0, limit)]
var_7 = [y * (7 + (90*z)) for y in var_13 for z in xrange(0, limit2)]
var_7a = [7 + (90*x) for x in xrange(0, limit)]
var_13a = [y * (13 + (90*z)) for y in var_7a for z in xrange(0, limit2)]
print var_7
print var_13a
#composites for 31,61
var_31 = [31 + (90*x) for x in xrange(0, limit)]
var_61 = [y * (61 + (90*z)) for y in var_31 for z in xrange(0, limit2)]
var_61a = [61 + (90*x) for x in xrange(0, limit)]
var_31a = [y * (31 + (90*z)) for y in var_61a for z in xrange(0, limit2)]
print var_61
print var_31a
#composites for 49,79
var_49 = [49 + (90*x) for x in xrange(0, limit)]
var_79 = [y * (79 + (90*z)) for y in var_49 for z in xrange(0, limit2)]
var_79a = [79 + (90*x) for x in xrange(0, limit)]
var_49a = [y * (49 + (90*z)) for y in var_79a for z in xrange(0, limit2)]
print var_79
print var_49a
#composites for 67,43
var_67 = [67 + (90*x) for x in xrange(0, limit)]
var_43 = [y * (43 + (90*z)) for y in var_67 for z in xrange(0, limit2)]
var_43a = [43 + (90*x) for x in xrange(0, limit)]
var_67a = [y * (67 + (90*z)) for y in var_43a for z in xrange(0, limit2)]
print var_43
print var_67a
#composites for 17,53
var_17 = [17 + (90*x) for x in xrange(0, limit)]
var_53 = [y * (53 + (90*z)) for y in var_17 for z in xrange(0, limit2)]
var_53a = [53 + (90*x) for x in xrange(0, limit)]
var_17a = [y * (17 + (90*z)) for y in var_53a for z in xrange(0, limit2)]
print var_53
print var_17a
#composites for 71,71
var_71 = [71 + (90*x) for x in xrange(0, limit)]
var_71a = [y * (71 + (90*z)) for y in var_71 for z in xrange(0, limit2)]
print var_71a
#composites for 89,89
var_89 = [89 + (90*x) for x in xrange(0, limit)]
var_89a = [y * (89 + (90*z)) for y in var_89 for z in xrange(0, limit2)]
print var_89a
sys.exit(0)
26 August, 2016 at 2:45 am
Jeffrey Helkenberg
Left one of the functions out. You can add the mergedlist lines at the end of the script to make more sense of the output, in terms of generating a csv output file that can be easily sorted. Captures all the composites provided the ranges are adequate.
import csv

#composites for 29, 59 (mirroring the paired pattern above, which also defines var_29a)
var_29 = [29 + (90*x) for x in xrange(0, limit)]
var_59 = [y * (59 + (90*z)) for y in var_29 for z in xrange(0, limit2)]
var_59a = [59 + (90*x) for x in xrange(0, limit)]
var_29a = [y * (29 + (90*z)) for y in var_59a for z in xrange(0, limit2)]
print var_59
print var_29a
# combine all composite lists from both scripts, deduplicate, and write to CSV
mergedlist = list(set(var_59 + var_29a + var_19a + var_91a + var_73 + var_37a + var_41 + var_11a + var_23 + var_47a + var_77 + var_83a + var_7 + var_13a + var_61 + var_31a + var_79 + var_49a + var_43 + var_67a + var_53 + var_17a + var_71a + var_89a))
values = mergedlist
thecsv = csv.writer(open("composites.csv", 'wb'))
for value in values:
    thecsv.writerow([value])
10 May, 2018 at 11:53 am
Anonymous
ie:
If this is true, the conjecture is true
Hint: No, no, no.
Also, read this: https://terrytao.wordpress.com/career-advice/dont-prematurely-obsess-on-a-single-big-problem-or-big-theory/