This post contains two unrelated announcements. Firstly, I would like to promote a useful list of resources for AI in Mathematics, that was initiated by Talia Ringer (with the crowdsourced assistance of many others) during the National Academies workshop on “AI in mathematical reasoning” last year. This list is now accepting new contributions, updates, or corrections; please feel free to submit them directly to the list (which I am helping Talia to edit). Incidentally, next week there will be a second followup webinar to the aforementioned workshop, building on the topics covered there. (The first webinar may be found here.)
Secondly, I would like to advertise the erdosproblems.com website, launched recently by Thomas Bloom. This is intended to be a living repository of the many mathematical problems proposed in various venues by Paul Erdős, who was particularly noted for his influential posing of such problems. For a tour of the site and an explanation of its purpose, I can recommend Thomas’s recent talk on this topic at a conference last week in honor of Timothy Gowers.
Thomas is currently issuing a call for help to develop the erdosproblems.com website in a number of ways (quoting directly from that page):
- You know Github and could set a suitable project up to allow people to contribute new problems (and corrections to old ones) to the database, and could help me maintain the Github project;
- You know things about web design and have suggestions for how this website could look or perform better;
- You know things about Python/Flask/HTML/SQL/whatever and want to help me code cool new features on the website;
- You know about accessibility and have an idea how I can make this website more accessible (to any group of people);
- You are a mathematician who has thought about some of the problems here and wants to write an expanded commentary for one of them, with lots of references, comparisons to other problems, and other miscellaneous insights (mathematician here is interpreted broadly, in that if you have thought about the problems on this site and are willing to write such a commentary you qualify);
- You knew Erdős and have any memories or personal correspondence concerning a particular problem;
- You have solved an Erdős problem and I’ll update the website accordingly (and apologies if you solved this problem some time ago);
- You have spotted a mistake, typo, or duplicate problem, or anything else that has confused you and I’ll correct things;
- You are a human being with an internet connection and want to volunteer a particular Erdős paper or problem list to go through and add new problems from (please let me know before you start, to avoid duplicate efforts);
- You have any other ideas or suggestions – there are probably lots of things I haven’t thought of, both in ways this site can be made better, and also in what else could be done with this project. Please get in touch with any ideas!
I for instance contributed a problem to the site (#587) that Erdős himself gave to me personally (this was the subject of a somewhat well known photo of Paul and myself, and he communicated the problem to me again shortly afterwards on a postcard; links to both images can be found by following the above link). As it turns out, this particular problem was essentially solved in 2010 by Nguyen and Vu.
(Incidentally, I also spoke at the same conference that Thomas spoke at, on my recent work with Gowers, Green, and Manners; here is the video of my talk, and here are my slides.)
I have just uploaded to the arXiv my paper “The convergence of an alternating series of Erdős, assuming the Hardy–Littlewood prime tuples conjecture“. This paper concerns an old problem of Erdős concerning whether the alternating series $\sum_{n=1}^\infty \frac{(-1)^n n}{p_n}$ converges, where $p_n$ denotes the $n^{\mathrm{th}}$ prime. The main result of this paper is that the answer to this question is affirmative assuming a sufficiently strong version of the Hardy–Littlewood prime tuples conjecture.
The alternating series test does not apply here because the ratios $\frac{n}{p_n}$ are not monotonically decreasing. The deviations from monotonicity arise from fluctuations in the prime gaps $p_{n+1} - p_n$, so the enemy arises from biases in the prime gaps for odd and even $n$. By changing variables from $n$ to $p_n$ (or more precisely, to the integers $m$ in the range between $p_n$ and $p_{n+1}$), this is basically equivalent to biases in the parity of the prime counting function $\pi(m)$. Indeed, it is an unpublished observation of Said that the convergence of $\sum_{n=1}^\infty \frac{(-1)^n n}{p_n}$ is equivalent to the convergence of $\sum_m \frac{(-1)^{\pi(m)}}{\log^2 m}$. So this question is really about trying to get a sufficiently strong amount of equidistribution for the parity of $\pi(m)$.
The prime tuples conjecture does not directly say much about the value of $\pi(m)$; however, it can be used to control differences $\pi(m + \lambda \log x) - \pi(m)$ for $m$ of size $x$ and $\lambda$ not too large. Indeed, it is a famous calculation of Gallagher that for fixed $\lambda$, and $m$ chosen randomly from $1$ to $x$, the quantity $\pi(m + \lambda \log x) - \pi(m)$ is asymptotically distributed according to the Poisson distribution of mean $\lambda$ if the prime tuples conjecture holds. In particular, the mean of the parity $(-1)^{\pi(m+\lambda\log x) - \pi(m)}$ should be asymptotic to $e^{-2\lambda}$. An application of the van der Corput $A$-process then gives some decay on the mean of $(-1)^{\pi(m)}$ as well. Unfortunately, this decay is a bit too weak for this problem; even if one uses the most quantitative version of Gallagher’s calculation, worked out in a recent paper of (Vivian) Kuperberg, the best bound on the mean is only a slowly decaying quantity, which is not quite strong enough to overcome a doubly logarithmic divergence in the resulting sums.
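As a quick sanity check on the Poisson heuristic described above, one can empirically sample short intervals at a modest height and compare the prime counts, and the parity of those counts, against the Poisson predictions. The height, window length, and sample count below are illustrative choices only, and at such small scales one only expects rough agreement:

```python
import math
import random

# Monte Carlo check of the Poisson heuristic: for m drawn at random near X,
# the number of primes in (m, m + lam*log X] should be roughly Poisson with
# mean lam, so the parity (-1)^count should have mean roughly exp(-2*lam).
# X, lam, and the number of samples are illustrative choices.

def prime_flags(limit):
    """Sieve of Eratosthenes: flags[n] is True iff n is prime."""
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            for q in range(p * p, limit + 1, p):
                flags[q] = False
    return flags

random.seed(0)
X = 2_000_000
lam = 1.0
L = int(lam * math.log(X))          # interval length, about 14
flags = prime_flags(X + L + 1)

counts = [sum(flags[m + 1 : m + L + 1])
          for m in (random.randrange(X // 2, X) for _ in range(20000))]

mean_count = sum(counts) / len(counts)
mean_parity = sum((-1) ** c for c in counts) / len(counts)
print(mean_count)    # roughly lam = 1
print(mean_parity)   # roughly exp(-2)
```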
To get around this obstacle, we take advantage of the random sifted model of the primes that was introduced in a paper of Banks, Ford, and myself. To model the primes in an interval such as $[m, m + \lambda \log x]$ with $m$ drawn randomly from say $[x, 2x]$, we remove one random residue class from this interval for each prime up to Pólya’s “magic cutoff” $A \approx x^{1/e^\gamma}$. The prime tuples conjecture can then be interpreted as the assertion that the random set produced by this sieving process is statistically a good model for the primes in $[m, m + \lambda \log x]$. After some standard manipulations (using a version of the Bonferroni inequalities, as well as some upper bounds of Kuperberg), the problem then boils down to getting sufficiently strong estimates for the expected parity $\mathbb{E} (-1)^{|S|}$ of the random sifted set $S$.
For this problem, the main advantage of working with the random sifted model, rather than with the primes or the singular series arising from the prime tuples conjecture, is that the sifted model can be studied iteratively from the partially sifted sets $S_z$ arising from sifting by the primes up to some intermediate threshold $z$, and that the expected parity of $|S_z|$ experiences some decay in $z$. Indeed, once $z$ exceeds the length $\lambda \log x$ of the interval, sifting by an additional prime $p$ will cause $S_z$ to lose one element with probability $|S_z|/p$, and remain unchanged with probability $1 - |S_z|/p$. If $|S_z|$ concentrates around some value $\overline{S}_z$, this suggests that the expected parity will decay by a factor of about $1 - \frac{2\overline{S}_z}{p}$ as one sifts by each such prime $p$, and iterating this should give good bounds on the final expected parity $\mathbb{E} (-1)^{|S|}$. It turns out that existing second moment calculations of Montgomery and Soundararajan suffice to obtain enough concentration to make this strategy work.
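The one-step dynamics described above can be illustrated with a toy computation. In the regime where the sifting prime $p$ exceeds the interval length, the surviving elements occupy distinct residue classes mod $p$, so each sifting step deletes a single element with probability $|S|/p$ and does nothing otherwise. The following sketch (with illustrative parameters, and ignoring the primes below the interval length) compares a Monte Carlo simulation of this process against an exact computation of the expected parity:

```python
import random

# Toy model of the sifting dynamics: start with the full interval [0, H)
# and, for each prime p with H < p <= A, delete one uniformly random
# residue class mod p.  H, A, and the trial count are illustrative.

def primes_between(lo, hi):
    return [n for n in range(max(lo, 2), hi)
            if all(n % d for d in range(2, int(n ** 0.5) + 1))]

H, A = 40, 1500
ps = primes_between(H + 1, A)

# Exact distribution of |S| (hence the exact expected parity), obtained
# by iterating the one-step transition s -> s-1 with probability s/p.
dist = {H: 1.0}
for p in ps:
    new = {}
    for s, w in dist.items():
        new[s] = new.get(s, 0.0) + w * (1 - s / p)
        if s > 0:
            new[s - 1] = new.get(s - 1, 0.0) + w * (s / p)
    dist = new
exact_parity = sum(w * (-1) ** s for s, w in dist.items())

# Monte Carlo simulation of the same process.
random.seed(1)
trials = 5000
total = 0
for _ in range(trials):
    S = set(range(H))
    for p in ps:
        a = random.randrange(p)   # deleted residue class mod p
        S.discard(a)              # elements of [0, H) are their own residues mod p > H
    total += (-1) ** len(S)
mc_parity = total / trials
print(exact_parity, mc_parity)   # the two estimates should roughly agree
```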
One of the basic problems in analytic number theory is to obtain bounds and asymptotics for sums of the form

$$\sum_{n \leq x} f(n)$$

in the limit $x \to \infty$, where $n$ ranges over natural numbers less than $x$, and $f$ is some arithmetic function of number-theoretic interest. (It is also often convenient to replace this sharply truncated sum with a smoother sum such as $\sum_n f(n) \psi(n/x)$ for a smooth cutoff $\psi$, but we will not discuss this technicality here.) For instance, the prime number theorem is equivalent to the assertion

$$\sum_{n \leq x} \Lambda(n) = x + o(x),$$
where $\Lambda$ is the von Mangoldt function, while the Riemann hypothesis is equivalent to the stronger assertion

$$\sum_{n \leq x} \Lambda(n) = x + O(x^{1/2+o(1)}).$$
It is thus of interest to develop techniques to estimate such sums $\sum_{n \leq x} f(n)$. Of course, the difficulty of this task depends on how “nice” the function $f$ is. The functions $f$ that come up in number theory lie on a broad spectrum of “niceness”, with some particularly nice functions being quite easy to sum, and some being insanely difficult.
At the easiest end of the spectrum are those functions $f$ that exhibit some sort of regularity or “smoothness”. Examples of smoothness include “Archimedean” smoothness, in which $f$ is the restriction of some smooth function $f: \mathbb{R} \to \mathbb{C}$ from the reals to the natural numbers, and the derivatives of $f$ are well controlled. A typical example is

$$\sum_{n \leq x} \log n.$$

One can already get quite good bounds on this quantity by comparison with the integral $\int_1^x \log t\, dt$, namely

$$\sum_{n \leq x} \log n = x \log x - x + O(\log x),$$
with sharper bounds available by using tools such as the Euler-Maclaurin formula (see this blog post). Exponentiating such asymptotics, incidentally, leads to one of the standard proofs of Stirling’s formula (as discussed in this blog post).
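For instance, one can numerically confirm the $O(\log x)$ error term, and the connection to Stirling's formula, as follows:

```python
import math

# The sum of log n versus the integral of log t from 1 to N: the error
# is of size O(log N), and exponentiating recovers Stirling's formula.
N = 100000
s = sum(math.log(n) for n in range(1, N + 1))     # = log N!
integral = N * math.log(N) - N + 1                # = int_1^N log t dt
print(s - integral)                               # size O(log N)

# Stirling: log N! = N log N - N + (1/2) log(2 pi N) + O(1/N)
stirling = N * math.log(N) - N + 0.5 * math.log(2 * math.pi * N)
print(s - stirling)                               # very close to 0
```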
One can also consider “non-Archimedean” notions of smoothness, such as periodicity relative to a small period $q$. Indeed, if $f$ is periodic with period $q$ (and is thus essentially a function on the cyclic group $\mathbb{Z}/q\mathbb{Z}$), then one has the easy bound

$$\sum_{n \leq x} f(n) = \frac{x}{q} \sum_{n \in \mathbb{Z}/q\mathbb{Z}} f(n) + O\Big( \sum_{n \in \mathbb{Z}/q\mathbb{Z}} |f(n)| \Big).$$

In particular, we have the fundamental estimate

$$\sum_{n \leq x: d | n} 1 = \frac{x}{d} + O(1). \qquad (1)$$
This is a good estimate when $d$ is much smaller than $x$, but as $d$ approaches $x$ in magnitude, the error term $O(1)$ begins to overwhelm the main term $\frac{x}{d}$, and one needs much more delicate information on the fractional part of $\frac{x}{d}$ in order to obtain good estimates at this point.
One can also consider functions which combine “Archimedean” and “non-Archimedean” smoothness into an “adelic” smoothness. We will not define this term precisely here (though the concept of a Schwartz-Bruhat function is one way to capture this sort of concept), but a typical example might be

$$\sum_{n \leq x} \chi(n) \log n,$$

where $\chi$ is periodic with some small period $q$. By using techniques such as summation by parts, one can estimate such sums using the tools developed for sums of periodic functions or of functions with (Archimedean) smoothness.
Another class of functions that is reasonably well controlled are the multiplicative functions, in which $f(nm) = f(n) f(m)$ whenever $n, m$ are coprime. Here, one can use the powerful techniques of multiplicative number theory, for instance by working with the Dirichlet series

$$\sum_{n=1}^\infty \frac{f(n)}{n^s},$$

which are clearly related to the partial sums $\sum_{n \leq x} f(n)$ (essentially via the Mellin transform, a cousin of the Fourier and Laplace transforms); for this post we ignore the (important) issue of how to make sense of this series when it is not absolutely convergent (but see this previous blog post for more discussion). A primary reason that this technique is effective is that the Dirichlet series of a multiplicative function factorises as an Euler product

$$\sum_{n=1}^\infty \frac{f(n)}{n^s} = \prod_p \Big( 1 + \frac{f(p)}{p^s} + \frac{f(p^2)}{p^{2s}} + \ldots \Big).$$
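As a concrete illustration of the Euler product factorisation, one can take the simplest multiplicative function $f \equiv 1$ at $s = 2$, where both sides converge to $\zeta(2) = \pi^2/6$; truncating the sum and the product gives matching numerical values:

```python
import math

# Truncated Dirichlet series versus truncated Euler product for f = 1 at
# s = 2; both approximate zeta(2) = pi^2/6.  Truncation points are arbitrary.

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, b in enumerate(sieve) if b]

s = 2.0
dirichlet = sum(1 / n ** s for n in range(1, 200001))
euler = 1.0
for p in primes_up_to(2000):
    euler *= 1 / (1 - p ** (-s))
print(dirichlet, euler, math.pi ** 2 / 6)   # all three nearly equal
```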
One also obtains similar types of representations for functions that are not quite multiplicative, but are closely related to multiplicative functions, such as the von Mangoldt function (whose Dirichlet series is not given by an Euler product, but instead by the logarithmic derivative of an Euler product).
Moving another notch along the spectrum between well-controlled and ill-controlled functions, one can consider functions $f$ that are divisor sums such as

$$f(n) = \sum_{d \leq D: d | n} g(d)$$

for some other arithmetic function $g$, and some level $D$. This is a linear combination of periodic functions $n \mapsto g(d) 1_{d | n}$ and is thus technically periodic in $n$ (with period equal to the least common multiple of all the numbers from $1$ to $D$), but in practice this period is far too large to be useful (except for extremely small levels $D$). Nevertheless, we can still control the sum $\sum_{n \leq x} f(n)$ simply by rearranging the summation:

$$\sum_{n \leq x} f(n) = \sum_{d \leq D} g(d) \sum_{n \leq x: d | n} 1,$$
and thus by (1) one can bound this by the sum of a main term $x \sum_{d \leq D} \frac{g(d)}{d}$ and an error term $O\big( \sum_{d \leq D} |g(d)| \big)$. As long as the level $D$ is significantly less than $x$, one may expect the main term to dominate, and one can often estimate this term by a variety of techniques (for instance, if $g$ is multiplicative, then multiplicative number theory techniques are quite effective, as mentioned previously). Similarly for other slight variants of divisor sums, such as expressions of the form

$$\sum_{d \leq D: d | n} g(d) \log \frac{n}{d}$$
or expressions of the form

$$\sum_{q \leq Q} c_q f_q(n),$$

where each $f_q$ is periodic with period $q$.
One of the simplest examples of this comes when estimating the divisor function

$$\tau(n) := \sum_{d | n} 1,$$

which counts the number of divisors of $n$. This is a multiplicative function, and is therefore most efficiently estimated using the techniques of multiplicative number theory; but for reasons that will become clearer later, let us “forget” the multiplicative structure and estimate the sum $\sum_{n \leq x} \tau(n)$ by more elementary methods. By applying the preceding method, we see that

$$\sum_{n \leq x} \tau(n) = \sum_{d \leq x} \Big( \frac{x}{d} + O(1) \Big) = x \log x + O(x). \qquad (2)$$
Here, we are (barely) able to keep the error term smaller than the main term; this is right at the edge of the divisor sum method, because the level $D$ in this case is equal to $x$. Unfortunately, at this high choice of level, it is not always possible to keep the error term under control like this. For instance, if one wishes to use the standard divisor sum representation

$$\Lambda(n) = \sum_{d | n} \mu(d) \log \frac{n}{d},$$

where $\mu$ is the Möbius function, to compute $\sum_{n \leq x} \Lambda(n)$, then one ends up looking at

$$\sum_{n \leq x} \Lambda(n) = \sum_{d \leq x} \mu(d) \sum_{n \leq x: d | n} \log \frac{n}{d}.$$
From Dirichlet series methods, it is not difficult to establish the identities

$$\sum_{n=1}^\infty \frac{\mu(n)}{n^s} = \frac{1}{\zeta(s)}$$

and

$$\sum_{n=1}^\infty \frac{\mu(n) \log n}{n^s} = \frac{\zeta'(s)}{\zeta(s)^2}$$

for $\mathrm{Re}(s) > 1$. This suggests (but does not quite prove) that one has

$$\sum_{n=1}^\infty \frac{\mu(n)}{n} = 0 \qquad (3)$$

and

$$\sum_{n=1}^\infty \frac{\mu(n) \log n}{n} = -1 \qquad (4)$$
in the sense of conditionally convergent series. Assuming one can justify this (which, ultimately, requires one to exclude zeroes of the Riemann zeta function on the line $\mathrm{Re}(s) = 1$, as discussed in this previous post), one is eventually left with the estimate $\sum_{n \leq x} \Lambda(n) = x + O(x)$, which is useless as a lower bound (and recovers only the classical Chebyshev estimate $\sum_{n \leq x} \Lambda(n) \ll x$ as the upper bound). The inefficiency here when compared to the situation with the divisor function $\tau$ can be attributed to the signed nature of the Möbius function $\mu$, which causes some cancellation in the divisor sum expansion that needs to be compensated for with improved estimates.
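Numerically, the two conditionally convergent Möbius series just discussed converge quite slowly, in keeping with their equivalence to nontrivial information about the zeta function; a quick computation (with an arbitrary truncation point) illustrates their behaviour:

```python
import math

# Partial sums of mu(n)/n and mu(n) log n / n, which should tend to 0
# and -1 respectively (slowly).  The truncation point is arbitrary.

def mobius_up_to(n):
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for q in range(p, n + 1, p):
                if q > p:
                    is_prime[q] = False
                mu[q] = -mu[q]          # one sign flip per prime factor
            for q in range(p * p, n + 1, p * p):
                mu[q] = 0               # kill non-squarefree n
    return mu

N = 200000
mu = mobius_up_to(N)
s1 = sum(mu[n] / n for n in range(1, N + 1))
s2 = sum(mu[n] * math.log(n) / n for n in range(1, N + 1))
print(s1)   # near 0
print(s2)   # near -1
```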
However, there are a number of tricks available to reduce the level of divisor sums. The simplest comes from exploiting the change of variables $d \mapsto \frac{n}{d}$, which can in principle reduce the level by a square root. For instance, when computing the divisor function $\tau(n)$, one can observe using this change of variables that every divisor of $n$ above $\sqrt{n}$ is paired with one below $\sqrt{n}$, and so we have

$$\tau(n) = 2 \sum_{d \leq \sqrt{n}: d | n} 1, \qquad (5)$$

except when $n$ is a perfect square, in which case one must subtract one from the right-hand side. Using this reduced-level divisor sum representation, one can obtain an improvement to (2), namely

$$\sum_{n \leq x} \tau(n) = x \log x + (2\gamma - 1) x + O(\sqrt{x}),$$

where $\gamma = 0.577\ldots$ is Euler's constant.
This type of argument is also known as the Dirichlet hyperbola method. A variant of this argument can also deduce the prime number theorem from (3), (4) (and with some additional effort, one can even drop the use of (4)); this is discussed at this previous blog post.
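Both the naive level-$x$ rearrangement and the hyperbola method are easy to implement, and comparing them numerically illustrates how much the square root trick improves the error term:

```python
import math

def divisor_sum_naive(x):
    """sum_{n <= x} tau(n) via the level-x rearrangement sum_{d <= x} floor(x/d)."""
    return sum(x // d for d in range(1, x + 1))

def divisor_sum_hyperbola(x):
    """Dirichlet hyperbola method: 2 sum_{d <= sqrt(x)} floor(x/d) - floor(sqrt(x))^2."""
    r = math.isqrt(x)
    return 2 * sum(x // d for d in range(1, r + 1)) - r * r

x = 10 ** 6
D = divisor_sum_hyperbola(x)
assert D == divisor_sum_naive(x)      # same exact count, far fewer terms

gamma = 0.57721566490153286           # Euler's constant
print(D - x * math.log(x))            # error against x log x alone: size ~ x
print(D - (x * math.log(x) + (2 * gamma - 1) * x))   # refined error: size O(sqrt(x))
```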
Using this square root trick, one can now also control divisor sums such as

$$\sum_{n \leq x} \tau(n^2 + 1).$$

(Note that $n^2 + 1$ has no multiplicativity properties in $n$, and so multiplicative number theory techniques cannot be directly applied here.) The level of the divisor sum here is initially of order $x^2$, which is too large to be useful; but using the square root trick, we can expand this expression as

$$2 \sum_{n \leq x} \sum_{d \leq \sqrt{n^2+1}: d | n^2+1} 1,$$
which one can rewrite as

$$2 \sum_{d \leq \sqrt{x^2+1}} \sum_{\sqrt{d^2-1} \leq n \leq x: d | n^2+1} 1.$$

The constraint $d | n^2+1$ is periodic in $n$ with period $d$, so we can write this as

$$2 \sum_{d \leq \sqrt{x^2+1}} \Big( \frac{(x - \sqrt{d^2-1}) \rho(d)}{d} + O(\rho(d)) \Big),$$

where $\rho(d)$ is the number of solutions $m$ in $\mathbb{Z}/d\mathbb{Z}$ to the equation $m^2 + 1 \equiv 0 \pmod{d}$, and so

$$\sum_{n \leq x} \tau(n^2+1) = 2x \sum_{d \leq \sqrt{x^2+1}} \frac{\rho(d)}{d} + O\Big( \sum_{d \leq \sqrt{x^2+1}} \rho(d) \Big).$$
The function $\rho$ is multiplicative, and can be easily computed at primes $p$ and prime powers using tools such as quadratic reciprocity and Hensel’s lemma. For instance, by Fermat’s two-square theorem, $\rho(p)$ is equal to $2$ for $p \equiv 1 \pmod 4$ and $0$ for $p \equiv 3 \pmod 4$. From this and standard multiplicative number theory methods (e.g. by obtaining asymptotics on the Dirichlet series $\sum_d \frac{\rho(d)}{d^s}$), one eventually obtains the asymptotic

$$\sum_{d \leq x} \frac{\rho(d)}{d} = \frac{3}{2\pi} \log x + O(1)$$

and also

$$\sum_{d \leq x} \rho(d) = O(x),$$

and thus

$$\sum_{n \leq x} \tau(n^2+1) = \frac{3}{\pi} x \log x + O(x).$$
Similar arguments give asymptotics for $\sum_{n \leq x} \tau(P(n))$ for other quadratic polynomials $P$; see for instance this paper of Hooley and these papers by McKee. Note that the irreducibility of the polynomial $P$ will be important. If one considers instead a sum involving a reducible polynomial, such as $\sum_{n \leq x} \tau(n^2 - 1)$, then the analogous quantity $\rho(d)$ becomes significantly larger, leading to a larger growth rate (of order $x \log^2 x$ rather than $x \log x$) for the sum.
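The quantities in this computation are easy to explore numerically: one can compute $\rho(d)$ by brute force, check the Fermat two-square pattern at primes and the multiplicativity at coprime moduli, and watch the normalised sum $\sum_{n \leq x} \tau(n^2+1)/(x \log x)$ drift toward its limiting constant (the convergence is slow, so only rough agreement is visible at these small scales):

```python
import math

def rho(d):
    """Number of m in Z/dZ with m^2 + 1 = 0 mod d (brute force)."""
    return sum(1 for m in range(d) if (m * m + 1) % d == 0)

# Fermat two-square pattern: rho(p) = 2 for p = 1 mod 4, 0 for p = 3 mod 4
# (and rho(2) = 1).
print([(p, rho(p)) for p in [2, 3, 5, 7, 11, 13, 17, 19]])

# Multiplicativity at coprime moduli, e.g. 65 = 5 * 13.
print(rho(65), rho(5) * rho(13))

def tau(m):
    """Number of divisors of m, by trial division up to sqrt(m)."""
    t, d = 0, 1
    while d * d <= m:
        if m % d == 0:
            t += 2 if d * d < m else 1
        d += 1
    return t

N = 2000
total = sum(tau(n * n + 1) for n in range(1, N + 1))
# Compare with the constant 3/pi = 0.9549... from the asymptotic above;
# at this scale the agreement is only rough.
print(total / (N * math.log(N)))
```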
However, the square root trick is insufficient by itself to deal with higher order sums involving the divisor function, such as

$$\sum_{n \leq x} \tau(n^3 + 1);$$

the level here is initially of order $x^3$, and the square root trick only lowers this to about $x^{3/2}$, creating an error term that overwhelms the main term. And indeed, the asymptotic for this sum has not yet been rigorously established (although if one heuristically drops error terms, one can arrive at a reasonable conjecture for this asymptotic), although some results are known if one averages over additional parameters (see e.g. this paper of Greaves, or this paper of Matthiesen).
Nevertheless, there is an ingenious argument of Erdős that allows one to obtain good upper and lower bounds for these sorts of sums, in particular establishing the asymptotic

$$\sum_{n \leq x} \tau(P(n)) \asymp x \log x \qquad (6)$$

for any fixed irreducible non-constant polynomial $P$ that maps $\mathbb{N}$ to $\mathbb{N}$ (with the implied constants depending of course on the choice of $P$). There is also the related moment bound

$$\sum_{n \leq x} \tau(P(n))^k \ll x \log^{O(1)} x \qquad (7)$$

for any fixed polynomial $P$ (not necessarily irreducible) and any fixed $k \geq 1$, due to van der Corput; this bound is in fact used to dispose of some error terms in the proof of (6). These should be compared with what one can obtain from the divisor bound $\tau(n) \ll n^{o(1)}$ and the trivial bound $\tau(n) \geq 1$, giving the bounds

$$x \ll \sum_{n \leq x} \tau(P(n))^k \ll x^{1+o(1)}$$

for any fixed $k$.
The lower bound in (6) is easy, since one can simply lower the level in (5) to obtain the lower bound

$$\tau(P(n)) \geq 2 \sum_{d \leq x^\theta: d | P(n)} 1$$

for any $\theta$, and the preceding methods then easily allow one to obtain the lower bound by taking $\theta$ small enough (more precisely, if $P$ has degree $D$, one should take $\theta$ equal to $D/2$ or less). The upper bounds in (6) and (7) are more difficult. Ideally, if we could obtain upper bounds of the form

$$\tau(n) \ll_\varepsilon \sum_{d \leq n^\varepsilon: d | n} 1 \qquad (8)$$
for any fixed $\varepsilon > 0$, then the preceding methods would easily establish both results. Unfortunately, this bound can fail, as illustrated by the following example. Suppose that $n$ is the product of $k$ distinct primes $p_1, \ldots, p_k$, each of which is close to $n^{1/k}$. Then $n$ has $2^k$ divisors, with $\binom{k}{j}$ of them close to $n^{j/k}$ for each $0 \leq j \leq k$. One can think of (the logarithms of) these divisors as being distributed according to what is essentially a Bernoulli distribution, thus a randomly selected divisor of $n$ has magnitude about $n^{S/k}$, where $S$ is a random variable which has the same distribution as the number of heads in $k$ independently tossed fair coins. By the law of large numbers, $S$ should concentrate near $k/2$ when $k$ is large, which implies that the majority of the divisors of $n$ will be close to $n^{1/2}$. Sending $k \to \infty$, one can show that the bound (8) fails whenever $\varepsilon < 1/2$.
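One can see this divisor concentration concretely by taking $n$ to be a product of sixteen primes of comparable size and counting divisors below various thresholds (the specific primes below are an arbitrary illustrative choice): exactly half the divisors lie below $\sqrt{n}$, but only a small fraction lie below $n^{1/4}$:

```python
import math
from itertools import combinations

# n = product of 16 primes of comparable size: its 2^16 divisors have
# logarithms distributed like a Bernoulli sum, so they concentrate near
# sqrt(n).  The choice of primes is arbitrary.
ps = [101, 103, 107, 109, 113, 127, 131, 137,
      139, 149, 151, 157, 163, 167, 173, 179]
n = math.prod(ps)

divisors = [math.prod(c) for r in range(len(ps) + 1)
            for c in combinations(ps, r)]
assert len(divisors) == 2 ** len(ps)      # n is squarefree

below_half = sum(1 for d in divisors if d * d < n)        # d < n^(1/2)
below_quarter = sum(1 for d in divisors if d ** 4 < n)    # d < n^(1/4)
print(below_half / len(divisors))      # exactly 0.5 (divisors pair d <-> n/d)
print(below_quarter / len(divisors))   # a small fraction
```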
This however can be fixed in a number of ways. First of all, even in the regime $\varepsilon < 1/2$ where (8) fails, one can show weaker substitutes for (8). For instance, for any fixed $\varepsilon > 0$ and $k \geq 1$ one can show a bound of the form

$$\tau(n)^k \leq C \sum_{d \leq n^\varepsilon: d | n} \tau(d)^C$$
for some $C$ depending only on $\varepsilon, k$. This nice elementary inequality (first observed by Landreau) already gives a quite short proof of van der Corput’s bound (7).
For Erdös’s upper bound (6), though, one cannot afford to lose these additional factors of , and one must argue more carefully. Here, the key observation is that the counterexample discussed earlier – when the natural number is the product of a large number of fairly small primes – is quite atypical; most numbers have at least one large prime factor. For instance, the number of natural numbers less than that contain a prime factor between and is equal to
which, thanks to Mertens’ theorem
for some absolute constant $M$, is comparable to $x$. In a similar spirit, one can show by similarly elementary means that the number of natural numbers less than $x$ that are $x^{1/k}$-smooth, in the sense that all prime factors are at most $x^{1/k}$, is only about $k^{-ck} x$ or so for some constant $c > 0$. Because of this, one can hope that the bound (8), while not true in full generality, will still be true for most natural numbers $n$, with some slightly weaker substitute available (such as (7)) for the exceptional numbers $n$. This turns out to be the case by an elementary but careful argument.
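Both of these counts are easy to check empirically at small scales: the proportion of integers up to $x$ with a prime factor above $\sqrt{x}$ is comparable to $1$, while the proportion of $x^{1/3}$-smooth numbers is already quite small. (The cutoff $x$ below is an illustrative choice.)

```python
import math

# Largest-prime-factor sieve up to x, used to count (a) integers with a
# prime factor above sqrt(x) and (b) x^(1/3)-smooth integers.
x = 200000
lpf = [0] * (x + 1)                  # largest prime factor of each n
for p in range(2, x + 1):
    if lpf[p] == 0:                  # p is prime
        for q in range(p, x + 1, p):
            lpf[q] = p               # last prime to touch q is its largest

root = math.isqrt(x)
y = round(x ** (1 / 3))
big_frac = sum(1 for n in range(2, x + 1) if lpf[n] > root) / x
smooth_frac = sum(1 for n in range(2, x + 1) if lpf[n] <= y) / x
print(big_frac)      # comparable to 1 (tends to log 2 = 0.693...)
print(smooth_frac)   # small (the density for k = 3 is about 0.05)
```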
The Erdős argument is quite robust; for instance, the more general inequality

$$\sum_{n \leq x} \tau(P(n))^k \ll x \log^{2^k - 1} x$$
for fixed irreducible $P$ and fixed $k \geq 1$, which improves van der Corput’s inequality (7), was shown by Delmer using the same methods. (A slight error in the original paper of Erdős was also corrected in this latter paper.) In a forthcoming revision to my paper on the Erdős-Straus conjecture, Christian Elsholtz and I have also applied this method to obtain bounds on certain related divisor sums,
which turn out to be enough to obtain the right asymptotics for the number of solutions to the equation $\frac{4}{p} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z}$.
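The equation in question here is the Erdős-Straus equation, which asks to write $\frac{4}{p}$ as a sum of three unit fractions. As a hypothetical illustration (unrelated to the counting arguments in the paper, which estimate the number of such solutions rather than exhibiting them), a tiny brute-force search finds representations for small primes:

```python
from fractions import Fraction

# Brute-force search for 4/n = 1/x + 1/y + 1/z with x <= y <= z.
# The search bound max_x is an arbitrary illustrative choice.

def erdos_straus(n, max_x=1000):
    target = Fraction(4, n)
    for x in range(1, max_x):
        rx = target - Fraction(1, x)
        if rx <= 0:
            continue
        for y in range(x, max_x):
            r = rx - Fraction(1, y)
            if r > 0 and r.numerator == 1:   # remainder is a unit fraction
                return (x, y, r.denominator)
    return None

for p in [5, 7, 11, 13, 17, 19, 23]:
    print(p, erdos_straus(p))
```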
Below the fold I will provide some more details of the arguments of Landreau and of Erdős.