
Over the last few years, a large group of mathematicians have been developing an online database to systematically collect the known facts, numerical data, and algorithms concerning some of the most central types of objects in modern number theory, namely the L-functions associated to various number fields, curves, and modular forms, as well as further data about these modular forms.  This of course includes the most famous examples of L-functions and modular forms respectively, namely the Riemann zeta function \zeta(s) and the discriminant modular form \Delta(q), but there are countless other examples of both. The connections between these classes of objects lie at the heart of the Langlands programme.

As of today, the “L-functions and modular forms database” is now out of beta, and open to the public; at present the database is mostly geared towards specialists in computational number theory, but will hopefully develop into a more broadly useful resource as time goes on.  An article by John Cremona summarising the purpose of the database can be found here.

(Thanks to Andrew Sutherland and Kiran Kedlaya for the information.)

The International Mathematical Union (with the assistance of the Friends of the International Mathematical Union and The World Academy of Sciences, and supported by Ian Agol, Simon Donaldson, Maxim Kontsevich, Jacob Lurie, Richard Taylor, and myself) has just launched the Graduate Breakout Fellowships, which will offer highly qualified students from developing countries a full scholarship to study for a PhD in mathematics at an institution that is also located in a developing country.  Nominations for this fellowship (which should be from a sponsoring mathematician, preferably a mentor of the nominee) have just opened (with an application deadline of June 22); details on the nomination process and eligibility requirements can be found at this page.

Nominations for the 2017 Breakthrough Prize in mathematics and the New Horizons Prizes in mathematics are now open.  In 2016, the Breakthrough Prize was awarded to Ian Agol.  The New Horizons prizes are for breakthroughs achieved by junior mathematicians, usually restricted to those within 10 years of their PhD; the 2016 prizes were awarded to Andre Neves, Larry Guth, and Peter Scholze (who declined the prize).

The rules for the prizes are listed on this page, and nominations can be made at this page.  (No self-nominations are allowed, for obvious reasons; a third-party letter of recommendation is also required.)

Just a quick post to note that the arXiv overlay journal Discrete Analysis, managed by Timothy Gowers, has now gone live with its permanent (and quite modern-looking) web site, which is run using the Scholastica platform, as well as the first half-dozen or so accepted papers (including one of my own).  See Tim’s announcement for more details.  I am one of the editors of this journal (and am already handling a few submissions). Needless to say, we are happy to take in more submissions (though they will have to be peer reviewed if they are to be accepted, of course).

The Institute for Pure and Applied Mathematics (IPAM) here at UCLA is seeking applications for its new director in 2017 or 2018, to replace Russ Caflisch, who is nearing the end of his five-year term as IPAM director.  The previous directors of IPAM (Tony Chan, Mark Green, and Russ Caflisch) were also from the mathematics department here at UCLA, but the position is open to all qualified applicants with extensive scientific and administrative experience in mathematics, computer science, or statistics.  Review of applications will begin on June 1, 2016 (though the application process will remain open through to Dec 1, 2016).

Over on the polymath blog, I’ve posted (on behalf of Dinesh Thakur) a new polymath proposal, which is to explain some numerically observed identities involving the irreducible polynomials P in the polynomial ring {\bf F}_2[t] over the field {\bf F}_2 of two elements, the simplest of which is

\displaystyle \sum_P \frac{1}{1+P} = 0

(expanded in terms of Taylor series in u = 1/t).  Comments on the problem should be placed in the polymath blog post; if there is enough interest, we can start a formal polymath project on it.
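
For those who like to experiment, here is a quick computational sketch (my own, and not part of the proposal) that tests the identity to any given order in {u}: a term {\frac{1}{1+P}} with {P} of degree {d} is a power series in {u} beginning at {u^d}, so the coefficients of {u^0,\dots,u^N} in the sum are determined exactly by the finitely many irreducible {P} of degree at most {N}.

```python
# Sketch (not part of the polymath proposal): test sum_P 1/(1+P) = 0 in F_2[[u]],
# u = 1/t.  A polynomial over F_2 is encoded as a Python integer (bit i = coefficient
# of t^i).  For P of degree d, write 1+P = t^d * Q(u), so 1/(1+P) = u^d * Q(u)^{-1}.

N = 12  # check the coefficients of u^0, ..., u^N

def poly_rem(a, b):
    # remainder of a modulo b in F_2[t]
    db = b.bit_length()
    while a.bit_length() >= db:
        a ^= b << (a.bit_length() - db)
    return a

def is_irreducible(p):
    # trial division: p (of degree d >= 1) is reducible iff it has a factor
    # of degree between 1 and d//2
    d = p.bit_length() - 1
    return all(poly_rem(p, q) != 0 for q in range(2, 1 << (d // 2 + 1)))

def series_inverse(c, n):
    # invert a power series with bit coefficients c (c[0] == 1) mod u^(n+1), over F_2
    inv = [1] + [0] * n
    for k in range(1, n + 1):
        for j in range(1, min(k, len(c) - 1) + 1):
            inv[k] ^= c[j] & inv[k - j]
    return inv

total = [0] * (N + 1)
for p in range(2, 1 << (N + 1)):        # all polynomials of degree 1..N
    if not is_irreducible(p):
        continue
    d = p.bit_length() - 1
    q = p ^ 1                           # the polynomial 1 + P
    c = [(q >> (d - j)) & 1 for j in range(d + 1)]   # coefficients of Q(u)
    inv = series_inverse(c, N)
    for k in range(d, N + 1):
        total[k] ^= inv[k - d]          # add u^d * Q(u)^{-1} mod u^(N+1)

print(total)   # expected output: all zeros, confirming the identity up to u^N
```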

Klaus Roth, who made fundamental contributions to analytic number theory, died this Tuesday, aged 90.

I never met or communicated with Roth personally, but was certainly influenced by his work; he wrote relatively few papers, but they tended to have outsized impact. For instance, he was one of the key people (together with Bombieri) to work on simplifying and generalising the large sieve, taking it from the technically formidable original formulation of Linnik and Rényi to the clean and general almost orthogonality principle that we have today (discussed for instance in these lecture notes of mine). The paper of Roth that had the most impact on my own work was his three-page paper proving what is now known as Roth’s theorem on arithmetic progressions:

Theorem 1 (Roth’s theorem on arithmetic progressions) Let {A} be a set of natural numbers of positive upper density (thus {\limsup_{N \rightarrow\infty} |A \cap \{1,\dots,N\}|/N > 0}). Then {A} contains infinitely many arithmetic progressions {a,a+r,a+2r} of length three (with {r} non-zero of course).

At the heart of Roth’s elegant argument was the following (surprising at the time) dichotomy: if {A} had some moderately large density within some arithmetic progression {P}, either one could use Fourier-analytic methods to detect the presence of an arithmetic progression of length three inside {A \cap P}, or else one could locate a long subprogression {P'} of {P} on which {A} had increased density. Iterating this dichotomy by an argument now known as the density increment argument, one eventually obtains Roth’s theorem, no matter which side of the dichotomy actually holds. This argument (and the many descendants of it), based on various “dichotomies between structure and randomness”, became essential in many other results of this type, most famously perhaps in Szemerédi’s proof of his celebrated theorem on arithmetic progressions that generalised Roth’s theorem to progressions of arbitrary length. More recently, my work on the Chowla and Elliott conjectures, which was a crucial component of the solution of the Erdős discrepancy problem, relies on an entropy decrement argument that was directly inspired by the density increment argument of Roth.
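
As a toy illustration of the quantitative content of Roth’s theorem, one can play with the classical example of a large set with no three-term progressions, namely the integers whose base-3 expansion uses only the digits 0 and 1; the sketch below (mine, not Roth’s) verifies by brute force that this set is indeed 3-AP-free, while its density inside {\{0,\dots,N-1\}} decays to zero as {N \rightarrow \infty}, consistent with the theorem.

```python
# Sketch: the integers in [0, N) whose base-3 digits are all 0 or 1 form a
# fairly large set with no three-term arithmetic progression, but its density
# tends to zero -- as it must, by Roth's theorem.

def digits_are_01(n):
    # True iff the base-3 expansion of n avoids the digit 2
    while n:
        if n % 3 == 2:
            return False
        n //= 3
    return True

N = 3**7
A = [n for n in range(N) if digits_are_01(n)]
S = set(A)

# brute-force check that no a, a+r, a+2r all lie in A
has_3ap = any(a + r in S and a + 2 * r in S
              for a in A for r in range(1, (N - a) // 2 + 1))
print("contains a 3-term AP:", has_3ap)   # False
print("density:", len(A) / N)             # (2/3)^7 ~ 0.059, shrinking as N grows
```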

The Erdős discrepancy problem is also connected with another well-known theorem of Roth:

Theorem 2 (Roth’s discrepancy theorem for arithmetic progressions) Let {f(1),\dots,f(n)} be a sequence in {\{-1,+1\}}. Then there exists an arithmetic progression {a+r, a+2r, \dots, a+kr} in {\{1,\dots,n\}} with {r} positive such that

\displaystyle  |\sum_{j=1}^k f(a+jr)| \geq c n^{1/4}

for an absolute constant {c>0}.

In fact, Roth proved a stronger estimate regarding mean square discrepancy, which I am not writing down here; as with Roth’s theorem on arithmetic progressions, his proof was short and Fourier-analytic in nature (although non-Fourier-analytic proofs have since been found, for instance the semidefinite programming proof of Lovász). The exponent {1/4} is known to be sharp (a result of Matoušek and Spencer).

As a particular corollary of the above theorem, for an infinite sequence {f(1), f(2), \dots} of signs, the sums {|\sum_{j=1}^k f(a+jr)|} are unbounded in {a,r,k}. The Erdős discrepancy problem asks whether the same statement holds when {a} is restricted to be zero. (Roth also established discrepancy theorems for other sets, such as rectangles, which will not be discussed here.)
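
To get a concrete feel for Theorem 2, one can compute discrepancies by brute force for small {n}; the following sketch (my own, not from Roth’s paper) takes the minimum, over all {2^n} sign sequences of length {n}, of the maximal AP sum, which by the theorem must be at least {c n^{1/4}} no matter how cleverly the signs are chosen.

```python
# Brute-force sketch: for small n, the minimum over all 2^n sign sequences of
# max |f(a+r) + f(a+2r) + ... + f(a+kr)|, compared against n^(1/4).

from itertools import product

def ap_discrepancy(f, n):
    # f is 1-indexed: f[1..n] in {-1, +1}; scan all APs a+r, a+2r, ... <= n with r >= 1
    best = 0
    for r in range(1, n + 1):
        for a in range(0, n - r + 1):
            s, m = 0, a + r
            while m <= n:
                s += f[m]
                best = max(best, abs(s))   # every partial sum is the sum over some AP
                m += r
    return best

n = 12
min_disc = min(ap_discrepancy((0,) + signs, n)
               for signs in product((-1, 1), repeat=n))
print(f"minimal discrepancy for n = {n}: {min_disc} (n^(1/4) = {n ** 0.25:.2f})")
```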

Finally, one has to mention Roth’s most famous result, highlighted for instance in his Fields medal citation:

Theorem 3 (Roth’s theorem on Diophantine approximation) Let {\alpha} be an irrational algebraic number. Then for any {\varepsilon > 0} there is a quantity {c_{\alpha,\varepsilon} > 0} such that

\displaystyle  |\alpha - \frac{a}{q}| > \frac{c_{\alpha,\varepsilon}}{q^{2+\varepsilon}}

for every rational number {\frac{a}{q}}.

From the Dirichlet approximation theorem (or from the theory of continued fractions) we know that the exponent {2+\varepsilon} in the denominator cannot be reduced to {2} or below. A classical and easy theorem of Liouville gives the claim with the exponent {2+\varepsilon} replaced by the degree of the algebraic number {\alpha}; work of Thue and Siegel reduced this exponent, but Roth was the one who obtained the near-optimal result. An important point is that the constant {c_{\alpha,\varepsilon}} is ineffective – it is a major open problem in Diophantine approximation to produce any bound significantly stronger than Liouville’s theorem with effective constants. This is because the proof of Roth’s theorem does not exclude any single rational {a/q} from being close to {\alpha}, but instead very ingeniously shows that one cannot have two different rationals {a/q}, {a'/q'} that are unusually close to {\alpha}, even when the denominators {q,q'} are very different in size. (I refer to this sort of argument as a “dueling conspiracies” argument; they are strangely prevalent throughout analytic number theory.)
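
One can watch Roth’s exponent emerge numerically from the continued fraction convergents of an algebraic number such as {2^{1/3}}; in the sketch below (mine, and purely illustrative) the measured “approximation exponent” {\log(1/|\alpha - a/q|)/\log q} of each convergent hovers near {2}, well below the Liouville exponent {3} coming from the degree.

```python
# Sketch: continued-fraction convergents of alpha = 2^(1/3) and their
# approximation exponents log(1/|alpha - a/q|) / log q, which stay near 2.

from decimal import Decimal, getcontext

getcontext().prec = 80
alpha = Decimal(2) ** (Decimal(1) / Decimal(3))   # 2^(1/3) to 80 digits

x = alpha
h, hp = 1, 0      # numerator recurrence:   h_n = a_n h_{n-1} + h_{n-2}
k, kp = 0, 1      # denominator recurrence: k_n = a_n k_{n-1} + k_{n-2}
for _ in range(15):
    a = int(x)                    # next partial quotient (x > 1, so int = floor)
    h, hp = a * h + hp, h
    k, kp = a * k + kp, k
    if k > 1:
        err = abs(alpha - Decimal(h) / Decimal(k))
        exponent = float(-err.ln() / Decimal(k).ln())
        print(f"{h}/{k}: exponent ~ {exponent:.3f}")
    x = 1 / (x - a)               # invert the fractional part and continue
```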

Chantal David, Andrew Granville, Emmanuel Kowalski, Philippe Michel, Kannan Soundararajan, and I are running a program at MSRI in the Spring of 2017 (more precisely, from Jan 17, 2017 to May 26, 2017) in the area of analytic number theory, with the intention of bringing together many of the leading experts in all aspects of the subject and presenting recent work on the many active areas of the subject (e.g. the distribution of the prime numbers, refinements of the circle method, a deeper understanding of the asymptotics of bounded multiplicative functions (and applications to Erdős discrepancy type problems!) and of the “pretentious” approach to analytic number theory, more “analysis-friendly” formulations of the theorems of Deligne and others involving trace functions over finite fields, and new subconvexity theorems for automorphic forms, to name a few).  Like any other semester MSRI program, there will be a number of workshops, seminars, and similar activities taking place while the members are in residence.  I’m personally looking forward to the program, which should be occurring in the midst of a particularly productive time for the subject.  Needless to say, I (and the rest of the organising committee) plan to be present for most of the program.

Applications for Postdoctoral Fellowships and Research Memberships for this program (and for other MSRI programs in this time period, namely the companion program in Harmonic Analysis and the Fall program in Geometric Group Theory, as well as the complementary program in all other areas of mathematics) remain open until Dec 1.  Applications are open to everyone, but require supporting documentation, such as a CV, statement of purpose, and letters of recommendation from other mathematicians; see the application page for more details.

Chantal David, Andrew Granville, Emmanuel Kowalski, Philippe Michel, Kannan Soundararajan, and I are running a program at MSRI in the Spring of 2017 (more precisely, from Jan 17, 2017 to May 26, 2017) in the area of analytic number theory, with the intention of bringing together many of the leading experts in all aspects of the subject and presenting recent work on the many active areas of the subject (the discussion in previous blog posts here has mostly focused on advances in the study of the distribution of the prime numbers, but there have been many other notable recent developments too, such as refinements of the circle method, a deeper understanding of the asymptotics of bounded multiplicative functions and of the “pretentious” approach to analytic number theory, more “analysis-friendly” formulations of the theorems of Deligne and others involving trace functions over finite fields, and new subconvexity theorems for automorphic forms, to name a few).  Like any other semester MSRI program, there will be a number of workshops, seminars, and similar activities taking place while the members are in residence.  I’m personally looking forward to the program, which should be occurring in the midst of a particularly productive time for the subject.  Needless to say, I (and the rest of the organising committee) plan to be present for most of the program.

Applications for Postdoctoral Fellowships, Research Memberships, and Research Professorships for this program (and for other MSRI programs in this time period, namely the companion program in Harmonic Analysis and the Fall program in Geometric Group Theory, as well as the complementary program in all other areas of mathematics) have just opened up today.  Applications are open to everyone (until they close on Dec 1), but require supporting documentation, such as a CV, statement of purpose, and letters of recommendation from other mathematicians; see the application page for more details.

In the winter quarter (starting January 5) I will be teaching a graduate topics course entitled “An introduction to analytic prime number theory”. As the name suggests, this is a course covering many of the analytic number theory techniques used to study the distribution of the prime numbers {{\mathcal P} = \{2,3,5,7,11,\dots\}}. I will list the topics I intend to cover in this course below the fold. As with my previous courses, I will place lecture notes online on my blog in advance of the physical lectures.

The type of results about primes that one aspires to prove here is well captured by Landau’s classical list of problems:

  1. Even Goldbach conjecture: every even number {N} greater than two is expressible as the sum of two primes.
  2. Twin prime conjecture: there are infinitely many pairs {n,n+2} which are simultaneously prime.
  3. Legendre’s conjecture: for every natural number {N}, there is a prime between {N^2} and {(N+1)^2}.
  4. There are infinitely many primes of the form {n^2+1}.

All four of Landau’s problems remain open, but we have convincing heuristic evidence that they are all true, and in each of the four cases we have some highly non-trivial partial results, some of which will be covered in this course. We also now have some understanding of the barriers we are facing to fully resolving each of these problems, such as the parity problem; this will also be discussed in the course.
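
All four problems are also easy to test numerically in small ranges; here is a quick sketch (my own, with arbitrary cutoffs) that checks them up to a million or so from a single sieve of Eratosthenes, computing in each case the kind of count that we will shortly be working with.

```python
# Sketch: numerically sanity-check the four Landau problems in small ranges.

def prime_sieve(limit):
    # sieve of Eratosthenes: is_prime[n] == 1 iff n is prime
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return is_prime

X = 10**6
is_prime = prime_sieve(X + 2)          # +2 so the n+2 lookups stay in range

# 1. Goldbach: every even N >= 4 is a sum of two primes (smaller cutoff; this is slow)
goldbach_ok = all(any(is_prime[p] and is_prime[N - p] for p in range(2, N // 2 + 1))
                  for N in range(4, 10**4 + 1, 2))

# 2. twin primes: count of n <= X with n, n+2 both prime
twins = sum(1 for n in range(2, X + 1) if is_prime[n] and is_prime[n + 2])

# 3. Legendre: a prime strictly between N^2 and (N+1)^2 for each N
legendre_ok = all(any(is_prime[m] for m in range(N * N + 1, (N + 1) ** 2))
                  for N in range(1, int(X ** 0.5)))

# 4. primes of the form n^2 + 1 up to X
quad = sum(1 for n in range(1, int((X - 1) ** 0.5) + 1) if is_prime[n * n + 1])

print(goldbach_ok, twins, legendre_ok, quad)   # both checks pass; both counts keep growing
```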

One of the main reasons that the prime numbers {{\mathcal P}} are so difficult to deal with rigorously is that they have very little usable algebraic or geometric structure that we know how to exploit; for instance, we do not have any useful prime generating functions. One of course can create non-useful functions of this form, such as the ordered parameterisation {n \mapsto p_n} that maps each natural number {n} to the {n^{th}} prime {p_n}, or one could invoke Matiyasevich’s theorem to produce a polynomial of many variables whose only positive values are prime, but these sorts of functions have no usable structure to exploit (for instance, they give no insight into any of the Landau problems listed above; see also Remark 2 below). The various primality tests in the literature, while useful for practical applications (e.g. cryptography) involving primes, have also proven to be of little utility for these sorts of problems; again, see Remark 2. In fact, in order to make plausible heuristic predictions about the primes, it is best to take almost the opposite point of view to the structured viewpoint, using as a starting point the belief that the primes exhibit strong pseudorandomness properties that are largely incompatible with the presence of rigid algebraic or geometric structure. We will discuss such heuristics later in this course.

It may be in the future that some usable structure to the primes (or related objects) will eventually be located (this is for instance one of the motivations in developing a rigorous theory of the “field with one element“, although this theory is far from being fully realised at present). For now, though, analytic and combinatorial methods have proven to be the most effective way forward, as they can often be used even in the near-complete absence of structure.

In this course, we will not discuss combinatorial approaches (such as the deployment of tools from additive combinatorics) in depth, but instead focus on the analytic methods. The basic principles of this approach can be summarised as follows:

  1. Rather than try to isolate individual primes {p} in {{\mathcal P}}, one works with the set of primes {{\mathcal P}} in aggregate, focusing in particular on asymptotic statistics of this set. For instance, rather than try to find a single pair {n,n+2} of twin primes, one can focus instead on the count {|\{ n \leq x: n,n+2 \in {\mathcal P} \}|} of twin primes up to some threshold {x}. Similarly, one can focus on counts such as {|\{ n \leq N: n, N-n \in {\mathcal P} \}|}, {|\{ p \in {\mathcal P}: N^2 < p < (N+1)^2 \}|}, or {|\{ n \leq x: n^2 + 1 \in {\mathcal P} \}|}, which are the natural counts associated to the other three Landau problems. In all four of Landau’s problems, the basic task is now to obtain non-trivial lower bounds on these counts.
  2. If one wishes to proceed analytically rather than combinatorially, one should convert all these counts into sums, using the fundamental identity

    \displaystyle |A| = \sum_n 1_A(n),

    (or variants thereof) for the cardinality {|A|} of subsets {A} of the natural numbers {{\bf N}}, where {1_A} is the indicator function of {A} (and {n} ranges over {{\bf N}}). Thus we are now interested in estimating (and particularly in lower bounding) sums such as

    \displaystyle \sum_{n \leq N} 1_{{\mathcal P}}(n) 1_{{\mathcal P}}(N-n),

    \displaystyle \sum_{n \leq x} 1_{{\mathcal P}}(n) 1_{{\mathcal P}}(n+2),

    \displaystyle \sum_{N^2 < n < (N+1)^2} 1_{{\mathcal P}}(n),

    or

    \displaystyle \sum_{n \leq x} 1_{{\mathcal P}}(n^2+1).

  3. Once one expresses number-theoretic problems in this fashion, we are naturally led to the more general question of how to accurately estimate (or, less ambitiously, to lower bound or upper bound) sums such as

    \displaystyle \sum_n f(n)

    or more generally bilinear or multilinear sums such as

    \displaystyle \sum_n \sum_m f(n,m)

    or

    \displaystyle \sum_{n_1,\dots,n_k} f(n_1,\dots,n_k)

    for various functions {f} of arithmetic interest. (Importantly, one should also generalise to include integrals as well as sums, particularly contour integrals or integrals over the unit circle or real line, but we postpone discussion of these generalisations to later in the course.) Indeed, a huge portion of modern analytic number theory is devoted to precisely this sort of question. In many cases, we can predict an expected main term for such sums, and then the task is to control the error term between the true sum and its expected main term. It is often convenient to normalise the expected main term to be zero or negligible (e.g. by subtracting a suitable constant from {f}), so that one is now trying to show that a sum of signed real numbers (or perhaps complex numbers) is small. In other words, the question becomes one of rigorously establishing a significant amount of cancellation in one’s sums (also referred to as a gain or savings over a benchmark “trivial bound”). Or to phrase it negatively, the task is to rigorously prevent a conspiracy of non-cancellation, caused for instance by two factors in the summand {f(n)} exhibiting an unexpectedly large correlation with each other. (A small numerical illustration of this sort of cancellation is sketched just after this list.)

  4. It is often difficult to discern cancellation (or to prevent conspiracy) directly for a given sum (such as {\sum_n f(n)}) of interest. However, analytic number theory has developed a large number of techniques to relate one sum to another, and then the strategy is to keep transforming the sum into more and more analytically tractable expressions, until one arrives at a sum for which cancellation can be directly exhibited. (Note though that there is often a short-term tradeoff between analytic tractability and algebraic simplicity; in a typical analytic number theory argument, the sums will get expanded and decomposed into many quite messy-looking sub-sums, until at some point one applies some crude estimation to replace these messy sub-sums by tractable ones again.) There are many transformations available, ranging from such basic tools as the triangle inequality, pointwise domination, or the Cauchy-Schwarz inequality, to key identities of multiplicative number theory (such as the Vaughan identity and the Heath-Brown identity), Fourier analysis (e.g. Fourier inversion, Poisson summation, or more advanced trace formulae), or complex analysis (e.g. the residue theorem, Perron’s formula, or Jensen’s formula). The sheer range of transformations available can be intimidating at first; there is no shortage of transformations and identities in this subject, and if one applies them randomly then one will typically just transform a difficult sum into an even more difficult and intractable expression. However, one can make progress if one is guided by the strategy of isolating and enhancing a desired cancellation (or conspiracy) to the point where it can be easily established (or dispelled), or alternatively by reaching the point where no deep cancellation is needed for the application at hand (or equivalently, where no deep conspiracy can disrupt the application).
  5. One particularly powerful technique (albeit one which, ironically, can be highly “ineffective” in a certain technical sense to be discussed later) is to use one potential conspiracy to defeat another, a technique I refer to as the “dueling conspiracies” method. This technique may be unable to prevent a single strong conspiracy, but it can sometimes be used to prevent two or more such conspiracies from occurring, which is particularly useful if conspiracies come in pairs (e.g. through complex conjugation symmetry, or a functional equation). A related (but more “effective”) strategy is to try to “disperse” a single conspiracy into several distinct conspiracies, which can then be used to defeat each other.
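
As promised in item 3 above, here is a small numerical illustration of cancellation (a sketch of mine, with an arbitrary cutoff): the Mertens sums {M(x) = \sum_{n \leq x} \mu(n)} have the trivial bound {|M(x)| \leq x}, the prime number theorem is equivalent to the cancellation {M(x) = o(x)}, and numerically {M(x)} is far smaller still, roughly of square-root size.

```python
# Sketch: compute M(x) = sum_{n <= x} mu(n) and compare with the trivial bound x.

def mobius_sieve(limit):
    # linear sieve computing the Moebius function mu(1..limit)
    mu = [0] * (limit + 1)
    mu[1] = 1
    spf = [0] * (limit + 1)          # smallest prime factor
    primes = []
    for n in range(2, limit + 1):
        if spf[n] == 0:              # n is prime
            spf[n] = n
            primes.append(n)
            mu[n] = -1
        for p in primes:
            if n * p > limit:
                break
            spf[n * p] = p
            if p == spf[n]:          # p^2 divides n*p, so mu(n*p) = 0
                mu[n * p] = 0
                break
            mu[n * p] = -mu[n]       # one extra distinct prime factor flips the sign
    return mu

X = 10**6
mu = mobius_sieve(X)
M = 0
for x in range(1, X + 1):
    M += mu[x]
    if x in (10**3, 10**4, 10**5, 10**6):
        print(f"M({x}) = {M:6d}   (trivial bound {x}, sqrt(x) ~ {x ** 0.5:.0f})")
```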

As mentioned before, the above strategy has not been able to establish any of the four Landau problems in full. However, these methods can come close to resolving such problems (and we now have some understanding as to why these problems remain out of reach of current methods). For instance, by using these techniques (and a lot of additional effort) one can obtain the following sample partial results on the Landau problems:

  1. Chen’s theorem: every sufficiently large even number {N} is expressible as the sum of a prime and an almost prime (the product of at most two primes). The proof proceeds by finding a nontrivial lower bound on {\sum_{n \leq N} 1_{\mathcal P}(n) 1_{{\mathcal E}_2}(N-n)}, where {{\mathcal E}_2} is the set of almost primes.
  2. Zhang’s theorem: There exist infinitely many pairs {p_n, p_{n+1}} of consecutive primes with {p_{n+1} - p_n \leq 7 \times 10^7}. The proof proceeds by giving a positive lower bound on the quantity {\sum_{x \leq n \leq 2x} (\sum_{i=1}^k 1_{\mathcal P}(n+h_i) - 1)} for large {x} and certain distinct integers {h_1,\dots,h_k} between {0} and {7 \times 10^7}. (The bound {7 \times 10^7} has since been lowered to {246}.)
  3. The Baker-Harman-Pintz theorem: for sufficiently large {x}, there is a prime between {x} and {x + x^{0.525}}. Proven by finding a nontrivial lower bound on {\sum_{x \leq n \leq x+x^{0.525}} 1_{\mathcal P}(n)}.
  4. The Friedlander-Iwaniec theorem: There are infinitely many primes of the form {n^2+m^4}. Proven by finding a nontrivial lower bound on {\sum_{n,m: n^2+m^4 \leq x} 1_{{\mathcal P}}(n^2+m^4)}.

We will discuss (simpler versions of) several of these results in this course.

Of course, for the above general strategy to have any chance of succeeding, one must at some point use some information about the set {{\mathcal P}} of primes. As stated previously, usefully structured parametric descriptions of {{\mathcal P}} do not appear to be available. However, we do have two other fundamental and useful ways to describe {{\mathcal P}}:

  1. (Sieve theory description) The primes {{\mathcal P}} consist of those natural numbers greater than one that are not divisible by any smaller prime.
  2. (Multiplicative number theory description) The primes {{\mathcal P}} are the multiplicative generators of the natural numbers {{\bf N}}: every natural number is uniquely factorisable (up to permutation) into the product of primes (the fundamental theorem of arithmetic).
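
Both descriptions translate directly into code; here is a minimal sketch (mine, purely illustrative) of each, the sieve of Eratosthenes for the first description and trial-division factorisation for the second.

```python
# Sketch: the two fundamental descriptions of the primes, as algorithms.

def primes_by_sieving(limit):
    # description 1: cross out multiples of each prime; what survives is prime
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [n for n in range(2, limit + 1) if is_prime[n]]

def factorise(n):
    # description 2: every natural number factors uniquely into primes
    factors = []
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

print(primes_by_sieving(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(factorise(360))          # [2, 2, 2, 3, 3, 5]
```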

The sieve-theoretic description and its variants lead one to a good understanding of the almost primes, which turn out to be excellent tools for controlling the primes themselves, although there are known limitations as to how much information on the primes one can extract from sieve-theoretic methods alone, which we will discuss later in this course. The multiplicative number theory methods lead one (after some complex or Fourier analysis) to the Riemann zeta function (and other L-functions, particularly the Dirichlet L-functions), with the distribution of zeroes (and poles) of these functions playing a particularly decisive role in the multiplicative methods.

Many of our strongest results in analytic prime number theory are ultimately obtained by incorporating some combination of the above two fundamental descriptions of {{\mathcal P}} (or variants thereof) into the general strategy described above. In contrast, more advanced descriptions of {{\mathcal P}}, such as those coming from the various primality tests available, have (until now, at least) been surprisingly ineffective in practice for attacking problems such as Landau’s problems. One reason for this is that such tests generally involve operations such as exponentiation {a \mapsto a^n} or the factorial function {n \mapsto n!}, which grow too quickly to be amenable to the analytic techniques discussed above.

To give a simple illustration of these two basic approaches to the primes, let us first give two variants of the usual proof of Euclid’s theorem:

Theorem 1 (Euclid’s theorem) There are infinitely many primes.

Proof: (Multiplicative number theory proof) Suppose for contradiction that there were only finitely many primes {p_1,\dots,p_n}. Then, by the fundamental theorem of arithmetic, every natural number is expressible as a product of the primes {p_1,\dots,p_n}. But the natural number {p_1 \dots p_n + 1} is larger than one and yet not divisible by any of the primes {p_1,\dots,p_n}, a contradiction.

(Sieve-theoretic proof) Suppose for contradiction that there were only finitely many primes {p_1,\dots,p_n}. Then, by the Chinese remainder theorem, the set {A} of natural numbers that are not divisible by any of the {p_1,\dots,p_n} has density {\prod_{i=1}^n (1-\frac{1}{p_i})}, that is to say

\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N} | A \cap \{1,\dots,N\} | = \prod_{i=1}^n (1-\frac{1}{p_i}).

In particular, {A} has positive density and thus contains an element larger than {1}. But the least such element must be prime, since any smaller non-trivial factor of it would also lie in {A}; thus we have located one further prime in addition to {p_1,\dots,p_n}, a contradiction. \Box
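
The multiplicative proof can be run as an actual computation; the following sketch (my own) takes a purported complete list of primes and extracts from {p_1 \dots p_n + 1} a prime missing from the list.

```python
# Sketch: Euclid's argument as a computation.  Given any finite list of primes,
# p_1*...*p_n + 1 has a prime factor outside the list, so the list is incomplete.

def least_prime_factor(n):
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p
        p += 1
    return n          # n itself is prime

primes = [2, 3, 5, 7, 11, 13]
m = 1
for p in primes:
    m *= p
m += 1                # 30031 = 59 * 509
q = least_prime_factor(m)
print(m, "has prime factor", q, "which is not in", primes)
assert q not in primes   # m leaves remainder 1 on division by each listed prime
```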

Remark 1 One can also phrase the proof of Euclid’s theorem in a fashion that largely avoids the use of contradiction; see this previous blog post for more discussion.

Both proofs in fact extend to give a stronger result:

Theorem 2 (Euler’s theorem) The sum {\sum_{p \in {\mathcal P}} \frac{1}{p}} is divergent.

Proof: (Multiplicative number theory proof) By the fundamental theorem of arithmetic, every natural number is expressible uniquely as the product {p_1^{a_1} \dots p_n^{a_n}} of primes in increasing order. In particular, we have the identity

\displaystyle \sum_{n=1}^\infty \frac{1}{n} = \prod_{p \in {\mathcal P}} ( 1 + \frac{1}{p} + \frac{1}{p^2} + \dots )

(both sides make sense in {[0,+\infty]} as everything is unsigned). Since the left-hand side is divergent, the right-hand side is as well. But

\displaystyle ( 1 + \frac{1}{p} + \frac{1}{p^2} + \dots ) = \exp( \frac{1}{p} + O( \frac{1}{p^2} ) )

and {\sum_{p \in {\mathcal P}} \frac{1}{p^2}\leq \sum_{n=1}^\infty \frac{1}{n^2} < \infty}, so {\sum_{p \in {\mathcal P}} \frac{1}{p}} must be divergent.

(Sieve-theoretic proof) Suppose for contradiction that the sum {\sum_{p \in {\mathcal P}} \frac{1}{p}} is convergent. For each natural number {k}, let {A_k} be the set of natural numbers not divisible by the first {k} primes {p_1,\dots,p_k}, and let {A} be the set of numbers not divisible by any prime in {{\mathcal P}}. As in the previous proof, each {A_k} has density {\prod_{i=1}^k (1-\frac{1}{p_i})}. Also, since {\{1,\dots,N\}} contains at most {\frac{N}{p}} multiples of {p}, we have from the union bound that

\displaystyle | A \cap \{1,\dots,N \}| = |A_k \cap \{1,\dots,N\}| - O( N \sum_{i > k} \frac{1}{p_i} ).

Since {\sum_{i=1}^\infty \frac{1}{p_i}} is assumed to be convergent, we conclude that the density of {A_k} converges to the density of {A}; thus {A} has density {\prod_{i=1}^\infty (1-\frac{1}{p_i})}, which is non-zero by the hypothesis that {\sum_{i=1}^\infty \frac{1}{p_i}} converges. On the other hand, since the primes are the only numbers greater than one not divisible by smaller primes, {A} is just {\{1\}}, which has density zero, giving the desired contradiction. \Box
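
Both arguments are easy to illustrate numerically; the sketch below (mine, with arbitrary cutoffs) compares the harmonic sum with the partial Euler product (which, when expanded out, dominates it, since every {n \leq X} is a product of primes at most {X}) and with the partial sums of {\sum_p \frac{1}{p}}: the first two grow together like {\log X}, while the third creeps upward like {\log\log X}, consistent with Mertens’ theorems.

```python
# Sketch: harmonic sum vs. partial Euler product vs. partial sums of 1/p.

def primes_up_to(limit):
    # simple sieve of Eratosthenes, as in the earlier sketches
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [n for n in range(2, limit + 1) if is_prime[n]]

for X in (10**2, 10**4, 10**6):
    harmonic = sum(1.0 / n for n in range(1, X + 1))
    euler, prime_sum = 1.0, 0.0
    for p in primes_up_to(X):
        euler *= 1 / (1 - 1 / p)       # the factor 1 + 1/p + 1/p^2 + ...
        prime_sum += 1 / p
    print(f"X = {X:>7}: harmonic sum {harmonic:6.2f}, "
          f"Euler product {euler:6.2f}, sum of 1/p {prime_sum:5.3f}")
```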

Remark 2 We have seen how easy it is to prove Euler’s theorem by analytic methods. In contrast, there does not seem to be any known proof of this theorem that proceeds by using any sort of prime-generating formula or a primality test, which is further evidence that such tools are not the most effective way to make progress on problems such as Landau’s problems. (But the weaker theorem of Euclid, Theorem 1, can sometimes be proven by such devices.)

The two proofs of Theorem 2 given above are essentially the same proof, as is hinted at by the geometric series identity

\displaystyle 1 + \frac{1}{p} + \frac{1}{p^2} + \dots = (1 - \frac{1}{p})^{-1}.

One can also see the Riemann zeta function begin to make an appearance in both proofs. Once one goes beyond Euler’s theorem, though, the sieve-theoretic and multiplicative methods begin to diverge significantly. On one hand, sieve theory can still handle, to some extent, sets such as the twin primes, despite the lack of multiplicative structure (one simply has to sieve out two residue classes per prime, rather than one); on the other, multiplicative number theory can attain results such as the prime number theorem, which purely sieve-theoretic techniques have not been able to establish. The deepest results in analytic number theory will typically require a combination of both sieve-theoretic methods and multiplicative methods in conjunction with the many transforms discussed earlier (and, in many cases, additional inputs from other fields of mathematics such as arithmetic geometry, ergodic theory, or additive combinatorics).

