- For the twin prime conjecture, one can use the linear forms $n$, $n+2$, and the property $P(n)$ in question is the assertion that $n$ and $n+2$ are both prime.
- For the even Goldbach conjecture, the claim is similar but one uses the linear forms $n$, $N-n$ for some even integer $N$.
- For Chen’s theorem, we use the same linear forms as in the previous two cases, but now $P(n)$ is the assertion that $n$ is prime and $n+2$ (or $N-n$) is an almost prime (in the sense that there are at most two prime factors).
- In the recent results establishing bounded gaps between primes, we use the linear forms $n+h_1, \ldots, n+h_k$ for some admissible tuple $(h_1,\ldots,h_k)$, and take $P(n)$ to be the assertion that at least two of $n+h_1,\ldots,n+h_k$ are prime.

For these sorts of results, one can try a sieve-theoretic approach, which can broadly be formulated as follows:

- First, one chooses a carefully selected
*sieve weight* $\nu: {\bf N} \rightarrow {\bf R}$, which could for instance be a non-negative function having a divisor sum form such as $\nu(n) = \sum_{d | n: d \leq R} \lambda_d$ for some coefficients $\lambda_d$, where $R$ is a natural scale parameter. The precise choice of sieve weight is often quite a delicate matter, but will not be discussed here. (In some cases, one may work with multiple sieve weights $\nu_1, \nu_2, \ldots$.)

- Next, one uses tools from analytic number theory (such as the Bombieri-Vinogradov theorem) to obtain upper and lower bounds for sums such as

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) f(n),$

where $f$ is some “arithmetic” function involving the prime factorisation of $n$ (we will be a bit vague about what this means precisely, but a typical choice of $f$ might be a Dirichlet convolution $\alpha * \beta$ of two other arithmetic functions $\alpha, \beta$).

- Using some combinatorial arguments, one manipulates these upper and lower bounds, together with the non-negative nature of $\nu$, to conclude the existence of an $n$ in the support of $\nu$ (or of at least one of the sieve weights being considered) for which $P(n)$ holds.
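To give a concrete feel for what a divisor-sum sieve weight looks like, here is a toy numerical sketch (this is the classical truncated-Möbius construction, far cruder than the delicate weights alluded to above): squaring the truncated sum $\sum_{d | n: d \leq R} \mu(d)$ gives a non-negative weight that equals $1$ on every number with no prime factor up to $R$.

```python
def mobius(n):
    # Mobius function: (-1)^k if n is a product of k distinct primes, 0 otherwise
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # repeated prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def nu(n, R):
    # Toy divisor-sum sieve weight nu(n) = (sum_{d | n, d <= R} mu(d))^2.
    # Non-negative by construction; if n has no prime factor <= R, then d = 1
    # is the only divisor of n up to R, so the inner sum is 1 and nu(n) = 1.
    return sum(mobius(d) for d in range(1, R + 1) if n % d == 0) ** 2

R = 10
for n in range(2, 500):
    rough = all(n % p for p in (2, 3, 5, 7))  # no prime factor <= R
    assert nu(n, R) >= (1 if rough else 0)    # nu majorises the rough indicator
```

Real sieve weights (e.g. Selberg’s) optimise the coefficients $\lambda_d$ rather than simply taking $\lambda_d = \mu(d)$.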

For instance, in the recent results on bounded gaps between primes, one selects a sieve weight $\nu$ for which one has upper bounds on

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) \ \ \ \ \ (1)$

and lower bounds on

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) 1_{n+h_i \hbox{ prime}} \ \ \ \ \ (2)$

for each $1 \leq i \leq k$, so that one can show that the expression

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) \left( \sum_{i=1}^k 1_{n+h_i \hbox{ prime}} - 1 \right) \ \ \ \ \ (3)$

is strictly positive, which implies the existence of an $n$ in the support of $\nu$ such that at least two of $n+h_1,\ldots,n+h_k$ are prime. As another example, to prove Chen’s theorem to find $n$ such that $n$ is prime and $n+2$ is almost prime, one uses a variety of sieve weights to produce a lower bound for

$\displaystyle S_1 := \sum_{x \leq n \leq 2x} \nu(n) 1_{n \hbox{ prime}} 1_{n+2 \hbox{ rough}}$

and an upper bound for

$\displaystyle S_2 := \sum_{x \leq n \leq 2x} \nu(n) 1_{n \hbox{ prime}} 1_{n+2 \hbox{ rough}} \Omega_I(n+2)$

and

$\displaystyle S_3 := \sum_{x \leq n \leq 2x} \nu(n) 1_{n \hbox{ prime}} 1_{n+2 \hbox{ rough}} 1_{n+2 = p_1 p_2 p_3 \hbox{ for some } z \leq p_1 \leq (2x)^{1/3} \leq p_2 \leq p_3},$

where $z$ is some parameter between $1$ and $(2x)^{1/3}$, “rough” means that all prime factors are at least $z$, and $\Omega_I(m)$ denotes the number of prime factors of $m$ in the interval $I := [z, (2x)^{1/3}]$, counted with multiplicity. One can observe that if $S_1 - \frac{1}{2} S_2 - \frac{1}{2} S_3 > 0$, then there must be at least one $n$ for which $n$ is prime and $n+2$ is almost prime, since for any rough number $m \leq 2x+2$, the quantity

$\displaystyle 1 - \frac{1}{2} \Omega_I(m) - \frac{1}{2} 1_{m = p_1 p_2 p_3 \hbox{ for some } z \leq p_1 \leq (2x)^{1/3} \leq p_2 \leq p_3}$

is only positive when $m$ is an almost prime (if $m$ has three or more prime factors counting multiplicity, then either it has at least two prime factors in $I$, or it is of the form $p_1 p_2 p_3$ for some $z \leq p_1 \leq (2x)^{1/3} \leq p_2 \leq p_3$). The upper and lower bounds on $S_1, S_2, S_3$ are ultimately produced via asymptotics for expressions of the form (1), (2), (3) for various divisor sums $\nu$ and various arithmetic functions $f$.

Unfortunately, there is an obstruction to sieve-theoretic techniques working for certain types of properties $P$, which Zeb Brady and I recently formalised at an AIM workshop this week. To state the result, we recall the Liouville function $\lambda: {\bf N} \rightarrow \{-1,+1\}$, defined by setting $\lambda(n) := (-1)^j$ whenever $n$ is the product of exactly $j$ primes (counting multiplicity). Define a *sign pattern* to be an element $(\epsilon_1,\ldots,\epsilon_k)$ of the discrete cube $\{-1,+1\}^k$, where $k$ is the number of linear forms $a_1 n + b_1, \ldots, a_k n + b_k$ under consideration. Given a property $P(n)$ of natural numbers $n$, we say that a sign pattern $(\epsilon_1,\ldots,\epsilon_k)$ is *forbidden* by $P$ if there does not exist any natural number $n$ obeying $P(n)$ for which

$\displaystyle (\lambda(a_1 n + b_1), \ldots, \lambda(a_k n + b_k)) = (\epsilon_1,\ldots,\epsilon_k).$

Example 1  Let $P(n)$ be the property that at least two of $n+h_1, n+h_2, n+h_3$ are prime. Then the sign patterns $(+1,+1,+1)$, $(-1,+1,+1)$, $(+1,-1,+1)$, $(+1,+1,-1)$ are forbidden, because prime numbers have a Liouville function of $-1$, so that $P(n)$ can only occur when at least two of $\lambda(n+h_1), \lambda(n+h_2), \lambda(n+h_3)$ are equal to $-1$.

Example 2  Let $P(n)$ be the property that $n$ is prime and $n+2$ is almost prime. Then the only forbidden sign patterns are $(+1,+1)$ and $(+1,-1)$.

Example 3  Let $P(n)$ be the property that $n$ and $n+2$ are both prime. Then $(+1,+1)$, $(+1,-1)$, $(-1,+1)$ are all forbidden sign patterns.
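The forbidden patterns in these examples can be checked empirically at small scales; for instance, for Example 3 a short computation (a sketch using naive trial-division factoring) confirms that the only sign pattern $(\lambda(n), \lambda(n+2))$ actually realised with $n, n+2$ both prime is $(-1,-1)$:

```python
def liouville(n):
    # Liouville function: (-1)^Omega(n), with Omega(n) the number of prime
    # factors of n counted with multiplicity
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    return (-1) ** (count + (1 if n > 1 else 0))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Sign patterns (lambda(n), lambda(n+2)) realised by twin primes n, n+2:
patterns = {(liouville(n), liouville(n + 2))
            for n in range(2, 10000) if is_prime(n) and is_prime(n + 2)}
print(patterns)  # {(-1, -1)}: the other three patterns of Example 3 never occur
```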

We then have a parity obstruction as soon as $P$ has “too many” forbidden sign patterns, in the following (slightly informal) sense:

Claim 1 (Parity obstruction)  Suppose that $P$ is such that the convex hull of the forbidden sign patterns of $P$ contains the origin. Then one cannot use the above sieve-theoretic approach to establish the existence of an $n$ such that $P(n)$ holds.

Thus for instance, the property in Example 3 is subject to the parity obstruction, since $(0,0)$ is a convex combination of $(+1,-1)$ and $(-1,+1)$, whereas the properties in Examples 1, 2 are not. One can also check that the property “at least $j$ of the numbers $n+h_1,\ldots,n+h_k$ are prime” is subject to the parity obstruction as soon as $j \geq \frac{k}{2}+1$.
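The convex-hull criterion of Claim 1 is easy to test mechanically for small $k$. The sketch below (assuming numpy is available) does a brute-force Carathéodory search rather than calling an LP solver, which is fine at these sizes:

```python
import itertools
import numpy as np

def origin_in_hull(points):
    # Caratheodory: if the origin is in the convex hull of points in R^k,
    # it is already in the hull of some subset of at most k+1 of them.
    k = len(points[0])
    for m in range(1, k + 2):
        for subset in itertools.combinations(points, m):
            # Solve sum_i w_i s_i = 0 together with sum_i w_i = 1
            A = np.vstack([np.array(subset, dtype=float).T, np.ones(m)])
            b = np.append(np.zeros(k), 1.0)
            w, *_ = np.linalg.lstsq(A, b, rcond=None)
            if np.allclose(A @ w, b) and np.all(w >= -1e-9):
                return True
    return False

# Example 3 (twin primes): forbidden patterns (+1,+1), (+1,-1), (-1,+1)
print(origin_in_hull([(1, 1), (1, -1), (-1, 1)]))  # True: parity-obstructed
# Example 1 (at least two of three prime): patterns with at most one -1
print(origin_in_hull([(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]))  # False
```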

This claim is not precisely a theorem, because it presumes a certain “Liouville pseudorandomness conjecture” (a very close cousin of the better known “Möbius pseudorandomness conjecture”) which is a bit difficult to formalise precisely. However, this conjecture is widely believed by analytic number theorists; see e.g. this blog post for a discussion. (Note though that there are scenarios, most notably the “Siegel zero” scenario, in which there is a severe breakdown of this pseudorandomness conjecture, and the parity obstruction then disappears. A typical instance of this is Heath-Brown’s proof of the twin prime conjecture (which would ordinarily be subject to the parity obstruction) under the hypothesis of a Siegel zero.) The obstruction also does not prevent the establishment of an $n$ such that $P(n)$ holds by introducing additional sieve axioms beyond upper and lower bounds on quantities such as (1), (2), (3). The proof of the Friedlander-Iwaniec theorem is a good example of this latter scenario.

Now we give a (slightly nonrigorous) proof of the claim.

*Proof:* (Nonrigorous) Suppose that the convex hull of the forbidden sign patterns contains the origin. Then we can find non-negative numbers $p_\epsilon$ for sign patterns $\epsilon = (\epsilon_1,\ldots,\epsilon_k)$, which sum to $1$, are non-zero only for forbidden sign patterns, and which have mean zero in the sense that

$\displaystyle \sum_\epsilon p_\epsilon \epsilon_i = 0$

for all $i = 1,\ldots,k$. By Fourier expansion (or Lagrange interpolation), one can then write $p_\epsilon$ as a polynomial

$\displaystyle p_\epsilon = \frac{1}{2^k} \left( 1 + Q(\epsilon_1,\ldots,\epsilon_k) \right)$

where $Q$ is a polynomial in $k$ variables that is a linear combination of monomials $\epsilon_{i_1} \ldots \epsilon_{i_r}$ with $r \geq 2$ and $i_1 < \ldots < i_r$ (thus $Q$ has no constant or linear terms, and no monomials with repeated terms). If we now consider the weight function

$\displaystyle w(n) := 2^k p_{(\lambda(a_1 n + b_1), \ldots, \lambda(a_k n + b_k))}$

then $w$ is non-negative, is supported solely on $n$ for which $(\lambda(a_1 n + b_1), \ldots, \lambda(a_k n + b_k))$ is a forbidden pattern, and is equal to $1$ plus a linear combination of monomials $\lambda(a_{i_1} n + b_{i_1}) \ldots \lambda(a_{i_r} n + b_{i_r})$ with $r \geq 2$.

The Liouville pseudorandomness principle then predicts that sums of the form

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) \lambda(a_{i_1} n + b_{i_1}) \ldots \lambda(a_{i_r} n + b_{i_r})$

and

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) 1_{a_i n + b_i \hbox{ prime}} \lambda(a_{i_1} n + b_{i_1}) \ldots \lambda(a_{i_r} n + b_{i_r})$

or more generally

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) f(n) \lambda(a_{i_1} n + b_{i_1}) \ldots \lambda(a_{i_r} n + b_{i_r})$

should be asymptotically negligible; intuitively, the point here is that the prime factorisation of $n$ should not influence the Liouville function of $a_{i_1} n + b_{i_1}, \ldots, a_{i_r} n + b_{i_r}$, even on the short arithmetic progressions that the divisor sum $\nu$ is built out of, and so any monomial occurring in $Q$ should exhibit strong cancellation for any of the above sums. If one accepts this principle, then all the expressions (1), (2), (3) should be essentially unchanged when $\nu(n)$ is replaced by $\nu(n) w(n)$.
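Small-scale numerics are no substitute for the conjecture, but one can at least observe the expected cancellation in the simplest such correlation, $\sum_{n \leq x} \lambda(n) \lambda(n+2)$ (a sketch with naive trial-division factoring; the normalised sums come out small, consistent with Chowla-type pseudorandomness):

```python
def liouville(n):
    # Liouville function (-1)^Omega(n), computed by trial division
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    return (-1) ** (count + (1 if n > 1 else 0))

for x in (10**3, 10**4, 10**5):
    s = sum(liouville(n) * liouville(n + 2) for n in range(1, x + 1))
    print(x, s / x)  # normalised correlation, small compared to 1
```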

Suppose now for sake of contradiction that one could use sieve-theoretic methods to locate an $n$ in the support of some sieve weight $\nu$ obeying $P(n)$. Then, by reweighting all sieve weights by the additional multiplicative factor of $w(n)$, the same arguments should also be able to locate an $n$ in the support of $\nu w$ for which $P(n)$ holds. But $w$ is only supported on those $n$ whose Liouville sign pattern is forbidden, a contradiction.

Claim 1 is sharp in the following sense: if the convex hull of the forbidden sign patterns of $P$ does *not* contain the origin, then by the Hahn-Banach theorem (in the hyperplane separation form), there exist real coefficients $c_1,\ldots,c_k$ such that

$\displaystyle c_1 \epsilon_1 + \ldots + c_k \epsilon_k \geq c$

for all forbidden sign patterns $(\epsilon_1,\ldots,\epsilon_k)$ and some $c > 0$. On the other hand, from Liouville pseudorandomness one expects that

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) \left( c_1 \lambda(a_1 n + b_1) + \ldots + c_k \lambda(a_k n + b_k) \right)$

is negligible (as compared against $\sum_{x \leq n \leq 2x} \nu(n)$) for any reasonable sieve weight $\nu$. We conclude that for some $n$ in the support of $\nu$, one has

$\displaystyle c_1 \lambda(a_1 n + b_1) + \ldots + c_k \lambda(a_k n + b_k) < c, \ \ \ \ \ (4)$

and hence $(\lambda(a_1 n + b_1), \ldots, \lambda(a_k n + b_k))$ is not a forbidden sign pattern. This does not actually imply that $P(n)$ holds, but it does not prevent $P(n)$ from holding purely from parity considerations. Thus, we do not expect a parity obstruction of the type in Claim 1 to hold when the convex hull of forbidden sign patterns does not contain the origin.

Example 4  Let $G$ be a graph on the vertices $1,\ldots,k$, and let $P(n)$ be the property that one can find an edge $\{i,j\}$ of $G$ with $n+h_i, n+h_j$ both prime. We claim that this property is subject to the parity problem precisely when $G$ is two-colourable. Indeed, if $G$ is two-colourable, then we can colour the vertices $1,\ldots,k$ with two colours (say, red and green) such that all edges in $G$ connect a red vertex to a green vertex. If we then consider the two sign patterns in which all the red vertices have one sign and the green vertices have the opposite sign, these are two forbidden sign patterns whose convex hull contains the origin, and so the parity problem applies. Conversely, suppose that $G$ is not two-colourable; then it contains an odd cycle. Any forbidden sign pattern then must contain more $+1$s on this odd cycle than $-1$s (since otherwise two of the $-1$s are adjacent on this cycle by the pigeonhole principle, and such a pattern is not forbidden), and so by convexity any tuple in the convex hull of the forbidden sign patterns has a positive sum on this odd cycle. Hence the origin is not in the convex hull, and the parity obstruction does not apply. (See also this previous post for a similar obstruction ultimately coming from two-colourability.)
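The two-colourability dichotomy of Example 4 is just bipartiteness, which can be tested by the usual breadth-first 2-colouring (a minimal sketch):

```python
from collections import deque

def two_colourable(vertices, edges):
    # BFS 2-colouring: returns a proper 2-colouring if the graph is bipartite
    # (no odd cycle), and None otherwise
    adj = {v: [] for v in vertices}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    colour = {}
    for start in vertices:
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in colour:
                    colour[w] = 1 - colour[v]
                    queue.append(w)
                elif colour[w] == colour[v]:
                    return None  # odd cycle found: not two-colourable
    return colour

# A 4-cycle is two-colourable, so the associated property is parity-obstructed:
print(two_colourable([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]))
# A triangle is not, so no parity obstruction arises:
print(two_colourable([1, 2, 3], [(1, 2), (2, 3), (3, 1)]))  # None
```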

Example 5  An example of a parity-obstructed property (supplied by Zeb Brady) that does not come from two-colourability: we let be the property that are prime for some collection of pair sets that cover . For instance, this property holds if are both prime, or if are all prime, but not if are the only primes. An example of a forbidden sign pattern is the pattern where are given the sign , and the other three pairs are given . Averaging over permutations of we see that zero lies in the convex hull, and so this example is blocked by parity. However, there is no sign pattern such that it and its negation are both forbidden, which is another formulation of two-colourability.

Of course, the absence of a parity obstruction does not automatically mean that the desired claim is true. For instance, given an admissible $k$-tuple $(h_1,\ldots,h_k)$, parity obstructions do not prevent one from establishing the existence of infinitely many $n$ such that at least three of $n+h_1,\ldots,n+h_k$ are prime; however, we are not yet able to actually establish this, even assuming strong sieve-theoretic hypotheses such as the generalised Elliott-Halberstam hypothesis. (However, the argument giving (4) does easily give the far weaker claim that there exist infinitely many $n$ such that at least three of $n+h_1,\ldots,n+h_k$ have a Liouville function of $-1$.)

Remark 1  Another way to get past the parity problem in some cases is to take advantage of linear forms that are constant multiples of each other (which correlates the Liouville functions to each other). For instance, on GEH we can find two numbers (products of exactly three primes) that differ by exactly ; a direct sieve approach using the linear forms fails due to the parity obstruction, but instead one can first find such that two of are prime, and then among the pairs of linear forms , , one can find a pair of numbers that differ by exactly . See this paper of Goldston, Graham, Pintz, and Yildirim for more examples of this type.

I thank John Friedlander and Sid Graham for helpful discussions and encouragement.


The type of results about primes that one aspires to prove here is well captured by Landau’s classical list of problems:

- Even Goldbach conjecture: every even number greater than two is expressible as the sum of two primes.
- Twin prime conjecture: there are infinitely many pairs $n, n+2$ which are simultaneously prime.
- Legendre’s conjecture: for every natural number $n$, there is a prime between $n^2$ and $(n+1)^2$.
- There are infinitely many primes of the form $n^2+1$.

All four of Landau’s problems remain open, but we have convincing heuristic evidence that they are all true, and in each of the four cases we have some highly non-trivial partial results, some of which will be covered in this course. We also now have some understanding of the barriers we are facing to fully resolving each of these problems, such as the parity problem; this will also be discussed in the course.

One of the main reasons that the prime numbers are so difficult to deal with rigorously is that they have very little usable algebraic or geometric structure that we know how to exploit; for instance, we do not have any useful prime generating functions. One can of course create *non-useful* functions of this form, such as the ordered parameterisation $n \mapsto p_n$ that maps each natural number $n$ to the $n^{th}$ prime $p_n$, or invoke Matiyasevich’s theorem to produce a polynomial of many variables whose only positive values are prime, but these sorts of functions have no usable structure to exploit (for instance, they give no insight into any of the Landau problems listed above; see also Remark 2 below). The various primality tests in the literature, while useful for practical applications (e.g. cryptography) involving primes, have also proven to be of little utility for these sorts of problems; again, see Remark 2. In fact, in order to make plausible heuristic predictions about the primes, it is best to take almost the opposite point of view to the structured viewpoint, using as a starting point the belief that the primes exhibit strong pseudorandomness properties that are largely incompatible with the presence of rigid algebraic or geometric structure. We will discuss such heuristics later in this course.

It may be in the future that some usable structure to the primes (or related objects) will eventually be located (this is for instance one of the motivations in developing a rigorous theory of the “field with one element”, although this theory is far from being fully realised at present). For now, though, analytic and combinatorial methods have proven to be the most effective way forward, as they can often be used even in the near-complete absence of structure.

In this course, we will not discuss combinatorial approaches (such as the deployment of tools from additive combinatorics) in depth, but instead focus on the analytic methods. The basic principles of this approach can be summarised as follows:

- Rather than try to isolate individual primes in ${\mathcal P}$, one works with the set of primes in *aggregate*, focusing in particular on *asymptotic statistics* of this set. For instance, rather than try to find a single pair of twin primes, one can focus instead on the *count* $\pi_2(x)$ of twin primes up to some threshold $x$. Similarly, one can focus on counts such as the number of representations of a large even number $N$ as the sum of two primes, the number of primes between $n^2$ and $(n+1)^2$, or the number of primes of the form $m^2+1$ up to $x$, which are the natural counts associated to the other three Landau problems. In all four of Landau’s problems, the basic task is now to obtain a non-trivial lower bound on these counts.
- If one wishes to proceed analytically rather than combinatorially, one should convert all these counts into sums, using the fundamental identity

$\displaystyle \# A = \sum_n 1_A(n)$

(or variants thereof) for the cardinality of subsets $A$ of the natural numbers ${\bf N}$, where $1_A$ is the indicator function of $A$ (and $n$ ranges over ${\bf N}$). Thus we are now interested in estimating (and particularly in lower bounding) sums such as

$\displaystyle \sum_{n \leq x} 1_{n \hbox{ prime}} 1_{n+2 \hbox{ prime}}$

or

$\displaystyle \sum_{m \leq x} 1_{m^2+1 \hbox{ prime}}.$
- Once one expresses number-theoretic problems in this fashion, we are naturally led to the more general question of how to accurately estimate (or, less ambitiously, to lower bound or upper bound) sums such as

$\displaystyle \sum_{n \leq x} f(n)$

or more generally bilinear or multilinear sums such as

$\displaystyle \sum_{n \leq x} \sum_{m \leq x} f(n,m)$

or

$\displaystyle \sum_{n_1, \ldots, n_k \leq x} f(n_1,\ldots,n_k)$

for various functions $f$ of arithmetic interest. (Importantly, one should also generalise to include integrals as well as sums, particularly contour integrals or integrals over the unit circle or real line, but we postpone discussion of these generalisations to later in the course.) Indeed, a huge portion of modern analytic number theory is devoted to precisely this sort of question. In many cases, we can predict an *expected main term* for such sums, and then the task is to control the *error term* between the true sum and its expected main term. It is often convenient to normalise the expected main term to be zero or negligible (e.g. by subtracting a suitable constant from the summand), so that one is now trying to show that a sum of signed real numbers (or perhaps complex numbers) is small. In other words, the question becomes one of rigorously establishing a significant amount of *cancellation* in one’s sums (also referred to as a *gain* or *savings* over a benchmark “trivial bound”). Or to phrase it negatively, the task is to rigorously prevent a *conspiracy* of non-cancellation, caused for instance by two factors in the summand exhibiting an unexpectedly large correlation with each other.
- It is often difficult to discern cancellation (or to prevent conspiracy) directly for a given sum of interest (such as the twin prime sum $\sum_{n \leq x} 1_{n \hbox{ prime}} 1_{n+2 \hbox{ prime}}$). However, analytic number theory has developed a large number of techniques to relate one sum to another, and then the strategy is to keep transforming the sum into more and more analytically tractable expressions, until one arrives at a sum for which cancellation can be directly exhibited. (Note though that there is often a short-term tradeoff between analytic tractability and algebraic simplicity; in a typical analytic number theory argument, the sums will get expanded and decomposed into many quite messy-looking sub-sums, until at some point one applies some crude estimation to replace these messy sub-sums by tractable ones again.) There are many transformations available, ranging from such basic tools as the triangle inequality, pointwise domination, or the Cauchy-Schwarz inequality to key identities such as multiplicative number theory identities (such as the Vaughan identity and the Heath-Brown identity), Fourier-analytic identities (e.g. Fourier inversion, Poisson summation, or more advanced trace formulae), or complex analytic identities (e.g. the residue theorem, Perron’s formula, or Jensen’s formula). The sheer range of transformations available can be intimidating at first; there is no shortage of transformations and identities in this subject, and if one applies them randomly then one will typically just transform a difficult sum into an even more difficult and intractable expression. However, one can make progress if one is guided by the strategy of isolating and enhancing a desired cancellation (or conspiracy) to the point where it can be easily established (or dispelled), or alternatively to reach the point where no deep cancellation is needed for the application at hand (or equivalently, that no deep conspiracy can disrupt the application).
- One particularly powerful technique (albeit one which, ironically, can be highly “ineffective” in a certain technical sense to be discussed later) is to use one potential conspiracy to defeat another, a technique I refer to as the “dueling conspiracies” method. This technique may be unable to prevent a single strong conspiracy, but it can sometimes be used to prevent *two or more* such conspiracies from occurring, which is particularly useful if conspiracies come in pairs (e.g. through complex conjugation symmetry, or a functional equation). A related (but more “effective”) strategy is to try to “disperse” a single conspiracy into several distinct conspiracies, which can then be used to defeat each other.
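The aggregate counts in the first bullet point above are easy to compute at small scales; e.g. a short sieve computation (illustrative only — these scales say nothing about the asymptotics) gives the counts relevant to the twin prime and $n^2+1$ problems up to $10^6$:

```python
def prime_table(limit):
    # Sieve of Eratosthenes: is_p[n] is True exactly when n is prime
    is_p = [False, False] + [True] * (limit - 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if is_p[p]:
            for m in range(p * p, limit + 1, p):
                is_p[m] = False
    return is_p

x = 10**6
is_p = prime_table(x + 2)
pi = sum(is_p[: x + 1])             # pi(x): number of primes up to x
pi2 = sum(1 for n in range(x) if is_p[n] and is_p[n + 2])    # twin prime pairs
landau4 = sum(1 for m in range(1, 1001) if is_p[m * m + 1])  # primes m^2 + 1
print(pi, pi2, landau4)  # 78498 8169 112
```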

As stated before, the above strategy has not been able to establish any of the four Landau problems as stated. However, one can come close to such problems (and we now have some understanding as to why these problems remain out of reach of current methods). For instance, by using these techniques (and a lot of additional effort) one can obtain the following sample partial results in the Landau problems:

- Chen’s theorem: every sufficiently large even number $N$ is expressible as the sum of a prime and an almost prime (the product of at most two primes). The proof proceeds by finding a nontrivial lower bound on the number of primes $p \leq N$ with $N - p$ an almost prime.
- Zhang’s theorem: There exist infinitely many pairs $p_n, p_{n+1}$ of consecutive primes with $p_{n+1} - p_n \leq 7 \times 10^7$. The proof proceeds by giving a positive lower bound on the quantity $\# \{ x \leq n \leq 2x: \hbox{at least two of } n+h_1,\ldots,n+h_k \hbox{ are prime} \}$ for large $x$ and certain distinct integers $h_1,\ldots,h_k$ between $0$ and $7 \times 10^7$. (The bound $7 \times 10^7$ has since been lowered to $246$.)
- The Baker-Harman-Pintz theorem: for sufficiently large $x$, there is a prime between $x - x^{0.525}$ and $x$. Proven by finding a nontrivial lower bound on $\# \{ x - x^{0.525} \leq p \leq x: p \hbox{ prime} \}$.
- The Friedlander-Iwaniec theorem: There are infinitely many primes of the form $a^2+b^4$. Proven by finding a nontrivial lower bound on $\# \{ a^2+b^4 \leq x: a^2+b^4 \hbox{ prime} \}$.

We will discuss (simpler versions of) several of these results in this course.

Of course, for the above general strategy to have any chance of succeeding, one must at some point use *some* information about the set ${\mathcal P}$ of primes. As stated previously, usefully structured parametric descriptions of ${\mathcal P}$ do not appear to be available. However, we do have two other fundamental and useful ways to describe ${\mathcal P}$:

- (Sieve theory description) The primes ${\mathcal P}$ consist of those numbers greater than one that are not divisible by any smaller prime.
- (Multiplicative number theory description) The primes ${\mathcal P}$ are the multiplicative generators of the natural numbers ${\bf N}$: every natural number is uniquely factorisable (up to permutation) into the product of primes (the fundamental theorem of arithmetic).

The sieve-theoretic description and its variants lead one to a good understanding of the *almost primes*, which turn out to be excellent tools for controlling the primes themselves, although there are known limitations as to how much information on the primes one can extract from sieve-theoretic methods alone, which we will discuss later in this course. The multiplicative number theory methods lead one (after some complex or Fourier analysis) to the Riemann zeta function (and other L-functions, particularly the Dirichlet L-functions), with the distribution of zeroes (and poles) of these functions playing a particularly decisive role in the multiplicative methods.

Many of our strongest results in analytic prime number theory are ultimately obtained by incorporating some combination of the above two fundamental descriptions of ${\mathcal P}$ (or variants thereof) into the general strategy described above. In contrast, more advanced descriptions of ${\mathcal P}$, such as those coming from the various primality tests available, have (until now, at least) been surprisingly ineffective in practice for attacking problems such as Landau’s problems. One reason for this is that such tests generally involve operations such as exponentiation $a^n$ or the factorial function $n!$, which grow too quickly to be amenable to the analytic techniques discussed above.

To give a simple illustration of these two basic approaches to the primes, let us first give two variants of the usual proof of Euclid’s theorem:

Theorem 1 (Euclid’s theorem)  There are infinitely many primes.

*Proof:* (Multiplicative number theory proof) Suppose for contradiction that there were only finitely many primes $p_1, \ldots, p_n$. Then, by the fundamental theorem of arithmetic, every natural number is expressible as the product of the primes $p_1,\ldots,p_n$. But the natural number $p_1 \ldots p_n + 1$ is larger than one, but not divisible by any of the primes $p_1,\ldots,p_n$, a contradiction.

(Sieve-theoretic proof) Suppose for contradiction that there were only finitely many primes $p_1,\ldots,p_n$. Then, by the Chinese remainder theorem, the set $A$ of natural numbers that are not divisible by any of the $p_1,\ldots,p_n$ has density $\prod_{i=1}^n (1 - \frac{1}{p_i})$, that is to say

$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N} \# \{ m \leq N: m \in A \} = \prod_{i=1}^n \left( 1 - \frac{1}{p_i} \right).$

In particular, $A$ has positive density and thus contains an element larger than $1$. But the least such element is one further prime in addition to $p_1,\ldots,p_n$, a contradiction.
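The density claim in this proof can be verified directly on one full period of residues: by the Chinese remainder theorem, exactly $\prod_i (p_i - 1)$ of the $\prod_i p_i$ residue classes avoid all of the $p_i$. A minimal check:

```python
from math import prod

primes = [2, 3, 5, 7, 11]
modulus = prod(primes)  # one full period of the joint residue classes
# Residues in one period divisible by none of the primes:
count = sum(1 for r in range(modulus) if all(r % p for p in primes))
assert count == prod(p - 1 for p in primes)  # Chinese remainder theorem
print(count / modulus)  # the density prod_i (1 - 1/p_i)
```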

Remark 1  One can also phrase the proof of Euclid’s theorem in a fashion that largely avoids the use of contradiction; see this previous blog post for more discussion.

Both proofs in fact extend to give a stronger result:

Theorem 2 (Euler’s theorem)  The sum $\sum_p \frac{1}{p}$ over all primes $p$ is divergent.

*Proof:* (Multiplicative number theory proof) By the fundamental theorem of arithmetic, every natural number is expressible uniquely as the product of primes in increasing order. In particular, we have the identity

$\displaystyle \sum_{n=1}^\infty \frac{1}{n} = \prod_p \left( 1 + \frac{1}{p} + \frac{1}{p^2} + \ldots \right)$

(both sides make sense in $[0,+\infty]$ as everything is unsigned). Since the left-hand side is divergent, the right-hand side is as well. But

$\displaystyle 1 + \frac{1}{p} + \frac{1}{p^2} + \ldots = \exp\left( \frac{1}{p} + O\left( \frac{1}{p^2} \right) \right)$

and $\sum_p \frac{1}{p^2}$ is convergent, so $\sum_p \frac{1}{p}$ must be divergent.

(Sieve-theoretic proof) Suppose for contradiction that the sum $\sum_p \frac{1}{p}$ is convergent. For each natural number $k$, let $A_k$ be the set of natural numbers not divisible by any of the first $k$ primes $p_1,\ldots,p_k$, and let $A$ be the set of numbers not divisible by any prime at all. As in the previous proof, each $A_k$ has density $\prod_{i=1}^k (1 - \frac{1}{p_i})$. Also, since $\{1,\ldots,N\}$ contains at most $\frac{N}{p_j}$ multiples of $p_j$, we have from the union bound that the density of $A_k$ differs from the upper and lower densities of $A$ by at most $\sum_{j > k} \frac{1}{p_j}$.

Since $\sum_p \frac{1}{p}$ is assumed to be convergent, we conclude that the density of $A_k$ converges to the density of $A$; thus $A$ has density $\prod_{i=1}^\infty (1 - \frac{1}{p_i})$, which is non-zero by the hypothesis that $\sum_p \frac{1}{p}$ converges. On the other hand, since the primes are the only numbers greater than one not divisible by any smaller prime (and each prime is of course divisible by itself), $A$ is just $\{1\}$, which has density zero, giving the desired contradiction.
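To see the divergence “in motion”, one can compare the partial sums $\sum_{p \leq x} \frac{1}{p}$ against Mertens’ asymptotic $\log \log x + M$, where $M \approx 0.2615$ is the Meissel-Mertens constant (a quick sketch):

```python
from math import log

def prime_table(limit):
    # Sieve of Eratosthenes
    is_p = [False, False] + [True] * (limit - 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if is_p[p]:
            for m in range(p * p, limit + 1, p):
                is_p[m] = False
    return is_p

x = 10**6
is_p = prime_table(x)
s = sum(1 / p for p in range(2, x + 1) if is_p[p])
print(s, log(log(x)) + 0.2615)  # Mertens: the two agree to within o(1)
```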

Remark 2  We have seen how easy it is to prove Euler’s theorem by analytic methods. In contrast, there does not seem to be any known proof of this theorem that proceeds by using any sort of prime-generating formula or a primality test, which is further evidence that such tools are not the most effective way to make progress on problems such as Landau’s problems. (But the weaker theorem of Euclid, Theorem 1, can sometimes be proven by such devices.)

The two proofs of Theorem 2 given above are essentially the same proof, as is hinted at by the geometric series identity

$\displaystyle \frac{1}{1 - \frac{1}{p}} = 1 + \frac{1}{p} + \frac{1}{p^2} + \ldots.$
One can also see the Riemann zeta function begin to make an appearance in both proofs. Once one goes beyond Euler’s theorem, though, the sieve-theoretic and multiplicative methods begin to diverge significantly. On one hand, sieve theory can still handle to some extent sets such as the twin primes, despite the lack of multiplicative structure (one simply has to sieve out two residue classes per prime, rather than one); on the other, multiplicative number theory can attain results such as the prime number theorem which purely sieve-theoretic techniques have not been able to establish. The deepest results in analytic number theory will typically require a combination of both sieve-theoretic methods and multiplicative methods in conjunction with the many transforms discussed earlier (and, in many cases, additional inputs from other fields of mathematics such as arithmetic geometry, ergodic theory, or additive combinatorics).

** — 1. Topics covered — **

Analytic prime number theory is a vast subject (the 615-page text of Iwaniec and Kowalski, for instance, gives a good indication as to its scope). I will therefore have to be somewhat selective in deciding what subset of this field to cover. I have chosen the following “core” topics to focus on:

- Elementary multiplicative number theory.
- Heuristic random models for the primes.
- The basic theory of the Riemann zeta function and Dirichlet L-functions, and their relationship with the primes.
- Zero-free regions for the zeta function and the Dirichlet L-functions, including Siegel’s theorem.
- The prime number theorem, the Siegel-Walfisz theorem, and the Bombieri-Vinogradov theorem.
- Sieve theory, small and large gaps between the primes, and the parity problem.
- Exponential sum estimates over the integers, and the Vinogradov-Korobov zero-free region.
- Zero density estimates, Hoheisel’s theorem, and Linnik’s theorem.
- Exponential sum estimates over finite fields, and improved distribution estimates for the primes.
- (If time permits) Exponential sum estimates over the primes, the circle method, and Vinogradov’s three-primes theorem.

In order to cover all this material, I will focus on more qualitative results, as opposed to the strongest quantitative results; in particular, I will not attempt to optimise many of the numerical constants and exponents appearing in various estimates. This also allows me to downplay the role of some key components of the field which are not essential for establishing the core results of this course at such a qualitative level:

- I will minimise the use of algebraic number theory tools (such as the class number formula).
- I will avoid deploying the functional equation (or related identities, such as Poisson summation) if they are unnecessary at a qualitative level (though I will note when the functional equation can be used to improve the quantitative results). As it turns out, *all* of the core results mentioned above can in fact be derived without ever invoking the functional equation, although one usually gets poorer numerical exponents as a consequence.
- Somewhat related to this, I will reduce the reliance on complex analytic methods as compared to more traditional presentations of the material, relying in some places instead on Fourier-analytic substitutes, or on results about harmonic functions. (But I will not go as far as deploying the primarily real-variable “pretentious” approach to analytic number theory currently in development by Granville and Soundararajan, although my approach here does align in spirit with that approach.)
- The discussion on sieve methods will be somewhat abridged, focusing primarily on the Selberg sieve, which is a good general-purpose sieve for qualitative applications at least.
- I will almost certainly avoid any discussion of automorphic forms methods.
- Similarly, I will not cover methods that rely on additive combinatorics or ergodic theory.

Of course, many of these additional topics are well covered in existing textbooks, such as the above-mentioned text of Iwaniec and Kowalski (or, for the finer points of sieve theory, the text of Friedlander and Iwaniec). Other good texts that can be used for supplementary reading are Davenport’s “Multiplicative number theory” and Montgomery-Vaughan’s “Multiplicative number theory I”. As for prerequisites: some exposure to complex analysis, Fourier analysis, and real analysis will be particularly helpful, although we will review some of this material as needed (particularly with regard to complex analysis and the theory of harmonic functions). Experience with other quantitative areas of mathematics in which lower bounds, upper bounds, and other forms of estimation are emphasised (e.g. asymptotic combinatorics or theoretical computer science) will also be useful. Knowledge of algebraic number theory or arithmetic geometry will add a valuable additional perspective to the course, but will not be necessary to follow most of the material.

** — 2. Notation — **

In this course, all sums will be understood to be over the natural numbers unless otherwise specified, with the exception of sums over the variable $p$ (or variants such as $p_1$, $p_2$, etc.), which will be understood to be over primes.

We will use asymptotic notation in two contexts, one in which there is no asymptotic parameter present, and one in which there is an asymptotic parameter (such as $x$) that is going to infinity. In the non-asymptotic setting (which is the default context if no asymptotic parameter is explicitly specified), we use $X = O(Y)$, $X \ll Y$, or $Y \gg X$ to denote an estimate of the form $|X| \leq CY$, where $C$ is an absolute constant. In some cases we would like the implied constant $C$ to depend on some additional parameters such as $k$, in which case we will denote this by subscripts, for instance $X = O_k(Y)$ denotes the claim that $|X| \leq C_k Y$ for some $C_k$ depending on $k$.

In some cases it will instead be convenient to work in an asymptotic setting, in which there is an explicitly designated asymptotic parameter (such as $x$) going to infinity. In that case, all mathematical objects will be permitted to depend on this asymptotic parameter, unless they are explicitly referred to as being *fixed*. We then use $X = O(Y)$, $X \ll Y$, or $Y \gg X$ to denote the claim that $|X| \leq CY$ for some fixed $C$. Note that in slight contrast to the non-asymptotic setting, the implied constant $C$ here is allowed to depend on other parameters, so long as these parameters are also fixed. As such, the asymptotic setting can be a convenient way to manage dependencies of various implied constants on parameters. In the asymptotic setting we also use $X = o(Y)$ to denote the claim that $|X| \leq cY$, where $c$ is a quantity which goes to zero as the asymptotic parameter goes to infinity.

Remark 3  In later posts we will make a distinction between implied constants that are *effective* (they can be computed, at least in principle, by some explicit method) and those that are *ineffective* (they can be proven to be finite, but there is no algorithm known to compute them in finite time).

We use to denote the assertion that divides , and to denote the residue class of modulo .

We use to denote the indicator function of a set , thus when and otherwise. Similarly, for any mathematical statement , we use to denote the value when is true and when is false. Thus for instance is the indicator function of the even numbers.
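The indicator-function convention above transcribes directly into code; a minimal sketch (the function name is ours):

```python
# Indicator notation: 1_E(x) = 1 if x lies in E and 0 otherwise; more
# generally 1_S = 1 when the statement S is true and 0 when it is false.
def indicator(statement: bool) -> int:
    return 1 if statement else 0

# The indicator of the even numbers, i.e. 1_{2|n}, evaluated at n = 0..5:
even_indicator = [indicator(n % 2 == 0) for n in range(6)]
```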

We use to denote the cardinality of a set .

Filed under: 254A - analytic prime number theory, admin

where is a function of both time and space , with being the Laplacian operator. One can generalise this equation in a number of ways, for instance by replacing the spatial domain with some other manifold and replacing the Laplacian with the Laplace-Beltrami operator, or adding lower order terms (such as a potential, or a coupling with a magnetic field). But for sake of discussion let us work with the classical wave equation on . We will work formally in this post, being unconcerned with issues of convergence, justification of interchange of integrals, derivatives, or limits, etc. One then has a conserved energy

which we can rewrite using integration by parts and the inner product on as

A key feature of the wave equation is *finite speed of propagation*: if, at time (say), the initial position and initial velocity are both supported in a ball , then at any later time , the position and velocity are supported in the larger ball . This can be seen for instance (formally, at least) by inspecting the exterior energy

and observing (after some integration by parts and differentiation under the integral sign) that it is non-increasing in time, non-negative, and vanishing at time .

The wave equation is second order in time, but one can turn it into a first order system by working with the pair rather than just the single field , where is the velocity field. The system is then

and the conserved energy is now

Finite speed of propagation then tells us that if are both supported on , then are supported on for all . One also has time reversal symmetry: if is a solution, then is a solution also, thus for instance one can establish an analogue of finite speed of propagation for negative times using this symmetry.

If one has an eigenfunction

of the Laplacian, then we have the explicit solutions

of the wave equation, which formally can be used to construct all other solutions via the principle of superposition.

When one has vanishing initial velocity , the solution is given via functional calculus by

and the propagator can be expressed as the average of half-wave operators:

One can view as a minor of the full wave propagator

which is unitary with respect to the energy form (1), and is the fundamental solution to the wave equation in the sense that

Viewing the contraction as a minor of a unitary operator is an instance of the “dilation trick”.

It turns out (as I learned from Yuval Peres) that there is a useful discrete analogue of the wave equation (and of all of the above facts), in which the time variable now lives on the integers rather than on , and the spatial domain can be replaced by discrete domains also (such as graphs). Formally, the system is now of the form

where is now an integer, take values in some Hilbert space (e.g. functions on a graph ), and is some operator on that Hilbert space (which in applications will usually be a self-adjoint contraction). To connect this with the classical wave equation, let us first consider a rescaling of this system

where is a small parameter (representing the discretised time step), now takes values in the integer multiples of , and is the wave propagator operator or the heat propagator (the two operators are different, but agree to fourth order in ). One can then formally verify that the wave equation emerges from this rescaled system in the limit . (Thus, is not exactly the direct analogue of the Laplacian , but can be viewed as something like in the case of small , or if we are not rescaling to the small case. The operator is sometimes known as the *diffusion operator*.)

Assuming is self-adjoint, solutions to the system (3) formally conserve the energy

This energy is positive semi-definite if is a contraction. We have the same time reversal symmetry as before: if solves the system (3), then so does . If one has an eigenfunction

to the operator , then one has an explicit solution

to (3), and (in principle at least) this generates all other solutions via the principle of superposition.

Finite speed of propagation is a lot easier in the discrete setting, though one has to offset the support of the “velocity” field by one unit. Suppose we know that has unit speed in the sense that whenever is supported in a ball , then is supported in the ball . Then an easy induction shows that if are supported in respectively, then are supported in .
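The discrete finite speed of propagation can be watched directly in a concrete instance. The sketch below (our own choices, not from the text) reads the system as the second-order recurrence u(t+1) + u(t-1) = 2 P u(t) on the path {0, …, N-1}, with P the averaging operator (P f)(x) = (f(x-1) + f(x+1))/2, a self-adjoint contraction of unit speed:

```python
import numpy as np

# Discrete wave recurrence u(t+1) + u(t-1) = 2 P u(t) on a path graph.
# P is the symmetric averaging operator: a self-adjoint contraction that
# moves support by at most one vertex per application ("unit speed").
N = 41
P = np.zeros((N, N))
for x in range(N):
    if x > 0:
        P[x, x - 1] = 0.5
    if x < N - 1:
        P[x, x + 1] = 0.5

centre = N // 2
u_prev = np.zeros(N)
u_prev[centre] = 1.0        # u(0): a spike at the centre
u_curr = P @ u_prev         # u(1) = P u(0), the "zero velocity" start

radii = []
for t in range(2, 11):
    u_prev, u_curr = u_curr, 2 * P @ u_curr - u_prev
    support = [x for x in range(N) if abs(u_curr[x]) > 1e-12]
    radii.append((t, max(abs(x - centre) for x in support)))

# Finite speed of propagation: at time t the support lies in the
# ball of radius t around the initial spike.
assert all(r <= t for t, r in radii)
```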

The fundamental solution to the discretised wave equation (3), in the sense of (2), is given by the formula

where and are the Chebyshev polynomials of the first and second kind, thus

and

In particular, is now a minor of , and can also be viewed as an average of with its inverse :

As before, is unitary with respect to the energy form (4), so this is another instance of the dilation trick in action. The powers and are discrete analogues of the heat propagators and wave propagators respectively.
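The Chebyshev facts invoked here are easy to check numerically. The following sanity checks (our own, using NumPy's Chebyshev class) verify the defining identity T_n(cos t) = cos(nt) and the three-term recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), which is exactly the recurrence solved by the discretised wave equation:

```python
import numpy as np
from numpy.polynomial import Chebyshev

theta, x = 0.7, 0.3
# First-kind Chebyshev polynomials satisfy T_n(cos t) = cos(n t).
cos_errs = [abs(Chebyshev.basis(n)(np.cos(theta)) - np.cos(n * theta))
            for n in range(6)]
# Three-term recurrence T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x).
rec_errs = [abs(Chebyshev.basis(n + 1)(x)
                - (2 * x * Chebyshev.basis(n)(x)
                   - Chebyshev.basis(n - 1)(x)))
            for n in range(1, 6)]
```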

One nice application of all this formalism, which I learned from Yuval Peres, is the Varopoulos-Carne inequality:

Theorem 1 (Varopoulos-Carne inequality) Let be a (possibly infinite) regular graph, let , and let be vertices in . Then the probability that the simple random walk at lands at at time is at most , where is the graph distance.

This general inequality is quite sharp, as one can see using the standard Cayley graph on the integers . Very roughly speaking, it asserts that on a regular graph of reasonably controlled growth (e.g. polynomial growth), random walks of length concentrate on the ball of radius or so centred at the origin of the random walk.

*Proof:* Let be the graph Laplacian, thus

for any , where is the degree of the regular graph and the sum is over the vertices that are adjacent to . This is a contraction of unit speed, and the probability that the random walk at lands at at time is

where are the Dirac deltas at . Using (5), we can rewrite this as

where we are now using the energy form (4). We can write

where is the simple random walk of length on the integers, that is to say where are independent uniform Bernoulli signs. Thus we wish to show that

By finite speed of propagation, the inner product here vanishes if . For we can use Cauchy-Schwarz and the unitary nature of to bound the inner product by . Thus the left-hand side may be upper bounded by

and the claim now follows from the Chernoff inequality.
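On the standard Cayley graph of Z the transition probabilities are explicit binomials, so the Varopoulos-Carne bound can be checked exhaustively at small scales; a sketch (parameters our own):

```python
from math import comb, exp

# Check the Varopoulos-Carne bound p_t(x,y) <= 2 exp(-d(x,y)^2 / (2t))
# for the simple random walk on Z, where p_t(0,d) is an explicit binomial.
def walk_prob(t: int, d: int) -> float:
    """P[S_t = d] for the simple random walk on Z started at 0."""
    if (t + d) % 2 != 0 or abs(d) > t:
        return 0.0
    return comb(t, (t + d) // 2) / 2 ** t

# Collect any (t, d) violating the bound; there should be none.
violations = [(t, d) for t in range(1, 40) for d in range(t + 1)
              if walk_prob(t, d) > 2 * exp(-d * d / (2 * t))]
```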

This inequality has many applications, particularly with regards to relating the entropy, mixing time, and concentration of random walks with volume growth of balls; see this text of Lyons and Peres for some examples.

For sake of comparison, here is a continuous counterpart to the Varopoulos-Carne inequality:

Theorem 2 (Continuous Varopoulos-Carne inequality) Let , and let be supported on compact sets respectively. Then where is the Euclidean distance between and .

*Proof:* By Fourier inversion one has

for any real , and thus

By finite speed of propagation, the inner product vanishes when ; otherwise, we can use Cauchy-Schwarz and the contractive nature of to bound this inner product by . Thus

Bounding by , we obtain the claim.

Observe that the argument is quite general and can be applied, for instance, to Riemannian manifolds other than .

Filed under: expository, math.AP, math.MG, math.OA Tagged: Cayley graphs, random walks, Varopoulos-Carne bound, wave equation, Yuval Peres

for any fixed . Unconditionally, the best result so far (up to logarithmic factors) that holds for all primes is by Burgess, who showed that

for any fixed . See this previous post for a proof of these bounds.

In this paper, we show that the Vinogradov conjecture is a consequence of the Elliott-Halberstam conjecture. Using a variant of the argument, we also show that the “Type II” estimates established by Zhang and numerically improved by the Polymath8a project can be used to improve a little on the Vinogradov bound (1), to

although this falls well short of the Burgess bound. However, the method is somewhat different (although in both cases it is the Weil exponential sum bounds that are the source of the gain over (1)) and it is conceivable that a numerically stronger version of the Type II estimates could obtain results that are more competitive with the Burgess bound. At any rate, this demonstrates that the equidistribution estimates introduced by Zhang may have further applications beyond the family of results related to bounded gaps between primes.

The connection between the least quadratic nonresidue problem and Elliott-Halberstam is as follows. Suppose for contradiction that we can find a prime with unusually large. Letting be the quadratic character modulo , this implies that the sums are also unusually large for a significant range of (e.g. ), although the sum is also quite small for large (e.g. ), due to the cancellation present in . It turns out (by a sort of “uncertainty principle” for multiplicative functions, as per this paper of Granville and Soundararajan) that these facts force to be unusually large in magnitude for some large (with for two large absolute constants ). By the periodicity of , this means that

must be unusually large also. However, because is large, one can factorise as for a fairly sparsely supported function . The Elliott-Halberstam conjecture, which controls the distribution of in arithmetic progressions on the average, can then be used to show that is small, giving the required contradiction.

The implication involving Type II estimates is proven by a variant of the argument. If is large, then a character sum is unusually large for a certain . By multiplicativity of , this shows that correlates with , and then by periodicity of , this shows that correlates with for various small . By the Cauchy-Schwarz inequality (cf. this previous blog post), this implies that correlates with for some distinct . But this can be ruled out by using Type II estimates.

I’ll record here a well-known observation concerning potential counterexamples to any improvement to the Burgess bound, that is to say an infinite sequence of primes with . Suppose we let be the asymptotic mean value of the quadratic character at and the mean value of ; these quantities are defined precisely in my paper, but roughly speaking one can think of

and

Thanks to the basic Dirichlet convolution identity , one can establish the *Wirsing integral equation*

for all ; see my paper for details (actually far sharper claims than this appear in previous work of Wirsing and Granville-Soundararajan). If we have an infinite sequence of counterexamples to any improvement to the Burgess bound, then we have

while from the Burgess exponential sum estimates we have

These two constraints, together with the Wirsing integral equation, end up determining and completely. It turns out that we must have

and

and then for , evolves by the integral equation

For instance

and then oscillates in a somewhat strange fashion towards zero as . This very odd behaviour of is surely impossible, but it seems remarkably hard to exclude it without invoking a strong hypothesis, such as GRH or the Elliott-Halberstam conjecture (or weaker versions thereof).

Filed under: math.NT, paper Tagged: Elliott-Halberstam conjecture, quadratic nonresidue, Type II estimate, Vinogradov conjecture

is the von Mangoldt function. It is a basic result in analytic number theory, but requires a bit of effort to prove. One “elementary” proof of this theorem proceeds through the Selberg symmetry formula

where the second von Mangoldt function is defined by the formula

(We are avoiding the use of the symbol here to denote Dirichlet convolution, as we will need this symbol to denote ordinary convolution shortly.) For the convenience of the reader, we give a proof of the Selberg symmetry formula below the fold. Actually, for the purposes of proving the prime number theorem, a weaker form of this estimate suffices.
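Although the displayed formulas have not survived in this copy, the second von Mangoldt function is standardly given by Λ₂(n) = Λ(n) log n + (Λ∗Λ)(n), and the Selberg symmetry formula asserts Σ_{n≤x} Λ₂(n) = 2x log x + O(x). Under that standard reading, a brute-force numerical check (our own sketch, with a generous error constant):

```python
from math import log

x = 20000
# Smallest-prime-factor sieve, used to evaluate the von Mangoldt function.
spf = list(range(x + 1))
for p in range(2, int(x ** 0.5) + 1):
    if spf[p] == p:
        for m in range(p * p, x + 1, p):
            if spf[m] == m:
                spf[m] = p

def mangoldt(n: int) -> float:
    """Lambda(n) = log p if n = p^k, and 0 otherwise."""
    if n < 2:
        return 0.0
    p = spf[n]
    while n % p == 0:
        n //= p
    return log(p) if n == 1 else 0.0

L = [mangoldt(n) for n in range(x + 1)]
pp = [n for n in range(2, x + 1) if L[n] > 0]      # prime powers
S = sum(L[n] * log(n) for n in pp)                 # sum of Lambda(n) log n
for a in pp:                                       # add sum_{ab<=x} L(a) L(b)
    if a * 2 > x:
        break
    for b in pp:
        if a * b > x:
            break
        S += L[a] * L[b]
# Selberg symmetry: the total should be 2 x log x up to O(x).
```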

In this post I would like to record a somewhat “soft analysis” reformulation of the elementary proof of the prime number theorem in terms of Banach algebras, and specifically in Banach algebra structures on (completions of) the space of compactly supported continuous functions equipped with the convolution operation

This soft argument does not easily give any quantitative decay rate in the prime number theorem, but by the same token it avoids many of the quantitative calculations in the traditional proofs of this theorem. Ultimately, the key “soft analysis” fact used is the spectral radius formula

for any element of a unital commutative Banach algebra , where is the space of characters (i.e., continuous unital algebra homomorphisms from to ) of . This formula is due to Gelfand and may be found in any text on Banach algebras; for sake of completeness we prove it below the fold.

The connection between prime numbers and Banach algebras is given by the following consequence of the Selberg symmetry formula.

Theorem 1 (Construction of a Banach algebra norm) For any , let denote the quantity. Then is a seminorm on with the bound

We prove this theorem below the fold. The prime number theorem then follows from Theorem 1 and the following two assertions. The first is an application of the spectral radius formula (6) and some basic Fourier analysis (in particular, the observation that contains a plentiful supply of local units):

Theorem 2 (Non-trivial Banach algebras with many local units have non-trivial spectrum) Let be a seminorm on obeying (7), (8). Suppose that is not identically zero. Then there exists such that for all . In particular, by (7), one has

whenever is a non-negative function.

The second is a consequence of the Selberg symmetry formula and the fact that is real (as well as Mertens’ theorem, in the case), and is closely related to the non-vanishing of the Riemann zeta function on the line :

Theorem 3 (Breaking the parity barrier) Let . Then there exists such that is non-negative, and

Assuming Theorems 1, 2, 3, we may now quickly establish the prime number theorem as follows. Theorem 2 and Theorem 3 imply that the seminorm constructed in Theorem 1 is trivial, and thus

as for any Schwartz function (the decay rate in may depend on ). Specialising to functions of the form for some smooth compactly supported on , we conclude that

as ; by the smooth Urysohn lemma this implies that

as for any fixed , and the prime number theorem then follows by a telescoping series argument.
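The endpoint of this argument, the prime number theorem in the form ψ(x) = Σ_{n≤x} Λ(n) = x + o(x), can be eyeballed numerically; a crude single-scale check (our own sketch):

```python
from math import log

# Compute the Chebyshev function psi(x) = sum over prime powers p^k <= x
# of log p, via a simple sieve, and compare it with x.
x = 100000
is_comp = bytearray(x + 1)
psi = 0.0
for p in range(2, x + 1):
    if not is_comp[p]:                      # p is prime
        for m in range(p * p, x + 1, p):
            is_comp[m] = 1
        q = p
        while q <= x:                       # p, p^2, p^3, ... each add log p
            psi += log(p)
            q *= p
# The prime number theorem predicts psi / x -> 1.
```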

The same argument also yields the prime number theorem in arithmetic progressions, or equivalently that

for any fixed Dirichlet character ; the one difference is that the use of Mertens’ theorem is replaced by the basic fact that the quantity is non-vanishing.

** — 1. Proof of Selberg symmetry formula — **

We now prove (2). From (3) we have

From the integral test we have the estimates

for some absolute constants whose exact value is unimportant for us, and for any . We conclude that

for some further absolute constants . Replacing by and inserting this into (9), one obtains

The error term can be computed to be . The main term simplifies by Möbius inversion to , and the claim follows.

** — 2. Constructing the Banach algebra — **

We now prove Theorem 1. It is convenient to transform the situation from the classical context of arithmetic functions on (such as or ) to the more Fourier-analytic context of Radon measures on the real line . Define the discrete Radon measure

and for any , let denote the left translate of the measure by , thus

for any continuous compactly supported . We note in passing that the prime number theorem (1) is equivalent to the assertion that the translates converge in the vague topology to Lebesgue measure as .

where is the convolution of the Radon measures , and is the measure multiplied by the identity function . From (4) one has

We claim that the Selberg symmetry formula (5) implies (in fact, it is equivalent to) the assertion that the translates converge in the vague topology to . Indeed, (5) implies for any fixed that

or equivalently that

which we rewrite as

Since for , we thus have

which implies that converges vaguely to , and the claim follows.

Now we begin the proof of Theorem 1. Observe that the quantity can be rewritten as

Since converges vaguely to , we see that the measures are precompact in the vague topology, thanks to the Helly selection principle or Prokhorov’s theorem. In particular, we have

for some limit point of the translates in the vague topology. From (12) we have

Finally, we prove (8). By (11), it suffices to show that

for any , where the decay errors are allowed to depend on . Since converges vaguely to , we already have from (10) that

so it suffices to show that

Let be Lebesgue measure on the half-line . Then , so converges vaguely to . The measure is equal to times the function , so by Mertens’ theorem this function also converges vaguely to . We conclude that

converges vaguely to , and so it suffices to show that

We rewrite this as

On the support of , we have , so it suffices to show that

(The error term in can be controlled by using (15) with replaced by , and modifying the preceding arguments to replace by .)

From Fubini’s theorem we have

The integrand vanishes unless . By (11), we have

and

and the claim (15) follows.

** — 3. Non-trivial algebras with many local units have non-trivial spectrum — **

We now prove Theorem 2. Let be the Banach algebra completion of under the seminorm (thus is the space of Cauchy sequences in , quotiented out by the sequences that go to zero in the seminorm ). Since is not identically zero, is a non-trivial commutative Banach algebra (but it is not necessarily unital).

It is convenient to adjoin a unit to to create a unital commutative Banach algebra with the extended norm

for and ; one easily verifies that is a unital commutative Banach algebra.

Suppose that all elements of have zero spectral radius (as defined in (6)). Let be a Schwartz function with compactly supported Fourier transform. Then we can find another Schwartz function with compactly supported Fourier transform such that (by ensuring that on the support of ; thus is a “local unit” on the Fourier support of ). Thus for all . But has spectral radius zero, thus is zero in . By density this implies that is trivial, a contradiction.

Thus there is an element of with positive spectral radius. Then by (6), there is a character that does not vanish identically on . Suppose that for each there exists in the kernel of whose Fourier coefficient is non-vanishing. Since the kernel of is a space closed with respect to convolutions by functions, some Fourier analysis and a smooth partition of unity then show that the kernel of contains any Schwartz function with compactly supported Fourier transform, and thus by density is trivial, a contradiction. Thus there must exist such that contains all test functions with Fourier coefficient vanishing at . From this we conclude that on is a constant multiple of the Fourier coefficient map ; being a non-trivial algebra homomorphism on , we thus have

for all . Since characters have norm at most (as can be seen for instance from (6)), we obtain the claim.

** — 4. Breaking the parity barrier — **

We now prove Theorem 3. We divide into two cases, depending on whether or . If , we let be a continuous function that equals on and is supported on for some large . From Mertens’ theorem we have

for sufficiently large depending on , and thus

The claim then follows by taking sufficiently large.
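Which precise form of Mertens' theorem the argument uses is not visible in the stripped formulas, but one classical version states Σ_{n≤x} Λ(n)/n = log x − γ + o(1), and that ingredient can be checked numerically (our own sketch):

```python
from math import log

# Accumulate sum_{n<=x} Lambda(n)/n via a prime sieve; Mertens-type
# asymptotics predict this equals log x - gamma + o(1), gamma ~ 0.577.
x = 100000
is_comp = bytearray(x + 1)
S = 0.0
for p in range(2, x + 1):
    if not is_comp[p]:                      # p is prime
        for m in range(p * p, x + 1, p):
            is_comp[m] = 1
        q = p
        while q <= x:                       # prime powers contribute log p / p^k
            S += log(p) / q
            q *= p
# The gap log(x) - S should be close to Euler's constant.
```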

Now suppose . In the language of Section 2, we have

for some limit point of the . We can write the right-hand side as

for some phase . From (14), is a real measure between and , so by the triangle inequality we have

Now we set , where is as before. Then

Since is periodic with period and has mean value strictly less than (in fact, it has mean ), we thus have

if is sufficiently large depending on . The claim follows.

** — 5. The prime number theorem in arithmetic progressions — **

Let be a non-principal Dirichlet character of some period . We allow all implied constants in the notation to depend on . In this section we sketch the changes to the above arguments needed to establish

which gives the prime number theorem in arithmetic progressions by the usual Fourier expansion into Dirichlet characters.

We have the twisted versions

and

of (3), (4). Since has mean zero, a decomposition into intervals of length reveals that

from which we obtain the twisted Selberg symmetry formula

If we define the twisted measures

and

then

and hence converges weakly to zero as . Introducing the twisted norms

we may verify that obeys the conclusions of Theorem 1.

By repeating the previous arguments, it will suffice to show that the analogue of Theorem 3 for holds. When , we can argue as in Section 4, where the role of Mertens’ theorem is replaced by Dirichlet’s theorem

which is ultimately a consequence of the non-vanishing of .

For , the argument in Section 4 works with minimal changes if is real-valued. If is complex valued, it still takes only a finite number of values in the unit disk. Then the limit measures appearing in Section 4 are equal to Lebesgue measure times a density taking values in the convex hull of this finite set of values, which is a polygon in the unit disk. One can then modify the arguments in Section 4 to bound

for some phase . If we set as before, we again observe that the function is periodic and has mean strictly less than one, and so we can again establish the required bound if is large enough.

** — 6. Proof of Gelfand formula — **

We now prove (6).

If is a character, then it has an operator norm:

But we may eliminate this norm by using the “tensor power trick”: replacing with and then taking roots, we conclude that

and then on sending we have

Replacing by again, taking roots, and sending we conclude that

(The limit exists because is submultiplicative.) This gives one direction of (6). To give the other direction, suppose for sake of contradiction that we could find an such that

There are two cases, depending on whether we can find a complex number with and non-invertible. First suppose that such a exists; then generates an ideal of , which by Zorn’s lemma is contained in a maximal ideal , whose quotient is then a field. By Neumann series, any element of sufficiently close to the identity is invertible and thus not in ; since is a field, we conclude that the complement of is open, and so is closed. This makes a Banach algebra as well as a field. If is not a multiple of the identity, then is invertible for every and so (by Neumann series) is an analytic function from to which goes to zero at infinity, contradicting Liouville’s theorem. Thus is one-dimensional (this is the Banach-Mazur theorem) and thus isomorphic to ; this gives a continuous unital algebra homomorphism with in the kernel, thus , contradicting the second inequality in (16).

Now suppose that is invertible for all . Then, as in the preceding argument, is an analytic function from to which decays to zero at infinity, so we have the Cauchy integral formula

for any natural number . From the triangle inequality we conclude in particular that

which contradicts the first inequality in (16).

Filed under: expository, math.NT, math.OA, math.SP Tagged: Banach algebra, prime number theorem, spectral theorem

converge to the integral

the triangle density

converges to the integral

the four-cycle density

converges to the integral

and so forth. One can use graph limits to prove many results in graph theory that were traditionally proven using the regularity lemma, such as the triangle removal lemma, and can also reduce many asymptotic graph theory problems to continuous problems involving multilinear integrals (although the latter problems are not necessarily easy to solve!). See this text of Lovasz for a detailed study of graph limits and their applications.
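These subgraph densities are exactly the finite quantities whose limits the graphon integrals describe; here is a small sketch (graph model and parameters our own) computing the edge and triangle densities of an Erdős-Rényi sample, which concentrate near the constant-graphon values p and p³:

```python
import itertools
import random

random.seed(0)
n, p = 60, 0.3
# Sample G(n, p) as a symmetric adjacency matrix.
adj = [[False] * n for _ in range(n)]
for i, j in itertools.combinations(range(n), 2):
    if random.random() < p:
        adj[i][j] = adj[j][i] = True

# Homomorphism-style densities over ordered tuples of vertices.
edge_density = sum(adj[i][j] for i in range(n) for j in range(n)) / n ** 2
tri_density = sum(adj[i][j] and adj[j][k] and adj[i][k]
                  for i in range(n) for j in range(n) for k in range(n)) / n ** 3
```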

One can also express graph limits (and more generally hypergraph limits) in the language of nonstandard analysis (or of ultraproducts); see for instance this paper of Elek and Szegedy, Section 6 of this previous blog post, or this paper of Towsner. (In this post we assume some familiarity with nonstandard analysis, as reviewed for instance in the previous blog post.) Here, one starts as before with a sequence of finite graphs, and then takes an ultraproduct (with respect to some arbitrarily chosen non-principal ultrafilter ) to obtain a nonstandard graph , where is the ultraproduct of the , and similarly for the . The set can then be viewed as a symmetric subset of which is measurable with respect to the Loeb -algebra of the product (see this previous blog post for the construction of Loeb measure). A crucial point is that this -algebra is larger than the product of the Loeb -algebra of the individual vertex set . This leads to a decomposition

where the “graphon” is the orthogonal projection of onto , and the “regular error” is orthogonal to all product sets for . The graphon then captures the statistics of the nonstandard graph , in exact analogy with the more traditional graph limits: for instance, the edge density

(or equivalently, the limit of the along the ultrafilter ) is equal to the integral

where denotes Loeb measure on a nonstandard finite set ; the triangle density

(or equivalently, the limit along of the triangle densities of ) is equal to the integral

and so forth. Note that with this construction, the graphon is living on the Cartesian square of an abstract probability space , which is likely to be inseparable; but it is possible to cut down the Loeb -algebra on to a minimal countable -algebra for which remains measurable (up to null sets), and then one can identify with , bringing this construction of a graphon in line with the traditional notion of a graphon. (See Remark 5 of this previous blog post for more discussion of this point.)

Additive combinatorics, which studies things like the additive structure of finite subsets of an abelian group , has many analogies and connections with asymptotic graph theory; in particular, there is the arithmetic regularity lemma of Green which is analogous to the graph regularity lemma of Szemerédi. (There is also a higher order arithmetic regularity lemma analogous to hypergraph regularity lemmas, but this is not the focus of the discussion here.) Given this, it is natural to suspect that there is a theory of “additive limits” for large additive sets of bounded doubling, analogous to the theory of graph limits for large dense graphs. The purpose of this post is to record a candidate for such an additive limit. This limit can be used as a substitute for the arithmetic regularity lemma in certain results in additive combinatorics, at least if one is willing to settle for qualitative results rather than quantitative ones; I give a few examples of this below the fold.

It seems that to allow for the most flexible and powerful manifestation of this theory, it is convenient to use the nonstandard formulation (among other things, it allows for full use of the transfer principle, whereas a more traditional limit formulation would only allow for a transfer of those quantities continuous with respect to the notion of convergence). Here, the analogue of a nonstandard graph is an *ultra approximate group* in a nonstandard group , defined as the ultraproduct of finite -approximate groups for some standard . (A -approximate group is a symmetric set containing the origin such that can be covered by or fewer translates of .) We then let be the external subgroup of generated by ; equivalently, is the union of over all standard . This space has a Loeb measure , defined by setting

whenever is an internal subset of for any standard , and extended to a countably additive measure; the arguments in Section 6 of this previous blog post can be easily modified to give a construction of this measure.

The Loeb measure is a translation invariant measure on , normalised so that has Loeb measure one. As such, one should think of as being analogous to a locally compact abelian group equipped with a Haar measure. It should be noted though that is not *actually* a locally compact group with Haar measure, for two reasons:

- There is not an obvious topology on that makes it simultaneously locally compact, Hausdorff, and -compact. (One can get one or two out of three without difficulty, though.)
- The addition operation is not measurable from the product Loeb algebra to . Instead, it is measurable from the coarser Loeb algebra to (compare with the analogous situation for nonstandard graphs).

Nevertheless, the analogy is a useful guide for the arguments that follow.

Let denote the space of bounded Loeb measurable functions (modulo almost everywhere equivalence) that are supported on for some standard ; this is a complex algebra with respect to pointwise multiplication. There is also a convolution operation , defined by setting

whenever , are bounded nonstandard functions (extended by zero to all of ), and then extending to arbitrary elements of by density. Equivalently, is the pushforward of the -measurable function under the map .

The basic structural theorem is then as follows.

Theorem 1 (Kronecker factor) Let be an ultra approximate group. Then there exists a (standard) locally compact abelian group of the form for some standard and some compact abelian group , equipped with a Haar measure and a measurable homomorphism (using the Loeb -algebra on and the Borel -algebra on ), with the following properties:

- (i) has dense image, and is the pushforward of Loeb measure by .
- (ii) There exists sets with open and compact, such that
- (iii) Whenever with compact and open, there exists a nonstandard finite set such that
- (iv) If , then we have the convolution formula
where are the pushforwards of to , the convolution on the right-hand side is convolution using , and is the pullback map from to . In particular, if , then for all .

One can view the locally compact abelian group as a “model” or “Kronecker factor” for the ultra approximate group (in close analogy with the Kronecker factor from ergodic theory). In the case that is a genuine nonstandard finite group rather than an ultra approximate group, the non-compact components of the Kronecker group are trivial, and this theorem was implicitly established by Szegedy. The compact group is quite large, and in particular is likely to be inseparable; but as with the case of graphons, when one is only studying at most countably many functions , one can cut down the size of this group to be separable (or equivalently, second countable or metrisable) if desired, so one often works with a “reduced Kronecker factor” which is a quotient of the full Kronecker factor .

Given any sequence of uniformly bounded functions for some fixed , we can view the function defined by

as an “additive limit” of the , in much the same way that graphons are limits of the indicator functions . The additive limits capture some of the statistics of the , for instance the normalised means

converge (along the ultrafilter ) to the mean

and for three sequences of functions, the normalised correlation

converges along to the correlation

the normalised Gowers norm

converges along to the Gowers norm

and so forth. We caution however that some correlations that involve evaluating more than one function at the same point will not necessarily be preserved in the additive limit; for instance the normalised norm

does not necessarily converge to the norm

but can converge instead to a larger quantity, due to the presence of the orthogonal projection in the definition (4) of .

An important special case of an additive limit occurs when the functions involved are indicator functions of some subsets of . The additive limit does not necessarily remain an indicator function, but instead takes values in (much as a graphon takes values in even though the original indicators take values in ). The convolution is then the ultralimit of the normalised convolutions ; in particular, the measure of the support of provides a lower bound on the limiting normalised cardinality of a sumset. In many situations this lower bound is an equality, but this is not necessarily the case, because the sumset could contain a large number of elements which have very few () representations as the sum of two elements of , and in the limit these portions of the sumset fall outside of the support of . (One can think of the support of as describing the “essential” sumset of , discarding those elements that have only very few representations.) Similarly for higher convolutions of . Thus one can use additive limits to partially control the growth of iterated sumsets of subsets of approximate groups , in the regime where stays bounded and goes to infinity.

Theorem 1 can be proven by Fourier-analytic means (combined with Freiman’s theorem from additive combinatorics), and we will do so below the fold. For now, we give some illustrative examples of additive limits.

Example 1 (Bohr sets). We take to be the intervals , where is a sequence going to infinity; these are -approximate groups for all . Let be an irrational real number, let be an interval in , and for each natural number let be the Bohr set. In this case, the (reduced) Kronecker factor can be taken to be the infinite cylinder with the usual Lebesgue measure . The additive limits of and end up being and , where is the finite cylinder

and is the rectangle

Geometrically, one should think of and as being wrapped around the cylinder via the homomorphism , and then one sees that is converging in some normalised weak sense to , and similarly for and . In particular, the additive limit predicts the growth rate of the iterated sumsets to be quadratic in until becomes comparable to , at which point the growth transitions to linear growth, in the regime where is bounded and is large.

If were rational instead of irrational, then one would need to replace by the finite subgroup here.
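The growth heuristic in this example is easy to test numerically. Below is a small Python sketch (not from the original post; the parameters N = 2000, α = √2 and window width 0.1 are illustrative choices) that builds a finite Bohr set and compares |B+B| against the quadratic-regime prediction |B+B| ≈ 4|B| coming from the cylinder picture:

```python
import math

# Finite Bohr set B = { n in [-N, N] : frac(alpha*n) in [0, width) }.
# N, alpha and the window width are illustrative choices.
N, alpha, width = 2000, math.sqrt(2), 0.1
B = [n for n in range(-N, N + 1) if (alpha * n) % 1.0 < width]

# Direct computation of the sumset B + B.
B2 = {a + b for a in B for b in B}

# Quadratic-regime heuristic from the cylinder picture: |B + B| ~ 4|B|,
# since both the interval and the Bohr window roughly double.
ratio = len(B2) / len(B)
print(len(B), len(B2), round(ratio, 2))
```

The printed ratio comes out close to 4, matching the area heuristic; iterating to higher sumsets would eventually exhibit the predicted transition to linear growth once the Bohr window wraps all the way around the circle.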

Example 2 (Structured subsets of progressions). We take to be the rank two progression, where is a sequence going to infinity; these are -approximate groups for all . Let be the subset

Then the (reduced) Kronecker factor can be taken to be with Lebesgue measure , and the additive limits of the and are then and , where is the square

and is the circle

Geometrically, the picture is similar to the Bohr set one, except now one uses a Freiman homomorphism for to embed the original sets into the plane . In particular, one now expects the growth rate of the iterated sumsets and to be quadratic in , in the regime where is bounded and is large.

Example 3 (Dissociated sets). Let be a fixed natural number, and take, where are randomly chosen elements of a large cyclic group , where is a sequence of primes going to infinity. These are -approximate groups. The (reduced) Kronecker factor can (almost surely) then be taken to be with counting measure, and the additive limit of is , where and is the standard basis of . In particular, the iterated sumsets should grow approximately like for bounded and large.

Example 4 (Random subsets of groups). Let be a sequence of finite additive groups whose order is going to infinity. Let be a random subset of of some fixed density . Then (almost surely) the Kronecker factor here can be reduced all the way to the trivial group , and the additive limit of the is the constant function . The convolutions then converge in the ultralimit (modulo almost everywhere equivalence) to the pullback of ; this reflects the fact that of the elements of can be represented as the sum of two elements of in ways. In particular, occupies a proportion of .
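This density heuristic is easy to simulate. The following sketch (with an illustrative prime modulus p = 1009 and density 1/2, not taken from the text) checks that the normalised representation counts concentrate near the constant λ² and that the sumset fills out the whole group:

```python
import random

random.seed(0)
p, lam = 1009, 0.5          # illustrative prime modulus and density
A = [x for x in range(p) if random.random() < lam]

# r[x] = #{(a, b) in A^2 : a + b = x mod p}, normalised by p.
r = [0] * p
for a in A:
    for b in A:
        r[(a + b) % p] += 1
r = [c / p for c in r]

avg = sum(r) / p            # equals (|A|/p)^2, close to lam^2 = 1/4
print(round(avg, 3), min(r) > 0)
```

Here `min(r) > 0` says that every element of the group is a sum of two elements of A, i.e. the sumset is all of Z/p, as the constant additive limit predicts.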

Example 5 (Trigonometric series). Take for a sequence of primes going to infinity, and for each let be an infinite sequence of frequencies chosen uniformly and independently from . Let denote the random trigonometric series. Then (almost surely) we can take the reduced Kronecker factor to be the infinite torus (with the Haar probability measure ), and the additive limit of the then becomes the function defined by the formula

In fact, the pullback is the ultralimit of the . As such, for any standard exponent , the normalised norm

can be seen to converge to the limit

The reader is invited to consider combinations of the above examples, e.g. random subsets of Bohr sets, to get a sense of the general case of Theorem 1.

It is likely that this theorem can be extended to the noncommutative setting, using the noncommutative Freiman theorem of Emmanuel Breuillard, Ben Green, and myself, but I have not attempted to do so here (see though this recent preprint of Anush Tserunyan for some related explorations); in a separate direction, there should be extensions that can control higher Gowers norms, in the spirit of the work of Szegedy.

Note: the arguments below will presume some familiarity with additive combinatorics and with nonstandard analysis, and will be a little sketchy in places.

** — 1. Proof of theorem — **

By Freiman’s theorem for arbitrary abelian groups (see this paper of Green and Ruzsa), we can find an *ultra coset progression* such that

for some standard ; we abbreviate the latter inclusion as . By an ultra coset progression, we mean the sumset of a nonstandard finite group and a nonstandard generalised arithmetic progression

with (known as the *rank*) standard, the *generators* in and the *dimensions* being nonstandard natural numbers. (To get the containment , one can first use the Bogolyubov lemma to get a large ultra coset progression inside , so that can be covered by translates of ; one can then add these translates to the generators of to obtain an ultra coset progression with the required properties.)

We call the ultra coset progression *-proper* if the sums for and for are all distinct. If fails to be -proper, then we can find a containment

where the coset progression has strictly smaller rank than (see e.g. Lemma 5.1 of this paper of Van Vu and myself). Iterating this fact, we see that we may assume without loss of generality that is -proper. In particular, the group can now be parameterised by the sums with for , with each element of having exactly one representation of this form.

The dimensions are either bounded (and thus standard natural numbers) or unbounded. After permuting the generators if necessary, we may assume that are unbounded and are bounded for some with . We then have an external surjective group homomorphism defined by

this will end up being the non-compact portion of the projection map that we will eventually construct. The image is precompact in (in fact it is compact, thanks to countable saturation).

Now we perform some Fourier analysis on (analogous to the usual theory of Fourier analysis on locally compact abelian groups). Define a *frequency* to be a measurable homomorphism from to , and let denote the space of such frequencies; this is an additive group, which should be thought of as a “Pontryagin dual” to (even though is not a locally compact group). Meanwhile, we have the (genuine) Pontryagin dual of , using the identification

The homomorphism then induces a dual homomorphism , defined by the formula

for all and . This homomorphism is easily seen to be injective. If we let denote the cokernel of this map, then is an abelian group (which we will view as a discrete group) and we have the short exact sequence

Observe that is a divisible group. From this and a Zorn’s lemma argument we can split this short exact sequence, lifting up to a subgroup of , so that the latter group can be viewed as the direct sum of and .

Let be the Pontryagin dual of , that is to say the space of all homomorphisms from to (with no measurability or continuity hypotheses imposed). This is a compact abelian group (it is a closed subset of , which is compact by Tychonoff’s theorem). Set . We have a homomorphism , defined by

We claim that has dense image. Since is surjective, it suffices to show that the map from to has dense image, where

is the kernel of . The closure of the image of is a compact subgroup of , so if this map did not have dense image, there would exist a non-trivial in the Pontryagin dual of which annihilates all of . The map then factors through and thus can be identified with an element of ; but and only intersect at , a contradiction. Thus has dense image.

It is a routine matter to verify that is measurable, that is precompact, and that the inverse image of any compact set is contained in for some standard . From this and the Riesz representation theorem, we can define a Haar measure on by defining

for all continuous, compactly supported functions ; the translation invariance of this measure follows from the surjectivity of . From Urysohn’s lemma and the inner and outer regularity of Haar measures, one can then show that is the pushforward of Loeb measure under .

Now we show the convolution property (3). First suppose that , which in particular implies that

for all , since the function factors through . By the Loeb measure construction, we can write as the limit (in ) of functions , where are uniformly bounded nonstandard functions and is some standard natural number. Then we have

which in particular implies that

where ranges over all nonstandard maps of the form

for some and nonstandard homomorphism . From (nonstandard) Fourier analysis, we conclude that

for any bounded nonstandard function , or equivalently that

and thus on taking limits we see that , and on taking further limits we see that for any , as required. This proves (3) when ; similarly when . To finish off the general case of (3), it suffices to show that

for bounded measurable . By Fourier decomposition, we may assume that takes the form

for some and some continuous compactly supported , and similarly

for some and continuous compactly supported .

If , then for some , and one can use this to show that and both vanish. Thus we may assume that ; using modulation symmetry we may then assume that . It thus suffices to show that

A direct calculation shows that the left and right hand sides agree up to constants; but both sides also have integral when integrated against , so they must agree identically.

Now, we prove the inclusions (1). The outer inclusion comes from the compactness (or precompactness) of . For the inner inclusion, we note from (3) and the positive measure and symmetry of that is the pullback of a continuous function on that is positive at the origin, and thus also bounded away from zero on a neighbourhood of the origin. This implies that the set has full measure in . We then let be a smaller symmetric neighbourhood of the origin such that . We then see that for any , the sets and both have full measure in , and hence lies in . This gives the inner inclusion of (1).

Finally, we show the regularity claim (2). Given , we may apply Urysohn’s lemma to find non-negative bounded continuous functions such that is supported in and is at least on . Letting be the pullbacks of by , we conclude using (3) that is at least on and vanishes outside of . Approximating in by bounded nonstandard functions supported in , we conclude that is at least (say) on and less than (say) outside of . If one then sets to be the non-standard set where , we obtain the claim.

** — 2. Sample applications of theorem — **

In this section we illustrate how this theorem can be used to reprove some existing results in additive combinatorics, reducing them to various statements in continuous harmonic analysis. We begin with a qualitative version of a result of Croot and Sisask on almost periods, which reduces to the classical fact that the convolution of two square-integrable functions is continuous.

Proposition 2 (Croot-Sisask). Let be a -approximate group in an additive group , let be subsets of , and let . Then there exists a subset of with such that for all (using the non-normalised convolution ).

The Croot-Sisask argument in fact gives a quantitative lower bound of exponential type on , but such bounds are not available through the qualitative limiting arguments given here. The Croot-Sisask argument also works in non-commutative groups ; it is likely the arguments here would also extend to that setting once one developed a non-commutative version of Theorem 1, but we have not investigated that here.

*Proof:* By the usual transfer arguments, it suffices to show that when is an ultra approximate group, are non-standard subsets of , and , there exists a nonstandard subset of with such that

for all (using the normalised convolution ). But by (3), is the pullback via of a continuous compactly supported function, so (5) holds for in for some neighbourhood of the identity, and thus by (2) it also holds for all in some nonstandard of positive Loeb measure. The claim follows.

Now we give a proof of Roth’s theorem (in the averaged form of Varnavides), at least for groups of odd order.

Proposition 3 (Roth’s theorem). Let , let be a finite group of odd order, and let be such that . Then there are pairs such that .

*Proof:* By the usual transfer arguments, it suffices to show that when is a nonstandard finite group of odd order, and is a nonstandard subset of with , then there are pairs such that ; equivalently, we need to show that

where . As has positive measure, is not identically zero. By a version of the Lebesgue differentiation theorem, we can then find a point in the Kronecker factor group such that has positive density on every precompact neighbourhood of , and is bounded away from zero on a subset of a symmetric open precompact neighbourhood of of density greater than . From this and (3) we see that is bounded away from zero on almost all of for some neighbourhood of . Also, as has odd order, the map is measure-preserving on ; it must also be so on , and so we conclude that has positive measure in , and (6) follows.

Finally, we give a more advanced application of additive limits, namely reproving a lemma of Eberhard, Green, and Manners.

Proposition 4. For every there is such that if is such that , then there is an arithmetic progression such that and .

*Proof:* We perform the transfer step more explicitly here, as it is slightly trickier. Suppose for contradiction that the claim failed; then there exists an and a sequence with

but such that there is no arithmetic progression with such that . Note that must go to infinity (otherwise one could take to consist of a single element of , which must be non-empty from (7)). Taking ultraproducts, we arrive at a nonstandard subset of for some unbounded natural number , such that

and such that there is no nonstandard arithmetic progression with and (say). Here is the Loeb measure associated with the approximate group and the group that it generates. By inspection of the proof of Theorem 1, the Kronecker factor of this group can be taken to be for some compact group , with projection given by for some measurable map , and Haar measure given by the product of Lebesgue measure on and the Haar probability measure on . If we let , then is supported in and takes values in , and from (3) and (8) we see that the set

is such that

We can view as a measurable function , defined by , and similarly the indicator function can be viewed as a measurable function defined by . Being measurable, may be approximated in by piecewise constant functions. One can then adapt the proof of the Lebesgue differentiation theorem to show that almost every is a Lebesgue point of , in the sense that

Similarly, almost every is a Lebesgue point of in the sense that

From this, we see that for almost all , we have the inclusion

up to null sets in , where the convolution is now with respect to the Haar probability measure . On the other hand, from Fubini’s theorem we have

and

Also is supported on . Thus by the pigeonhole principle, we may find an such that

and such that (9) holds up to null sets, and such that is a Lebesgue point for . If we fix this and now set and , we thus have

At this point it is convenient to split the compact abelian group as

where is the connected component of the identity, thus is a totally disconnected group. Let be the pushforward of to via the projection map , thus is a measurable function of total integral . We claim that

To see this, suppose for contradiction that

We may disintegrate

where is a measurable map from to finite measures on , such that for almost every , is supported on and has total mass . For almost every and , we then have that

is supported in . By Fubini’s theorem we have

where is the Haar probability measure on . From (10), we conclude that for almost every , there is a positive measure set of such that

for a positive measure set of (in particular ), and that is supported in . On the other hand, and are supported on sets of measure at least and . Applying Kemperman’s theorem (see this previous post), we conclude that

for almost every with , and for a positive measure set of . But this leads to a contradiction if we take to be within of the essential supremum of . This proves (11).

As is totally disconnected, we can express the origin as the intersection of open subgroups. From this, (11), and a Lebesgue differentiation argument, we may find a coset of an open subgroup of such that

Letting be a pullback of to , we thus have

Since is a Lebesgue point for , we may thus find a neighbourhood of in such that

or equivalently

To finish the proof of the claim, it then suffices to show that differs from a nonstandard arithmetic progression by a set of arbitrarily small Loeb measure.

Consider the quotient homomorphism formed by first using to project to , then projecting to , then to . This is a Loeb measurable map, and thus the pointwise limit (up to null sets) of a nonstandard function. But observe that for , one has if and only if for almost every . In particular, if is a nonstandard function which is sufficiently close to , then if and only if is the most common value of for . Using this, one can find a representative of that is precisely a nonstandard function on (say). Thus is now a nonstandard map from to the standard finite group , and by construction one can check that for all (and not merely almost all) . From this it is easy to see that is periodic with some bounded period, and that the level sets of are infinite nonstandard arithmetic progressions of bounded spacing. The claim then follows.

Filed under: expository, math.CO, math.GR, math.LO Tagged: additive combinatorics, locally compact abelian groups, Loeb measure, nonstandard analysis

Theorem 1 (Cayley’s theorem). Let be a group of some finite order . Then is isomorphic to a subgroup of the symmetric group on elements . Furthermore, this subgroup is simply transitive: given two elements of , there is precisely one element of such that .

One can therefore think of as a sort of “universal” group that contains (up to isomorphism) all the possible groups of order .

*Proof:* The group acts on itself by multiplication on the left, thus each element may be identified with a permutation on given by the map . This can be easily verified to identify with a simply transitive permutation group on . The claim then follows by arbitrarily identifying with .

More explicitly, the permutation group arises by arbitrarily enumerating as and then associating to each group element the permutation defined by the formula

The simply transitive group given by Cayley’s theorem is not unique, due to the arbitrary choice of identification of with , but is unique up to conjugation by an element of . On the other hand, it is easy to see that every simply transitive subgroup of is of order , and that two such groups are isomorphic if and only if they are conjugate by an element of . Thus Cayley’s theorem in fact identifies the moduli space of groups of order (up to isomorphism) with the simply transitive subgroups of (up to conjugacy by elements of ).
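The explicit construction above is easy to verify mechanically. Here is a short Python check for the cyclic group Z/6 (an illustrative choice of group, not from the original post), confirming both the homomorphism property and simple transitivity:

```python
from itertools import product

# Left-translation permutations for G = Z/6: g is sent to x -> g + x.
n = 6
def perm(g):
    return tuple((g + x) % n for x in range(n))

perms = {g: perm(g) for g in range(n)}

# Homomorphism: the permutation of g1 + g2 is the composition of those of g1, g2.
assert all(perms[(g1 + g2) % n] == tuple(perms[g1][perms[g2][x]] for x in range(n))
           for g1, g2 in product(range(n), repeat=2))

# Simple transitivity: for every pair (x, y) there is exactly one g with g + x = y.
assert all(sum(1 for g in range(n) if perms[g][x] == y) == 1
           for x, y in product(range(n), repeat=2))
print("Cayley embedding of Z/6 verified")
```

The same two checks work verbatim for any finite group once one tabulates its multiplication table.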

One can generalise Cayley’s theorem to groups of infinite order without much difficulty. But in this post, I would like to note an (easy) generalisation of Cayley’s theorem in a different direction, in which the group is no longer assumed to be of order , but rather to have an index subgroup that is isomorphic to a fixed group . The generalisation is:

Theorem 2 (Cayley’s theorem for -sets). Let be a group, and let be a group that contains an index subgroup isomorphic to . Then is isomorphic to a subgroup of the semidirect product , defined explicitly as the set of tuples with product and inverse

(This group is a wreath product of with , and is sometimes denoted , or more precisely .) Furthermore, is simply transitive in the following sense: given any two elements of and , there is precisely one in such that and .

Of course, Theorem 1 is the special case of Theorem 2 when is trivial. This theorem allows one to view as a “universal” group for modeling all groups containing a copy of as an index subgroup, in exactly the same way that is a universal group for modeling groups of order . This observation is not at all deep, but I had not seen it before, so I thought I would record it here. (EDIT: as pointed out in comments, this is a slight variant of the universal embedding theorem of Krasner and Kaloujnine, which covers the case when is normal, in which case one can embed into the wreath product , which is a subgroup of .)

*Proof:* The basic idea here is to replace the category of sets in Theorem 1 by the category of -sets, by which we mean sets with a right-action of the group . A morphism between two -sets is a function which respects the right action of , thus for all and .

Observe that if contains a copy of as a subgroup, then one can view as an -set, using the right-action of (which we identify with the indicated subgroup of ). The left action of on itself commutes with the right-action of , and so we can represent by -set automorphisms on the -set .

As has index in , we see that is (non-canonically) isomorphic (as an -set) to the -set with the obvious right action of : . It is easy to see that the group of -set automorphisms of can be identified with , with the latter group acting on the former -set by the rule

(it is routine to verify that this is indeed an action of by -set automorphisms). It is then a routine matter to verify the claims (the simple transitivity of follows from the simple transitivity of the action of on itself).

More explicitly, the group arises by arbitrarily enumerating the left-cosets of in as and then associating to each group element the element , where the permutation and the elements are defined by the formula

By noting that is an index normal subgroup of , we recover the classical result of Poincaré that any group that contains as an index subgroup, contains a normal subgroup of index dividing that is contained in . (Quotienting out the right-action, we recover also the classical *proof* of this result, as the action of on itself then collapses to the action of on the quotient space , the stabiliser of which is .)
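For a concrete instance of Theorem 2, the sketch below takes the illustrative choices G = Z/8 and H = {0, 2, 4, 6} ≅ Z/4 of index 2 (hypothetical small examples, not from the post), builds the map g ↦ (π, (h_i)) from the coset decomposition g + x_i = x_{π(i)} + h_i, and checks that it is an injective homomorphism into the wreath product H ≀ S₂ with the product rule stated in the theorem:

```python
# G = Z/8 contains H = {0, 2, 4, 6} (isomorphic to Z/4) as an index-2
# subgroup, with coset representatives x_1 = 0 and x_2 = 1.
n, H, reps = 8, {0, 2, 4, 6}, [0, 1]

def embed(g):
    # Solve g + x_i = x_{pi(i)} + h_i with h_i in H.
    pi, hs = [0, 0], [0, 0]
    for i, x in enumerate(reps):
        y = (g + x) % n
        for j, xj in enumerate(reps):
            if (y - xj) % n in H:
                pi[i], hs[i] = j, (y - xj) % n
    return tuple(pi), tuple(hs)

def wreath_mul(u, v):
    # (pi1, h1)(pi2, h2) = (pi1 o pi2, (h1[pi2(i)] + h2[i])_i)
    (p1, h1), (p2, h2) = u, v
    return (tuple(p1[p2[i]] for i in range(2)),
            tuple((h1[p2[i]] + h2[i]) % n for i in range(2)))

images = {g: embed(g) for g in range(n)}
assert len(set(images.values())) == n                      # injective
assert all(images[(g1 + g2) % n] == wreath_mul(images[g1], images[g2])
           for g1 in range(n) for g2 in range(n))          # homomorphism
print("embedded Z/8 into (Z/4) wr S_2")
```

The wreath product here has order 4² · 2 = 32, and the image is an order-8 subgroup of it, as the theorem predicts.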

Exercise 1. Show that a simply transitive subgroup of contains a copy of as an index subgroup; in particular, there is a canonical embedding of into , and can be viewed as an -set.

Exercise 2. Show that any two simply transitive subgroups of are isomorphic simultaneously as groups and as -sets (that is, there is a bijection that is simultaneously a group isomorphism and an -set isomorphism) if and only if they are conjugate by an element of .

[UPDATE: Exercises corrected; thanks to Keith Conrad for some additional corrections and comments.]

Filed under: expository, math.GR Tagged: Cayley's theorem

This post will also serve as the latest (and probably last) of the Polymath8 threads (rolling over this previous post), to wrap up any remaining discussion about any aspect of this project.

Filed under: polymath Tagged: polymath8

Regardless of what commutative ring is in use here, we observe that Dirichlet convolution is commutative, associative, and bilinear over .
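These three laws are easy to check numerically. The sketch below implements Dirichlet convolution on arrays truncated at N (the truncation is exact, since the value of a convolution at n only involves values at divisors of n) and tests commutativity, associativity and bilinearity on random integer-valued arithmetic functions; all parameters are illustrative:

```python
import random

N = 60
def conv(f, g):  # Dirichlet convolution, arrays indexed 1..N (index 0 unused)
    h = [0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(1, N // d + 1):
            h[d * m] += f[d] * g[m]
    return h

random.seed(1)
f, g, h = [[0] + [random.randint(-3, 3) for _ in range(N)] for _ in range(3)]

assert conv(f, g) == conv(g, f)                       # commutativity
assert conv(f, conv(g, h)) == conv(conv(f, g), h)     # associativity
f2 = [0] + [2 * x for x in f[1:]]
assert conv(f2, g) == [2 * c for c in conv(f, g)]     # bilinearity in each argument
print("commutative, associative, bilinear: OK up to N =", N)
```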

An important class of arithmetic functions in analytic number theory are the multiplicative functions, that is to say the arithmetic functions such that and

for all coprime . A subclass of these functions are the completely multiplicative functions, in which the restriction that be coprime is dropped. Basic examples of completely multiplicative functions (in the classical setting ) include

- the Kronecker delta , defined by setting for and otherwise;
- the constant function and the linear function (which by abuse of notation we denote by );
- more generally monomials for any fixed complex number (in particular, the “Archimedean characters” for any fixed ), which by abuse of notation we denote by ;
- Dirichlet characters ;
- the Liouville function ;
- the indicator function of the -smooth numbers (numbers whose prime factors are all at most ), for some given ; and
- the indicator function of the -rough numbers (numbers whose prime factors are all greater than ), for some given .

Examples of multiplicative functions that are not completely multiplicative include

- the Möbius function ;
- the divisor function (also referred to as );
- more generally, the higher order divisor functions for ;
- the Euler totient function ;
- the number of roots of a given polynomial defined over ;
- more generally, the point counting function of a given algebraic variety defined over (closely tied to the Hasse-Weil zeta function of );
- the function that counts the number of representations of as the sum of two squares;
- more generally, the function that maps a natural number to the number of ideals in a given number field of absolute norm (closely tied to the Dedekind zeta function of ).

These multiplicative functions interact well with the multiplication and convolution operations: if are multiplicative, then so are and , and if is completely multiplicative, then we also have

Finally, the product of completely multiplicative functions is again completely multiplicative. On the other hand, the sum of two multiplicative functions will never be multiplicative (just look at what happens at ), and the convolution of two completely multiplicative functions will usually just be multiplicative rather than completely multiplicative.

The specific multiplicative functions listed above are also related to each other by various important identities, for instance

where is an arbitrary arithmetic function.
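As a numerical sanity check, the following sketch verifies three standard identities of this type, μ * 1 = δ, 1 * 1 = τ (the divisor function) and μ * id = φ (the Euler totient), directly from the definitions; these particular identities are standard choices and may not coincide exactly with the elided formulas above:

```python
import math

N = 200

def conv(f, g):  # Dirichlet convolution on arrays indexed 1..N
    h = [0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(1, N // d + 1):
            h[d * m] += f[d] * g[m]
    return h

def mobius(n):   # Moebius function by trial division
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

mu    = [0] + [mobius(n) for n in range(1, N + 1)]
one   = [0] + [1] * N
idn   = [0] + list(range(1, N + 1))
delta = [0, 1] + [0] * (N - 1)

assert conv(mu, one) == delta        # mu * 1 = delta
tau = conv(one, one)                 # 1 * 1 = tau (number of divisors)
phi = conv(mu, idn)                  # mu * id = phi (Euler totient)
assert tau[12] == 6 and phi[12] == 4
assert all(phi[n] == sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1)
           for n in range(1, N + 1))
print("mu*1 = delta, tau = 1*1, phi = mu*id verified up to", N)
```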

On the other hand, analytic number theory also is very interested in certain arithmetic functions that are *not* exactly multiplicative (and certainly not completely multiplicative). One particularly important such function is the von Mangoldt function . This function is certainly not multiplicative, but is clearly closely related to such functions via such identities as and , where is the natural logarithm function. The purpose of this post is to point out that functions such as the von Mangoldt function lie in a class closely related to multiplicative functions, which I will call the *derived multiplicative functions*. More precisely:

Definition 1. A *derived multiplicative function* is an arithmetic function that can be expressed as the formal derivative at the origin of a family of multiplicative functions parameterised by a formal parameter . Equivalently, is a derived multiplicative function if it is the coefficient of a multiplicative function in the extension of by a nilpotent infinitesimal ; in other words, there exists an arithmetic function such that the arithmetic function is multiplicative, or equivalently that is multiplicative and one has the Leibniz rule

More generally, for any , a *-derived multiplicative function* is an arithmetic function that can be expressed as the formal derivative at the origin of a family of multiplicative functions parameterised by formal parameters . Equivalently, is the coefficient of a multiplicative function in the extension of by nilpotent infinitesimals .

We define the notion of a -derived completely multiplicative function similarly by replacing “multiplicative” with “completely multiplicative” in the above discussion.

There are Leibniz rules similar to (2) but they are harder to state; for instance, a doubly derived multiplicative function comes with singly derived multiplicative functions and a multiplicative function such that

for all coprime .

One can then check that the von Mangoldt function is a derived multiplicative function, because is multiplicative in the ring with one infinitesimal . Similarly, the logarithm function is derived completely multiplicative because is completely multiplicative in . More generally, any additive function is derived multiplicative because it is the top order coefficient of .

Remark 1. One can also phrase these concepts in terms of the formal Dirichlet series associated to an arithmetic function . A function is multiplicative if admits a (formal) Euler product; is derived multiplicative if is the (formal) first derivative of an Euler product with respect to some parameter (not necessarily , although this is certainly an option); and so forth.

Using the definition of a -derived multiplicative function as the top order coefficient of a multiplicative function of a ring with infinitesimals, it is easy to see that the product or convolution of a -derived multiplicative function and a -derived multiplicative function is necessarily a -derived multiplicative function (again taking values in ). Thus, for instance, the higher-order von Mangoldt functions are -derived multiplicative functions, because is a -derived completely multiplicative function. More explicitly, is the top order coefficient of the completely multiplicative function , and is the top order coefficient of the multiplicative function , with both functions taking values in the ring of complex numbers with infinitesimals attached.

It then turns out that most (if not all) of the basic identities used by analytic number theorists concerning derived multiplicative functions, can in fact be viewed as coefficients of identities involving purely multiplicative functions, with the latter identities being provable primarily from multiplicative identities, such as (1). This phenomenon is analogous to the one in linear algebra discussed in this previous blog post, in which many of the trace identities used there are derivatives of determinant identities. For instance, the Leibniz rule

for any arithmetic functions can be viewed as the top order term in

in the ring with one infinitesimal , and then we see that the Leibniz rule is a special case (or a derivative) of (1), since is completely multiplicative. Similarly, the formulae

are top order terms of

and the variant formula is the top order term of

which can then be deduced from the previous identities by noting that the completely multiplicative function inverts multiplicatively, and also noting that annihilates . The Selberg symmetry formula

which plays a key role in the Erdős-Selberg elementary proof of the prime number theorem (as discussed in this previous blog post), is the top order term of the identity

involving the multiplicative functions , , , with two infinitesimals , and this identity can be proven while staying purely within the realm of multiplicative functions, by using the identities

and (1). Similarly for higher identities such as

which arise from expanding out using (1) and the above identities; we leave this as an exercise to the interested reader.
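In the same spirit, one of the simplest higher identities, Λ₂ = μ * log² = Λ·log + Λ*Λ (a standard identity consistent with the Selberg symmetry discussion, though it may not be exactly one of the elided formulas above), can be verified numerically:

```python
import math

def mobius(n):
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

def vonmangoldt(n):
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return 0.0

N = 150
for n in range(1, N + 1):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    # Lambda_2(n) = sum_{d | n} mu(d) log^2(n/d) ...
    lam2 = sum(mobius(d) * math.log(n // d) ** 2 for d in divs)
    # ... equals Lambda(n) log(n) + (Lambda * Lambda)(n)
    rhs = (vonmangoldt(n) * math.log(n)
           + sum(vonmangoldt(d) * vonmangoldt(n // d) for d in divs))
    assert abs(lam2 - rhs) < 1e-8
print("Lambda_2 = Lambda*log + Lambda*Lambda verified up to", N)
```

Note that both terms on the right-hand side are doubly derived, in line with the dimensional consistency discussed below.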

An analogous phenomenon arises for identities that are not purely multiplicative in nature due to the presence of truncations, such as the Vaughan identity

for any , where is the restriction of a multiplicative function to the natural numbers greater than , and similarly for , , . In this particular case, (4) is the top order coefficient of the identity

which can be easily derived from the identities and . Similarly for the Heath-Brown identity

valid for natural numbers up to , where and are arbitrary parameters and denotes the -fold convolution of , and discussed in this previous blog post; this is the top order coefficient of

and arises by first observing that

vanishes up to , and then expanding the left-hand side using the binomial formula and the identity .
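The Vaughan identity can likewise be verified numerically in its standard form: for n > V one has Lambda = mu_{<=U} * log - mu_{<=U} * Lambda_{<=V} * 1 + mu_{>U} * Lambda_{>V} * 1. The snippet below is a toy check of that standard form; the small truncation choice U = V = 5 and all helper names are ad hoc.

```python
import math

def factorize(n):
    fac, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            fac[d] = fac.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        fac[n] = 1
    return fac

def mobius(n):
    fac = factorize(n)
    return 0 if any(e > 1 for e in fac.values()) else (-1) ** len(fac)

def von_mangoldt(n):
    fac = factorize(n)
    return math.log(next(iter(fac))) if len(fac) == 1 else 0.0

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def dconv2(f, g, n):
    """Dirichlet convolution of two arithmetic functions at n."""
    return sum(f(d) * g(n // d) for d in divisors(n))

def dconv3(f, g, h, n):
    """Dirichlet convolution of three arithmetic functions at n."""
    return sum(f(a) * dconv2(g, h, n // a) for a in divisors(n))

U, V = 5, 5  # truncation parameters, kept small for the demonstration

mu_le = lambda n: mobius(n) if n <= U else 0          # mu restricted to n <= U
mu_gt = lambda n: mobius(n) if n > U else 0           # mu restricted to n > U
L_le = lambda n: von_mangoldt(n) if n <= V else 0.0   # Lambda restricted to n <= V
L_gt = lambda n: von_mangoldt(n) if n > V else 0.0    # Lambda restricted to n > V
one = lambda n: 1

# Vaughan identity, valid for n > V.
for n in range(V + 1, 201):
    rhs = (dconv2(mu_le, math.log, n)
           - dconv3(mu_le, L_le, one, n)
           + dconv3(mu_gt, L_gt, one, n))
    assert abs(rhs - von_mangoldt(n)) < 1e-9
```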

One consequence of this phenomenon is that identities involving derived multiplicative functions tend to have a dimensional consistency property: all terms in the identity have the same order of derivation in them. For instance, all the terms in the Selberg symmetry formula (3) are doubly derived functions, all the terms in the Vaughan identity (4) or the Heath-Brown identity (5) are singly derived functions, and so forth. One can then use dimensional analysis to help ensure that one has written down a key identity involving such functions correctly, much as is done in physics.

In addition to the dimensional analysis arising from the order of derivation, there is another dimensional analysis coming from the value of multiplicative functions at primes (which is more or less equivalent to the order of pole of the Dirichlet series at ). Let us say that a multiplicative function has a *pole of order * if one has on the average for primes , where we will be a bit vague as to what “on the average” means as it usually does not matter in applications. Thus for instance, or has a pole of order (a simple pole), or has a pole of order (i.e. neither a zero nor a pole), Dirichlet characters also have a pole of order (although this is slightly nontrivial, requiring Dirichlet’s theorem), has a pole of order (a simple zero), has a pole of order , and so forth. Note that the convolution of a multiplicative function with a pole of order with a multiplicative function with a pole of order will be a multiplicative function with a pole of order . If there is no oscillation in the primes (e.g. if for *all* primes , rather than on the average), it is also true that the product of a multiplicative function with a pole of order with a multiplicative function with a pole of order will be a multiplicative function with a pole of order . The situation is significantly different though in the presence of oscillation; for instance, if is a quadratic character then has a pole of order even though has a pole of order .

A -derived multiplicative function will then be said to have an *underived pole of order * if it is the top order coefficient of a multiplicative function with a pole of order ; in terms of Dirichlet series, this roughly means that the Dirichlet series has a pole of order at . For instance, the singly derived multiplicative function has an underived pole of order , because it is the top order coefficient of , which has a pole of order ; similarly has an underived pole of order , being the top order coefficient of . More generally, and have underived poles of order and respectively for any .

By taking top order coefficients, we then see that the convolution of a -derived multiplicative function with underived pole of order and a -derived multiplicative function with underived pole of order is a -derived multiplicative function with underived pole of order . If there is no oscillation in the primes, the product of these functions will similarly have an underived pole of order , for instance has an underived pole of order . We then have the dimensional consistency property that in any of the standard identities involving derived multiplicative functions, all terms not only have the same derived order, but also the same underived pole order. For instance, in (3), (4), (5) all terms have underived pole order (with any Mobius function terms being counterbalanced by a matching term of or ). This gives a second way to use dimensional analysis as a consistency check. For instance, any identity that involves a linear combination of and is suspect because the underived pole orders do not match (being and respectively), even though the derived orders match (both are ).
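The pole-order bookkeeping at primes can be spot-checked exactly in simple unoscillating cases: the constant function 1 takes the value 1 at primes (order 1), the divisor function d = 1 * 1 takes the value 2 (order 1 + 1 = 2), mu takes the value -1 (order -1), and mu * 1 is the Kronecker delta, vanishing at primes (order -1 + 1 = 0). A toy verification, with ad hoc helper names:

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def factorize(n):
    fac, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            fac[d] = fac.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        fac[n] = 1
    return fac

def mobius(n):
    fac = factorize(n)
    return 0 if any(e > 1 for e in fac.values()) else (-1) ** len(fac)

def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

primes = [p for p in range(2, 200) if is_prime(p)]

# d = 1 * 1 has pole order 1 + 1 = 2: the divisor function equals 2 at every prime.
assert all(num_divisors(p) == 2 for p in primes)
# mu has pole order -1: mu(p) = -1 at every prime.
assert all(mobius(p) == -1 for p in primes)
# mu * 1 has pole order (-1) + 1 = 0: it is the Kronecker delta, vanishing at primes.
assert all(sum(mobius(d) for d in range(1, p + 1) if p % d == 0) == 0 for p in primes)
```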

One caveat, though: this latter dimensional consistency breaks down for identities that involve infinitely many terms, such as Linnik’s identity

In this case, one can still rewrite things in terms of multiplicative functions as

so the former dimensional consistency is still maintained.
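Writing t for the indicator function of integers greater than 1, Linnik's identity in its standard form reads Lambda(n)/log n = sum over j >= 1 of ((-1)^(j-1)/j) t^{*j}(n); for each fixed n the terms vanish once 2^j > n, so the infinite sum is actually finite and can be verified term by term. A toy check (helper names ad hoc):

```python
import math

def factorize(n):
    fac, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            fac[d] = fac.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        fac[n] = 1
    return fac

def von_mangoldt(n):
    fac = factorize(n)
    return math.log(next(iter(fac))) if len(fac) == 1 else 0.0

def tpow(j, n):
    """j-fold Dirichlet convolution of t = 1_{n>1} at n, i.e. the number of
    ordered factorisations of n into j factors, each greater than 1."""
    if j == 1:
        return 1 if n > 1 else 0
    return sum(tpow(j - 1, n // d) for d in range(2, n + 1) if n % d == 0)

# Linnik's identity: Lambda(n)/log n = sum_{j>=1} (-1)^{j-1}/j * t^{*j}(n).
for n in range(2, 201):
    jmax = n.bit_length()  # t^{*j}(n) = 0 once 2^j > n
    series = sum((-1) ** (j - 1) / j * tpow(j, n) for j in range(1, jmax + 1))
    assert abs(series - von_mangoldt(n) / math.log(n)) < 1e-9
```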

I thank Andrew Granville, Kannan Soundararajan, and Emmanuel Kowalski for helpful conversations on these topics.

Filed under: expository, math.AC, math.NT Tagged: Dirichlet convolution, Heath-Brown identity, infinitesimals, multiplicative number theory, Vaughan identity

About a decade ago, Ben Green and I showed that the primes contained arbitrarily long arithmetic progressions: given any , one could find a progression with consisting entirely of primes. In fact we showed the same statement was true if the primes were replaced by any subset of the primes of positive relative density.
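For small lengths this statement can at least be sampled empirically with a brute-force search for prime arithmetic progressions. The snippet below is purely illustrative (the function name and search bounds are made up, and of course brute force says nothing about arbitrarily long progressions):

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def find_prime_ap(k, a_max=1000, r_max=1000):
    """Brute-force search for the first progression a, a+r, ..., a+(k-1)r
    consisting entirely of primes, scanning a then r in increasing order."""
    for a in range(2, a_max):
        for r in range(1, r_max):
            if all(is_prime(a + i * r) for i in range(k)):
                return a, r
    return None

a, r = find_prime_ap(5)
print([a + i * r for i in range(5)])  # -> [5, 11, 17, 23, 29]
```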

A little while later, Tamar Ziegler and I obtained the following generalisation: given any and any polynomials with , one could find a “polynomial progression” with consisting entirely of primes. Furthermore, we could make this progression somewhat “narrow” by taking (where denotes a quantity that goes to zero as goes to infinity). Again, the same statement also applies if the primes were replaced by a subset of positive relative density. My previous result with Ben corresponds to the linear case .

In this paper we were able to make the progressions a bit narrower still: given any and any polynomials with , one could find a “polynomial progression” with consisting entirely of primes, and such that , where depends only on and (in fact it depends only on and the degrees of ). The result is still true if the primes are replaced by a subset of positive density , but unfortunately in our arguments we must then let depend on . However, in the linear case , we were able to make independent of (although it is still somewhat large, of the order of ).

The polylogarithmic factor is somewhat necessary: using an upper bound sieve, one can easily construct a subset of the primes of density, say, , whose arithmetic progressions of length all obey the lower bound . On the other hand, the prime tuples conjecture predicts that if one works with the actual primes rather than dense subsets of the primes, then one should have infinitely many length arithmetic progressions of bounded width for any fixed . The case of this is precisely the celebrated theorem of Yitang Zhang that was the focus of the recently concluded Polymath8 project here. The higher case is conjecturally true, but appears to be out of reach of known methods. (Using the multidimensional Selberg sieve of Maynard, one can get primes inside an interval of length , but this is such a sparse set of primes that one would not expect to find even a progression of length three within such an interval.)

The argument in the previous paper was unable to obtain a polylogarithmic bound on the width of the progressions, due to the reliance on a certain technical “correlation condition” on a certain Selberg sieve weight . This correlation condition required one to control arbitrarily long correlations of , which was not compatible with a bounded value of (particularly if one wanted to keep independent of ).

However, thanks to recent advances in this area by Conlon, Fox, and Zhao (who introduced a very nice “densification” technique), it is now possible (in principle, at least) to delete this correlation condition from the arguments. Conlon-Fox-Zhao did this for my original theorem with Ben; and in the current paper we apply the densification method to our previous argument to similarly remove the correlation condition. This method does not fully eliminate the need to control arbitrarily long correlations, but allows most of the factors in such a long correlation to be *bounded*, rather than merely controlled by an unbounded weight such as . This turns out to be significantly easier to control, although in the non-linear case we still unfortunately had to make large compared to due to a certain “clearing denominators” step arising from the complicated nature of the Gowers-type uniformity norms that we were using to control polynomial averages. We believe though that this is an artefact of our method, and one should be able to prove our theorem with an that is uniform in .

Here is a simple instance of the densification trick in action. Suppose that one wishes to establish an estimate of the form

for some real-valued functions which are bounded in magnitude by a weight function , but which are not expected to be bounded; this average will naturally arise when trying to locate the pattern in a set such as the primes. Here I will be vague as to exactly what range the parameters are being averaged over. Suppose that the factor (say) has enough uniformity that one can already show a smallness bound

whenever are bounded functions. (One should think of as being like the indicator functions of “dense” sets, in contrast to which are like the normalised indicator functions of “sparse” sets). The bound (2) cannot be directly applied to control (1) because of the unbounded (or “sparse”) nature of and . However one can “densify” and as follows. Since is bounded in magnitude by , we can bound the left-hand side of (1) as

The weight function will be normalised so that , so by the Cauchy-Schwarz inequality it suffices to show that

The left-hand side expands as

Now, it turns out that after an enormous (but finite) number of applications of the Cauchy-Schwarz inequality to steadily eliminate the factors, as well as a certain “polynomial forms condition” hypothesis on , one can show that

(Because of the polynomial shifts, this requires a method known as “PET induction”, but let me skip over this point here.) In view of this estimate, we now just need to show that

Now we can reverse the previous steps. First, we collapse back to

One can bound by , which can be shown to be “bounded on average” in a suitable sense (e.g. bounded norm) via the aforementioned polynomial forms condition. Because of this and the Hölder inequality, the above estimate is equivalent to

By setting to be the signum of , this is equivalent to

This is halfway between (1) and (2); the sparsely supported function has been replaced by its “densification” , but we have not yet densified to . However, one can shift by and repeat the above arguments to achieve a similar densification of , at which point one has reduced (1) to (2).
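The basic Cauchy-Schwarz step driving this argument, trading a function dominated by a sparse weight nu (with average 1) for an average weighted by nu, can be illustrated numerically: if |f| <= nu pointwise and E[nu] = 1, then |E[f g]| <= sqrt(E[nu]) * sqrt(E[nu g^2]). The snippet below is a toy illustration with synthetic random data, not the actual argument of the paper:

```python
import math
import random

random.seed(0)
N = 10_000

# A sparse nonnegative weight nu: supported on about 1% of points with
# height about 100 there, then rescaled so that the average E[nu] = 1 exactly.
raw = [100.0 if random.random() < 0.01 else 0.0 for _ in range(N)]
mean_raw = sum(raw) / N
nu = [x / mean_raw for x in raw]

# f is "sparse": dominated pointwise by nu.  g is bounded in magnitude by 1.
f = [w * random.uniform(-1, 1) for w in nu]
g = [random.uniform(-1, 1) for _ in range(N)]

mean = lambda xs: sum(xs) / len(xs)

# Cauchy-Schwarz: |E[f g]| <= E[nu |g|] <= sqrt(E[nu]) * sqrt(E[nu g^2]).
lhs = abs(mean([fi * gi for fi, gi in zip(f, g)]))
rhs = math.sqrt(mean(nu)) * math.sqrt(mean([w * gi ** 2 for w, gi in zip(nu, g)]))
assert lhs <= rhs + 1e-12
```

The point, mirrored in the argument above, is that the right-hand side no longer sees f itself, only the weight nu against the bounded function g squared.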

Filed under: math.NT, paper Tagged: densification, Tamar Ziegler