As in all previous posts in this series, we adopt the following asymptotic notation: $x$ is a parameter going off to infinity, and all quantities may depend on $x$ unless explicitly declared to be "fixed". The asymptotic notation $O(), o(), \ll$ is then defined relative to this parameter. A quantity $q$ is said to be of polynomial size if one has $q = O(x^{O(1)})$, and bounded if $q = O(1)$. We also write $X \lessapprox Y$ for $X \ll x^{o(1)} Y$, and $X \sim Y$ for $X \ll Y \ll X$.
The purpose of this (rather technical) post is both to roll over the polymath8 research thread from this previous post, and also to record the details of the latest improvement to the Type I estimates (based on exploiting additional averaging and using Deligne's proof of the Weil conjectures), which leads to a slight improvement in the numerology.
In order to obtain this new Type I estimate, we need to strengthen the previously used properties of "dense divisibility" or "double dense divisibility" as follows.

Definition 1 (Multiple dense divisibility) Let $y \geq 1$. For each natural number $i \geq 0$, we define a notion of $i$-tuply $y$-dense divisibility recursively as follows:

- Every natural number $n$ is $0$-tuply $y$-densely divisible.
- If $i \geq 1$ and $n$ is a natural number, we say that $n$ is $i$-tuply $y$-densely divisible if, whenever $j, k \geq 0$ are natural numbers with $j + k = i-1$, and $1 \leq R \leq yn$, one can find a factorisation $n = qr$ with $y^{-1} R \leq r \leq R$ such that $q$ is $j$-tuply $y$-densely divisible and $r$ is $k$-tuply $y$-densely divisible.

We let ${\mathcal D}^{(i)}_y$ denote the set of $i$-tuply $y$-densely divisible numbers. We abbreviate "$1$-tuply densely divisible" as "densely divisible", "$2$-tuply densely divisible" as "doubly densely divisible", and so forth; we also abbreviate ${\mathcal D}^{(1)}_y$ as ${\mathcal D}_y$.
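The recursive structure of Definition 1 is easy to explore numerically. Here is a minimal brute-force sketch (not from the original post) for testing $i$-tuply $y$-dense divisibility of small numbers; it uses the observation that a suitable factorisation exists for every $1 \leq R \leq yn$ precisely when the intervals $[r, yr]$, as $r$ ranges over the admissible divisors, cover all of $[1, yn]$.

```python
def divisors(n):
    """All positive divisors of n, in increasing order."""
    return [d for d in range(1, n + 1) if n % d == 0]

def densely_divisible(n, i, y):
    """Test i-tuply y-dense divisibility as in Definition 1 (brute force)."""
    if i == 0:
        return True  # every natural number is 0-tuply y-densely divisible
    for j in range(i):  # all splits j + k = i - 1
        k = i - 1 - j
        good = [r for r in divisors(n)
                if densely_divisible(n // r, j, y)   # q = n/r is j-tuply d.d.
                and densely_divisible(r, k, y)]      # r is k-tuply d.d.
        # each good r serves all R in [r, y*r]; these must cover [1, y*n]
        covered = 1.0
        for r in good:
            if r > covered:
                return False
            covered = max(covered, y * r)
        if covered < y * n:
            return False
    return True

# doubly 2-densely divisible numbers up to 60 (illustrative only):
print([n for n in range(1, 61) if densely_divisible(n, 2, 2)])
```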
Given any finitely supported sequence $\alpha: {\mathbb N} \rightarrow {\mathbb C}$ and any primitive residue class $a\ (q)$, we define the discrepancy

$\displaystyle \Delta(\alpha; a\ (q)) := \sum_{n: n = a\ (q)} \alpha(n) - \frac{1}{\phi(q)} \sum_{n: (n,q)=1} \alpha(n). \ \ \ \ \ (1)$
We now recall the key concept of a coefficient sequence, with some slight tweaks in the definitions that are technically convenient for this post.
Definition 2 A coefficient sequence is a finitely supported sequence $\alpha: {\mathbb N} \rightarrow {\mathbb R}$ that obeys the bounds

$\displaystyle |\alpha(n)| \ll \tau^{O(1)}(n) \log^{O(1)}(x) \ \ \ \ \ (2)$

for all $n$, where $\tau$ is the divisor function.

- (i) A coefficient sequence $\alpha$ is said to be located at scale $N$ for some $N \geq 1$ if it is supported on an interval of the form $\{ cN \leq n \leq CN \}$ for some $1 \ll c < C \ll 1$.
- (ii) A coefficient sequence $\alpha$ located at scale $N$ for some $N \geq 1$ is said to obey the Siegel-Walfisz theorem if one has

$\displaystyle |\Delta(\alpha 1_{(\cdot,q)=1}; a\ (r))| \ll \tau(qr)^{O(1)} N \log^{-A} x \ \ \ \ \ (3)$

for any $q, r \geq 1$, any fixed $A$, and any primitive residue class $a\ (r)$.
- (iii) A coefficient sequence $\alpha$ is said to be smooth at scale $N$ for some $N > 0$ if it takes the form $\alpha(n) = \psi(n/N)$ for some smooth function $\psi: {\mathbb R} \rightarrow {\mathbb C}$ supported on an interval of size $O(1)$ and obeying the derivative bounds

$\displaystyle |\psi^{(j)}(t)| \ll \log^{O(1)} x$

for all fixed $j \geq 0$ (note that the implied constant in the $O()$ notation may depend on $j$).

Note that we allow sequences to be smooth at scale $N$ without being located at scale $N$; for instance, if one arbitrarily translates a sequence that is both smooth and located at scale $N$, it will remain smooth at this scale but may not necessarily be located at this scale any more. Note also that we allow the smoothness scale $N$ of a coefficient sequence to be less than one. This is to allow for the following convenient rescaling property: if $\alpha$ is smooth at scale $N$, $q \geq 1$, and $a$ is an integer, then $n \mapsto \alpha(qn+a)$ is smooth at scale $N/q$, even if $N/q$ is less than one.
Now we adapt the Type I estimate to the $i$-tuply densely divisible setting.

Definition 3 (Type I estimates) Let $0 < \varpi < 1/4$, $0 < \delta < 1/4 + \varpi$, and $0 < \sigma < 1/2$ be fixed quantities, and let $i \geq 1$ be a fixed natural number. We let $I$ be an arbitrary bounded subset of ${\mathbb R}$, let $P_I := \prod_{p \in I} p$, and let $a\ (P_I)$ be a primitive congruence class. We say that $Type^{(i)}_I[\varpi,\delta,\sigma]$ holds if, whenever $M, N$ are quantities with

$\displaystyle MN \sim x \ \ \ \ \ (4)$

and

$\displaystyle x^{1/2-\sigma} \lessapprox N \lessapprox x^{1/2-2\varpi-c} \ \ \ \ \ (5)$

for some fixed $c > 0$, and $\alpha, \beta$ are coefficient sequences located at scales $M, N$ respectively, with $\beta$ obeying a Siegel-Walfisz theorem, we have

$\displaystyle \sum_{q \in {\mathcal S}_I \cap {\mathcal D}^{(i)}_{x^\delta}: q < x^{1/2+2\varpi}} |\Delta(\alpha \star \beta; a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (6)$

for any fixed $A$. Here, as in previous posts, ${\mathcal S}_I$ denotes the square-free natural numbers whose prime factors lie in $I$.
The main theorem of this post is then

Theorem 4 (Improved Type I estimate) We have $Type^{(4)}_I[\varpi,\delta,\sigma]$ whenever

$\displaystyle \frac{160}{3} \varpi + 16 \delta + \frac{34}{9} \sigma < 1$

and

$\displaystyle 64 \varpi + 18 \delta + 2 \sigma < 1.$

In practice, the first condition here is dominant. Except for weakening double dense divisibility to quadruple dense divisibility, this improves upon the previous Type I estimate that established $Type^{(2)}_I[\varpi,\delta,\sigma]$ under the stricter hypothesis

$\displaystyle 56 \varpi + 16 \delta + 4 \sigma < 1.$

As in previous posts, Type I estimates (when combined with existing Type II and Type III estimates) lead to distribution results of Motohashi-Pintz-Zhang type. For any fixed $\varpi, \delta > 0$ and natural number $i \geq 1$, we let $MPZ^{(i)}[\varpi,\delta]$ denote the assertion that

$\displaystyle \sum_{q \in {\mathcal S}_I \cap {\mathcal D}^{(i)}_{x^\delta}: q < x^{1/2+2\varpi}} |\Delta(\Lambda 1_{[x,2x]}; a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (7)$

for any fixed $A$, any bounded $I$, and any primitive $a\ (P_I)$, where $\Lambda$ is the von Mangoldt function.

Corollary 5 We have $MPZ^{(4)}[\varpi,\delta]$ whenever

$\displaystyle \frac{600}{7} \varpi + \frac{180}{7} \delta < 1. \ \ \ \ \ (8)$

Proof: Setting $\sigma$ sufficiently close to $1/10$, we see from the above theorem that $Type^{(4)}_I[\varpi,\delta,\sigma]$ holds whenever

$\displaystyle \frac{600}{7} \varpi + \frac{180}{7} \delta < 1$

and

$\displaystyle 80 \varpi + \frac{45}{2} \delta < 1.$

The second condition is implied by the first and can be deleted.

From this previous post we know that $Type^{(4)}_{II}[\varpi,\delta]$ (which we define analogously to $Type'_{II}, Type''_{II}$ from previous sections) holds whenever

$\displaystyle 68 \varpi + 14 \delta < 1,$

while $Type^{(4)}_{III}[\varpi,\delta,\sigma]$ holds with $\sigma$ sufficiently close to $1/10$ whenever

$\displaystyle 70 \varpi + 5 \delta < 1.$

Again, these conditions are implied by (8). The claim then follows from the Heath-Brown identity and dyadic decomposition as in this previous post. $\Box$
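As a sanity check on this numerology, one can verify with exact rational arithmetic that substituting $\sigma = 1/10$ into the first condition of Theorem 4 produces precisely the constraint (8). (A quick check of the computation above, not part of the original post.)

```python
from fractions import Fraction as F

sigma = F(1, 10)
# Theorem 4: (160/3) w + 16 d + (34/9) s < 1.  At s = 1/10, divide through
# by the remaining room 1 - (34/9) s to normalise the right-hand side to 1.
room = 1 - F(34, 9) * sigma
print(F(160, 3) / room, F(16, 1) / room)   # 600/7 and 180/7: the constraint (8)
```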
As before, we let $DHL[k_0,2]$ denote the claim that given any admissible $k_0$-tuple ${\mathcal H}$, there are infinitely many translates of ${\mathcal H}$ that contain at least two primes.

Corollary 6 We have $DHL[632,2]$.

This follows from Corollary 5 and the Pintz sieve, as discussed below the fold. Combining this with the best known admissible $632$-tuples, we obtain that there are infinitely many prime gaps of size at most $4,680$, improving slightly over the previous record of $5,414$.
If $f: {\mathbb R}^d \rightarrow {\mathbb C}$ and $g: {\mathbb R}^d \rightarrow {\mathbb C}$ are two absolutely integrable functions on a Euclidean space ${\mathbb R}^d$, then the convolution $f*g: {\mathbb R}^d \rightarrow {\mathbb C}$ of the two functions is defined by the formula

$\displaystyle f*g(x) := \int_{{\mathbb R}^d} f(y) g(x-y)\ dy.$

A simple application of the Fubini-Tonelli theorem shows that the convolution is well-defined almost everywhere, and yields another absolutely integrable function. In the case that $f = 1_E$, $g = 1_F$ are indicator functions, the convolution simplifies to

$\displaystyle 1_E * 1_F(x) = m( E \cap (x-F) ) \ \ \ \ \ (1)$

where $m$ denotes Lebesgue measure. One can also define convolution on more general locally compact groups than ${\mathbb R}^d$, but we will restrict attention to the Euclidean case in this post.
The convolution $f*g$ can also be defined by duality by observing the identity

$\displaystyle \int_{{\mathbb R}^d} f*g(x)\, h(x)\ dx = \int_{{\mathbb R}^d} \int_{{\mathbb R}^d} f(y) g(z) h(y+z)\ dy\, dz$

for any bounded measurable function $h: {\mathbb R}^d \rightarrow {\mathbb C}$. Motivated by this observation, we may define the convolution $\mu * \nu$ of two finite Borel measures on ${\mathbb R}^d$ by the formula

$\displaystyle \int_{{\mathbb R}^d} h\ d(\mu*\nu) := \int_{{\mathbb R}^d} \int_{{\mathbb R}^d} h(y+z)\ d\mu(y)\, d\nu(z) \ \ \ \ \ (2)$

for any bounded (Borel) measurable function $h: {\mathbb R}^d \rightarrow {\mathbb C}$, or equivalently that

$\displaystyle \mu*\nu(E) = \int_{{\mathbb R}^d} \int_{{\mathbb R}^d} 1_E(y+z)\ d\mu(y)\, d\nu(z) \ \ \ \ \ (3)$

for all Borel measurable $E \subset {\mathbb R}^d$. (In another equivalent formulation: $\mu*\nu$ is the pushforward of the product measure $\mu \times \nu$ with respect to the addition map $+: {\mathbb R}^d \times {\mathbb R}^d \rightarrow {\mathbb R}^d$.) This can easily be verified to again be a finite Borel measure.
If $\mu$ and $\nu$ are probability measures, then the convolution $\mu*\nu$ also has a simple probabilistic interpretation: it is the law (i.e. probability distribution) of a random variable of the form $X+Y$, where $X, Y$ are independent random variables taking values in ${\mathbb R}^d$ with law $\mu, \nu$ respectively. Among other things, this interpretation makes it obvious that the support of $\mu*\nu$ is the sumset of the supports of $\mu$ and $\nu$ (at least when the supports are compact; the situation is more subtle otherwise), and that $\mu*\nu$ will also be a probability measure.
While the above discussion gives a perfectly rigorous definition of the convolution of two measures, it does not always give helpful guidance as to how to compute the convolution of two explicit measures (e.g. the convolution of two surface measures on explicit examples of surfaces, such as the sphere). In simple cases, one can work from first principles directly from the definition (2), (3), perhaps after some application of tools from several variable calculus, such as the change of variables formula. Another technique proceeds by regularisation, approximating the measures $\mu, \nu$ involved as the weak limit (or vague limit) of absolutely integrable functions $f_n, g_n$ (where we identify an absolutely integrable function $f$ with the associated absolutely continuous measure $f\ dm$), which then implies (assuming that the sequences $f_n, g_n$ are tight) that $\mu*\nu$ is the weak limit of the $f_n * g_n$. The latter convolutions $f_n * g_n$, being convolutions of functions rather than measures, can be computed (or at least estimated) by traditional integration techniques, at which point the only difficulty is to ensure that one has enough uniformity in $n$ to maintain control of the limit as $n \rightarrow \infty$.
A third method proceeds using the Fourier transform

$\displaystyle \hat \mu(\xi) := \int_{{\mathbb R}^d} e^{-2\pi i x \cdot \xi}\ d\mu(x)$

of $\mu$ (and of $\nu$). We have

$\displaystyle \widehat{\mu*\nu}(\xi) = \hat \mu(\xi)\, \hat \nu(\xi),$

and so one can (in principle, at least) compute $\mu*\nu$ by taking Fourier transforms, multiplying them together, and applying the (distributional) inverse Fourier transform. Heuristically, this formula implies that the Fourier transform of $\mu*\nu$ should be concentrated in the intersection of the frequency region where the Fourier transform of $\mu$ is supported, and the frequency region where the Fourier transform of $\nu$ is supported. As the regularity of a measure is related to decay of its Fourier transform, this also suggests that the convolution $\mu*\nu$ of two measures will typically be more regular than each of the two original measures, particularly if the Fourier transforms of $\mu$ and $\nu$ are concentrated in different regions of frequency space (which should happen if the measures $\mu, \nu$ are suitably "transverse"). In particular, it can happen that $\mu*\nu$ is an absolutely continuous measure, even if $\mu$ and $\nu$ are both singular measures.

Using intuition from microlocal analysis, we can combine our understanding of the spatial and frequency behaviour of convolution to the following heuristic: a convolution $\mu*\nu$ should be supported in regions of phase space of the form $(x+y, \xi)$, where $(x,\xi)$ lies in the region of phase space where $\mu$ is concentrated, and $(y,\xi)$ lies in the region of phase space where $\nu$ is concentrated. It is a challenge to make this intuition perfectly rigorous, as one has to somehow deal with the obstruction presented by the Heisenberg uncertainty principle, but it can be made rigorous in various asymptotic regimes, for instance using the machinery of wave front sets (which describes the high frequency limit of the phase space distribution).
Let us illustrate these three methods and the final heuristic with a simple example. Let $\mu$ be a singular measure on the horizontal unit interval $[0,1] \times \{0\}$, given by weighting Lebesgue measure on that interval by some test function $a: {\mathbb R} \rightarrow {\mathbb C}$ supported on $[0,1]$:

$\displaystyle \int_{{\mathbb R}^2} h\ d\mu := \int_0^1 h(x,0)\, a(x)\ dx.$

Similarly, let $\nu$ be a singular measure on the vertical unit interval $\{0\} \times [0,1]$ given by weighting Lebesgue measure on that interval by another test function $b: {\mathbb R} \rightarrow {\mathbb C}$ supported on $[0,1]$:

$\displaystyle \int_{{\mathbb R}^2} h\ d\nu := \int_0^1 h(0,y)\, b(y)\ dy.$

We can compute the convolution $\mu*\nu$ using (2), which in this case becomes

$\displaystyle \int_{{\mathbb R}^2} h\ d(\mu*\nu) = \int_0^1 \int_0^1 h( (x,0) + (0,y) )\, a(x) b(y)\ dx\, dy,$

and we thus conclude that $\mu*\nu$ is an absolutely continuous measure on ${\mathbb R}^2$ with density function $(x,y) \mapsto a(x) b(y)$:

$\displaystyle \mu*\nu = a(x) b(y)\ m. \ \ \ \ \ (4)$

In particular, $\mu*\nu$ is supported on the unit square $[0,1]^2$, which is of course the sumset of the two intervals $[0,1] \times \{0\}$ and $\{0\} \times [0,1]$.
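The probabilistic interpretation gives a quick numerical confirmation of (4) in the model case $a = b = 1_{[0,1]}$: sampling $X$ uniformly from the horizontal segment and $Y$ from the vertical segment, the sum $X+Y$ should be uniform on the unit square. (A small illustrative sketch, not from the original post.)

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
X = np.column_stack([rng.uniform(0, 1, N), np.zeros(N)])  # law mu (a = 1_[0,1])
Y = np.column_stack([np.zeros(N), rng.uniform(0, 1, N)])  # law nu (b = 1_[0,1])
S = X + Y                                                 # law mu * nu
H, _, _ = np.histogram2d(S[:, 0], S[:, 1], bins=10,
                         range=[[0, 1], [0, 1]], density=True)
print(H.min(), H.max())   # both close to 1: uniform density on the square
```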
We can arrive at the same conclusion from the regularisation method; the computations become lengthier, but more geometric in nature, and emphasise the role of transversality between the two segments supporting $\mu$ and $\nu$. One can view $\mu$ as the weak limit of the functions

$\displaystyle f_\epsilon(x,y) := \frac{1}{\epsilon} a(x) 1_{[0,\epsilon]}(y)$

as $\epsilon \rightarrow 0$ (where we continue to identify absolutely integrable functions with absolutely continuous measures, and of course we keep $\epsilon$ positive). We can similarly view $\nu$ as the weak limit of

$\displaystyle g_\epsilon(x,y) := \frac{1}{\epsilon} 1_{[0,\epsilon]}(x) b(y).$

Let us first look at the model case when $a = b = 1_{[0,1]}$, so that $f_\epsilon, g_\epsilon$ are renormalised indicator functions of thin rectangles:

$\displaystyle f_\epsilon = \frac{1}{\epsilon} 1_{[0,1] \times [0,\epsilon]}; \qquad g_\epsilon = \frac{1}{\epsilon} 1_{[0,\epsilon] \times [0,1]}.$

By (1), the convolution $f_\epsilon * g_\epsilon$ is then given by

$\displaystyle f_\epsilon * g_\epsilon(x,y) = \frac{1}{\epsilon^2} m( E_\epsilon ),$

where $E_\epsilon$ is the intersection of two rectangles:

$\displaystyle E_\epsilon := ( [0,1] \times [0,\epsilon] ) \cap ( (x,y) - [0,\epsilon] \times [0,1] ).$

When $(x,y)$ lies in the square $[\epsilon,1] \times [\epsilon,1]$, one readily sees (especially if one draws a picture) that $E_\epsilon$ consists of an $\epsilon \times \epsilon$ square and thus has measure $\epsilon^2$; conversely, if $(x,y)$ lies outside $[0,1+\epsilon] \times [0,1+\epsilon]$, $E_\epsilon$ is empty and thus has measure zero. In the intermediate region, $E_\epsilon$ will have some measure between $0$ and $\epsilon^2$. From this we see that $f_\epsilon * g_\epsilon$ converges pointwise almost everywhere to $1_{[0,1]^2}$ while also being dominated by an absolutely integrable function, and so converges weakly to $1_{[0,1]^2}\ m$, giving a special case of the formula (4).
Exercise 1 Use a similar method to verify (4) in the case that $a, b$ are continuous functions on $[0,1]$. (The argument also works for absolutely integrable $a, b$, but one needs to invoke the Lebesgue differentiation theorem to make it run smoothly.)
Now we compute with the Fourier-analytic method. The Fourier transform $\hat \mu(\xi,\eta)$ of $\mu$ is given by

$\displaystyle \hat \mu(\xi,\eta) = \int_{{\mathbb R}^2} e^{-2\pi i (x\xi + y\eta)}\ d\mu(x,y) = \int_0^1 a(x) e^{-2\pi i x \xi}\ dx = \hat a(\xi),$

where we abuse notation slightly by using $\hat a$ to refer to the one-dimensional Fourier transform of $a$. In particular, $\hat \mu$ decays in the $\xi$ direction (by the Riemann-Lebesgue lemma) but has no decay in the $\eta$ direction, which reflects the horizontally grained structure of $\mu$. Similarly we have

$\displaystyle \hat \nu(\xi,\eta) = \hat b(\eta),$

so that $\hat \nu$ decays in the $\eta$ direction. The convolution $\mu*\nu$ then has Fourier transform

$\displaystyle \widehat{\mu*\nu}(\xi,\eta) = \hat a(\xi) \hat b(\eta),$

which decays in both the $\xi$ and $\eta$ directions, and by inverting the Fourier transform we obtain (4).
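For the model case $a = b = 1_{[0,1]}$ this decay pattern can be seen concretely: $\hat a(\xi) = e^{-\pi i \xi} \mathrm{sinc}(\xi)$, so $|\hat\mu(\xi,\eta)|$ is independent of $\eta$ and decays like $1/|\xi|$. (A small numerical sketch, not from the original post.)

```python
import numpy as np

def mu_hat(xi, eta):
    """Fourier transform of mu when a = 1_[0,1]: depends on xi only."""
    return np.exp(-1j * np.pi * xi) * np.sinc(xi)  # sinc(x) = sin(pi x)/(pi x)

for xi, eta in [(0.3, 0.0), (0.3, 100.0), (40.3, 0.0)]:
    print(xi, eta, abs(mu_hat(xi, eta)))  # no decay in eta; decay in xi
```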
Exercise 2 Let $P_1 P_2$ and $Q_1 Q_2$ be two non-parallel line segments in the plane ${\mathbb R}^2$. If $\mu$ is the uniform probability measure on $P_1 P_2$ and $\nu$ is the uniform probability measure on $Q_1 Q_2$, show that $\mu*\nu$ is the uniform probability measure on the parallelogram with vertices $P_1+Q_1, P_1+Q_2, P_2+Q_1, P_2+Q_2$. What happens in the degenerate case when $P_1 P_2$ and $Q_1 Q_2$ are parallel?
Finally, we compare the above answers with what one gets from the microlocal analysis heuristic. The measure $\mu$ is supported on the horizontal interval $[0,1] \times \{0\}$, and the cotangent bundle at any point on this interval points in the vertical direction. Thus, the wave front set of $\mu$ should be supported on those points $((x_1,x_2),(\xi_1,\xi_2))$ in phase space with $0 \leq x_1 \leq 1$, $x_2 = 0$ and $\xi_1 = 0$. Similarly, the wave front set of $\nu$ should be supported at those points $((y_1,y_2),(\xi_1,\xi_2))$ with $y_1 = 0$, $0 \leq y_2 \leq 1$, and $\xi_2 = 0$. The convolution $\mu*\nu$ should then have wave front set supported on those points $((x_1+y_1,x_2+y_2),(\xi_1,\xi_2))$ with $0 \leq x_1 \leq 1$, $x_2 = 0$, $y_1 = 0$, $0 \leq y_2 \leq 1$, $\xi_1 = 0$, and $\xi_2 = 0$, i.e. it should be spatially supported on the unit square and have zero (rescaled) frequency, so the heuristic predicts a smooth function on the unit square, which is indeed what happens. (The situation is slightly more complicated in the non-smooth case $a = b = 1_{[0,1]}$, because $\mu$ and $\nu$ then acquire some additional singularities at the endpoints; namely, the wave front set of $\mu$ now also contains those points $((x_1,0),(\xi_1,\xi_2))$ with $x_1 \in \{0,1\}$, $x_2 = 0$, and $(\xi_1,\xi_2)$ arbitrary, and $\nu$ similarly contains those points $((0,y_2),(\xi_1,\xi_2))$ with $y_1 = 0$, $y_2 \in \{0,1\}$, and $(\xi_1,\xi_2)$ arbitrary. I'll leave it as an exercise to the reader to compute what this predicts for the wave front set of $\mu*\nu$, and how this compares with the actual wave front set.)
Exercise 3 Let $\sigma$ be the uniform measure on the unit sphere $S^{d-1}$ in ${\mathbb R}^d$ for some $d \geq 2$. Use as many of the above methods as possible to establish multiple proofs of the following fact: the convolution $\sigma * \sigma$ is an absolutely continuous multiple $f\ m$ of Lebesgue measure, with $f$ supported on the ball $B(0,2)$ of radius $2$ and obeying the bounds

$\displaystyle f(x) \ll \frac{1}{|x|}$

for $|x| \leq 1$ and

$\displaystyle f(x) \ll (2 - |x|)^{(d-3)/2}$

for $1 \leq |x| \leq 2$, where the implied constants are allowed to depend on the dimension $d$. (Hint: try the $d=2$ case first, which is particularly simple due to the fact that the addition map $+: S^1 \times S^1 \rightarrow {\mathbb R}^2$ is mostly a local diffeomorphism. The Fourier-based approach is instructive, but requires either asymptotics of Bessel functions or the principle of stationary phase.)
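For the $d=2$ case of Exercise 3, the density can be checked by Monte Carlo: the sum of two independent uniform points on the circle has radial law proportional to $1/\sqrt{4-r^2}$, so the planar density $f$ blows up like $1/r$ at the origin and like $(2-r)^{-1/2}$ at the boundary, consistent with the stated bounds. (A quick illustrative sketch under these assumptions, not from the original post.)

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
t1, t2 = rng.uniform(0, 2 * np.pi, (2, N))
r = np.hypot(np.cos(t1) + np.cos(t2), np.sin(t1) + np.sin(t2))  # |X + Y|
hist, edges = np.histogram(r, bins=50, range=(0, 2), density=True)
mid = (edges[:-1] + edges[1:]) / 2
f = hist / (2 * np.pi * mid)   # planar density of sigma * sigma at radius r
# roughly constant: confirms the 1/r and (2-r)^{-1/2} blowups
print(f[0] * mid[0], f[-1] * np.sqrt(2 - mid[-1]))
```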
[Note: the content of this post is standard number theoretic material that can be found in many textbooks (I am relying principally here on Iwaniec and Kowalski); I am not claiming any new progress on any version of the Riemann hypothesis here, but am simply arranging existing facts together.]
The Riemann hypothesis is arguably the most important and famous unsolved problem in number theory. It is usually phrased in terms of the Riemann zeta function $\zeta$, defined by

$\displaystyle \zeta(s) := \sum_{n=1}^\infty \frac{1}{n^s}$

for $\hbox{Re}(s) > 1$ and extended meromorphically to other values of $s$, and asserts that the only zeroes of $\zeta$ in the critical strip $\{ s: 0 \leq \hbox{Re}(s) \leq 1 \}$ lie on the critical line $\{ s: \hbox{Re}(s) = \frac{1}{2} \}$.
One of the main reasons that the Riemann hypothesis is so important to number theory is that the zeroes of the zeta function in the critical strip control the distribution of the primes. To see the connection, let us perform the following formal manipulations (ignoring for now the important analytic issues of convergence of series, interchanging sums, branches of the logarithm, etc., in order to focus on the intuition). The starting point is the fundamental theorem of arithmetic, which asserts that every natural number $n$ has a unique factorisation $n = p_1^{a_1} \ldots p_k^{a_k}$ into primes. Taking logarithms, we obtain the identity

$\displaystyle \log n = \sum_{d|n} \Lambda(d) \ \ \ \ \ (1)$

for any natural number $n$, where $\Lambda$ is the von Mangoldt function, thus $\Lambda(n) = \log p$ when $n$ is a power $p^j$ of a prime $p$ and zero otherwise. If we then perform a "Dirichlet-Fourier transform" by viewing both sides of (1) as coefficients of a Dirichlet series, we conclude that

$\displaystyle \sum_{n=1}^\infty \frac{\log n}{n^s} = \sum_{n=1}^\infty \frac{\sum_{d|n} \Lambda(d)}{n^s},$

formally at least. Writing $n = dm$, the right-hand side factors as

$\displaystyle \Big(\sum_{d=1}^\infty \frac{\Lambda(d)}{d^s}\Big) \Big(\sum_{m=1}^\infty \frac{1}{m^s}\Big) = \zeta(s) \sum_{d=1}^\infty \frac{\Lambda(d)}{d^s},$

whereas the left-hand side is (formally, at least) equal to $-\zeta'(s)$. We conclude the identity

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)}$

(formally, at least). If we integrate this, we are formally led to the identity

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n}\, \frac{1}{n^s} = \log \zeta(s),$

or equivalently to the exponential identity

$\displaystyle \zeta(s) = \exp\Big( \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n}\, \frac{1}{n^s} \Big) \ \ \ \ \ (2)$

which allows one to reconstruct the Riemann zeta function from the von Mangoldt function. (It is an instructive exercise in enumerative combinatorics to try to prove this identity directly, at the level of formal Dirichlet series, using the fundamental theorem of arithmetic of course.) Now, as $\zeta$ has a simple pole at $s = 1$ and zeroes at various places $s = \rho$ on the critical strip, we expect a Weierstrass factorisation which formally (ignoring normalisation issues) takes the form

$\displaystyle \zeta(s) \approx \frac{1}{s-1} \times \prod_\rho (s-\rho) \times \ldots$

(where we will be intentionally vague about what is hiding in the $\ldots$ terms) and so we expect an expansion of the form

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} \approx \frac{1}{s-1} - \sum_\rho \frac{1}{s-\rho} + \ldots. \ \ \ \ \ (3)$

Note that

$\displaystyle \frac{1}{s-\rho} = \int_1^\infty x^{\rho-1} x^{-s}\ dx$

and hence on integrating in $x$ we formally have

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} \approx \int_1^\infty \Big( 1 - \sum_\rho x^{\rho-1} + \ldots \Big) x^{-s}\ dx,$

and thus we have the heuristic approximation of the Dirichlet series of $\Lambda$ by an integral. Comparing this with (3), we are led to a heuristic form of the explicit formula

$\displaystyle \Lambda(n) \approx 1 - \sum_\rho n^{\rho-1}. \ \ \ \ \ (4)$

When trying to make this heuristic rigorous, it turns out (due to the rough nature of both sides of (4)) that one has to interpret the explicit formula in some suitably weak sense, for instance by testing (4) against the indicator function $1_{[0,x]}(n)$ to obtain the formula

$\displaystyle \sum_{n \leq x} \Lambda(n) \approx x - \sum_\rho \frac{x^\rho}{\rho}, \ \ \ \ \ (5)$

which can in fact be made into a rigorous statement after some truncation (the von Mangoldt explicit formula). From this formula we now see how helpful the Riemann hypothesis will be to control the distribution of the primes; indeed, if the Riemann hypothesis holds, so that $\hbox{Re}(\rho) = 1/2$ for all zeroes $\rho$, it is not difficult to use (a suitably rigorous version of) the explicit formula to conclude that

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x ) \ \ \ \ \ (6)$

as $x \rightarrow \infty$, giving a near-optimal "square root cancellation" for the sum $\sum_{n \leq x} \Lambda(n) - x$. Conversely, if one can somehow establish a bound of the form

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+\epsilon} )$

for any fixed $\epsilon > 0$, then the explicit formula can be used to then deduce that all zeroes $\rho$ of $\zeta$ have real part at most $1/2$, which leads to the following remarkable amplification phenomenon (analogous, as we will see later, to the tensor power trick): any bound of the form

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+o(1)} )$

can be automatically amplified to the stronger bound (6), with both bounds being equivalent to the Riemann hypothesis. Of course, the Riemann hypothesis for the Riemann zeta function remains open; but partial progress on this hypothesis (in the form of zero-free regions for the zeta function) leads to partial versions of the asymptotic (6). For instance, it is known that there are no zeroes of the zeta function on the line $\hbox{Re}(s) = 1$, and this can be shown by some analysis (either complex analysis or Fourier analysis) to be equivalent to the prime number theorem

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + o(x);$

see e.g. this previous blog post for more discussion.
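It is instructive to watch the square-root cancellation in (6) numerically, even at small heights. The following minimal sketch (not from the original post) computes $\sum_{n \leq x} \Lambda(n) - x$ via a smallest-prime-factor sieve and compares it with $\sqrt{x}$:

```python
import math

X = 10**6
spf = list(range(X + 1))                 # smallest prime factor sieve
for p in range(2, int(X**0.5) + 1):
    if spf[p] == p:
        for m in range(p * p, X + 1, p):
            if spf[m] == m:
                spf[m] = p

def mangoldt(n):
    """Lambda(n) = log p if n = p^j for a prime p, and 0 otherwise."""
    if n < 2:
        return 0.0
    p = spf[n]
    while n % p == 0:
        n //= p
    return math.log(p) if n == 1 else 0.0

psi = 0.0
for n in range(1, X + 1):
    psi += mangoldt(n)
    if n in (10**4, 10**5, 10**6):
        print(n, psi - n, (psi - n) / math.sqrt(n))  # normalised error stays small
```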
The main engine powering the above observations was the fundamental theorem of arithmetic, and so one can expect to establish similar assertions in other contexts where some version of the fundamental theorem of arithmetic is available. One of the simplest such variants is to continue working on the natural numbers, but "twist" them by a Dirichlet character $\chi$. The analogue of the Riemann zeta function is then the Dirichlet $L$-function

$\displaystyle L(s,\chi) := \sum_{n=1}^\infty \frac{\chi(n)}{n^s}.$

Since $\chi$ is completely multiplicative, the equation (1), which encoded the fundamental theorem of arithmetic, can be twisted by $\chi$ to obtain

$\displaystyle \chi(n) \log n = \sum_{d|n} \chi(d) \Lambda(d)\, \chi(n/d) \ \ \ \ \ (7)$

and essentially the same manipulations as before eventually lead to the exponential identity

$\displaystyle L(s,\chi) = \exp\Big( \sum_{n=1}^\infty \frac{\chi(n) \Lambda(n)}{\log n}\, \frac{1}{n^s} \Big), \ \ \ \ \ (8)$

which is a twisted version of (2), as well as a twisted explicit formula, which heuristically takes the form

$\displaystyle \Lambda(n) \chi(n) \approx - \sum_\rho n^{\rho-1} \ \ \ \ \ (9)$

for non-principal $\chi$, where $\rho$ now ranges over the zeroes of $L(\cdot,\chi)$ in the critical strip, rather than the zeroes of $\zeta$; a more accurate formulation, following (5), would be

$\displaystyle \sum_{n \leq x} \Lambda(n) \chi(n) \approx - \sum_\rho \frac{x^\rho}{\rho}. \ \ \ \ \ (10)$

(See e.g. Davenport's book for a more rigorous discussion which emphasises the analogy between the Riemann zeta function and the Dirichlet $L$-function.) If we assume the generalised Riemann hypothesis, which asserts that all zeroes of $L(\cdot,\chi)$ in the critical strip also lie on the critical line, then we obtain the bound

$\displaystyle |\sum_{n \leq x} \Lambda(n) \chi(n)| \ll x^{1/2} \log^2 x$

for any non-principal Dirichlet character $\chi$, again demonstrating a near-optimal square root cancellation for this sum. Again, we have the amplification property that the above bound is implied by the apparently weaker bound

$\displaystyle |\sum_{n \leq x} \Lambda(n) \chi(n)| \ll x^{1/2+o(1)}$

(where $o(1)$ denotes a quantity that goes to zero as $x \rightarrow \infty$ for any fixed $\chi$).
Next, one can consider other number systems than the natural numbers ${\mathbb N}$ and integers ${\mathbb Z}$. For instance, one can replace the integers ${\mathbb Z}$ with rings ${\mathcal O}_K$ of integers in other number fields $K$ (i.e. finite extensions of ${\mathbb Q}$), such as the quadratic extensions $K = {\mathbb Q}[\sqrt{D}]$ of the rationals for various square-free integers $D$, in which case the ring of integers would be the ring of quadratic integers ${\mathcal O}_K = {\mathbb Z}[\omega]$ for a suitable generator $\omega$ (it turns out that one can take $\omega = \sqrt{D}$ if $D = 2, 3\ \hbox{mod}\ 4$, and $\omega = \frac{1+\sqrt{D}}{2}$ if $D = 1\ \hbox{mod}\ 4$). Here, it is not immediately obvious what the analogue of the natural numbers ${\mathbb N}$ is in this setting, since rings such as ${\mathbb Z}[\omega]$ do not come with a natural ordering. However, we can adopt an algebraic viewpoint to see the correct generalisation, observing that every natural number $n$ generates a principal ideal $(n)$ in the integers, and conversely every non-trivial ideal ${\mathfrak n}$ in the integers is associated to precisely one natural number $n$ in this fashion, namely the norm $N({\mathfrak n}) := |{\mathbb Z} / {\mathfrak n}|$ of that ideal. So one can identify the natural numbers with the ideals of ${\mathbb Z}$. Furthermore, with this identification, the prime numbers correspond to the prime ideals, since if $p$ is prime, and $a, b$ are integers, then $ab \in (p)$ if and only if one of $a \in (p)$ or $b \in (p)$ is true. Finally, even in number systems (such as ${\mathbb Z}[\sqrt{-5}]$) in which the classical version of the fundamental theorem of arithmetic fails (e.g. $6 = 2 \times 3 = (1-\sqrt{-5})(1+\sqrt{-5})$), we have the fundamental theorem of arithmetic for ideals: every ideal in a Dedekind domain (which includes the ring ${\mathcal O}_K$ of integers in a number field as a key example) is uniquely representable (up to permutation) as the product of a finite number of prime ideals (although these ideals might not necessarily be principal). For instance, in ${\mathbb Z}[\sqrt{-5}]$, the principal ideal $(6)$ factors as the product of four prime (but non-principal) ideals $(2, 1+\sqrt{-5})$, $(2, 1-\sqrt{-5})$, $(3, 1+\sqrt{-5})$, $(3, 1-\sqrt{-5})$. (Note that the first two ideals $(2,1+\sqrt{-5}), (2,1-\sqrt{-5})$ are actually equal to each other.) Because we still have the fundamental theorem of arithmetic, we can develop analogues of the previous observations relating the Riemann hypothesis to the distribution of primes. The analogue of the Riemann zeta function is now the Dedekind zeta function

$\displaystyle \zeta_K(s) := \sum_{{\mathfrak n}} \frac{1}{N({\mathfrak n})^s},$
where the summation is over all non-trivial ideals ${\mathfrak n}$ in ${\mathcal O}_K$. One can also define a von Mangoldt function $\Lambda_K({\mathfrak n})$, defined as $\log N({\mathfrak p})$ when ${\mathfrak n}$ is a power of a prime ideal ${\mathfrak p}$, and zero otherwise; then the fundamental theorem of arithmetic for ideals can be encoded in an analogue of (1) (or (7)),

$\displaystyle \log N({\mathfrak n}) = \sum_{{\mathfrak d} | {\mathfrak n}} \Lambda_K({\mathfrak d}), \ \ \ \ \ (11)$

which leads as before to an exponential identity

$\displaystyle \zeta_K(s) = \exp\Big( \sum_{{\mathfrak n}} \frac{\Lambda_K({\mathfrak n})}{\log N({\mathfrak n})}\, \frac{1}{N({\mathfrak n})^s} \Big) \ \ \ \ \ (12)$

and an explicit formula of the heuristic form

$\displaystyle \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) \approx x - \sum_\rho \frac{x^\rho}{\rho} \ \ \ \ \ (13)$

in analogy with (5) or (10). Again, a suitable Riemann hypothesis for the Dedekind zeta function leads to good asymptotics for the distribution of prime ideals, giving a bound of the form

$\displaystyle \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) = x + O( \sqrt{x} \log^2( D x^d ) ),$

where $D$ is the conductor of $\zeta_K$ (which, in the case of number fields, is the absolute value of the discriminant of ${\mathcal O}_K$) and $d = [K:{\mathbb Q}]$ is the degree of the extension of $K$ over ${\mathbb Q}$. As before, we have the amplification phenomenon that the above near-optimal square root cancellation bound is implied by the weaker bound

$\displaystyle \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) = x + O( x^{1/2+o(1)} ),$

where $o(1)$ denotes a quantity that goes to zero as $x \rightarrow \infty$ (holding $K$ fixed). See e.g. Chapter 5 of Iwaniec-Kowalski for details.

As was the case with the Dirichlet $L$-functions, one can twist the Dedekind zeta function example by characters, in this case the Hecke characters; we will not do this here, but see e.g. Section 3 of Iwaniec-Kowalski for details.
Very analogous considerations hold if we move from number fields to function fields. The simplest case is the function field associated to the affine line ${\mathbb A}^1$ and a finite field ${\mathbb F} = {\mathbb F}_q$ of some order $q$. The polynomial functions on the affine line ${\mathbb A}^1/{\mathbb F}$ are just the usual polynomial ring ${\mathbb F}[t]$, which then plays the role of the integers ${\mathbb Z}$ (or ${\mathcal O}_K$) in previous examples. This ring happens to be a unique factorisation domain, so the situation is closely analogous to the classical setting of the Riemann zeta function. The analogue of the natural numbers are the monic polynomials (since every non-trivial principal ideal is generated by precisely one monic polynomial), and the analogue of the prime numbers are the irreducible monic polynomials. The norm $N(f)$ of a polynomial is the order of ${\mathbb F}[t]/(f)$, which can be computed explicitly as

$\displaystyle N(f) = q^{\deg f}.$

Because of this, we will normalise things slightly differently here and use the degree $\deg f$ in place of the norm $N(f)$ in what follows. The (local) zeta function $\zeta_{{\mathbb A}^1/{\mathbb F}}(s)$ is then defined as

$\displaystyle \zeta_{{\mathbb A}^1/{\mathbb F}}(s) := \sum_f \frac{1}{N(f)^s},$

where $f$ ranges over monic polynomials, and the von Mangoldt function $\Lambda_{{\mathbb A}^1/{\mathbb F}}(f)$ is defined to equal $\deg g$ when $f$ is a power of a monic irreducible polynomial $g$, and zero otherwise. Note that because $N(f)$ is always a power of $q$, the zeta function here is in fact periodic with period $\frac{2\pi i}{\log q}$. Because of this, it is customary to make a change of variables $T := q^{-s}$, so that

$\displaystyle \zeta_{{\mathbb A}^1/{\mathbb F}}(s) = Z_{{\mathbb A}^1/{\mathbb F}}(T),$

and $Z_{{\mathbb A}^1/{\mathbb F}}$ is the renormalised zeta function

$\displaystyle Z_{{\mathbb A}^1/{\mathbb F}}(T) := \sum_f T^{\deg f}. \ \ \ \ \ (14)$

We have the analogue of (1) (or (7) or (11)):

$\displaystyle \deg f = \sum_{g | f;\ g \hbox{ monic}} \Lambda_{{\mathbb A}^1/{\mathbb F}}(g), \ \ \ \ \ (15)$

which leads as before to an exponential identity

$\displaystyle Z_{{\mathbb A}^1/{\mathbb F}}(T) = \exp\Big( \sum_f \frac{\Lambda_{{\mathbb A}^1/{\mathbb F}}(f)}{\deg f} T^{\deg f} \Big) \ \ \ \ \ (16)$

analogous to (2), (8), or (12). It also leads to the explicit formula

$\displaystyle \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx 1 - \sum_\rho N(f)^{\rho-1},$

where $\rho$ are the zeroes of the original zeta function $\zeta_{{\mathbb A}^1/{\mathbb F}}(s)$ (counting each residue class of the period $\frac{2\pi i}{\log q}$ just once), or equivalently

$\displaystyle \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx 1 - \sum_\alpha (\alpha/q)^{\deg f},$

where $\alpha$ are the reciprocals of the roots of the normalised zeta function $Z_{{\mathbb A}^1/{\mathbb F}}(T)$ (or to put it another way, $1 - \alpha T$ are the factors of this zeta function). Again, to make proper sense of this heuristic we need to sum, obtaining

$\displaystyle \sum_{\deg f = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx q^n - \sum_\alpha \alpha^n. \ \ \ \ \ (17)$

As it turns out, in the function field setting, the zeta functions are always rational (this is part of the Weil conjectures), and the above heuristic formula is basically exact up to a constant factor, thus

$\displaystyle \sum_{\deg f = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = q^n - \sum_\alpha \alpha^n + c \ \ \ \ \ (18)$

for an explicit integer $c$ (independent of $n$) arising from any potential pole of $Z_{{\mathbb A}^1/{\mathbb F}}$ at $T=1$. In the case of the affine line ${\mathbb A}^1$, the situation is particularly simple, because the zeta function $Z_{{\mathbb A}^1/{\mathbb F}}(T)$ is easy to compute. Indeed, since there are exactly $q^n$ monic polynomials of a given degree $n$, we see from (14) that

$\displaystyle Z_{{\mathbb A}^1/{\mathbb F}}(T) = \sum_{n=0}^\infty q^n T^n = \frac{1}{1-qT},$

so in fact there are no zeroes whatsoever, and no pole at $T=1$ either, so we have an exact prime number theorem for this function field:

$\displaystyle \sum_{\deg f = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = q^n.$

Among other things, this tells us that the number of irreducible monic polynomials of degree $n$ is $\frac{q^n}{n} + O( q^{n/2} )$.
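Both the exact prime number theorem for ${\mathbb F}_q[t]$ and the irreducible count are easy to verify in exact arithmetic, using the Mobius-inversion formula $\pi_q(n) = \frac{1}{n}\sum_{d|n}\mu(d) q^{n/d}$ for the number of irreducible monic polynomials of degree $n$. (A minimal sketch, not from the original post.)

```python
def mobius(n):
    """Mobius function via trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0       # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def pi_q(q, n):
    """Number of irreducible monic polynomials of degree n over F_q."""
    return sum(mobius(d) * q**(n // d) for d in range(1, n + 1) if n % d == 0) // n

q = 5
for n in range(1, 9):
    # sum over d | n of d * pi_q(d): the von Mangoldt sum, which must equal q^n
    assert sum(d * pi_q(q, d) for d in range(1, n + 1) if n % d == 0) == q**n
    print(n, pi_q(q, n))
```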
We can transition from an algebraic perspective to a geometric one, by viewing a given monic polynomial $f$ through its roots, which are a finite set of points in the algebraic closure $\overline{{\mathbb F}_q}$ of the finite field ${\mathbb F}_q$ (or more suggestively, as points on the affine line ${\mathbb A}^1( \overline{{\mathbb F}_q} )$). The number of such points (counting multiplicity) is the degree of $f$, and from the factor theorem, the set of points determines the monic polynomial $f$ (or, if one removes the monic hypothesis, it determines the polynomial $f$ projectively). These points have an action of the Galois group $\hbox{Gal}( \overline{{\mathbb F}_q} / {\mathbb F}_q )$. It is a classical fact that this Galois group is in fact a cyclic group generated by a single element, the (geometric) Frobenius map $\hbox{Frob}: x \mapsto x^q$, which fixes the elements of the original finite field ${\mathbb F}_q$ but permutes the other elements of $\overline{{\mathbb F}_q}$. Thus the roots of a given polynomial $f$ split into orbits of the Frobenius map. One can check that the roots consist of a single such orbit (counting multiplicity) if and only if $f$ is irreducible; thus the fundamental theorem of arithmetic can be viewed geometrically as the orbit decomposition of any Frobenius-invariant finite set of points in the affine line.
Now consider the degree $n$ finite field extension ${\mathbb F}_{q^n}$ of ${\mathbb F}_q$ (it is a classical fact that there is exactly one such extension up to isomorphism for each $n$); this is a subfield of $\overline{{\mathbb F}_q}$ of order $q^n$. (Here we are performing a standard abuse of notation by overloading the subscripts in the ${\mathbb F}$ notation; thus ${\mathbb F}_{q^n}$ can denote either the field of order $q^n$, or the extension of ${\mathbb F}_q$ of degree $n$, and these denote the same field whichever subscript convention one adopts on either side. We hope this overloading will not cause confusion.) Each point $x$ in this extension (or, more suggestively, the affine line ${\mathbb A}^1( {\mathbb F}_{q^n} )$ over this extension) has a minimal polynomial - an irreducible monic polynomial whose roots consist of the Frobenius orbit $x, x^q, x^{q^2}, \ldots$ of $x$. Since the Frobenius action is periodic of period $n$ on ${\mathbb F}_{q^n}$, the degree of this minimal polynomial must divide $n$. Conversely, every monic irreducible polynomial of degree $d$ dividing $n$ produces $d$ distinct zeroes that lie in ${\mathbb F}_{q^d}$ (here we use the classical fact that finite fields are perfect) and hence in ${\mathbb F}_{q^n}$. We have thus partitioned ${\mathbb A}^1( {\mathbb F}_{q^n} )$ into Frobenius orbits (also known as closed points), with each monic irreducible polynomial $f$ of degree $d$ dividing $n$ contributing an orbit of size $d$. From this we conclude a geometric interpretation of the left-hand side of (18):

$\displaystyle \sum_{\deg f = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = \sum_{x \in {\mathbb A}^1({\mathbb F}_{q^n})} 1. \ \ \ \ \ (19)$

The identity (18) thus is equivalent to the thoroughly boring fact that the number of ${\mathbb F}_{q^n}$-points on the affine line ${\mathbb A}^1$ is equal to $q^n$.
However, things become much more interesting if one then replaces the affine line ${\mathbb A}^1$ by a more general (geometrically) irreducible curve $C$ defined over ${\mathbb F}_q$; for instance one could take $C$ to be an elliptic curve

$\displaystyle E = \{ (x,y): y^2 = x^3 + ax + b \} \ \ \ \ \ (20)$

for some suitable $a, b \in {\mathbb F}_q$, although the discussion here applies to more general curves as well (though to avoid some minor technicalities, we will assume that the curve is projective with a finite number of ${\mathbb F}_q$-rational points removed). The analogue of ${\mathbb F}_q[t]$ is then the coordinate ring of $C$ (for instance, in the case of the elliptic curve (20) it would be ${\mathbb F}_q[x,y] / (y^2 - x^3 - ax - b)$), with polynomials in this ring producing a set of roots in the curve $C( \overline{{\mathbb F}_q} )$ that is again invariant with respect to the Frobenius action (acting on the $x$ and $y$ coordinates separately). In general, we do not expect unique factorisation in this coordinate ring (this is basically because Bezout's theorem suggests that the zero set of a polynomial on $C$ will almost never consist of a single (closed) point). Of course, we can use the algebraic formalism of ideals to get around this, setting up a zeta function

$\displaystyle \zeta_{C/{\mathbb F}}(s) := \sum_{{\mathfrak f}} \frac{1}{N({\mathfrak f})^s}$

and a von Mangoldt function $\Lambda_{C/{\mathbb F}}({\mathfrak f})$ as before, where ${\mathfrak f}$ would now run over the non-trivial ideals of the coordinate ring. However, it is more instructive to use the geometric viewpoint, using the ideal-variety dictionary from algebraic geometry to convert algebraic objects involving ideals into geometric objects involving varieties. In this dictionary, a non-trivial ideal would correspond to a proper subvariety (or more precisely, a subscheme, but let us ignore the distinction between varieties and schemes here) of the curve $C$; as the curve is irreducible and one-dimensional, this subvariety must be zero-dimensional and is thus a (multi-)set of points $\{x_1,\ldots,x_k\}$ in $C$, or equivalently an effective divisor $[x_1] + \ldots + [x_k]$ of $C$; this generalises the concept of the set of roots of a polynomial (which corresponds to the case of a principal ideal). Furthermore, this divisor has to be rational in the sense that it is Frobenius-invariant. The prime ideals correspond to those divisors (or sets of points) which are irreducible, that is to say the individual Frobenius orbits, also known as closed points of $C$. With this dictionary, the zeta function becomes

$\displaystyle \zeta_{C/{\mathbb F}}(s) = \sum_D \frac{1}{q^{s \deg D}},$

where the sum is over effective rational divisors $D$ of $C$ (with $\deg D$ being the degree of an effective divisor $D$), or equivalently

$\displaystyle Z_{C/{\mathbb F}}(T) = \sum_D T^{\deg D}.$

The analogue of (19), which gives a geometric interpretation to sums of the von Mangoldt function, becomes

$\displaystyle \sum_{\deg {\mathfrak f} = n} \Lambda_{C/{\mathbb F}}({\mathfrak f}) = \sum_{x \in C({\mathbb F}_{q^n})} 1; \ \ \ \ \ (21)$

thus this sum is simply counting the number of ${\mathbb F}_{q^n}$-points of $C$. The analogue of the exponential identity (16) (or (2), (8), or (12)) is then

$\displaystyle Z_{C/{\mathbb F}}(T) = \exp\Big( \sum_{{\mathfrak f}} \frac{\Lambda_{C/{\mathbb F}}({\mathfrak f})}{\deg {\mathfrak f}} T^{\deg {\mathfrak f}} \Big)$

and the analogue of the explicit formula (17) (or (5), (10) or (13)) is

$\displaystyle \sum_{x \in C({\mathbb F}_{q^n})} 1 = q^n - \sum_\alpha \alpha^n + c, \ \ \ \ \ (22)$

where $\alpha$ runs over the (reciprocal) zeroes of $Z_{C/{\mathbb F}}$ (counting multiplicity), and $c$ is an integer independent of $n$. (As it turns out, $c$ equals $1$ when $C$ is a projective curve, and more generally equals $1-k$ when $C$ is a projective curve with $k$ rational points deleted.)

To evaluate $Z_{C/{\mathbb F}}(T)$, one needs to count the number of effective divisors of a given degree on the curve $C$. Fortunately, there is a tool that is particularly well-designed for this task, namely the Riemann-Roch theorem. By using this theorem, one can show (when $C$ is projective) that $Z_{C/{\mathbb F}}(T)$ is in fact a rational function of $T$, with a finite number of zeroes, and a simple pole at both $T=1$ and $T=1/q$, with similar results when one deletes some rational points from $C$; see e.g. Chapter 11 of Iwaniec-Kowalski for details. Thus the sum over $\alpha$ in (22) is finite. For instance, for the affine elliptic curve (20) (which is a projective curve with one point removed), it turns out that we have

$\displaystyle \sum_{x \in E({\mathbb F}_{q^n})} 1 = q^n - \alpha^n - \beta^n$

for two complex numbers $\alpha, \beta$ depending on $E$ and $q$.
The Riemann hypothesis for (untwisted) curves - which is the deepest and most difficult aspect of the Weil conjectures for these curves - asserts that the zeroes of $\zeta_{C/{\mathbb F}}$ lie on the critical line, or equivalently that all the roots $\alpha$ in (22) have modulus $\sqrt{q}$, so that (22) then gives the asymptotic

$\displaystyle \sum_{x \in C({\mathbb F}_{q^n})} 1 = q^n + O( q^{n/2} ) \ \ \ \ \ (23)$

where the implied constant depends only on the genus of $C$ (and on the number of points removed from $C$). For instance, for elliptic curves we have the Hasse bound

$\displaystyle |\sum_{x \in E({\mathbb F}_{q^n})} 1 - q^n| \leq 2 \sqrt{q^n}.$
As before, we have an important amplification phenomenon: if we can establish a weaker estimate, e.g.

$\displaystyle \sum_{x \in C({\mathbb F}_{q^n})} 1 = q^n + O( q^{n(1/2+\epsilon)} ) \ \ \ \ \ (24)$

for some fixed $\epsilon > 0$, then we can automatically deduce the stronger bound (23). This amplification is not a mere curiosity; most of the proofs of the Riemann hypothesis for curves proceed via this fact. For instance, by using the elementary method of Stepanov to bound points in curves (discussed for instance in this previous post), one can establish the preliminary bound (24) for $n$ large, which then amplifies to the optimal bound (23) for all $n$ (and in particular for $n=1$). Again, see Chapter 11 of Iwaniec-Kowalski for details. The ability to convert a bound with $\epsilon$-dependent losses over the optimal bound (such as (24)) into an essentially optimal bound with no $\epsilon$-dependent losses (such as (23)) is important in analytic number theory, since in many applications (e.g. in those arising from sieve theory) one wishes to sum over large ranges of $q$.
Much as the Riemann zeta function can be twisted by a Dirichlet character to form a Dirichlet $L$-function, one can twist the zeta function on curves by various additive and multiplicative characters. For instance, suppose one has an affine plane curve $C \subset {\mathbb A}^2$ and an additive character $\psi: {\mathbb F}_q \rightarrow {\mathbb C}^\times$, thus $\psi(x+y) = \psi(x) \psi(y)$ for all $x, y \in {\mathbb F}_q$. Given a rational effective divisor $D = [(x_1,y_1)] + \ldots + [(x_k,y_k)]$, the sum $x_1 + \ldots + x_k$ is Frobenius-invariant and thus lies in ${\mathbb F}_q$. By abuse of notation, we may thus define $\psi$ on such divisors by

$\displaystyle \psi(D) := \psi( x_1 + \ldots + x_k )$

and observe that $\psi$ is multiplicative in the sense that

$\displaystyle \psi( D_1 + D_2 ) = \psi(D_1) \psi(D_2)$

for rational effective divisors $D_1, D_2$. One can then define $\psi({\mathfrak f})$ for any non-trivial ideal ${\mathfrak f}$ by replacing that ideal with the associated rational effective divisor; for instance, if $f$ is a polynomial in the coordinate ring of $C$, with zeroes at $(x_1,y_1),\ldots,(x_k,y_k)$, then $\psi((f))$ is $\psi( x_1 + \ldots + x_k )$. Again, we have the multiplicativity property $\psi({\mathfrak f} {\mathfrak g}) = \psi({\mathfrak f}) \psi({\mathfrak g})$. If we then form the twisted normalised zeta function

$\displaystyle Z_{C/{\mathbb F}, \psi}(T) := \sum_D \psi(D) T^{\deg D},$

then by twisting the previous analysis, we eventually arrive at the exponential identity

$\displaystyle Z_{C/{\mathbb F}, \psi}(T) = \exp\Big( \sum_{n=1}^\infty \frac{S_n}{n} T^n \Big) \ \ \ \ \ (25)$

in analogy with (21) (or (2), (8), (12), or (16)), where the companion sums $S_n$ are defined by

$\displaystyle S_n := \sum_{(x,y) \in C({\mathbb F}_{q^n})} \psi( \hbox{Tr}_{{\mathbb F}_{q^n}/{\mathbb F}_q}(x) ),$

where the trace $\hbox{Tr}_{{\mathbb F}_{q^n}/{\mathbb F}_q}(x)$ of an element $x \in {\mathbb F}_{q^n}$ is defined by the formula

$\displaystyle \hbox{Tr}_{{\mathbb F}_{q^n}/{\mathbb F}_q}(x) := x + x^q + x^{q^2} + \ldots + x^{q^{n-1}}.$

In particular, $S_1$ is the exponential sum

$\displaystyle S_1 = \sum_{(x,y) \in C({\mathbb F}_q)} \psi(x), \ \ \ \ \ (26)$

which is an important type of sum in analytic number theory, containing for instance the Kloosterman sum

$\displaystyle \hbox{Kloos}(a,b;q) := \sum_{x \in {\mathbb F}_q^\times} \psi( ax + \frac{b}{x} )$

as a special case, where $C$ is the curve $\{ (x,y): y = ax + \frac{b}{x} \}$. (NOTE: the sign conventions for the companion sum $S_n$ are not consistent across the literature, sometimes it is $-S_n$ which is referred to as the companion sum.)
If $\psi$ is non-principal (and $C$ is non-linear), one can show (by a suitably twisted version of the Riemann-Roch theorem) that $Z_{C/{\mathbb F},\psi}(T)$ is a rational function of $T$, with no pole at $T = 1/q$, and one then gets an explicit formula of the form

$\displaystyle S_n = -\sum_\alpha \alpha^n \ \ \ \ \ (27)$

for the companion sums, where the $\alpha$ are the reciprocals of the zeroes of $Z_{C/{\mathbb F},\psi}$, in analogy to (22) (or (5), (10), (13), or (17)). For instance, in the case of Kloosterman sums, there is an identity of the form

$\displaystyle \hbox{Kloos}(a,b;q^n) = -\alpha^n - \beta^n$

for all $n$ and some complex numbers $\alpha, \beta$ depending on $a, b, q$, where we have abbreviated $\sum_{x \in {\mathbb F}_{q^n}^\times} \psi( \hbox{Tr}_{{\mathbb F}_{q^n}/{\mathbb F}_q}( ax + \frac{b}{x} ) )$ as $\hbox{Kloos}(a,b;q^n)$. As before, the Riemann hypothesis for $Z_{C/{\mathbb F},\psi}$ then gives a square root cancellation bound of the form

$\displaystyle S_n = O( q^{n/2} ) \ \ \ \ \ (28)$

for the companion sums (and in particular gives the very explicit Weil bound $|\hbox{Kloos}(a,b;q)| \leq 2\sqrt{q}$ for the Kloosterman sum), but again there is the amplification phenomenon that this sort of bound can be deduced from the apparently weaker bound

$\displaystyle S_n = O( q^{n(1/2+o(1))} ).$

As before, most of the known proofs of the Riemann hypothesis for these twisted zeta functions proceed by first establishing this weaker bound (e.g. one could again use Stepanov's method here for this goal) and then amplifying to the full bound (28); see Chapter 11 of Iwaniec-Kowalski for further details.
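The Weil bound $|\hbox{Kloos}(a,b;q)| \leq 2\sqrt{q}$ is likewise easy to test directly for prime moduli. (A small illustrative sketch, not from the original post; it uses Python's modular-inverse syntax `pow(x, -1, p)`.)

```python
import cmath, math

def kloosterman(a, b, p):
    """Kloos(a,b;p) = sum over x in F_p^* of exp(2 pi i (a x + b / x) / p)."""
    s = 0.0 + 0.0j
    for x in range(1, p):
        xinv = pow(x, -1, p)      # inverse of x modulo p
        s += cmath.exp(2j * math.pi * ((a * x + b * xinv) % p) / p)
    return s

for p in [101, 1009, 10007]:
    K = kloosterman(1, 2, p)
    print(p, abs(K), 2 * math.sqrt(p))   # |K| below the Weil bound 2 sqrt(p)
```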
One can also twist the zeta function on a curve by a multiplicative character $\chi: {\mathbb F}_q^\times \rightarrow {\mathbb C}^\times$ by similar arguments, except that instead of forming the sum $x_1 + \ldots + x_k$ of all the components of an effective divisor $D$, one takes the product $x_1 \ldots x_k$ instead, and similarly one replaces the trace

$\displaystyle \hbox{Tr}_{{\mathbb F}_{q^n}/{\mathbb F}_q}(x) = x + x^q + \ldots + x^{q^{n-1}}$

by the norm

$\displaystyle \hbox{Norm}_{{\mathbb F}_{q^n}/{\mathbb F}_q}(x) = x \cdot x^q \cdots x^{q^{n-1}} = x^{\frac{q^n-1}{q-1}}.$

Again, see Chapter 11 of Iwaniec-Kowalski for details.
Deligne famously extended the above theory to higher-dimensional varieties than curves, and also to the closely related context of $\ell$-adic sheaves on curves, giving rise to two separate proofs of the Weil conjectures in full generality. (Very roughly speaking, the former context can be obtained from the latter context by a sort of Fubini theorem type argument that expresses sums on higher-dimensional varieties as iterated sums on curves of various expressions related to $\ell$-adic sheaves.) In this higher-dimensional setting, the zeta function formalism is still present, but is much more difficult to use, in large part due to the much less tractable nature of divisors in higher dimensions (they are now combinations of codimension one subvarieties or subschemes, rather than combinations of points). To get around this difficulty, one has to change perspective yet again, from an algebraic or geometric perspective to an $\ell$-adic cohomological perspective. (I could imagine that once one is sufficiently expert in the subject, all these perspectives merge back together into a unified viewpoint, but I am certainly not yet at that stage of understanding.) In particular, the zeta function, while still present, plays a significantly less prominent role in the analysis (at least if one is willing to take Deligne's theorems as a black box); the explicit formula is now obtained via a different route, namely the Grothendieck-Lefschetz fixed point formula. I have written some notes on this material below the fold (based in part on some lectures of Philippe Michel, as well as the text of Iwaniec-Kowalski and also this book of Katz), but I should caution that my understanding here is still rather sketchy and possibly inaccurate in places.
Let $n$ be a natural number. We consider the question of how many "almost orthogonal" unit vectors $v_1,\ldots,v_m$ one can place in the Euclidean space ${\mathbb R}^n$. Of course, if we insist on $v_1,\ldots,v_m$ being exactly orthogonal, so that $\langle v_i, v_j \rangle = 0$ for all distinct $i, j$, then we can only pack at most $n$ unit vectors into this space. However, if one is willing to relax the orthogonality condition a little, so that $\langle v_i, v_j \rangle$ is small rather than zero, then one can pack a lot more unit vectors into ${\mathbb R}^n$, due to the important fact that pairs of vectors in high dimensions are typically almost orthogonal to each other. For instance, if one chooses $v_1,\ldots,v_m$ uniformly and independently at random on the unit sphere, then a standard computation (based on viewing the $v_i$ as gaussian vectors projected onto the unit sphere) shows that each inner product $\langle v_i, v_j \rangle$ concentrates around the origin with standard deviation $O(n^{-1/2})$ and with gaussian tails, and a simple application of the union bound then shows that for any fixed $A \geq 1$, one can pack $n^A$ unit vectors into ${\mathbb R}^n$ whose inner products are all of size $O( (\frac{A \log n}{n})^{1/2} )$.
One can remove the logarithm by using some number theoretic constructions. For instance, if $n$ is twice a prime $p$, one can identify ${\mathbb R}^n$ with the space ${\mathbb C}^p$ of complex-valued functions $f: {\mathbb F}_p \rightarrow {\mathbb C}$, where ${\mathbb F}_p$ is the field of $p$ elements, and if one then considers the $p$ different quadratic phases $x \mapsto \frac{1}{\sqrt{p}} e_p( a x^2 )$ for $a \in {\mathbb F}_p$, where $e_p(x) := e^{2\pi i x/p}$ is the standard character on ${\mathbb F}_p$, then a standard application of Gauss sum estimates reveals that these $p$ unit vectors in ${\mathbb C}^p$ all have inner products of magnitude at most $\frac{1}{\sqrt{p}}$ with each other. More generally, if we take a fixed $d \geq 2$ and consider the $p^{d-1}$ different polynomial phases $x \mapsto \frac{1}{\sqrt{p}} e_p( a_2 x^2 + \ldots + a_d x^d )$ for $a_2,\ldots,a_d \in {\mathbb F}_p$, then an application of the Weil conjectures for curves, proven by Weil, shows that the inner products of the associated $p^{d-1}$ unit vectors with each other have magnitude at most $\frac{d-1}{\sqrt{p}}$.
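The Gauss-sum computation behind the quadratic phase construction can be confirmed numerically: the Gram matrix of the $p$ vectors has all off-diagonal entries of magnitude exactly $p^{-1/2}$. (A minimal sketch, not from the original post.)

```python
import numpy as np

p = 101                                   # a prime
x = np.arange(p)
# rows: the p quadratic phase vectors x -> e_p(a x^2)/sqrt(p), a = 0,...,p-1
V = np.exp(2j * np.pi * np.outer(np.arange(p), (x * x) % p) / p) / np.sqrt(p)
G = V @ V.conj().T                        # Gram matrix of inner products
off = np.abs(G - np.eye(p))
print(off.max(), 1 / np.sqrt(p))          # both equal p^{-1/2}
```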
As it turns out, this construction is close to optimal, in that there is a polynomial limit to how many unit vectors one can pack into ${\mathbb R}^n$ with an inner product of $O( n^{-1/2} )$:

Theorem 1 (Cheap Kabatjanskii-Levenstein bound) Let $v_1,\ldots,v_m$ be unit vectors in ${\mathbb R}^n$ such that $|\langle v_i, v_j \rangle| \leq A n^{-1/2}$ for all distinct $i, j$, for some $1 \leq A \leq \frac{1}{2} n^{1/2}$. Then we have $m \leq ( \frac{Cn}{A^2} )^{C A^2}$ for some absolute constant $C$.

In particular, for fixed $A$ and large $n$, the number of unit vectors one can pack in ${\mathbb R}^n$ whose inner products all have magnitude at most $A n^{-1/2}$ will be polynomial in $n$. This doesn't quite match the construction coming from the Weil conjectures, although it is worth noting that the upper bound of $\frac{d-1}{\sqrt{p}}$ for the inner product is usually not sharp (the inner product is actually $\frac{1}{\sqrt{p}}$ times the sum of $d-1$ unit phases which one expects (cf. the Sato-Tate conjecture) to be uniformly distributed on the unit circle, and so the typical inner product is actually closer to $\frac{\sqrt{d-1}}{\sqrt{p}}$).
Note that for $A \leq \frac{1}{2}$, the $k=1$ case of the above theorem (or more precisely, Lemma 2 below) gives the bound $m \ll n$, which is essentially optimal as the example of an orthonormal basis shows. For $A \geq n^{1/2}$, the condition $|\langle v_i,v_j\rangle| \leq A n^{-1/2}$ is trivially true from Cauchy-Schwarz, and $m$ can be arbitrarily large. Finally, in the range $\frac{1}{2} n^{1/2} \leq A \leq n^{1/2}$, we can use a volume packing argument: we have $\|v_i - v_j\|^2 \geq 2 - 2An^{-1/2}$, so if we set $r := \frac{1}{2} (2 - 2An^{-1/2})^{1/2}$, then the open balls of radius $r$ around each $v_i$ are disjoint, while all lying in a ball of radius $2$, giving rise to the bound $m \leq (C/r)^n$ for some absolute constant $C$.
As I learned recently from Philippe Michel, a more precise version of this theorem is due to Kabatjanskii and Levenstein, who studied the closely related problem of sphere packing (or more precisely, cap packing) in the unit sphere of ${\mathbb R}^n$. However, I found a short proof of the above theorem which relies on one of my favorite tricks - the tensor power trick - so I thought I would give it here.
We begin with an easy case, basically the $A \leq \frac{1}{2}$ case of the above theorem:

Lemma 2 Let $v_1,\ldots,v_m$ be unit vectors in ${\mathbb R}^n$ such that $|\langle v_i, v_j \rangle| \leq \frac{1}{2} m^{-1/2}$ for all distinct $i, j$. Then $m < 2n$.

Proof: Suppose for contradiction that $m \geq 2n$. We consider the $m \times m$ Gram matrix $( \langle v_i, v_j \rangle )_{1 \leq i,j \leq m}$. This matrix is real symmetric with rank at most $n$, thus if one subtracts off the identity matrix, it has an eigenvalue of $-1$ with multiplicity at least $m-n \geq m/2$. Taking Hilbert-Schmidt norms, we conclude that

$\displaystyle \sum_{1 \leq i,j \leq m: i \neq j} |\langle v_i, v_j \rangle|^2 \geq \frac{m}{2}.$

But by hypothesis, the left-hand side is at most $m^2 \cdot \frac{1}{4m} = \frac{m}{4}$, giving the desired contradiction. $\Box$
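The contrapositive of Lemma 2 is easy to observe experimentally: once $m \geq 2n$, some pair of unit vectors must have inner product exceeding $\frac{1}{2} m^{-1/2}$ in magnitude, even for random (hence nearly orthogonal) vectors. (A small illustrative sketch, not from the original post.)

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 200                            # m >= 2n vectors in R^n
V = rng.standard_normal((m, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # normalise to unit vectors
off = np.abs(V @ V.T - np.eye(m))
print(off.max(), 0.5 / np.sqrt(m))        # max inner product exceeds m^{-1/2}/2
```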
To amplify the above lemma to cover larger values of $A$, we apply the tensor power trick. A direct application of the tensor power trick does not gain very much; however one can do a lot better by using the symmetric tensor power rather than the raw tensor power. This gives

Corollary 3 Let $k$ be a natural number, and let $v_1,\ldots,v_m$ be unit vectors in ${\mathbb R}^n$ such that $|\langle v_i, v_j \rangle| \leq \frac{1}{2} m^{-1/2k}$ for all distinct $i, j$. Then $m < 2 \binom{n+k-1}{k}$.

Proof: We work in the symmetric component $\hbox{Sym}^k {\mathbb R}^n$ of the tensor power $({\mathbb R}^n)^{\otimes k}$, which has dimension $\binom{n+k-1}{k}$. Applying the previous lemma to the tensor powers $v_i^{\otimes k}$ (which are unit vectors in this space with $\langle v_i^{\otimes k}, v_j^{\otimes k} \rangle = \langle v_i, v_j \rangle^k$), we obtain the claim. $\Box$

Using the trivial bound $\binom{n+k-1}{k} \leq (C \frac{n+k}{k})^k$, we can lower bound

$\displaystyle \frac{1}{2} \Big( 2 \binom{n+k-1}{k} \Big)^{-1/2k} \geq c \Big( \frac{k}{n+k} \Big)^{1/2}$

for some absolute constants $C, c > 0$. We can thus prove Theorem 1 by setting $k := \lceil C' A^2 \rceil$ for some sufficiently large absolute constant $C'$.
Van Vu and I have just uploaded to the arXiv our joint paper "Local universality of zeroes of random polynomials". This paper is a sequel to our previous work on local universality of eigenvalues of (non-Hermitian) random matrices with independent entries. One can re-interpret these previous results as a universality result for a certain type of random polynomial $P(z)$, namely the characteristic polynomial $P(z) = \det( M_n - z )$ of the random matrix $M_n$. In this paper, we consider the analogous problem for a different model of random polynomial, namely polynomials $P$ with independent random coefficients. More precisely, we consider random polynomials $P = P_n$ of the form

$\displaystyle P(z) = \sum_{i=0}^n c_i \xi_i z^i,$

where $c_0,\ldots,c_n$ are deterministic complex coefficients, and $\xi_0,\ldots,\xi_n$ are independent identically distributed copies of a complex random variable $\xi$, which we normalise to have mean zero and variance one. For simplicity we will ignore the technical issue that the leading coefficient $c_n \xi_n$ is allowed to vanish; then $P$ has $n$ zeroes $\zeta_1,\ldots,\zeta_n$ (counting multiplicity), which can be viewed as a random point process $\{\zeta_1,\ldots,\zeta_n\}$ in the complex plane. In analogy with other models (such as random matrix models), we expect the (suitably normalised) asymptotic statistics of this point process in the limit $n \rightarrow \infty$ to be universal, in the sense that it is largely independent of the precise distribution of the atom variable $\xi$.
Our results are fairly general with regard to the choice of coefficients $c_i$, but we isolate three particular choices of coefficients that are particularly natural and well-studied in the literature:

- Flat polynomials (or Weyl polynomials), in which $c_i := \frac{1}{\sqrt{i!}}$.
- Elliptic polynomials (or binomial polynomials), in which $c_i := \sqrt{\binom{n}{i}}$.
- Kac polynomials, in which $c_i := 1$.
The flat and elliptic polynomials enjoy special symmetries in the model case when the atom distribution $\xi$ is a complex Gaussian $N(0,1)_{\mathbb C}$. Indeed, the zeroes $\zeta_1,\ldots,\zeta_n$ of elliptic polynomials with complex Gaussian coefficients have a distribution which is invariant with respect to isometries $T: {\mathbb C} \cup \{\infty\} \rightarrow {\mathbb C} \cup \{\infty\}$ of the Riemann sphere ${\mathbb C} \cup \{\infty\}$ (thus $T\{\zeta_1,\ldots,\zeta_n\}$ has the same distribution as $\{\zeta_1,\ldots,\zeta_n\}$), while the zeroes of the limiting case $\sum_{i=0}^\infty \frac{1}{\sqrt{i!}} \xi_i z^i$ of the flat polynomials with complex Gaussian coefficients are similarly invariant with respect to isometries $T: {\mathbb C} \rightarrow {\mathbb C}$ of the complex plane ${\mathbb C}$. (For a nice geometric proof of these facts, I recommend the book of Hough, Krishnapur, Peres, and Virag.)
The global (i.e. coarse-scale) distribution of zeroes of these polynomials is well understood, first in the case of gaussian distributions using the fundamental tool of the Kac-Rice formula, and then for more general choices of atom distribution in the recent work of Kabluchko and Zaporozhets. The zeroes of the flat polynomials are asymptotically distributed according to the circular law, normalised to be uniformly distributed on the disk of radius $\sqrt{n}$ centred at the origin. To put it a bit informally, the zeroes are asymptotically distributed according to the measure $\frac{1}{\pi} 1_{|z| \leq \sqrt{n}}\ m(dz)$, where $m$ denotes Lebesgue measure on the complex plane. One can non-rigorously see the scale $\sqrt{n}$ appear by observing that when $|z|$ is much larger than $\sqrt{n}$, we expect the leading term $\frac{1}{\sqrt{n!}} \xi_n z^n$ of the flat polynomial $\sum_{i=0}^n \frac{1}{\sqrt{i!}} \xi_i z^i$ to dominate, so that the polynomial should not have any zeroes in this region.

Similarly, the distribution of the zeroes of the elliptic polynomials is known to be asymptotically distributed according to a Cauchy-type distribution $\frac{n}{\pi} \frac{1}{(1+|z|^2)^2}\ m(dz)$. The Kac polynomials behave differently; the zeroes concentrate uniformly on the unit circle $|z|=1$ (which is reasonable, given that one would expect the top order term $\xi_n z^n$ to dominate for $|z| > 1$ and the bottom order term $\xi_0$ to dominate for $|z| < 1$). In particular, whereas the typical spacing between zeroes in the flat and elliptic cases would be expected to be comparable to $1$ and $n^{-1/2}$ respectively, the typical spacing between zeroes for a Kac polynomial would be expected instead to be comparable to $1/n$.
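These global distributions are easy to visualise numerically by computing the roots of sampled polynomials: with Gaussian atoms the flat zeroes fill the disk of radius $\sqrt{n}$, while the Kac zeroes hug the unit circle. (A quick illustrative sketch, not from the original paper.)

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
n = 80
xi = rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)
# coefficient lists for np.roots are highest degree first
flat = [xi[i] / np.sqrt(float(factorial(i))) for i in range(n, -1, -1)]
kac = list(xi[::-1])                       # Kac coefficients: c_i = 1
zf, zk = np.roots(flat), np.roots(kac)
print(np.mean(np.abs(zf) <= np.sqrt(n)))   # ~1: flat zeroes inside radius sqrt(n)
print(np.quantile(np.abs(zk), [0.1, 0.9])) # both near 1: Kac zeroes on the circle
```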
In our paper we studied the local distribution of zeroes at the scale of the typical spacing. In the case of polynomials with continuous complex atom distribution $\xi$, the natural statistic to measure here is the $k$-point correlation function $\rho_k(z_1,\ldots,z_k)$, which for distinct complex numbers $z_1,\ldots,z_k$ can be defined as the probability that there is a zero in each of the balls $B(z_1,\epsilon),\ldots,B(z_k,\epsilon)$ for some infinitesimal $\epsilon > 0$, divided by the normalising factor $(\pi \epsilon^2)^k$. (One can also define a $k$-point correlation function in the case of a discrete distribution, but it will be a measure rather than a function in that case.) Our first main theorem is a general replacement principle which asserts, very roughly speaking, that the asymptotic $k$-point correlation functions of two random polynomials $P, \tilde P$ will agree if the log-magnitudes $\log |P(z)|, \log |\tilde P(z)|$ have asymptotically the same distribution (actually we have to consider the joint distribution of $\log |P(z_1)|,\ldots,\log |P(z_k)|$ for several points $z_1,\ldots,z_k$, but let us ignore this detail for now), and if the polynomials $P, \tilde P$ obey a "non-clustering property" which asserts, roughly speaking, that not too many of the zeroes of $P$ can typically concentrate in a small ball. This replacement principle was implicit in our previous paper (and can be viewed as a local-scale version of the global-scale replacement principle in this earlier paper of ours). Specialising the replacement principle to the elliptic or flat polynomials, and using the Lindeberg swapping argument, we obtain a Two Moment Theorem that asserts, roughly speaking, that the asymptotic behaviour of the $k$-point correlation functions depends only on the first two moments of the real and imaginary components of $\xi$, as long as one avoids some regions of space where universality is expected to break down. (In particular, because $P(0) = c_0 \xi_0$ does not have a universal distribution, one does not expect universality to hold near the origin; there is a similar problem near infinity.) Closely related results, by a slightly different method, have also been obtained recently by Ledoan, Merkli, and Starr. A similar result holds for the Kac polynomials after rescaling to account for the narrower spacing between zeroes.
We also have analogous results in the case of polynomials with real coefficients (so that the coefficients $c_i$ and the atom distribution $\xi$ are both real). In this case one expects to see a certain number of real zeroes, together with conjugate pairs of complex zeroes. Instead of the $k$-point correlation function $\rho_k(z_1,\ldots,z_k)$, the natural object of study is now the mixed $(k,l)$-point correlation function $\rho_{k,l}(x_1,\ldots,x_k,z_1,\ldots,z_l)$ that (roughly speaking) controls how often one can find a real zero near the real numbers $x_1,\ldots,x_k$, and a complex zero near the points $z_1,\ldots,z_l$. It turns out that one can disentangle the real and strictly complex zeroes and obtain separate universality results for both zeroes, provided that at least one of the polynomials involved obeys a "weak repulsion estimate" that shows that the real zeroes do not cluster very close to each other (and that the complex zeroes do not cluster very close to their complex conjugates). Such an estimate is needed to avoid the situation of two nearby real zeroes "colliding" to create a (barely) complex zero and its complex conjugate, or the time reversal of such a collision. Fortunately, in the case of real gaussian polynomials one can use the Kac-Rice formula to establish such a weak repulsion estimate, allowing analogues of the above universality results for complex random polynomials in the real case. Among other things, this gives universality results for the number $N_{\mathbb R}$ of real zeroes of a random flat or elliptic polynomial; it turns out this number is typically equal to $(\frac{2}{\pi} + o(1)) \sqrt{n}$ and $(1+o(1)) \sqrt{n}$ respectively. (For Kac polynomials, the situation is somewhat different; it was already known that $N_{\mathbb R} = (\frac{2}{\pi} + o(1)) \log n$ thanks to a long series of papers, starting with the foundational work of Kac and culminating in the work of Ibragimov and Maslova.)
While our methods are based on our earlier work on eigenvalues of random matrices, the situation with random polynomials is actually somewhat easier to deal with. This is because the log-magnitude $\log |P(z)|$ of a random polynomial with independent coefficients is significantly easier to control than the log-determinant $\log |\det(M_n - z)|$ of a random matrix, as the former can be controlled by the central limit theorem, while the latter requires significantly more difficult arguments (in particular, bounds on the least singular value combined with Girko's Hermitization trick). As such, one could view the current paper as an introduction to our more complicated previous paper, and with this in mind we have written the current paper to be self-contained (though this did make the paper somewhat lengthy, checking in at 68 pages).
The purpose of this post is to link to a short unpublished note of mine that I wrote back in 2010 but forgot to put on my web page at the time. Entitled “A physical space proof of the bilinear Strichartz and local smoothing estimates for the Schrodinger equation“, it gives a proof of two standard estimates for the free (linear) Schrodinger equation in flat Euclidean space, namely the bilinear Strichartz estimate and the local smoothing estimate, using primarily “physical space” methods such as integration by parts, instead of “frequency space” methods based on the Fourier transform, although a small amount of Fourier analysis (basically sectoral projection to make the Schrodinger waves move roughly in a given direction) is still needed. This is somewhat in the spirit of an older paper of mine with Klainerman and Rodnianski doing something similar for the wave equation, and is also very similar to a paper of Planchon and Vega from 2009. The hope was that by avoiding the finer properties of the Fourier transform, one could obtain a more robust argument which could also extend to nonlinear, non-free, or non-flat situations. These notes were cited once or twice by some people that I had privately circulated them to, so I decided to put them online here for reference.
UPDATE, July 24: Fabrice Planchon has kindly supplied another note in which he gives a particularly simple proof of local smoothing in one dimension, and discusses some other variants of the method (related to the paper of Planchon and Vega cited earlier).
As in previous posts, we use the following asymptotic notation: $x$ is a parameter going off to infinity, and all quantities may depend on $x$ unless explicitly declared to be "fixed". The asymptotic notation $O(), o(), \ll$ is then defined relative to this parameter. A quantity $q$ is said to be of polynomial size if one has $q = O(x^{O(1)})$, and bounded if $q = O(1)$. We also write $X \lessapprox Y$ for $X \ll x^{o(1)} Y$, and $X \sim Y$ for $X \ll Y \ll X$.
The purpose of this post is to collect together all the various refinements to the second half of Zhang's paper that have been obtained as part of the polymath8 project and present them as a coherent argument. In order to state the main result, we need to recall some definitions. If $I$ is a bounded subset of ${\mathbb R}$, let ${\mathcal S}_I$ denote the square-free numbers whose prime factors lie in $I$, and let $P_I := \prod_{p \in I} p$ denote the product of the primes $p$ in $I$. Note by the Chinese remainder theorem that the set $({\mathbb Z}/P_I{\mathbb Z})^\times$ of primitive congruence classes $a\ (P_I)$ modulo $P_I$ can be identified with the tuples $(a_q\ (q))_{q \in {\mathcal S}_I}$ of primitive congruence classes $a_q\ (q)$ of congruence classes modulo $q$ for each $q \in {\mathcal S}_I$ which obey the Chinese remainder theorem

$\displaystyle (a_{qr}\ (qr)) = (a_q\ (q)) \cap (a_r\ (r))$

for all coprime $q, r \in {\mathcal S}_I$, since one can identify $a\ (P_I)$ with the tuple $(a\ (q))_{q \in {\mathcal S}_I}$ for each $a\ (P_I)$.
If $y > 1$ and $n$ is a natural number, we say that $n$ is $y$-densely divisible if, for every $1 \leq R \leq n$, one can find a factor of $n$ in the interval $[y^{-1} R, R]$. We say that $n$ is doubly $y$-densely divisible if, for every $1 \leq R \leq n$, one can find a factor $m$ of $n$ in the interval $[y^{-1} R, R]$ such that $m$ is itself $y$-densely divisible. We let ${\mathcal D}^2_y$ denote the set of doubly $y$-densely divisible natural numbers, and ${\mathcal D}_y$ the set of $y$-densely divisible numbers.
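For concreteness, here is a brute-force test of these two notions for small $n$ (a minimal sketch, not from the original post); a factor $m$ handles every $R$ in $[m, ym]$, so the condition is that those intervals cover $[1, n]$:

```python
def dense_div(n, y, depth=1):
    """depth=1: y-densely divisible; depth=2: doubly y-densely divisible."""
    if depth == 0:
        return True
    good = [m for m in range(1, n + 1)
            if n % m == 0 and dense_div(m, y, depth - 1)]
    covered = 1.0
    for m in good:                       # divisors in increasing order
        if m > covered:
            return False                 # some R in (covered, m) has no factor
        covered = max(covered, y * m)
    return covered >= n

print([n for n in range(1, 80) if dense_div(n, 2, 1)])   # 2-densely divisible
print([n for n in range(1, 80) if dense_div(n, 2, 2)])   # doubly 2-densely divisible
```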
Given any finitely supported sequence $\alpha: {\mathbb N} \rightarrow {\mathbb C}$ and any primitive residue class $a\ (q)$, we define the discrepancy

$\displaystyle \Delta(\alpha; a\ (q)) := \sum_{n: n = a\ (q)} \alpha(n) - \frac{1}{\phi(q)} \sum_{n: (n,q)=1} \alpha(n).$
For any fixed $\varpi, \delta > 0$, we let $MPZ''[\varpi,\delta]$ denote the assertion that

$\displaystyle \sum_{q \in {\mathcal S}_I \cap {\mathcal D}^2_{x^\delta}: q < x^{1/2+2\varpi}} |\Delta(\Lambda 1_{[x,2x]}; a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (1)$

for any fixed $A > 0$, any bounded $I$, and any primitive $a\ (P_I)$, where $\Lambda$ is the von Mangoldt function. Importantly, we do not require $I$ or $a$ to be fixed; in particular, the set $I$ could grow polynomially in $x$, and the modulus $P_I$ could grow exponentially in $x$, but the implied constant in (1) would still need to be fixed (so it has to be uniform in $I$ and $a$). (In previous formulations of these estimates, the system of congruence classes $a\ (q)$ was also required to obey a controlled multiplicity hypothesis, but we no longer need this hypothesis in our arguments.) In this post we will record the proof of the following result, which is currently the best distribution result produced by the ongoing polymath8 project to optimise Zhang's theorem on bounded gaps between primes:

Theorem 1 We have $MPZ''[\varpi,\delta]$ whenever

$\displaystyle \frac{280}{3} \varpi + \frac{80}{3} \delta < 1. \ \ \ \ \ (2)$

This improves upon the previous constraint from this previous post, although that latter statement was stronger in that it only required single dense divisibility rather than double dense divisibility. However, thanks to the efficiency of the sieving step of our argument, the upgrade of the single dense divisibility hypothesis to double dense divisibility costs almost nothing with respect to the $k_0$ parameter (which, using this constraint, gives a value of $k_0 = 720$ as verified in these comments, which then implies a value of $H = 5,414$).
This estimate is deduced from three sub-estimates, which require a bit more notation to state.
Definition 2 A coefficient sequence is a finitely supported sequence $\alpha: {\mathbb N} \rightarrow {\mathbb R}$ that obeys the bounds

$\displaystyle |\alpha(n)| \ll \tau^{O(1)}(n) \log^{O(1)}(x) \ \ \ \ \ (3)$

for all $n$, where $\tau$ is the divisor function.

- (i) A coefficient sequence $\alpha$ is said to be at scale $N$ for some $N \geq 1$ if it is supported on an interval of the form $\{ cN \leq n \leq CN \}$ for some $1 \ll c < C \ll 1$.
- (ii) A coefficient sequence $\alpha$ at scale $N$ is said to obey the Siegel-Walfisz theorem if one has

$\displaystyle |\Delta(\alpha 1_{(\cdot,q)=1}; a\ (r))| \ll \tau(qr)^{O(1)} N \log^{-A} x \ \ \ \ \ (4)$

for any $q, r \geq 1$, any fixed $A$, and any primitive residue class $a\ (r)$.
- (iii) A coefficient sequence $\alpha$ at scale $N$ is said to be smooth if it takes the form $\alpha(n) = \psi(n/N)$ for some smooth function $\psi: {\mathbb R} \rightarrow {\mathbb C}$ supported on an interval of size $O(1)$ and obeying the derivative bounds

$\displaystyle |\psi^{(j)}(t)| \ll \log^{O(1)} x$

for all fixed $j \geq 0$ (note that the implied constant in the $O()$ notation may depend on $j$).
Definition 3 (Type I, Type II, Type III estimates) Let $0 < \varpi < 1/4$, $0 < \delta < 1/4 + \varpi$, and $0 < \sigma < 1/2$ be fixed quantities. We let $I$ be an arbitrary bounded subset of ${\mathbb R}$, and $a\ (P_I)$ a primitive congruence class.

- (i) We say that $Type''_I[\varpi,\delta,\sigma]$ holds if, whenever $M, N$ are quantities with

$\displaystyle MN \sim x \ \ \ \ \ (5)$

and

$\displaystyle x^{1/2-\sigma} \lessapprox N \lessapprox x^{1/2-2\varpi-c} \ \ \ \ \ (6)$

for some fixed $c > 0$, and $\alpha, \beta$ are coefficient sequences at scales $M, N$ respectively, with $\beta$ obeying a Siegel-Walfisz theorem, we have

$\displaystyle \sum_{q \in {\mathcal S}_I \cap {\mathcal D}^2_{x^\delta}: q < x^{1/2+2\varpi}} |\Delta(\alpha \star \beta; a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (7)$

for any fixed $A > 0$.
- (ii) We say that $Type''_{II}[\varpi,\delta]$ holds if the conclusion (7) of $Type''_I[\varpi,\delta,\sigma]$ holds under the same hypotheses as before, except that (6) is replaced with

$\displaystyle x^{1/2-2\varpi-c} \lessapprox N \lessapprox x^{1/2} \ \ \ \ \ (8)$

for some sufficiently small fixed $c > 0$.
- (iii) We say that $Type''_{III}[\varpi,\delta,\sigma]$ holds if, whenever $M, N_1, N_2, N_3$ are quantities with

$\displaystyle M N_1 N_2 N_3 \sim x \ \ \ \ \ (9)$

and

$\displaystyle x^{1/2+\sigma} \lessapprox N_1 N_2, N_1 N_3, N_2 N_3 \ \ \ \ \ (10)$

and $\alpha, \psi_1, \psi_2, \psi_3$ are coefficient sequences at scales $M, N_1, N_2, N_3$ respectively, with $\psi_1, \psi_2, \psi_3$ smooth, we have

$\displaystyle \sum_{q \in {\mathcal S}_I \cap {\mathcal D}^2_{x^\delta}: q < x^{1/2+2\varpi}} |\Delta(\alpha \star \psi_1 \star \psi_2 \star \psi_3; a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (11)$

for any fixed $A > 0$.
Theorem 1 is then a consequence of the following four statements.

Theorem 4 (Type I estimate) $Type''_I[\varpi,\delta,\sigma]$ holds whenever $\varpi, \delta, \sigma > 0$ are fixed quantities such that

$\displaystyle 56 \varpi + 16 \delta + 4 \sigma < 1.$

Theorem 5 (Type II estimate) $Type''_{II}[\varpi,\delta]$ holds whenever $\varpi, \delta > 0$ are fixed quantities such that

$\displaystyle 68 \varpi + 14 \delta < 1.$

Theorem 6 (Type III estimate) $Type''_{III}[\varpi,\delta,\sigma]$ holds whenever $0 < \varpi < 1/4$, $0 < \delta < 1/4 + \varpi$, and $\sigma > 0$ are fixed quantities such that

$\displaystyle \sigma > \frac{1}{18} + \frac{28}{9} \varpi + \frac{2}{9} \delta. \ \ \ \ \ (12)$

In particular, if $\varpi, \delta$ are sufficiently small, then all values of $\sigma$ that are sufficiently close to (but larger than) $1/18$ are admissible.
Lemma 7 (Combinatorial lemma) Let $0 < \varpi < 1/4$, $0 < \delta < 1/4 + \varpi$, and $1/10 < \sigma < 1/2$ be such that $Type''_I[\varpi,\delta,\sigma]$, $Type''_{II}[\varpi,\delta]$, and $Type''_{III}[\varpi,\delta,\sigma]$ simultaneously hold. Then $MPZ''[\varpi,\delta]$ holds.

Indeed, if (2) holds, one checks that the hypotheses for Theorems 4, 5, 6 are obeyed for $\sigma$ sufficiently close to $1/10$, at which point the claim follows from Lemma 7.
The proofs of Theorems 4, 5, 6 will be given below the fold, while the proof of Lemma 7 follows from the arguments in this previous post. We remark that in our current arguments, the double dense divisibility is only fully used in the Type I estimates; the Type II and Type III estimates are also valid just with single dense divisibility.
Remark 1 Theorem 6 is vacuously true for $\sigma > 1/6$, as the condition (10) cannot be satisfied in this case. If we use this trivial case of Theorem 6, while keeping the full strength of Theorems 4 and 5, we obtain Theorem 1 in the regime

$\displaystyle 168 \varpi + 48 \delta < 1.$