
As in previous posts, we use the following asymptotic notation: {x} is a parameter going off to infinity, and all quantities may depend on {x} unless explicitly declared to be “fixed”. The asymptotic notation {O(), o(), \ll} is then defined relative to this parameter. A quantity {q} is said to be of polynomial size if one has {q = O(x^{O(1)})}, and said to be bounded if {q=O(1)}. Another convenient notation: we write {X \lessapprox Y} for {X \ll x^{o(1)} Y}. Thus for instance the divisor bound asserts that if {q} has polynomial size, then the number of divisors of {q} is {\lessapprox 1}.

This post is intended to highlight a phenomenon unearthed in the ongoing polymath8 project (and is in fact a key component of Zhang’s proof that there are bounded gaps between primes infinitely often), namely that one can get quite good bounds on relatively short exponential sums when the modulus {q} is smooth, through the basic technique of Weyl differencing (ultimately based on the Cauchy-Schwarz inequality, and also related to the van der Corput lemma in equidistribution theory). Improvements in the case of smooth moduli have appeared before in the literature (e.g. in this paper of Heath-Brown, this paper of Graham and Ringrose, this later paper of Heath-Brown, this paper of Chang, or this paper of Goldmakher); the arguments here are particularly close to those of the first paper of Heath-Brown. It now also appears that further optimisation of this Weyl differencing trick could lead to noticeable improvements in the numerology for the polymath8 project, so I am devoting this post to explaining this trick further.

To illustrate the method, let us begin with the classical problem in analytic number theory of estimating an incomplete character sum

\displaystyle  \sum_{M+1 \leq n \leq M+N} \chi(n)

where {\chi} is a primitive Dirichlet character of some conductor {q}, {M} is an integer, and {N} is some quantity between {1} and {q}. Clearly we have the trivial bound

\displaystyle  |\sum_{M+1 \leq n \leq M+N} \chi(n)| \leq N; \ \ \ \ \ (1)

we also have the classical Pólya-Vinogradov inequality

\displaystyle  |\sum_{M+1 \leq n \leq M+N} \chi(n)| \ll q^{1/2} \log q. \ \ \ \ \ (2)

This latter inequality gives improvements over the trivial bound when {N} is much larger than {q^{1/2}}, but not for {N} much smaller than {q^{1/2}}. The Pólya-Vinogradov inequality can be deduced via a little Fourier analysis from the completed exponential sum bound

\displaystyle  | \sum_{n \in {\bf Z}/q{\bf Z}} \chi(n) e_q( an )| \ll q^{1/2}

for any {a \in {\bf Z}/q{\bf Z}}, where {e_q(n) :=e^{2\pi i n/q}}. (In fact, from the classical theory of Gauss sums, this exponential sum is equal to {\tau(\chi) \overline{\chi(a)}} for some complex number {\tau(\chi)} of norm {\sqrt{q}}.)
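As a quick numerical sanity check of this Gauss sum fact (an illustration of mine, not part of the original argument; the prime and the values of {a} are arbitrary), one can verify the magnitude {\sqrt{q}} directly for the Legendre symbol:

```python
# Check |sum_{n mod p} chi(n) e_p(a n)| = sqrt(p) for the Legendre symbol
# chi modulo a prime p, at a few arbitrary values of a coprime to p.
import cmath

def legendre(n, p):
    """Legendre symbol (n|p) via Euler's criterion; returns -1, 0 or 1."""
    t = pow(n, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

p = 1009  # an arbitrary prime
for a in (1, 2, 123):
    s = sum(legendre(n, p) * cmath.exp(2j * cmath.pi * a * n / p)
            for n in range(p))
    print(a, abs(s), p ** 0.5)  # the last two columns should agree
```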

In the case when {q} is a prime, improving upon the above two inequalities is an important but difficult problem, with only partially satisfactory results so far. To give just one indication of the difficulty, the seemingly modest improvement

\displaystyle |\sum_{M+1 \leq n \leq M+N} \chi(n)| \ll p^{1/2} \log \log p

to the Pólya-Vinogradov inequality when {q=p} was a prime required a 14-page paper in Inventiones by Montgomery and Vaughan to prove, and even then it was only conditional on the generalised Riemann hypothesis! See also this more recent paper of Granville and Soundararajan for an unconditional variant of this result in the case that {\chi} has odd order.

Another important improvement is the Burgess bound, which in our notation asserts that

\displaystyle  |\sum_{M+1 \leq n \leq M+N} \chi(n)| \lessapprox N^{1-1/r} q^{\frac{r+1}{4r^2}} \ \ \ \ \ (3)

for any fixed integer {r \geq 2}, assuming that {q} is square-free (for simplicity) and of polynomial size; see this previous post for a discussion of the Burgess argument. This is non-trivial for {N} as small as {q^{1/4+o(1)}}.
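To get a feel for the exponents in (3), here is a short bookkeeping sketch (my own, writing {N = q^\theta}): it minimises the Burgess exponent {\theta(1-1/r) + \frac{r+1}{4r^2}} over {r} and compares it with the trivial exponent {\theta}.

```python
# If N = q^theta, the Burgess bound (3) reads q^{theta(1-1/r) + (r+1)/(4r^2)}
# up to q^{o(1)} factors; anything below the trivial exponent theta is a saving.
for theta in (0.25, 0.3, 0.5):
    best = min(theta * (1 - 1 / r) + (r + 1) / (4 * r * r)
               for r in range(2, 200))
    print(f"theta={theta}: Burgess exponent {best:.5f} vs trivial {theta}")
# theta = 0.25 yields no saving, matching the q^{1/4+o(1)} threshold;
# theta = 0.5 recovers the exponent 7/16 = 0.4375 quoted below.
```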

In the case when {q} is prime, there has been very little improvement to the Burgess bound (or its Fourier dual, which can give bounds for {N} as large as {q^{3/4-o(1)}}) in the last fifty years; an improvement to the exponents in (3) in this case (particularly anything that gave a power saving for {N} below {q^{1/4}}) would in fact be rather significant news in analytic number theory.

However, in the opposite case when {q} is smooth – that is to say, all of its prime factors are much smaller than {q} – then one can do better than the Burgess bound in some regimes. This fact has been observed in several places in the literature (in particular, in the papers of Heath-Brown, Graham-Ringrose, Chang, and Goldmakher mentioned previously), but also turns out to (implicitly) be a key insight in Zhang’s paper on bounded prime gaps. In the case of character sums, one such improved estimate (closely related to Theorem 2 of the Heath-Brown paper) is as follows:

Proposition 1 Let {q} be square-free with a factorisation {q = q_1 q_2} and of polynomial size, and let {M,N} be integers with {1 \leq N \leq q}. Then for any primitive character {\chi} with conductor {q}, one has

\displaystyle  | \sum_{M+1 \leq n \leq M+N} \chi(n) | \lessapprox N^{1/2} q_1^{1/2} + N^{1/2} q_2^{1/4}.

This proposition is particularly powerful when {q} is smooth, as this gives many factorisations {q = q_1 q_2} with the ability to specify {q_1,q_2} with a fair amount of accuracy. For instance, if {q} is {y}-smooth (i.e. all prime factors are at most {y}), then by the greedy algorithm one can find a divisor {q_1} of {q} with {y^{-2/3} q^{1/3} \leq q_1 \leq y^{1/3} q^{1/3}}; if we set {q_2 := q/q_1}, then {y^{-1/3} q^{2/3} \leq q_2 \leq y^{2/3} q^{2/3}}, and the above proposition then gives

\displaystyle  | \sum_{M+1 \leq n \leq M+N} \chi(n) | \lessapprox y^{1/6} N^{1/2} q^{1/6}

which can improve upon the Burgess bound when {y} is small. For instance, if {N = q^{1/2}}, then this bound becomes {\lessapprox y^{1/6} q^{5/12}}; in contrast the Burgess bound only gives {\lessapprox q^{7/16}} for this value of {N} (using the optimal choice {r=2} for {r}), which is inferior for {y < q^{1/8}}.
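The greedy step above is easy to make concrete. The following sketch (helper names and parameters are mine) multiplies prime factors of {q} together until the product first passes {y^{-2/3} q^{1/3}}; since each prime is at most {y}, the product overshoots by a factor of at most {y}, landing {q_1} in the stated range.

```python
# Greedy factorisation sketch: multiply prime factors of q until the running
# product first reaches y^{-2/3} q^{1/3}.  Each prime is at most y, so the
# final product q1 satisfies y^{-2/3} q^{1/3} <= q1 <= y^{1/3} q^{1/3}.
def greedy_split(primes, y):
    """primes: the prime factors of a squarefree, y-smooth modulus q."""
    q = 1
    for p in primes:
        q *= p
    target = q ** (1 / 3) * y ** (-2 / 3)
    q1 = 1
    for p in primes:
        if q1 >= target:
            break
        q1 *= p
    return q1, q // q1

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]  # a 37-smooth q
q1, q2 = greedy_split(primes, 37)
q = q1 * q2
print(q1, q2,
      37 ** (-2 / 3) * q ** (1 / 3) <= q1 <= 37 ** (1 / 3) * q ** (1 / 3))
```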

The hypothesis that {q} be squarefree may be relaxed, but for applications to the Polymath8 project, it is only the squarefree moduli that are relevant.

Proof: If {N \ll q_1} then the claim follows from the trivial bound (1), while for {N \gg q_2} the claim follows from (2). Hence we may assume that

\displaystyle  q_1 < N < q_2.

We use the method of Weyl differencing, the key point being to difference in multiples of {q_1}.

Let {K := \lfloor N/q_1 \rfloor}, thus {K \geq 1}. For any {1 \leq k \leq K}, we have

\displaystyle  \sum_{M+1 \leq n \leq M+N} \chi(n) = \sum_n 1_{[M+1,M+N]}(n+kq_1) \chi(n+kq_1)

and thus on averaging

\displaystyle  \sum_{M+1 \leq n \leq M+N} \chi(n) = \frac{1}{K} \sum_n \sum_{k=1}^K 1_{[M+1,M+N]}(n+kq_1) \chi(n+kq_1). \ \ \ \ \ (4)
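The identity (4) is pure reindexing, and can be checked numerically on a small example (the example and parameter choices below are mine):

```python
# Check of the averaging identity (4): each shift n -> n + k*q1 merely
# reindexes the interval, so the average over k = 1..K changes nothing.
def legendre(n, p):
    t = pow(n, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

q1, q2 = 101, 103                                   # q = q1*q2, squarefree
chi = lambda n: legendre(n, q1) * legendre(n, q2)   # Jacobi symbol mod q

M, N = 500, 700
K = N // q1                                         # K = floor(N/q1) >= 1
lhs = sum(chi(n) for n in range(M + 1, M + N + 1))
rhs = sum(chi(n + k * q1)
          for k in range(1, K + 1)
          for n in range(M + 1 - k * q1, M + N + 1 - k * q1))
print(lhs, rhs / K, lhs == rhs / K)                 # the identity (4) holds
```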

By the Chinese remainder theorem, we may factor

\displaystyle  \chi(n) = \chi_1(n) \chi_2(n)

where {\chi_1,\chi_2} are primitive characters of conductor {q_1,q_2} respectively. As {\chi_1} is periodic of period {q_1}, we thus have

\displaystyle  \chi(n+kq_1) = \chi_1(n) \chi_2(n+kq_1)

and so we can take {\chi_1} out of the inner summation of the right-hand side of (4) to obtain

\displaystyle  \sum_{M+1 \leq n \leq M+N} \chi(n) = \frac{1}{K} \sum_n \chi_1(n) \sum_{k=1}^K 1_{[M+1,M+N]}(n+kq_1) \chi_2(n+kq_1)

and hence by the triangle inequality

\displaystyle  |\sum_{M+1 \leq n \leq M+N} \chi(n)| \leq \frac{1}{K} \sum_n |\sum_{k=1}^K 1_{[M+1,M+N]}(n+kq_1) \chi_2(n+kq_1)|.

Note how the characters on the right-hand side only have period {q_2} rather than {q=q_1 q_2}. This reduction in the period is ultimately the source of the saving over the Pólya-Vinogradov inequality.
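This factorisation and the key periodicity can again be checked on a small example of mine (with Legendre symbols standing in for {\chi_1,\chi_2}):

```python
# Check that chi = chi1 * chi2 and that chi1, having period q1, is blind to
# shifts by multiples of q1, so only the chi2 factor feels the shift.
def legendre(n, p):
    t = pow(n, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

q1, q2 = 101, 103
chi1 = lambda n: legendre(n, q1)
chi2 = lambda n: legendre(n, q2)
chi = lambda n: chi1(n) * chi2(n)

print(all(chi(n + k * q1) == chi1(n) * chi2(n + k * q1)
          for n in range(1, 400) for k in range(1, 8)))  # True
```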

Note that the inner sum vanishes unless {n \in [M+1-Kq_1,M+N]}, which is an interval of length {O(N)} by choice of {K}. Thus by Cauchy-Schwarz one has

\displaystyle  | \sum_{M+1 \leq n \leq M+N} \chi(n) | \ll

\displaystyle  \frac{N^{1/2}}{K} (\sum_n |\sum_{k=1}^K 1_{[M+1,M+N]}(n+kq_1) \chi_2(n+kq_1)|^2)^{1/2}.

We expand the right-hand side as

\displaystyle  \frac{N^{1/2}}{K} |\sum_{1 \leq k,k' \leq K} \sum_n

\displaystyle  1_{[M+1,M+N]}(n+kq_1) 1_{[M+1,M+N]}(n+k'q_1) \chi_2(n+kq_1) \overline{\chi_2(n+k'q_1)}|^{1/2}.

We first consider the diagonal contribution {k=k'}. In this case we use the trivial bound {O(N)} for the inner summation, so this contribution is at most {\frac{N^{1/2}}{K} (K \cdot O(N))^{1/2} = O(N/K^{1/2}) = O( N^{1/2} q_1^{1/2} )}, since {K \gg N/q_1}.

Now we consider the off-diagonal case; by symmetry we can take {k < k'}. Then the indicator functions {1_{[M+1,M+N]}(n+kq_1) 1_{[M+1,M+N]}(n+k'q_1)} restrict {n} to the interval {[M+1-kq_1, M+N-k'q_1]}. On the other hand, as a consequence of the Weil conjectures for curves one can show that

\displaystyle  |\sum_{n \in {\bf Z}/q_2{\bf Z}} \chi_2(n+kq_1) \overline{\chi_2(n+k'q_1)} e_{q_2}(an)| \lessapprox q_2^{1/2} (k-k',q_2)^{1/2}

for any {a \in {\bf Z}/q_2{\bf Z}}; indeed one can use the Chinese remainder theorem and the square-free nature of {q_2} to reduce to the case when {q_2} is prime, in which case one can apply (for instance) the original paper of Weil to establish this bound, noting also that {q_1} and {q_2} are coprime since {q} is squarefree. Applying the method of completion of sums (or the Parseval formula), this shows that

\displaystyle  |\sum_n 1_{[M+1,M+N]}(n+kq_1) 1_{[M+1,M+N]}(n+k'q_1) \chi_2(n+kq_1) \overline{\chi_2(n+k'q_1)}|

\displaystyle  \lessapprox q_2^{1/2} (k-k',q_2)^{1/2}.

Summing in {k,k'} (using Lemma 5 from this previous post) we see that the total contribution to the off-diagonal case is

\displaystyle  \lessapprox \frac{N^{1/2}}{K} ( K^2 q_2^{1/2} )^{1/2}

which simplifies to {\lessapprox N^{1/2} q_2^{1/4}}. The claim follows. \Box
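As a toy-scale plausibility check of Proposition 1 (my own illustration; the proposition hides a {q^{o(1)}} factor, so only the order of magnitude is meaningful), one can scan all length-{N} windows of a Jacobi-symbol character and compare against {N^{1/2} q_1^{1/2} + N^{1/2} q_2^{1/4}}:

```python
# Worst incomplete sum of the (primitive) Jacobi character mod q = q1*q2
# over windows of length N, versus the bound of Proposition 1.
def legendre(n, p):
    t = pow(n, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

q1, q2 = 29, 9973                  # q2 much larger than q1, both prime
chi = lambda n: legendre(n, q1) * legendre(n, q2)

q, N = q1 * q2, 2000
partial = [0]                      # running sums of chi(0), chi(1), ...
for n in range(q):
    partial.append(partial[-1] + chi(n))
worst = max(abs(partial[m + N] - partial[m]) for m in range(q - N))
print(worst, N ** 0.5 * q1 ** 0.5 + N ** 0.5 * q2 ** 0.25)
```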

A modification of the above argument (using more complicated versions of the Weil conjectures) allows one to replace the summand {\chi(n)} by more complicated summands such as {\chi(f(n)) e_q(g(n))} for some polynomials or rational functions {f,g} of bounded degree and obeying a suitable non-degeneracy condition (after restricting of course to those {n} for which the arguments {f(n),g(n)} are well-defined). We will not detail this here, but instead turn to the question of estimating slightly longer exponential sums, such as

\displaystyle  \sum_{1 \leq n \leq N} e_{d_1}( \frac{c_1}{n} ) e_{d_2}( \frac{c_2}{n+l} )

where {N} should be thought of as a little bit larger than {(d_1d_2)^{1/2}}.
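Concretely, {e_d(c/n)} here means {e_d(c \overline{n})}, where {\overline{n}} is the inverse of {n} modulo {d}, and {n} is restricted to residues for which both inverses exist. A direct evaluation (a sketch of mine, with arbitrary parameters) looks as follows; square-root cancellation in the length of the sum would correspond to a magnitude of roughly {N^{1/2}}.

```python
# Direct evaluation of sum_{1 <= n <= N} e_{d1}(c1/n) e_{d2}(c2/(n+l)),
# where "/" is division mod d1 (resp. d2), skipping non-invertible n.
import cmath
from math import gcd

def e(a, d):
    """e_d(a) = exp(2*pi*i*a/d)."""
    return cmath.exp(2j * cmath.pi * (a % d) / d)

d1, d2, c1, c2, l = 1009, 1013, 3, 5, 1     # arbitrary choices of mine
N = int((d1 * d2) ** 0.5) + 50              # a bit beyond (d1*d2)^{1/2}
s = sum(e(c1 * pow(n, -1, d1), d1) * e(c2 * pow(n + l, -1, d2), d2)
        for n in range(1, N + 1)
        if gcd(n, d1) == 1 and gcd(n + l, d2) == 1)
print(abs(s), N ** 0.5)   # compare against the square-root-cancellation size
```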


A large portion of analytic number theory is concerned with the distribution of number-theoretic sets such as the primes, or quadratic residues in a certain modulus. At a local level (e.g. on a short interval {[x,x+y]}), the behaviour of these sets may be quite irregular. However, in many cases one can understand the global behaviour of such sets on very large intervals, (e.g. {[1,x]}), with reasonable accuracy (particularly if one assumes powerful additional conjectures, such as the Riemann hypothesis and its generalisations). For instance, in the case of the primes, we have the prime number theorem, which asserts that the number of primes in a large interval {[1,x]} is asymptotically equal to {x/\log x}; in the case of quadratic residues modulo a prime {p}, it is clear that there are exactly {(p-1)/2} such residues in {[1,p]}. With elementary arguments, one can also count statistics such as the number of pairs of consecutive quadratic residues; and with the aid of deeper tools such as the Weil sum estimates, one can count more complex patterns in these residues also (e.g. {k}-point correlations).

One is often interested in converting this sort of “global” information on long intervals into “local” information on short intervals. If one is interested in the behaviour on a generic or average short interval, then the question is still essentially a global one, basically because one can view a long interval as an average of a long sequence of short intervals. (This does not mean that the problem is automatically easy, because not every global statistic about, say, the primes is understood. For instance, we do not know how to rigorously establish the conjectured asymptotic for the number of twin primes {n,n+2} in a long interval {[1,N]}, and so we do not fully understand the local distribution of the primes in a typical short interval {[n,n+2]}.)

However, suppose that instead of understanding the average-case behaviour of short intervals, one wants to control the worst-case behaviour of such intervals (i.e. to establish bounds that hold for all short intervals, rather than most short intervals). Then it becomes substantially harder to convert global information to local information. In many cases one encounters a “square root barrier”, in which global information at scale {x} (e.g. statistics on {[1,x]}) cannot be used to say anything non-trivial about a fixed (and possibly worst-case) short interval at scales {x^{1/2}} or below. (Here we ignore factors of {\log x} for simplicity.) The basic reason for this is that even randomly distributed sets in {[1,x]} (which are basically the most uniform type of global distribution one could hope for) exhibit random fluctuations of size {x^{1/2}} or so in their global statistics (as can be seen for instance from the central limit theorem). Because of this, one could take a random (or pseudorandom) subset of {[1,x]} and delete all the elements in a short interval of length {o(x^{1/2})}, without anything suspicious showing up on the global statistics level; the edited set still has essentially the same global statistics as the original set. On the other hand, the worst-case behaviour of this set on a short interval has been drastically altered.
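This deletion argument is straightforward to simulate; the sketch below (my own illustration, at one fixed scale rather than asymptotically) deletes an interval of length well below {x^{1/2}} from a random subset of {[1,x]} and shows that the change is lost inside the natural {x^{1/2}}-sized fluctuation of the count.

```python
# Delete a short interval from a random subset of [1, x]; the global count
# moves by far less than one standard deviation, so no global statistic
# notices, yet the worst-case local behaviour is now terrible.
import random

random.seed(0)
x = 10 ** 6
S = {n for n in range(1, x + 1) if random.random() < 0.5}
gap = set(range(x // 2, x // 2 + int(x ** 0.5) // 10))  # length sqrt(x)/10
edited = S - gap
st_dev = (x / 4) ** 0.5          # CLT fluctuation of |S| around x/2
print(len(S) - len(edited), st_dev)  # ~50 removed vs ~500 of natural noise
```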

One stark example of this arises when trying to control the largest gap between consecutive prime numbers in a large interval {[x,2x]}. There are convincing heuristics that suggest that this largest gap is of size {O( \log^2 x )} (Cramér’s conjecture). But even assuming the Riemann hypothesis, the best upper bound on this gap is only of size {O( x^{1/2} \log x )}, basically because of this square root barrier. This particular instance of the square root barrier is a significant obstruction to the current polymath project “Finding primes“.

On the other hand, in some cases one can use additional tricks to get past the square root barrier. The key point is that many number-theoretic sequences have special structure that distinguishes them from exactly random sets. For instance, quadratic residues have the basic but fundamental property that the product of two quadratic residues is again a quadratic residue. One can use this sort of structure to amplify bad behaviour in a single short interval into bad behaviour across many short intervals. Because of this amplification, one can sometimes obtain new worst-case bounds by tapping into the average-case bounds.

In this post I would like to indicate a classical example of this type of amplification trick, namely Burgess’s bound on short character sums. To narrow the discussion, I would like to focus primarily on the following classical problem:

Problem 1 What are the best bounds one can place on the first quadratic non-residue {n_p} in the interval {[1,p-1]} for a large prime {p}?

(The first quadratic residue is, of course, {1}; the more interesting problem is the first quadratic non-residue.)

Probabilistic heuristics (presuming that each non-square integer has a 50-50 chance of being a quadratic residue) suggest that {n_p} should have size {O( \log p )}, and indeed Vinogradov conjectured that {n_p = O_\varepsilon(p^\varepsilon)} for any {\varepsilon > 0}. Using the Pólya-Vinogradov inequality, one can get the bound {n_p = O( \sqrt{p} \log p )} (and can improve it to {n_p = O(\sqrt{p})} using smoothed sums); combining this with a sieve theory argument (exploiting the multiplicative nature of quadratic residues) one can boost this to {n_p = O( p^{\frac{1}{2\sqrt{e}}} \log^2 p )}. Inserting Burgess’s amplification trick one can boost this to {n_p = O_\varepsilon( p^{\frac{1}{4\sqrt{e}}+\varepsilon} )} for any {\varepsilon > 0}. Apart from refinements to the {\varepsilon} factor, this bound has stood for five decades as the “world record” for this problem, which is a testament to the difficulty in breaching the square root barrier.
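These bounds are easy to probe empirically; the sketch below (my own choice of primes) computes {n_p} directly and compares it with {\log p} and the Burgess-regime power {p^{1/(4\sqrt{e})}}. In practice {n_p} is tiny, in line with the probabilistic heuristic.

```python
# Least quadratic non-residue n_p versus log(p) and p^{1/(4*sqrt(e))}.
from math import e, log, sqrt

def legendre(n, p):
    t = pow(n, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

def first_nonresidue(p):
    n = 2
    while legendre(n, p) != -1:
        n += 1
    return n

for p in (10007, 1000003, 1000000007):       # arbitrary primes
    print(p, first_nonresidue(p),
          round(log(p), 1), round(p ** (1 / (4 * sqrt(e))), 1))
```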

Note: in order not to obscure the presentation with technical details, I will be using asymptotic notation {O()} in a somewhat informal manner.

