One of the most fundamental principles in Fourier analysis is the uncertainty principle. It does not have a single canonical formulation, but one typical informal description of the principle is that if a function ${f}$ is restricted to a narrow region of physical space, then its Fourier transform ${\hat f}$ must necessarily be “smeared out” over a broad region of frequency space. Some versions of the uncertainty principle are discussed in this previous blog post.

In this post I would like to highlight a useful instance of the uncertainty principle, due to Hugh Montgomery, which is useful in analytic number theory contexts. Specifically, suppose we are given a complex-valued function ${f: {\bf Z} \rightarrow {\bf C}}$ on the integers. To avoid irrelevant issues at spatial infinity, we will assume that the support ${\hbox{supp}(f) := \{ n \in {\bf Z}: f(n) \neq 0 \}}$ of this function is finite (in practice, we will only work with functions that are supported in an interval ${[M+1,M+N]}$ for some natural numbers ${M,N}$). Then we can define the Fourier transform ${\hat f: {\bf R}/{\bf Z} \rightarrow {\bf C}}$ by the formula

$\displaystyle \hat f(\xi) := \sum_{n \in {\bf Z}} f(n) e(-n\xi)$

where ${e(x) := e^{2\pi i x}}$. (In some literature, the sign in the exponential phase is reversed, but this will make no substantial difference to the arguments below.)

The classical uncertainty principle, in this context, asserts that if ${f}$ is localised in an interval of length ${N}$, then ${\hat f}$ must be “smeared out” at a scale of at least ${1/N}$ (and essentially constant at scales less than ${1/N}$). For instance, if ${f}$ is supported in ${[M+1,M+N]}$, then we have the Plancherel identity

$\displaystyle \int_{{\bf R}/{\bf Z}} |\hat f(\xi)|^2\ d\xi = \| f \|_{\ell^2({\bf Z})}^2 \ \ \ \ \ (1)$

while from the Cauchy-Schwarz inequality we have

$\displaystyle |\hat f(\xi)| \leq N^{1/2} \| f \|_{\ell^2({\bf Z})}$

for each frequency ${\xi}$, and in particular that

$\displaystyle \int_I |\hat f(\xi)|^2\ d\xi \leq N |I| \| f \|_{\ell^2({\bf Z})}^2$

for any arc ${I}$ in the unit circle (with ${|I|}$ denoting the length of ${I}$). In particular, an interval of length significantly less than ${1/N}$ can only capture a small fraction of the ${L^2}$ energy of the Fourier transform ${\hat f}$, which is consistent with the above informal statement of the uncertainty principle.
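These two facts are easy to check numerically. The following Python sketch (the support parameters, the random coefficients, and the grid size `T` are all arbitrary illustrative choices, not part of the discussion above) verifies the Plancherel identity (1), via a Riemann sum that is exact because the support of ${f}$ has diameter less than the number of sample points, together with the pointwise Cauchy-Schwarz bound:

```python
import cmath
import math
import random

def fhat(f, xi):
    """Fourier transform \\hat f(xi) = sum_n f(n) e(-n xi), where e(x) = exp(2 pi i x)."""
    return sum(c * cmath.exp(-2j * math.pi * n * xi) for n, c in f.items())

random.seed(0)
M, N = 10, 8  # f supported in [M+1, M+N]; illustrative sizes
f = {M + 1 + k: complex(random.gauss(0, 1), random.gauss(0, 1)) for k in range(N)}
energy = sum(abs(c) ** 2 for c in f.values())  # || f ||_{l^2(Z)}^2

# Plancherel (1): a Riemann sum for int_{R/Z} |\hat f(xi)|^2 d xi recovers the
# l^2 energy exactly, since the support diameter of f is smaller than T.
T = 1024
integral = sum(abs(fhat(f, t / T)) ** 2 for t in range(T)) / T
assert abs(integral - energy) < 1e-8 * energy

# Cauchy-Schwarz: |\hat f(xi)| <= N^{1/2} ||f||_{l^2} at every frequency
assert all(abs(fhat(f, t / T)) <= math.sqrt(N * energy) + 1e-9 for t in range(T))
```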

Another manifestation of the classical uncertainty principle is the large sieve inequality. A particularly nice formulation of this inequality is due independently to Montgomery and Vaughan, and to Selberg: if ${f}$ is supported in ${[M+1,M+N]}$, and ${\xi_1,\ldots,\xi_J}$ are frequencies in ${{\bf R}/{\bf Z}}$ that are ${\delta}$-separated for some ${\delta>0}$, thus ${\| \xi_i-\xi_j\|_{{\bf R}/{\bf Z}} \geq \delta}$ for all ${1 \leq i < j \leq J}$ (where ${\|\xi\|_{{\bf R}/{\bf Z}}}$ denotes the distance of ${\xi}$ to the origin in ${{\bf R}/{\bf Z}}$), then

$\displaystyle \sum_{j=1}^J |\hat f(\xi_j)|^2 \leq (N + \frac{1}{\delta}) \| f \|_{\ell^2({\bf Z})}^2. \ \ \ \ \ (2)$

The reader is encouraged to see how this inequality is consistent with the Plancherel identity (1) and the intuition that ${\hat f}$ is essentially constant at scales less than ${1/N}$. The factor ${N + \frac{1}{\delta}}$ can in fact be amplified a little bit to ${N + \frac{1}{\delta} - 1}$, which is essentially optimal, by using a neat dilation trick of Paul Cohen, in which one dilates ${[M+1,M+N]}$ to ${[MK+K, MK+NK]}$ (and replaces each frequency ${\xi_j}$ by its ${K^{th}}$ roots), and then sends ${K \rightarrow \infty}$ (cf. the tensor product trick); see this survey of Montgomery for details. But we will not need this refinement here.
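The large sieve inequality (2) can likewise be checked numerically. In the Python sketch below, the interval, the random coefficients, and the particular ${\delta}$-separated set of frequencies are all arbitrary illustrative choices:

```python
import cmath
import math
import random

def fhat(f, xi):
    # \hat f(xi) = sum_n f(n) e(-n xi), e(x) = exp(2 pi i x)
    return sum(c * cmath.exp(-2j * math.pi * n * xi) for n, c in f.items())

random.seed(1)
M, N = 100, 20  # f supported in [M+1, M+N]
f = {M + 1 + k: complex(random.gauss(0, 1), random.gauss(0, 1)) for k in range(N)}
energy = sum(abs(c) ** 2 for c in f.values())

# frequencies that are delta-separated on the circle R/Z
# (pairwise circle distances, including the wrap-around 0.95 to 0.05, exceed delta)
delta = 0.03
xis = [0.05, 0.09, 0.20, 0.50, 0.77, 0.95]

# large sieve bound (2): sum_j |\hat f(xi_j)|^2 <= (N + 1/delta) ||f||^2
lhs = sum(abs(fhat(f, xi)) ** 2 for xi in xis)
assert lhs <= (N + 1 / delta) * energy
```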

In the above instances of the uncertainty principle, the concept of narrow support in physical space was formalised in the Archimedean sense, using the standard Archimedean metric ${d_\infty(n,m) := |n-m|}$ on the integers ${{\bf Z}}$ (in particular, the parameter ${N}$ is essentially the Archimedean diameter of the support of ${f}$). However, in number theory, the Archimedean metric is not the only metric of importance on the integers; the ${p}$-adic metrics play an equally important role; indeed, it is common to combine the Archimedean and ${p}$-adic perspectives into a unified adelic perspective. In the ${p}$-adic world, the metric balls are no longer intervals, but are instead residue classes modulo some power of ${p}$. Intersecting these balls from different ${p}$-adic metrics together, we obtain residue classes with respect to various moduli ${q}$ (which may be either prime or composite). As such, another natural manifestation of the concept of “narrow support in physical space” is “vanishes on many residue classes modulo ${q}$”. This notion of narrowness is particularly common in sieve theory, when one deals with functions supported on thin sets such as the primes, which naturally tend to avoid many residue classes (particularly if one throws away the first few primes).

In this context, the uncertainty principle is this: the more residue classes modulo ${q}$ that ${f}$ avoids, the more the Fourier transform ${\hat f}$ must spread out along multiples of ${1/q}$. To illustrate a very simple example of this principle, let us take ${q=2}$, and suppose that ${f}$ is supported only on odd numbers (thus it completely avoids the residue class ${0 \mod 2}$). We write out the formulae for ${\hat f(\xi)}$ and ${\hat f(\xi+1/2)}$:

$\displaystyle \hat f(\xi) = \sum_n f(n) e(-n\xi)$

$\displaystyle \hat f(\xi+1/2) = \sum_n f(n) e(-n\xi) e(-n/2).$

If ${f}$ is supported on the odd numbers, then ${e(-n/2)}$ is always equal to ${-1}$ on the support of ${f}$, and so we have ${\hat f(\xi+1/2)=-\hat f(\xi)}$. Thus, whenever ${\hat f}$ has a significant presence at a frequency ${\xi}$, it must also have an equally significant presence at the frequency ${\xi+1/2}$; there is a spreading out across multiples of ${1/2}$. Note that one has a similar effect if ${f}$ is instead supported on the even integers, in which case ${e(-n/2)}$ equals ${+1}$ on the support and ${\hat f(\xi+1/2)=\hat f(\xi)}$.
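This identity is easy to confirm numerically; in the following Python sketch, the odd support and the sample frequencies are arbitrary illustrative choices:

```python
import cmath
import math
import random

def fhat(f, xi):
    # \hat f(xi) = sum_n f(n) e(-n xi)
    return sum(c * cmath.exp(-2j * math.pi * n * xi) for n, c in f.items())

random.seed(2)
# f supported on the odd integers only
f = {n: complex(random.gauss(0, 1), random.gauss(0, 1)) for n in range(1, 40, 2)}

# e(-n/2) = (-1)^n = -1 on the support, so \hat f(xi + 1/2) = -\hat f(xi)
for xi in (0.0, 0.123, 0.456, 0.789):
    assert abs(fhat(f, xi + 0.5) + fhat(f, xi)) < 1e-9
```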

A little more generally, suppose now that ${f}$ avoids a single residue class modulo a prime ${p}$; for sake of argument let us say that it avoids the zero residue class ${0 \mod p}$, although the situation for the other residue classes is similar. For any frequency ${\xi}$ and any ${j=0,\ldots,p-1}$, one has

$\displaystyle \hat f(\xi+j/p) = \sum_n f(n) e(-n\xi) e(-jn/p).$

From basic Fourier analysis, we know that the phases ${e(-jn/p)}$ sum to zero as ${j}$ ranges from ${0}$ to ${p-1}$ whenever ${n}$ is not a multiple of ${p}$. We thus have

$\displaystyle \sum_{j=0}^{p-1} \hat f(\xi+j/p) = 0.$

In particular, if ${\hat f(\xi)}$ is large, then one of the other ${\hat f(\xi+j/p)}$ has to be somewhat large as well; using the Cauchy-Schwarz inequality, we can quantify this assertion in an ${\ell^2}$ sense via the inequality

$\displaystyle \sum_{j=1}^{p-1} |\hat f(\xi+j/p)|^2 \geq \frac{1}{p-1} |\hat f(\xi)|^2. \ \ \ \ \ (3)$
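Both the vanishing of the sum of the ${p}$ shifted values and the resulting ${\ell^2}$ spreading inequality (3) can be checked numerically; in this Python sketch, the prime ${p}$, the support, and the test frequencies are arbitrary illustrative choices:

```python
import cmath
import math
import random

def fhat(f, xi):
    # \hat f(xi) = sum_n f(n) e(-n xi)
    return sum(c * cmath.exp(-2j * math.pi * n * xi) for n, c in f.items())

random.seed(3)
p = 7
# f avoids the residue class 0 mod p
f = {n: complex(random.gauss(0, 1), random.gauss(0, 1)) for n in range(1, 60) if n % p != 0}

for xi in (0.0, 0.31, 0.642):
    shifts = [fhat(f, xi + j / p) for j in range(p)]
    # the p shifted values sum to zero, since f vanishes on the multiples of p
    assert abs(sum(shifts)) < 1e-8
    # ...which forces the l^2 spreading inequality (3) via Cauchy-Schwarz
    assert sum(abs(z) ** 2 for z in shifts[1:]) >= abs(shifts[0]) ** 2 / (p - 1) - 1e-9
```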

Let us continue this analysis a bit further. Now suppose that ${f}$ avoids ${\omega(p)}$ residue classes modulo a prime ${p}$, for some ${0 \leq \omega(p) < p}$. (We exclude the case ${\omega(p)=p}$, as it clearly degenerates by forcing ${f}$ to be identically zero.) Let ${\nu(n)}$ be the function that equals ${1}$ on these residue classes and ${0}$ elsewhere; since ${f}$ vanishes on the support of ${\nu}$, we have

$\displaystyle \sum_n f(n) e(-n\xi) \nu(n) = 0.$

Using the periodic Fourier transform, we can write

$\displaystyle \nu(n) = \sum_{j=0}^{p-1} c(j) e(-jn/p)$

for some coefficients ${c(j)}$, thus

$\displaystyle \sum_{j=0}^{p-1} \hat f(\xi+j/p) c(j) = 0.$

Some Fourier-analytic computations reveal that

$\displaystyle c(0) = \frac{\omega(p)}{p}$

and

$\displaystyle \sum_{j=0}^{p-1} |c(j)|^2 = \frac{\omega(p)}{p}$

and so after some routine algebra and the Cauchy-Schwarz inequality, we obtain a generalisation of (3):

$\displaystyle \sum_{j=1}^{p-1} |\hat f(\xi+j/p)|^2 \geq \frac{\omega(p)}{p-\omega(p)} |\hat f(\xi)|^2.$

Thus we see that the more residue classes mod ${p}$ we exclude, the more Fourier energy has to disperse along multiples of ${1/p}$. It is also instructive to consider the extreme case ${\omega(p)=p-1}$, in which ${f}$ is supported on just a single residue class ${a \mod p}$; in this case, one clearly has ${\hat f(\xi+j/p) = e(-aj/p) \hat f(\xi)}$, and so ${\hat f}$ spreads its energy completely evenly along multiples of ${1/p}$.
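The generalised inequality above can also be confirmed numerically. In this Python sketch, the prime ${p}$, the choice of ${\omega(p)}$ omitted classes (here simply the classes ${0,\ldots,\omega(p)-1}$), and the test frequencies are arbitrary illustrative choices:

```python
import cmath
import math
import random

def fhat(f, xi):
    # \hat f(xi) = sum_n f(n) e(-n xi)
    return sum(c * cmath.exp(-2j * math.pi * n * xi) for n, c in f.items())

random.seed(4)
p, omega = 7, 3
# f avoids omega residue classes mod p (here the classes 0, 1, 2)
f = {n: complex(random.gauss(0, 1), random.gauss(0, 1)) for n in range(1, 80) if n % p >= omega}

# check: sum_{j=1}^{p-1} |\hat f(xi + j/p)|^2 >= omega/(p - omega) * |\hat f(xi)|^2
for xi in (0.0, 0.217, 0.58):
    shifts = [fhat(f, xi + j / p) for j in range(p)]
    lhs = sum(abs(z) ** 2 for z in shifts[1:])
    assert lhs >= (omega / (p - omega)) * abs(shifts[0]) ** 2 - 1e-9
```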

In 1968, Montgomery observed the following useful generalisation of the above calculation to arbitrary modulus:

Proposition 1 (Montgomery’s uncertainty principle) Let ${f: {\bf Z} \rightarrow{\bf C}}$ be a finitely supported function which, for each prime ${p}$, avoids ${\omega(p)}$ residue classes modulo ${p}$ for some ${0 \leq \omega(p) < p}$. Then for each natural number ${q}$, one has

$\displaystyle \sum_{1 \leq a \leq q: (a,q)=1} |\hat f(\xi+\frac{a}{q})|^2 \geq h(q) |\hat f(\xi)|^2$

where ${h(q)}$ is the quantity

$\displaystyle h(q) := \mu(q)^2 \prod_{p|q} \frac{\omega(p)}{p-\omega(p)} \ \ \ \ \ (4)$

where ${\mu}$ is the Möbius function.

We give a proof of this proposition below the fold.
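Proposition 1 can also be sanity-checked numerically for a composite modulus. In the Python sketch below (the modulus ${q=15}$, the particular omitted classes, and the support are arbitrary illustrative choices), ${f}$ avoids ${\omega(3)=1}$ class mod ${3}$ and ${\omega(5)=2}$ classes mod ${5}$, so that (4) gives ${h(15) = \frac{1}{2} \cdot \frac{2}{3} = \frac{1}{3}}$:

```python
import cmath
import math
import random
from math import gcd

def fhat(f, xi):
    # \hat f(xi) = sum_n f(n) e(-n xi)
    return sum(c * cmath.exp(-2j * math.pi * n * xi) for n, c in f.items())

random.seed(5)
# f avoids the class 0 mod 3 and the classes 0, 1 mod 5 (arbitrary choices)
f = {n: complex(random.gauss(0, 1), random.gauss(0, 1))
     for n in range(1, 200) if n % 3 != 0 and n % 5 not in (0, 1)}

q = 15  # squarefree, so by (4): h(15) = (1/(3-1)) * (2/(5-2)) = 1/3
h_q = (1 / 2) * (2 / 3)

# check: sum over primitive residues a mod q of |\hat f(xi + a/q)|^2 >= h(q) |\hat f(xi)|^2
for xi in (0.0, 0.217):
    lhs = sum(abs(fhat(f, xi + a / q)) ** 2 for a in range(1, q + 1) if gcd(a, q) == 1)
    assert lhs >= h_q * abs(fhat(f, xi)) ** 2 - 1e-9
```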

Following the “adelic” philosophy, it is natural to combine this uncertainty principle with the large sieve inequality to take simultaneous advantage of localisation both in the Archimedean sense and in the ${p}$-adic senses. This leads to the following corollary:

Corollary 2 (Arithmetic large sieve inequality) Let ${f: {\bf Z} \rightarrow {\bf C}}$ be a function supported on an interval ${[M+1,M+N]}$ which, for each prime ${p}$, avoids ${\omega(p)}$ residue classes modulo ${p}$ for some ${0 \leq \omega(p) < p}$. Let ${\xi_1,\ldots,\xi_J \in {\bf R}/{\bf Z}}$, and let ${{\mathcal Q}}$ be a finite set of natural numbers. Suppose that the frequencies ${\xi_j + \frac{a}{q}}$ with ${1 \leq j \leq J}$, ${q \in {\mathcal Q}}$, and ${(a,q)=1}$ are ${\delta}$-separated. Then one has

$\displaystyle \sum_{j=1}^J |\hat f(\xi_j)|^2 \leq \frac{N + \frac{1}{\delta}}{\sum_{q \in {\mathcal Q}} h(q)} \| f \|_{\ell^2({\bf Z})}^2$

where ${h(q)}$ was defined in (4).

Indeed, from the large sieve inequality one has

$\displaystyle \sum_{j=1}^J \sum_{q \in {\mathcal Q}} \sum_{1 \leq a \leq q: (a,q)=1} |\hat f(\xi_j+\frac{a}{q})|^2 \leq (N + \frac{1}{\delta}) \| f \|_{\ell^2({\bf Z})}^2$

while from Proposition 1 one has

$\displaystyle \sum_{q \in {\mathcal Q}} \sum_{1 \leq a \leq q: (a,q)=1} |\hat f(\xi_j+\frac{a}{q})|^2 \geq (\sum_{q \in {\mathcal Q}} h(q)) |\hat f(\xi_j)|^2$

whence the claim.

There is a great deal of flexibility in the above inequality, due to the ability to select the set ${{\mathcal Q}}$, the frequencies ${\xi_1,\ldots,\xi_J}$, the omitted classes ${\omega(p)}$, and the separation parameter ${\delta}$. Here is a typical application concerning the original motivation for the large sieve inequality, namely in bounding the size of sets which avoid many residue classes:

Corollary 3 (Large sieve) Let ${A}$ be a set of integers contained in ${[M+1,M+N]}$ which avoids ${\omega(p)}$ residue classes modulo ${p}$ for each prime ${p}$, and let ${R>0}$. Then

$\displaystyle |A| \leq \frac{N+R^2}{G(R)}$

where

$\displaystyle G(R) := \sum_{q \leq R} h(q).$

Proof: We apply Corollary 2 with ${f = 1_A}$, ${J=1}$, ${\delta=1/R^2}$, ${\xi_1=0}$, and ${{\mathcal Q} := \{ q: q \leq R\}}$. The key point is that the Farey sequence of fractions ${a/q}$ with ${q \leq R}$ and ${(a,q)=1}$ is ${1/R^2}$-separated, since

$\displaystyle \| \frac{a}{q}-\frac{a'}{q'} \|_{{\bf R}/{\bf Z}} \geq \frac{1}{qq'} \geq \frac{1}{R^2}$

whenever ${a/q, a'/q'}$ are distinct fractions in this sequence. $\Box$
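As with the previous results, Corollary 3 can be checked numerically. In this Python sketch (the sizes ${M,N,R}$ are arbitrary and not optimised), we take ${A}$ to be the primes in ${[M+1,M+N]}$ exceeding ${R}$; such a set avoids the class ${0 \mod p}$ for every prime ${p \leq R}$, so one may take ${\omega(p)=1}$ there, and (4) then gives ${h(q) = \mu(q)^2/\phi(q)}$:

```python
import math

def primes_upto(x):
    """Sieve of Eratosthenes: all primes up to x."""
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(x) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def h(q):
    """h(q) = mu(q)^2 / phi(q), the case omega(p) = 1 at every prime p | q."""
    phi, m, p = 1, q, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0.0  # q is not squarefree, so mu(q)^2 = 0
            phi *= p - 1
        p += 1
    if m > 1:
        phi *= m - 1
    return 1.0 / phi

M, N, R = 1000, 1000, 30  # illustrative sizes, not tuned
# A = primes in [M+1, M+N] larger than R; avoids 0 mod p for each prime p <= R
A = [p for p in primes_upto(M + N) if M + 1 <= p and p > R]
G = sum(h(q) for q in range(1, R + 1))  # G(R) = sum_{q <= R} h(q)

assert G >= math.log(R)           # consistent with the classical G(R) = log R + O(1)
assert len(A) <= (N + R * R) / G  # the bound of Corollary 3
```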

If, for instance, ${A}$ is the set of all primes in ${[M+1,M+N]}$ larger than ${R}$, then one can set ${\omega(p)=1}$ for all ${p \leq R}$, which makes ${h(q) = \frac{\mu^2(q)}{\phi(q)}}$, where ${\phi}$ is the Euler totient function. It is a classical estimate that

$\displaystyle G(R) = \log R + O(1).$

Using this fact and optimising in ${R}$, we obtain (a special case of) the Brun-Titchmarsh inequality

$\displaystyle \pi(M+N)-\pi(M) \leq (2+o_{N \rightarrow \infty}(1)) \frac{N}{\log N}$

where ${\pi(x)}$ is the prime counting function; a variant of the same argument gives the more general Brun-Titchmarsh inequality

$\displaystyle \pi(M+N;a,q)-\pi(M;a,q) \leq (2+o_{N \rightarrow \infty;q}(1)) \frac{q}{\phi(q)} \frac{N}{\log N}$

for any primitive residue class ${a \mod q}$, where ${\pi(M;a,q)}$ is the number of primes less than or equal to ${M}$ that are congruent to ${a \mod q}$. By performing a more careful optimisation using a slightly sharper version of the large sieve inequality (2) that exploits the irregular spacing of the Farey sequence, Montgomery and Vaughan were able to delete the error term in the Brun-Titchmarsh inequality, thus establishing the very nice inequality

$\displaystyle \pi(M+N;a,q)-\pi(M;a,q) \leq 2 \frac{q}{\phi(q)} \frac{N}{\log N}$

for any natural numbers ${M,N,a,q}$ with ${N>1}$. This is a particularly useful inequality in non-asymptotic analytic number theory (when one wishes to study number theory at explicit orders of magnitude, rather than the number theory of sufficiently large numbers), due to the absence of asymptotic notation.

I recently realised that Corollary 2 also establishes a stronger version of the “restriction theorem for the Selberg sieve” that Ben Green and I proved some years ago (indeed, one can view Corollary 2 as a “restriction theorem for the large sieve”). I’m placing the details below the fold.