
In contrast to previous notes, in this set of notes we shall focus exclusively on Fourier analysis in the one-dimensional setting {d=1} for simplicity of notation, although all of the results here have natural extensions to higher dimensions. Depending on the physical context, one can view the physical domain {{\bf R}} as representing either space or time; we will mostly think in terms of the former interpretation, even though the standard terminology of “time-frequency analysis”, which we will make more prominent use of in later notes, clearly originates from the latter.

In previous notes we have often performed various localisations in either physical space or Fourier space {{\bf R}}, for instance in order to take advantage of the uncertainty principle. One can formalise these operations in terms of the functional calculus of two basic operations on Schwartz functions {{\mathcal S}({\bf R})}, the position operator {X: {\mathcal S}({\bf R}) \rightarrow {\mathcal S}({\bf R})} defined by

\displaystyle  (Xf)(x) := x f(x)

and the momentum operator {D: {\mathcal S}({\bf R}) \rightarrow {\mathcal S}({\bf R})}, defined by

\displaystyle  (Df)(x) := \frac{1}{2\pi i} \frac{d}{dx} f(x). \ \ \ \ \ (1)

(The terminology comes from quantum mechanics, where it is customary to also insert a small constant {h} on the right-hand side of (1) in accordance with de Broglie’s law. Such a normalisation is also used in several branches of mathematics, most notably semiclassical analysis and microlocal analysis, where it becomes profitable to consider the semiclassical limit {h \rightarrow 0}, but we will not emphasise this perspective here.) The momentum operator can be viewed as the counterpart to the position operator, but in frequency space instead of physical space, since we have the standard identity

\displaystyle  \widehat{Df}(\xi) = \xi \hat f(\xi)

for any {\xi \in {\bf R}} and {f \in {\mathcal S}({\bf R})}. We observe that both operators {X,D} are formally self-adjoint in the sense that

\displaystyle  \langle Xf, g \rangle = \langle f, Xg \rangle; \quad \langle Df, g \rangle = \langle f, Dg \rangle

for all {f,g \in {\mathcal S}({\bf R})}, where we use the {L^2({\bf R})} Hermitian inner product

\displaystyle  \langle f, g\rangle := \int_{\bf R} f(x) \overline{g(x)}\ dx.

Clearly, for any polynomial {P(x)} of one real variable {x} (with complex coefficients), the operator {P(X): {\mathcal S}({\bf R}) \rightarrow {\mathcal S}({\bf R})} is given by the spatial multiplier operator

\displaystyle  (P(X) f)(x) = P(x) f(x)

and similarly the operator {P(D): {\mathcal S}({\bf R}) \rightarrow {\mathcal S}({\bf R})} is given by the Fourier multiplier operator

\displaystyle  \widehat{P(D) f}(\xi) = P(\xi) \hat f(\xi).

Inspired by this, if {m: {\bf R} \rightarrow {\bf C}} is any smooth function that obeys the derivative bounds

\displaystyle  \left|\frac{d^j}{dx^j} m(x)\right| \lesssim_{m,j} \langle x \rangle^{O_{m,j}(1)} \ \ \ \ \ (2)

for all {j \geq 0} and {x \in {\bf R}} (that is to say, all derivatives of {m} grow at most polynomially), then we can define the spatial multiplier operator {m(X): {\mathcal S}({\bf R}) \rightarrow {\mathcal S}({\bf R})} by the formula

\displaystyle  (m(X) f)(x) := m(x) f(x);

one can easily verify from several applications of the Leibniz rule that {m(X)} maps Schwartz functions to Schwartz functions. We refer to {m(x)} as the symbol of this spatial multiplier operator. In a similar fashion, we define the Fourier multiplier operator {m(D)} associated to the symbol {m(\xi)} by the formula

\displaystyle  \widehat{m(D) f}(\xi) := m(\xi) \hat f(\xi).

For instance, any constant coefficient linear differential operator {\sum_{k=0}^n c_k \frac{d^k}{dx^k}} can be written in this notation as

\displaystyle \sum_{k=0}^n c_k \frac{d^k}{dx^k} =\sum_{k=0}^n c_k (2\pi i D)^k;

however there are many Fourier multiplier operators that are not of this form, such as fractional derivative operators {\langle D \rangle^s = (1- \frac{1}{4\pi^2} \frac{d^2}{dx^2})^{s/2}} for non-integer values of {s}, which are Fourier multiplier operators with symbol {\langle \xi \rangle^s}. It is also very common to use spatial cutoffs {\psi(X)} and Fourier cutoffs {\psi(D)} for various bump functions {\psi} to localise functions in either space or frequency; we have seen several examples of such cutoffs in action in previous notes (often in the higher dimensional setting {d>1}).
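As a concrete illustration (not part of the notes proper), here is a small numerical sketch of a Fourier multiplier in action. It approximates the Fourier transform by a Riemann sum on a truncated grid, which is a crude but adequate discretisation for rapidly decreasing functions such as Gaussians; the grid sizes are arbitrary choices. We apply the fractional derivative {\langle D \rangle^s} to the Gaussian {e^{-\pi x^2}}, after first confirming that the discretised transform reproduces the self-duality of the Gaussian.

```python
import numpy as np

# Truncated grid for physical space; reuse it for frequency space.
N, L = 1024, 20.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
xi = x.copy()

f = np.exp(-np.pi * x**2)                        # Gaussian: equal to its own transform

# Riemann-sum Fourier transform  \hat f(xi) = \int f(x) e^{-2 pi i x xi} dx
F = np.exp(-2j * np.pi * np.outer(xi, x)) * dx
fhat = F @ f
print(np.abs(fhat - np.exp(-np.pi * xi**2)).max())   # ~1e-14: self-duality of the Gaussian

# Fourier multiplier m(D) with symbol m(xi) = <xi>^s  (fractional derivative)
s = 0.5
m = (1 + xi**2) ** (s / 2)
Finv = np.exp(2j * np.pi * np.outer(x, xi)) * dx
g = Finv @ (m * fhat)                            # <D>^s f, up to discretisation error
print(np.abs(g.imag).max())                      # essentially real, as f is real and even
```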

We observe that the maps {m \mapsto m(X)} and {m \mapsto m(D)} are ring homomorphisms, thus for instance

\displaystyle  (m_1 + m_2)(D) = m_1(D) + m_2(D)

and

\displaystyle  (m_1 m_2)(D) = m_1(D) m_2(D)

for any {m_1,m_2} obeying the derivative bounds (2); also {m(D)} is formally adjoint to {\overline{m}(D)} in the sense that

\displaystyle  \langle m(D) f, g \rangle = \langle f, \overline{m}(D) g \rangle

for {f,g \in {\mathcal S}({\bf R})}, and similarly for {m(X)} and {\overline{m}(X)}. One can interpret these facts as part of the functional calculus of the operators {X,D}, which can be viewed as densely defined self-adjoint operators on {L^2({\bf R})}. However, in this set of notes we will not develop the spectral theory necessary in order to fully set out this functional calculus rigorously.

In the field of PDE and ODE, it is also very common to study variable coefficient linear differential operators

\displaystyle  \sum_{k=0}^n c_k(x) \frac{d^k}{dx^k} \ \ \ \ \ (3)

where the {c_0,\dots,c_n} are now functions of the spatial variable {x} obeying the derivative bounds (2). A simple example is the quantum harmonic oscillator Hamiltonian {-\frac{d^2}{dx^2} + x^2}. One can rewrite the general operator (3) in our notation as

\displaystyle  \sum_{k=0}^n c_k(X) (2\pi i D)^k

and so it is natural to interpret this operator as a combination {a(X,D)} of both the position operator {X} and the momentum operator {D}, where the symbol {a: {\bf R} \times {\bf R} \rightarrow {\bf C}} of this operator is the function

\displaystyle  a(x,\xi) := \sum_{k=0}^n c_k(x) (2\pi i \xi)^k. \ \ \ \ \ (4)

Indeed, from the Fourier inversion formula

\displaystyle  f(x) = \int_{\bf R} \hat f(\xi) e^{2\pi i x \xi}\ d\xi

for any {f \in {\mathcal S}({\bf R})} we have

\displaystyle  (2\pi i D)^k f(x) = \int_{\bf R} (2\pi i \xi)^k \hat f(\xi) e^{2\pi i x \xi}\ d\xi

and hence on multiplying by {c_k(x)} and summing we have

\displaystyle (\sum_{k=0}^n c_k(X) (2\pi i D)^k) f(x) = \int_{\bf R} a(x,\xi) \hat f(\xi) e^{2\pi i x \xi}\ d\xi.

Inspired by this, we can introduce the Kohn-Nirenberg quantisation by defining the operator {a(X,D) = a_{KN}(X,D): {\mathcal S}({\bf R}) \rightarrow {\mathcal S}({\bf R})} by the formula

\displaystyle  a(X,D) f(x) = \int_{\bf R} a(x,\xi) \hat f(\xi) e^{2\pi i x \xi}\ d\xi \ \ \ \ \ (5)

whenever {f \in {\mathcal S}({\bf R})} and {a: {\bf R} \times {\bf R} \rightarrow {\bf C}} is any smooth function obeying the derivative bounds

\displaystyle  \left|\frac{\partial^j}{\partial x^j} \frac{\partial^l}{\partial \xi^l} a(x,\xi)\right| \lesssim_{a,j,l} \langle x \rangle^{O_{a,j}(1)} \langle \xi \rangle^{O_{a,j,l}(1)} \ \ \ \ \ (6)

for all {j,l \geq 0} and {x,\xi \in {\bf R}} (note carefully that the exponent in {x} on the right-hand side is required to be uniform in {l}). This quantisation clearly generalises both the spatial multiplier operators {m(X)} and the Fourier multiplier operators {m(D)} defined earlier, which correspond to the cases when the symbol {a(x,\xi)} is a function of {x} only or {\xi} only respectively. Thus we have combined the physical space {{\bf R} = \{ x: x \in {\bf R}\}} and the frequency space {{\bf R} = \{ \xi: \xi \in {\bf R}\}} into a single domain, known as phase space {{\bf R} \times {\bf R} = \{ (x,\xi): x,\xi \in {\bf R} \}}. The term “time-frequency analysis” encompasses analysis based on decompositions and other manipulations of phase space, in much the same way that “Fourier analysis” encompasses analysis based on decompositions and other manipulations of frequency space. We remark that the Kohn-Nirenberg quantization is not the only choice of quantization one could use; see Remark 19 below.
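To see the quantisation (5) in action, here is a hedged numerical sketch in the same Riemann-sum spirit as before (again with arbitrary discretisation parameters, not part of the notes proper). We quantise the symbol {a(x,\xi) = x^2 + \xi^2} of the rescaled harmonic oscillator {X^2 + D^2}, and check that the Gaussian {e^{-\pi x^2}} is an eigenfunction with eigenvalue {\frac{1}{2\pi}}, as one can verify by hand from {D^2 e^{-\pi x^2} = (\frac{1}{2\pi} - x^2) e^{-\pi x^2}}.

```python
import numpy as np

N, L = 1024, 20.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
xi = x.copy()

f = np.exp(-np.pi * x**2)
fhat = (np.exp(-2j * np.pi * np.outer(xi, x)) * dx) @ f     # \hat f(xi)

# Kohn-Nirenberg quantisation (5):
#   a(X,D) f(x) = \int a(x,xi) \hat f(xi) e^{2 pi i x xi} d xi
a = np.add.outer(x**2, xi**2)                               # a(x,xi) = x^2 + xi^2
g = (a * np.exp(2j * np.pi * np.outer(x, xi)) * dx) @ fhat

# The Gaussian is the ground state: (X^2 + D^2) f = f / (2 pi).
print(np.abs(g - f / (2 * np.pi)).max())                    # ~1e-13
```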

Exercise 1

  • (i) Show that for {a} obeying (6), that {a(X,D)} does indeed map {{\mathcal S}({\bf R})} to {{\mathcal S}({\bf R})}.
  • (ii) Show that the symbol {a} is uniquely determined by the operator {a(X,D)}. That is to say, if {a,b} are two functions obeying (6) with {a(X,D) f = b(X,D) f} for all {f \in {\mathcal S}({\bf R})}, then {a=b}. (Hint: apply {a(X,D)-b(X,D)} to a suitable truncation of a plane wave {x \mapsto e^{2\pi i x \xi}} and then take limits.)

In principle, the quantisations {a(X,D)} are potentially very useful for such tasks as inverting variable coefficient linear operators, or to localize a function simultaneously in physical and Fourier space. However, a fundamental difficulty arises: the map from symbols {a} to operators {a(X,D)} is no longer a ring homomorphism; in particular

\displaystyle  (a_1 a_2)(X,D) \neq a_1(X,D) a_2(X,D) \ \ \ \ \ (7)

in general. Fundamentally, this is because pointwise multiplication of symbols is a commutative operation, whereas operators such as {X} and {D} need not commute under composition. This lack of commutativity can be measured by introducing the commutator

\displaystyle  [A,B] := AB - BA

of two operators {A,B}, and noting from the product rule that

\displaystyle  [X,D] = -\frac{1}{2\pi i} \neq 0.

(In the language of Lie groups and Lie algebras, this tells us that {X,D} are (up to complex constants) the standard Lie algebra generators of the Heisenberg group.) From a quantum mechanical perspective, this lack of commutativity is the root cause of the uncertainty principle that prevents one from simultaneously localizing in both position and momentum past a certain point. Here is one basic way of formalising this principle:

Exercise 2 (Heisenberg uncertainty principle) For any {x_0, \xi_0 \in {\bf R}} and {f \in \mathcal{S}({\bf R})}, show that

\displaystyle  \| (X-x_0) f \|_{L^2({\bf R})} \| (D-\xi_0) f\|_{L^2({\bf R})} \geq \frac{1}{4\pi} \|f\|_{L^2({\bf R})}^2.

(Hint: evaluate the expression {\langle [X-x_0, D - \xi_0] f, f \rangle} in two different ways and apply the Cauchy-Schwarz inequality.) Informally, this exercise asserts that the spatial uncertainty {\Delta x} and the frequency uncertainty {\Delta \xi} of a function obey the Heisenberg uncertainty relation {\Delta x \Delta \xi \gtrsim 1}.
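One can also test the inequality numerically. The following sketch (same crude Riemann-sum discretisation as above, and only a sanity check, not a proof) treats the case {x_0 = \xi_0 = 0}: the Gaussian {e^{-\pi x^2}} attains equality, while a generic perturbation of it does not.

```python
import numpy as np

N, L = 1024, 20.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
xi = x.copy()

F = np.exp(-2j * np.pi * np.outer(xi, x)) * dx     # forward transform
Finv = np.exp(2j * np.pi * np.outer(x, xi)) * dx   # inverse transform
l2 = lambda u: np.sqrt((np.abs(u)**2).sum() * dx)  # discretised L^2 norm

gaussian = np.exp(-np.pi * x**2)                   # extremiser: equality holds
bumpy = gaussian * (1 + np.sin(3 * x))             # generic function: strict inequality
for f in (gaussian, bumpy):
    Df = Finv @ (xi * (F @ f))                     # D f, computed via its symbol xi
    print(l2(x * f) * l2(Df), ">=", l2(f)**2 / (4 * np.pi))
```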

Nevertheless, one still has the correspondence principle, which asserts that in certain regimes (which, with our choice of normalisations, correspond to the high-frequency regime), quantum mechanics continues to behave like a commutative theory, and one can sometimes proceed as if the operators {X,D} (and the various operators {a(X,D)} constructed from them) commute up to “lower order” errors. This can be formalised using the pseudodifferential calculus, which we give below the fold, in which we restrict the symbol {a} to certain “symbol classes” of various orders (which then restricts {a(X,D)} to be pseudodifferential operators of various orders), and obtain approximate identities such as

\displaystyle  (a_1 a_2)(X,D) \approx a_1(X,D) a_2(X,D)

where the error between the left and right-hand sides is of “lower order” and in fact enjoys a useful asymptotic expansion. As a first approximation to this calculus, one can think of functions {f \in {\mathcal S}({\bf R})} as having some sort of “phase space portrait” {\tilde f(x,\xi)} which somehow combines the physical space representation {x \mapsto f(x)} with its Fourier representation {\xi \mapsto \hat f(\xi)}, and pseudodifferential operators {a(X,D)} behave approximately like “phase space multiplier operators” in this representation in the sense that

\displaystyle  \widetilde{a(X,D) f}(x,\xi) \approx a(x,\xi) \tilde f(x,\xi).

Unfortunately the uncertainty principle (or the non-commutativity of {X} and {D}) prevents us from making these approximations perfectly precise, and it is not always clear how to even define a phase space portrait {\tilde f} of a function {f} precisely (although there are certain popular candidates for such a portrait, such as the FBI transform (also known as the Gabor transform in the signal processing literature), or the Wigner quasiprobability distribution, each of which has some advantages and disadvantages). Nevertheless, even if the concept of a phase space portrait is somewhat fuzzy, it is of great conceptual benefit both within mathematics and outside of it. For instance, the musical score one assigns to a piece of music can be viewed as a phase space portrait of the sound waves generated by that music.

To complement the pseudodifferential calculus we have the basic Calderón-Vaillancourt theorem, which asserts that pseudodifferential operators of order zero are Calderón-Zygmund operators and thus bounded on {L^p({\bf R})} for {1 < p < \infty}. The standard proof of this theorem is a classic application of one of the basic techniques in harmonic analysis, namely the exploitation of almost orthogonality; the proof we will give here will achieve this through the elegant device of the Cotlar-Stein lemma.

Pseudodifferential operators (especially when generalised to higher dimensions {d \geq 1}) are a fundamental tool in the theory of linear PDE, as well as in related fields such as semiclassical analysis, microlocal analysis, and geometric quantisation. There is an even wider class of operators that is also of interest, namely the Fourier integral operators, which roughly speaking not only approximately multiply the phase space portrait {\tilde f(x,\xi)} of a function by some multiplier {a(x,\xi)}, but also move the portrait around by a canonical transformation. However, the development of the theory of these operators is beyond the scope of these notes; see for instance the texts of Hörmander or Eskin.

This set of notes is only the briefest introduction to the theory of pseudodifferential operators. Many texts are available that cover the theory in more detail, for instance this text of Taylor.

Read the rest of this entry »

A fundamental and recurring problem in analytic number theory is to demonstrate the presence of cancellation in an oscillating sum, a typical example of which might be a correlation

\displaystyle  \sum_{n} f(n) \overline{g(n)} \ \ \ \ \ (1)

between two arithmetic functions {f: {\bf N} \rightarrow {\bf C}} and {g: {\bf N} \rightarrow {\bf C}}, which to avoid technicalities we will assume to be finitely supported (or that the {n} variable is localised to a finite range, such as {\{ n: n \leq x \}}). A key example to keep in mind for the purposes of this set of notes is the twisted von Mangoldt summatory function

\displaystyle  \sum_{n \leq x} \Lambda(n) \overline{\chi(n)} \ \ \ \ \ (2)

that measures the correlation between the primes and a Dirichlet character {\chi}. One can get a “trivial” bound on such sums from the triangle inequality

\displaystyle  |\sum_{n} f(n) \overline{g(n)}| \leq \sum_{n} |f(n)| |g(n)|;

for instance, from the triangle inequality and the prime number theorem we have

\displaystyle  |\sum_{n \leq x} \Lambda(n) \overline{\chi(n)}| \leq x + o(x) \ \ \ \ \ (3)

as {x \rightarrow \infty}. But the triangle inequality is insensitive to the phase oscillations of the summands, and often we expect (e.g. from the probabilistic heuristics from Supplement 4) to be able to improve upon the trivial triangle inequality bound by a substantial amount; in the best case scenario, one typically expects a “square root cancellation” that gains a factor that is roughly the square root of the number of summands. (For instance, for Dirichlet characters {\chi} of conductor {O(x^{O(1)})}, it is expected from probabilistic heuristics that the left-hand side of (3) should in fact be {O_\varepsilon(x^{1/2+\varepsilon})} for any {\varepsilon>0}.)

It has proven surprisingly difficult, however, to establish significant cancellation in many of the sums of interest in analytic number theory, particularly if the sums do not have a strong amount of algebraic structure (e.g. multiplicative structure) which allows for the deployment of specialised techniques (such as multiplicative number theory techniques). In fact, we are forced to rely (to an embarrassingly large extent) on (many variations of) a single basic tool to capture at least some cancellation, namely the Cauchy-Schwarz inequality. Indeed, in many cases the classical case

\displaystyle  |\sum_n f(n) \overline{g(n)}| \leq (\sum_n |f(n)|^2)^{1/2} (\sum_n |g(n)|^2)^{1/2}, \ \ \ \ \ (4)

considered by Cauchy, where at least one of {f, g: {\bf N} \rightarrow {\bf C}} is finitely supported, suffices for applications. Roughly speaking, the Cauchy-Schwarz inequality replaces the task of estimating a cross-correlation between two different functions {f,g} with that of measuring self-correlations between {f} and itself, or {g} and itself, which are usually easier to compute (albeit at the cost of capturing less cancellation). Note that the Cauchy-Schwarz inequality requires almost no hypotheses on the functions {f} or {g}, making it a very widely applicable tool.

There is however some skill required to decide exactly how to deploy the Cauchy-Schwarz inequality (and in particular, how to select {f} and {g}); if applied blindly, one loses all cancellation and can even end up with a worse estimate than the trivial bound. For instance, if one tries to bound (2) directly by applying Cauchy-Schwarz with the functions {\Lambda} and {\chi}, one obtains the bound

\displaystyle  |\sum_{n \leq x} \Lambda(n) \overline{\chi(n)}| \leq (\sum_{n \leq x} \Lambda(n)^2)^{1/2} (\sum_{n \leq x} |\chi(n)|^2)^{1/2}.

The right-hand side may be bounded by {\ll x \log^{1/2} x}, but this is worse than the trivial bound (3) by a logarithmic factor. This can be “blamed” on the fact that {\Lambda} and {\chi} are concentrated on rather different sets ({\Lambda} is concentrated on primes, while {\chi} is more or less uniformly distributed amongst the natural numbers); but even if one corrects for this (e.g. by weighting Cauchy-Schwarz with some suitable “sieve weight” that is more concentrated on primes), one still does not do any better than (3). Indeed, the Cauchy-Schwarz inequality suffers from the same key weakness as the triangle inequality: it is insensitive to the phase oscillation of the factors {f, g}.

While the Cauchy-Schwarz inequality can be poor at estimating a single correlation such as (1), its power improves when considering an average (or sum, or square sum) of multiple correlations. In this set of notes, we will focus on one such situation of this type, namely that of trying to estimate a square sum

\displaystyle  (\sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}|^2)^{1/2} \ \ \ \ \ (5)

that measures the correlations of a single function {f: {\bf N} \rightarrow {\bf C}} with multiple other functions {g_j: {\bf N} \rightarrow {\bf C}}. One should think of the situation in which {f} is a “complicated” function, such as the von Mangoldt function {\Lambda}, but the {g_j} are relatively “simple” functions, such as Dirichlet characters. In the case when the {g_j} are orthonormal functions, we of course have the classical Bessel inequality:

Lemma 1 (Bessel inequality) Let {g_1,\dots,g_J: {\bf N} \rightarrow {\bf C}} be finitely supported functions obeying the orthonormality relationship

\displaystyle  \sum_n g_j(n) \overline{g_{j'}(n)} = 1_{j=j'}

for all {1 \leq j,j' \leq J}. Then for any function {f: {\bf N} \rightarrow {\bf C}}, we have

\displaystyle  (\sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}|^2)^{1/2} \leq (\sum_n |f(n)|^2)^{1/2}.

For sake of comparison, if one were to apply the Cauchy-Schwarz inequality (4) separately to each summand in (5), one would obtain the bound of {J^{1/2} (\sum_n |f(n)|^2)^{1/2}}, which is significantly inferior to the Bessel bound when {J} is large. Geometrically, what is going on is this: the Cauchy-Schwarz inequality (4) is only close to sharp when {f} and {g} are close to parallel in the Hilbert space {\ell^2({\bf N})}. But if {g_1,\dots,g_J} are orthonormal, then it is not possible for any other vector {f} to be simultaneously close to parallel to too many of these orthonormal vectors, and so the inner products of {f} with most of the {g_j} should be small. (See this previous blog post for more discussion of this principle.) One can view the Bessel inequality as formalising a repulsion principle: if {f} correlates too much with some of the {g_j}, then it does not have enough “energy” to have large correlation with the rest of the {g_j}.
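As a quick numerical illustration of the gap between the Bessel bound and the termwise Cauchy-Schwarz bound (a sanity check only, not part of the argument; the data here is random), one can generate an orthonormal family from a QR factorisation:

```python
import numpy as np
rng = np.random.default_rng(0)

N, J = 200, 20
# random orthonormal family g_1,...,g_J: the columns of Q from a QR factorisation
Q = np.linalg.qr(rng.standard_normal((N, J)) + 1j * rng.standard_normal((N, J)))[0]
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

corr = Q.conj().T @ f                      # the correlations sum_n f(n) conj(g_j(n))
print(np.linalg.norm(corr))                # Bessel: at most ||f|| ...
print(np.linalg.norm(f))
print(np.sqrt(J) * np.linalg.norm(f))      # ... while termwise Cauchy-Schwarz gives J^{1/2} ||f||
```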

In analytic number theory applications, it is useful to generalise the Bessel inequality to the situation in which the {g_j} are not necessarily orthonormal. This can be accomplished via the Cauchy-Schwarz inequality:

Proposition 2 (Generalised Bessel inequality) Let {g_1,\dots,g_J: {\bf N} \rightarrow {\bf C}} be finitely supported functions, and let {\nu: {\bf N} \rightarrow {\bf R}^+} be a non-negative function. Let {f: {\bf N} \rightarrow {\bf C}} be a function that vanishes whenever {\nu} vanishes. Then we have

\displaystyle  (\sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}|^2)^{1/2} \leq (\sum_n |f(n)|^2 / \nu(n))^{1/2} \ \ \ \ \ (6)

\displaystyle  \times ( \sum_{j=1}^J \sum_{j'=1}^J c_j \overline{c_{j'}} \sum_n \nu(n) g_j(n) \overline{g_{j'}(n)} )^{1/2}

for some sequence {c_1,\dots,c_J} of complex numbers with {\sum_{j=1}^J |c_j|^2 = 1}, with the convention that {|f(n)|^2/\nu(n)} vanishes whenever {f(n), \nu(n)} both vanish.

Note by relabeling that we may replace the domain {{\bf N}} here by any other at most countable set, such as the integers {{\bf Z}}. (Indeed, one can give an analogue of this lemma on arbitrary measure spaces, but we will not do so here.) This result first appears in this paper of Boas.

Proof: We use the method of duality to replace the role of the function {f} by a dual sequence {c_1,\dots,c_J}. By the converse to Cauchy-Schwarz, we may write the left-hand side of (6) as

\displaystyle  \sum_{j=1}^J \overline{c_j} \sum_{n} f(n) \overline{g_j(n)}

for some complex numbers {c_1,\dots,c_J} with {\sum_{j=1}^J |c_j|^2 = 1}. Indeed, if all of the {\sum_{n} f(n) \overline{g_j(n)}} vanish, we can set the {c_j} arbitrarily, otherwise we set {(c_1,\dots,c_J)} to be the unit vector formed by dividing {(\sum_{n} f(n) \overline{g_j(n)})_{j=1}^J} by its length. We can then rearrange this expression as

\displaystyle  \sum_n f(n) \overline{\sum_{j=1}^J c_j g_j(n)}.

Applying Cauchy-Schwarz (dividing the first factor by {\nu(n)^{1/2}} and multiplying the second by {\nu(n)^{1/2}}, after first removing those {n} for which {\nu(n)} vanishes), this is bounded by

\displaystyle  (\sum_n |f(n)|^2 / \nu(n))^{1/2} (\sum_n \nu(n) |\sum_{j=1}^J c_j g_j(n)|^2)^{1/2},

and the claim follows by expanding out the second factor. \Box

Observe that Lemma 1 is a special case of Proposition 2 when {\nu=1} and the {g_j} are orthonormal. In general, one can expect Proposition 2 to be useful when the {g_j} are almost orthogonal relative to {\nu}, in that the correlations {\sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}} tend to be small when {j,j'} are distinct. In that case, one can hope for the diagonal term {j=j'} in the right-hand side of (6) to dominate, in which case one can obtain estimates of comparable strength to the classical Bessel inequality. The flexibility to choose different weights {\nu} in the above proposition has some technical advantages; for instance, if {f} is concentrated in a sparse set (such as the primes), it is sometimes useful to tailor {\nu} to a comparable set (e.g. the almost primes) in order not to lose too much in the first factor {\sum_n |f(n)|^2 / \nu(n)}. Also, it can be useful to choose a fairly “smooth” weight {\nu}, in order to make the weighted correlations {\sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}} small.

Remark 3 In harmonic analysis, the use of tools such as Proposition 2 is known as the method of almost orthogonality, or the {TT^*} method. The explanation for the latter name is as follows. For sake of exposition, suppose that {\nu} is never zero (or we remove all {n} from the domain for which {\nu(n)} vanishes). Given a family of finitely supported functions {g_1,\dots,g_J: {\bf N} \rightarrow {\bf C}}, consider the linear operator {T: \ell^2(\nu^{-1}) \rightarrow \ell^2(\{1,\dots,J\})} defined by the formula

\displaystyle  T f := ( \sum_{n} f(n) \overline{g_j(n)} )_{j=1}^J.

This is a bounded linear operator, and the left-hand side of (6) is nothing other than the {\ell^2(\{1,\dots,J\})} norm of {Tf}. Without any further information on the function {f} other than its {\ell^2(\nu^{-1})} norm {(\sum_n |f(n)|^2 / \nu(n))^{1/2}}, the best estimate one can obtain on (6) here is clearly

\displaystyle  (\sum_n |f(n)|^2 / \nu(n))^{1/2} \times \|T\|_{op},

where {\|T\|_{op}} denotes the operator norm of {T}.

The adjoint {T^*: \ell^2(\{1,\dots,J\}) \rightarrow \ell^2(\nu^{-1})} is easily computed to be

\displaystyle  T^* (c_j)_{j=1}^J := (\sum_{j=1}^J c_j \nu(n) g_j(n) )_{n \in {\bf N}}.

The composition {TT^*: \ell^2(\{1,\dots,J\}) \rightarrow \ell^2(\{1,\dots,J\})} of {T} and its adjoint is then given by

\displaystyle  TT^* (c_j)_{j=1}^J := (\sum_{j=1}^J c_j \sum_n \nu(n) g_j(n) \overline{g_{j'}(n)} )_{j'=1}^J.

From the spectral theorem (or singular value decomposition), one sees that the operator norms of {T} and {TT^*} are related by the identity

\displaystyle  \|T\|_{op} = \|TT^*\|_{op}^{1/2},

and as {TT^*} is a self-adjoint, positive semi-definite operator, the operator norm {\|TT^*\|_{op}} is also the supremum of the quantity

\displaystyle  \langle TT^* (c_j)_{j=1}^J, (c_j)_{j=1}^J \rangle_{\ell^2(\{1,\dots,J\})} = \sum_{j=1}^J \sum_{j'=1}^J c_j \overline{c_{j'}} \sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}

where {(c_j)_{j=1}^J} ranges over unit vectors in {\ell^2(\{1,\dots,J\})}. Putting these facts together, we obtain Proposition 2; furthermore, we see from this analysis that the bound here is essentially optimal if the only information one is allowed to use about {f} is its {\ell^2(\nu^{-1})} norm.
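Here is a small numerical sanity check of the identity {\|T\|_{op} = \|TT^*\|_{op}^{1/2}} in this weighted setting, with randomly generated data (a sketch, not part of the argument). The substitution {f = \nu^{1/2} u} is the standard unitary identification of {\ell^2(\nu^{-1})} with {\ell^2}, which turns {T} into an ordinary matrix.

```python
import numpy as np
rng = np.random.default_rng(1)

N, J = 100, 8
G = rng.standard_normal((N, J)) + 1j * rng.standard_normal((N, J))  # columns are the g_j
nu = rng.uniform(0.5, 2.0, size=N)                                  # strictly positive weight

# After identifying l^2(nu^{-1}) with l^2 via f = nu^{1/2} u, T becomes this J x N matrix:
M = G.conj().T * np.sqrt(nu)

TTstar = M @ M.conj().T      # Gram matrix with entries sum_n nu(n) conj(g_j(n)) g_{j'}(n)
print(np.linalg.norm(M, 2))                  # ||T||_op ...
print(np.sqrt(np.linalg.norm(TTstar, 2)))    # ... equals ||TT^*||_op^{1/2}
```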

For further discussion of almost orthogonality methods from a harmonic analysis perspective, see Chapter VII of this text of Stein.

Exercise 4 Under the same hypotheses as Proposition 2, show that

\displaystyle  \sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}| \leq (\sum_n |f(n)|^2 / \nu(n))^{1/2}

\displaystyle  \times ( \sum_{j=1}^J \sum_{j'=1}^J |\sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}| )^{1/2}

as well as the variant inequality

\displaystyle  |\sum_{j=1}^J \sum_{n} f(n) \overline{g_j(n)}| \leq (\sum_n |f(n)|^2 / \nu(n))^{1/2}

\displaystyle  \times | \sum_{j=1}^J \sum_{j'=1}^J \sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}|^{1/2}.

Proposition 2 has many applications in analytic number theory; for instance, we will use it in later notes to control the large values of Dirichlet series such as the Riemann zeta function. One of the key benefits is that it largely eliminates the need to consider further correlations of the function {f} (other than its self-correlation {\sum_n |f(n)|^2 / \nu(n)} relative to {\nu^{-1}}, which is usually fairly easy to compute or estimate as {\nu} is usually chosen to be relatively simple); this is particularly useful if {f} is a function which is significantly more complicated to analyse than the functions {g_j}. Of course, the tradeoff for this is that one now has to deal with the coefficients {c_j}, which if anything are even less understood than {f}, since literally the only thing we know about these coefficients is their square sum {\sum_{j=1}^J |c_j|^2}. However, as long as there is enough almost orthogonality between the {g_j}, one can handle the {c_j} by fairly crude tools (e.g. the triangle inequality or Cauchy-Schwarz) and still get reasonably good estimates.

In this set of notes, we will use Proposition 2 to prove some versions of the large sieve inequality, which controls a square-sum of correlations

\displaystyle  \sum_n f(n) e( -\xi_j n )

of an arbitrary finitely supported function {f: {\bf Z} \rightarrow {\bf C}} with various additive characters {n \mapsto e( \xi_j n)} (where {e(x) := e^{2\pi i x}}), or alternatively a square-sum of correlations

\displaystyle  \sum_n f(n) \overline{\chi_j(n)}

of {f} with various primitive Dirichlet characters {\chi_j}; it turns out that one can prove a (slightly sub-optimal) version of this inequality quite quickly from Proposition 2 if one first prepares the sum by inserting a smooth cutoff with well-behaved Fourier transform. The large sieve inequality has many applications (as the name suggests, it has particular utility within sieve theory). For the purposes of this set of notes, though, the main application we will need it for is the Bombieri-Vinogradov theorem, which in a very rough sense gives a prime number theorem in arithmetic progressions, which, “on average”, is of strength comparable to the results provided by the Generalised Riemann Hypothesis (GRH), but has the great advantage of being unconditional (it does not require any unproven hypotheses such as GRH); it can be viewed as a significant extension of the Siegel-Walfisz theorem from Notes 2. As we shall see in later notes, the Bombieri-Vinogradov theorem is a very useful ingredient in sieve-theoretic problems involving the primes.

There is however one additional important trick, beyond the large sieve, which we will need in order to establish the Bombieri-Vinogradov theorem. As it turns out, after some basic manipulations (and the deployment of some multiplicative number theory, and specifically the Siegel-Walfisz theorem), the task of proving the Bombieri-Vinogradov theorem is reduced to that of getting a good estimate on sums that are roughly of the form

\displaystyle  \sum_{j=1}^J |\sum_n \Lambda(n) \overline{\chi_j}(n)| \ \ \ \ \ (7)

for some primitive Dirichlet characters {\chi_j}. This looks like the type of sum that can be controlled by the large sieve (or by Proposition 2), except that this is an ordinary sum rather than a square sum (i.e., an {\ell^1} norm rather than an {\ell^2} norm). One could of course try to control such a sum in terms of the associated square-sum through the Cauchy-Schwarz inequality, but this turns out to be very wasteful (it loses a factor of about {J^{1/2}}). Instead, one should try to exploit the special structure of the von Mangoldt function {\Lambda}, in particular the fact that it can be expressed as a Dirichlet convolution {\alpha * \beta} of two further arithmetic sequences {\alpha,\beta} (or as a finite linear combination of such Dirichlet convolutions). The reason for introducing this convolution structure is through the basic identity

\displaystyle  (\sum_n \alpha*\beta(n) \overline{\chi_j}(n)) = (\sum_n \alpha(n) \overline{\chi_j}(n)) (\sum_n \beta(n) \overline{\chi_j}(n)) \ \ \ \ \ (8)

for any finitely supported sequences {\alpha,\beta: {\bf N} \rightarrow {\bf C}}, as can be easily seen by multiplying everything out and using the completely multiplicative nature of {\chi_j}. (This is the multiplicative analogue of the well-known relationship {\widehat{f*g}(\xi) = \hat f(\xi) \hat g(\xi)} between ordinary convolution and Fourier coefficients.) This factorisation, together with yet another application of the Cauchy-Schwarz inequality, lets one control (7) by square-sums of the sort that can be handled by the large sieve inequality.
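The identity (8) is elementary, but the bookkeeping is easy to fumble, so a quick machine check may be reassuring (a sketch only, not part of the argument). In the snippet below the non-principal character mod {3} stands in for {\chi_j}; it is real-valued, so the complex conjugation is harmless, and the sequences {\alpha,\beta} are random and finitely supported.

```python
import numpy as np
rng = np.random.default_rng(2)

chi = lambda n: (0, 1, -1)[n % 3]   # non-principal character mod 3: completely multiplicative

Nmax = 50
alpha = {n: rng.standard_normal() for n in range(1, Nmax + 1)}
beta = {n: rng.standard_normal() for n in range(1, Nmax + 1)}

# Dirichlet convolution (alpha * beta)(n) = sum_{d m = n} alpha(d) beta(m)
conv = {}
for d, ad in alpha.items():
    for m, bm in beta.items():
        conv[d * m] = conv.get(d * m, 0.0) + ad * bm

lhs = sum(c * chi(n) for n, c in conv.items())
rhs = sum(a * chi(n) for n, a in alpha.items()) * sum(b * chi(n) for n, b in beta.items())
print(lhs, rhs)                     # agree, since chi(dm) = chi(d) chi(m)
```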

As we have seen in Notes 1, the von Mangoldt function {\Lambda} does indeed admit several factorisations of Dirichlet convolution type, such as the factorisation {\Lambda = \mu * L}. One can try directly inserting this factorisation into the above strategy; it almost works, however there turns out to be a problem when considering the contribution of the portion of {\mu} or {L} that is supported at very small natural numbers, as the large sieve loses any gain over the trivial bound in such settings. Because of this, there is a need for a more sophisticated decomposition of {\Lambda} into Dirichlet convolutions {\alpha * \beta} which are non-degenerate in the sense that {\alpha,\beta} are supported away from small values. (As a non-example, the trivial factorisation {\Lambda = \Lambda * \delta} would be a totally inappropriate factorisation for this purpose.) Fortunately, it turns out that through some elementary combinatorial manipulations, some satisfactory decompositions of this type are available, such as the Vaughan identity and the Heath-Brown identity. By using one of these identities we will be able to complete the proof of the Bombieri-Vinogradov theorem. (These identities are also useful for other applications in which one wishes to control correlations between the von Mangoldt function {\Lambda} and some other sequence; we will see some examples of this in later notes.)

For further reading on these topics, including a significantly larger number of examples of the large sieve inequality, see Chapters 7 and 17 of Iwaniec and Kowalski.

Remark 5 We caution that the presentation given in this set of notes is highly ahistorical; we are using modern streamlined proofs of results that were first obtained by more complicated arguments.

Read the rest of this entry »

One of the basic problems in analytic number theory is to estimate sums of the form

\displaystyle  \sum_{p<x} f(p)

as {x \rightarrow \infty}, where {p} ranges over primes and {f} is some explicit function of interest (e.g. a linear phase function {f(p) = e^{2\pi i \alpha p}} for some real number {\alpha}). This is essentially the same task as obtaining estimates on the sum

\displaystyle  \sum_{n<x} \Lambda(n) f(n)

where {\Lambda} is the von Mangoldt function. If {f} is bounded, {f(n)=O(1)}, then from the prime number theorem one has the trivial bound

\displaystyle  \sum_{n<x} \Lambda(n) f(n) = O(x)

but often (when {f} is somehow “oscillatory” in nature) one is seeking the refinement

\displaystyle  \sum_{n<x} \Lambda(n) f(n) = o(x) \ \ \ \ \ (1)

or equivalently

\displaystyle  \sum_{p<x} f(p) = o(\frac{x}{\log x}). \ \ \ \ \ (2)

Thanks to identities such as

\displaystyle  \Lambda(n) = \sum_{d|n} \mu(d) \log(\frac{n}{d}), \ \ \ \ \ (3)

where {\mu} is the Möbius function, refinements such as (1) are similar in spirit to estimates of the form

\displaystyle  \sum_{n<x} \mu(n) f(n) = o(x). \ \ \ \ \ (4)

Unfortunately, the connection between (1) and (4) is not particularly tight; roughly speaking, one needs to improve the bounds in (4) (and variants thereof) by about two factors of {\log x} before one can use identities such as (3) to recover (1). Still, one generally thinks of (1) and (4) as being “morally” equivalent, even if they are not formally equivalent.
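Identities such as (3) are also easily checked by machine; the following self-contained sketch (with naive trial-division implementations of {\mu} and {\Lambda}, adequate for small {n}, and not part of the argument) verifies (3) directly:

```python
from math import log

def factorise(n):                       # prime factorisation by trial division
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def mu(n):                              # Moebius function
    f = factorise(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def Lambda(n):                          # von Mangoldt: log p if n = p^k, else 0
    f = factorise(n)
    return log(min(f)) if len(f) == 1 else 0.0

for n in range(2, 500):
    rhs = sum(mu(d) * log(n / d) for d in range(1, n + 1) if n % d == 0)
    assert abs(rhs - Lambda(n)) < 1e-9
print("identity (3) checked for 2 <= n < 500")
```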

When {f} is oscillating in a sufficiently “irrational” way, then one standard way to proceed is the method of Type I and Type II sums, which uses truncated versions of divisor identities such as (3) to expand out either (1) or (4) into linear (Type I) or bilinear sums (Type II) with which one can exploit the oscillation of {f}. For instance, Vaughan’s identity lets one rewrite the sum in (1) as the sum of the Type I sum

\displaystyle  \sum_{d \leq U} \mu(d) (\sum_{V/d \leq r \leq x/d} (\log r) f(rd)),

the Type I sum

\displaystyle  -\sum_{d \leq UV} a(d) \sum_{V/d \leq r \leq x/d} f(rd),

the Type II sum

\displaystyle  -\sum_{V \leq d \leq x/U} \sum_{U < m \leq x/V} \Lambda(d) b(m) f(dm),

and the error term {\sum_{n \leq V} \Lambda(n) f(n)}, whenever {1 \leq U, V \leq x} are parameters, and {a, b} are the sequences

\displaystyle  a(d) := \sum_{e \leq U, f \leq V: ef = d} \mu(e) \Lambda(f)

and

\displaystyle  b(m) := \sum_{d|m: d \leq U} \mu(d).

Similarly one can express (4) as the Type I sum

\displaystyle  -\sum_{d \leq UV} c(d) \sum_{UV/d \leq r \leq x/d} f(rd),

the Type II sum

\displaystyle  - \sum_{V < d \leq x/U} \sum_{U < m \leq x/d} \mu(m) b(d) f(dm)

and the error term {\sum_{n \leq UV} \mu(n) f(n)}, whenever {1 \leq U,V \leq x} with {UV \leq x}, and {c} is the sequence

\displaystyle  c(d) := \sum_{e \leq U, f \leq V: ef = d} \mu(e) \mu(f).

After eliminating troublesome sequences such as {a}, {b}, {c} via Cauchy-Schwarz or the triangle inequality, one is then faced with the task of estimating Type I sums such as

\displaystyle  \sum_{r \leq y} f(rd)

or Type II sums such as

\displaystyle  \sum_{r \leq y} f(rd) \overline{f(rd')}

for various {y, d, d' \geq 1}. Here, the trivial bound is {O(y)}, but due to a number of logarithmic inefficiencies in the above method, one has to obtain bounds that are more like {O( \frac{y}{\log^C y})} for some constant {C} (e.g. {C=5}) in order to end up with an asymptotic such as (1) or (4).

However, in a recent paper of Bourgain, Sarnak, and Ziegler, it was observed that as long as one is only seeking the Möbius orthogonality (4) rather than the von Mangoldt orthogonality (1), one can avoid losing any logarithmic factors, and rely purely on qualitative equidistribution properties of {f}. A special case of their orthogonality criterion (which actually dates back to an earlier paper of Katai, as was pointed out to me by Nikos Frantzikinakis) is as follows:

Proposition 1 (Orthogonality criterion) Let {f: {\bf N} \rightarrow {\bf C}} be a bounded function such that

\displaystyle  \sum_{n \leq x} f(pn) \overline{f(qn)} = o(x) \ \ \ \ \ (5)

for any distinct primes {p, q} (where the decay rate of the error term {o(x)} may depend on {p} and {q}). Then

\displaystyle  \sum_{n \leq x} \mu(n) f(n) =o(x). \ \ \ \ \ (6)

Actually, the Bourgain-Sarnak-Ziegler paper establishes a more quantitative version of this proposition, in which {\mu} can be replaced by an arbitrary bounded multiplicative function, but we will content ourselves with the above weaker special case. (See also these notes of Harper, which use the Katai argument to give a slightly weaker quantitative bound in the same spirit.) This criterion can be viewed as a multiplicative variant of the classical van der Corput lemma, which in our notation asserts that {\sum_{n \leq x} f(n) = o(x)} if one has {\sum_{n \leq x} f(n+h) \overline{f(n)} = o(x)} for each fixed non-zero {h}.

As a sample application, Proposition 1 easily gives a proof of the asymptotic

\displaystyle  \sum_{n \leq x} \mu(n) e^{2\pi i \alpha n} = o(x)

for any irrational {\alpha}. (For rational {\alpha}, this is a little trickier, as it is basically equivalent to the prime number theorem in arithmetic progressions.) The paper of Bourgain, Sarnak, and Ziegler also applies this criterion to nilsequences (obtaining a quick proof of a qualitative version of a result of Ben Green and myself, see these notes of Ziegler for details) and to horocycle flows (for which no Möbius orthogonality result was previously known).

Informally, the connection between (5) and (6) comes from the multiplicative nature of the Möbius function. If (6) failed, then {\mu(n)} exhibits strong correlation with {f(n)}; by change of variables, we then expect {\mu(pn)} to correlate with {f(pn)} and {\mu(qn)} to correlate with {f(qn)}, for “typical” {p,q} at least. On the other hand, since {\mu} is multiplicative, {\mu(pn)} exhibits strong correlation with {\mu(qn)}. Putting all this together (and pretending correlation is transitive), this would give the claim (in the contrapositive). Of course, correlation is not quite transitive, but it turns out that one can use the Cauchy-Schwarz inequality as a substitute for transitivity of correlation in this case.

I will give a proof of Proposition 1 below the fold (which is not quite based on the argument in the above mentioned paper, but on a variant of that argument communicated to me by Tamar Ziegler, and also independently discovered by Adam Harper). The main idea is to exploit the following observation: if {P} is a “large” but finite set of primes (in the sense that the sum {A := \sum_{p \in P} \frac{1}{p}} is large), then for a typical large number {n} (much larger than the elements of {P}), the number of primes in {P} that divide {n} is pretty close to {A = \sum_{p \in P} \frac{1}{p}}:

\displaystyle  \sum_{p \in P: p|n} 1 \approx A. \ \ \ \ \ (7)

A more precise formalisation of this heuristic is provided by the Turan-Kubilius inequality, which is proven by a simple application of the second moment method.
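Here is a quick numerical illustration of the heuristic (7), as a sketch only (the set of primes and the range of {n} are arbitrary choices): with the primes up to {100} the quantity {A} is only about {1.8}, so the concentration is mild, but the empirical mean of {\sum_{p \in P: p|n} 1} is visibly close to {A}, with fluctuations of size about {\sqrt{A}}, consistent with the second moment computation.

```python
import numpy as np

P = [p for p in range(2, 100) if all(p % q for q in range(2, p))]  # primes below 100
A = sum(1.0 / p for p in P)

ns = np.arange(10**6, 10**6 + 10**5)                 # a block of "typical large" n
counts = sum((ns % p == 0).astype(float) for p in P) # number of p in P dividing each n
print(A, counts.mean(), counts.std())
# mean is close to A; fluctuations are O(sqrt(A)), as the second moment method predicts
```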

In particular, one can sum (7) against {\mu(n) f(n)} and obtain an approximation

\displaystyle  \sum_{n \leq x} \mu(n) f(n) \approx \frac{1}{A} \sum_{p \in P} \sum_{n \leq x: p|n} \mu(n) f(n)

that approximates a sum of {\mu(n) f(n)} by a bunch of sparser sums of {\mu(n) f(n)}. Since

\displaystyle  x = \frac{1}{A} \sum_{p \in P} \frac{x}{p},

we see (heuristically, at least) that in order to establish (4), it would suffice to establish the sparser estimates

\displaystyle  \sum_{n \leq x: p|n} \mu(n) f(n) = o(\frac{x}{p})

for all {p \in P} (or at least for “most” {p \in P}).

Now we make the change of variables {n = pm}. As the Möbius function is multiplicative, we usually have {\mu(n) = \mu(p) \mu(m) = - \mu(m)}. (There is an exception when {n} is divisible by {p^2}, but this will be a rare event and we will be able to ignore it.) So it should suffice to show that

\displaystyle  \sum_{m \leq x/p} \mu(m) f(pm) = o( x/p )

for most {p \in P}. However, by the hypothesis (5), the sequences {m \mapsto f(pm)} are asymptotically orthogonal as {p} varies, and this claim will then follow from a Cauchy-Schwarz argument.

Read the rest of this entry »

A basic problem in harmonic analysis (as well as in linear algebra, random matrix theory, and high-dimensional geometry) is to estimate the operator norm {\|T\|_{op}} of a linear map {T: H \rightarrow H'} between two Hilbert spaces, which we will take to be complex for sake of discussion. Even the finite-dimensional case {T: {\bf C}^m \rightarrow {\bf C}^n} is of interest, as this operator norm is the same as the largest singular value {\sigma_1(A)} of the {n \times m} matrix {A} associated to {T}.

In general, this operator norm is hard to compute precisely, except in special cases. One such special case is that of a diagonal operator, such as that associated to an {n \times n} diagonal matrix {D = \hbox{diag}(\lambda_1,\ldots,\lambda_n)}. In this case, the operator norm is simply the supremum norm of the diagonal coefficients:

\displaystyle  \|D\|_{op} = \sup_{1 \leq i \leq n} |\lambda_i|. \ \ \ \ \ (1)

A variant of (1) is Schur’s test, which for simplicity we will phrase in the setting of finite-dimensional operators {T: {\bf C}^m \rightarrow {\bf C}^n} given by a matrix {A = (a_{ij})_{1 \leq i \leq n; 1 \leq j \leq m}} via the usual formula

\displaystyle  T (x_j)_{j=1}^m := ( \sum_{j=1}^m a_{ij} x_j )_{i=1}^n.

A simple version of this test is as follows: if all the absolute row sums and column sums of {A} are bounded by some constant {M}, thus

\displaystyle  \sum_{j=1}^m |a_{ij}| \leq M \ \ \ \ \ (2)

for all {1 \leq i \leq n} and

\displaystyle  \sum_{i=1}^n |a_{ij}| \leq M \ \ \ \ \ (3)

for all {1 \leq j \leq m}, then

\displaystyle  \|T\|_{op} = \|A\|_{op} \leq M \ \ \ \ \ (4)

(note that this generalises (the upper bound in) (1).) Indeed, to see (4), it suffices by duality and homogeneity to show that

\displaystyle  |\sum_{i=1}^n (\sum_{j=1}^m a_{ij} x_j) y_i| \leq M

whenever {(x_j)_{j=1}^m} and {(y_i)_{i=1}^n} are sequences with {\sum_{j=1}^m |x_j|^2 = \sum_{i=1}^n |y_i|^2 = 1}; but this easily follows from the arithmetic mean-geometric mean inequality

\displaystyle  |a_{ij} x_j y_i| \leq \frac{1}{2} |a_{ij}| |x_j|^2 + \frac{1}{2} |a_{ij}| |y_i|^2

and (2), (3).
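Here is a quick numerical sanity check of Schur's bound (4) on a random complex matrix (random data only; the bound of course holds deterministically):

```python
import numpy as np
rng = np.random.default_rng(3)

n, m = 60, 80
A = rng.standard_normal((n, m)) * np.exp(1j * rng.uniform(0, 2 * np.pi, (n, m)))

M = max(np.abs(A).sum(axis=1).max(),   # largest absolute row sum, as in (2)
        np.abs(A).sum(axis=0).max())   # largest absolute column sum, as in (3)
print(np.linalg.norm(A, 2), "<=", M)   # operator norm obeys Schur's bound (4)
```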

Schur’s test (4) (and its many generalisations to weighted situations, or to Lebesgue or Lorentz spaces) is particularly useful for controlling operators in which the role of oscillation (as reflected in the phase of the coefficients {a_{ij}}, as opposed to just their magnitudes {|a_{ij}|}) is not decisive. However, it is of limited use in situations that involve a lot of cancellation. For this, a different test, known as the Cotlar-Stein lemma, is much more flexible and powerful. It can be viewed in a sense as a non-commutative variant of Schur’s test (4) (or of (1)), in which the scalar coefficients {\lambda_i} or {a_{ij}} are replaced by operators instead.

To illustrate the basic flavour of the result, let us return to the bound (1), and now consider instead a block-diagonal matrix

\displaystyle  A = \begin{pmatrix} \Lambda_1 & 0 & \ldots & 0 \\ 0 & \Lambda_2 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \Lambda_n \end{pmatrix} \ \ \ \ \ (5)

where each {\Lambda_i} is now a {m_i \times m_i} matrix, and so {A} is an {m \times m} matrix with {m := m_1 + \ldots +m_n}. Then we have

\displaystyle  \|A\|_{op} = \sup_{1 \leq i \leq n} \|\Lambda_i\|_{op}. \ \ \ \ \ (6)

Indeed, the lower bound is trivial (as can be seen by testing {A} on vectors which are supported on the {i^{th}} block of coordinates), while to establish the upper bound, one can make use of the orthogonal decomposition

\displaystyle  {\bf C}^m \equiv \bigoplus_{i=1}^n {\bf C}^{m_i} \ \ \ \ \ (7)

to decompose an arbitrary vector {x \in {\bf C}^m} as

\displaystyle  x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}

with {x_i \in {\bf C}^{m_i}}, in which case we have

\displaystyle  Ax = \begin{pmatrix} \Lambda_1 x_1 \\ \Lambda_2 x_2 \\ \vdots \\ \Lambda_n x_n \end{pmatrix}

and the upper bound in (6) then follows from a simple computation.
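The identity (6) is also easy to confirm numerically; the following sketch (assuming scipy is available for its block_diag helper, with arbitrary random blocks) assembles a matrix as in (5) and compares the two sides:

```python
import numpy as np
from scipy.linalg import block_diag
rng = np.random.default_rng(4)

blocks = [rng.standard_normal((m, m)) for m in (3, 5, 2, 7)]
A = block_diag(*blocks)
print(np.linalg.norm(A, 2))                       # ||A||_op ...
print(max(np.linalg.norm(B, 2) for B in blocks))  # ... equals sup_i ||Lambda_i||_op
```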

The operator {T} associated to the matrix {A} in (5) can be viewed as a sum {T = \sum_{i=1}^n T_i}, where each {T_i} corresponds to the {\Lambda_i} block of {A}, in which case (6) can also be written as

\displaystyle  \|T\|_{op} = \sup_{1 \leq i \leq n} \|T_i\|_{op}. \ \ \ \ \ (8)

When {n} is large, this is a significant improvement over the triangle inequality, which merely gives

\displaystyle  \|T\|_{op} \leq \sum_{1 \leq i \leq n} \|T_i\|_{op}.

The reason for this gain can ultimately be traced back to the “orthogonality” of the {T_i}; that they “occupy different columns” and “different rows” of the range and domain of {T}. This is obvious when viewed in the matrix formalism, but can also be described in the more abstract Hilbert space operator formalism via the identities

\displaystyle  T_i^* T_j = 0 \ \ \ \ \ (9)

and

\displaystyle  T_i T_j^* = 0 \ \ \ \ \ (10)

whenever {i \neq j}. (The first identity asserts that the ranges of the {T_i} are orthogonal to each other, and the second asserts that the coranges of the {T_i} (the ranges of the adjoints {T_i^*}) are orthogonal to each other.) By replacing (7) with a more abstract orthogonal decomposition into these ranges and coranges, one can in fact deduce (8) directly from (9) and (10).

The Cotlar-Stein lemma is an extension of this observation to the case where the {T_i} are merely almost orthogonal rather than orthogonal, in a manner somewhat analogous to how Schur’s test (partially) extends (1) to the non-diagonal case. Specifically, we have

Lemma 1 (Cotlar-Stein lemma) Let {T_1,\ldots,T_n: H \rightarrow H'} be a finite sequence of bounded linear operators from one Hilbert space {H} to another {H'}, obeying the bounds

\displaystyle  \sum_{j=1}^n \| T_i T_j^* \|_{op}^{1/2} \leq M \ \ \ \ \ (11)

and

\displaystyle  \sum_{j=1}^n \| T_i^* T_j \|_{op}^{1/2} \leq M \ \ \ \ \ (12)

for all {i=1,\ldots,n} and some {M > 0} (compare with (2), (3)). Then one has

\displaystyle  \| \sum_{i=1}^n T_i \|_{op} \leq M. \ \ \ \ \ (13)

Note from the basic {TT^*} identity

\displaystyle  \|T\|_{op} = \|TT^* \|_{op}^{1/2} = \|T^* T\|_{op}^{1/2} \ \ \ \ \ (14)

that the hypothesis (11) (or (12)) already gives the bound

\displaystyle  \|T_i\|_{op} \leq M \ \ \ \ \ (15)

on each component {T_i} of {T}, which by the triangle inequality gives the inferior bound

\displaystyle  \| \sum_{i=1}^n T_i \|_{op} \leq nM;

the point of the Cotlar-Stein lemma is that the dependence on {n} in this bound is eliminated in (13), which in particular makes the bound suitable for extension to the limit {n \rightarrow \infty} (see Remark 1 below).

The Cotlar-Stein lemma was first established by Cotlar in the special case of commuting self-adjoint operators, and then independently by Cotlar and Stein in full generality, with the proof appearing in a subsequent paper of Knapp and Stein.

The Cotlar-Stein lemma is often useful in controlling operators such as singular integral operators or pseudo-differential operators {T} which “do not mix scales together too much”, in that such operators map functions “that oscillate at a given scale {2^{-i}}” to functions that still mostly oscillate at the same scale {2^{-i}}. In that case, one can often split {T} into components {T_i} which essentially capture the scale {2^{-i}} behaviour, and understanding {L^2} boundedness properties of {T} then reduces to establishing the boundedness of the simpler operators {T_i} (and of establishing a sufficient decay in products such as {T_i^* T_j} or {T_i T_j^*} when {i} and {j} are separated from each other). In some cases, one can use Fourier-analytic tools such as Littlewood-Paley projections to generate the {T_i}, but the true power of the Cotlar-Stein lemma comes from situations in which the Fourier transform is not suitable, such as when one has a complicated domain (e.g. a manifold or a non-abelian Lie group), or very rough coefficients (which would then have badly behaved Fourier behaviour). One can then select the decomposition {T = \sum_i T_i} in a fashion that is tailored to the particular operator {T}, and is not necessarily dictated by Fourier-analytic considerations.

Once one is in the almost orthogonal setting, as opposed to the genuinely orthogonal setting, the previous arguments based on orthogonal projection seem to fail completely. Instead, the proof of the Cotlar-Stein lemma proceeds via an elegant application of the tensor power trick (or perhaps more accurately, the power method), in which the operator norm of {T} is understood through the operator norm of a large power of {T} (or more precisely, of its self-adjoint square {TT^*} or {T^* T}). Indeed, from an iteration of (14) we see that for any natural number {N}, one has

\displaystyle  \|T\|_{op}^{2N} = \| (TT^*)^N \|_{op}. \ \ \ \ \ (16)

To estimate the right-hand side of (16), we expand it out and apply the triangle inequality to bound it by

\displaystyle  \sum_{i_1,j_1,\ldots,i_N,j_N \in \{1,\ldots,n\}} \| T_{i_1} T_{j_1}^* T_{i_2} T_{j_2}^* \ldots T_{i_N} T_{j_N}^* \|_{op}. \ \ \ \ \ (17)

Recall that when we applied the triangle inequality directly to {T}, we lost a factor of {n} in the final estimate; it will turn out that we will lose a similar factor here, but this factor will eventually be attenuated into nothingness by the tensor power trick.

To bound (17), we use the basic inequality {\|ST\|_{op} \leq \|S\|_{op} \|T\|_{op}} in two different ways. If we group the product {T_{i_1} T_{j_1}^* T_{i_2} T_{j_2}^* \ldots T_{i_N} T_{j_N}^*} in pairs, we can bound the summand of (17) by

\displaystyle  \| T_{i_1} T_{j_1}^* \|_{op} \ldots \| T_{i_N} T_{j_N}^* \|_{op}.

On the other hand, we can group the product by pairs in another way, to obtain the bound of

\displaystyle  \| T_{i_1} \|_{op} \| T_{j_1}^* T_{i_2} \|_{op} \ldots \| T_{j_{N-1}}^* T_{i_N}\|_{op} \| T_{j_N}^* \|_{op}.

We bound {\| T_{i_1} \|_{op}} and {\| T_{j_N}^* \|_{op}} crudely by {M} using (15). Taking the geometric mean of the above bounds, we can thus bound (17) by

\displaystyle  M \sum_{i_1,j_1,\ldots,i_N,j_N \in \{1,\ldots,n\}} \| T_{i_1} T_{j_1}^* \|_{op}^{1/2} \| T_{j_1}^* T_{i_2} \|_{op}^{1/2} \ldots \| T_{j_{N-1}}^* T_{i_N}\|_{op}^{1/2} \| T_{i_N} T_{j_N}^* \|_{op}^{1/2}.

If we then sum this series first in {j_N}, then in {i_N}, then moving back all the way to {i_1}, using (11) and (12) alternately, we obtain a final bound of

\displaystyle  n M^{2N}

for (16). Taking {2N^{th}} roots, we obtain

\displaystyle  \|T\|_{op} \leq n^{1/2N} M.

Sending {N \rightarrow \infty}, we obtain the claim.

Remark 1 As observed in a number of places (see e.g. page 318 of Stein’s book, or this paper of Comech), the Cotlar-Stein lemma can be extended to infinite sums {\sum_{i=1}^\infty T_i} (with the obvious changes to the hypotheses (11), (12)). Indeed, one can show that for any {f \in H}, the sum {\sum_{i=1}^\infty T_i f} is unconditionally convergent in {H'} (and furthermore has bounded {2}-variation), and the resulting operator {\sum_{i=1}^\infty T_i} is a bounded linear operator with operator norm at most {M}.

Remark 2 If we specialise to the case where all the {T_i} are equal, we see that the bound in the Cotlar-Stein lemma is sharp, at least in this case. Thus we see how the tensor power trick can convert an inefficient argument, such as that obtained using the triangle inequality or crude bounds such as (15), into an efficient one.

Remark 3 One can prove Schur’s test by a similar method. Indeed, starting from the inequality

\displaystyle  \|A\|_{op}^{2N} \leq \hbox{tr}( (AA^*)^N )

(which follows easily from the singular value decomposition), we can bound {\|A\|_{op}^{2N}} by

\displaystyle  \sum_{i_1,j_1,\ldots,i_N,j_N \in \{1,\ldots,n\}} a_{i_1 j_1} \overline{a_{i_2 j_1}} a_{i_2 j_2} \overline{a_{i_3 j_2}} \ldots a_{i_N j_N} \overline{a_{i_1 j_N}}.

Estimating the final factor {\overline{a_{i_1 j_N}}} in the summand crudely by {M}, and then repeatedly summing the indices one at a time as before, we obtain

\displaystyle  \|A\|_{op}^{2N} \leq n M^{2N}

and the claim follows from the tensor power trick as before. On the other hand, in the converse direction, I do not know of any way to prove the Cotlar-Stein lemma that does not basically go through the tensor power argument.
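One can watch the attenuation happen numerically: the quantity {\hbox{tr}((AA^*)^N)^{1/2N}} is the {\ell^{2N}} norm of the singular values of {A}, and the following sketch (random data, sizes arbitrary, and only an illustration) shows it decreasing towards {\|A\|_{op}} as {N} grows, with the {n^{1/2N}} inefficiency disappearing:

```python
import numpy as np
rng = np.random.default_rng(5)

A = rng.standard_normal((50, 50))
AAstar = A @ A.T
print(np.linalg.norm(A, 2))        # the true operator norm
for N in (1, 2, 4, 8, 16, 32):
    # tr((AA^*)^N)^{1/2N} overestimates ||A||_op by a factor of at most n^{1/2N} -> 1
    print(N, np.trace(np.linalg.matrix_power(AAstar, N)) ** (1 / (2 * N)))
```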

The first Distinguished Lecture Series at UCLA for this academic year is given by Elias Stein (who, incidentally, was my graduate student advisor), who is lecturing on “Singular Integrals and Several Complex Variables: Some New Perspectives“.  The first lecture was a historical (and non-technical) survey of modern harmonic analysis (which, amazingly, was compressed into half an hour), followed by an introduction as to how this theory is currently in the process of being adapted to handle the basic analytical issues in several complex variables, a topic which in many ways is still only now being developed.  The second and third lectures will focus on these issues in greater depth.

As usual, any errors here are due to my transcription and interpretation of the lecture.

[Update, Oct 27: The slides from the talk are now available here.]

Read the rest of this entry »

As many readers may already know, my good friend and fellow mathematical blogger Tim Gowers, having wrapped up work on the Princeton Companion to Mathematics (which I believe is now in press), has begun another mathematical initiative, namely a “Tricks Wiki” to act as a repository for mathematical tricks and techniques.    Tim has already started the ball rolling with several seed articles on his own blog, and asked me to also contribute some articles.  (As I understand it, these articles will be migrated to the Wiki in a few months, once it is fully set up, and then they will evolve with edits and contributions by anyone who wishes to pitch in, in the spirit of Wikipedia; in particular, articles are not intended to be permanently authored or signed by any single contributor.)

So today I’d like to start by extracting some material from an old post of mine on “Amplification, arbitrage, and the tensor power trick” (as well as from some of the comments), and converting it to the Tricks Wiki format, while also taking the opportunity to add a few more examples.

Title: The tensor power trick

Quick description: If one wants to prove an inequality X \leq Y for some non-negative quantities X, Y, but can only see how to prove a quasi-inequality X \leq CY that loses a multiplicative constant C, then try to replace all objects involved in the problem by “tensor powers” of themselves and apply the quasi-inequality to those powers.  If all goes well, one can show that X^M \leq C Y^M for all M \geq 1, with a constant C which is independent of M, which implies that X \leq Y as desired by taking M^{th} roots and then taking limits as M \to \infty.

Read the rest of this entry »
