You are currently browsing the monthly archive for May 2020.

This set of notes discusses aspects of one of the oldest questions in Fourier analysis, namely the nature of convergence of Fourier series.

If $f: \mathbb{R}/\mathbb{Z} \to \mathbb{C}$ is an absolutely integrable function, its Fourier coefficients $\hat f(n)$ are defined by the formula

$$\hat f(n) := \int_{\mathbb{R}/\mathbb{Z}} f(x) e^{-2\pi i n x}\, dx.$$

If $f$ is smooth, then the Fourier coefficients $\hat f(n)$ are absolutely summable, and we have the Fourier inversion formula

$$f(x) = \sum_{n \in \mathbb{Z}} \hat f(n) e^{2\pi i n x},$$

where the series here is uniformly convergent. In particular, if we define the partial summation operators

$$S_N f(x) := \sum_{|n| \leq N} \hat f(n) e^{2\pi i n x},$$

then $S_N f$ converges uniformly to $f$ when $f$ is smooth.

What if $f$ is not smooth, but merely lies in an $L^p(\mathbb{R}/\mathbb{Z})$ class for some $1 \leq p \leq \infty$? The Fourier coefficients $\hat f(n)$ remain well-defined, as do the partial summation operators $S_N$. The question of convergence in norm is relatively easy to settle:

Exercise 1

- (i) If $1 < p < \infty$ and $f \in L^p(\mathbb{R}/\mathbb{Z})$, show that $S_N f$ converges in $L^p$ norm to $f$. (Hint: first use the $L^p$ boundedness of the Hilbert transform to show that the $S_N$ are bounded in $L^p$ uniformly in $N$.)
- (ii) If $p = 1$ or $p = \infty$, show that there exists $f \in L^p(\mathbb{R}/\mathbb{Z})$ such that the sequence $S_N f$ is unbounded in $L^p$ (so in particular it certainly does not converge in $L^p$ norm to $f$). (Hint: first show that the $S_N$ are not bounded in $L^p$ uniformly in $N$, then apply the uniform boundedness principle in the contrapositive.)
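Both halves of this exercise can be illustrated numerically. The following sketch (my own, not part of the notes; the test function and grid sizes are arbitrary choices) computes partial sums via the FFT and watches the sup-norm error collapse for a smooth function, while the $L^1$ norms of the Dirichlet kernels (the Lebesgue constants), whose logarithmic growth underlies the failure in part (ii), slowly diverge:

```python
import numpy as np

# Partial Fourier sums S_N f on R/Z from equally spaced samples, via the FFT.
M = 2048
x = np.arange(M) / M
f = np.exp(np.cos(2 * np.pi * x))          # a smooth 1-periodic test function

def partial_sum(N):
    """S_N f = sum over |n| <= N of fhat(n) e^{2 pi i n x}, by frequency truncation."""
    n = np.fft.fftfreq(M, d=1.0 / M)       # the integer frequencies of the DFT
    return np.real(np.fft.ifft(np.where(np.abs(n) <= N, np.fft.fft(f), 0)))

# Smooth f: the sup-norm error decays rapidly in N (uniform convergence).
errs = [np.max(np.abs(partial_sum(N) - f)) for N in (2, 4, 8, 16)]

# Lebesgue constant: the L^1 norm of the Dirichlet kernel
# D_N(t) = sin(pi (2N+1) t) / sin(pi t), computed by the midpoint rule.
def lebesgue_const(N, K=100_000):
    t = (np.arange(K) + 0.5) / K           # midpoints, avoiding the pole at t = 0
    return np.mean(np.abs(np.sin(np.pi * (2 * N + 1) * t) / np.sin(np.pi * t)))

assert errs[0] > errs[1] > errs[2] > errs[3]
assert lebesgue_const(100) > lebesgue_const(10) > lebesgue_const(1)
```

The Lebesgue constants grow like $\frac{4}{\pi^2} \log N$, so the divergence responsible for part (ii) is only logarithmic and easy to miss in casual experiments.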

The question of pointwise almost everywhere convergence turned out to be a significantly harder problem:

Theorem 2 (Pointwise almost everywhere convergence)

- (i) (Kolmogorov, 1923) There exists $f \in L^1(\mathbb{R}/\mathbb{Z})$ such that $S_N f(x)$ is unbounded in $N$ for almost every $x$.
- (ii) (Carleson, 1966; Hunt, 1967) If $f \in L^p(\mathbb{R}/\mathbb{Z})$ for some $1 < p \leq \infty$, then $S_N f(x)$ converges to $f(x)$ as $N \to \infty$ for almost every $x$.

Note from Hölder's inequality that $L^2(\mathbb{R}/\mathbb{Z})$ contains $L^p(\mathbb{R}/\mathbb{Z})$ for all $p \geq 2$, so Carleson's theorem covers the $p \geq 2$ case of Hunt's theorem. We remark that the precise threshold near $L^1$ between Kolmogorov-type divergence results and Carleson-Hunt pointwise convergence results, in the category of Orlicz spaces, is still an active area of research; see this paper of Lie for further discussion.

Carleson’s theorem in particular was a surprisingly difficult result, lying just out of reach of classical methods (as we shall see later, the result is much easier if we smooth either the function or the summation method by a tiny bit). Nowadays we realise that the reason for this is that Carleson’s theorem essentially contains a *frequency modulation symmetry* in addition to the more familiar translation symmetry and dilation symmetry. This basically rules out the possibility of attacking Carleson’s theorem with tools such as Calderón-Zygmund theory or Littlewood-Paley theory, which respect the latter two symmetries but not the former. Instead, tools from “time-frequency analysis” that essentially respect all three symmetries should be employed. We will illustrate this by giving a relatively short proof of Carleson’s theorem due to Lacey and Thiele. (There are other proofs of Carleson’s theorem, including Carleson’s original proof, its modification by Hunt, and a later time-frequency proof by Fefferman; see Remark 18 below.)

In contrast to previous notes, in this set of notes we shall focus exclusively on Fourier analysis in the one-dimensional setting for simplicity of notation, although all of the results here have natural extensions to higher dimensions. Depending on the physical context, one can view the physical domain as representing either space or time; we will mostly think in terms of the former interpretation, even though the standard terminology of “time-frequency analysis”, which we will make more prominent use of in later notes, clearly originates from the latter.

In previous notes we have often performed various localisations in either physical space or Fourier space, for instance in order to take advantage of the uncertainty principle. One can formalise these operations in terms of the functional calculus of two basic operations on Schwartz functions $f \in \mathcal{S}(\mathbb{R})$, the *position operator* $X: \mathcal{S}(\mathbb{R}) \to \mathcal{S}(\mathbb{R})$ defined by

$$Xf(x) := x f(x)$$

and the *momentum operator* $D: \mathcal{S}(\mathbb{R}) \to \mathcal{S}(\mathbb{R})$, defined by

$$Df(x) := \frac{1}{2\pi i} \frac{d}{dx} f(x). \qquad (1)$$

(The terminology comes from quantum mechanics, where it is customary to also insert a small constant $h$ on the right-hand side of (1) in accordance with de Broglie's law. Such a normalisation is also used in several branches of mathematics, most notably semiclassical analysis and microlocal analysis, where it becomes profitable to consider the semiclassical limit $h \to 0$, but we will not emphasise this perspective here.) The momentum operator can be viewed as the counterpart to the position operator, but in frequency space instead of physical space, since we have the standard identity

$$\widehat{Df}(\xi) = \xi \hat f(\xi)$$

for any $f \in \mathcal{S}(\mathbb{R})$ and $\xi \in \mathbb{R}$. We observe that both operators $X, D$ are formally self-adjoint in the sense that

$$\langle Xf, g \rangle = \langle f, Xg \rangle, \qquad \langle Df, g \rangle = \langle f, Dg \rangle$$

for all $f, g \in \mathcal{S}(\mathbb{R})$, where we use the Hermitian inner product

$$\langle f, g \rangle := \int_{\mathbb{R}} f(x) \overline{g(x)}\, dx.$$
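These identities are easy to test numerically. The following sketch (my own, not part of the notes; the box size, grid, and test functions are arbitrary choices) implements $D$ on the Fourier side, where it acts by multiplication by $\xi$, and verifies the formal self-adjointness relations to machine precision:

```python
import numpy as np

# Discretised line: a grid on [-16, 16), wide and fine enough that the
# Schwartz tails and high frequencies of the test functions are negligible.
Lbox, M = 32.0, 1024
x = (np.arange(M) - M // 2) * (Lbox / M)
dx = Lbox / M
xi = np.fft.fftfreq(M, d=dx)               # frequencies in cycles per unit length

def D(h):
    # D = (1/(2 pi i)) d/dx acts on the Fourier side as multiplication by xi
    return np.fft.ifft(xi * np.fft.fft(h))

def inner(u, v):
    # <u, v> = int u(x) conj(v(x)) dx, by Riemann sum
    return np.sum(u * np.conj(v)) * dx

f = np.exp(-np.pi * x**2) * (1 + 0.3 * x)  # two sample Schwartz functions
g = np.exp(-np.pi * (x - 0.5) ** 2)

assert abs(inner(x * f, g) - inner(f, x * g)) < 1e-10   # <Xf, g> = <f, Xg>
assert abs(inner(D(f), g) - inner(f, D(g))) < 1e-10     # <Df, g> = <f, Dg>
```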
Clearly, for any polynomial $P(x)$ of one real variable (with complex coefficients), the operator $P(X)$ is given by the spatial multiplier operator

$$P(X) f(x) = P(x) f(x),$$

and similarly the operator $P(D)$ is given by the Fourier multiplier operator

$$\widehat{P(D) f}(\xi) = P(\xi) \hat f(\xi).$$
Inspired by this, if $m: \mathbb{R} \to \mathbb{C}$ is any smooth function that obeys the derivative bounds

$$\partial_x^j m(x) = O_{m,j}\left( \langle x \rangle^{O_{m,j}(1)} \right) \qquad (2)$$

for all $j \geq 0$ and $x \in \mathbb{R}$ (that is to say, all derivatives of $m$ grow at most polynomially), then we can define the spatial multiplier operator $m(X): \mathcal{S}(\mathbb{R}) \to \mathcal{S}(\mathbb{R})$ by the formula

$$m(X) f(x) := m(x) f(x);$$

one can easily verify from several applications of the Leibniz rule that $m(X)$ maps Schwartz functions to Schwartz functions. We refer to $m$ as the *symbol* of this spatial multiplier operator. In a similar fashion, we define the Fourier multiplier operator $m(D)$ associated to the symbol $m$ by the formula

$$\widehat{m(D) f}(\xi) := m(\xi) \hat f(\xi).$$

For instance, any constant coefficient linear differential operator $\sum_{k=0}^n c_k \frac{d^k}{dx^k}$ can be written in this notation as

$$\sum_{k=0}^n c_k \frac{d^k}{dx^k} = \sum_{k=0}^n c_k (2\pi i D)^k;$$

however there are many Fourier multiplier operators that are not of this form, such as the fractional derivative operators $\langle D \rangle^s$ for non-integer values of $s$, which is a Fourier multiplier operator with symbol $\langle \xi \rangle^s := (1 + |\xi|^2)^{s/2}$. It is also very common to use spatial cutoffs $\psi(X)$ and Fourier cutoffs $\psi(D)$ for various bump functions $\psi$ to localise functions in either space or frequency; we have seen several examples of such cutoffs in action in previous notes (often in the higher dimensional setting $d > 1$).
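For concreteness, here is a small numerical sketch of a Fourier multiplier in action (my own, not from the notes; grid parameters are arbitrary). It applies the symbol $\langle \xi \rangle^2 = 1 + \xi^2$ via the FFT and checks the result against the hand-computed answer $\langle D \rangle^2 f = f - \frac{1}{4\pi^2} f''$ for a Gaussian:

```python
import numpy as np

Lbox, M = 32.0, 1024
x = (np.arange(M) - M // 2) * (Lbox / M)
dx = Lbox / M
xi = np.fft.fftfreq(M, d=dx)

def fourier_multiplier(m, f):
    """m(D) f, defined by (m(D) f)^(xi) = m(xi) fhat(xi), implemented by FFT."""
    return np.fft.ifft(m(xi) * np.fft.fft(f))

f = np.exp(-np.pi * x**2)

# <D>^2 has symbol 1 + xi^2, so <D>^2 f = f - f''/(4 pi^2); for the Gaussian
# this equals exp(-pi x^2) (1 + 1/(2 pi) - x^2) by direct differentiation.
Jf = np.real(fourier_multiplier(lambda xi: 1 + xi**2, f))
exact = np.exp(-np.pi * x**2) * (1 + 1 / (2 * np.pi) - x**2)
assert np.max(np.abs(Jf - exact)) < 1e-8

# Non-integer orders are just as easy on the Fourier side, e.g. <D>^{1/2}:
half = np.real(fourier_multiplier(lambda xi: (1 + xi**2) ** 0.25, f))
```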

We observe that the maps $m \mapsto m(X)$ and $m \mapsto m(D)$ are ring homomorphisms, thus for instance

$$m_1(X) m_2(X) = (m_1 m_2)(X)$$

and

$$m_1(D) m_2(D) = (m_1 m_2)(D)$$

for any $m_1, m_2$ obeying the derivative bounds (2); also $m(X)$ is formally adjoint to $\overline{m}(X)$ in the sense that

$$\langle m(X) f, g \rangle = \langle f, \overline{m}(X) g \rangle$$

for $f, g \in \mathcal{S}(\mathbb{R})$, and similarly for $m(D)$ and $\overline{m}(D)$. One can interpret these facts as part of the functional calculus of the operators $X, D$, which can be interpreted as densely defined self-adjoint operators on $L^2(\mathbb{R})$. However, in this set of notes we will not develop the spectral theory necessary in order to fully set out this functional calculus rigorously.
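The ring homomorphism property survives discretisation exactly: implemented through the FFT, composing two Fourier multipliers is literally pointwise multiplication of their symbols. A short sketch (my own; the symbols and grid are arbitrary choices):

```python
import numpy as np

Lbox, M = 32.0, 512
x = (np.arange(M) - M // 2) * (Lbox / M)
xi = np.fft.fftfreq(M, d=Lbox / M)

def mD(m, h):
    # Fourier multiplier with symbol values m (an array sampled at the grid xi)
    return np.fft.ifft(m * np.fft.fft(h))

f = np.exp(-np.pi * x**2) * np.cos(2 * np.pi * x)

m1 = 1.0 / (1.0 + xi**2)                 # a decaying symbol
m2 = np.exp(2j * np.pi * 0.3 * xi)       # an oscillating symbol

# m1(D) m2(D) f = (m1 m2)(D) f, up to floating point roundoff:
lhs = mD(m1, mD(m2, f))
rhs = mD(m1 * m2, f)
assert np.max(np.abs(lhs - rhs)) < 1e-10
```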

In the field of PDE and ODE, it is also very common to study *variable coefficient* linear differential operators

$$L = \sum_{k=0}^n c_k(x) \frac{d^k}{dx^k},$$

where the coefficients $c_0, \dots, c_n$ are now functions of the spatial variable $x$ obeying the derivative bounds (2). A simple example is the quantum harmonic oscillator Hamiltonian $H := -\frac{d^2}{dx^2} + x^2$. One can rewrite the operator $L$ in our notation as

$$L = \sum_{k=0}^n c_k(X) (2\pi i D)^k,$$

and so it is natural to interpret this operator as a combination $a(X,D)$ of both the position operator $X$ and the momentum operator $D$, where the *symbol* $a: \mathbb{R} \times \mathbb{R} \to \mathbb{C}$ of this operator is the function

$$a(x,\xi) := \sum_{k=0}^n c_k(x) (2\pi i \xi)^k.$$
Indeed, from the Fourier inversion formula

$$f(x) = \int_{\mathbb{R}} \hat f(\xi) e^{2\pi i x \xi}\, d\xi$$

for any $f \in \mathcal{S}(\mathbb{R})$ we have

$$(2\pi i D)^k f(x) = \int_{\mathbb{R}} (2\pi i \xi)^k \hat f(\xi) e^{2\pi i x \xi}\, d\xi,$$

and hence on multiplying by $c_k(x)$ and summing we have

$$L f(x) = \int_{\mathbb{R}} a(x,\xi) \hat f(\xi) e^{2\pi i x \xi}\, d\xi.$$
Inspired by this, we can introduce the *Kohn-Nirenberg quantisation* by defining the operator $a(X,D): \mathcal{S}(\mathbb{R}) \to \mathcal{S}(\mathbb{R})$ by the formula

$$a(X,D) f(x) := \int_{\mathbb{R}} a(x,\xi) \hat f(\xi) e^{2\pi i x \xi}\, d\xi$$

whenever $f \in \mathcal{S}(\mathbb{R})$ and $a: \mathbb{R} \times \mathbb{R} \to \mathbb{C}$ is any smooth function obeying the derivative bounds

$$\partial_x^j \partial_\xi^l a(x,\xi) = O_{a,j,l}\left( \langle x \rangle^{O_{a,j}(1)} \langle \xi \rangle^{O_{a,j,l}(1)} \right)$$

for all $j, l \geq 0$ and $x, \xi \in \mathbb{R}$ (note carefully that the exponent in $\langle x \rangle$ on the right-hand side is required to be uniform in $l$). This quantisation clearly generalises both the spatial multiplier operators $m(X)$ and the Fourier multiplier operators $m(D)$ defined earlier, which correspond to the cases when the symbol $a(x,\xi)$ is a function of only $x$ or only $\xi$ respectively. Thus we have combined the physical space $\mathbb{R} = \{x\}$ and the frequency space $\mathbb{R} = \{\xi\}$ into a single domain, known as *phase space* $\mathbb{R} \times \mathbb{R} = \{(x,\xi)\}$. The term "time-frequency analysis" encompasses analysis based on decompositions and other manipulations of phase space, in much the same way that "Fourier analysis" encompasses analysis based on decompositions and other manipulations of frequency space. We remark that the Kohn-Nirenberg quantization is not the only choice of quantization one could use; see Remark 19 below.
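The Kohn-Nirenberg formula can be discretised directly by quadrature. In the sketch below (my own, not from the notes; it is $O(M^2)$ and purely illustrative) we apply the symbol $a(x,\xi) = x^2 + \xi^2$, for which a short computation with the formulas above shows that the Gaussian $e^{-\pi x^2}$ is an eigenfunction of $a(X,D) = X^2 + D^2$ with eigenvalue $\frac{1}{2\pi}$:

```python
import numpy as np

# Direct quadrature of a(X,D) f(x) = int a(x, xi) fhat(xi) e^{2 pi i x xi} dxi
# on a grid; dense M x M matrices, so keep M modest.
Lbox, M = 32.0, 512
x = (np.arange(M) - M // 2) * (Lbox / M)
dx = Lbox / M
xi = np.fft.fftshift(np.fft.fftfreq(M, d=dx))   # increasing frequency grid
dxi = 1.0 / Lbox

E = np.exp(2j * np.pi * np.outer(x, xi))        # matrix of e^{2 pi i x xi}

def kohn_nirenberg(a, f):
    fhat = E.conj().T @ f * dx                  # Riemann sum for the Fourier transform
    return (a(x[:, None], xi[None, :]) * E) @ fhat * dxi

# Since D f = i x f and D^2 f = f/(2 pi) - x^2 f for f = exp(-pi x^2),
# we get (X^2 + D^2) f = f/(2 pi): the Gaussian is an eigenfunction.
f = np.exp(-np.pi * x**2)
Hf = kohn_nirenberg(lambda x, xi: x**2 + xi**2, f)
assert np.max(np.abs(Hf - f / (2 * np.pi))) < 1e-8
```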

In principle, the quantisations $a \mapsto a(X,D)$ are potentially very useful for such tasks as inverting variable coefficient linear operators, or to localize a function simultaneously in physical and Fourier space. However, a fundamental difficulty arises: the map from symbols to operators is now no longer a ring homomorphism; in particular

$$(a_1 a_2)(X,D) \neq a_1(X,D)\, a_2(X,D)$$

in general. Fundamentally, this is due to the fact that pointwise multiplication of symbols is a commutative operation, whereas the composition of operators such as $X$ and $D$ need not be commutative. This lack of commutativity can be measured by introducing the *commutator*

$$[A,B] := AB - BA$$

of two operators $A, B$, and noting from the product rule that

$$[X,D] = -\frac{1}{2\pi i} \neq 0.$$

(In the language of Lie groups and Lie algebras, this tells us that $X, D, 1$ are (up to complex constants) the standard Lie algebra generators of the Heisenberg group.) From a quantum mechanical perspective, this lack of commutativity is the root cause of the uncertainty principle that prevents one from simultaneously localizing in both position and momentum past a certain point. Here is one basic way of formalising this principle:

Exercise 2 (Heisenberg uncertainty principle) For any $f \in \mathcal{S}(\mathbb{R})$ and $x_0, \xi_0 \in \mathbb{R}$, show that

$$\| (X - x_0) f \|_{L^2(\mathbb{R})} \| (D - \xi_0) f \|_{L^2(\mathbb{R})} \geq \frac{1}{4\pi} \|f\|_{L^2(\mathbb{R})}^2.$$

(Hint: evaluate the expression $\langle [X - x_0, D - \xi_0] f, f \rangle$ in two different ways and apply the Cauchy-Schwarz inequality.) Informally, this exercise asserts that the spatial uncertainty $\Delta x$ and the frequency uncertainty $\Delta \xi$ of a function obey the Heisenberg uncertainty relation $\Delta x \cdot \Delta \xi \gtrsim 1$.
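Both the commutator identity and the extremising role of Gaussians can be checked numerically. The following sketch (my own, not part of the notes; box size and test functions are arbitrary) uses a spectral implementation of $D$:

```python
import numpy as np

Lbox, M = 32.0, 1024
x = (np.arange(M) - M // 2) * (Lbox / M)
dx = Lbox / M
xi = np.fft.fftfreq(M, d=dx)

D = lambda h: np.fft.ifft(xi * np.fft.fft(h))          # D = (1/(2 pi i)) d/dx
norm = lambda h: np.sqrt(np.sum(np.abs(h) ** 2) * dx)  # L^2 norm, Riemann sum

# The product rule gives [X, D] = -1/(2 pi i) times the identity:
f = np.exp(-np.pi * x**2) * np.cos(2 * np.pi * x)
comm = x * D(f) - D(x * f)
assert np.max(np.abs(comm - (-f / (2j * np.pi)))) < 1e-8

# Uncertainty with x0 = xi0 = 0: ||Xf|| ||Df|| >= ||f||^2 / (4 pi),
# with equality exactly for the Gaussian exp(-pi x^2):
g = np.exp(-np.pi * x**2)
assert abs(norm(x * g) * norm(D(g)) - norm(g) ** 2 / (4 * np.pi)) < 1e-8

# A non-Gaussian test function (a first Hermite function) is strictly above:
h = x * np.exp(-np.pi * x**2)
assert norm(x * h) * norm(D(h)) > 2 * norm(h) ** 2 / (4 * np.pi)
```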

Nevertheless, one still has the *correspondence principle*, which asserts that in certain regimes (which, with our choice of normalisations, corresponds to the high-frequency regime), quantum mechanics continues to behave like a commutative theory, and one can sometimes proceed as if the operators $X, D$ (and the various operators $a(X,D)$ constructed from them) commute up to "lower order" errors. This can be formalised using the *pseudodifferential calculus*, which we give below the fold, in which we restrict the symbol $a$ to certain "symbol classes" of various orders (which then restricts $a(X,D)$ to be pseudodifferential operators of various orders), and obtain approximate identities such as

$$a_1(X,D)\, a_2(X,D) \approx (a_1 a_2)(X,D),$$

where the error between the left and right-hand sides is of "lower order" and in fact enjoys a useful asymptotic expansion. As a first approximation to this calculus, one can think of functions $f \in \mathcal{S}(\mathbb{R})$ as having some sort of "phase space portrait" which somehow combines the physical space representation $f(x)$ with its Fourier representation $\hat f(\xi)$, and pseudodifferential operators $a(X,D)$ behave approximately like "phase space multiplier operators" in this representation, in the sense that the phase space portrait of $a(X,D) f$ is approximately the symbol $a$ times the phase space portrait of $f$.

Unfortunately the uncertainty principle (or the non-commutativity of $X$ and $D$) prevents us from making these approximations perfectly precise, and it is not always clear how to even define a phase space portrait of a function precisely (although there are certain popular candidates for such a portrait, such as the FBI transform (also known as the Gabor transform in the signal processing literature), or the Wigner quasiprobability distribution, each of which has some advantages and disadvantages). Nevertheless even if the concept of a phase space portrait is somewhat fuzzy, it is of great conceptual benefit both within mathematics and outside of it. For instance, the musical score one assigns a piece of music can be viewed as a phase space portrait of the sound waves generated by that music.
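As a toy illustration of such a portrait (my own sketch, not from the notes; the chirp and window parameters are arbitrary), one can compute Gabor coefficients of a chirp signal and observe that they concentrate along the instantaneous-frequency line in phase space, in much the same way that a musical score tracks pitch against time:

```python
import numpy as np

# Gabor (windowed Fourier) coefficients as a crude "phase space portrait".
M = 2048
t = np.arange(M) / M
f = np.cos(2 * np.pi * (50 * t + 200 * t**2))   # chirp: inst. frequency 50 + 400 t

def gabor(f, t0, xi0, sigma=0.02):
    """|<f, window at (t0, xi0)>|: energy of f near that point of phase space."""
    window = np.exp(-np.pi * ((t - t0) / sigma) ** 2)
    return abs(np.sum(f * window * np.exp(-2j * np.pi * xi0 * t)) / M)

# The portrait concentrates along the line xi = 50 + 400 t:
on_line = gabor(f, 0.5, 50 + 400 * 0.5)    # at (t, xi) = (0.5, 250)
off_line = gabor(f, 0.5, 50)               # same time, wrong frequency
assert on_line > 5 * off_line
```

Plotting `gabor` over a grid of $(t_0, \xi_0)$ gives the familiar spectrogram picture: a single bright line of slope $400$.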

To complement the pseudodifferential calculus we have the basic *Calderón-Vaillancourt theorem*, which asserts that pseudodifferential operators of order zero are Calderón-Zygmund operators and thus bounded on $L^p(\mathbb{R})$ for $1 < p < \infty$. The standard proof of this theorem is a classic application of one of the basic techniques in harmonic analysis, namely the exploitation of *almost orthogonality*; the proof we will give here will achieve this through the elegant device of the Cotlar-Stein lemma.

Pseudodifferential operators (especially when generalised to higher dimensions $d \geq 1$) are a fundamental tool in the theory of linear PDE, as well as related fields such as semiclassical analysis, microlocal analysis, and geometric quantisation. There is an even wider class of operators that is also of interest, namely the Fourier integral operators, which roughly speaking not only approximately multiply the phase space portrait of a function by some multiplier $a$, but also move the portrait around by a canonical transformation. However, the development of the theory of these operators is beyond the scope of these notes; see for instance the texts of Hörmander or Eskin.

This set of notes is only the briefest introduction to the theory of pseudodifferential operators. Many texts are available that cover the theory in more detail, for instance this text of Taylor.
