
This set of notes discusses aspects of one of the oldest questions in Fourier analysis, namely the nature of convergence of Fourier series.

If ${f: {\bf R}/{\bf Z} \rightarrow {\bf C}}$ is an absolutely integrable function, its Fourier coefficients ${\hat f: {\bf Z} \rightarrow {\bf C}}$ are defined by the formula

$\displaystyle \hat f(n) := \int_{{\bf R}/{\bf Z}} f(x) e^{-2\pi i nx}\ dx.$

If ${f}$ is smooth, then the Fourier coefficients ${\hat f}$ are absolutely summable, and we have the Fourier inversion formula

$\displaystyle f(x) = \sum_{n \in {\bf Z}} \hat f(n) e^{2\pi i nx}$

where the series here is uniformly convergent. In particular, if we define the partial summation operators

$\displaystyle S_N f(x) := \sum_{|n| \leq N} \hat f(n) e^{2\pi i nx}$

then ${S_N f}$ converges uniformly to ${f}$ when ${f}$ is smooth.
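This convergence is easy to observe numerically. The following minimal sketch (the smooth test function ${f(x) = e^{\cos 2\pi x}}$ is an arbitrary illustrative choice, and the integrals are approximated by Riemann sums on a fine grid) computes the coefficients ${\hat f(n)}$ and checks that the sup-norm error of ${S_N f}$ decays rapidly in ${N}$:

```python
import numpy as np

# Riemann-sum approximation of the Fourier coefficients and partial sums
# for the smooth function f(x) = exp(cos(2*pi*x)) on R/Z.
M = 4096                                # grid points on R/Z
x = np.arange(M) / M
f = np.exp(np.cos(2 * np.pi * x))

def fourier_coeff(n):
    # \hat f(n) = int_{R/Z} f(x) e^{-2 pi i n x} dx, via a Riemann sum
    return np.mean(f * np.exp(-2j * np.pi * n * x))

def partial_sum(N):
    # S_N f(x) = sum_{|n| <= N} \hat f(n) e^{2 pi i n x}
    return sum(fourier_coeff(n) * np.exp(2j * np.pi * n * x)
               for n in range(-N, N + 1))

errors = [np.max(np.abs(partial_sum(N) - f)) for N in (2, 4, 8)]
print(errors)  # sup-norm errors; they shrink rapidly since f is smooth
```

For this ${f}$ the coefficients are modified Bessel values ${\hat f(n) = I_n(1)}$, which decay faster than any polynomial, so a handful of modes already gives uniform accuracy.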

What if ${f}$ is not smooth, but merely lies in an ${L^p({\bf R}/{\bf Z})}$ class for some ${1 \leq p \leq \infty}$? The Fourier coefficients ${\hat f}$ remain well-defined, as do the partial summation operators ${S_N}$. The question of convergence in norm is relatively easy to settle:

Exercise 1
• (i) If ${1 < p < \infty}$ and ${f \in L^p({\bf R}/{\bf Z})}$, show that ${S_N f}$ converges in ${L^p({\bf R}/{\bf Z})}$ norm to ${f}$. (Hint: first use the boundedness of the Hilbert transform to show that ${S_N}$ is bounded in ${L^p({\bf R}/{\bf Z})}$ uniformly in ${N}$.)
• (ii) If ${p=1}$ or ${p=\infty}$, show that there exists ${f \in L^p({\bf R}/{\bf Z})}$ such that the sequence ${S_N f}$ is unbounded in ${L^p({\bf R}/{\bf Z})}$ (so in particular it certainly does not converge in ${L^p({\bf R}/{\bf Z})}$ norm to ${f}$). (Hint: first show that ${S_N}$ is not bounded in ${L^p({\bf R}/{\bf Z})}$ uniformly in ${N}$, then apply the uniform boundedness principle in the contrapositive.)
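The unboundedness in part (ii) can be seen concretely (this is a numerical illustration, not the exercise's intended argument): ${S_N f = D_N * f}$ where ${D_N(x) = \sin(\pi(2N+1)x)/\sin(\pi x)}$ is the Dirichlet kernel, and the operator norm of ${S_N}$ on ${L^1}$ (and on ${L^\infty}$) equals ${\|D_N\|_{L^1}}$, which grows like ${\frac{4}{\pi^2}\log N}$:

```python
import numpy as np

# Estimate the L^1 norm of the Dirichlet kernel D_N by a midpoint rule
# (midpoints avoid the removable singularity at x = 0).
M = 200000
x = (np.arange(M) + 0.5) / M

def dirichlet_l1(N):
    D = np.sin(np.pi * (2 * N + 1) * x) / np.sin(np.pi * x)
    return np.mean(np.abs(D))           # approximates int_0^1 |D_N(x)| dx

norms = [dirichlet_l1(N) for N in (10, 100, 1000)]
print(norms)  # grows without bound, roughly like (4/pi^2) log N + O(1)
```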

The question of pointwise almost everywhere convergence turned out to be a significantly harder problem:

Theorem 2 (Pointwise almost everywhere convergence)
• (i) (Kolmogorov, 1923) There exists ${f \in L^1({\bf R}/{\bf Z})}$ such that ${S_N f(x)}$ is unbounded in ${N}$ for almost every ${x}$.
• (ii) (Carleson, 1966; conjectured by Lusin, 1913) For every ${f \in L^2({\bf R}/{\bf Z})}$, ${S_N f(x)}$ converges to ${f(x)}$ as ${N \rightarrow \infty}$ for almost every ${x}$.
• (iii) (Hunt, 1967) For every ${1 < p \leq \infty}$ and ${f \in L^p({\bf R}/{\bf Z})}$, ${S_N f(x)}$ converges to ${f(x)}$ as ${N \rightarrow \infty}$ for almost every ${x}$.

Note from Hölder’s inequality that ${L^2({\bf R}/{\bf Z})}$ contains ${L^p({\bf R}/{\bf Z})}$ for all ${p\geq 2}$, so Carleson’s theorem covers the ${p \geq 2}$ case of Hunt’s theorem. We remark that the precise threshold near ${L^1}$ between Kolmogorov-type divergence results and Carleson-Hunt pointwise convergence results, in the category of Orlicz spaces, is still an active area of research; see this paper of Lie for further discussion.

Carleson’s theorem in particular was a surprisingly difficult result, lying just out of reach of classical methods (as we shall see later, the result is much easier if we smooth either the function ${f}$ or the summation method ${S_N}$ by a tiny bit). Nowadays we realise that the reason for this is that Carleson’s theorem essentially contains a frequency modulation symmetry in addition to the more familiar translation symmetry and dilation symmetry. This basically rules out the possibility of attacking Carleson’s theorem with tools such as Calderón-Zygmund theory or Littlewood-Paley theory, which respect the latter two symmetries but not the former. Instead, tools from “time-frequency analysis” that essentially respect all three symmetries should be employed. We will illustrate this by giving a relatively short proof of Carleson’s theorem due to Lacey and Thiele. (There are other proofs of Carleson’s theorem, including Carleson’s original proof, its modification by Hunt, and a later time-frequency proof by Fefferman; see Remark 18 below.)
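The modulation symmetry mentioned above is easy to verify in a discrete model (the random test vector below is arbitrary): modulating ${f}$ by ${e^{2\pi i k x}}$ shifts its Fourier coefficients by ${k}$, which slides the frequency cutoff ${|n| \leq N}$ of ${S_N}$ off-centre. This is the symmetry that Calderón-Zygmund and Littlewood-Paley tools fail to respect.

```python
import numpy as np

# Modulation in physical space shifts Fourier coefficients: for the DFT,
# hat(e^{2 pi i k x} f)(n) = hat f(n - k)  (indices mod M).
M, k = 64, 5
x = np.arange(M) / M
rng = np.random.default_rng(1)
f = rng.standard_normal(M)
fhat = np.fft.fft(f)
mod_fhat = np.fft.fft(f * np.exp(2j * np.pi * k * x))
print(np.max(np.abs(mod_fhat - np.roll(fhat, k))))  # ~ 0: shift by k
```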

In these notes we lay out the basic theory of the Fourier transform, which is of course the most fundamental tool in harmonic analysis and also of major importance in related fields (functional analysis, complex analysis, PDE, number theory, additive combinatorics, representation theory, signal processing, etc.). The Fourier transform, in conjunction with the Fourier inversion formula, allows one to take essentially arbitrary (complex-valued) functions on a group ${G}$ (or more generally, a space ${X}$ that ${G}$ acts on, e.g. a homogeneous space ${G/H}$), and decompose them as a (discrete or continuous) superposition of much more symmetric functions on the domain, such as characters ${\chi: G \rightarrow S^1}$; the precise superposition is given by Fourier coefficients ${\hat f(\xi)}$, which take values in some dual object such as the Pontryagin dual ${\hat G}$ of ${G}$. Characters behave in a very simple manner with respect to translation (indeed, they are eigenfunctions of the translation action), and so the Fourier transform tends to simplify any mathematical problem which enjoys a translation invariance symmetry (or an approximation to such a symmetry), and is somehow “linear” (i.e. it interacts nicely with superpositions). In particular, Fourier analytic methods are particularly useful for studying operations such as convolution ${f, g \mapsto f*g}$ and set-theoretic addition ${A, B \mapsto A+B}$, or the closely related problem of counting solutions to additive problems such as ${x = a_1 + a_2 + a_3}$ or ${x = a_1 - a_2}$, where ${a_1, a_2, a_3}$ are constrained to lie in specific sets ${A_1, A_2, A_3}$. The Fourier transform is also a particularly powerful tool for solving constant-coefficient linear ODE and PDE (because of the translation invariance), and can also approximately solve some variable-coefficient (or slightly non-linear) equations if the coefficients vary smoothly enough and the nonlinear terms are sufficiently tame.
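The counting principle just described can be sketched in the finite cyclic group ${{\bf Z}/N{\bf Z}}$ (the sets ${A_1, A_2}$ below are arbitrary illustrative choices): the number of solutions to ${x = a_1 + a_2}$ is the convolution ${1_{A_1} * 1_{A_2}(x)}$, which the Fourier transform turns into a pointwise product.

```python
import numpy as np

# Count representations x = a1 + a2 in Z/NZ via the convolution theorem:
# the DFT of 1_{A1} * 1_{A2} is the pointwise product of the DFTs.
N = 16
A1, A2 = {1, 3, 4}, {0, 2, 5, 7}
ind1, ind2 = np.zeros(N), np.zeros(N)
for a in A1: ind1[a] = 1
for a in A2: ind2[a] = 1

counts = np.round(np.fft.ifft(np.fft.fft(ind1) * np.fft.fft(ind2)).real).astype(int)

# brute-force check of the same count
brute = [sum(1 for a in A1 for b in A2 if (a + b) % N == x) for x in range(N)]
print(list(counts))  # matches the brute-force count
```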

The Fourier transform ${\hat f(\xi)}$ also provides an important new way of looking at a function ${f(x)}$, as it highlights the distribution of ${f}$ in frequency space (the domain of the frequency variable ${\xi}$) rather than physical space (the domain of the physical variable ${x}$). A given property of ${f}$ in the physical domain may be transformed to a rather different-looking property of ${\hat f}$ in the frequency domain. For instance:

• Smoothness of ${f}$ in the physical domain corresponds to decay of ${\hat f}$ in the Fourier domain, and conversely. (More generally, fine scale properties of ${f}$ tend to manifest themselves as coarse scale properties of ${\hat f}$, and conversely.)
• Convolution in the physical domain corresponds to pointwise multiplication in the Fourier domain, and conversely.
• Constant coefficient differential operators such as ${d/dx}$ in the physical domain correspond to multiplication by polynomials such as ${2\pi i \xi}$ in the Fourier domain, and conversely.
• More generally, translation invariant operators in the physical domain correspond to multiplication by symbols in the Fourier domain, and conversely.
• Rescaling in the physical domain by an invertible linear transformation corresponds to rescaling in the Fourier domain by the adjoint of the inverse transformation.
• Restriction to a subspace (or subgroup) in the physical domain corresponds to projection to the dual quotient space (or quotient group) in the Fourier domain, and conversely.
• Frequency modulation in the physical domain corresponds to translation in the frequency domain, and conversely.

(We will make these statements more precise below.)
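The differentiation correspondence in the list above can be checked numerically, with the discrete Fourier transform standing in for the Fourier series (the test function ${\sin 2\pi x}$ is an arbitrary choice):

```python
import numpy as np

# Spectral differentiation on the torus: d/dx becomes multiplication
# by 2*pi*i*n on the Fourier side.
M = 256
x = np.arange(M) / M
f = np.sin(2 * np.pi * x)
n = np.fft.fftfreq(M, d=1.0 / M)        # integer frequencies n
df = np.fft.ifft(2j * np.pi * n * np.fft.fft(f)).real
exact = 2 * np.pi * np.cos(2 * np.pi * x)
print(np.max(np.abs(df - exact)))       # tiny: exact for band-limited f
```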

On the other hand, some operations in the physical domain remain essentially unchanged in the Fourier domain. Most importantly, the ${L^2}$ norm (or energy) of a function ${f}$ is the same as that of its Fourier transform, and more generally the inner product ${\langle f, g \rangle}$ of two functions ${f, g}$ is the same as that of their Fourier transforms. Indeed, the Fourier transform is a unitary operator on ${L^2}$ (a fact which is variously known as the Plancherel theorem or the Parseval identity). This makes it easier to pass back and forth between the physical domain and frequency domain, so that one can combine techniques that are easy to execute in the physical domain with other techniques that are easy to execute in the frequency domain. (In fact, one can combine the physical and frequency domains together into a product domain known as phase space, and there are entire fields of mathematics (e.g. microlocal analysis, geometric quantisation, time-frequency analysis) devoted to performing analysis on these sorts of spaces directly, but this is beyond the scope of this course.)
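The Plancherel/Parseval identity has a clean discrete analogue (a sketch with arbitrary random vectors): with the unitary normalisation (`norm="ortho"` in numpy), the DFT preserves both the ${\ell^2}$ norm and inner products.

```python
import numpy as np

# Unitarity of the normalised DFT: norms and inner products are preserved.
rng = np.random.default_rng(0)
f = rng.standard_normal(128) + 1j * rng.standard_normal(128)
g = rng.standard_normal(128) + 1j * rng.standard_normal(128)
F = np.fft.fft(f, norm="ortho")
G = np.fft.fft(g, norm="ortho")
print(abs(np.linalg.norm(f) - np.linalg.norm(F)))   # ~ 0 (Plancherel)
print(abs(np.vdot(g, f) - np.vdot(G, F)))           # ~ 0 (Parseval)
```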

In these notes, we briefly discuss the general theory of the Fourier transform, but will mainly focus on the two classical domains for Fourier analysis: the torus ${{\Bbb T}^d := ({\bf R}/{\bf Z})^d}$, and the Euclidean space ${{\bf R}^d}$. For these domains one has the advantage of being able to perform very explicit algebraic calculations, involving concrete functions such as plane waves ${x \mapsto e^{2\pi i x \cdot \xi}}$ or Gaussians ${x \mapsto A^{d/2} e^{-\pi A |x|^2}}$.
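As a small numerical sketch of the Gaussian computation alluded to above (with ${A = 1}$, ${d = 1}$, so that the Gaussian ${e^{-\pi x^2}}$ is its own Fourier transform; the grid and test frequencies below are arbitrary choices), one can approximate the defining integral by a Riemann sum:

```python
import numpy as np

# Check that the Fourier transform of exp(-pi x^2) at frequency xi
# is exp(-pi xi^2), by a Riemann sum over a wide grid.
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
errs = []
for xi in (0.0, 0.5, 1.5):
    approx = np.sum(np.exp(-np.pi * x**2) * np.exp(-2j * np.pi * x * xi)) * dx
    errs.append(abs(approx - np.exp(-np.pi * xi**2)))
print(errs)  # ~ 0 for each xi: the Gaussian is self-dual
```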