
I’ve just finished writing the first draft of my third book coming out of the 2010 blog posts, namely “Higher order Fourier analysis”, which was based primarily on my graduate course on the topic, though it also contains material from some additional posts related to linear and higher order Fourier analysis on the blog.  It is available online here.  As usual, comments and corrections are welcome.  There is also a stub page for the book, which at present does not contain much more than the above link.

We now give a basic application of Fourier analysis to the problem of counting additive patterns in sets, namely the following famous theorem of Roth:

Theorem 1 (Roth’s theorem) Let ${A}$ be a subset of the integers ${{\bf Z}}$ whose upper density

$\displaystyle \overline{\delta}(A) := \limsup_{N \rightarrow \infty} \frac{|A \cap [-N,N]|}{2N+1}$

is positive. Then ${A}$ contains infinitely many arithmetic progressions ${a, a+r, a+2r}$ of length three, with ${a \in {\bf Z}}$ and ${r>0}$.
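To see the conclusion of the theorem in action on small examples, here is a brute-force sketch of our own (the function name `count_3aps` and the sample sets are hypothetical illustrations, not part of the argument) that counts three-term progressions inside a finite window:

```python
# Illustrative brute force: count progressions a, a+r, a+2r with r > 0
# lying entirely inside a finite set A.  (Our own demo code.)

def count_3aps(A):
    """Count triples (a, a+r, a+2r) with r > 0 contained in A."""
    S = set(A)
    count = 0
    for a in S:
        for b in S:
            r = b - a
            if r > 0 and a + 2 * r in S:
                count += 1
    return count

# A dense set (the even numbers below 100) contains many progressions,
# consistent with what Roth's theorem predicts for positive density.
print(count_3aps(range(0, 100, 2)))   # 600 progressions

# The powers of 2 have upper density zero and in fact contain none:
# 2^i + 2^k = 2^(j+1) with i < j < k is impossible.
print(count_3aps([2**k for k in range(20)]))   # 0 progressions
```

Of course, the theorem is only non-trivial for infinite sets of positive density; the finite computation is merely a sanity check on the definitions.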

This is the first non-trivial case of Szemerédi’s theorem, which is the same assertion but with length three arithmetic progressions replaced by progressions of length ${k}$ for any ${k}$.
As it turns out, one can prove Roth’s theorem by an application of linear Fourier analysis – by comparing the set ${A}$ (or more precisely, the indicator function ${1_A}$ of that set, or of pieces of that set) against linear characters ${n \mapsto e(\alpha n)}$ for various frequencies ${\alpha \in {\bf R}/{\bf Z}}$. There are two extreme cases to consider (which are model examples of a more general dichotomy between structure and randomness). One is when ${A}$ aligns almost completely with one of these linear characters, for instance by being a Bohr set of the form

$\displaystyle \{ n \in {\bf Z}: \| \alpha n - \theta \|_{{\bf R}/{\bf Z}} < \varepsilon \}$

or more generally of the form $\displaystyle \{ n \in {\bf Z}: \alpha n \in U \}$

for some multi-dimensional frequency ${\alpha \in {\bf T}^d}$ and some open set ${U}$. In this case, arithmetic progressions can be located using the equidistribution theory of the previous set of notes. At the other extreme, one has Fourier-uniform or Fourier-pseudorandom sets, whose correlation with any linear character is negligible. In this case, arithmetic progressions can be produced in abundance via a Fourier-analytic calculation.
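This dichotomy can be observed numerically. The following sketch (our own illustration; the modulus ${N=1024}$, the frequency, and the width ${1/6}$ are arbitrary choices) compares the largest nontrivial Fourier coefficient of the indicator of a Bohr-type set in ${{\bf Z}/N{\bf Z}}$ with that of a random set of the same density:

```python
# Our own illustration (not from the text): the normalized Fourier
# coefficients of 1_A on Z/NZ are (1/N) * sum_n 1_A(n) e(-xi n / N).
# A Bohr-type set has one very large nontrivial coefficient, while a
# random set of the same density correlates with no character.
import numpy as np

def fourier_coeffs(A, N):
    """Normalized discrete Fourier transform of the indicator 1_A on Z/NZ."""
    f = np.zeros(N)
    f[np.asarray(A)] = 1.0
    return np.fft.fft(f) / N

N = 1024
alpha = 7  # frequency defining the Bohr-type set {n : ||7n/N|| < 1/6}
frac = (alpha * np.arange(N) / N) % 1.0
bohr = np.nonzero(np.minimum(frac, 1.0 - frac) < 1 / 6)[0]

rng = np.random.default_rng(0)
rand = rng.choice(N, size=len(bohr), replace=False)  # same density

# Largest correlation with a nontrivial character (frequencies 1..N-1):
bias_bohr = np.abs(fourier_coeffs(bohr, N)[1:]).max()
bias_rand = np.abs(fourier_coeffs(rand, N)[1:]).max()
print(bias_bohr, bias_rand)  # the structured set is far more Fourier-biased
```

The Bohr set’s bias sits at the defining frequency (here about ${0.28}$, close to ${\sin(\pi/3)/\pi}$), whereas the random set’s coefficients are all of size roughly ${N^{-1/2}}$.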
To handle the general case, one must somehow combine the argument that deals with the structured case with the argument that deals with the random case. There are several known ways to do this, but they can be basically classified into two general methods, namely the density increment argument (or ${L^\infty}$ increment argument) and the energy increment argument (or ${L^2}$ increment argument).
The idea behind the density increment argument is to introduce a dichotomy: either the object ${A}$ being studied is pseudorandom (in which case one is done), or else one can use the theory of the structured objects to locate a sub-object of significantly higher “density” than the original object. As the density cannot exceed one, one should thus be done after a finite number of iterations of this dichotomy. This argument was introduced by Roth in his original proof of the above theorem.
The idea behind the energy increment argument is instead to decompose the original object ${A}$ into two pieces (and, sometimes, a small additional error term): a structured component that captures all the structured objects that have significant correlation with ${A}$, and a pseudorandom component which has no significant correlation with any structured object. This decomposition usually proceeds by trying to maximise the “energy” (or ${L^2}$ norm) of the structured component, or dually by trying to minimise the energy of the residual between the original object and the structured object. This argument appears for instance in the proof of the Szemerédi regularity lemma (which, not coincidentally, can also be used to prove Roth’s theorem), and is also implicit in the ergodic theory approach to such problems (through the machinery of conditional expectation relative to a factor, which is a type of orthogonal projection, the existence of which is usually established via an energy increment argument). However, one can also deploy the energy increment argument in the Fourier analytic setting, to give an alternate Fourier-analytic proof of Roth’s theorem that differs in some ways from the density increment proof.
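In the Fourier-analytic setting, the decomposition just described can be sketched numerically (again an illustration of our own, with the threshold and all parameters chosen arbitrarily): the structured component retains the frequencies at which the function has a large Fourier coefficient, the residual is then Fourier-pseudorandom by construction, and the energies of the two pieces add up, as Plancherel’s theorem guarantees:

```python
# Our own numerical sketch of an energy decomposition f = f_str + f_psr:
# f_str retains the frequencies where the Fourier coefficient of f is
# large (the "structured" component), and f_psr is the residual.  The
# two pieces have disjoint Fourier support, so by Plancherel their
# energies sum to the energy of f, and every Fourier coefficient of
# f_psr lies below the threshold.
import numpy as np

def energy_decompose(f, threshold):
    """Split f into large-spectrum and small-spectrum components."""
    N = len(f)
    fhat = np.fft.fft(f) / N
    large = np.abs(fhat) >= threshold            # the "large spectrum"
    f_str = np.fft.ifft(np.where(large, fhat * N, 0.0)).real
    return f_str, f - f_str

def energy(g):
    """Mean-square (normalized L^2) energy of g."""
    return float(np.mean(g ** 2))

N = 1024
rng = np.random.default_rng(1)
# An indicator function that is part Bohr-structured, part random noise:
frac = (5 * np.arange(N) / N) % 1.0
f = ((np.minimum(frac, 1.0 - frac) < 0.25) | (rng.random(N) < 0.1)).astype(float)

f_str, f_psr = energy_decompose(f, threshold=0.05)

# Plancherel: energy(f) == energy(f_str) + energy(f_psr), up to rounding.
print(energy(f), energy(f_str) + energy(f_psr))
# The residual is Fourier-pseudorandom: all its coefficients are small.
print(np.abs(np.fft.fft(f_psr) / N).max())  # below the 0.05 threshold
```

The actual energy increment argument iterates a refinement of this step, enlarging the structured component until the residual is uniform enough for the counting to go through; the sketch above only exhibits a single decomposition.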
In these notes we give two Fourier-analytic proofs of Roth’s theorem, one proceeding via the density increment argument, and the other by the energy increment argument. As it turns out, both of these arguments extend to establish Szemerédi’s theorem, and more generally to the counting of other types of patterns, but this is non-trivial (requiring some sort of inverse conjecture for the Gowers uniformity norms in both cases); we will discuss this further in later notes.

This week I am in Boston, giving this year’s Simons lectures at MIT together with David Donoho. (These lectures, incidentally, are endowed by Jim Simons, who was mentioned in some earlier discussion here.) While preparing these lectures, it occurred to me that I may as well post my lecture notes on this blog, since this medium is essentially just an asynchronous version of a traditional lecture series, and the hypertext capability is in some ways more convenient and informal than, say, $\LaTeX$ slides.

I am giving three lectures, each expounding on some aspects of the theme “the dichotomy between structure and randomness”, which I also spoke about (and wrote about) for the ICM last August. This theme seems to pervade many of the areas of mathematics that I work in, and my lectures aim to explore how this theme manifests itself in several of these. In this, the first lecture, I describe the dichotomy as it appears in Fourier analysis and in number theory. (In the second, I discuss the dichotomy in ergodic theory and graph theory, while in the third, I discuss PDE.)