I’d like to begin today by welcoming Timothy Gowers to the mathematics blogging community; Tim’s blog will also double as the “official” blog for the Princeton Companion to Mathematics, as indicated by his first post, which also contains links to further material (such as sample articles) on the Companion. Tim is already thinking beyond the blog medium, though, as you can see in his second post…
Anyway, this gives me an excuse to continue my own series of PCM articles. Some years back, Tim asked me to write a longer article on harmonic analysis – the quantitative study of oscillation, transforms, and other features of functions and sets on domains. At the time I did not fully understand the theme of the Companion, and wrote a rather detailed and technical survey of the subject, which turned out to be totally unsuitable for the Companion. I then went back and rewrote the article from scratch, leading to this article, which (modulo some further editing) is close to what will actually appear. (These two articles were already available on my web site, but not in a particularly prominent manner.) So, as you can see, the articles in the Companion are not exactly at the same level as the expository survey articles one sees published in journals.
I should also mention that some other authors for the Companion have put their articles on-line. For instance, Alain Connes‘ PCM article “Advice for the beginner“, aimed at graduate students just starting out in research mathematics, was in fact already linked to on one of the pages of this blog. I’ll try to point out links to other PCM articles in future posts in this series.
One of the oldest and most fundamental concepts in mathematics is the line. Depending on exactly what mathematical structures we want to study (algebraic, geometric, topological, order-theoretic, etc.), we model lines nowadays by a variety of standard mathematical objects, such as the real line $\mathbb{R}$, the complex line $\mathbb{C}$, the projective line, the extended real line $[-\infty,+\infty]$, the affine line, the continuum, the long line, etc. We also have discrete versions of the line, such as the natural numbers $\mathbb{N}$, the integers $\mathbb{Z}$, and the ordinal $\omega$, as well as compact versions of the line, such as the unit interval $[0,1]$ or the unit circle $\mathbb{R}/\mathbb{Z}$. Finally we have discrete and compact versions of the line, such as the cyclic groups $\mathbb{Z}/N\mathbb{Z}$ and the discrete intervals $\{1,\dots,N\}$ and $\{0,1,\dots,N-1\}$. By taking Cartesian products we then obtain higher-dimensional objects such as Euclidean space $\mathbb{R}^n$, the standard lattice $\mathbb{Z}^n$, the standard torus $(\mathbb{R}/\mathbb{Z})^n$, and so forth. These objects of course form the background on which a very large fraction of modern mathematics is set.
Broadly speaking, the line has three major families of structures on it:
- Geometric structures, such as a metric or a measure, completeness, scales (coarse and fine), rigid motions (translations and reflections), similarities (dilations, affine maps), and differential structures (tangent bundle, etc.);
- Algebraic structures, such as group, ring, or field structures, and everything else that comes from those categories (e.g. subgroups, homomorphisms, involutions, etc.); and
- One-dimensional structures, such as order, a length space structure (in particular, path-connectedness structure), a singleton generator, the Archimedean property, the ability to use mathematical induction (i.e. well-ordering), convexity, or the ability to disconnect the line by removing a single point.
Of course, these structures are inter-related, and it is an important phenomenon that a mathematical concept which appears to be native to one structure can often be equivalently defined in terms of other structures. For instance, the absolute value $|n|$ of an integer $n$ can be defined geometrically as the distance from $0$ to $n$, algebraically as the index of the subgroup $n\mathbb{Z}$ of the integers generated by $n$, or one-dimensionally as the number of integers between $0$ and $n$ (including $0$, but excluding $n$). This equivalence of definitions becomes important when one wants to work in more general contexts in which one or more of the above structures is missing or otherwise weakened.
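One can check this equivalence of definitions quite concretely. The following sketch (an illustration of my own; the function names are invented for this example) computes the absolute value of a nonzero integer in each of the three ways and verifies that they agree:

```python
def abs_geometric(n):
    # geometric definition: the distance from n to 0 on the integer line
    return max(n, -n)

def abs_algebraic(n):
    # algebraic definition (for n != 0): the index of the subgroup nZ in Z,
    # found as the smallest positive integer lying in nZ
    k = 1
    while k % n != 0:
        k += 1
    return k

def abs_one_dimensional(n):
    # one-dimensional definition: count the integers from 0 (inclusive)
    # towards n (exclusive), in either direction
    return sum(1 for _ in range(0, n, 1 if n > 0 else -1))

# the three definitions agree (the algebraic one needs n != 0,
# since the subgroup {0} has infinite index)
for n in list(range(-5, 0)) + list(range(1, 6)):
    assert abs_geometric(n) == abs_algebraic(n) == abs_one_dimensional(n) == abs(n)
```

The point of the exercise is that each function uses only one of the three families of structure: `max` and negation (geometry), divisibility (algebra), and counting along an order (one-dimensionality).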
What I want to talk about today is an important toy model for the line (in any of its incarnations), in which the geometric and algebraic structures are enhanced (and become neatly nested and recursive), at the expense of the one-dimensional structure (which is largely destroyed). This model has many different names, depending on what field of mathematics one is working in and which structures one is interested in. In harmonic analysis it is called the dyadic model, the Walsh model, or the Cantor group model; in number theory and arithmetic geometry it is known as the function field model; in topology it is the Cantor space model; in probability it is the martingale model; in metric geometry it is the ultrametric, tree, or non-Archimedean model; in algebraic geometry it is the Puiseux series model; in additive combinatorics it is the bounded torsion or finite field model; in computer science and information theory it is the Hamming cube model; in representation theory it is the Kashiwara crystal model. Let me arbitrarily select one of these terms, and refer to all of these models as dyadic models for the line (or of objects derived from the line). While there is often no direct link between a dyadic model and a non-dyadic model, dyadic models serve as incredibly useful laboratories in which to gain insight and intuition for the “real-world” non-dyadic model, since one has much more powerful and elegant algebraic and geometric structure to play with in this setting (though the loss of one-dimensional structure can be a significant concern). Perhaps the most striking example of this is the three-line proof of the Riemann hypothesis in the function field model of the integers, which I will discuss a little later.
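To make the "enhanced algebraic structure" of the dyadic model concrete: in the function field / dyadic picture, an integer written in binary is replaced by a polynomial over the field of two elements, so addition loses its carries and becomes bitwise XOR, and every element becomes its own negative (the bounded torsion of the additive combinatorics picture). A minimal sketch of my own, encoding a polynomial in $\mathbb{F}_2[t]$ by the bitmask of its coefficients:

```python
def f2_add(a, b):
    # addition in F_2[t]: coefficientwise mod 2, i.e. XOR -- no carries propagate
    return a ^ b

def f2_mul(a, b):
    # carry-less "schoolbook" multiplication in F_2[t]
    result = 0
    while b:
        if b & 1:
            result ^= a  # XOR in a shifted copy instead of adding with carries
        a <<= 1
        b >>= 1
    return result

# (t + 1)^2 = t^2 + 1 over F_2: the cross terms cancel, unlike 3^2 = 9 in Z
assert f2_mul(0b11, 0b11) == 0b101
# every element is its own additive inverse (2-torsion)
assert f2_add(0b1011, 0b1011) == 0
```

The absence of carries is exactly what makes the dyadic model "neatly nested and recursive": the set of polynomials of degree less than $k$ (the bitmasks below $2^k$) is a genuine subgroup, whereas the corresponding interval of integers is not.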
Paul Cohen is of course best known in mathematics for his Fields Medal-winning proof of the undecidability of the continuum hypothesis within the standard Zermelo-Fraenkel-Choice (ZFC) axioms of set theory, by introducing the now standard method of forcing in model theory. (More precisely, assuming ZFC is consistent, Cohen proved that models of ZFC exist in which the continuum hypothesis fails; Gödel had previously shown under the same assumption that models exist in which the continuum hypothesis is true.) Cohen’s method also showed that the axiom of choice was independent of ZF. The friendliest introduction to forcing is perhaps still Timothy Chow‘s “Forcing for dummies“, though I should warn that Tim has a rather stringent definition of “dummy”.
But Cohen was also a noted analyst. For instance, the Cohen idempotent theorem in harmonic analysis classifies the idempotent measures in a locally compact abelian group G (i.e. the finite regular measures $\mu$ for which $\mu * \mu = \mu$); specifically, a finite regular measure $\mu$ is idempotent if and only if the Fourier transform $\hat\mu$ of the measure only takes values 0 and 1, and furthermore can be expressed as a finite linear combination of indicator functions of cosets of open subgroups of the Pontryagin dual $\hat G$ of G. (Earlier results in this direction were obtained by Helson and by Rudin; a non-commutative version was subsequently given by Host. These results play an important role in abstract harmonic analysis.) Recently, Ben Green and Tom Sanders connected this classical result to the very recent work on Freiman-type theorems in additive combinatorics, using the latter to create a quantitative version of the former, which in particular is suitable for use in finite abelian groups.
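In the finite abelian case the phenomenon is easy to see by direct computation. As an illustration of my own (the group and subgroup below are chosen just for the example), take $G = \mathbb{Z}/12\mathbb{Z}$ and let $\mu$ be normalized counting measure on the subgroup $\{0,4,8\}$; then $\mu * \mu = \mu$, and $\hat\mu$ takes only the values 0 and 1, equaling 1 exactly on the annihilator $\{0,3,6,9\}$ of the subgroup:

```python
import cmath

N = 12
H = {0, 4, 8}  # a subgroup of Z/12Z (chosen as an example)
mu = [1 / len(H) if x in H else 0.0 for x in range(N)]  # normalized measure on H

def convolve(f, g):
    # convolution of two measures on the cyclic group Z/N
    return [sum(f[y] * g[(x - y) % N] for y in range(N)) for x in range(N)]

def fourier(f, k):
    # Fourier coefficient of f at frequency k
    return sum(f[x] * cmath.exp(-2j * cmath.pi * k * x / N) for x in range(N))

# mu is idempotent: mu * mu = mu
assert all(abs(convolve(mu, mu)[x] - mu[x]) < 1e-9 for x in range(N))

# its Fourier transform is the indicator of the annihilator {0, 3, 6, 9}
for k in range(N):
    assert abs(fourier(mu, k) - (1 if k % 3 == 0 else 0)) < 1e-9
```

The content of the Cohen idempotent theorem is that, even in an infinite locally compact abelian group, these coset-indicator examples (and their finite linear combinations) are the only idempotents.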
Paul Cohen’s legacy also includes the advisorship of outstanding mathematicians such as the number theorist and analyst Peter Sarnak (who, incidentally, taught me analytic number theory when I was a graduate student). Cohen was in fact my “uncle”; his advisor, Antoni Zygmund, was the advisor of my own advisor Elias Stein.
It is a great loss for the world of mathematics.
[Update, Mar 25: Added the hypothesis that ZFC is consistent to the description of Cohen's result. Several other minor edits also.]
I’ve just uploaded the short story “Uchiyama’s constructive proof of the Fefferman-Stein decomposition“. In 1982, Uchiyama gave a new proof of the celebrated Fefferman-Stein theorem that expressed any BMO function as the sum of a bounded function, and Riesz transforms of bounded functions. Unlike the original proof (which relied, among other things, on the Hahn-Banach theorem), Uchiyama’s proof was very explicit, constructing the decomposition by building the bounded functions one Littlewood-Paley frequency band at a time while keeping the functions taking values on or near a sphere, and then iterating away the error. Here I have written some notes on how the proof goes. The notes are a little condensed, in that a number of standard computations involving estimations of Schwartz tails, Carleson measures, etc. have been omitted, but hopefully the gist of the argument is still clear.