This weekend I was in Washington, D.C., for the annual meeting of the National Academy of Sciences. Among the various events at this meeting was an address to the Academy by President Obama this morning on several major science and education policy initiatives, including some already announced in the economic stimulus package and draft federal budget, and some carried over from the previous administration. (I myself missed the address, though, as I had to return to LA to teach.) Among the initiatives stated were the creation of an Advanced Research Projects Agency for Energy (ARPA-E), modeled on DARPA (and recommended by the NAS); a significant increase in funding to the NSF and related agencies (which was committed to by the Bush administration, but not yet implemented; this is distinct from the one-time funding from the stimulus package discussed in this previous post), leading in particular to a tripling in the number of NSF graduate research fellowships; and a “race to the top” fund administered by the Department of Education to provide incentives for states to improve the quality of their maths and science education, among other goals. Some of these initiatives may not survive the budgetary process, of course, but it does seem that there is both symbolic and substantive support for science and education at the federal level.
Here are the video, audio, and transcript of the talk.
[Update, Apr 28: Another event at the meeting is the announcement of the new membership of the Academy for 2009. In mathematics, the new members include Alice Chang, Percy Deift, John Morgan, and Gilbert Strang; congratulations to all four, of course.]
I have received some anecdotal evidence that wordpress blogs such as this one have recently been blocked again by the “great firewall of China”. I was wondering if the readers here could confirm or disconfirm this, and also if they knew of some effective ways to circumvent this firewall, as I have been getting a number of requests on how to do so.
[Of course, by definition, if a reader is directly affected by this blockage, then they would not be able to comment here or to read about any workarounds; but perhaps they would be able to confirm the situation indirectly, and I could still pass on any relevant tips obtained here by other channels.]
[Update, June 3: A partial list of blocked sites can be found here, and a firewall tester can be found here.]
In the theory of dense graphs on $n$ vertices, where $n$ is large, a fundamental role is played by the Szemerédi regularity lemma:

Lemma 1 (Regularity lemma, standard version) Let $G = (V,E)$ be a graph on $n$ vertices, and let $\epsilon > 0$ and $k_0 \geq 1$. Then there exists a partition of the vertices $V = V_1 \cup \ldots \cup V_k$, with $k$ bounded below by $k_0$ and above by a quantity $C(k_0,\epsilon)$ depending only on $k_0$ and $\epsilon$, obeying the following properties:
- (Equitable partition) For any $1 \leq i < j \leq k$, the cardinalities $|V_i|, |V_j|$ of $V_i$ and $V_j$ differ by at most $1$.
- (Regularity) For all but at most $\epsilon k^2$ pairs $1 \leq i < j \leq k$, the portion of the graph between $V_i$ and $V_j$ is $\epsilon$-regular in the sense that one has
$$ |d(A,B) - d(V_i,V_j)| \leq \epsilon $$
for any $A \subset V_i$ and $B \subset V_j$ with $|A| \geq \epsilon |V_i|$ and $|B| \geq \epsilon |V_j|$, where $d(A,B) := \frac{|\{(a,b) \in A \times B: \{a,b\} \in E\}|}{|A|\,|B|}$ is the density of edges between $A$ and $B$.
This lemma becomes useful in the regime when $n$ is very large compared to $k_0$ or $1/\epsilon$, because all the conclusions of the lemma are uniform in $n$. Very roughly speaking, it says that “up to errors of size $\epsilon$”, a large graph can be more or less described completely by a bounded number of quantities $d(V_i,V_j)$. This can be interpreted as saying that the space of all graphs is totally bounded (and hence precompact) in a suitable metric space, thus allowing one to take formal limits of sequences (or subsequences) of graphs; see for instance this paper of Lovász and Szegedy for a discussion.
For various technical reasons it is easier to work with a slightly weaker version of the lemma, which allows for the cells to have unequal sizes:
Lemma 2 (Regularity lemma, weighted version) Let $G = (V,E)$ be a graph on $n$ vertices, and let $\epsilon > 0$. Then there exists a partition of the vertices $V = V_1 \cup \ldots \cup V_k$, with $k$ bounded above by a quantity $C(\epsilon)$ depending only on $\epsilon$, obeying the following properties:
While Lemma 2 is, strictly speaking, weaker than Lemma 1 in that it does not enforce the equitable size property between the atoms, in practice it seems that the two lemmas are roughly of equal utility; most of the combinatorial consequences of Lemma 1 can also be proven using Lemma 2. The point is that one always has to remember to weight each cell $V_i$ by its density $|V_i|/|V|$, rather than by giving each cell an equal weight as in Lemma 1. Lemma 2 also has the advantage that one can easily generalise the result from finite vertex sets $V$ to other probability spaces (for instance, one could weight $V$ with something other than the uniform distribution). For applications to hypergraph regularity, it turns out to be slightly more convenient to have two partitions (coarse and fine) rather than just one; see for instance my own paper on this topic. In any event, the arguments we give below to prove Lemma 2 can be modified to give a proof of Lemma 1 also.

The proof of the regularity lemma is usually conducted by a greedy algorithm. Very roughly speaking, one starts with the trivial partition of $V$. If this partition already regularises the graph, we are done; if not, this means that there are some sets $A$ and $B$ in which there is a significant density fluctuation beyond what has already been detected by the original partition. One then adds these sets to the partition and iterates the argument. Every time a new density fluctuation is incorporated into the partition that models the original graph, this increases a certain “index” or “energy” of the partition. On the other hand, this energy remains bounded no matter how complex the partition, so eventually one must reach a long “energy plateau” in which no further refinement is possible, at which point one can find the regular partition.
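To make the energy increment argument a little more concrete, one standard choice of such an index (the normalisation here is purely illustrative, and need not be the exact one used in the detailed proof below) is the mean square edge density of the partition,

$$ \mathrm{ind}(V_1,\ldots,V_k) := \sum_{1 \leq i,j \leq k} \frac{|V_i|\,|V_j|}{|V|^2}\, d(V_i,V_j)^2, $$

which always lies between $0$ and $1$ and, by the Cauchy-Schwarz inequality, cannot decrease when the partition is refined; a density fluctuation of size $\epsilon$ on sets of relative size $\epsilon$ forces the index to increase by some positive amount $c(\epsilon)$ upon refinement, so the iteration must halt after at most $1/c(\epsilon)$ steps.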
One disadvantage of the greedy algorithm is that it is not efficient in the limit $n \to \infty$, as it requires one to search over all pairs of subsets $A, B$ of a given pair $V_i, V_j$ of cells, which is an exponentially long search. There are more algorithmically efficient ways to regularise; for instance, a polynomial time algorithm was given by Alon, Duke, Lefmann, Rödl, and Yuster. However, one can do even better, if one is willing to (a) allow cells of unequal size, (b) allow a small probability of failure, (c) have the ability to sample vertices from $V$ at random, and (d) allow for the cells to be defined “implicitly” (via their relationships with a fixed set of reference vertices) rather than “explicitly” (as a list of vertices). In that case, one can regularise a graph in a number of operations bounded only in terms of $\epsilon$. Indeed, one has
Lemma 3 (Regularity lemma via random neighbourhoods) Let $\epsilon > 0$. Then there exist integers $M_1, \ldots, M_m \geq 1$ with the following property: whenever $G = (V,E)$ is a graph on finitely many vertices, if one selects one of the integers $M_r$ at random from $M_1, \ldots, M_m$, then selects $M_r$ vertices $v_1, \ldots, v_{M_r}$ uniformly from $V$ at random, then the $2^{M_r}$ vertex cells $V_1, \ldots, V_{2^{M_r}}$ (some of which can be empty) generated by the vertex neighbourhoods $A_t := \{ v \in V: \{v, v_t\} \in E \}$ for $1 \leq t \leq M_r$, will obey the conclusions of Lemma 2 with probability at least $1 - O(\epsilon)$.
Thus, roughly speaking, one can regularise a graph simply by taking a large number of random vertex neighbourhoods, and using the partition (or Venn diagram) generated by these neighbourhoods as the partition. The intuition is that if there is any non-uniformity in the graph (e.g. if the graph exhibits bipartite behaviour), this will bias the random neighbourhoods to seek out the partitions that would regularise that non-uniformity (e.g. vertex neighbourhoods would begin to fill out the two vertex cells associated to the bipartite property); if one takes sufficiently many such random neighbourhoods, the probability that all detectable non-uniformity is captured by the partition should converge to $1$. (It is more complicated than this, because the finer one makes the partition, the finer the types of non-uniformity one can begin to detect, but this is the basic idea.)
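To make the sampling procedure concrete, here is a minimal sketch in Python of the partition described in Lemma 3. The function name and the adjacency-set representation are illustrative assumptions, and the list of candidate integers $M_1, \ldots, M_m$ is simply taken as given, since producing that list is exactly what the proof of the lemma is about.

```python
import random

def random_neighbourhood_partition(vertices, edges, Ms):
    """Partition `vertices` via the Venn diagram of random vertex neighbourhoods.

    vertices: list of vertices of the graph G = (V, E)
    edges:    set of frozensets {u, v}, one per edge of G
    Ms:       the candidate integers M_1, ..., M_m supplied by Lemma 3
    Returns a dict mapping each adjacency pattern (a tuple of 0/1 bits) to a cell.
    """
    M = random.choice(Ms)                               # pick one of M_1, ..., M_m at random
    refs = [random.choice(vertices) for _ in range(M)]  # M reference vertices, chosen uniformly

    cells = {}
    for v in vertices:
        # The cell of v is determined by which of the M neighbourhoods A_t contain v,
        # i.e. by the pattern of adjacencies between v and the reference vertices.
        pattern = tuple(int(frozenset((v, w)) in edges) for w in refs)
        cells.setdefault(pattern, []).append(v)
    return cells                                        # at most 2^M non-empty cells
```

With high probability (in the sense of Lemma 3), the cells produced by such a procedure already satisfy the weighted regularity conclusion of Lemma 2, even though no edge densities are ever computed.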
This fact seems to be reasonably well-known folklore, discovered independently by many authors; it is for instance quite close to the graph property testing results of Alon and Shapira, and also appears implicitly in a paper of Ishigami, as well as a paper of Austin (and perhaps even more implicitly in a paper of mine). However, in none of these papers is the above lemma stated explicitly. I was asked about this lemma recently, so I decided to provide a proof here.
I’ve just uploaded to the arXiv my paper “An inverse theorem for the bilinear $L^2$ Strichartz estimate for the wave equation“. This paper is another technical component of my “heatwave project“, which aims to establish the global regularity conjecture for energy-critical wave maps into hyperbolic space. I have been in the process of writing the final paper of that project, in which I will show that the only way singularities can form is if a special type of solution, known as an “almost periodic blowup solution”, exists. However, I recently discovered that the existing function space estimates that I was relying on for the large energy perturbation theory were not quite adequate, and in particular I needed a certain “inverse theorem” for a standard bilinear estimate which was not quite in the literature. The purpose of this paper is to establish that inverse theorem, which may also have some application to other nonlinear wave equations.
In set theory, a function $f: X \to Y$ is defined as an object that evaluates every input $x$ to exactly one output $f(x)$. However, in various branches of mathematics, it has become convenient to generalise this classical concept of a function to a more abstract one. For instance, in operator algebras, quantum mechanics, or non-commutative geometry, one often replaces commutative algebras of (real or complex-valued) functions on some space $X$, such as $C(X)$ or $L^\infty(X)$, with a more general – and possibly non-commutative – algebra (e.g. a $C^*$-algebra or a von Neumann algebra). Elements in this more abstract algebra are no longer definable as functions in the classical sense of assigning a single value $f(x)$ to every point $x$, but one can still define other operations on these “generalised functions” (e.g. one can multiply or take inner products between two such objects).
Generalisations of functions are also very useful in analysis. In our study of $L^p$ spaces, we have already seen one such generalisation, namely the concept of a function defined up to almost everywhere equivalence. Such a function $f$ (or more precisely, an equivalence class of classical functions) cannot be evaluated at any given point $x$, if that point has measure zero. However, it is still possible to perform algebraic operations on such functions (e.g. multiplying or adding two functions together), and one can also integrate such functions on measurable sets (provided, of course, that the function has some suitable integrability condition). We also know that the $L^p$ spaces can usually be described via duality, as the dual space of $L^{p'}$, where $1/p + 1/p' = 1$ (except in some endpoint cases, namely when $p = 1$, or when $p = \infty$ and the underlying space is not $\sigma$-finite).
We have also seen (via the Lebesgue-Radon-Nikodym theorem) that locally integrable functions $f$ on, say, the real line $\mathbb{R}$, can be identified with locally finite absolutely continuous measures $f\, dm$ on the line, by multiplying Lebesgue measure $m$ by the function $f$. So another way to generalise the concept of a function is to consider arbitrary locally finite Radon measures $\mu$ (not necessarily absolutely continuous), such as the Dirac measure $\delta_0$. With this concept of “generalised function”, one can still add and subtract two measures $\mu, \nu$, and integrate any measure $\mu$ against a (bounded) measurable set $E$ to obtain a number $\mu(E)$, but one cannot evaluate a measure $\mu$ (or more precisely, the Radon-Nikodym derivative $d\mu/dm$ of that measure) at a single point $x$, and one also cannot multiply two measures together to obtain another measure. From the Riesz representation theorem, we also know that the space of (finite) Radon measures can be described via duality, as linear functionals on $C_c(\mathbb{R})$.
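For example, the Dirac measure $\delta_0$ mentioned above corresponds to the linear functional

$$ f \mapsto \int_{\mathbb{R}} f\, d\delta_0 = f(0) $$

on $C_c(\mathbb{R})$; it has no Radon-Nikodym derivative with respect to Lebesgue measure, so there is no classical function that one could evaluate pointwise to recover it.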
There is an even larger class of generalised functions that is very useful, particularly in linear PDE, namely the space of distributions, say on a Euclidean space $\mathbb{R}^d$. In contrast to Radon measures $\mu$, which can be defined by how they “pair up” against continuous, compactly supported test functions $f \in C_c(\mathbb{R}^d)$ to create numbers $\int_{\mathbb{R}^d} f\, d\mu$, a distribution $\lambda$ is defined by how it pairs up against a smooth compactly supported function $f \in C_c^\infty(\mathbb{R}^d)$ to create a number $\langle f, \lambda \rangle$. As the space $C_c^\infty(\mathbb{R}^d)$ of smooth compactly supported functions is smaller than (but dense in) the space $C_c(\mathbb{R}^d)$ of continuous compactly supported functions (and has a stronger topology), the space of distributions is larger than that of measures. But the space $C_c^\infty(\mathbb{R}^d)$ is closed under more operations than $C_c(\mathbb{R}^d)$, and in particular is closed under differential operators (with smooth coefficients). Because of this, the space of distributions is similarly closed under such operations; in particular, one can differentiate a distribution and get another distribution, which is something that is not always possible with measures or $L^p$ functions. But as measures or functions can be interpreted as distributions, this leads to the notion of a weak derivative for such objects, which makes sense (but only as a distribution) even for functions that are not classically differentiable. Thus the theory of distributions can allow one to rigorously manipulate rough functions “as if” they were smooth, although one must still be careful as some operations on distributions are not well-defined, most notably the operation of multiplying two distributions together. Nevertheless one can use this theory to justify many formal computations involving derivatives, integrals, etc. (including several computations used routinely in physics) that would be difficult to formalise rigorously in a purely classical framework.
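As a quick illustration of the weak derivative (using the usual convention that the derivative $\lambda'$ of a distribution $\lambda$ on $\mathbb{R}$ is defined by $\langle f, \lambda' \rangle := -\langle f', \lambda \rangle$ for test functions $f$): the Heaviside step function $H := 1_{[0,+\infty)}$ is not classically differentiable at the origin, but as a distribution it has the Dirac measure as its derivative, since

$$ \langle f, H' \rangle = -\langle f', H \rangle = -\int_0^{+\infty} f'(x)\, dx = f(0) = \langle f, \delta_0 \rangle $$

for every $f \in C_c^\infty(\mathbb{R})$, so that $H' = \delta_0$ in the sense of distributions.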
If one shrinks the space of distributions slightly, to the space of tempered distributions (which is formed by enlarging the dual class $C_c^\infty(\mathbb{R}^d)$ to the Schwartz class $\mathcal{S}(\mathbb{R}^d)$), then one obtains closure under another important operation, namely the Fourier transform. This allows one to define various Fourier-analytic operations (e.g. pseudodifferential operators) on such distributions.
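For instance, with the convention $\hat{f}(\xi) := \int_{\mathbb{R}^d} f(x) e^{-2\pi i x \cdot \xi}\, dx$ for Schwartz functions (one common normalisation, used here only for illustration), the Fourier transform $\hat{\lambda}$ of a tempered distribution $\lambda$ can be defined by duality,

$$ \langle f, \hat{\lambda} \rangle := \langle \hat{f}, \lambda \rangle \qquad \hbox{for all } f \in \mathcal{S}(\mathbb{R}^d); $$

as a quick example, the Fourier transform of the Dirac distribution $\delta_0$ is then the constant function $1$, since $\langle f, \hat{\delta_0} \rangle = \hat{f}(0) = \int_{\mathbb{R}^d} f(x)\, dx = \langle f, 1 \rangle$.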
Of course, at the end of the day, one is usually not all that interested in distributions in their own right, but would like to be able to use them as a tool to study more classical objects, such as smooth functions. Fortunately, one can recover facts about smooth functions from facts about the (far rougher) space of distributions in a number of ways. For instance, if one convolves a distribution with a smooth, compactly supported function, one gets back a smooth function. This is a particularly useful fact in the theory of constant-coefficient linear partial differential equations such as $Lu = f$, as it allows one to recover a smooth solution $u$ from smooth, compactly supported data $f$ by convolving $f$ with a specific distribution $K$, known as the fundamental solution of $L$. We will give some examples of this later in these notes.
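As a standard example of this phenomenon: on $\mathbb{R}^3$, the locally integrable function

$$ K(x) := \frac{1}{4\pi |x|} $$

obeys $-\Delta K = \delta_0$ in the sense of distributions, i.e. it is a fundamental solution for $-\Delta$; consequently, for any smooth compactly supported $f$, the convolution $u := K * f$ is a smooth solution of the Poisson equation $-\Delta u = f$.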
It is this unusual and useful combination of both being able to pass from classical functions to generalised functions (e.g. by differentiation) and then back from generalised functions to classical functions (e.g. by convolution) that sets the theory of distributions apart from other competing theories of generalised functions, in particular allowing one to justify many formal calculations in PDE and Fourier analysis rigorously with relatively little additional effort. On the other hand, being defined by linear duality, the theory of distributions becomes somewhat less useful when one moves to more nonlinear problems, such as nonlinear PDE. However, they still serve an important supporting role in such problems as an “ambient space” of functions, inside of which one carves out more useful function spaces, such as Sobolev spaces, which we will discuss in the next set of notes.
From Tim Gowers’ blog comes the announcement that the Tricki – a wiki for various tricks and strategies for proving mathematical results – is now live. (My own articles for the Tricki are also on this blog; also Ben Green has written up an article on using finite fields to prove results about infinite fields which is loosely based on my own post on the topic, which is in turn based on an article of Serre.) It seems to already be growing at a reasonable rate, with many contributors.
Recently, I have been studying the concept of amenability on groups. This concept can be defined in a “combinatorial” or “finitary” fashion, using Følner sequences, and also in a more “functional-analytic” or “infinitary” fashion, using invariant means. I wanted to get some practice passing back and forth between these two definitions, so I wrote down some notes on how to do this, and also how to take some facts about amenability that are usually proven in one setting, and prove them instead in the other. These notes are thus mostly for my own benefit, but I thought I might post them here also, in case anyone else is interested.
The famous Gödel completeness theorem in logic (not to be confused with the even more famous Gödel incompleteness theorem) roughly states the following:
Theorem 1 (Gödel completeness theorem, informal statement) Let $\Gamma$ be a first-order theory (a formal language $\mathcal{L}$, together with a set of axioms, i.e. sentences assumed to be true), and let $\phi$ be a sentence in the formal language. Assume also that the language $\mathcal{L}$ has at most countably many symbols. Then the following are equivalent:
- (i) (Syntactic consequence) $\phi$ can be deduced from the axioms in $\Gamma$ by a finite number of applications of the laws of deduction in first order logic. (This property is abbreviated as $\Gamma \vdash \phi$.)
- (ii) (Semantic consequence) Every structure $\mathfrak{U}$ which satisfies or models $\Gamma$, also satisfies $\phi$. (This property is abbreviated as $\Gamma \models \phi$.)
- (iii) (Semantic consequence for at most countable models) Every structure $\mathfrak{U}$ which is at most countable, and which models $\Gamma$, also satisfies $\phi$.
One can also formulate versions of the completeness theorem for languages with uncountably many symbols, but I will not do so here. One can also force other cardinalities on the model by using the Löwenheim-Skolem theorem.
To state this theorem even more informally, any (first-order) result which is true in all models of a theory, must be logically deducible from that theory, and vice versa. (For instance, any result which is true for all groups, must be deducible from the group axioms; any result which is true for all systems obeying Peano arithmetic, must be deducible from the Peano axioms; and so forth.) In fact, it suffices to check countable and finite models only; for instance, any first-order statement which is true for all finite or countable groups, is in fact true for all groups! Informally, a first-order language with only countably many symbols cannot “detect” whether a given structure is countably or uncountably infinite. Thus for instance even the ZFC axioms of set theory must have some at most countable model, even though one can use ZFC to prove the existence of uncountable sets; this is known as Skolem’s paradox. (To resolve the paradox, one needs to carefully distinguish between an object in a set theory being “externally” countable in the structure that models that theory, and being “internally” countable within that theory.)
Of course, a theory may contain undecidable statements $\phi$ – sentences which are neither provable nor disprovable in the theory. By the completeness theorem, this is equivalent to saying that $\phi$ is satisfied by some models of $\Gamma$ but not by other models. Thus the completeness theorem is compatible with the incompleteness theorem: recursively enumerable theories such as Peano arithmetic are modeled by the natural numbers $\mathbb{N}$, but also by other structures, and there are sentences satisfied by $\mathbb{N}$ which are not satisfied by other models of Peano arithmetic, and are thus undecidable within that arithmetic.
An important corollary of the completeness theorem is the compactness theorem:
Corollary 2 (Compactness theorem, informal statement) Let $\Gamma$ be a first-order theory whose language has at most countably many symbols. Then the following are equivalent:
- (i) $\Gamma$ is consistent, i.e. it is not possible to logically deduce a contradiction from the axioms in $\Gamma$.
- (ii) $\Gamma$ is satisfiable, i.e. there exists a structure $\mathfrak{U}$ that models $\Gamma$.
- (iii) There exists a structure $\mathfrak{U}$ which is at most countable, that models $\Gamma$.
- (iv) Every finite subset $\Gamma'$ of $\Gamma$ is consistent.
- (v) Every finite subset $\Gamma'$ of $\Gamma$ is satisfiable.
- (vi) Every finite subset $\Gamma'$ of $\Gamma$ is satisfiable with an at most countable model.
Indeed, the equivalence of (i)-(iii), or (iv)-(vi), follows directly from the completeness theorem, while the equivalence of (i) and (iv) follows from the fact that any logical deduction has finite length and so can involve at most finitely many of the axioms in $\Gamma$. (Again, the theorem can be generalised to uncountable languages, but the models become uncountable also.)
There is a consequence of the compactness theorem which more closely resembles the sequential concept of compactness. Given a sequence $\mathfrak{U}_n$ of structures for $\mathcal{L}$, and another structure $\mathfrak{U}$ for $\mathcal{L}$, let us say that $\mathfrak{U}_n$ converges elementarily to $\mathfrak{U}$ if every sentence $\phi$ which is satisfied by $\mathfrak{U}$, is also satisfied by $\mathfrak{U}_n$ for sufficiently large $n$. (Replacing $\phi$ by its negation $\neg \phi$, we also see that every sentence that is not satisfied by $\mathfrak{U}$, is not satisfied by $\mathfrak{U}_n$ for sufficiently large $n$.) Note that the limit $\mathfrak{U}$ is only unique up to elementary equivalence. Clearly, if each of the $\mathfrak{U}_n$ models some theory $\Gamma$, then the limit $\mathfrak{U}$ will also; thus for instance the elementary limit of a sequence of groups is still a group, the elementary limit of a sequence of rings is still a ring, etc.
Corollary 3 (Sequential compactness theorem) Let $\mathcal{L}$ be a language with at most countably many symbols, and let $\mathfrak{U}_n$ be a sequence of structures for $\mathcal{L}$. Then there exists a subsequence $\mathfrak{U}_{n_j}$ which converges elementarily to a limit $\mathfrak{U}$ which is at most countable.
Proof: For each structure $\mathfrak{U}_n$, let $\mathrm{Th}(\mathfrak{U}_n)$ be the theory of that structure, i.e. the set of all sentences that are satisfied by that structure. One can view that theory as a point in $\{0,1\}^S$, where $S$ is the set of all sentences in the language $\mathcal{L}$. Since $\mathcal{L}$ has at most countably many symbols, $S$ is at most countable, and so (by the sequential Tychonoff theorem) $\{0,1\}^S$ is sequentially compact in the product topology. (This can also be seen directly by the usual Arzelà-Ascoli diagonalisation argument.) Thus we can find a subsequence $\mathrm{Th}(\mathfrak{U}_{n_j})$ which converges in the product topology to a limit theory $\Gamma$, thus every sentence in $\Gamma$ is satisfied by $\mathfrak{U}_{n_j}$ for sufficiently large $j$ (and every sentence not in $\Gamma$ is not satisfied by $\mathfrak{U}_{n_j}$ for sufficiently large $j$). In particular, any finite subset of $\Gamma$ is satisfiable (being simultaneously satisfied by $\mathfrak{U}_{n_j}$ for $j$ large enough), hence consistent; by the compactness theorem, $\Gamma$ itself is therefore consistent, and has an at most countable model $\mathfrak{U}$. Also, each of the theories $\mathrm{Th}(\mathfrak{U}_{n_j})$ is clearly complete (given any sentence $\phi$, either $\phi$ or $\neg \phi$ is in the theory), and so $\Gamma$ is complete as well. One concludes that $\Gamma$ is the theory of $\mathfrak{U}$, and hence $\mathfrak{U}$ is the elementary limit of the $\mathfrak{U}_{n_j}$ as claimed. $\Box$
[It is also possible to state the compactness theorem using the topological notion of compactness, as follows: let $X$ be the space of all structures of a given language $\mathcal{L}$, quotiented by elementary equivalence. One can define a topology on $X$ by taking the sets $\{ \mathfrak{U} \in X: \mathfrak{U} \models \phi \}$ as a sub-base, where $\phi$ ranges over all sentences. Then the compactness theorem is equivalent to the assertion that $X$ is topologically compact.]
One can use the sequential compactness theorem to build a number of interesting “non-standard” models to various theories. For instance, consider the language $\mathcal{L}$ used by Peano arithmetic (which contains the operations $+, \times$ and the successor operation $S$, the relation $=$, and the constant $0$), and adjoin a new constant $N$ to create an expanded language $\mathcal{L} \cup \{N\}$. For each natural number $n$, let $\mathfrak{N}_n$ be a structure for $\mathcal{L} \cup \{N\}$ which consists of the natural numbers $\mathbb{N}$ (with the usual interpretations of $+$, $\times$, etc.) and interprets the symbol $N$ as the natural number $n$. By the compactness theorem, some subsequence of the $\mathfrak{N}_n$ must converge elementarily to a new structure $\mathfrak{N}^*$ of $\mathcal{L} \cup \{N\}$, which still models Peano arithmetic, but now has the additional property that $N > n$ for every (standard) natural number $n$; thus we have managed to create a non-standard model of Peano arithmetic which contains a non-standardly large number (one which is larger than every standard natural number).
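To spell out why the limit contains a non-standardly large number: for any fixed standard $n$, the sentence

$$ N > \underline{n}, \qquad \hbox{where } \underline{n} := \underbrace{S \cdots S}_{n\ \hbox{times}}\, 0 \hbox{ is the numeral for } n $$

(expressible in the language, e.g. as $\exists z:\ N = \underline{n} + S z$), is satisfied by $\mathfrak{N}_m$ for every $m > n$, and hence by all sufficiently late terms of the convergent subsequence; if the limit structure satisfied the negation of this sentence instead, elementary convergence would force that negation to hold along a tail of the subsequence, which is absurd, and so the limit must satisfy $N > \underline{n}$.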
The sequential compactness theorem also lets us construct infinitary limits of various sequences of finitary objects; for instance, one can construct infinite pseudo-finite fields as the elementary limits of sequences of finite fields. I recently discovered that several other correspondence principles between finitary and infinitary objects, such as the Furstenberg correspondence principle between sets of integers and dynamical systems, or the more recent correspondence principles concerning graph limits, can be viewed as special cases of the sequential compactness theorem; it also seems possible to encode much of the sum-product theory in finite fields in an infinitary setting using this theorem. I hope to discuss these points in more detail in a later post.

In this post, I wish to review (partly for my own benefit) the proof of the completeness (and hence compactness) theorem. The material here is quite standard (I basically follow the usual proof of Henkin, taking advantage of Skolemisation), but perhaps the concept of an elementary limit is not as well-known outside of logic as it might be. (The closely related concept of an ultraproduct is better known, and can be used to prove most of the compactness theorem already, thanks to Los's theorem, but I do not know how to use ultraproducts to ensure that the limiting model is countable. However, one can think (intuitively, at least) of the limit model in the above theorem as being the set of “constructible” elements of an ultraproduct of the $\mathfrak{U}_n$.)
In order to emphasise the main ideas in the proof, I will gloss over some of the more technical details in the proofs, relying instead on informal arguments and examples at various points.
In these notes we lay out the basic theory of the Fourier transform, which is of course the most fundamental tool in harmonic analysis and also of major importance in related fields (functional analysis, complex analysis, PDE, number theory, additive combinatorics, representation theory, signal processing, etc.). The Fourier transform, in conjunction with the Fourier inversion formula, allows one to take essentially arbitrary (complex-valued) functions on a group $G$ (or more generally, a space $X$ that $G$ acts on, e.g. a homogeneous space $G/H$), and decompose them as a (discrete or continuous) superposition of much more symmetric functions on the domain, such as characters $\chi: G \to S^1$; the precise superposition is given by Fourier coefficients $\hat{f}(\xi)$, which take values in some dual object such as the Pontryagin dual $\hat{G}$ of $G$. Characters behave in a very simple manner with respect to translation (indeed, they are eigenfunctions of the translation action), and so the Fourier transform tends to simplify any mathematical problem which enjoys a translation invariance symmetry (or an approximation to such a symmetry), and is somehow “linear” (i.e. it interacts nicely with superpositions). In particular, Fourier analytic methods are particularly useful for studying operations such as convolution $f * g$ and set-theoretic addition $A + B$, or the closely related problem of counting solutions to additive problems such as $x + y = z$ or $x + y = z + w$, where $x, y, z, w$ are constrained to lie in specific sets $A, B, C, D$. The Fourier transform is also a particularly powerful tool for solving constant-coefficient linear ODE and PDE (because of the translation invariance), and can also approximately solve some variable-coefficient (or slightly non-linear) equations if the coefficients vary smoothly enough and the nonlinear terms are sufficiently tame.
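To sketch one instance of this counting principle (the normalisation below is one common convention, chosen purely for illustration): on a cyclic group $\mathbb{Z}/N\mathbb{Z}$, with Fourier coefficients $\hat{f}(\xi) := \frac{1}{N} \sum_{x \in \mathbb{Z}/N\mathbb{Z}} f(x) e^{-2\pi i x \xi / N}$, the number of solutions to $x + y = z$ with $x \in A$, $y \in B$, $z \in C$ can be expressed purely in terms of the Fourier coefficients of the indicator functions:

$$ \# \{ (x,y,z) \in A \times B \times C : x + y = z \} = N^2 \sum_{\xi \in \mathbb{Z}/N\mathbb{Z}} \hat{1}_A(\xi)\, \hat{1}_B(\xi)\, \overline{\hat{1}_C(\xi)}. $$

The $\xi = 0$ term contributes the “expected” count $|A||B||C|/N$, and the remaining terms can often be controlled by bounds on the non-zero Fourier coefficients.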
The Fourier transform also provides an important new way of looking at a function $f$, as it highlights the distribution of $f$ in frequency space (the domain of the frequency variable $\xi$) rather than physical space (the domain of the physical variable $x$). A given property of $f$ in the physical domain may be transformed to a rather different-looking property of $\hat{f}$ in the frequency domain. For instance:
- Smoothness of $f$ in the physical domain corresponds to decay of $\hat{f}$ in the Fourier domain, and conversely. (More generally, fine scale properties of $f$ tend to manifest themselves as coarse scale properties of $\hat{f}$, and conversely.)
- Convolution in the physical domain corresponds to pointwise multiplication in the Fourier domain, and conversely.
- Constant coefficient differential operators such as $\frac{d}{dx}$ in the physical domain correspond to multiplication by polynomials such as $2\pi i \xi$ in the Fourier domain, and conversely.
- More generally, translation invariant operators in the physical domain correspond to multiplication by symbols in the Fourier domain, and conversely.
- Rescaling in the physical domain by an invertible linear transformation corresponds to an inverse (adjoint) rescaling in the Fourier domain.
- Restriction to a subspace (or subgroup) in the physical domain corresponds to projection to the dual quotient space (or quotient group) in the Fourier domain, and conversely.
- Frequency modulation in the physical domain corresponds to translation in the frequency domain, and conversely.
(We will make these statements more precise below.)
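To spell out one instance of the differentiation correspondence on $\mathbb{R}$ (with the normalisation $\hat{f}(\xi) := \int_{\mathbb{R}} f(x) e^{-2\pi i x \xi}\, dx$, used here purely for illustration), integration by parts gives, for Schwartz functions $f$,

$$ \widehat{f'}(\xi) = \int_{\mathbb{R}} f'(x) e^{-2\pi i x \xi}\, dx = 2\pi i \xi \int_{\mathbb{R}} f(x) e^{-2\pi i x \xi}\, dx = 2\pi i \xi\, \hat{f}(\xi), $$

so that the differential operator $\frac{d}{dx}$ on the physical side becomes multiplication by the polynomial $2\pi i \xi$ on the frequency side.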
On the other hand, some operations in the physical domain remain essentially unchanged in the Fourier domain. Most importantly, the $L^2$ norm (or energy) of a function $f$ is the same as that of its Fourier transform, and more generally the inner product $\langle f, g \rangle$ of two functions $f, g$ is the same as that of their Fourier transforms. Indeed, the Fourier transform is a unitary operator on $L^2$ (a fact which is variously known as the Plancherel theorem or the Parseval identity). This makes it easier to pass back and forth between the physical domain and frequency domain, so that one can combine techniques that are easy to execute in the physical domain with other techniques that are easy to execute in the frequency domain. (In fact, one can combine the physical and frequency domains together into a product domain known as phase space, and there are entire fields of mathematics (e.g. microlocal analysis, geometric quantisation, time-frequency analysis) devoted to performing analysis on these sorts of spaces directly, but this is beyond the scope of this course.)
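In formulas (again with the illustrative normalisation $\hat{f}(\xi) := \int_{\mathbb{R}^d} f(x) e^{-2\pi i x \cdot \xi}\, dx$), these facts read

$$ \int_{\mathbb{R}^d} |f(x)|^2\, dx = \int_{\mathbb{R}^d} |\hat{f}(\xi)|^2\, d\xi \qquad \hbox{and} \qquad \int_{\mathbb{R}^d} f(x) \overline{g(x)}\, dx = \int_{\mathbb{R}^d} \hat{f}(\xi) \overline{\hat{g}(\xi)}\, d\xi, $$

initially for (say) Schwartz functions $f, g$, and then for all of $L^2(\mathbb{R}^d)$ by density.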
In these notes, we briefly discuss the general theory of the Fourier transform, but will mainly focus on the two classical domains for Fourier analysis: the torus $\mathbb{R}/\mathbb{Z}$, and the Euclidean space $\mathbb{R}^d$. For these domains one has the advantage of being able to perform very explicit algebraic calculations, involving concrete functions such as plane waves $x \mapsto e^{2\pi i x \cdot \xi}$ or Gaussians $x \mapsto e^{-\pi |x|^2}$.
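For instance, with the same normalisation as above, the standard Gaussian is its own Fourier transform:

$$ \int_{\mathbb{R}^d} e^{-\pi |x|^2} e^{-2\pi i x \cdot \xi}\, dx = e^{-\pi |\xi|^2}, $$

a classical computation (by completing the square) that illustrates the kind of explicit formula available on these two domains.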