Gil Kalai has officially started the Polymath3 project (Polynomial Hirsch conjecture) with a research thread at his blog.
The original aim of this project is to prove the polynomial Hirsch conjecture, which is a conjecture in the combinatorial geometry of polytopes. However, there is a reduction due to Eisenbrand, Hahnle, Razborov, and Rothvoss that would deduce this conjecture from a purely combinatorial conjecture, which can be stated as follows.
Combinatorial polynomial Hirsch conjecture. Let ${\mathcal F}_1, \ldots, {\mathcal F}_t$ be non-empty collections of subsets of $\{1,\ldots,n\}$ with the following properties:
- (Disjointness) ${\mathcal F}_i \cap {\mathcal F}_j = \emptyset$ for every $i \neq j$.
- (Connectedness) If $A \in {\mathcal F}_i$ and $B \in {\mathcal F}_k$ for some $i < j < k$, there exists $C \in {\mathcal F}_j$ such that $A \cap B \subseteq C$.
Then t is of polynomial size in n (i.e. $t = n^{O(1)}$).
For instance, when $n=3$, one can obtain such a family with $t=6$, e.g. ${\mathcal F}_1 = \{\emptyset\}$, ${\mathcal F}_2 = \{\{1\}\}$, ${\mathcal F}_3 = \{\{1,2\}\}$, ${\mathcal F}_4 = \{\{1,2,3\}\}$, ${\mathcal F}_5 = \{\{2,3\}\}$, ${\mathcal F}_6 = \{\{3\}\}$; one can show that this is the best possible value of t for this choice of n. The best possible value of t for n=4 is still not worked out; it is between 8 and 11.
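To get a concrete feel for the two axioms, here is a small brute-force checker; this is my own illustrative sketch (the function name satisfies_axioms and the encoding of families as sets of frozensets are choices of the sketch, not anything from the project), run on the n=3, t=6 family described above.

```python
# A minimal sketch: brute-force verification of the Disjointness and
# Connectedness axioms for a candidate family F_1, ..., F_t of collections
# of subsets of {1,...,n}.
from itertools import combinations

def satisfies_axioms(families):
    """families: a list of collections, each a set of frozensets."""
    # (Disjointness) F_i and F_j have no set in common for i != j.
    for F_i, F_j in combinations(families, 2):
        if F_i & F_j:
            return False
    # (Connectedness) if A is in F_i and B is in F_k, then for every i < j < k
    # some C in F_j must contain the intersection of A and B.
    t = len(families)
    for i in range(t):
        for k in range(i + 2, t):
            for j in range(i + 1, k):
                for A in families[i]:
                    for B in families[k]:
                        if not any((A & B) <= C for C in families[j]):
                            return False
    return True

example = [
    {frozenset()}, {frozenset({1})}, {frozenset({1, 2})},
    {frozenset({1, 2, 3})}, {frozenset({2, 3})}, {frozenset({3})},
]
print(satisfies_axioms(example))  # expected output: True
```

Such a checker is of course only feasible for very small n and t, but it makes it easy to experiment with candidate families.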
One appealing thing about this problem is that there is a simple elementary argument that gives the quasipolynomial bound $t \leq n^{O(\log n)}$ for all $n$; and so in some sense one is “only a logarithm away” from proving the conjecture. Anyway, the project is just starting, and does not require any particularly specialised background, so anyone who may be interested in this problem may want to take a look at the research thread.
Thus far, we have only focused on measure and integration theory in the context of Euclidean spaces ${\bf R}^d$. Now, we will work in a more abstract and general setting, in which the Euclidean space ${\bf R}^d$ is replaced by a more general space $X$.
It turns out that in order to properly define measure and integration on a general space $X$, it is not enough to just specify the set $X$. One also needs to specify two additional pieces of data:
- A collection ${\mathcal B}$ of subsets of $X$ that one is allowed to measure; and
- The measure $\mu(E) \in [0,+\infty]$ one assigns to each measurable set $E \in {\mathcal B}$.
For instance, Lebesgue measure theory covers the case when $X$ is a Euclidean space ${\bf R}^d$, ${\mathcal B}$ is the collection ${\mathcal B} = {\mathcal L}[{\bf R}^d]$ of all Lebesgue measurable subsets of ${\bf R}^d$, and $\mu(E)$ is the Lebesgue measure $\mu(E) = m(E)$ of $E$.
The collection ${\mathcal B}$ has to obey a number of axioms (e.g. being closed with respect to countable unions) that make it a $\sigma$-algebra, which is a stronger variant of the more well-known concept of a boolean algebra. Similarly, the measure $\mu$ has to obey a number of axioms (most notably, a countable additivity axiom) in order to obtain a measure and integration theory comparable to the Lebesgue theory on Euclidean spaces. When all these axioms are satisfied, the triple $(X, {\mathcal B}, \mu)$ is known as a measure space. These play much the same role in abstract measure theory that metric spaces or topological spaces play in abstract point-set topology, or that vector spaces play in abstract linear algebra.
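As a toy illustration of the triple $(X, {\mathcal B}, \mu)$ (my own sketch, not part of the notes), one can write down a finite measure space explicitly and verify the relevant axioms by brute force; in the finite setting the countable unions and countable additivity axioms reduce to finite ones.

```python
# A toy finite measure space: X = {1,2,3,4}, B is the sigma-algebra generated by
# the partition {1,2}, {3}, {4}, and mu is counting measure restricted to B.
from itertools import chain, combinations

X = frozenset({1, 2, 3, 4})
atoms = [frozenset({1, 2}), frozenset({3}), frozenset({4})]

def unions_of(atoms):
    """All unions of subcollections of the atoms: the sigma-algebra they generate."""
    out = set()
    for r in range(len(atoms) + 1):
        for combo in combinations(atoms, r):
            out.add(frozenset(chain.from_iterable(combo)))
    return out

B = unions_of(atoms)
mu = {E: len(E) for E in B}                              # counting measure on B

assert frozenset() in B and X in B                       # contains empty set and X
assert all(X - E in B for E in B)                        # closed under complements
assert all(E | F in B for E in B for F in B)             # closed under unions
assert all(mu[E | F] == mu[E] + mu[F]                    # additive on disjoint sets
           for E in B for F in B if not (E & F))
print(len(B))                                            # expected output: 8
```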
On any measure space, one can set up the unsigned and absolutely convergent integrals in almost exactly the same way as was done in the previous notes for the Lebesgue integral on Euclidean spaces, although the approximation theorems are largely unavailable at this level of generality due to the lack of such concepts as “elementary set” or “continuous function” for an abstract measure space. On the other hand, one does have the fundamental convergence theorems for the subject, namely Fatou’s lemma, the monotone convergence theorem and the dominated convergence theorem, and we present these results here.
One question that will not be addressed much in this current set of notes is how one actually constructs interesting examples of measures. We will discuss this issue more in later notes (although one of the most powerful tools for such constructions, namely the Riesz representation theorem, will not be covered until 245B).
Ben Green, Tamar Ziegler, and I have just uploaded to the arXiv our paper “An inverse theorem for the Gowers U^{s+1}[N] norm“, which was previously announced on this blog. We are still planning one final round of reviewing the preprint before submitting the paper, but it has gotten to the stage where we are comfortable with having the paper available on the arXiv.
The main result of the paper is to establish the inverse conjecture for the Gowers norm over the integers, which has a number of applications, in particular to counting solutions to various linear equations in primes. In spirit, the proof of the paper follows the 21-page announcement that was uploaded previously. However, for various rather annoying technical reasons, the 117-page paper has to devote a large amount of space to setting up various bits of auxiliary machinery (as well as a dozen or so pages worth of examples and discussion). For instance, the announcement motivates many of the steps of the argument by heuristically identifying nilsequences with bracket polynomial phases such as $n \mapsto e( \lfloor \alpha n \rfloor \beta n )$. However, a rather significant amount of theory (which was already worked out to a large extent by Leibman) is needed to formalise the “bracket algebra” needed to manipulate such bracket polynomials and to connect them with nilsequences. Furthermore, the “piecewise smooth” nature of bracket polynomials causes some technical issues with the equidistribution theory for these sequences. Our original version of the paper (which was even longer than the current version) set out this theory. But we eventually decided that it was best to eschew almost all use of bracket polynomials (except as motivation and examples), and run the argument almost entirely within the language of nilsequences, to keep the argument a bit more notationally focused (and to make the equidistribution theory easier to establish). But this was not without a tradeoff; some statements that are almost trivially true for bracket polynomials required some “nilpotent algebra” to convert to the language of nilsequences. Here are some examples of this:
- It is intuitively clear that a bracket polynomial phase e(P(n)) of degree k in one variable n can be “multilinearised” to a polynomial $Q(n_1,\ldots,n_k)$ of multi-degree $(1,\ldots,1)$ in k variables $n_1,\ldots,n_k$, such that $P(n)$ and $Q(n,\ldots,n)$ agree modulo lower order terms. For instance, if $P(n) = \alpha n \lfloor \beta n \lfloor \gamma n \rfloor \rfloor$ (so k=3), then one could take $Q(n_1,n_2,n_3) = \alpha n_1 \lfloor \beta n_2 \lfloor \gamma n_3 \rfloor \rfloor$. The analogue of this statement for nilsequences is true, but required a moderately complicated nilpotent algebra construction using the Baker-Campbell-Hausdorff formula.
- Suppose one has a bracket polynomial phase e(P_h(n)) of degree k in one variable n that depends on an additional parameter h, in such a way that exactly one of the coefficients in each monomial depends on h. Furthermore, suppose this dependence is bracket linear in h. Then it is intuitively clear that this phase can be rewritten (modulo lower order terms) as e( Q(h,n) ) where Q is a bracket polynomial of multidegree (1,k) in (h,n). For instance, if $P_h(n) = \alpha_h n \lfloor \beta n \rfloor$ and $\alpha_h = \gamma \lfloor \delta h \rfloor$, then we can take $Q(h,n) = \gamma \lfloor \delta h \rfloor n \lfloor \beta n \rfloor$. The nilpotent algebra analogue of this claim is true, but requires another moderately complicated nilpotent algebra construction based on semi-direct products.
- A bracket polynomial has a fairly visible concept of a “degree” (analogous to the corresponding notion for true polynomials), as well as a “rank” (which, roughly speaking, measures the number of parentheses in the bracket monomials, plus one). Thus, for instance, the bracket monomial $\alpha n^3 \lfloor \beta n^2 \lfloor \gamma n^2 \rfloor \rfloor$ has degree 7 and rank 3. Defining degree and rank for nilsequences requires one to generalise the notion of a (filtered) nilmanifold to one in which the lower central series is replaced by a filtration indexed by both the degree and the rank.
There are various other tradeoffs of this type in this paper. For instance, nonstandard analysis tools were introduced to eliminate what would otherwise be quite a large number of epsilons and regularity lemmas to manage, at the cost of some notational overhead; and the piecewise discontinuities mentioned earlier were eliminated by the use of vector-valued nilsequences, though this again caused some further notational overhead. These difficulties may be a sign that we do not yet have the “right” proof of this conjecture, but one will probably have to wait a few years before we get a proper amount of perspective and understanding on this circle of ideas and results.
I’ve uploaded to the arXiv the polymath research paper “Deterministic methods to find primes“, which is the outcome of the Polymath4 collaborative mathematics project, and has been submitted to Mathematics of Computation.
The objective of this paper was to find fast deterministic algorithms to solve the following problem:
Given a (large) integer x, find a prime p larger than x.
Thanks to the AKS algorithm, a number of size O(x) can be deterministically tested for primality in time $O(\log^{O(1)} x)$. By Bertrand’s postulate, there is always at least one prime between x and 2x; by testing each one of these integers in turn for primality, one can thus obtain a deterministic algorithm to find primes in time $O( x \log^{O(1)} x )$.
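To make the baseline concrete, here is a sketch (my own illustration, not code from the paper) of this brute-force strategy; sympy.isprime is used as a convenient stand-in for a deterministic primality test such as AKS.

```python
# Baseline deterministic prime search: test x+1, x+2, ... in turn.
# Bertrand's postulate guarantees a prime is found before reaching 2x.
from sympy import isprime

def next_prime_after(x: int) -> int:
    """Return the least prime greater than x (for x >= 2)."""
    n = x + 1
    while True:              # terminates before n reaches 2x, by Bertrand's postulate
        if isprime(n):
            return n
        n += 1

print(next_prime_after(10**12))
```

Since roughly one in $\log x$ integers near $x$ is prime, the loop typically runs for about $\log x$ iterations, but in the worst case one only has the trivial bound of $O(x)$ iterations, which is what makes this baseline so slow.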
But one should be able to do much better. For comparison, if probabilistic algorithms are allowed, then by randomly selecting integers between x and 2x to test for primality, it is easy to see from the prime number theorem that one will succeed in obtaining a prime with high probability in time $O(\log^{O(1)} x)$. However, after some effort we were not able to “derandomise” this algorithm to create any reasonable deterministic counterpart. Nevertheless, we conjecture that a deterministic algorithm with run time $\log^{O(1)} x$ exists. Such algorithms can be easily obtained if one assumes some standard conjectures regarding the primes (e.g. Cramer’s conjecture on prime gaps), but we do not know of any deterministic algorithms which can be unconditionally proved to run in time $x^{o(1)}$.
Currently, the best known deterministic algorithm is due to Lagarias and Odlyzko, and has a run time of $x^{1/2+o(1)}$. Roughly speaking, it is based on the ability to compute the prime counting function $\pi(x)$ in time $x^{1/2+o(1)}$; once one has this function, one can detect which intervals contain primes or not, and then, starting from Bertrand’s postulate and performing a binary search, one can locate a prime. The Lagarias-Odlyzko argument is based on approximating $\pi(x)$ by a certain integral of the Riemann zeta function, which one then approximates in turn by quadrature.
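The binary search component is simple to describe in code; the following is my own sketch of the search structure only (sympy.primepi is used as a slow stand-in for the fast Lagarias-Odlyzko computation of $\pi$).

```python
# Locating a prime with O(log x) calls to a prime-counting oracle pi().
from sympy import primepi

def find_prime_in(lo: int, hi: int) -> int:
    """Return a prime in (lo, hi], assuming pi(hi) > pi(lo)."""
    pi_lo = primepi(lo)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        pi_mid = primepi(mid)
        if pi_mid > pi_lo:        # some prime lies in (lo, mid]
            hi = mid
        else:                     # all primes of (lo, hi] lie in (mid, hi]
            lo, pi_lo = mid, pi_mid
    return hi                     # the interval (lo, hi] = {hi} contains a prime

x = 10**6
print(find_prime_in(x, 2 * x))    # Bertrand's postulate supplies the initial interval
```

The run time is therefore dominated by the cost of the prime counting oracle, which is why the emphasis of the paper is on computing $\pi(x)$ (or some proxy for it) quickly.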
We conjecture that one should be able to compute $\pi(x)$ in faster time, and in particular in time $x^{1/2-c}$ for some $c > 0$. Unfortunately, we were not able to achieve this; however, we do have a non-trivial method to compute the parity $\pi(x) \hbox{ mod } 2$ of $\pi(x)$ in such a time; a bit more generally (and oversimplifying a little bit), we can compute various projections $\sum_{p \leq x} t^p \hbox{ mod } 2, g(t)$ of the prime polynomial $\sum_{p \leq x} t^p$ modulo some small polynomials g. This seems close to being able to achieve the goal of detecting whether primes exist in a given interval, but unfortunately some additional work seems to be needed in that area. Still, the methods used to get the parity seem to be interesting (involving the Dirichlet hyperbola method, a piecewise approximation of the hyperbola, and the Strassen fast multiplication algorithm), and these techniques might be useful for some other problems.
Roughly speaking, the idea to compute the parity of $\pi(x)$ is as follows. The first observation is that, for square-free $n > 1$, the number $\tau(n)$ of divisors of n is equal to 2 when n is a prime, and a multiple of 4 otherwise. So to compute the parity of $\pi(x)$, it suffices to compute $\sum_{n \leq x} \tau(n)$ modulo 4 (or more precisely, the restriction of this sum to squarefree n).
The restriction to squarefree numbers turns out to be relatively easy to handle, so we ignore it. Since $\tau(n) = \sum_{a,b: ab=n} 1$, we can rewrite
$\displaystyle \sum_{n \leq x} \tau(n) = \# \{ (a,b) \in {\bf N}^2: ab \leq x \}.$
So it suffices to find an efficient way to count the number of lattice points below the hyperbola $\{ (a,b): ab = x \}$.
The classic way to do this is via the Dirichlet hyperbola method. This lets one rewrite the above expression as
$\displaystyle 2 \sum_{a \leq \sqrt{x}} \lfloor \frac{x}{a} \rfloor - \lfloor \sqrt{x} \rfloor^2$
(assuming $x$ is not a perfect square, where $\lfloor \cdot \rfloor$ is the greatest integer function). One can then compute this quantity in time $x^{1/2+o(1)}$ without difficulty.
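For illustration, here is a short sketch (my own, not code from the paper) of the hyperbola-method computation of $D(x) = \sum_{n \leq x} \tau(n)$, checked against the direct $O(x)$ computation; the quantity of interest in the argument above is this sum modulo 4 (restricted to squarefree $n$, a refinement not implemented here).

```python
# Dirichlet hyperbola method: D(x) = sum_{n<=x} tau(n) in about sqrt(x) operations.
from math import isqrt

def divisor_sum_direct(x: int) -> int:
    """D(x) by brute force: count all pairs (a, b) with a*b <= x."""
    return sum(x // a for a in range(1, x + 1))

def divisor_sum_hyperbola(x: int) -> int:
    """D(x) = 2 * sum_{a <= sqrt(x)} floor(x/a) - floor(sqrt(x))^2."""
    s = isqrt(x)
    return 2 * sum(x // a for a in range(1, s + 1)) - s * s

x = 10**6
assert divisor_sum_direct(x) == divisor_sum_hyperbola(x)
print(divisor_sum_hyperbola(x) % 4)
```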
To improve this to $x^{1/2-c}$ for some $c > 0$, we used some ideas of Vinogradov, based on using Diophantine approximation to find moderately long arithmetic progressions on which the map $a \mapsto \lfloor \frac{x}{a} \rfloor$ is linear, and thus easily summable.
The same method extends to the polynomial setting, though now, instead of summing an arithmetic progression, one has to find an efficient way to sum quadratic sums such as $\sum_{n=1}^{N} t^{\lfloor \alpha n^2 + \beta n \rfloor}$. This turns out to be possible by expressing this sum as a matrix product and then using fast matrix multiplication algorithms.
There is still some scope to improve the results and methods further; Ernie Croot is pursuing this with two undergraduate students, David Hollis and David Lowry.
In the previous notes, we defined the Lebesgue measure $m(E)$ of a Lebesgue measurable set $E \subset {\bf R}^d$, and set out the basic properties of this measure. In this set of notes, we use Lebesgue measure to define the Lebesgue integral $\int_{{\bf R}^d} f(x)\ dx$ of functions $f: {\bf R}^d \rightarrow [0,+\infty]$ or $f: {\bf R}^d \rightarrow {\bf C}$. Just as not every set can be measured by Lebesgue measure, not every function can be integrated by the Lebesgue integral; the function will need to be Lebesgue measurable. Furthermore, the function will either need to be unsigned (taking values on $[0,+\infty]$), or absolutely integrable.
To motivate the Lebesgue integral, let us first briefly review two simpler integration concepts. The first is that of an infinite summation $\sum_{n=1}^\infty c_n$ of a sequence of numbers $c_n$, which can be viewed as a discrete analogue of the Lebesgue integral. Actually, there are two overlapping, but different, notions of summation that we wish to recall here. The first is that of the unsigned infinite sum, when the $c_n$ lie in the extended non-negative real axis $[0,+\infty]$. In this case, the infinite sum can be defined as the limit of the partial sums
$\displaystyle \sum_{n=1}^\infty c_n = \lim_{N \rightarrow \infty} \sum_{n=1}^N c_n \ \ \ \ \ (1)$
or equivalently as a supremum of arbitrary finite partial sums:
$\displaystyle \sum_{n=1}^\infty c_n = \sup_{A \subset {\bf N}, A \hbox{ finite}} \sum_{n \in A} c_n. \ \ \ \ \ (2)$
The unsigned infinite sum always exists, but its value may be infinite, even when each term is individually finite (consider e.g. $\sum_{n=1}^\infty 1$).
The second notion of a summation is the absolutely summable infinite sum, in which the $c_n$ lie in the complex plane ${\bf C}$ and obey the absolute summability condition
$\displaystyle \sum_{n=1}^\infty |c_n| < \infty,$
where the left-hand side is of course an unsigned infinite sum. When this occurs, one can show that the partial sums $\sum_{n=1}^N c_n$ converge to a limit, and we can then define the infinite sum by the same formula (1) as in the unsigned case, though now the sum takes values in ${\bf C}$ rather than $[0,+\infty]$. The absolute summability condition confers a number of useful properties that are not obeyed by sums that are merely conditionally convergent; most notably, the value of an absolutely convergent sum is unchanged if one rearranges the terms in the series in an arbitrary fashion. Note also that the absolutely summable infinite sums can be defined in terms of the unsigned infinite sums by taking advantage of the formulae
$\displaystyle \sum_{n=1}^\infty c_n = \left(\sum_{n=1}^\infty \hbox{Re}(c_n)\right) + i \left(\sum_{n=1}^\infty \hbox{Im}(c_n)\right)$
for complex absolutely summable $c_n$, and
$\displaystyle \sum_{n=1}^\infty c_n = \sum_{n=1}^\infty c_n^+ - \sum_{n=1}^\infty c_n^-$
for real absolutely summable $c_n$, where $c_n^+ := \max(c_n,0)$ and $c_n^- := \max(-c_n,0)$ are the (magnitudes of the) positive and negative parts of $c_n$.
In an analogous spirit, we will first define an unsigned Lebesgue integral $\int_{{\bf R}^d} f(x)\ dx$ of (measurable) unsigned functions $f: {\bf R}^d \rightarrow [0,+\infty]$, and then use that to define the absolutely convergent Lebesgue integral $\int_{{\bf R}^d} f(x)\ dx$ of absolutely integrable functions $f: {\bf R}^d \rightarrow {\bf C}$. (In contrast to absolutely summable series, which cannot have any infinite terms, absolutely integrable functions will be allowed to occasionally become infinite. However, as we will see, this can only happen on a set of Lebesgue measure zero.)
To define the unsigned Lebesgue integral, we now turn to another more basic notion of integration, namely the Riemann integral $\int_a^b f(x)\ dx$ of a Riemann integrable function $f: [a,b] \rightarrow {\bf R}$. Recall from the prologue that this integral is equal to the lower Darboux integral
$\displaystyle \int_a^b f(x)\ dx = \underline{\int_a^b} f(x)\ dx := \sup_{g \leq f; g \hbox{ piecewise constant}} \hbox{p.c.} \int_a^b g(x)\ dx.$
(It is also equal to the upper Darboux integral; but much as the theory of Lebesgue measure is easiest to define by relying solely on outer measure and not on inner measure, the theory of the unsigned Lebesgue integral is easiest to define by relying solely on lower integrals rather than upper ones; the upper integral is somewhat problematic when dealing with “improper” integrals of functions that are unbounded or are supported on sets of infinite measure.) Compare this formula also with (2). The integral $\hbox{p.c.} \int_a^b g(x)\ dx$ is a piecewise constant integral, formed by breaking up the piecewise constant functions $g$ into finite linear combinations of indicator functions of intervals, and then measuring the length of each interval.
It turns out that virtually the same definition allows us to define a lower Lebesgue integral $\underline{\int_{{\bf R}^d}} f(x)\ dx$ of any unsigned function $f: {\bf R}^d \rightarrow [0,+\infty]$, simply by replacing intervals with the more general class of Lebesgue measurable sets (and thus replacing piecewise constant functions with the more general class of simple functions). If the function is Lebesgue measurable (a concept that we will define presently), then we refer to the lower Lebesgue integral simply as the Lebesgue integral. As we shall see, it obeys all the basic properties one expects of an integral, such as monotonicity and additivity; it also behaves quite well with respect to limits, as we shall see when we establish the two basic convergence theorems of the unsigned Lebesgue integral, namely Fatou’s lemma and the monotone convergence theorem.
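To see the “horizontal slicing” underlying this definition in action, here is a small numerical sketch (my own illustration, not part of the notes): for the unsigned function $f(x) = x^2$ on $[0,1]$, the simple function $f_m(x) := 2^{-m} \lfloor 2^m f(x) \rfloor$ lies below $f$, its integral is a finite sum of (height increment) times (measure of a level set), and these integrals increase towards the true value $1/3$ as $m \rightarrow \infty$.

```python
# Lower Lebesgue integral via simple functions, for f(x) = x^2 on [0, 1].
# The level sets {x in [0,1] : x^2 >= t} are intervals of measure 1 - sqrt(t),
# so the integral of the simple function f_m can be evaluated exactly.
from math import sqrt

def lower_lebesgue_approx(m: int) -> float:
    """Integral of f_m(x) = 2^{-m} * floor(2^m * x^2) over [0, 1]."""
    dt = 2.0 ** (-m)
    total, k = 0.0, 1
    while k * dt <= 1.0:
        total += dt * (1.0 - sqrt(k * dt))   # dt * measure({x : x^2 >= k*dt})
        k += 1
    return total

for m in (2, 6, 10, 14):
    print(m, lower_lebesgue_approx(m))       # increases towards 1/3 = 0.3333...
```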
Once we have the theory of the unsigned Lebesgue integral, we will then be able to define the absolutely convergent Lebesgue integral, similarly to how the absolutely convergent infinite sum can be defined using the unsigned infinite sum. This integral also obeys all the basic properties one expects, such as linearity and compatibility with the more classical Riemann integral; in subsequent notes we will see that it also obeys a fundamentally important convergence theorem, the dominated convergence theorem. This convergence theorem makes the Lebesgue integral (and its abstract generalisations to other measure spaces than ${\bf R}^d$) particularly suitable for analysis, as well as allied fields that rely heavily on limits of functions, such as PDE, probability, and ergodic theory.
Remark 1 This is not the only route to setting up the unsigned and absolutely convergent Lebesgue integrals. Stein-Shakarchi, for instance, proceeds slightly differently, beginning with the unsigned integral but then making an auxiliary stop at integration of functions that are bounded and are supported on a set of finite measure, before going to the absolutely convergent Lebesgue integral. Another approach (which will not be discussed here) is to take the metric completion of the Riemann integral with respect to the $L^1$ metric.
The Lebesgue integral and Lebesgue measure can be viewed as completions of the Riemann integral and Jordan measure respectively. This means three things. Firstly, the Lebesgue theory extends the Riemann theory: every Jordan measurable set is Lebesgue measurable, and every Riemann integrable function is Lebesgue integrable, with the measures and integrals from the two theories being compatible. Conversely, the Lebesgue theory can be approximated by the Riemann theory; as we saw in the previous notes, every Lebesgue measurable set can be approximated (in various senses) by simpler sets, such as open sets or elementary sets, and in a similar fashion, Lebesgue measurable functions can be approximated by nicer functions, such as Riemann integrable or continuous functions. Finally, the Lebesgue theory is complete in various ways; we will formalise this properly only in the next quarter when we study $L^p$ spaces, but the convergence theorems mentioned above already hint at this completeness. A related fact, known as Egorov’s theorem, asserts that a pointwise converging sequence of functions can be approximated as a (locally) uniformly converging sequence of functions. The facts listed here are manifestations of Littlewood’s three principles of real analysis, which capture much of the essence of the Lebesgue theory.
I’ve spent the last week or so reworking the first draft of my universality article for Mathematics Awareness Month, in view of the useful comments and feedback received on that draft here on this blog, as well as elsewhere. In fact, I ended up rewriting the article from scratch, and expanding it substantially, in order to focus on a more engaging and less technical narrative. I found that I had to use a substantially different mindset than the one I am used to having for technical expository writing; indeed, the exercise reminded me more of my high school English assignments than of my professional work. (This is perhaps a bad sign: English was not exactly my strongest subject as a student.)
The piece now has a title: “E pluribus unum: from complexity, universality”. This is a somewhat US-centric piece of wordplay, but Mathematics Awareness Month is, after all, a US-based initiative, even though awareness of mathematics certainly transcends national boundaries. Still, it is a trivial matter to modify the title later if a better proposal arises, and I am sure that if I send this text to be published, the editors may have some suggestions in this regard.
By coincidence, I moved up and expanded the other US-centric item – the discussion of the 2008 US presidential elections – to the front of the paper to play the role of the hook. I’ll try to keep the Commonwealth spelling conventions, though. :-)
I decided to cut out the discussion of the N-body problem for various values of N, in part due to the confusion over the notion of a “solution”; there is a nice mathematical story there, but perhaps one that gets in the way of the main story of universality.
I have added a fair number of relevant images, though some of them will have to be changed in the final version for copyright reasons. The narrow column format of this blog means that the image placement is not optimal, but I am sure that this can be rectified if this article is published professionally.
In the prologue for this course, we recalled the classical theory of Jordan measure on Euclidean spaces ${\bf R}^d$. This theory proceeded in the following stages:
- First, one defined the notion of a box $B$ and its volume $|B|$.
- Using this, one defined the notion of an elementary set $E$ (a finite union of boxes), and defined the elementary measure $m(E)$ of such sets.
- From this, one defined the inner and outer Jordan measures $m_{*,(J)}(E), m^{*,(J)}(E)$ of an arbitrary bounded set $E$. If those measures match, we say that $E$ is Jordan measurable, and call $m(E) := m_{*,(J)}(E) = m^{*,(J)}(E)$ the Jordan measure of $E$. (A crude numerical illustration of this inner and outer approximation is sketched just below.)
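As that numerical illustration (my own sketch, not part of the notes), one can approximate the inner and outer Jordan measures of the unit disc by counting dyadic grid squares that are contained in, respectively merely meet, the disc; both counts, scaled by the area of a grid square, approach the true area $\pi$.

```python
# Inner and outer Jordan approximations of the unit disc by dyadic grid squares.
def jordan_bounds(m: int):
    h = 2.0 ** (-m)                            # side length of the grid squares
    inner = outer = 0
    n = 2 ** (m + 1)                           # grid indices covering [-2, 2]^2
    for i in range(-n, n):
        for j in range(-n, n):
            x0, y0 = i * h, j * h
            # squared distances from the origin to the nearest and farthest
            # points of the square [x0, x0+h] x [y0, y0+h]
            near = min(max(0.0, x0), x0 + h) ** 2 + min(max(0.0, y0), y0 + h) ** 2
            far = max(abs(x0), abs(x0 + h)) ** 2 + max(abs(y0), abs(y0 + h)) ** 2
            if far <= 1.0:                     # square lies entirely inside the disc
                inner += 1
            if near <= 1.0:                    # square meets the disc
                outer += 1
    return inner * h * h, outer * h * h

for m in (2, 4, 6):
    print(m, *jordan_bounds(m))                # both values approach pi = 3.14159...
```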
As long as one is lucky enough to only have to deal with Jordan measurable sets, the theory of Jordan measure works well enough. However, as noted previously, not all sets are Jordan measurable, even if one restricts attention to bounded sets. In fact, we shall see later in these notes that there even exist bounded open sets, or compact sets, which are not Jordan measurable, so the Jordan theory does not cover many classes of sets of interest. Another class that it fails to cover is countable unions or intersections of sets that are already known to be measurable:
Exercise 1 Show that the countable union $\bigcup_{n=1}^\infty E_n$ or countable intersection $\bigcap_{n=1}^\infty E_n$ of Jordan measurable sets $E_1, E_2, \ldots \subset {\bf R}^d$ need not be Jordan measurable, even when bounded.
This creates problems with Riemann integrability (which, as we saw in the preceding notes, was closely related to Jordan measure) and pointwise limits:
Exercise 2 Give an example of a sequence of uniformly bounded, Riemann integrable functions $f_n: [0,1] \rightarrow {\bf R}$ for $n=1,2,3,\ldots$ that converge pointwise to a bounded function $f: [0,1] \rightarrow {\bf R}$ that is not Riemann integrable. What happens if we replace pointwise convergence with uniform convergence?
These issues can be rectified by using a more powerful notion of measure than Jordan measure, namely Lebesgue measure. To define this measure, we first tinker with the notion of the Jordan outer measure $m^{*,(J)}(E)$ of a set $E \subset {\bf R}^d$ (we adopt the convention that $m^{*,(J)}(E) = +\infty$ if $E$ is unbounded, thus $m^{*,(J)}$ now takes values in the extended non-negative reals $[0,+\infty]$, whose properties we will briefly review below). Observe from the finite additivity and subadditivity of elementary measure that we can also write the Jordan outer measure as
$\displaystyle m^{*,(J)}(E) = \inf_{B_1 \cup \ldots \cup B_k \supset E; B_1,\ldots,B_k \hbox{ boxes}} |B_1| + \ldots + |B_k|,$
i.e. the Jordan outer measure is the infimal cost required to cover $E$ by a finite union of boxes. (The natural number $k$ is allowed to vary freely in the above infimum.) We now modify this by replacing the finite union of boxes by a countable union of boxes, leading to the Lebesgue outer measure $m^*(E)$ of $E$:
$\displaystyle m^*(E) := \inf_{\bigcup_{n=1}^\infty B_n \supset E; B_1, B_2, \ldots \hbox{ boxes}} \sum_{n=1}^\infty |B_n|,$
thus the Lebesgue outer measure is the infimal cost required to cover $E$ by a countable union of boxes. Note that the countable sum $\sum_{n=1}^\infty |B_n|$ may be infinite, and so the Lebesgue outer measure $m^*(E)$ could well equal $+\infty$.
(Caution: the Lebesgue outer measure is sometimes denoted $m_*(E)$; this is for instance the case in Stein-Shakarchi.)
Clearly, we always have $m^*(E) \leq m^{*,(J)}(E)$ (since we can always pad out a finite union of boxes into an infinite union by adding an infinite number of empty boxes). But $m^*(E)$ can be a lot smaller:
Example 1 Let $E = \{x_1, x_2, x_3, \ldots\} \subset {\bf R}^d$ be a countable set. We know that the Jordan outer measure of $E$ can be quite large; for instance, in one dimension, $m^{*,(J)}({\bf Q})$ is infinite, and $m^{*,(J)}({\bf Q} \cap [-N,N]) = 2N$ since ${\bf Q} \cap [-N,N]$ has $[-N,N]$ as its closure (see Exercise 18 of the prologue). On the other hand, all countable sets $E$ have Lebesgue outer measure zero. Indeed, one simply covers $E$ by the degenerate boxes $\{x_1\}, \{x_2\}, \ldots$ of sidelength and volume zero.
Alternatively, if one does not like degenerate boxes, one can cover each $x_n$ by a cube $B_n$ of sidelength $\epsilon / 2^n$ (say) for some arbitrary $\epsilon > 0$, leading to a total cost of $\sum_{n=1}^\infty (\epsilon/2^n)^d$, which converges to $C_d \epsilon^d$ for some absolute constant $C_d$. As $\epsilon$ can be arbitrarily small, we see that the Lebesgue outer measure must be zero. We will refer to this type of trick as the $\epsilon / 2^n$ trick; it will be used many further times in this course.
From this example we see in particular that a set may be unbounded while still having Lebesgue outer measure zero, in contrast to Jordan outer measure.
As we shall see later in this course, Lebesgue outer measure (also known as Lebesgue exterior measure) is a special case of a more general concept known as an outer measure.
In analogy with the Jordan theory, we would also like to define a concept of “Lebesgue inner measure” to complement that of outer measure. Here, there is an asymmetry (which ultimately arises from the fact that elementary measure is subadditive rather than superadditive): one does not gain any increase in power in the Jordan inner measure by replacing finite unions of boxes with countable ones. But one can get a sort of Lebesgue inner measure by taking complements; see Exercise 18. This leads to one possible definition for Lebesgue measurability, namely the Carathéodory criterion for Lebesgue measurability, see Exercise 17. However, this is not the most intuitive formulation of this concept to work with, and we will instead use a different (but logically equivalent) definition of Lebesgue measurability. The starting point is the observation (see Exercise 5 of the prologue) that Jordan measurable sets can be efficiently contained in elementary sets, with an error that has small Jordan outer measure. In a similar vein, we will define Lebesgue measurable sets to be sets that can be efficiently contained in open sets, with an error that has small Lebesgue outer measure:
Definition 1 (Lebesgue measurability) A set $E \subset {\bf R}^d$ is said to be Lebesgue measurable if, for every $\epsilon > 0$, there exists an open set $U \subset {\bf R}^d$ containing $E$ such that $m^*(U \setminus E) \leq \epsilon$. If $E$ is Lebesgue measurable, we refer to $m(E) := m^*(E)$ as the Lebesgue measure of $E$ (note that this quantity may be equal to $+\infty$). We also write $m(E)$ as $m^d(E)$ when we wish to emphasise the dimension $d$.
(The intuition that measurable sets are almost open is also known as Littlewood’s first principle; this principle is a triviality with our current choice of definitions, though less so if one uses other, equivalent, definitions of Lebesgue measurability.)
As we shall see later, Lebesgue measure extends Jordan measure, in the sense that every Jordan measurable set is Lebesgue measurable, and the Lebesgue measure and Jordan measure of a Jordan measurable set are always equal. We will also see a few other equivalent descriptions of the concept of Lebesgue measurability.
In the notes below we will establish the basic properties of Lebesgue measure. Broadly speaking, this concept obeys all the intuitive properties one would ask of measure, so long as one restricts attention to countable operations rather than uncountable ones, and as long as one restricts attention to Lebesgue measurable sets. The latter is not a serious restriction in practice, as almost every set one actually encounters in analysis will be measurable (the main exceptions being some pathological sets that are constructed using the axiom of choice). In the next set of notes we will use Lebesgue measure to set up the Lebesgue integral, which extends the Riemann integral in the same way that Lebesgue measure extends Jordan measure; and the many pleasant properties of Lebesgue measure will be reflected in analogous pleasant properties of the Lebesgue integral (most notably the convergence theorems).
We will treat all dimensions $d$ equally here, but for the purposes of drawing pictures, we recommend that the reader set $d$ equal to $2$. However, for this topic at least, no additional mathematical difficulties will be encountered in the higher-dimensional case (though of course there are significant visual difficulties once $d$ exceeds $3$).
The material here is based on Sections 1.1-1.3 of the Stein-Shakarchi text, though it is arranged somewhat differently.
The month of April has been designated as Mathematics Awareness Month by the major American mathematics organisations (the AMS, ASA, MAA, and SIAM). I was approached to write a popular mathematics article for April 2011 (the theme for that month is “Mathematics and Complexity”). While I have written a fair number of expository articles (including several on this blog) aimed at a mathematical audience, I actually have not had much experience writing articles at the popular mathematics level, and so I found this task to be remarkably difficult. At this level of exposition, one not only needs to explain the facts, but also to tell a story; I have experience in the former but not in the latter.
I decided to write on the topic of universality – the phenomenon that the macroscopic behaviour of a dynamical system can be largely independent of the precise microscopic structure. Below the fold is a first draft of the article; I would definitely welcome feedback and corrections. It does not yet have any pictures, but I plan to rectify that in the final draft. It also does not have a title, but this will be easy to address later. But perhaps the biggest thing lacking right now is a narrative “hook”; I don’t yet have any good ideas as to how to make the story of universality compelling to a lay audience. Any suggestions in this regard would be particularly appreciated.
I have not yet decided where I would try to publish this article; in fact, I might just publish it here on this blog (and eventually, in one of the blog book compilations).
One of the most fundamental concepts in Euclidean geometry is that of the measure $m(E)$ of a solid body $E$ in one or more dimensions. In one, two, and three dimensions, we refer to this measure as the length, area, or volume of $E$ respectively. In the classical approach to geometry, the measure of a body was often computed by partitioning that body into finitely many components, moving around each component by a rigid motion (e.g. a translation or rotation), and then reassembling those components to form a simpler body which presumably has the same area. One could also obtain lower and upper bounds on the measure of a body by computing the measure of some inscribed or circumscribed body; this ancient idea goes all the way back to the work of Archimedes at least. Such arguments can be justified by an appeal to geometric intuition, or simply by postulating the existence of a measure $m(E)$ that can be assigned to all solid bodies $E$, and which obeys a collection of geometrically reasonable axioms. One can also justify the concept of measure on “physical” or “reductionistic” grounds, viewing the measure of a macroscopic body as the sum of the measures of its microscopic components.
With the advent of analytic geometry, however, Euclidean geometry became reinterpreted as the study of Cartesian products ${\bf R}^d$ of the real line ${\bf R}$. Using this analytic foundation rather than the classical geometrical one, it was no longer intuitively obvious how to define the measure $m(E)$ of a general subset $E$ of ${\bf R}^d$; we will refer to this (somewhat vaguely defined) problem of writing down the “correct” definition of measure as the problem of measure. (One can also pose the problem of measure on other domains than Euclidean space, such as a Riemannian manifold, but we will focus on the Euclidean case here for simplicity.)
To see why this problem exists at all, let us try to formalise some of the intuition for measure discussed earlier. The physical intuition of defining the measure of a body to be the sum of the measure of its component “atoms” runs into an immediate problem: a typical solid body would consist of an infinite (and uncountable) number of points, each of which has a measure of zero; and the product $\infty \cdot 0$ is indeterminate. To make matters worse, two bodies that have exactly the same number of points need not have the same measure. For instance, in one dimension, the intervals $A := [0,1]$ and $B := [0,2]$ are in one-to-one correspondence (using the bijection $x \mapsto 2x$ from $A$ to $B$), but of course $B$ is twice as long as $A$. So one can disassemble $A$ into an uncountable number of points and reassemble them to form a set of twice the length.
Of course, one can point to the infinite (and uncountable) number of components in this disassembly as being the cause of this breakdown of intuition, and restrict attention to just finite partitions. But one still runs into trouble here for a number of reasons, the most striking of which is the Banach-Tarski paradox, which shows that the unit ball $B$ in three dimensions can be disassembled into a finite number of pieces (in fact, just five pieces suffice), which can then be reassembled (after translating and rotating each of the pieces) to form two disjoint copies of the ball $B$. (The paradox only works in three dimensions and higher, for reasons having to do with the property of amenability; see this blog post for further discussion of this interesting topic, which is unfortunately too much of a digression from the current subject.)
Here, the problem is that the pieces used in this decomposition are highly pathological in nature; among other things, their construction requires use of the axiom of choice. (This is in fact necessary; there are models of set theory without the axiom of choice in which the Banach-Tarski paradox does not occur, thanks to a famous theorem of Solovay.) Such pathological sets almost never come up in practical applications of mathematics. Because of this, the standard solution to the problem of measure has been to abandon the goal of measuring every subset of ${\bf R}^d$, and instead to settle for only measuring a certain subclass of “non-pathological” subsets of ${\bf R}^d$, which are then referred to as the measurable sets. The problem of measure then divides into several subproblems:
- What does it mean for a subset $E$ of ${\bf R}^d$ to be measurable?
- If a set $E$ is measurable, how does one define its measure?
- What nice properties or axioms does measure (or the concept of measurability) obey?
- Are “ordinary” sets such as cubes, balls, polyhedra, etc. measurable?
- Does the measure of an “ordinary” set equal the “naive geometric measure” of such sets? (e.g. is the measure of an $a \times b$ rectangle equal to $ab$?)
These questions are somewhat open-ended in formulation, and there is no unique answer to them; in particular, one can expand the class of measurable sets at the expense of losing one or more nice properties of measure in the process (e.g. finite or countable additivity, translation invariance, or rotation invariance). However, there are two basic answers which, between them, suffice for most applications. The first is the concept of Jordan measure of a Jordan measurable set, which is a concept closely related to that of the Riemann integral (or Darboux integral). This concept is elementary enough to be systematically studied in an undergraduate analysis course, and suffices for measuring most of the “ordinary” sets (e.g. the area under the graph of a continuous function) in many branches of mathematics. However, when one turns to the type of sets that arise in analysis, and in particular those sets that arise as limits (in various senses) of other sets, it turns out that the Jordan concept of measurability is not quite adequate, and must be extended to the more general notion of Lebesgue measurability, with the corresponding notion of Lebesgue measure that extends Jordan measure. With the Lebesgue theory (which can be viewed as a completion of the Jordan-Darboux-Riemann theory), one keeps almost all of the desirable properties of Jordan measure, but with the crucial additional property that many features of the Lebesgue theory are preserved under limits (as exemplified in the fundamental convergence theorems of the Lebesgue theory, such as the monotone convergence theorem and the dominated convergence theorem, which do not hold in the Jordan-Darboux-Riemann setting). As such, they are particularly well suited for applications in analysis, where limits of functions or sets arise all the time. (There are other ways to extend Jordan measure and the Riemann integral, but the Lebesgue approach handles limits better than the other alternatives, and so has become the standard approach in analysis.)
In the rest of the course, we will formally define Lebesgue measure and the Lebesgue integral, as well as the more general concept of an abstract measure space and the associated integration operation. In the rest of this post, we will discuss the more elementary concepts of Jordan measure and the Riemann integral. This material will eventually be superseded by the more powerful theory to be treated in the main body of the course; but it will serve as motivation for that later material, as well as providing some continuity with the treatment of measure and integration in undergraduate analysis courses.