
Let {[a,b]} be a compact interval of positive length (thus {-\infty < a < b < +\infty}). Recall that a function {F: [a,b] \rightarrow {\bf R}} is said to be differentiable at a point {x \in [a,b]} if the limit

\displaystyle  F'(x) := \lim_{y \rightarrow x; y \in [a,b] \backslash \{x\}} \frac{F(y)-F(x)}{y-x} \ \ \ \ \ (1)

exists. In that case, we call {F'(x)} the strong derivative, classical derivative, or just derivative for short, of {F} at {x}. We say that {F} is everywhere differentiable, or differentiable for short, if it is differentiable at all points {x \in [a,b]}, and differentiable almost everywhere if it is differentiable at almost every point {x \in [a,b]}. If {F} is differentiable everywhere and its derivative {F'} is continuous, then we say that {F} is continuously differentiable.
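For instance, the function {F: [-1,1] \rightarrow {\bf R}} defined by {F(x) := |x|} is differentiable at every point of {[-1,1]} other than the origin, where the limit in (1) does not exist (the difference quotients tend to {+1} from the right and {-1} from the left); thus {F} is almost everywhere differentiable, but not everywhere differentiable.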

Remark 1 Much later in this sequence, when we cover the theory of distributions, we will see the notion of a weak derivative or distributional derivative, which can be applied to a much rougher class of functions and is in many ways more suitable than the classical derivative for doing “Lebesgue” type analysis (i.e. analysis centred around the Lebesgue integral, and in particular allowing functions to be uncontrolled, infinite, or even undefined on sets of measure zero). However, for now we will stick with the classical approach to differentiation.

Exercise 1 If {F: [a,b] \rightarrow {\bf R}} is everywhere differentiable, show that {F} is continuous and {F'} is measurable. If {F} is almost everywhere differentiable, show that the (almost everywhere defined) function {F'} is measurable (i.e. it is equal to an everywhere defined measurable function on {[a,b]} outside of a null set), but give an example to demonstrate that {F} need not be continuous.

Exercise 2 Give an example of a function {F: [a,b] \rightarrow {\bf R}} which is everywhere differentiable, but not continuously differentiable. (Hint: choose an {F} that vanishes quickly at some point, say at the origin {0}, but which also oscillates rapidly near that point.)

In single-variable calculus, the operations of integration and differentiation are connected by a number of basic theorems, starting with Rolle’s theorem.

Theorem 1 (Rolle’s theorem) Let {[a,b]} be a compact interval of positive length, and let {F: [a,b] \rightarrow {\bf R}} be a differentiable function such that {F(a)=F(b)}. Then there exists {x \in (a,b)} such that {F'(x)=0}.

Proof: By subtracting a constant from {F} (which does not affect differentiability or the derivative) we may assume that {F(a)=F(b)=0}. If {F} is identically zero then the claim is trivial, so assume that {F} is non-zero somewhere. By replacing {F} with {-F} if necessary, we may assume that {F} is positive somewhere, thus {\sup_{x \in [a,b]} F(x) > 0}. On the other hand, as {F} is continuous and {[a,b]} is compact, {F} must attain its maximum somewhere, thus there exists {x \in [a,b]} such that {F(x) \geq F(y)} for all {y \in [a,b]}. Then {F(x)} must be positive and so {x} cannot equal either {a} or {b}, and thus must lie in the interior. From the right limit of (1) we see that {F'(x) \leq 0}, while from the left limit we have {F'(x) \geq 0}. Thus {F'(x)=0} and the claim follows. \Box

Remark 2 Observe that the same proof also works if {F} is only differentiable in the interior {(a,b)} of the interval {[a,b]}, so long as it is continuous all the way up to the boundary of {[a,b]}.

Exercise 3 Give an example to show that Rolle’s theorem can fail if {F} is merely assumed to be almost everywhere differentiable, even if one adds the additional hypothesis that {F} is continuous. This example illustrates that everywhere differentiability is a significantly stronger property than almost everywhere differentiability. We will see further evidence of this fact later in these notes; there are many theorems that assert in their conclusion that a function is almost everywhere differentiable, but few that manage to conclude everywhere differentiability.

Remark 3 It is important to note that Rolle’s theorem only works in the real scalar case when {F} is real-valued, as it relies heavily on the least upper bound property for the domain {{\bf R}}. If, for instance, we consider complex-valued scalar functions {F: [a,b] \rightarrow {\bf C}}, then the theorem can fail; for instance, the function {F: [0,1] \rightarrow {\bf C}} defined by {F(x) := e^{2\pi i x} - 1} vanishes at both endpoints and is differentiable, but its derivative {F'(x) = 2\pi i e^{2\pi i x}} is never zero. (Rolle’s theorem does imply that the real and imaginary parts of the derivative {F'} both vanish somewhere, but the problem is that they don’t simultaneously vanish at the same point.) Similar remarks apply to functions taking values in a finite-dimensional vector space, such as {{\bf R}^n}.

One can easily amplify Rolle’s theorem to the mean value theorem:

Corollary 2 (Mean value theorem) Let {[a,b]} be a compact interval of positive length, and let {F: [a,b] \rightarrow {\bf R}} be a differentiable function. Then there exists {x \in (a,b)} such that {F'(x)=\frac{F(b)-F(a)}{b-a}}.

Proof: Apply Rolle’s theorem to the function {x \mapsto F(x) - \frac{F(b)-F(a)}{b-a} (x-a)}, which is differentiable, takes the value {F(a)} at both endpoints, and has derivative {F'(x) - \frac{F(b)-F(a)}{b-a}} at each {x \in [a,b]}. \Box

Remark 4 As Rolle’s theorem is only applicable to real scalar-valued functions, the more general mean value theorem is also only applicable to such functions.

Exercise 4 (Uniqueness of antiderivatives up to constants) Let {[a,b]} be a compact interval of positive length, and let {F: [a,b] \rightarrow {\bf R}} and {G: [a,b] \rightarrow {\bf R}} be differentiable functions. Show that {F'(x)=G'(x)} for every {x \in [a,b]} if and only if {F(x)=G(x)+C} for some constant {C \in {\bf R}} and all {x \in [a,b]}.

We can use the mean value theorem to deduce one of the fundamental theorems of calculus:

Theorem 3 (Second fundamental theorem of calculus) Let {F: [a,b] \rightarrow {\bf R}} be a differentiable function, such that {F'} is Riemann integrable. Then the Riemann integral {\int_a^b F'(x)\ dx} of {F'} is equal to {F(b) - F(a)}. In particular, we have {\int_a^b F'(x)\ dx = F(b)-F(a)} whenever {F} is continuously differentiable.

Proof: Let {\epsilon > 0}. By the definition of Riemann integrability, there exists a finite partition {a = t_0 < t_1 < \ldots < t_k = b} such that

\displaystyle  |\sum_{j=1}^k F'(t^*_j) (t_j - t_{j-1}) - \int_a^b F'(x)\ dx| \leq \epsilon

for every choice of {t^*_j \in [t_{j-1},t_j]}.

Fix this partition. From the mean value theorem, for each {1 \leq j \leq k} one can find {t^*_j \in [t_{j-1},t_j]} such that

\displaystyle  F'(t^*_j) (t_j - t_{j-1}) = F(t_j) - F(t_{j-1})

and thus by telescoping series

\displaystyle  \sum_{j=1}^k F'(t^*_j) (t_j - t_{j-1}) = \sum_{j=1}^k (F(t_j) - F(t_{j-1})) = F(b) - F(a),

which gives

\displaystyle  |(F(b)-F(a)) - \int_a^b F'(x)\ dx| \leq \epsilon.

Since {\epsilon > 0} was arbitrary, the claim follows. \Box

Remark 5 Even though the mean value theorem only holds for real scalar functions, the fundamental theorem of calculus holds for complex or vector-valued functions, as one can simply apply that theorem to each component of that function separately.
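For instance, for the function {F(x) := e^{2\pi i x} - 1} from Remark 3, one has {\int_0^1 F'(x)\ dx = \int_0^1 2\pi i e^{2\pi i x}\ dx = F(1) - F(0) = 0}, even though the integrand {F'} is nowhere zero; thus the fundamental theorem of calculus survives in the complex-valued setting even though the mean value theorem does not.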

Of course, we also have the other half of the fundamental theorem of calculus:

Theorem 4 (First fundamental theorem of calculus) Let {[a,b]} be a compact interval of positive length. Let {f: [a,b] \rightarrow {\bf C}} be a continuous function, and let {F: [a,b] \rightarrow {\bf C}} be the indefinite integral {F(x) := \int_a^x f(t)\ dt}. Then {F} is differentiable on {[a,b]}, with derivative {F'(x) = f(x)} for all {x \in [a,b]}. In particular, {F} is continuously differentiable.

Proof: It suffices to show that

\displaystyle  \lim_{h \rightarrow 0^+} \frac{F(x+h)-F(x)}{h} = f(x)

for all {x \in [a,b)}, and

\displaystyle  \lim_{h \rightarrow 0^-} \frac{F(x+h)-F(x)}{h} = f(x)

for all {x \in (a,b]}. After a change of variables, we can write

\displaystyle  \frac{F(x+h)-F(x)}{h} = \int_0^1 f(x+ht)\ dt

for any {x \in [a,b)} and any sufficiently small {h>0}, or any {x \in (a,b]} and any sufficiently small {h<0}. As {f} is continuous, the function {t \mapsto f(x+ht)} converges uniformly to {f(x)} on {[0,1]} as {h \rightarrow 0} (keeping {x} fixed). As the interval {[0,1]} is bounded, {\int_0^1 f(x+ht)\ dt} thus converges to {\int_0^1 f(x)\ dt = f(x)}, and the claim follows. \Box

Corollary 5 (Differentiation theorem for continuous functions) Let {f: [a,b] \rightarrow {\bf C}} be a continuous function on a compact interval. Then we have

\displaystyle  \lim_{h \rightarrow 0^+} \frac{1}{h} \int_{[x,x+h]} f(t)\ dt = f(x)

for all {x \in [a,b)},

\displaystyle  \lim_{h \rightarrow 0^+} \frac{1}{h} \int_{[x-h,x]} f(t)\ dt = f(x)

for all {x \in (a,b]}, and thus

\displaystyle  \lim_{h \rightarrow 0^+} \frac{1}{2h} \int_{[x-h,x+h]} f(t)\ dt = f(x)

for all {x \in (a,b)}.
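Indeed, writing {F(x) := \int_a^x f(t)\ dt}, the first limit is the right derivative of {F} at {x} computed in Theorem 4, the second limit equals {\lim_{h \rightarrow 0^+} \frac{F(x)-F(x-h)}{h}}, which is the corresponding left derivative, and the third limit is the average of the first two.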

In these notes we explore the question of the extent to which these theorems continue to hold when the differentiability or integrability conditions on the various functions {F, F', f} are relaxed. Among the results proven in these notes are

  • The Lebesgue differentiation theorem, which roughly speaking asserts that Corollary 5 continues to hold for almost every {x} if {f} is merely absolutely integrable, rather than continuous;
  • A number of differentiation theorems, which assert for instance that monotone, Lipschitz, or bounded variation functions in one dimension are almost everywhere differentiable; and
  • The second fundamental theorem of calculus for absolutely continuous functions.

The material here is loosely based on Chapter 3 of Stein-Shakarchi.

For these notes, X = (X, {\mathcal X}) is a fixed measurable space. We shall often omit the \sigma-algebra {\mathcal X}, and simply refer to elements of {\mathcal X} as measurable sets. Unless otherwise indicated, all subsets of X appearing below are restricted to be measurable, and all functions on X appearing below are also restricted to be measurable.

We let {\mathcal M}_+(X) denote the space of measures on X, i.e. functions \mu: {\mathcal X} \to [0,+\infty] which are countably additive and send \emptyset to 0. For reasons that will be clearer later, we shall refer to such measures as unsigned measures. In this section we investigate the structure of this space, together with the closely related spaces of signed measures and finite measures.

Suppose that we have already constructed one unsigned measure m \in {\mathcal M}_+(X) on X (e.g. think of X as the real line with the Borel \sigma-algebra, and let m be Lebesgue measure). Then we can obtain many further unsigned measures on X by multiplying m by a function f: X \to [0,+\infty], to obtain a new unsigned measure m_f, defined by the formula

m_f(E) := \int_X 1_E f\ dm. (1)

If f = 1_A is an indicator function, we write m\downharpoonright_A for m_{1_A}, and refer to this measure as the restriction of m to A.
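For example, if X is the real line with the Borel \sigma-algebra and m is Lebesgue measure, then taking f := 1_{[0,1]} gives m_f(E) = m(E \cap [0,1]) for every measurable set E; thus the restriction m\downharpoonright_{[0,1]} agrees with Lebesgue measure on subsets of [0,1], and assigns zero mass to any set disjoint from [0,1].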

Exercise 1. Show (using the monotone convergence theorem) that m_f is indeed an unsigned measure, and that for any g: X \to [0,+\infty], we have \int_X g\ dm_f = \int_X gf\ dm. We will express this relationship symbolically as

dm_f = f\ dm. (2) \diamond

Exercise 2. Let m be \sigma-finite. Given two functions f, g: X \to [0,+\infty], show that m_f = m_g if and only if f(x) = g(x) for m-almost every x. (Hint: as usual, first do the case when m is finite. The key point is that if f and g are not equal m-almost everywhere, then either f>g on a set of positive measure, or f<g on a set of positive measure.) Give an example to show that this uniqueness statement can fail if m is not \sigma-finite. (Hint: take a very simple example, e.g. let X consist of just one point.) \diamond

In view of Exercises 1 and 2, let us temporarily call a measure \mu differentiable with respect to m if d\mu = f dm (i.e. \mu = m_f) for some f: X \to [0,+\infty], and call f the Radon-Nikodym derivative of \mu with respect to m, writing

\displaystyle f = \frac{d\mu}{dm}; (3)

by Exercise 2, we see that if m is \sigma-finite, then this derivative is defined up to m-almost everywhere equivalence.
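For instance, if m is Lebesgue measure on [0,1] and \mu := m_f with f(x) := 2x, then \mu is differentiable with respect to m, with \frac{d\mu}{dm}(x) = 2x for m-almost every x, and \mu([0,t]) = t^2 for every t \in [0,1].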

Exercise 3. (Relationship between Radon-Nikodym derivative and classical derivative) Let m be Lebesgue measure on [0,+\infty), and let \mu be an unsigned measure that is differentiable with respect to m. If \mu has a continuous Radon-Nikodym derivative \frac{d\mu}{dm}, show that the function x \mapsto \mu( [0,x]) is differentiable, and \frac{d}{dx} \mu([0,x]) = \frac{d\mu}{dm}(x) for all x. \diamond

Exercise 4. Let X be at most countable. Show that every measure on X is differentiable with respect to counting measure \#. \diamond

If every measure were differentiable with respect to m (as is the case in Exercise 4), then we would have completely described the space of measures on X in terms of the non-negative functions on X (modulo m-almost everywhere equivalence). Unfortunately, not every measure is differentiable with respect to every other: for instance, if x is a point in X, then the only measures that are differentiable with respect to the Dirac measure \delta_x are the scalar multiples of that measure. We will explore the precise obstruction that prevents all measures from being differentiable, culminating in the Radon-Nikodym-Lebesgue theorem, which gives a satisfactory understanding of the situation in the \sigma-finite case (the case of interest for most applications).
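To verify the claim about the Dirac measure: for any f: X \to [0,+\infty] and any measurable set E, one has \int_X 1_E f\ d\delta_x = f(x) 1_E(x) = f(x) \delta_x(E), so the measure (\delta_x)_f is simply the scalar multiple f(x) \delta_x of \delta_x.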

In order to establish this theorem, it will be important to first study some other basic operations on measures, notably the ability to subtract one measure from another. This will necessitate the study of signed measures, to which we now turn.

[The material here is largely based on Folland's text, except for the last section.]

