Let {[a,b]} be a compact interval of positive length (thus {-\infty < a < b < +\infty}). Recall that a function {F: [a,b] \rightarrow {\bf R}} is said to be differentiable at a point {x \in [a,b]} if the limit

\displaystyle  F'(x) := \lim_{y \rightarrow x; y \in [a,b] \backslash \{x\}} \frac{F(y)-F(x)}{y-x} \ \ \ \ \ (1)

exists. In that case, we call {F'(x)} the strong derivative, classical derivative, or just derivative for short, of {F} at {x}. We say that {F} is everywhere differentiable, or differentiable for short, if it is differentiable at all points {x \in [a,b]}, and differentiable almost everywhere if it is differentiable at almost every point {x \in [a,b]}. If {F} is differentiable everywhere and its derivative {F'} is continuous, then we say that {F} is continuously differentiable.

Remark 1 Much later in this sequence, when we cover the theory of distributions, we will see the notion of a weak derivative or distributional derivative, which can be applied to a much rougher class of functions and is in many ways more suitable than the classical derivative for doing “Lebesgue” type analysis (i.e. analysis centred around the Lebesgue integral, and in particular allowing functions to be uncontrolled, infinite, or even undefined on sets of measure zero). However, for now we will stick with the classical approach to differentiation.

Exercise 1 If {F: [a,b] \rightarrow {\bf R}} is everywhere differentiable, show that {F} is continuous and {F'} is measurable. If {F} is almost everywhere differentiable, show that the (almost everywhere defined) function {F'} is measurable (i.e. it is equal to an everywhere defined measurable function on {[a,b]} outside of a null set), but give an example to demonstrate that {F} need not be continuous.

Exercise 2 Give an example of a function {F: [a,b] \rightarrow {\bf R}} which is everywhere differentiable, but not continuously differentiable. (Hint: choose an {F} that vanishes quickly at some point, say at the origin {0}, but which also oscillates rapidly near that point.)

In single-variable calculus, the operations of integration and differentiation are connected by a number of basic theorems, starting with Rolle’s theorem.

Theorem 1 (Rolle’s theorem) Let {[a,b]} be a compact interval of positive length, and let {F: [a,b] \rightarrow {\bf R}} be a differentiable function such that {F(a)=F(b)}. Then there exists {x \in (a,b)} such that {F'(x)=0}.

Proof: By subtracting a constant from {F} (which does not affect differentiability or the derivative) we may assume that {F(a)=F(b)=0}. If {F} is identically zero then the claim is trivial, so assume that {F} is non-zero somewhere. By replacing {F} with {-F} if necessary, we may assume that {F} is positive somewhere, thus {\sup_{x \in [a,b]} F(x) > 0}. On the other hand, as {F} is continuous and {[a,b]} is compact, {F} must attain its maximum somewhere, thus there exists {x \in [a,b]} such that {F(x) \geq F(y)} for all {y \in [a,b]}. Then {F(x)} must be positive and so {x} cannot equal either {a} or {b}, and thus must lie in the interior. From the right limit of (1) we see that {F'(x) \leq 0}, while from the left limit we have {F'(x) \geq 0}. Thus {F'(x)=0} and the claim follows. \Box

Remark 2 Observe that the same proof also works if {F} is only differentiable in the interior {(a,b)} of the interval {[a,b]}, so long as it is continuous all the way up to the boundary of {[a,b]}.

Exercise 3 Give an example to show that Rolle’s theorem can fail if {F} is merely assumed to be almost everywhere differentiable, even if one adds the additional hypothesis that {F} is continuous. This example illustrates that everywhere differentiability is a significantly stronger property than almost everywhere differentiability. We will see further evidence of this fact later in these notes; there are many theorems that assert in their conclusion that a function is almost everywhere differentiable, but few that manage to conclude everywhere differentiability.

Remark 3 It is important to note that Rolle’s theorem only works in the real scalar case, when {F} is real-valued, as it relies heavily on the least upper bound property for the domain {{\bf R}}. If, for instance, we consider complex-valued scalar functions {F: [a,b] \rightarrow {\bf C}}, then the theorem can fail; for instance, the function {F: [0,1] \rightarrow {\bf C}} defined by {F(x) := e^{2\pi i x} - 1} vanishes at both endpoints and is differentiable, but its derivative {F'(x) = 2\pi i e^{2\pi i x}} is never zero. (Rolle’s theorem does imply that the real and imaginary parts of the derivative {F'} both vanish somewhere, but the problem is that they don’t simultaneously vanish at the same point.) Similar remarks apply to functions taking values in a finite-dimensional vector space, such as {{\bf R}^n}.

One can easily amplify Rolle’s theorem to the mean value theorem:

Corollary 2 (Mean value theorem) Let {[a,b]} be a compact interval of positive length, and let {F: [a,b] \rightarrow {\bf R}} be a differentiable function. Then there exists {x \in (a,b)} such that {F'(x)=\frac{F(b)-F(a)}{b-a}}.

Proof: Apply Rolle’s theorem to the function {x \mapsto F(x) - \frac{F(b)-F(a)}{b-a} (x-a)}. \Box
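As a quick numerical illustration of Corollary 2 (the sample function {F(x) = x^3} on {[0,2]} and the bisection scheme below are choices made for this sketch only, not part of the theory), one can locate a mean value point by searching for a sign change of {F'(x) - \frac{F(b)-F(a)}{b-a}}, which is the derivative of the auxiliary function used in the proof:

```python
# Illustration only: F(x) = x^3 on [0, 2] is an arbitrary sample choice.
def mean_value_point(F, dF, a, b, tol=1e-12):
    """Locate x in (a, b) with dF(x) = (F(b) - F(a)) / (b - a) by bisection,
    assuming dF(x) - slope changes sign between a and b."""
    slope = (F(b) - F(a)) / (b - a)
    g = lambda t: dF(t) - slope      # derivative of the auxiliary function
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:      # sign change persists on [lo, mid]
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x = mean_value_point(lambda t: t**3, lambda t: 3 * t * t, 0.0, 2.0)
print(x)   # close to 2/sqrt(3), where 3x^2 equals the mean slope 4
```

Here the mean slope is {(2^3 - 0)/2 = 4}, so the returned point should be {2/\sqrt{3} \approx 1.1547}.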

Remark 4 As Rolle’s theorem is only applicable to real scalar-valued functions, the more general mean value theorem is also only applicable to such functions.

Exercise 4 (Uniqueness of antiderivatives up to constants) Let {[a,b]} be a compact interval of positive length, and let {F: [a,b] \rightarrow {\bf R}} and {G: [a,b] \rightarrow {\bf R}} be differentiable functions. Show that {F'(x)=G'(x)} for every {x \in [a,b]} if and only if {F(x)=G(x)+C} for some constant {C \in {\bf R}} and all {x \in [a,b]}.

We can use the mean value theorem to deduce one of the fundamental theorems of calculus:

Theorem 3 (Second fundamental theorem of calculus) Let {F: [a,b] \rightarrow {\bf R}} be a differentiable function, such that {F'} is Riemann integrable. Then the Riemann integral {\int_a^b F'(x)\ dx} of {F'} is equal to {F(b) - F(a)}. In particular, we have {\int_a^b F'(x)\ dx = F(b)-F(a)} whenever {F} is continuously differentiable.

Proof: Let {\epsilon > 0}. By the definition of Riemann integrability, there exists a finite partition {a = t_0 < t_1 < \ldots < t_k = b} such that

\displaystyle  |\sum_{j=1}^k F'(t^*_j) (t_j - t_{j-1}) - \int_a^b F'(x)\ dx| \leq \epsilon

for every choice of {t^*_j \in [t_{j-1},t_j]}.

Fix this partition. From the mean value theorem, for each {1 \leq j \leq k} one can find {t^*_j \in [t_{j-1},t_j]} such that

\displaystyle  F'(t^*_j) (t_j - t_{j-1}) = F(t_j) - F(t_{j-1})

and thus by telescoping series

\displaystyle  |(F(b)-F(a)) - \int_a^b F'(x)\ dx| \leq \epsilon.

Since {\epsilon > 0} was arbitrary, the claim follows. \Box
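The telescoping mechanism in this proof can be checked numerically (an informal illustration; the choices {F(x) = x^2}, the interval {[0,1]}, and a uniform partition are assumptions of this sketch). For this particular {F}, the mean value point of each subinterval {[c,d]} happens to be its midpoint, and the correspondingly tagged Riemann sum telescopes exactly:

```python
# Illustration only: for F(x) = x^2 the mean value theorem tag of [c, d] is
# the midpoint (c + d)/2, since 2 * (c + d)/2 = (d^2 - c^2) / (d - c).
n = 10
ts = [j / n for j in range(n + 1)]     # partition 0 = t_0 < ... < t_n = 1
F = lambda t: t * t
dF = lambda t: 2 * t
tags = [(ts[j - 1] + ts[j]) / 2 for j in range(1, n + 1)]
riemann = sum(dF(tags[j - 1]) * (ts[j] - ts[j - 1]) for j in range(1, n + 1))
print(riemann, F(1) - F(0))   # the sum telescopes to F(1) - F(0) = 1
```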

Remark 5 Even though the mean value theorem only holds for real scalar functions, the fundamental theorem of calculus holds for complex or vector-valued functions, as one can simply apply that theorem to each component of that function separately.

Of course, we also have the other half of the fundamental theorem of calculus:

Theorem 4 (First fundamental theorem of calculus) Let {[a,b]} be a compact interval of positive length. Let {f: [a,b] \rightarrow {\bf C}} be a continuous function, and let {F: [a,b] \rightarrow {\bf C}} be the indefinite integral {F(x) := \int_a^x f(t)\ dt}. Then {F} is differentiable on {[a,b]}, with derivative {F'(x) = f(x)} for all {x \in [a,b]}. In particular, {F} is continuously differentiable.

Proof: It suffices to show that

\displaystyle  \lim_{h \rightarrow 0^+} \frac{F(x+h)-F(x)}{h} = f(x)

for all {x \in [a,b)}, and

\displaystyle  \lim_{h \rightarrow 0^-} \frac{F(x+h)-F(x)}{h} = f(x)

for all {x \in (a,b]}. After a change of variables, we can write

\displaystyle  \frac{F(x+h)-F(x)}{h} = \int_0^1 f(x+ht)\ dt

for any {x \in [a,b)} and any sufficiently small {h>0}, or any {x \in (a,b]} and any sufficiently small {h<0}. As {f} is continuous, the function {t \mapsto f(x+ht)} converges uniformly to {f(x)} on {[0,1]} as {h \rightarrow 0} (keeping {x} fixed). As the interval {[0,1]} is bounded, {\int_0^1 f(x+ht)\ dt} thus converges to {\int_0^1 f(x)\ dt = f(x)}, and the claim follows. \Box
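Theorem 4 can also be seen numerically (an informal check; the integrand {f = \cos}, the base point, and the crude midpoint rule below are all choices of this sketch, not part of the theorem): the difference quotients of the indefinite integral approach {f(x)} as {h \rightarrow 0}.

```python
import math

# Illustration only: f = cos and the midpoint-rule discretisation are
# sample choices for this sketch.
def indefinite_integral(f, a, x, n=20000):
    """Midpoint-rule approximation to the integral of f over [a, x]."""
    h = (x - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

f, a, x0 = math.cos, 0.0, 1.0
for h in (1e-1, 1e-2, 1e-3):
    dq = (indefinite_integral(f, a, x0 + h) - indefinite_integral(f, a, x0)) / h
    print(h, dq)   # tends to f(1) = cos(1) as h shrinks
```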

Corollary 5 (Differentiation theorem for continuous functions) Let {f: [a,b] \rightarrow {\bf C}} be a continuous function on a compact interval. Then we have

\displaystyle  \lim_{h \rightarrow 0^+} \frac{1}{h} \int_{[x,x+h]} f(t)\ dt = f(x)

for all {x \in [a,b)},

\displaystyle  \lim_{h \rightarrow 0^+} \frac{1}{h} \int_{[x-h,x]} f(t)\ dt = f(x)

for all {x \in (a,b]}, and thus

\displaystyle  \lim_{h \rightarrow 0^+} \frac{1}{2h} \int_{[x-h,x+h]} f(t)\ dt = f(x)

for all {x \in (a,b)}.

In these notes we explore the question of the extent to which these theorems continue to hold when the differentiability or integrability conditions on the various functions {F, F', f} are relaxed. Among the results proven in these notes are

  • The Lebesgue differentiation theorem, which roughly speaking asserts that Corollary 5 continues to hold for almost every {x} if {f} is merely absolutely integrable, rather than continuous;
  • A number of differentiation theorems, which assert for instance that monotone, Lipschitz, or bounded variation functions in one dimension are almost everywhere differentiable; and
  • The second fundamental theorem of calculus for absolutely continuous functions.

The material here is loosely based on Chapter 3 of Stein-Shakarchi.

— 1. The Lebesgue differentiation theorem in one dimension —

The main objective of this section is to show

Theorem 6 (Lebesgue differentiation theorem, one-dimensional case) Let {f: {\bf R} \rightarrow {\bf C}} be an absolutely integrable function, and let {F: {\bf R} \rightarrow {\bf C}} be the definite integral {F(x) := \int_{[-\infty,x]} f(t)\ dt}. Then {F} is continuous and almost everywhere differentiable, and {F'(x)= f(x)} for almost every {x \in {\bf R}}.

This can be viewed as a variant of Corollary 5; the hypotheses are weaker because {f} is only assumed to be absolutely integrable, rather than continuous (and can live on the entire real line, and not just on a compact interval); but the conclusion is weaker too, because {F} is only found to be almost everywhere differentiable, rather than everywhere differentiable. (But such a relaxation of the conclusion is necessary at this level of generality; consider for instance the example when {f = 1_{[0,1]}}.)

The continuity is an easy exercise:

Exercise 5 Let {f: {\bf R} \rightarrow {\bf C}} be an absolutely integrable function, and let {F: {\bf R} \rightarrow {\bf C}} be the definite integral {F(x) := \int_{[-\infty,x]} f(t)\ dt}. Show that {F} is continuous.

The main difficulty is to show that {F'(x)=f(x)} for almost every {x \in {\bf R}}. This will follow from

Theorem 7 (Lebesgue differentiation theorem, second formulation) Let {f: {\bf R} \rightarrow {\bf C}} be an absolutely integrable function. Then

\displaystyle  \lim_{h \rightarrow 0^+} \frac{1}{h} \int_{[x,x+h]} f(t)\ dt = f(x) \ \ \ \ \ (2)

for almost every {x \in {\bf R}}, and

\displaystyle  \lim_{h \rightarrow 0^+} \frac{1}{h} \int_{[x-h,x]} f(t)\ dt = f(x) \ \ \ \ \ (3)

for almost every {x \in {\bf R}}.

Exercise 6 Show that Theorem 6 follows from Theorem 7.

We will just prove the first fact (2); the second fact (3) is similar (or can be deduced from (2) by replacing {f} with the reflected function {x \mapsto f(-x)}).

We are taking {f} to be complex valued, but it is clear from taking real and imaginary parts that it suffices to prove the claim when {f} is real-valued, and we shall thus assume this for the rest of the argument.

The conclusion (2) we want to prove is a convergence theorem – an assertion that for all functions {f} in a given class (in this case, the class of absolutely integrable functions {f: {\bf R} \rightarrow {\bf R}}), a certain sequence of linear expressions {T_h f} (in this case, the right averages {T_h f(x) = \frac{1}{h} \int_{[x,x+h]} f(t)\ dt}) converge in some sense (in this case, pointwise almost everywhere) to a specified limit (in this case, {f}). There is a general and very useful argument to prove such convergence theorems, known as the density argument. This argument requires two ingredients, which we state informally as follows:

  1. A verification of the convergence result for some “dense subclass” of “nice” functions {f}, such as continuous functions, smooth functions, simple functions, etc.. By “dense”, we mean that a general function {f} in the original class can be approximated to arbitrary accuracy in a suitable sense by a function in the nice subclass.
  2. A quantitative estimate that upper bounds the maximal fluctuation of the linear expressions {T_h f} in terms of the “size” of the function {f} (where the precise definition of “size” depends on the nature of the approximation in the first ingredient).

Once one has these two ingredients, it is usually not too hard to put them together to obtain the desired convergence theorem for general functions {f} (not just those in the dense subclass). We illustrate this with a simple example:

Proposition 8 (Translation is continuous in {L^1}) Let {f: {\bf R}^d \rightarrow {\bf C}} be an absolutely integrable function, and for each {h \in {\bf R}^d}, let {f_h: {\bf R}^d \rightarrow {\bf C}} be the shifted function

\displaystyle  f_h(x) := f(x-h).

Then {f_h} converges in {L^1} norm to {f} as {h \rightarrow 0}, thus

\displaystyle  \lim_{h \rightarrow 0} \int_{{\bf R}^d} |f_h(x) - f(x)|\ dx = 0.

Proof: We first verify this claim for a dense subclass of {f}, namely the functions {f} which are continuous and compactly supported (i.e. they vanish outside of a compact set). Such functions are uniformly continuous, and thus {f_h} converges uniformly to {f} as {h \rightarrow 0}. Furthermore, as {f} is compactly supported, the support of {f_h-f} stays uniformly bounded for {h} in a bounded set. From this we see that {f_h} also converges to {f} in {L^1} norm as required.

Next, we observe the quantitative estimate

\displaystyle  \int_{{\bf R}^d} |f_h(x) - f(x)|\ dx \leq 2 \int_{{\bf R}^d} |f(x)|\ dx \ \ \ \ \ (4)

for any {h \in {\bf R}^d}. This follows easily from the triangle inequality

\displaystyle  \int_{{\bf R}^d} |f_h(x) - f(x)|\ dx \leq \int_{{\bf R}^d} |f_h(x)|\ dx + \int_{{\bf R}^d} |f(x)|\ dx

together with the translation invariance of the Lebesgue integral:

\displaystyle  \int_{{\bf R}^d} |f_h(x)|\ dx = \int_{{\bf R}^d} |f(x)|\ dx.

Now we put the two ingredients together. Let {f: {\bf R}^d \rightarrow {\bf C}} be absolutely integrable, and let {\epsilon > 0} be arbitrary. Applying Littlewood’s second principle (Theorem 15 from Notes 2) to the absolutely integrable function {f}, we can find a continuous, compactly supported function {g: {\bf R}^d \rightarrow {\bf C}} such that

\displaystyle  \int_{{\bf R}^d} |f(x)-g(x)|\ dx \leq \epsilon.

Applying (4), we conclude that

\displaystyle  \int_{{\bf R}^d} |(f-g)_h(x)-(f-g)(x)|\ dx \leq 2\epsilon,

which we rearrange as

\displaystyle  \int_{{\bf R}^d} |(f_h-f)(x)-(g_h-g)(x)|\ dx \leq 2\epsilon.

By the dense subclass result, we also know that

\displaystyle  \int_{{\bf R}^d} |g_h(x)-g(x)|\ dx \leq \epsilon

for all {h} sufficiently close to zero. From the triangle inequality, we conclude that

\displaystyle  \int_{{\bf R}^d} |f_h(x)-f(x)|\ dx \leq 3\epsilon

for all {h} sufficiently close to zero, and the claim follows. \Box
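Proposition 8 can also be checked by hand for simple discontinuous examples. The following sketch (the choice {f = 1_{[0,1]}} in dimension one is an assumption of this illustration) uses the closed form {\int_{\bf R} |f_h - f| = 2\min(|h|,1)}, which indeed tends to zero with {h}:

```python
# Illustration only: f = 1_{[0,1]} is a sample choice; the formula below is
# the exact symmetric-difference computation for this particular f.
def l1_translation_gap(h):
    """Exact value of the integral of |1_{[0,1]}(x - h) - 1_{[0,1]}(x)| dx."""
    overlap = max(0.0, 1.0 - abs(h))   # length of [0,1] intersect [h, 1+h]
    return 2.0 * (1.0 - overlap)       # two symmetric-difference pieces

for h in (0.5, 0.1, 0.01, 0.001):
    print(h, l1_translation_gap(h))    # equals 2|h| here, tending to zero
```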

Remark 6 In the above application of the density argument, we proved the required quantitative estimate directly for all functions {f} in the original class of functions. However, it is also possible to use the density argument a second time and initially verify the quantitative estimate just for functions {f} in a nice subclass (e.g. continuous functions of compact support). In many cases, one can then extend that estimate to the general case by using tools such as Fatou’s lemma, which are particularly suited for showing that upper bound estimates are preserved with respect to limits.

Exercise 7 Let {f: {\bf R}^d \rightarrow {\bf C}}, {g: {\bf R}^d \rightarrow {\bf C}} be Lebesgue measurable functions such that {f} is absolutely integrable and {g} is essentially bounded (i.e. bounded outside of a null set). Show that the convolution {f*g: {\bf R}^d \rightarrow {\bf C}} defined by the formula

\displaystyle  f*g(x) = \int_{{\bf R}^d} f(y) g(x-y)\ dy

is well-defined (in the sense that the integrand on the right-hand side is absolutely integrable) and that {f*g} is a bounded, continuous function.

The above exercise is illustrative of a more general intuition, which is that convolutions tend to be smoothing in nature; the convolution {f*g} of two functions is usually at least as regular as, and often more regular than, either of the two factors {f, g}.
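A minimal concrete instance of this smoothing (the choice of factors is an assumption of this sketch, not a claim about the general theory): convolving the discontinuous function {1_{[0,1]}} with itself produces the continuous “tent” function, which can be written in closed form as the measure of an overlap of intervals.

```python
# Illustration only: 1_{[0,1]} * 1_{[0,1]} (x) is the measure of
# [0,1] intersect [x-1, x], a continuous piecewise-linear "tent"
# even though both factors are discontinuous.
def conv_indicator(x):
    return max(0.0, min(1.0, x) - max(0.0, x - 1.0))

for x in (-0.5, 0.5, 1.0, 1.5, 2.5):
    print(x, conv_indicator(x))   # values 0, 0.5, 1, 0.5, 0: continuous in x
```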

This smoothing phenomenon gives rise to an important fact, namely the Steinhaus theorem:

Exercise 8 (Steinhaus theorem) Let {E \subset {\bf R}^d} be a Lebesgue measurable set of positive measure. Show that the set {E-E := \{ x-y: x, y \in E \}} contains an open neighbourhood of the origin. (Hint: reduce to the case when {E} is bounded, and then apply the previous exercise to the convolution {1_E * 1_{-E}}, where {-E := \{ -y: y \in E \}}.)

Exercise 9 A homomorphism {f: {\bf R}^d \rightarrow {\bf C}} is a map with the property that {f(x+y)=f(x)+f(y)} for all {x,y \in {\bf R}^d}.

  • Show that all measurable homomorphisms are continuous. (Hint: for any disk {D} centered at the origin in the complex plane, show that {f^{-1}(z+D)} has positive measure for at least one {z \in {\bf C}}, and then use the Steinhaus theorem from the previous exercise.)
  • Show that {f} is a measurable homomorphism if and only if it takes the form {f(x_1,\ldots,x_d) = x_1 z_1 +\ldots + x_d z_d} for all {x_1,\ldots,x_d \in {\bf R}} and some complex coefficients {z_1,\ldots,z_d}. (Hint: first establish this for rational {x_1,\ldots,x_d}, and then use the previous part of this exercise.)
  • (For readers familiar with Zorn’s lemma) Show that there exist homomorphisms {f: {\bf R}^d \rightarrow {\bf C}} which are not of the form in the previous exercise. (Hint: view {{\bf R}^d} (or {{\bf C}}) as a vector space over the rationals {{\bf Q}}, and use the fact (from Zorn’s lemma) that every vector space – even an infinite-dimensional one – has at least one basis.) This gives an alternate construction of a non-measurable set to that given in previous notes.

Remark 7 One drawback with the density argument is that it gives convergence results which are qualitative rather than quantitative – there is no explicit bound on the rate of convergence. For instance, in Proposition 8, we know that for any {\epsilon > 0}, there exists {\delta > 0} such that {\int_{{\bf R}^d} |f_h(x)-f(x)|\ dx \leq \epsilon} whenever {|h| \leq \delta}, but we do not know exactly how {\delta} depends on {\epsilon} and {f}. Actually, the proof does eventually give such a bound, but it depends on “how measurable” the function {f} is, or more precisely how “easy” it is to approximate {f} by a “nice” function. To illustrate this issue, let’s work in one dimension and consider the function {f(x) := \sin(Nx) 1_{[0,2\pi]}(x)}, where {N \geq 1} is a large integer. On the one hand, {f} is bounded in the {L^1} norm uniformly in {N}: {\int_{\bf R} |f(x)|\ dx \leq 2\pi} (indeed, the left-hand side is equal to {4}). On the other hand, it is not hard to see that {\int_{\bf R} |f_{\pi/N}(x) - f(x)|\ dx \geq c} for some absolute constant {c>0}. Thus, if one wants {\int_{\bf R} |f_h(x) - f(x)|\ dx} to drop below {c}, one has to take {h} to be at most {\pi/N} in magnitude. Making {N} large, we thus see that the rate of convergence of {\int_{\bf R} |f_h(x) - f(x)|\ dx} to zero can be arbitrarily slow, even though {f} is bounded in {L^1}. The problem is that as {N} gets large, it becomes increasingly difficult to approximate {f} well by a “nice” function, by which we mean a uniformly continuous function with a reasonable modulus of continuity, due to the increasingly oscillatory nature of {f}. See this blog post for some further discussion of this issue, and what quantitative substitutes are available for such qualitative results.
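The computation behind this remark can be reproduced numerically (the grid size and the sampled values of {N} below are assumptions of this sketch): the {L^1} norm of {f} stays at {4} while {\int_{\bf R} |f_{\pi/N} - f|} stays near {8}, uniformly in {N}.

```python
import math

# Illustration only: a crude midpoint-rule check of the Remark; the grid
# size and the values of N are sample choices.
def l1_norm_and_gap(N, n=100000):
    """Approximate the integrals of |f| and |f_{pi/N} - f| for
    f(x) = sin(N x) 1_{[0, 2 pi]}(x)."""
    h = math.pi / N
    lo, hi = -h, 2 * math.pi + h     # contains the supports of f and f_h
    step = (hi - lo) / n
    f = lambda x: math.sin(N * x) if 0.0 <= x <= 2.0 * math.pi else 0.0
    norm = gap = 0.0
    for k in range(n):
        x = lo + (k + 0.5) * step
        norm += abs(f(x)) * step
        gap += abs(f(x - h) - f(x)) * step
    return norm, gap

for N in (1, 10, 100):
    norm, gap = l1_norm_and_gap(N)
    print(N, round(norm, 3), round(gap, 3))   # norm stays near 4, gap near 8
```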

Now we return to the Lebesgue differentiation theorem, and apply the density argument. The dense subclass result is already contained in Corollary 5, which asserts that (2) holds for all continuous functions {f}. The quantitative estimate we will need is the following special case of the Hardy-Littlewood maximal inequality:

Lemma 9 (One-sided Hardy-Littlewood maximal inequality) Let {f: {\bf R} \rightarrow {\bf C}} be an absolutely integrable function, and let {\lambda > 0}. Then

\displaystyle  m( \{ x \in {\bf R}: \sup_{h>0} \frac{1}{h} \int_{[x,x+h]} |f(t)|\ dt \geq \lambda \} ) \leq \frac{1}{\lambda} \int_{\bf R} |f(t)|\ dt.

We will prove this lemma shortly, but let us first see how this, combined with the dense subclass result, will give the Lebesgue differentiation theorem. Let {f: {\bf R} \rightarrow {\bf C}} be absolutely integrable, and let {\epsilon, \lambda > 0} be arbitrary. Then by Littlewood’s second principle, we can find a function {g: {\bf R} \rightarrow {\bf C}} which is continuous and compactly supported, with

\displaystyle  \int_{\bf R} |f(x)-g(x)|\ dx \leq \epsilon.

Applying the one-sided Hardy-Littlewood maximal inequality, we conclude that

\displaystyle  m( \{ x \in {\bf R}: \sup_{h>0} \frac{1}{h} \int_{[x,x+h]} |f(t)-g(t)|\ dt \geq \lambda \} ) \leq \frac{\epsilon}{\lambda}.

In a similar spirit, from Markov’s inequality we have

\displaystyle  m( \{ x \in {\bf R}: |f(x)-g(x)| \geq \lambda \} ) \leq \frac{\epsilon}{\lambda}.

By subadditivity, we conclude that for all {x \in {\bf R}} outside of a set {E} of measure at most {2\epsilon/\lambda}, one has both

\displaystyle  \frac{1}{h} \int_{[x,x+h]} |f(t)-g(t)|\ dt < \lambda \ \ \ \ \ (5)

and

\displaystyle  |f(x)-g(x)| < \lambda \ \ \ \ \ (6)

for all {h > 0}.

Now let {x \in {\bf R} \backslash E}. From the dense subclass result (Corollary 5) applied to the continuous function {g}, we have

\displaystyle  |\frac{1}{h} \int_{[x,x+h]} g(t)\ dt - g(x)| < \lambda

whenever {h} is sufficiently close to zero. Combining this with (5), (6), and the triangle inequality, we conclude that

\displaystyle  |\frac{1}{h} \int_{[x,x+h]} f(t)\ dt - f(x)| < 3\lambda

for all {h} sufficiently close to zero. In particular we have

\displaystyle  \limsup_{h \rightarrow 0} |\frac{1}{h} \int_{[x,x+h]} f(t)\ dt - f(x)| < 3\lambda

for all {x} outside of a set of measure at most {2\epsilon/\lambda}. Keeping {\lambda} fixed and sending {\epsilon} to zero, we conclude that

\displaystyle  \limsup_{h \rightarrow 0} |\frac{1}{h} \int_{[x,x+h]} f(t)\ dt - f(x)| < 3\lambda

for almost every {x \in {\bf R}}. If we then let {\lambda} go to zero along a countable sequence (e.g. {\lambda := 1/n} for {n=1,2,\ldots}), we conclude that

\displaystyle  \limsup_{h \rightarrow 0} |\frac{1}{h} \int_{[x,x+h]} f(t)\ dt - f(x)| = 0

for almost every {x \in {\bf R}}, and the claim follows.

The only remaining task is to establish the one-sided Hardy-Littlewood maximal inequality. We will do so by using the rising sun lemma:

Lemma 10 (Rising sun lemma) Let {[a,b]} be a compact interval, and let {F: [a,b] \rightarrow {\bf R}} be a continuous function. Then one can find an at most countable family of disjoint non-empty open intervals {I_n = (a_n,b_n)} in {[a,b]} with the following properties:

  1. For each {n}, either {F(a_n)=F(b_n)}, or else {a_n=a} and {F(b_n) \geq F(a_n)}.
  2. If {x \in [a,b]} does not lie in any of the intervals {I_n}, then one must have {F(y) \leq F(x)} for all {x \leq y \leq b}.

Remark 8 To explain the name “rising sun lemma”, imagine the graph {\{ (x, F(x)): x \in [a,b] \}} of {F} as depicting a hilly landscape, with the sun shining horizontally from the rightward infinity {(+\infty,0)} (or rising from the east, if you will). Those {x} for which {F(y) \leq F(x)} for all {x \leq y \leq b} are the locations on the landscape which are illuminated by the sun. The intervals {I_n} then represent the portions of the landscape that are in shadow.

This lemma is proven using the following basic fact:

Exercise 10 Show that any open subset {U} of {{\bf R}} can be written as the union of at most countably many disjoint non-empty open intervals, whose endpoints lie outside of {U}. (Hint: first show that every {x} in {U} is contained in a maximal open subinterval {(a,b)} of {U}, and that these maximal open subintervals are disjoint, with each such interval containing at least one rational number.)
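For finite unions of open intervals, the decomposition in this exercise reduces to a standard merge of overlapping intervals (this sketch handles only finitely many intervals; a general open set may need the countable argument from the hint):

```python
# Illustration only: finite unions; a general open set can require countably
# many components, as in the hint above.
def components(intervals):
    """Merge a finite list of open intervals (a, b) into the maximal disjoint
    open intervals with the same union.  Intervals that merely touch at an
    endpoint are NOT merged, since that shared endpoint is missing from the
    union of the open intervals."""
    out = []
    for a, b in sorted(intervals):
        if out and a < out[-1][1]:           # genuine overlap with last component
            out[-1][1] = max(out[-1][1], b)
        else:
            out.append([a, b])
    return [(a, b) for a, b in out]

print(components([(0, 1), (0.5, 2), (3, 4)]))   # [(0, 2), (3, 4)]
```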

Proof: (Proof of rising sun lemma) Let {U} be the set of all {x \in (a,b)} such that {F(y) > F(x)} for at least one {x < y < b}. As {F} is continuous, {U} is open, and so {U} is the union of at most countably many disjoint non-empty open intervals {I_n = (a_n,b_n)}, with the endpoints {a_n, b_n} lying outside of {U}.

The second conclusion of the rising sun lemma is clear from construction, so it suffices to establish the first. Suppose first that {I_n = (a_n,b_n)} is such that {a_n \neq a}. As the endpoint {a_n} does not lie in {U}, we must have {F(y) \leq F(a_n)} for all {a_n \leq y \leq b}; similarly we have {F(y) \leq F(b_n)} for all {b_n \leq y \leq b}. In particular we have {F(b_n) \leq F(a_n)}. By the continuity of {F}, it will then suffice to show that {F(b_n) \geq F(t)} for all {a_n < t < b_n}.

Suppose for contradiction that there were {a_n < t < b_n} with {F(b_n) < F(t)}. Let {A := \{ s \in [t,b]: F(s) \geq F(t)\}}; then {A} is a closed set that contains {t} but not {b}. Set {t_* := \sup(A)}, so that {t_* \in A}. Since {F(t_*) \geq F(t) > F(b_n)}, while {F(z) \leq F(b_n)} for all {b_n \leq z \leq b}, we must have {t_* < b_n}; thus {t_* \in [t,b_n) \subset I_n \subset U}, and so there exists {t_* < y \leq b} such that {F(y) > F(t_*)}. But then {F(y) > F(t)}, so {y} lies in {A}, contradicting the fact that {t_*} is the supremum of {A}.

The case when {a_n=a} is similar and is left to the reader; the only difference is that we can no longer assert that {F(y) \leq F(a_n)} for all {a_n \leq y \leq b}, and so do not have the upper bound {F(b_n) \leq F(a_n)}. \Box

Now we can prove the one-sided Hardy-Littlewood maximal inequality. By upwards monotonicity, it will suffice to show that

\displaystyle  m( \{ x \in [a,b]: \sup_{h>0; [x,x+h] \subset [a,b]} \frac{1}{h} \int_{[x,x+h]} |f(t)|\ dt \geq \lambda \} ) \leq \frac{1}{\lambda} \int_{\bf R} |f(t)|\ dt

for any compact interval {[a,b]}. By modifying {\lambda} by an epsilon, we may replace the non-strict inequality here with strict inequality:

\displaystyle  m( \{ x \in [a,b]: \sup_{h>0; [x,x+h] \subset [a,b]} \frac{1}{h} \int_{[x,x+h]} |f(t)|\ dt > \lambda \} ) \leq \frac{1}{\lambda} \int_{\bf R} |f(t)|\ dt \ \ \ \ \ (7)

Fix {[a,b]}. We apply the rising sun lemma to the function {F: [a,b] \rightarrow {\bf R}} defined as

\displaystyle  F(x) := \int_{[a,x]} |f(t)|\ dt - (x-a) \lambda.

By Exercise 5 (applied to the absolutely integrable function {|f| 1_{[a,b]}}), {F} is continuous, and so we can find an at most countable sequence of intervals {I_n = (a_n,b_n)} with the properties given by the rising sun lemma. From the second property of that lemma, we observe that

\displaystyle  \{ x \in [a,b]: \sup_{h>0; [x,x+h] \subset [a,b]} \frac{1}{h} \int_{[x,x+h]} |f(t)|\ dt > \lambda \} \subset \bigcup_n I_n,

since the property {\frac{1}{h} \int_{[x,x+h]} |f(t)|\ dt > \lambda} can be rearranged as {F(x+h) > F(x)}. By countable additivity, we may thus upper bound the left-hand side of (7) by {\sum_n (b_n-a_n)}. On the other hand, since {F(b_n)-F(a_n) \geq 0}, we have

\displaystyle  \int_{I_n} |f(t)|\ dt \geq \lambda (b_n-a_n)

and thus

\displaystyle  \sum_n (b_n-a_n) \leq \frac{1}{\lambda} \sum_n \int_{I_n} |f(t)|\ dt.

As the {I_n} are disjoint intervals in {[a,b]}, we may apply monotone convergence and monotonicity to conclude that

\displaystyle  \sum_n \int_{I_n} |f(t)|\ dt \leq \int_{[a,b]} |f(t)|\ dt,

and the claim follows.
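As a concrete check of Lemma 9 (the choice {f = 1_{[0,1]}} and the grids below are assumptions of this sketch, not part of the lemma): for this {f} the one-sided maximal function can be computed in closed form, namely {1/(1-x)} for {x < 0}, {1} on {[0,1]}, and {0} for {x > 1}, so the set in the lemma has measure exactly {\frac{1}{\lambda} \int_{\bf R} |f|} when {0 < \lambda \leq 1}; in particular the constant {\frac{1}{\lambda}} cannot be improved (compare Exercise 13 below). A crude grid search reproduces this:

```python
# Illustration only: f = 1_{[0,1]} and the step sizes are sample choices;
# the grids only approximate the supremum in the maximal function.
def one_sided_maximal(x, hs):
    """Approximate sup over h > 0 of the average of f = 1_{[0,1]} on [x, x+h]."""
    def avg(h):
        overlap = max(0.0, min(x + h, 1.0) - max(x, 0.0))
        return overlap / h
    return max(avg(h) for h in hs)

hs = [k / 100 for k in range(1, 401)]        # h ranging over (0, 4]
xs = [k / 100 for k in range(-300, 101)]     # grid on [-3, 1]
lam = 0.5
measure = sum(0.01 for x in xs if one_sided_maximal(x, hs) >= lam)
print(measure)   # close to 1/lam = 2, matching the closed-form set [-1, 1]
```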

Exercise 11 (Two-sided Hardy-Littlewood maximal inequality) Let {f: {\bf R} \rightarrow {\bf C}} be an absolutely integrable function, and let {\lambda > 0}. Show that

\displaystyle  m( \{ x \in {\bf R}: \sup_{I \ni x} \frac{1}{|I|} \int_{I} |f(t)|\ dt \geq \lambda \} ) \leq \frac{2}{\lambda} \int_{\bf R} |f(t)|\ dt,

where the supremum ranges over all intervals {I} of positive length that contain {x}.

Exercise 12 (Rising sun inequality) Let {f: {\bf R} \rightarrow {\bf R}} be an absolutely integrable function, and let {f^*: {\bf R} \rightarrow {\bf R}} be the one-sided signed Hardy-Littlewood maximal function

\displaystyle  f^*(x) := \sup_{h>0} \frac{1}{h} \int_{[x,x+h]} f(t)\ dt.

Establish the rising sun inequality

\displaystyle  \lambda m( \{ x \in {\bf R}: f^*(x) > \lambda \} ) \leq \int_{x: f^*(x) > \lambda} f(x)\ dx

for all real {\lambda} (note here that we permit {\lambda} to be zero or negative), and show that this inequality implies Lemma 9. (Hint: First do the {\lambda=0} case, by invoking the rising sun lemma.) See these lecture notes for some further discussion of inequalities of this type, and applications to ergodic theory (and in particular the maximal ergodic theorem).

Exercise 13 Show that the left and right-hand sides in Exercise 12 are in fact equal when {\lambda>0}. (Hint: one may first wish to try this in the case when {f} has compact support, in which case one can apply the rising sun lemma to a sufficiently large interval containing the support of {f}.)

— 2. The Lebesgue differentiation theorem in higher dimensions —

Now we extend the Lebesgue differentiation theorem to higher dimensions. Theorem 6 does not have an obvious high-dimensional analogue, but Theorem 7 does:

Theorem 11 (Lebesgue differentiation theorem in high dimensions) Let {f: {\bf R}^d \rightarrow {\bf C}} be an absolutely integrable function. Then for almost every {x \in {\bf R}^d}, one has

\displaystyle  \lim_{r \rightarrow 0} \frac{1}{m(B(x,r))} \int_{B(x,r)} |f(y) - f(x)|\ dy = 0 \ \ \ \ \ (8)

and

\displaystyle  \lim_{r \rightarrow 0} \frac{1}{m(B(x,r))} \int_{B(x,r)} f(y)\ dy = f(x),

where {B(x,r) := \{ y \in {\bf R}^d: |x-y| < r \}} is the open ball of radius {r} centred at {x}.

From the triangle inequality we see that

\displaystyle  |\frac{1}{m(B(x,r))} \int_{B(x,r)} f(y)\ dy - f(x)|

\displaystyle  = |\frac{1}{m(B(x,r))} \int_{B(x,r)} f(y) - f(x)\ dy|

\displaystyle \leq \frac{1}{m(B(x,r))} \int_{B(x,r)} |f(y) - f(x)|\ dy,

so we see that the first conclusion of Theorem 11 implies the second. A point {x} for which (8) holds is called a Lebesgue point of {f}; thus, for an absolutely integrable function {f}, almost every point in {{\bf R}^d} will be a Lebesgue point of {f}.

Exercise 14 Call a function {f: {\bf R}^d \rightarrow {\bf C}} locally integrable if, for every {x \in {\bf R}^d}, there exists an open neighbourhood of {x} on which {f} is absolutely integrable.

  • Show that {f} is locally integrable if and only if {\int_{B(0,r)} |f(x)|\ dx < \infty} for all {r>0}.
  • Show that Theorem 11 implies a generalisation of itself in which the condition of absolute integrability of {f} is weakened to local integrability.

Exercise 15 For each {h>0}, let {E_h} be a subset of {B(0,h)} with the property that {m(E_h) \geq c m(B(0,h))} for some {c>0} independent of {h}. Show that if {f: {\bf R}^d \rightarrow {\bf C}} is locally integrable, and {x} is a Lebesgue point of {f}, then

\displaystyle  \lim_{h \rightarrow 0} \frac{1}{m(E_h)} \int_{x+E_h} f(y)\ dy = f(x).

Conclude that Theorem 11 implies Theorem 7.

To prove Theorem 11, we use the density argument. The dense subclass case is easy:

Exercise 16 Show that Theorem 11 holds whenever {f} is continuous.

The quantitative estimate needed is the following:

Theorem 12 (Hardy-Littlewood maximal inequality) Let {f: {\bf R}^d \rightarrow {\bf C}} be an absolutely integrable function, and let {\lambda > 0}. Then

\displaystyle  m( \{ x \in {\bf R}^d: \sup_{r>0} \frac{1}{m(B(x,r))} \int_{B(x,r)} |f(y)|\ dy \geq \lambda \} )

\displaystyle  \leq \frac{C_d}{\lambda} \int_{{\bf R}^d} |f(y)|\ dy

for some constant {C_d>0} depending only on {d}.

Remark 9 The expression {\sup_{r>0} \frac{1}{m(B(x,r))} \int_{B(x,r)} |f(y)|\ dy} is known as the Hardy-Littlewood maximal function of {f}, and is often denoted {Mf(x)}. It is an important function in the field of (real-variable) harmonic analysis.
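To make the definition concrete, here is a minimal numerical sketch of the maximal function in one dimension (this code and its conventions are our own illustration, not part of the standard exposition): both {f} and the supremum over radii are discretised to a finite grid, and windows near the edge of the grid are clipped, so this is only an approximation to the true {Mf}.

```python
import numpy as np

def maximal_function(f, xs):
    """Discretised 1-d Hardy-Littlewood maximal function: for each grid
    point x, take the largest average of |f| over symmetric windows of
    grid points (the discrete analogue of the balls B(x,r)) centred at x.
    Windows are clipped at the ends of the grid, so values near the
    boundary are only approximate."""
    n = len(xs)
    # prefix sums let each window average be computed in O(1)
    csum = np.concatenate([[0.0], np.cumsum(np.abs(f))])
    Mf = np.zeros(n)
    for i in range(n):
        best = 0.0
        for r in range(1, n):
            lo, hi = max(0, i - r), min(n, i + r + 1)
            best = max(best, (csum[hi] - csum[lo]) / (hi - lo))
        Mf[i] = best
    return Mf
```

Since each window average of an indicator function lies between {0} and {1}, the sketch makes visible that {Mf \leq \sup |f|}, while for {f = 1_{[-1,1]}} the maximal function decays only like {1/|x|} at infinity, which is why {Mf} is typically not integrable even when {f} is.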

Exercise 17 Use the density argument to show that Theorem 12 implies Theorem 11.

In the one-dimensional case, this estimate was established via the rising sun lemma. Unfortunately, that lemma relied heavily on the ordered nature of {{\bf R}}, and does not have an obvious analogue in higher dimensions. Instead, we will use the following covering lemma. Given an open ball {B = B(x,r)} in {{\bf R}^d} and a real number {c > 0}, we write {cB := B(x,cr)} for the ball with the same centre as {B}, but {c} times the radius. (Note that this is slightly different from the set {c \cdot B := \{ cy: y \in B \}} – why?) Note that {|cB| = c^d |B|} for any open ball {B \subset {\bf R}^d} and any {c>0}.

Lemma 13 (Vitali-type covering lemma) Let {B_1,\ldots,B_n} be a finite collection of open balls in {{\bf R}^d} (not necessarily disjoint). Then there exists a subcollection {B'_1,\ldots,B'_m} of disjoint balls in this collection, such that

\displaystyle  \bigcup_{i=1}^n B_i \subset \bigcup_{j=1}^m 3 B'_j. \ \ \ \ \ (9)

In particular, by finite subadditivity,

\displaystyle  m( \bigcup_{i=1}^n B_i ) \leq 3^d \sum_{j=1}^m m(B'_j).

Proof: We use a greedy algorithm argument, selecting the balls {B'_i} to be as large as possible while remaining disjoint. More precisely, we run the following algorithm:

  • Step 0. Initialise {m=0} (so that, initially, there are no balls {B'_1,\ldots,B'_m} in the desired collection).
  • Step 1. Look at all the balls {B_j} that do not already intersect one of the {B'_1,\ldots,B'_m} (which, initially, will be all the balls {B_1,\ldots,B_n}). If there are no such balls, STOP. Otherwise, go on to Step 2.
  • Step 2. Locate the largest ball {B_j} that does not already intersect one of the {B'_1,\ldots,B'_m}. (If there are multiple largest balls with exactly the same radius, break the tie arbitrarily.) Add this ball to the collection {B'_1,\ldots,B'_m} by setting {B'_{m+1} := B_j} and then incrementing {m} to {m+1}. Then return to Step 1.

Note that at each iteration of this algorithm, the number of available balls amongst the {B_1,\ldots,B_n} drops by at least one (since each ball selected certainly intersects itself and so cannot be selected again). So this algorithm terminates in finite time. It is also clear from construction that the {B'_1,\ldots,B'_m} are a subcollection of the {B_1,\ldots,B_n} consisting of disjoint balls. So the only task remaining is to verify that (9) holds at the completion of the algorithm, i.e. to show that each ball {B_i} in the original collection is covered by the triples {3B'_j} of the subcollection.

For this, we argue as follows. Take any ball {B_i} in the original collection. Because the algorithm only halts when there are no more balls that are disjoint from the {B'_1,\ldots,B'_m}, the ball {B_i} must intersect at least one of the balls {B'_j} in the subcollection. Let {B'_j} be the first ball with this property, thus {B_i} is disjoint from {B'_1,\ldots,B'_{j-1}}, but intersects {B'_j}. Because {B'_j} was chosen to be largest amongst all balls that did not intersect {B'_1,\ldots,B'_{j-1}}, we conclude that the radius of {B_i} cannot exceed that of {B'_j}. From the triangle inequality, this implies that {B_i \subset 3B'_j}, and the claim follows. \Box
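The greedy selection in this proof is concrete enough to transcribe directly into code. The following Python sketch is our own illustrative transcription, in one dimension with balls stored as (centre, radius) pairs; sorting once by radius and scanning is equivalent to repeatedly picking the largest remaining disjoint ball.

```python
def intersects(b1, b2):
    """Two open balls intersect iff the distance between their centres
    is strictly less than the sum of their radii."""
    (c1, r1), (c2, r2) = b1, b2
    return abs(c1 - c2) < r1 + r2

def vitali_subcollection(balls):
    """Greedy algorithm from the proof of Lemma 13: scan the balls in
    decreasing order of radius (ties broken arbitrarily, here by the
    stable sort), keeping each ball that is disjoint from all balls
    already kept."""
    remaining = sorted(balls, key=lambda b: b[1], reverse=True)
    chosen = []
    for b in remaining:
        if not any(intersects(b, c) for c in chosen):
            chosen.append(b)
    return chosen
```

In one dimension, {B(c,r) \subset 3B(c',r')} precisely when {|c-c'| + r \leq 3r'}, so the covering property (9) can be verified numerically on examples.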

Exercise 18 Technically speaking, the above algorithmic argument was not phrased in the standard language of formal mathematical deduction, because in that language, any mathematical object (such as the natural number {m}) can only be defined once, and not redefined multiple times as is done in most algorithms. Rewrite the above argument in a way that avoids redefining any variable. (Hint: introduce a “time” variable {t}, and recursively construct families {B'_{1,t},\ldots,B'_{m_t,t}} of balls that represent the outcome of the above algorithm after {t} iterations (or {t_*} iterations, if the algorithm halted at some previous time {t_* < t}). For this particular algorithm, there are also more ad hoc approaches that exploit the relatively simple nature of the algorithm to allow for a less notationally complicated construction.) More generally, it is possible to use this time parameter trick to convert any construction involving a provably terminating algorithm into a construction that does not redefine any variable. (It is however dangerous to work with any algorithm that has an infinite run time, unless one has a suitably strong convergence result for the algorithm that allows one to take limits, either in the classical sense or in the more general sense of jumping to limit ordinals; in the latter case, one needs to use transfinite induction in order to ensure that the use of such algorithms is rigorous.)

Remark 10 The actual Vitali covering lemma is slightly different to this one, as the linked Wikipedia page shows. Actually there is a family of related covering lemmas which are useful for a variety of tasks in harmonic analysis, see for instance this book by de Guzmán for further discussion.

Now we can prove the Hardy-Littlewood inequality, which we will do with the constant {C_d := 3^d}. It suffices to verify the claim with strict inequality,

\displaystyle  m( \{ x \in {\bf R}^d: \sup_{r>0} \frac{1}{m(B(x,r))} \int_{B(x,r)} |f(y)|\ dy > \lambda \} ) \leq \frac{C_d}{\lambda} \int_{{\bf R}^d} |f(y)|\ dy

as the non-strict case then follows by perturbing {\lambda} slightly and then taking limits.

Fix {f} and {\lambda}. By inner regularity, it suffices to show that

\displaystyle  m( K )\leq \frac{3^d}{\lambda} \int_{{\bf R}^d} |f(y)|\ dy

whenever {K} is a compact set that is contained in {\{ x \in {\bf R}^d: \sup_{r>0} \frac{1}{m(B(x,r))} \int_{B(x,r)} |f(y)|\ dy > \lambda \}}.

By construction, for every {x \in K}, there exists an open ball {B(x,r)} such that

\displaystyle  \frac{1}{m(B(x,r))} \int_{B(x,r)} |f(y)|\ dy > \lambda. \ \ \ \ \ (10)

By compactness of {K}, we can cover {K} by a finite number {B_1,\ldots,B_n} of such balls. Applying the Vitali-type covering lemma, we can find a subcollection {B'_1,\ldots,B'_m} of disjoint balls such that

\displaystyle  m( \bigcup_{i=1}^n B_i ) \leq 3^d \sum_{j=1}^m m(B'_j).

By (10), on each ball {B'_j} we have

\displaystyle  m(B'_j) < \frac{1}{\lambda} \int_{B'_j} |f(y)|\ dy;

summing in {j} and using the disjointness of the {B'_j} we conclude that

\displaystyle  m( \bigcup_{i=1}^n B_i ) \leq \frac{3^d}{\lambda} \int_{{\bf R}^d} |f(y)|\ dy.

Since the {B_1,\ldots,B_n} cover {K}, we obtain Theorem 12 as desired.

Exercise 19 Improve the constant {3^d} in the Hardy-Littlewood maximal inequality to {2^d}. (Hint: observe that with the construction used to prove the Vitali covering lemma, the centres of the balls {B_i} are contained in {\bigcup_{j=1}^m 2B'_j} and not just in {\bigcup_{j=1}^m 3B'_j}. To exploit this observation one may need to first create an epsilon of room, as the centres are not by themselves sufficient to cover the required set.)

Remark 11 The optimal value of {C_d} is not known in general, although a fairly recent result of Melas gives the surprising conclusion that the optimal value of {C_1} is {C_1 = \frac{11+\sqrt{61}}{12} = 1.56\ldots}. It is known that {C_d} grows at most linearly in {d}, thanks to a result of Stein and Strömberg, but it is not known if {C_d} is bounded in {d} or grows as {d \rightarrow \infty}. See this blog post for some further discussion.

Exercise 20 (Dyadic maximal inequality) If {f: {\bf R}^d \rightarrow {\bf C}} is an absolutely integrable function, establish the dyadic Hardy-Littlewood maximal inequality

\displaystyle  m( \{ x \in {\bf R}^d: \sup_{x \in Q} \frac{1}{|Q|} \int_{Q} |f(y)|\ dy \geq \lambda \} ) \leq \frac{1}{\lambda} \int_{{\bf R}^d} |f(y)|\ dy

where the supremum ranges over all dyadic cubes {Q} that contain {x}. (Hint: the nesting property of dyadic cubes will be useful when it comes to the covering lemma stage of the argument, much as it was in Exercise 8 of Notes 1.)
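The dyadic structure makes this maximal function easy to compute exactly for discrete data, since dyadic intervals organise into a binary tree. The sketch below is our own discretisation (a signal on {2^K} equal cells of {[0,1)}, with one-dimensional dyadic intervals in place of cubes); it can be used to observe the weak-type bound with constant {1} numerically.

```python
import numpy as np

def dyadic_maximal(f):
    """Dyadic maximal function of a signal sampled on 2^K equal cells of
    [0,1): for each cell, the largest average of |f| over a dyadic
    interval containing that cell.  Works level by level up the binary
    tree of dyadic intervals."""
    absf = np.abs(np.asarray(f, dtype=float))
    n = len(absf)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    Mf = absf.copy()            # level 0: the cell itself
    level = absf
    while len(level) > 1:
        level = 0.5 * (level[0::2] + level[1::2])   # averages one level up
        Mf = np.maximum(Mf, np.repeat(level, n // len(level)))
    return Mf
```

For a single spike, the level sets of the dyadic maximal function realise the weak-type inequality with equality at every dyadic threshold, which shows the constant {1} cannot be improved.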

Exercise 21 (Besicovitch covering lemma in one dimension) Let {I_1,\ldots,I_n} be a finite family of open intervals in {{\bf R}} (not necessarily disjoint). Show that there exists a subfamily {I'_1,\ldots,I'_m} of intervals such that

  1. {\bigcup_{i=1}^n I_i = \bigcup_{j=1}^m I'_j}; and
  2. Each point {x \in {\bf R}} is contained in at most two of the {I'_j}.

(Hint: First refine the family of intervals so that no interval {I_i} is contained in the union of the other intervals. At that point, show that it is no longer possible for a point to be contained in three of the intervals.) There is a variant of this lemma that holds in higher dimensions, known as the Besicovitch covering lemma.
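To experiment with the hinted refinement step, here is an illustrative sketch (our own helper functions, not a full solution of the exercise) that repeatedly discards any interval contained in the union of the others; the exercise then asserts that no point lies in three of the surviving intervals.

```python
def covered(target, others):
    """Is the open interval `target` contained in the union of `others`?
    We merge overlapping open intervals into connected components; two
    intervals merge only when they genuinely overlap, since a shared
    endpoint would itself be uncovered by the open union."""
    a, b = target
    comps = []
    for (c, d) in sorted(others):
        if comps and c < comps[-1][1]:
            comps[-1][1] = max(comps[-1][1], d)
        else:
            comps.append([c, d])
    return any(c <= a and b <= d for c, d in comps)

def besicovitch_refine(intervals):
    """Discard intervals contained in the union of the rest, until no
    further interval can be discarded (the refinement step of the hint)."""
    kept = list(intervals)
    changed = True
    while changed:
        changed = False
        for i, I in enumerate(kept):
            if covered(I, kept[:i] + kept[i + 1:]):
                kept.pop(i)
                changed = True
                break
    return kept
```

After the pruning, the union of the family is unchanged, and (as the exercise asks one to show) the multiplicity of the cover drops to at most two.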

Exercise 22 Let {\mu} be a Borel measure (i.e. a countably additive measure on the Borel {\sigma}-algebra) on {{\bf R}}, such that {0 < \mu(I) < \infty} for every interval {I} of positive length. Assume that {\mu} is inner regular, in the sense that {\mu(E) = \sup_{K \subset E, \hbox{ compact}} \mu(K)} for every Borel measurable set {E}. (As it turns out, from the theory of Radon measures, all locally finite Borel measures have this property, but we will not prove this here; see Exercise 12 of these notes.) Establish the Hardy-Littlewood maximal inequality

\displaystyle  \mu( \{ x \in {\bf R}: \sup_{x \in I} \frac{1}{\mu(I)} \int_{I} |f(y)|\ d\mu(y) \geq \lambda \} ) \leq \frac{2}{\lambda} \int_{\bf R} |f(y)|\ d\mu(y)

for any absolutely integrable function {f \in L^1(\mu)}, where the supremum ranges over all open intervals {I} that contain {x}. Note that this essentially generalises Exercise 11, in which {\mu} is replaced by Lebesgue measure. (Hint: Repeat the proof of the usual Hardy-Littlewood maximal inequality, but use the Besicovitch covering lemma in place of the Vitali-type covering lemma. Why do we need the former lemma here instead of the latter?)

Exercise 23 (Cousin’s theorem) Prove Cousin’s theorem: given any function {\delta: [a,b] \rightarrow (0,+\infty)} on a compact interval {[a,b]} of positive length, there exists a partition {a = t_0 < t_1 < \ldots < t_k = b} with {k \geq 1}, together with real numbers {t^*_j \in [t_{j-1},t_j]} for each {1 \leq j \leq k}, such that {t_j - t_{j-1} \leq \delta(t^*_j)} for all {j}. (Hint: use the Heine-Borel theorem, which asserts that any open cover of {[a,b]} has a finite subcover, followed by the Besicovitch covering lemma.) This theorem is useful in a variety of applications related to the second fundamental theorem of calculus, as we shall see below. The positive function {\delta} is known as a gauge function.
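To see what a gauge and a {\delta}-fine tagged partition look like concretely, here is a small sketch following the standard bisection idea (our own construction, and only a heuristic: the three-candidate tag search below is not guaranteed to terminate for every gauge, although Cousin's theorem guarantees a partition always exists).

```python
def cousin_partition(a, b, delta):
    """Attempt to build a delta-fine tagged partition of [a, b]: a list
    of triples (c, d, t) with c <= t <= d and d - c <= delta(t).
    We try the endpoints and midpoint as tags, and bisect on failure.
    Termination is not guaranteed for adversarial gauges, but holds for
    gauges bounded below on [a, b] like the example in the test."""
    for t in (a, (a + b) / 2, b):
        if b - a <= delta(t):
            return [(a, b, t)]
    m = (a + b) / 2
    return cousin_partition(a, m, delta) + cousin_partition(m, b, delta)
```

Note how a gauge that is small near {a} forces the partition to use short subintervals there and allows longer ones elsewhere; this adaptivity is exactly what makes gauges useful for the second fundamental theorem of calculus.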

Now we turn to consequences of the Lebesgue differentiation theorem. Given a Lebesgue measurable set {E \subset {\bf R}^d}, call a point {x \in {\bf R}^d} a point of density for {E} if {\frac{m(E \cap B(x,r))}{m(B(x,r))} \rightarrow 1} as {r \rightarrow 0}. Thus, for instance, if {E = [-1,1] \backslash \{0\}}, then every point in {(-1,1)} (including the deleted point {0}) is a point of density for {E}, but the endpoints {-1, 1} (as well as the exterior of {E}) are not points of density. One can think of a point of density as being an “almost interior” point of {E}; it is not necessarily the case that one can fit a small ball {B(x,r)} centred at {x} inside of {E}, but one can fit most of that small ball inside {E}.
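The density claims for {E = [-1,1] \backslash \{0\}} above can be sanity-checked numerically; the following rough sampling-based sketch (the helper below is our own, and only approximates the true ratio) computes the density ratio at a fixed radius.

```python
import numpy as np

def density(indicator, x, r, n=100001):
    """Approximate m(E ∩ B(x,r)) / m(B(x,r)) in one dimension by
    sampling the indicator of E on an equispaced grid in [x-r, x+r]."""
    ys = np.linspace(x - r, x + r, n)
    return indicator(ys).mean()

# E = [-1,1] \ {0}: deleting a single point does not affect averages
E = lambda y: (np.abs(y) <= 1) & (y != 0)
```

At the deleted point {0} the ratio is essentially {1}, at the endpoint {1} it hovers near {1/2}, and outside the closure of {E} it is {0}, matching the discussion above.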

Exercise 24 If {E \subset {\bf R}^d} is Lebesgue measurable, show that almost every point in {E} is a point of density for {E}, and almost every point in the complement of {E} is not a point of density for {E}.

Exercise 25 Let {E \subset {\bf R}^d} be a measurable set of positive measure, and let {\epsilon > 0}.

  1. Using Exercise 15 and Exercise 24, show that there exists a cube {Q \subset {\bf R}^d} of positive sidelength such that {m(E \cap Q) > (1-\epsilon) m(Q)}.
  2. Give an alternate proof of the above claim that avoids the Lebesgue differentiation theorem. (Hint: reduce to the case when {E} is bounded, then approximate {E} by an almost disjoint union of cubes.)
  3. Use the above result to give an alternate proof of the Steinhaus theorem (Exercise 8).

Of course, one can replace cubes here by other comparable shapes, such as balls. (Indeed, a good principle to adopt in analysis is that cubes and balls are “equivalent up to constants”, in that a cube of some sidelength can be contained in a ball of comparable radius, and vice versa. This type of mental equivalence is analogous to, though not identical with, the famous dictum that a topologist cannot distinguish a doughnut from a coffee cup.)

Exercise 26

  • Give an example of a compact set {K \subset {\bf R}} of positive measure such that {m(K \cap I) < |I|} for every interval {I} of positive length. (Hint: first construct an open dense subset of {[0,1]} of measure strictly less than {1}.)
  • Give an example of a measurable set {E \subset {\bf R}} such that {0 < m(E \cap I) < |I|} for every interval {I} of positive length. (Hint: first work in a bounded interval, such as {(-1,2)}. The complement of the set {K} in the first example is the union of at most countably many open intervals, thanks to Exercise 10. Now fill in these open intervals and iterate.)

Exercise 27 (Approximations to the identity) Define a good kernel to be a measurable function {P: {\bf R}^d \rightarrow {\bf R}^+} which is non-negative, radial (which means that there is a function {\tilde P: [0,+\infty) \rightarrow {\bf R}^+} such that {P(x) = \tilde P(|x|)}), radially non-increasing (so that {\tilde P} is a non-increasing function), and has total mass {\int_{{\bf R}^d} P(x)\ dx} equal to {1}. The functions {P_t(x) := \frac{1}{t^d} P(\frac{x}{t})} for {t>0} are then said to be a good family of approximations to the identity.

  • Show that the heat kernels {P_t(x) := \frac{1}{(4\pi t^2)^{d/2}} e^{-|x|^2/4t^2}} and Poisson kernels {P_t(x) := c_d \frac{t}{(t^2+|x|^2)^{(d+1)/2}}} are good families of approximations to the identity, if the constant {c_d > 0} is chosen correctly (in fact one has {c_d = \Gamma((d+1)/2)/\pi^{(d+1)/2}}, but you are not required to establish this). (Note that we have modified the usual formulation of the heat kernel by replacing {t} with {t^2} in order to make it conform to the notational conventions used in this exercise.)
  • Show that if {P} is a good kernel, then

    \displaystyle  c_d < \sum_{n=-\infty}^\infty 2^{dn} \tilde P(2^n) \leq C_d

    for some constants {0 < c_d < C_d} depending only on {d}. (Hint: compare {P} with such “horizontal wedding cake” functions as {\sum_{n=-\infty}^\infty 1_{2^{n-1}< |x| \leq 2^n} \tilde P(2^n)}.)

  • Establish the quantitative upper bound

    \displaystyle  |\int_{{\bf R}^d} f(y) P_t(x-y)\ dy| \leq C'_d \sup_{r>0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\ dy

    for any absolutely integrable function {f} and some constant {C'_d > 0} depending only on {d}.

  • Show that if {f: {\bf R}^d \rightarrow {\bf C}} is absolutely integrable and {x} is a Lebesgue point of {f}, then the convolution

    \displaystyle  f*P_t(x) := \int_{{\bf R}^d} f(y) P_t(x-y)\ dy

    converges to {f(x)} as {t \rightarrow 0}. (Hint: split {f(y)} as the sum of {f(x)} and {f(y)-f(x)}.) In particular, {f*P_t} converges pointwise almost everywhere to {f}.
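As a numerical illustration of the last part, here is a sketch under our own discretisation choices (Riemann sums on a truncated window), with the heat kernel normalised with {t^2} as in this exercise:

```python
import numpy as np

def heat_kernel(x, t):
    """1-d heat kernel P_t(x) = (4 pi t^2)^{-1/2} exp(-|x|^2 / (4 t^2));
    a good kernel of total mass 1."""
    return np.exp(-x**2 / (4 * t**2)) / np.sqrt(4 * np.pi * t**2)

def convolve_at(f, x, t, lim=10.0, n=200001):
    """Riemann-sum approximation to the convolution (f * P_t)(x)."""
    ys = np.linspace(x - lim, x + lim, n)
    dy = ys[1] - ys[0]
    return (f(ys) * heat_kernel(x - ys, t)).sum() * dy

f = lambda y: (np.abs(y) <= 1).astype(float)  # f = indicator of [-1,1]
```

At the Lebesgue points {0} and {3} of {f} the convolution {f*P_t} is already close to {f} for small {t}; at the non-Lebesgue point {1}, the symmetry of the kernel produces the average value {1/2} instead.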

— 3. Almost everywhere differentiability —

As we see in undergraduate real analysis, not every continuous function {f: {\bf R} \rightarrow {\bf R}} is differentiable, with the standard example being the absolute value function {f(x) := |x|}, which is continuous but not differentiable at the origin {x=0}. Of course, this function is still almost everywhere differentiable. With a bit more effort, one can construct continuous functions that are in fact nowhere differentiable:

Exercise 28 (Weierstrass function) Let {F: {\bf R} \rightarrow {\bf R}} be the function

\displaystyle  F(x) := \sum_{n=1}^\infty 4^{-n} \sin(8^n \pi x).

  1. Show that {F} is well-defined (in the sense that the series is absolutely convergent) and that {F} is a bounded continuous function.
  2. Show that for every {8}-dyadic interval {[\frac{j}{8^n}, \frac{j+1}{8^n}]} with {n \geq 1}, one has {|F(\frac{j+1}{8^n}) - F(\frac{j}{8^n})| \geq c 4^{-n}} for some absolute constant {c>0}.
  3. Show that {F} is not differentiable at any point {x \in {\bf R}}. (Hint: argue by contradiction and use the previous part of this exercise.) Note that it is not enough to formally differentiate the series term by term and observe that the resulting series is divergent – why not?
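The following sketch (with our own truncation choices) lets one experiment with the partial sums of this series. The assertions in a test of it check only the safe facts: boundedness by the geometric series {\sum_{n=1}^\infty 4^{-n} = 1/3}, and the uniform Cauchy property of the partial sums, whose tail beyond level {N} is bounded by {4^{-N}/3}.

```python
import numpy as np

def F_partial(x, N):
    """Partial sum sum_{n=1}^N 4^{-n} sin(8^n pi x) of the
    Weierstrass-type series; the partial sums converge uniformly, since
    the tail beyond level N is bounded by 4^{-N}/3."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for n in range(1, N + 1):
        total += 4.0**-n * np.sin(8.0**n * np.pi * x)
    return total
```

One can also tabulate the difference quotients {(F(x+8^{-n})-F(x)) \cdot 8^n} at a fixed {x} for increasing {n} and watch them fail to settle down, in the spirit of part 3 of the exercise.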

The difficulty here is that a continuous function can still contain a large amount of oscillation, which can lead to breakdown of differentiability. However, if one can somehow limit the amount of oscillation present, then one can often recover a fair bit of differentiability. For instance, we have

Theorem 14 (Monotone differentiation theorem) Any function {F: {\bf R} \rightarrow {\bf R}} which is monotone (either monotone non-decreasing or monotone non-increasing) is differentiable almost everywhere.

Exercise 29 Show that every monotone function is measurable.

To prove this theorem, we just treat the case when {F} is monotone non-decreasing, as the non-increasing case is similar (and can be deduced from the non-decreasing case by replacing {F} with {-F}).

We also first focus on the case when {F} is continuous, as this allows us to use the rising sun lemma. To understand the differentiability of {F}, we introduce the four Dini derivatives of {F} at {x}:

  • The upper right derivative {\overline{D^+} F(x) := \limsup_{h \rightarrow 0^+} \frac{F(x+h)-F(x)}{h}};
  • The lower right derivative {\underline{D^+} F(x) := \liminf_{h \rightarrow 0^+} \frac{F(x+h)-F(x)}{h}};
  • The upper left derivative {\overline{D^-} F(x) := \limsup_{h \rightarrow 0^-} \frac{F(x+h)-F(x)}{h}};
  • The lower left derivative {\underline{D^-} F(x) := \liminf_{h \rightarrow 0^-} \frac{F(x+h)-F(x)}{h}}.

Regardless of whether {F} is differentiable or not (or even whether {F} is continuous or not), the four Dini derivatives always exist and take values in the extended real line {[-\infty,\infty]}. (If {F} is only defined on an interval {[a,b]}, rather than all of {{\bf R}}, then some of the Dini derivatives may not exist at the endpoints; but the endpoints form a set of measure zero and will not impact our analysis.)

Exercise 30 If {F} is monotone, show that the four Dini derivatives of {F} are measurable. (Hint: the main difficulty is to reformulate the derivatives so that {h} ranges over a countable set rather than an uncountable one.)

A function {F} is differentiable at {x} precisely when the four derivatives are equal and finite:

\displaystyle  \overline{D^+} F(x) = \underline{D^+} F(x) = \overline{D^-} F(x) = \underline{D^-} F(x) \in (-\infty,+\infty). \ \ \ \ \ (11)

We also have the trivial inequalities

\displaystyle  \underline{D^+} F(x) \leq \overline{D^+} F(x); \quad \underline{D^-} F(x) \leq \overline{D^-} F(x).

If {F} is non-decreasing, all these quantities are non-negative, thus

\displaystyle  0 \leq \underline{D^+} F(x) \leq \overline{D^+} F(x); \quad 0 \leq \underline{D^-} F(x) \leq \overline{D^-} F(x).
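To get a feel for these four quantities, here is a small sketch (our own helper, which approximates the lim sup and lim inf by the extrema of difference quotients over a finite decreasing mesh of increments; this is only a heuristic, since a finite mesh cannot capture arbitrarily fine oscillation):

```python
import math

def dini(F, x, hs):
    """Approximate the four Dini derivatives of F at x, in the order
    (upper right, lower right, upper left, lower left), using a finite
    mesh hs of positive increments."""
    right = [(F(x + h) - F(x)) / h for h in hs]
    left = [(F(x - h) - F(x)) / (-h) for h in hs]
    return max(right), min(right), max(left), min(left)
```

For {F(x) = |x|} at {0} the four values are {(1,1,-1,-1)}: all finite, but not all equal, so {F} is not differentiable there. For {F(x) = x \sin(1/x)} at {0}, the upper and lower right derivatives straddle {\pm 1}, showing that even the one-sided quotients need not converge.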

The one-sided Hardy-Littlewood maximal inequality has an analogue in this setting:

Lemma 15 (One-sided Hardy-Littlewood inequality) Let {F: [a,b] \rightarrow {\bf R}} be a continuous monotone non-decreasing function, and let {\lambda > 0}. Then we have

\displaystyle  m( \{ x \in [a,b]: \overline{D^+} F(x) \geq \lambda \} ) \leq \frac{F(b)-F(a)}{\lambda}.

Similarly for the other three Dini derivatives of {F}.

If {F} is not assumed to be continuous, then we have the weaker inequality

\displaystyle  m( \{ x \in [a,b]: \overline{D^+} F(x) \geq \lambda \} ) \leq C\frac{F(b)-F(a)}{\lambda}

for some absolute constant {C>0}.

Remark 12 Note that if one naively applies the fundamental theorems of calculus, one can formally see that the first part of Lemma 15 is equivalent to Lemma 12. We cannot however use this argument rigorously because we have not established the necessary fundamental theorems of calculus to do this. Nevertheless, we can borrow the proof of Lemma 12 without difficulty to use here, and this is exactly what we will do.

Proof: We just prove the continuous case and leave the discontinuous case as an exercise.

It suffices to prove the claim for {\overline{D^+} F}; by reflection (replacing {F(x)} with {-F(-x)}, and {[a,b]} with {[-b,-a]}), the same argument works for {\overline{D^-} F}, and then this trivially implies the same inequalities for {\underline{D^+} F} and {\underline{D^-} F}. By modifying {\lambda} by an epsilon, and dropping the endpoints from {[a,b]} as they have measure zero, it suffices to show that

\displaystyle  m( \{ x \in (a,b): \overline{D^+} F(x) > \lambda \} ) \leq \frac{F(b)-F(a)}{\lambda}.

We may apply the rising sun lemma (Lemma 10) to the continuous function {G(x) := F(x) - \lambda x}. This gives us an at most countable family of disjoint intervals {I_n = (a_n,b_n)} in {(a,b)}, such that {G(b_n) \geq G(a_n)} for each {n}, and such that {G(y) \leq G(x)} whenever {a \leq x \leq y\leq b} and {x} lies outside of all of the {I_n}.

Observe that if {x \in (a,b)}, and {G(y) \leq G(x)} for all {x \leq y \leq b}, then {\overline{D^+} F(x) \leq \lambda}. Thus we see that the set {\{ x \in (a,b): \overline{D^+} F(x) > \lambda \}} is contained in the union of the {I_n}, and so by countable additivity

\displaystyle  m( \{ x \in (a,b): \overline{D^+} F(x) > \lambda \} ) \leq \sum_n b_n - a_n.

But we can rearrange the inequality {G(b_n) \geq G(a_n)} as {b_n - a_n \leq \frac{F(b_n)-F(a_n)}{\lambda}}. From telescoping series and the monotone nature of {F} we have {\sum_n F(b_n)-F(a_n) \leq F(b)-F(a)} (this is easiest to prove by first working with a finite subcollection of the intervals {(a_n,b_n)}, and then taking suprema), and the claim follows.

The discontinuous case is left as an exercise. \Box

Exercise 31 Prove Lemma 15 in the discontinuous case. (Hint: the rising sun lemma is no longer available, but one can use either the Vitali-type covering lemma (which will give {C=3}) or the Besicovitch lemma (which will give {C=2}), by modifying the proof of Theorem 12.)

Sending {\lambda \rightarrow \infty} in the above lemma (cf. Exercise 18 from Notes 2), and then sending {[a,b]} to {{\bf R}}, we conclude as a corollary that all four Dini derivatives of a continuous monotone non-decreasing function are finite almost everywhere. So to prove Theorem 14 for continuous monotone non-decreasing functions, it suffices to show that (11) holds for almost every {x}. In view of the trivial inequalities, it suffices to show that {\overline{D^+} F(x) \leq \underline{D^-} F(x)} and {\overline{D^-} F(x) \leq \underline{D^+} F(x)} for almost every {x}. We will just show the first inequality, as the second follows by replacing {F} with its reflection {x \mapsto -F(-x)}. It will suffice to show that for every pair {0 < r < R} of real numbers, the set

\displaystyle  E = E_{r,R} := \{ x \in {\bf R}: \overline{D^+} F(x) > R > r > \underline{D^-} F(x) \}

is a null set, since by letting {R, r} range over rationals with {R>r > 0} and taking countable unions, we would conclude that the set {\{ x \in {\bf R}: \overline{D^+} F(x) > \underline{D^-} F(x) \}} is a null set (recall that the Dini derivatives are all non-negative when {F} is non-decreasing), and the claim follows.

Clearly {E} is a measurable set. To prove that it is null, we will establish the following estimate:

Lemma 16 ({E} has density less than one) For any interval {[a,b]} and any {0 < r < R}, one has {m( E_{r,R} \cap [a,b] ) \leq \frac{r}{R} |b-a|}.

Indeed, this lemma implies that {E} has no points of density, which by Exercise 24 forces {E} to be a null set.

Proof: We begin by applying the rising sun lemma to the function {G(x) := r x + F(-x)} on {[-b,-a]}; the large number of negative signs present here is needed in order to properly deal with the lower left Dini derivative {\underline{D^-} F}. This gives an at most countable family of disjoint intervals {-I_n = (-b_n,-a_n)} in {(-b,-a)}, such that {G(-a_n) \geq G(-b_n)} for all {n}, and such that {G(-y) \leq G(-x)} whenever {-x \leq -y \leq -a} and {-x \in (-b,-a)} lies outside of all of the {-I_n}. Observe that if {x \in (a,b)}, and {G(-y) \leq G(-x)} for all {-x \leq -y \leq -a}, then {\underline{D^-} F(x) \geq r}. Thus we see that {E_{r,R} \cap (a,b)} is contained inside the union of the intervals {I_n = (a_n,b_n)}. On the other hand, from the first part of Lemma 15 we have

\displaystyle  m( E_{r,R} \cap (a_n,b_n) ) \leq \frac{F(b_n)-F(a_n)}{R}.

But we can rearrange the inequality {G(-a_n) \geq G(-b_n)} as {F(b_n) - F(a_n) \leq r (b_n-a_n)}. From countable additivity, one thus has

\displaystyle  m( E_{r,R} \cap [a,b] ) \leq \frac{r}{R} \sum_n b_n - a_n.

But the {(a_n,b_n)} are disjoint inside {(a,b)}, so from countable additivity again, we have {\sum_n b_n - a_n \leq b-a}, and the claim follows. \Box

Remark 13 Note that if {F} was not assumed to be continuous, then one would lose a factor of {C} here from the second part of Lemma 15, and one would then be unable to prevent {\overline{D^+} F} from being up to {C} times as large as {\underline{D^-} F}. So sometimes, even when all one is seeking is a qualitative result such as differentiability, it is still important to keep track of constants. (But this is the exception rather than the rule: for a large portion of arguments in analysis, the constants are not terribly important.)

This concludes the proof of Theorem 14 in the continuous monotone non-decreasing case. Now we work on removing the continuity hypothesis (which was needed in order to make the rising sun lemma work properly). If we naively try to run the density argument as we did in previous sections, then (for once) the argument does not work very well, as the space of continuous monotone functions is not sufficiently dense in the space of all monotone functions in the relevant sense (which, in this case, is the total variation sense, as this is what is needed to invoke tools such as Lemma 15). To bridge this gap, we have to supplement the continuous monotone functions with another class of monotone functions, known as the jump functions.

Definition 17 (Jump function) A basic jump function {J} is a function of the form

\displaystyle  J(x) := \left\{ \begin{array}{ll} 0 & \hbox{ when } x < x_0 \\ \theta & \hbox{ when } x = x_0 \\ 1 & \hbox{ when } x > x_0 \end{array} \right.

for some real numbers {x_0 \in {\bf R}} and {0 \leq \theta \leq 1}; we call {x_0} the point of discontinuity for {J} and {\theta} the fraction. Observe that such functions are monotone non-decreasing, but have a discontinuity at one point. A jump function is any absolutely convergent combination of basic jump functions, i.e. a function of the form {F = \sum_n c_n J_n}, where {n} ranges over an at most countable set, each {J_n} is a basic jump function, and the {c_n} are positive reals with {\sum_n c_n < \infty}. If there are only finitely many {n} involved, we say that {F} is a piecewise constant jump function.

Thus, for instance, if {q_1, q_2, q_3, \ldots} is any enumeration of the rationals, then {\sum_{n=1}^\infty 2^{-n} 1_{[q_n,+\infty)}} is a jump function.
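These definitions transcribe directly into code; the following sketch (illustrative names, and of course a finite truncation of the possibly countable sum) evaluates basic jump functions and finite jump functions.

```python
def basic_jump(x, x0, theta):
    """Basic jump function J: 0 to the left of x0, theta at x0, and 1 to
    the right of x0."""
    return 0.0 if x < x0 else (theta if x == x0 else 1.0)

def jump_function(x, jumps):
    """Evaluate sum_n c_n J_n at x, where jumps is a finite list of
    (x0, c, theta) triples with each c > 0."""
    return sum(c * basic_jump(x, x0, theta) for (x0, c, theta) in jumps)
```

On toy examples one can already see the decomposition of Lemma 18 below at work: for instance, {F(x) = x + 1_{[0,+\infty)}(x)} has jump part given by a single basic jump at {0} with {c = 1} and {\theta = 1}, and continuous part {x}.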

Clearly, all jump functions are monotone non-decreasing. From the absolute convergence of the {c_n} we see that every jump function is the uniform limit of piecewise constant jump functions, for instance {\sum_{n=1}^\infty c_n J_n} is the uniform limit of {\sum_{n=1}^N c_n J_n}. One consequence of this is that the points of discontinuity of a jump function {\sum_{n=1}^\infty c_n J_n} are precisely those of the individual summands {c_n J_n}, i.e. of the points {x_n} where each {J_n} jumps.

The key fact is that these functions, together with the continuous monotone functions, essentially generate all monotone functions, at least in the bounded case:

Lemma 18 (Continuous-singular decomposition for monotone functions) Let {F: {\bf R} \rightarrow {\bf R}} be a monotone non-decreasing function.

  1. The only discontinuities of {F} are jump discontinuities. More precisely, if {x} is a point where {F} is discontinuous, then the limits {\lim_{y \rightarrow x^-} F(y)} and {\lim_{y \rightarrow x^+} F(y)} both exist, but are unequal, with {\lim_{y \rightarrow x^-} F(y) <\lim_{y \rightarrow x^+} F(y)}.
  2. There are at most countably many discontinuities of {F}.
  3. If {F} is bounded, then {F} can be expressed as the sum of a continuous monotone non-decreasing function {F_c} and a jump function {F_{pp}}.

Remark 14 This decomposition is part of the more general Lebesgue decomposition, which we will discuss later in this course.

Proof: By monotonicity, the limits {F_-(x) := \lim_{y \rightarrow x^-} F(y)} and {F_+(x) := \lim_{y \rightarrow x^+} F(y)} always exist, with {F_-(x) \leq F(x) \leq F_+(x)} for all {x}; and if {F_-(x)} and {F_+(x)} agree, then they agree with {F(x)} as well, and {F} is continuous at {x}. This gives 1.

By 1., whenever there is a discontinuity {x} of {F}, there is at least one rational number {q_x} strictly between {F_-(x)} and {F_+(x)}, and from monotonicity, each rational number can be assigned to at most one discontinuity. This gives 2.

Now we prove 3. Let {A} be the set of discontinuities of {F}, thus {A} is at most countable. For each {x \in A}, we define the jump {c_x := F_+(x) - F_-(x) > 0}, and the fraction {\theta_x := \frac{F(x)-F_-(x)}{F_+(x)-F_-(x)} \in [0,1]}. Thus

\displaystyle  F_+(x) = F_-(x) + c_x \hbox{ and } F(x) = F_-(x) + \theta_x c_x.

Note that {c_x} is the measure of the interval {(F_-(x),F_+(x))}. By monotonicity, these intervals are disjoint; by the boundedness of {F}, their union is bounded. By countable additivity, we thus have {\sum_{x \in A} c_x < \infty}, and so if we let {J_x} be the basic jump function with point of discontinuity {x} and fraction {\theta_x}, then the function

\displaystyle  F_{pp} := \sum_{x \in A} c_x J_x

is a jump function.

As discussed previously, {F_{pp}} is discontinuous only at {A}, and for each {x \in A} one easily checks that

\displaystyle  (F_{pp})_+(x) = (F_{pp})_-(x) + c_x \hbox{ and } F_{pp}(x) = (F_{pp})_-(x) + \theta_x c_x

where {(F_{pp})_-(x) := \lim_{y \rightarrow x^-} F_{pp}(y)}, and {(F_{pp})_+(x) := \lim_{y\rightarrow x^+} F_{pp}(y)}. We thus see that the difference {F_c := F-F_{pp}} is continuous. The only remaining task is to verify that {F_c} is monotone non-decreasing, thus we need

\displaystyle  F_{pp}(b)-F_{pp}(a) \leq F(b)-F(a)

for all {a < b}. But the left-hand side can be rewritten as {\sum_{x \in A \cap [a,b]} c_x}. As each {c_x} is the measure of the interval {(F_-(x), F_+(x))}, and these intervals for {x \in A \cap [a,b]} are disjoint and lie in {(F(a),F(b))}, the claim follows from countable additivity. \Box

Exercise 32 Show that the decomposition of a bounded monotone non-decreasing function {F} into continuous {F_c} and jump components {F_{pp}} given by the above lemma is unique.

Exercise 33 Find a suitable generalisation of the notion of a jump function that allows one to extend the above decomposition to unbounded monotone functions, and then prove this extension. (Hint: the notion to shoot for here is that of a “locally jump function”.)

Now we can finish the proof of Theorem 14. As noted previously, it suffices to prove the claim for monotone non-decreasing functions. As differentiability is a local condition, we can easily reduce to the case of bounded monotone non-decreasing functions, since to test differentiability of a monotone non-decreasing function {F} in any compact interval {[a,b]} we may replace {F} by the bounded monotone non-decreasing function {\max( \min(F, F(b)), F(a))} with no change in the differentiability in {[a,b]} (except perhaps at the endpoints {a,b}, but these form a set of measure zero). As we have already proven the claim for continuous functions, it suffices by Lemma 18 (and linearity of the derivative) to verify the claim for jump functions.

Now, finally, we are able to use the density argument, using the piecewise constant jump functions as the dense subclass, and using the second part of Lemma 15 for the quantitative estimate; fortunately for us, the density argument does not particularly care that there is a loss of a constant factor in this estimate.

For piecewise constant jump functions, the claim is clear (indeed, the derivative exists and is zero outside of finitely many discontinuities). Now we run the density argument. Let {F} be a bounded jump function, and let {\epsilon > 0} and {\lambda > 0} be arbitrary. As every jump function is the uniform limit of piecewise constant jump functions, we can find a piecewise constant jump function {F_\epsilon} such that {|F(x)-F_\epsilon(x)| \leq \epsilon} for all {x}. Indeed, by taking {F_\epsilon} to be a partial sum of the basic jump functions that make up {F}, we can ensure that {F-F_\epsilon} is also a monotone non-decreasing function. Applying the second part of Lemma 15, we have

\displaystyle  m\left( \{ x \in {\bf R}: \overline{D^+} (F-F_\epsilon)(x) \geq \lambda \} \right) \leq \frac{2C\epsilon}{\lambda}

for some absolute constant {C}, and similarly for the other three Dini derivatives. Thus, outside of a set of measure at most {8C\epsilon/\lambda}, all of the Dini derivatives of {F-F_\epsilon} are less than {\lambda}. Since {F_\epsilon} is almost everywhere differentiable, we conclude that outside of a set of measure at most {8C\epsilon/\lambda}, all the Dini derivatives of {F(x)} lie within {\lambda} of {F'_\epsilon(x)}, and in particular are finite and lie within {2\lambda} of each other. Sending {\epsilon} to zero (holding {\lambda} fixed), we conclude that for almost every {x}, the Dini derivatives of {F} are finite and lie within {2\lambda} of each other. If we then send {\lambda} to zero, we see that for almost every {x}, the Dini derivatives of {F} agree with each other and are finite, and the claim follows. This concludes the proof of Theorem 14.

Just as the integration theory of unsigned functions can be used to develop the integration theory of the absolutely convergent functions (see Notes 2), the differentiation theory of monotone functions can be used to develop a parallel differentiation theory for the class of functions of bounded variation:

Definition 19 (Bounded variation) Let {F: {\bf R} \rightarrow {\bf R}} be a function. The total variation {\|F\|_{TV({\bf R})}} (or {\|F\|_{TV}} for short) of {F} is defined to be the supremum

\displaystyle  \|F\|_{TV({\bf R})} := \sup_{x_0 < \ldots < x_n} \sum_{i=1}^n |F(x_i) - F(x_{i-1})|

where the supremum ranges over all finite increasing sequences {x_0,\ldots,x_n} of real numbers with {n \geq 0}; this is a quantity in {[0,+\infty]}. We say that {F} has bounded variation (on {{\bf R}}) if {\|F\|_{TV({\bf R})}} is finite. (In this case, {\|F\|_{TV({\bf R})}} is often written as {\|F\|_{BV({\bf R})}} or just {\|F\|_{BV}}.)

Given any interval {[a,b]}, we define the total variation {\|F\|_{TV([a,b])}} of {F} on {[a,b]} as

\displaystyle  \|F\|_{TV([a,b])} := \sup_{a \leq x_0 < \ldots < x_n \leq b} \sum_{i=1}^n |F(x_i) - F(x_{i-1})|;

thus the definition is the same, but the points {x_0,\ldots,x_n} are restricted to lie in {[a,b]}. Thus for instance {\|F\|_{TV({\bf R})} = \lim_{N \rightarrow \infty} \|F\|_{TV([-N,N])}}. We say that a function {F} has bounded variation on {[a,b]} if {\|F\|_{TV([a,b])}} is finite.
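For readers who like to experiment, the supremum in this definition is easy to approximate numerically: for piecewise monotone functions, a fine uniform partition essentially attains it. Here is a short Python sketch (illustrative only; the test functions are arbitrary choices):

```python
import math

def total_variation(F, a, b, n=100000):
    # Approximate ||F||_{TV([a,b])} using the uniform partition
    # a = x_0 < x_1 < ... < x_n = b; for piecewise monotone F this
    # essentially attains the supremum in the definition.
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(abs(F(xs[i]) - F(xs[i - 1])) for i in range(1, n + 1))

# A monotone function: the sum telescopes, so every partition gives
# the same value |F(b) - F(a)|.
tv_cube = total_variation(lambda x: x**3, -1.0, 1.0)    # close to 2

# sin on [0, 2*pi] rises by 1, falls by 2, then rises by 1 again,
# for a total variation of 4.
tv_sin = total_variation(math.sin, 0.0, 2.0 * math.pi)  # close to 4
```

The monotone example anticipates Exercise 34 below.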

Exercise 34 If {F: {\bf R} \rightarrow {\bf R}} is a monotone function, show that {\|F\|_{TV([a,b])} = |F(b)-F(a)|} for any interval {[a,b]}, and that {F} has bounded variation on {{\bf R}} if and only if it is bounded.

Exercise 35 For any functions {F, G: {\bf R} \rightarrow {\bf R}}, establish the triangle property {\|F+G\|_{TV({\bf R})} \leq \|F\|_{TV({\bf R})} + \|G\|_{TV({\bf R})}} and the homogeneity property {\|cF\|_{TV({\bf R})} = |c| \|F\|_{TV({\bf R})}} for any {c \in {\bf R}}. Also show that {\|F\|_{TV}=0} if and only if {F} is constant.

Exercise 36 If {F: {\bf R} \rightarrow {\bf R}} is a function, show that {\|F\|_{TV([a,b])} + \|F\|_{TV([b,c])} = \|F\|_{TV([a,c])}} whenever {a \leq b \leq c}.

Exercise 37

  1. Show that every function {f: {\bf R} \rightarrow {\bf R}} of bounded variation is bounded, and that the limits {\lim_{x \rightarrow +\infty} f(x)} and {\lim_{x \rightarrow -\infty} f(x)} are well-defined.
  2. Give an example of a bounded, continuous, compactly supported function {f} that is not of bounded variation.

Exercise 38 Let {f: {\bf R} \rightarrow {\bf R}} be an absolutely integrable function, and let {F: {\bf R} \rightarrow {\bf R}} be the indefinite integral {F(x) := \int_{[-\infty,x]} f(y)\ dy}. Show that {F} is of bounded variation, and that {\|F\|_{TV({\bf R})} = \|f\|_{L^1({\bf R})}}. (Hint: the upper bound {\|F\|_{TV({\bf R})} \leq \|f\|_{L^1({\bf R})}} is relatively easy to establish. To obtain the lower bound, use the density argument.)
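The identity in Exercise 38 can be sanity-checked numerically. In the sketch below (the choices are hypothetical: {f = \cos} on {[0,2\pi]} and zero elsewhere, so that the indefinite integral is {\sin} there), a partition approximation of the total variation of {F} is compared with a Riemann-sum approximation of {\|f\|_{L^1}}:

```python
import math

a, b, n = 0.0, 2.0 * math.pi, 100000
f = math.cos                      # integrand; its indefinite integral is sin
h = (b - a) / n

# total variation of F(x) = sin(x) over a fine partition of [a, b]
xs = [a + (b - a) * i / n for i in range(n + 1)]
tv_F = sum(abs(math.sin(xs[i]) - math.sin(xs[i - 1])) for i in range(1, n + 1))

# L^1 norm of f on [a, b] by the midpoint rule
l1_f = h * sum(abs(f(a + (i + 0.5) * h)) for i in range(n))

# both quantities approximate 4, the integral of |cos| over [0, 2*pi]
```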

Much as an absolutely integrable function can be expressed as the difference of its positive and negative parts, a bounded variation function can be expressed as the difference of two bounded monotone functions:

Proposition 20 A function {F: {\bf R} \rightarrow {\bf R}} is of bounded variation if and only if it is the difference of two bounded monotone functions.

Proof: It is clear from Exercises 34, 35 that the difference of two bounded monotone functions is of bounded variation. Conversely, given a function {F} of bounded variation, define the positive variation {F^+: {\bf R} \rightarrow {\bf R}} of {F} by the formula

\displaystyle  F^+(x) := \sup_{x_0 < \ldots < x_n \leq x} \sum_{i=1}^n \max(F(x_i) - F(x_{i-1}),0). \ \ \ \ \ (12)

It is clear from construction that this is a monotone non-decreasing function, taking values between {0} and {\|F\|_{TV({\bf R})}}, and is thus bounded. To conclude the proposition, it suffices (by writing {F = F^+ - (F^+-F)}) to show that {F^+-F} is non-decreasing, or in other words to show that

\displaystyle  F^+(b) \geq F^+(a) + F(b)-F(a).

If {F(b)-F(a)} is negative then this is clear from the monotone non-decreasing nature of {F^+}, so assume that {F(b)-F(a) \geq 0}. But then the claim follows because any sequence of real numbers {x_0 < \ldots < x_n \leq a} can be extended by adding {a} (if not already present) and {b}, thus increasing the sum {\sum_{i=1}^n \max(F(x_i) - F(x_{i-1}),0)} by at least {F(b)-F(a)}. \Box
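The decomposition in the proof can be observed numerically. A brief sketch (with grid sums standing in for the suprema, and {a=0} standing in for {-\infty}; the function {\sin} is an arbitrary illustrative choice):

```python
import math

def pos_neg_variation(F, a, x, n=100000):
    # Grid approximations to the positive and negative variations of F
    # accumulated over [a, x]; for piecewise monotone F a fine partition
    # essentially attains the suprema in the definitions.
    xs = [a + (x - a) * i / n for i in range(n + 1)]
    plus = sum(max(F(xs[i]) - F(xs[i - 1]), 0.0) for i in range(1, n + 1))
    minus = sum(max(F(xs[i - 1]) - F(xs[i]), 0.0) for i in range(1, n + 1))
    return plus, minus

p, m = pos_neg_variation(math.sin, 0.0, 2.0 * math.pi)
# The signed increments telescope exactly, so F(x) - F(a) = F^+ - F^-,
# while F^+ + F^- recovers the total variation (here 4).
recon = p - m
tv = p + m
```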

Exercise 39 Let {F: {\bf R} \rightarrow {\bf R}} be of bounded variation. Define the positive variation {F^+} by (12), and the negative variation {F^-} by

\displaystyle  F^-(x) := \sup_{x_0 < \ldots < x_n \leq x} \sum_{i=1}^n \max(-F(x_{i+1}) + F(x_{i}),0).

Establish the identities

\displaystyle  F(x) = F(-\infty) + F^+(x) - F^-(x),

\displaystyle  \|F\|_{TV([a,b])} = F^+(b)-F^+(a) + F^-(b)-F^-(a),

and

\displaystyle  \|F\|_{TV} = F^+(+\infty) + F^-(+\infty)

for every interval {[a,b]}, where {F(-\infty) := \lim_{x \rightarrow -\infty} F(x)}, {F^+(+\infty) := \lim_{x \rightarrow +\infty} F^+(x)}, and {F^-(+\infty) := \lim_{x \rightarrow +\infty} F^-(x)}. (Hint: The main difficulty comes from the fact that a partition {x_0 < \ldots < x_n \leq x} that is good for {F^+} need not be good for {F^-}, and vice versa. However, this can be fixed by taking a good partition for {F^+} and a good partition for {F^-} and combining them together into a common refinement.)

From Proposition 20 and Theorem 14 we immediately obtain

Corollary 21 (BV differentiation theorem) Every bounded variation function is differentiable almost everywhere.

Exercise 40 Call a function locally of bounded variation if it is of bounded variation on every compact interval {[a,b]}. Show that every function that is locally of bounded variation is differentiable almost everywhere.

Exercise 41 (Lipschitz differentiation theorem, one-dimensional case) A function {f: {\bf R} \rightarrow {\bf R}} is said to be Lipschitz continuous if there exists a constant {C >0} such that {|f(x)-f(y)| \leq C|x-y|} for all {x,y \in {\bf R}}; the smallest {C} with this property is known as the Lipschitz constant of {f}. Show that every Lipschitz continuous function {F} is locally of bounded variation, and hence differentiable almost everywhere. Furthermore, show that the derivative {F'}, when it exists, is bounded in magnitude by the Lipschitz constant of {F}.
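Both conclusions of Exercise 41 can be checked empirically for a concrete Lipschitz function. A sketch using {F(x)=|x|}, which has Lipschitz constant {1} and is not differentiable at the origin (the sample points are arbitrary):

```python
import random

random.seed(0)
F = abs                     # Lipschitz with constant C = 1
C = 1.0
pts = [random.uniform(-1.0, 1.0) for _ in range(500)]

# every difference quotient is bounded by the Lipschitz constant
quotients_bounded = all(
    abs(F(x) - F(y)) <= C * abs(x - y) for x in pts for y in pts
)

# where the derivative exists it equals +1 or -1, so small-scale
# numerical difference quotients are also bounded in magnitude by C
h = 1e-9
derivatives_bounded = all(abs((F(x + h) - F(x)) / h) <= C + 1e-6 for x in pts)
```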

Remark 15 The same result is true in higher dimensions, and is known as the Rademacher differentiation theorem, but we will defer the proof of this theorem to subsequent notes, when we have available the powerful tool of the Fubini-Tonelli theorem, which is particularly useful for deducing higher-dimensional results in analysis from lower-dimensional ones.

Exercise 42 A function {f: {\bf R} \rightarrow {\bf R}} is said to be convex if one has {f((1-t) x + ty) \leq (1-t) f(x) + t f(y)} for all {x < y} and {0 < t < 1}. Show that if {f} is convex, then it is continuous and almost everywhere differentiable, and its derivative {f'} is equal almost everywhere to a monotone non-decreasing function, and so is itself almost everywhere differentiable. (Hint: Drawing the graph of {f}, together with a number of chords and tangent lines, is likely to be very helpful in providing visual intuition.) Thus we see that in some sense, convex functions are “almost everywhere twice differentiable”. Similar claims also hold for concave functions, of course.
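For a smooth convex function one can watch the monotonicity of the derivative directly. A small sketch with {f = \exp} (an arbitrary convex choice; any convex function would do):

```python
import math

f = math.exp                # a convex function
h = 1e-6
xs = [-2.0 + 4.0 * i / 1000 for i in range(1001)]

# forward difference quotients approximate f'; convexity forces the
# slopes of chords (and hence these quotients) to increase with x
slopes = [(f(x + h) - f(x)) / h for x in xs]
slopes_increasing = all(s2 >= s1 for s1, s2 in zip(slopes, slopes[1:]))

# the midpoint inequality f((x+y)/2) <= (f(x)+f(y))/2, the t = 1/2
# special case of the definition of convexity
midpoint_ok = all(
    f((x + y) / 2.0) <= (f(x) + f(y)) / 2.0 for x, y in zip(xs, xs[1:])
)
```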

— 4. The second fundamental theorem of calculus —

We are now finally ready to attack the second fundamental theorem of calculus in the cases where {F} is not assumed to be continuously differentiable. We begin with the case when {F: [a,b] \rightarrow {\bf R}} is monotone non-decreasing. By Theorem 14 (applied after extending {F} to the rest of the real line if needed), {F} is differentiable almost everywhere in {[a,b]}, so {F'} is defined a.e.; from monotonicity we see that {F'} is non-negative whenever it is defined. Also, an easy modification of Exercise 1 shows that {F'} is measurable.

One half of the second fundamental theorem is easy:

Proposition 22 (Upper bound for second fundamental theorem) Let {F: [a,b] \rightarrow {\bf R}} be monotone non-decreasing (so that, as discussed above, {F'} is defined almost everywhere, is unsigned, and is measurable). Then

\displaystyle  \int_{[a,b]} F'(x)\ dx \leq F(b)-F(a).

In particular, {F'} is absolutely integrable.

Proof: It is convenient to extend {F} to all of {{\bf R}} by declaring {F(x) := F(b)} for {x>b} and {F(x) := F(a)} for {x<a}; then {F} is a bounded monotone function on {{\bf R}}, and {F'} vanishes outside of {[a,b]}. As {F} is almost everywhere differentiable, the Newton quotients

\displaystyle  f_n(x) := \frac{F(x+1/n) - F(x)}{1/n}

converge pointwise almost everywhere to {F'}. Applying Fatou’s lemma (Corollary 16 of Notes 3), we conclude that

\displaystyle  \int_{[a,b]} F'(x)\ dx \leq \liminf_{n \rightarrow \infty} \int_{[a,b]} \frac{F(x+1/n) - F(x)}{1/n}\ dx.

The right-hand side can be rearranged as

\displaystyle  \liminf_{n \rightarrow \infty} n (\int_{[a+1/n,b+1/n]} F(y)\ dy - \int_{[a,b]} F(x)\ dx)

which can be rearranged further as

\displaystyle  \liminf_{n \rightarrow \infty} n (\int_{[b,b+1/n]} F(x)\ dx - \int_{[a,a+1/n]} F(x)\ dx).

Since {F} is equal to {F(b)} on the domain of the first integral and is at least {F(a)} on the domain of the second, this expression is at most

\displaystyle  \leq \liminf_{n \rightarrow \infty} n (F(b)/n - F(a)/n) = F(b)-F(a)

and the claim follows. \Box
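To see that the inequality can be strict, consider the typical (and here arbitrarily chosen) example {F(x) = x + 1_{[1/2,+\infty)}(x)} on {[0,1]}: the jump at {1/2} contributes to {F(1)-F(0)} but is invisible to the integral of {F'}. A quick numerical sketch:

```python
def F(x):
    # monotone non-decreasing on [0, 1], with a jump of size 1 at x = 1/2
    return x + (1.0 if x >= 0.5 else 0.0)

def F_prime(x):
    # the classical derivative, defined away from the single jump point
    return 1.0

# midpoint rule for the Lebesgue integral of F' over [0, 1]
n = 10000
h = 1.0 / n
integral = h * sum(F_prime((i + 0.5) * h) for i in range(n))

# F(1) - F(0) = 2, while the integral of F' is 1: the gap is exactly
# the jump size, which the integral cannot detect
gap = (F(1.0) - F(0.0)) - integral
```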

Exercise 43 Show that any function of bounded variation has an (almost everywhere defined) derivative that is absolutely integrable.

In the Lipschitz case, one can do better:

Exercise 44 (Second fundamental theorem for Lipschitz functions) Let {F: [a,b] \rightarrow {\bf R}} be Lipschitz continuous. Show that {\int_{[a,b]} F'(x)\ dx = F(b)-F(a)}. (Hint: Argue as in the proof of Proposition 22, but use the dominated convergence theorem in place of Fatou’s lemma.)

Exercise 45 (Integration by parts formula) Let {F, G: [a,b] \rightarrow {\bf R}} be Lipschitz continuous functions. Show that

\displaystyle \int_{[a,b]} F'(x) G(x)\ dx = F(b) G(b)-F(a) G(a)

\displaystyle - \int_{[a,b]} F(x) G'(x)\ dx.

(Hint: first show that the product of two Lipschitz continuous functions on {[a,b]} is again Lipschitz continuous.)
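A numerical sanity check of the integration by parts formula, using the arbitrarily chosen Lipschitz functions {F(x)=|x-1/2|} and {G(x)=x} on {[0,1]}:

```python
def F(x):  return abs(x - 0.5)              # Lipschitz, kink at 1/2
def Fp(x): return 1.0 if x > 0.5 else -1.0  # F', defined off the kink
def G(x):  return x
def Gp(x): return 1.0

n = 100000
h = 1.0 / n
mids = [(i + 0.5) * h for i in range(n)]

# midpoint rule for the integral of F'G + FG' over [0, 1]
lhs = h * sum(Fp(x) * G(x) + F(x) * Gp(x) for x in mids)
rhs = F(1.0) * G(1.0) - F(0.0) * G(0.0)     # the boundary term F G |_0^1
# both sides equal 1/2
```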

Now we return to the monotone case. Inspired by the Lipschitz case, one may hope to recover equality in Proposition 22 for such functions {F}. However, there is an important obstruction to this, which is that all the variation of {F} may be concentrated in a set of measure zero, and thus undetectable by the Lebesgue integral of {F'}. This is most obvious in the case of a discontinuous monotone function, such as the (appropriately named) Heaviside function {F := 1_{[0,+\infty)}}; it is clear that {F'} vanishes almost everywhere, but {F(b)-F(a)} is not equal to {\int_{[a,b]} F'(x)\ dx} if {b} and {a} lie on opposite sides of the discontinuity at {0}. In fact, the same problem arises for all jump functions:

Exercise 46 Show that if {F} is a jump function, then {F'} vanishes almost everywhere. (Hint: use the density argument, starting from piecewise constant jump functions and using Proposition 22 as the quantitative estimate.)

One may hope that jump functions – in which all the fluctuation is concentrated in a countable set – are the only obstruction to the second fundamental theorem of calculus holding for monotone functions, and that as long as one restricts attention to continuous monotone functions, one can recover the second fundamental theorem. However, this is still not true, because it is possible for all the fluctuation to now be concentrated, not in a countable collection of jump discontinuities, but instead in an uncountable set of zero measure, such as the middle thirds Cantor set (Exercise 10 from Notes 1). This can be illustrated by the key counterexample of the Cantor function, also known as the Devil’s staircase function. The construction of this function is detailed in the exercise below.

Exercise 47 (Cantor function) Define the functions {F_0, F_1, F_2, \ldots: [0,1] \rightarrow {\bf R}} recursively as follows:

  1. Set {F_0(x) := x} for all {x \in [0,1]}.
  2. For each {n=1,2,\ldots} in turn, define

    \displaystyle  F_n(x) := \left\{ \begin{array}{ll} \frac{1}{2} F_{n-1}(3x) & \hbox{ if } x \in [0,1/3]; \\ \frac{1}{2} & \hbox{ if } x \in (1/3,2/3); \\ \frac{1}{2} + \frac{1}{2} F_{n-1}(3x-2) & \hbox{ if } x \in [2/3,1] \end{array} \right.

  1. Graph {F_0}, {F_1}, {F_2}, and {F_3} (preferably on a single graph).
  2. Show that for each {n=0,1,\ldots}, {F_n} is a continuous monotone non-decreasing function with {F_n(0)=0} and {F_n(1)=1}. (Hint: induct on {n}.)
  3. Show that for each {n=0,1,\ldots}, one has {|F_{n+1}(x) - F_n(x)| \leq 2^{-n}} for each {x \in [0,1]}. Conclude that the {F_n} converge uniformly to a limit {F: [0,1] \rightarrow {\bf R}}. This limit is known as the Cantor function.
  4. Show that the Cantor function {F} is continuous and monotone non-decreasing, with {F(0)=0} and {F(1)=1}.
  5. Show that if {x \in [0,1]} lies outside the middle thirds Cantor set (Exercise 10 from Notes 1), then {F} is constant in a neighbourhood of {x}, and in particular {F'(x)=0}. Conclude that {\int_{[0,1]} F'(x)\ dx = 0 \neq 1 = F(1)-F(0)}, so that the second fundamental theorem of calculus fails for this function.
  6. Show that {F( \sum_{n=1}^\infty a_n 3^{-n} ) = \sum_{n=1}^\infty \frac{a_n}{2} 2^{-n}} for any digits {a_1,a_2,\ldots \in \{0,2\}}. Thus the Cantor function, in some sense, converts base three expansions to base two expansions.
  7. Let {I = [ \sum_{i=1}^n \frac{a_i}{3^i}, \sum_{i=1}^n \frac{a_i}{3^i} + \frac{1}{3^n}]} be one of the intervals used in the {n^{th}} cover {I_n} of {C} (see Exercise 10 from Notes 1), thus {n \geq 0} and {a_1,\ldots,a_n \in \{0,2\}}. Show that {I} is an interval of length {3^{-n}}, but {F(I)} is an interval of length {2^{-n}}.
  8. Show that {F} is not differentiable at any element of the Cantor set {C}.

Remark 16 This example shows that the classical derivative {F'(x) := \lim_{h \rightarrow 0; h \neq 0} \frac{F(x+h)-F(x)}{h}} of a function has some defects; it cannot “see” some of the variation of a continuous monotone function such as the Cantor function. Much later in this series, we will rectify this by introducing the concept of the weak derivative of a function, which despite the name, is more able than the strong derivative to detect this type of singular variation behaviour. (We will also encounter the Riemann-Stieltjes integral in later notes, which is another (closely related) way to capture all of the variation of a monotone function, and which is related to the classical derivative via the Lebesgue-Radon-Nikodym theorem.)

In view of this counterexample, we see that we need to add an additional hypothesis to the continuous monotone non-decreasing function {F} before we can recover the second fundamental theorem. One such hypothesis is absolute continuity. To motivate this definition, let us recall two existing definitions:

  • A function {F: {\bf R} \rightarrow {\bf R}} is continuous if, for every {\epsilon > 0} and {x_0 \in {\bf R}}, there exists a {\delta > 0} such that {|F(b)-F(a)| \leq \epsilon} whenever {(a,b)} is an interval of length at most {\delta} that contains {x_0}.
  • A function {F: {\bf R} \rightarrow {\bf R}} is uniformly continuous if, for every {\epsilon > 0}, there exists a {\delta > 0} such that {|F(b)-F(a)| \leq \epsilon} whenever {(a,b)} is an interval of length at most {\delta}.

Definition 23 A function {F: {\bf R} \rightarrow {\bf R}} is said to be absolutely continuous if, for every {\epsilon > 0}, there exists a {\delta > 0} such that {\sum_{j=1}^n |F(b_j)-F(a_j)| \leq \epsilon} whenever {(a_1,b_1),\ldots,(a_n,b_n)} is a finite collection of disjoint intervals of total length {\sum_{j=1}^n b_j - a_j} at most {\delta}.

We define absolute continuity for a function {F: [a,b] \rightarrow {\bf R}} defined on an interval {[a,b]} similarly, with the only difference being that the intervals {(a_j,b_j)} are of course now required to lie in the domain {[a,b]} of {F}.

The following exercise places absolute continuity in relation to other regularity properties:

Exercise 48

  1. Show that every absolutely continuous function is uniformly continuous and therefore continuous.
  2. Show that every absolutely continuous function is of bounded variation on every compact interval {[a,b]}. (Hint: first show this is true for any sufficiently small interval.) In particular (by Exercise 40), absolutely continuous functions are differentiable almost everywhere.
  3. Show that every Lipschitz continuous function is absolutely continuous.
  4. Show that the function {x \mapsto \sqrt{x}} is absolutely continuous, but not Lipschitz continuous, on the interval {[0,1]}.
  5. Show that the Cantor function from Exercise 47 is continuous, monotone, and uniformly continuous, but not absolutely continuous, on {[0,1]}.
  6. If {f: {\bf R} \rightarrow {\bf R}} is absolutely integrable, show that the indefinite integral {F(x) := \int_{[-\infty,x]} f(y)\ dy} is absolutely continuous, and that {F} is differentiable almost everywhere with {F'(x)=f(x)} for almost every {x}.
  7. Show that the sum or product of two absolutely continuous functions on an interval {[a,b]} remains absolutely continuous. What happens if we work on {{\bf R}} instead of on {[a,b]}?
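Items 3 and 4 can be probed numerically: the square root has unbounded difference quotients at the origin (so it is not Lipschitz), yet the oscillation of {\sqrt{x}} over a short interval at the origin, which by concavity is the worst case among interval collections of a given total length, still tends to zero. A sketch:

```python
import math

# not Lipschitz on [0,1]: the difference quotient at 0 is h^{-1/2},
# which blows up as h shrinks
quotients = [math.sqrt(h) / h for h in (1e-2, 1e-4, 1e-6)]
quotients_blow_up = quotients[0] < quotients[1] < quotients[2]

# yet absolutely continuous: by concavity, among collections of
# intervals of total length delta, the oscillation sum is largest for
# the single interval [0, delta], and sqrt(delta) -> 0 with delta
oscillations = [math.sqrt(d) for d in (1e-2, 1e-4, 1e-6)]
oscillations_shrink = oscillations[0] > oscillations[1] > oscillations[2]
```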

Exercise 49

  1. Show that absolutely continuous functions map null sets to null sets, i.e. if {F: {\bf R} \rightarrow{\bf R}} is absolutely continuous and {E} is a null set then {F(E) := \{ F(x): x \in E \}} is also a null set.
  2. Show that the Cantor function does not have this property.

For absolutely continuous functions, we can recover the second fundamental theorem of calculus:

Theorem 24 (Second fundamental theorem for absolutely continuous functions) Let {F: [a,b] \rightarrow {\bf R}} be absolutely continuous. Then {\int_{[a,b]} F'(x)\ dx = F(b)-F(a)}.

Proof: Our main tool here will be Cousin’s theorem (Exercise 23).

By Exercise 43, {F'} is absolutely integrable. By Exercise 8 of Notes 4, {F'} is thus uniformly integrable. Now let {\epsilon > 0}. By Exercise 11 of Notes 4, we can find {\kappa > 0} such that {\int_U |F'(x)|\ dx \leq \epsilon} whenever {U \subset [a,b]} is a measurable set of measure at most {\kappa}. (Here we adopt the convention that {F'} vanishes outside of {[a,b]}.) By making {\kappa} small enough, we may also assume from absolute continuity that {\sum_{j=1}^n |F(b_j)-F(a_j)| \leq \epsilon} whenever {(a_1,b_1),\ldots,(a_n,b_n)} is a finite collection of disjoint intervals of total length {\sum_{j=1}^n b_j - a_j} at most {\kappa}.

Let {E \subset [a,b]} be the set of points {x} where {F} is not differentiable, together with the endpoints {a,b}, as well as the points {x} that are not Lebesgue points of {F'}; thus {E} is a null set. By outer regularity (or the definition of outer measure) we can find an open set {U} containing {E} of measure {m(U) < \kappa}. In particular, {\int_U |F'(x)|\ dx \leq \epsilon}.

Now define a gauge function {\delta: [a,b] \rightarrow (0,+\infty)} as follows.

  • If {x \in E}, we define {\delta(x)>0} to be small enough that the open interval {(x-\delta(x), x+\delta(x))} lies in {U}.
  • If {x \not \in E}, then {F} is differentiable at {x} and {x} is a Lebesgue point of {F'}. We let {\delta(x)>0} be small enough that {|F(y)-F(x)-(y-x)F'(x)| \leq \epsilon |y-x|} holds whenever {|y-x| \leq \delta(x)}, and such that {|\frac{1}{|I|} \int_I F'(y)\ dy - F'(x)| \leq \epsilon} whenever {I} is an interval containing {x} of length at most {\delta(x)}; such a {\delta(x)} exists by the definition of differentiability, and of Lebesgue point. We rewrite these properties using big-O notation as {F(y) - F(x) = (y-x) F'(x) + O(\epsilon |y-x|)} and {\int_I F'(y)\ dy = |I| F'(x) + O(\epsilon |I|)}.

Applying Cousin’s theorem, we can find a partition {a = t_0 < t_1 < \ldots < t_k = b} with {k \geq 1}, together with real numbers {t^*_j \in [t_{j-1},t_j]} such that {t_j - t_{j-1} \leq \delta(t^*_j)} for each {1 \leq j \leq k}.

We can express {F(b)-F(a)} as a telescoping series

\displaystyle  F(b)-F(a) = \sum_{j=1}^k F(t_j) - F(t_{j-1}).

To estimate the size of this sum, let us first consider those {j} for which {t^*_j \in E}. Then, by construction, the intervals {(t_{j-1},t_j)} are disjoint and lie in {U}. By the choice of {\kappa}, we thus have

\displaystyle  \sum_{j: t^*_j \in E} |F(t_j) - F(t_{j-1})| \leq \epsilon

and thus

\displaystyle  \sum_{j: t^*_j \in E} F(t_j) - F(t_{j-1}) = O(\epsilon).

Next, we consider those {j} for which {t^*_j \not \in E}. By construction, for those {j} we have

\displaystyle  F(t_j) - F(t_{j}^*) = (t_j - t_j^*) F'(t^*_j) + O(\epsilon |t_j - t^*_j| )

and

\displaystyle  F(t_j^*) - F(t_{j-1}) = (t_j^* - t_{j-1}) F'(t^*_j) + O(\epsilon |t_j^* - t_{j-1}| )

and thus

\displaystyle  F(t_j) - F(t_{j-1}) = (t_j - t_{j-1}) F'(t^*_j) + O(\epsilon |t_j - t_{j-1}| ).

On the other hand, from construction again we have

\displaystyle  \int_{[t_{j-1},t_j]} F'(y)\ dy = (t_j -t_{j-1}) F'(t^*_j) + O(\epsilon |t_j - t_{j-1}| )

and thus

\displaystyle  F(t_j) - F(t_{j-1}) = \int_{[t_{j-1},t_j]} F'(y)\ dy + O(\epsilon |t_j - t_{j-1}| ).

Summing in {j}, we conclude that

\displaystyle  \sum_{j: t^*_j \not \in E} F(t_j) - F(t_{j-1}) = \int_{S} F'(y)\ dy + O(\epsilon (b-a) ),

where {S} is the union of all the {[t_{j-1},t_j]} with {t^*_j \not \in E}. By construction, this set is contained in {[a,b]} and contains {[a,b] \backslash U}. Since {\int_U |F'(x)|\ dx \leq \epsilon}, we conclude that

\displaystyle  \int_{S} F'(y)\ dy = \int_{[a,b]} F'(y)\ dy + O(\epsilon).

Putting everything together, we conclude that

\displaystyle  F(b)-F(a) = \int_{[a,b]} F'(y)\ dy + O(\epsilon) + O( \epsilon |b-a| ).

Since {\epsilon > 0} was arbitrary, the claim follows. \Box

Combining this result with Exercise 48, we obtain a satisfactory classification of the absolutely continuous functions:

Exercise 50 Show that a function {F: [a,b] \rightarrow {\bf R}} is absolutely continuous if and only if it takes the form {F(x) = \int_{[a,x]} f(y)\ dy + C} for some absolutely integrable {f: [a,b] \rightarrow {\bf R}} and a constant {C}.

Exercise 51 (Compatibility of the strong and weak derivatives in the absolutely continuous case) Let {F: [a,b] \rightarrow {\bf R}} be an absolutely continuous function, and let {\phi: [a,b] \rightarrow {\bf R}} be a continuously differentiable function supported in a compact subset of {(a,b)}. Show that {\int_{[a,b]} F' \phi(x) \ dx = - \int_{[a,b]} F \phi'(x)\ dx}.

Inspecting the proof of Theorem 24, we see that the absolute continuity was used primarily in two ways: firstly, to ensure the almost everywhere existence of {F'}, and secondly, to control an exceptional null set {E}. It turns out that one can achieve the latter control by making a different hypothesis, namely that the function {F} is everywhere differentiable rather than merely almost everywhere differentiable. More precisely, we have

Proposition 25 (Second fundamental theorem of calculus, again) Let {[a,b]} be a compact interval of positive length, and let {F: [a,b] \rightarrow {\bf R}} be an everywhere differentiable function such that {F'} is absolutely integrable. Then the Lebesgue integral {\int_{[a,b]} F'(x)\ dx} of {F'} is equal to {F(b) - F(a)}.

Proof: This will be similar to the proof of Theorem 24, the one main new twist being that we need several open sets {U} instead of just one. Let {E \subset [a,b]} be the set of points {x} which are not Lebesgue points of {F'}, together with the endpoints {a,b}. This is a null set. Let {\epsilon > 0}, and then let {\kappa > 0} be small enough that {\int_U |F'(x)|\ dx \leq \epsilon} whenever {U} is measurable with {m(U) \leq \kappa}. We can also ensure that {\kappa \leq \epsilon}.

For every natural number {m=1,2,\ldots} we can find an open set {U_m} containing {E} of measure {m(U_m) \leq \kappa/4^m}. In particular we see that {m( \bigcup_{m=1}^{\infty} U_m ) \leq \kappa} and thus {\int_{\bigcup_{m=1}^\infty U_m} |F'(x)|\ dx \leq \epsilon}.

Now define a gauge function {\delta: [a,b] \rightarrow (0,+\infty)} as follows.

  • If {x \in E}, we define {\delta(x)>0} to be small enough that the open interval {(x-\delta(x), x+\delta(x))} lies in {U_m}, where {m} is the first natural number such that {|F'(x)| \leq 2^m}, and also small enough that {|F(y)-F(x)-(y-x)F'(x)| \leq \epsilon |y-x|} holds whenever {|y-x| \leq \delta(x)}. (Here we crucially use the everywhere differentiability to ensure that {F'(x)} exists and is finite here.)
  • If {x \not \in E}, we let {\delta(x)>0} be small enough that {|F(y)-F(x)-(y-x)F'(x)| \leq \epsilon |y-x|} holds whenever {|y-x| \leq \delta(x)}, and such that {|\frac{1}{|I|} \int_I F'(y)\ dy - F'(x)| \leq \epsilon} whenever {I} is an interval containing {x} of length at most {\delta(x)}, exactly as in the proof of Theorem 24.

Applying Cousin’s theorem, we can find a partition {a = t_0 < t_1 < \ldots < t_k = b} with {k \geq 1}, together with real numbers {t^*_j \in [t_{j-1},t_j]} such that {t_j - t_{j-1} \leq \delta(t^*_j)} for each {1 \leq j \leq k}.

As before, we express {F(b)-F(a)} as a telescoping series

\displaystyle  F(b)-F(a) = \sum_{j=1}^k F(t_j) - F(t_{j-1}).

For the contributions of those {j} with {t^*_j \not \in E}, we argue exactly as in the proof of Theorem 24 to conclude eventually that

\displaystyle  \sum_{j: t^*_j \not \in E} F(t_j) - F(t_{j-1}) = \int_{S} F'(y)\ dy + O(\epsilon (b-a) ),

where {S} is the union of all the {[t_{j-1},t_j]} with {t^*_j \not \in E}. Since

\displaystyle  \int_{[a,b] \backslash S} |F'(x)|\ dx \leq \int_{\bigcup_{m=1}^\infty U_m} |F'(x)|\ dx \leq \epsilon

we thus have

\displaystyle  \int_{S} F'(y)\ dy = \int_{[a,b]} F'(y)\ dy + O(\epsilon).

Now we turn to those {j} with {t^*_j \in E}. By construction, we have

\displaystyle  F(t_j) - F(t_{j-1}) = (t_j - t_{j-1}) F'(t^*_j) + O(\epsilon |t_j - t_{j-1}| )

for these intervals, and so

\displaystyle  \sum_{j: t^*_j \in E} F(t_j) - F(t_{j-1}) = (\sum_{j: t^*_j \in E} (t_j - t_{j-1}) F'(t^*_j)) + O(\epsilon (b-a) ).

Next, for each such {j} we have {|F'(t^*_j)| \leq 2^m} and {[t_{j-1},t_j] \subset U_m} for some natural number {m=1,2,\ldots}, by construction. By countable additivity, we conclude that

\displaystyle  \sum_{j: t^*_j \in E} (t_j - t_{j-1}) |F'(t^*_j)| \leq \sum_{m=1}^\infty 2^m m(U_m) \leq \sum_{m=1}^\infty 2^m \kappa/4^m = \kappa = O(\epsilon).

Putting all this together, we again have

\displaystyle  F(b)-F(a) = \int_{[a,b]} F'(y)\ dy + O(\epsilon) + O( \epsilon |b-a| ).

Since {\epsilon > 0} was arbitrary, the claim follows. \Box

Remark 17 The above proposition is yet another illustration of how the property of everywhere differentiability is significantly better than that of almost everywhere differentiability. In practice, though, the above proposition is not as useful as one might initially think, because there are very few methods that establish the everywhere differentiability of a function that do not also establish continuous differentiability (or at least Riemann integrability of the derivative), at which point one could just use Theorem 3 instead.

Exercise 52 Let {F: [-1,1] \rightarrow {\bf R}} be the function defined by setting {F(x) := x^2 \sin(\frac{1}{x^3})} when {x} is non-zero, and {F(0) := 0}. Show that {F} is everywhere differentiable, but the derivative {F'} is not absolutely integrable, and so the second fundamental theorem of calculus does not apply in this case (at least if we interpret {\int_{[a,b]} F'(x)\ dx} using the absolutely convergent Lebesgue integral). See however the next exercise.
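One can see the failure numerically. Differentiating formally, {F'(x) = 2x \sin(x^{-3}) - 3x^{-2}\cos(x^{-3})} for {x \neq 0}, while {F'(0)=0} since {|F(h)| \leq h^2}. At the points {x_k = (2\pi k)^{-1/3}} the cosine term equals {1}, so {|F'(x_k)| \approx 3(2\pi k)^{2/3} \rightarrow \infty}, fast enough to destroy absolute integrability near the origin. A sketch:

```python
import math

def F_prime(x):
    # derivative of F(x) = x^2 sin(x^{-3}) away from the origin
    return 2.0 * x * math.sin(x ** -3) - 3.0 * x ** -2 * math.cos(x ** -3)

# difference quotients at 0 equal h*sin(h^{-3}), bounded by h, so F'(0) = 0
quotients = [abs(h * math.sin(h ** -3)) for h in (1e-2, 1e-3, 1e-4)]
quotients_small = max(quotients) <= 1e-2

# |F'| blows up along x_k = (2*pi*k)^{-1/3}, where cos(x_k^{-3}) = 1
vals = [abs(F_prime((2.0 * math.pi * k) ** (-1.0 / 3.0)))
        for k in (10, 1000, 100000)]
blows_up = vals[0] < vals[1] < vals[2]
```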

Exercise 53 (Henstock-Kurzweil integral) Let {[a,b]} be a compact interval of positive length. We say that a function {f: [a,b] \rightarrow {\bf R}} is Henstock-Kurzweil integrable with integral {L \in {\bf R}} if for every {\epsilon > 0} there exists a gauge function {\delta: [a,b] \rightarrow (0,+\infty)} such that one has

\displaystyle  | \sum_{j=1}^k f(t^*_j) (t_j - t_{j-1}) - L | \leq \epsilon

whenever {k \geq 1} and {a = t_0 < t_1 < \ldots < t_k = b} and {t^*_1,\ldots,t^*_k} are such that {t^*_j \in [t_{j-1},t_j]} and {|t_j-t_{j-1}| \leq \delta(t^*_j)} for every {1 \leq j \leq k}. When this occurs, we call {L} the Henstock-Kurzweil integral of {f} and write it as {\int_{[a,b]} f(x)\ dx}.

  • Show that if a function is Henstock-Kurzweil integrable, it has a unique Henstock-Kurzweil integral. (Hint: use Cousin’s theorem.)
  • Show that if a function is Riemann integrable, then it is Henstock-Kurzweil integrable, and the Henstock-Kurzweil integral {\int_{[a,b]} f(x)\ dx} is equal to the Riemann integral {\int_a^b f(x)\ dx}.
  • Show that if a function {f: [a,b] \rightarrow {\bf R}} is everywhere defined, everywhere finite, and absolutely integrable, then it is Henstock-Kurzweil integrable, and the Henstock-Kurzweil integral {\int_{[a,b]} f(x)\ dx} is equal to the Lebesgue integral {\int_{[a,b]} f(x)\ dx}. (Hint: this is a variant of the proof of Theorem 24 or Proposition 25.)
  • Show that if {F: [a,b] \rightarrow {\bf R}} is everywhere differentiable, then {F'} is Henstock-Kurzweil integrable, and the Henstock-Kurzweil integral {\int_{[a,b]} F'(x)\ dx} is equal to {F(b)-F(a)}. (Hint: this is a variant of the proof of Theorem 24 or Proposition 25.)
  • Explain why the above results give an alternate proof of Exercise 4 and of Proposition 25.

Remark 18 As the above exercise indicates, the Henstock-Kurzweil integral (also known as the Denjoy integral or Perron integral) extends the Riemann integral and the absolutely convergent Lebesgue integral, at least as long as one restricts attention to functions that are defined and are finite everywhere (in contrast to the Lebesgue integral, which is willing to tolerate functions being infinite or undefined so long as this only occurs on a null set). It is the notion of integration that is most naturally associated with the fundamental theorem of calculus for everywhere differentiable functions, as seen in part 4 of the above exercise; it can also be used as a unified framework for all the proofs in this section that invoked Cousin’s theorem. The Henstock-Kurzweil integral can also integrate some (highly oscillatory) functions that the Lebesgue integral cannot, such as the derivative {F'} of the function {F} appearing in Exercise 52. This is analogous to how conditional summation {\lim_{N \rightarrow \infty} \sum_{n=1}^N a_n} can sum conditionally convergent series {\sum_{n=1}^\infty a_n}, even if they are not absolutely convergent. However, much as conditional summation is not always well-behaved with respect to rearrangement, the Henstock-Kurzweil integral does not always react well to changes of variable; also, due to its reliance on the order structure of the real line {{\bf R}}, it is difficult to extend the Henstock-Kurzweil integral to more general spaces, such as the Euclidean space {{\bf R}^d}, or to abstract measure spaces.