A fundamental characteristic of many mathematical spaces (e.g. vector spaces, metric spaces, topological spaces, etc.) is their dimension, which measures the “complexity” or “degrees of freedom” inherent in the space. There is no single notion of dimension; instead, there are a variety of different versions of this concept, with different versions being suitable for different classes of mathematical spaces. Typically, a single mathematical object may have several subtly different notions of dimension that one can place on it, which will be related to each other, and which will often agree with each other in “non-pathological” cases, but can also deviate from each other in many other situations. For instance:

  • One can define the dimension of a space {X} by seeing how it compares to some standard reference spaces, such as {{\bf R}^n} or {{\bf C}^n}; one may view a space as having dimension {n} if it can be (locally or globally) identified with a standard {n}-dimensional space. The dimension of a vector space or a manifold can be defined in this fashion.
  • Another way to define dimension of a space {X} is as the largest number of “independent” objects one can place inside that space; this can be used to give an alternate notion of dimension for a vector space, or of an algebraic variety, as well as the closely related notion of the transcendence degree of a field. The concept of VC dimension in machine learning also broadly falls into this category.
  • One can also try to define dimension inductively, for instance declaring a space {X} to be {n}-dimensional if it can be “separated” somehow by an {n-1}-dimensional object; thus an {n}-dimensional object will tend to have “maximal chains” of sub-objects of length {n} (or {n+1}, depending on how one initialises the chain and how one defines length). This can give a notion of dimension for a topological space or a commutative ring.

The notions of dimension as defined above tend to necessarily take values in the natural numbers (or the cardinal numbers); there is no such space as {{\bf R}^{\sqrt{2}}}, for instance, nor can one talk about a basis consisting of {\pi} linearly independent elements, or a chain of maximal ideals of length {e}. There is however a somewhat different approach to the concept of dimension which makes no distinction between integer and non-integer dimensions, and is suitable for studying “rough” sets such as fractals. The starting point is to observe that in the {d}-dimensional space {{\bf R}^d}, the volume {V} of a ball of radius {R} grows like {R^d}, thus giving the following heuristic relationship

\displaystyle  \frac{\log V}{\log R} \approx d \ \ \ \ \ (1)

between volume, scale, and dimension. Formalising this heuristic leads to a number of useful notions of dimension for subsets of {{\bf R}^n} (or more generally, for metric spaces), including (upper and lower) Minkowski dimension (also known as box-counting dimension or Minkowski–Bouligand dimension), and Hausdorff dimension.
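The heuristic (1) can be tested numerically. The sketch below (Python; the helper {\tt box\_count\_disc} is a hypothetical routine of ours, not anything from the text) counts the {\delta}-grid boxes meeting the unit disc in {{\bf R}^2} and watches the ratio {\log N(\delta)/\log(1/\delta)} drift towards the expected dimension {2}; convergence is slow, because the constant {\pi} in the volume hides inside the logarithm.

```python
import math

def box_count_disc(delta):
    """Count delta-boxes [i*d,(i+1)*d) x [j*d,(j+1)*d) meeting the unit disc."""
    m = int(math.ceil(1.0 / delta)) + 1
    count = 0
    for i in range(-m, m):
        for j in range(-m, m):
            # the point of the box closest to the origin, coordinate by coordinate
            x = max(i * delta, min(0.0, (i + 1) * delta))
            y = max(j * delta, min(0.0, (j + 1) * delta))
            if x * x + y * y <= 1.0:
                count += 1
    return count

ratios = []
for k in range(3, 8):
    delta = 2.0 ** (-k)
    N = box_count_disc(delta)
    ratios.append(math.log(N) / math.log(1.0 / delta))
# the ratios decrease towards the dimension 2 of the disc
```

Since {N(\delta) \approx \pi \delta^{-2}}, the ratio behaves like {2 + \log \pi / \log(1/\delta)}, which explains the slow descent.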

[In {K}-theory, it is also convenient to work with “virtual” vector spaces or vector bundles, i.e. formal differences of such spaces, which may therefore have a negative dimension; but as far as I am aware there is no connection between this notion of dimension and the metric ones given here.]

Minkowski dimension can either be defined externally (relating the external volume of {\delta}-neighbourhoods of a set {E} to the scale {\delta}) or internally (relating the internal {\delta}-entropy of {E} to the scale). Hausdorff dimension is defined internally by first introducing the {d}-dimensional Hausdorff measure of a set {E} for any parameter {0 \leq d < \infty}, which generalises the familiar notions of length, area, and volume to non-integer dimensions, or to rough sets, and is of interest in its own right. Hausdorff dimension has a lengthier definition than its Minkowski counterpart, but is more robust with respect to operations such as countable unions, and is generally accepted as the “standard” notion of dimension in metric spaces. We will compare these concepts against each other later in these notes.

One use of the notion of dimension is to create finer distinctions between various types of “small” subsets of spaces such as {{\bf R}^n}, beyond what can be achieved by the usual Lebesgue measure (or Baire category). For instance, a point, line, and plane in {{\bf R}^3} all have zero measure with respect to three-dimensional Lebesgue measure (and are nowhere dense), but of course have different dimensions ({0}, {1}, and {2} respectively). (The Kakeya set conjecture, discussed recently on this blog, offers another good example.) This can be used to clarify the nature of various singularities, such as that arising from non-smooth solutions to PDE; a function which is non-smooth on a set of large Hausdorff dimension can be considered less smooth than one which is non-smooth on a set of small Hausdorff dimension, even if both are smooth almost everywhere. While many properties of the singular set of such a function are worth studying (e.g. their rectifiability), understanding their dimension is often an important starting point. The interplay between these types of concepts is the subject of geometric measure theory.

— 1. Minkowski dimension —

Before we study the more standard notion of Hausdorff dimension, we begin with the more elementary concept of the (upper and lower) Minkowski dimension of a subset {E} of a Euclidean space {{\bf R}^n}.

There are several equivalent ways to approach Minkowski dimension. We begin with an “external” approach, based on a study of the {\delta}-neighbourhoods {E_\delta := \{ x \in {\bf R}^n: \hbox{dist}(x,E) < \delta \}} of {E}, where {\hbox{dist}(x,E) := \inf \{ |x-y|: y \in E \}} and we use the Euclidean metric on {{\bf R}^n}. These are open sets in {{\bf R}^n} and therefore have an {n}-dimensional volume (or Lebesgue measure) {\hbox{vol}^n(E_\delta)}. To avoid divergences, let us assume for now that {E} is bounded, so that the {E_\delta} have finite volume.

Let {0 \leq d \leq n}. Suppose {E} is a bounded portion of a {d}-dimensional subspace, e.g. {E = B^d(0,1) \times \{0\}^{n-d}}, where {B^d(0,1) \subset {\bf R}^d} is the unit ball in {{\bf R}^d} and we identify {{\bf R}^n} with {{\bf R}^d \times {\bf R}^{n-d}} in the usual manner. Then we see from the triangle inequality that

\displaystyle  B^d(0,1) \times B^{n-d}(0,\delta) \subset E_\delta \subset B^d(0,2) \times B^{n-d}(0,\delta)

for all {0 < \delta < 1}, which implies that

\displaystyle  c \delta^{n-d} \leq \hbox{vol}^n( E_\delta ) \leq C \delta^{n-d}

for some constants {c, C> 0} depending only on {n, d}. In particular, we have

\displaystyle  \lim_{\delta \rightarrow 0} n - \frac{ \log \hbox{vol}^n( E_\delta ) }{ \log \delta } = d

(compare with (1)). This motivates our first definition of Minkowski dimension:

Definition 1 Let {E} be a bounded subset of {{\bf R}^n}. The upper Minkowski dimension {\overline{\hbox{dim}}_M(E)} is defined as

\displaystyle  \overline{\hbox{dim}}_M(E) := \limsup_{\delta \rightarrow 0} n - \frac{ \log \hbox{vol}^n( E_\delta ) }{ \log \delta }

and the lower Minkowski dimension {\underline{\hbox{dim}}_M(E)} is defined as

\displaystyle  \underline{\hbox{dim}}_M(E) := \liminf_{\delta \rightarrow 0} n - \frac{ \log \hbox{vol}^n( E_\delta ) }{ \log \delta }.

If the upper and lower Minkowski dimensions match, we refer to {\hbox{dim}_M(E) := \overline{\hbox{dim}}_M(E) = \underline{\hbox{dim}}_M(E)} as the Minkowski dimension of {E}. In particular (with the convention that {\log 0 = -\infty}), the empty set has a Minkowski dimension of {-\infty}.

Unwrapping all the definitions, we have the following equivalent formulation, where {E} is a bounded subset of {{\bf R}^n} and {\alpha \in {\bf R}}:

  • We have {\overline{\hbox{dim}}_M(E) \leq \alpha} iff for every {\epsilon > 0}, one has {\hbox{vol}^n(E_\delta) \leq C_\epsilon \delta^{n-\alpha-\epsilon}} for all sufficiently small {\delta > 0} and some {C_\epsilon > 0}.
  • We have {\underline{\hbox{dim}}_M(E) \leq \alpha} iff for every {\epsilon > 0}, one has {\hbox{vol}^n(E_\delta) \leq C_\epsilon \delta^{n-\alpha-\epsilon}} for arbitrarily small {\delta > 0} and some {C_\epsilon > 0}.
  • We have {\overline{\hbox{dim}}_M(E) \geq \alpha} iff for every {\epsilon > 0}, one has {\hbox{vol}^n(E_\delta) \geq c_\epsilon \delta^{n-\alpha+\epsilon}} for arbitrarily small {\delta > 0} and some {c_\epsilon > 0}.
  • We have {\underline{\hbox{dim}}_M(E) \geq \alpha} iff for every {\epsilon > 0}, one has {\hbox{vol}^n(E_\delta) \geq c_\epsilon \delta^{n-\alpha+\epsilon}} for all sufficiently small {\delta > 0} and some {c_\epsilon > 0}.

Exercise 1

  • (i) Let {C \subset {\bf R}} be the Cantor set consisting of all base {4} strings {\sum_{i=1}^\infty a_i 4^{-i}}, where each {a_i} takes values in {\{0,3\}}. Show that {C} has Minkowski dimension {1/2}. (Hint: approximate any small {\delta} by a negative power of {4}.)
  • (ii) Let {C' \subset {\bf R}} be the Cantor set consisting of all base {4} strings {\sum_{i=1}^\infty a_i 4^{-i}}, where each {a_i} takes values in {\{0,3\}} when {(2k)! \leq i < (2k+1)!} for some integer {k \geq 0}, and {a_i} is arbitrary for the other values of {i}. Show that {C'} has a lower Minkowski dimension of {1/2} and an upper Minkowski dimension of {1}.

Exercise 2 Suppose that {E \subset {\bf R}^n} is a compact set with the property that there exist {0 < r < 1} and an integer {k > 1} such that {E} is equal to the union of {k} disjoint translates of {r \cdot E := \{ rx: x \in E \}}. (This is a special case of a self-similar fractal; the Cantor set is a typical example.) Show that {E} has Minkowski dimension {\frac{\log k}{\log 1/r}}.

If the {k} translates of {r \cdot E} are allowed to overlap, establish the upper bound {\overline{\hbox{dim}}_M(E) \leq \frac{\log k}{\log 1/r}}.
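As a sanity check on the formula {\frac{\log k}{\log 1/r}}, here is a sketch (Python; the helper {\tt cantor\_intervals} is a hypothetical construction of ours) for the classical middle-thirds Cantor set, which is the union of {k=2} disjoint translates of itself scaled by {r=1/3}: counting covering intervals at scale {3^{-\hbox{level}}} reproduces {\log 2/\log 3 \approx 0.6309}.

```python
import math

def cantor_intervals(level):
    """The 2**level intervals of length 3**(-level) covering the Cantor set."""
    intervals = [(0.0, 1.0)]
    for _ in range(level):
        next_level = []
        for (a, b) in intervals:
            third = (b - a) / 3.0
            next_level.append((a, a + third))      # left copy
            next_level.append((b - third, b))      # right copy
        intervals = next_level
    return intervals

level = 12
delta = 3.0 ** (-level)
N = len(cantor_intervals(level))                  # covering number at scale delta
estimate = math.log(N) / math.log(1.0 / delta)
exact = math.log(2) / math.log(3)                 # the formula log k / log(1/r)
```

Here the agreement is exact at every stage, because the covering number at scale {3^{-j}} is exactly {2^j}.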

It is clear that we have the inequalities

\displaystyle  0 \leq \underline{\hbox{dim}}_M(E) \leq \overline{\hbox{dim}}_M(E) \leq n

for non-empty bounded {E \subset {\bf R}^n}, and the monotonicity properties

\displaystyle  \underline{\hbox{dim}}_M(E) \leq \underline{\hbox{dim}}_M(F); \quad \overline{\hbox{dim}}_M(E) \leq \overline{\hbox{dim}}_M(F)

whenever {E \subset F \subset {\bf R}^n} are bounded sets. It is thus natural to extend the definitions of lower and upper Minkowski dimension to unbounded sets {E} by defining

\displaystyle  \underline{\hbox{dim}}_M(E) := \sup_{F \subset E, \hbox{ bounded}} \underline{\hbox{dim}}_M(F) \ \ \ \ \ (2)

and

\displaystyle  \overline{\hbox{dim}}_M(E) := \sup_{F \subset E, \hbox{ bounded}} \overline{\hbox{dim}}_M(F). \ \ \ \ \ (3)

In particular, we easily verify that {d}-dimensional subspaces of {{\bf R}^n} have Minkowski dimension {d}.

Exercise 3 Show that any subset of {{\bf R}^n} with lower Minkowski dimension less than {n} has Lebesgue measure zero. In particular, any subset {E \subset {\bf R}^n} of positive Lebesgue measure must have full Minkowski dimension {\hbox{dim}_M(E) = n}.

Now we turn to other formulations of Minkowski dimension. Given a bounded set {E} and {\delta > 0}, we make the following definitions:

  • {{\mathcal N}_\delta^{\hbox{ext}}(E)} (the external {\delta}-covering number of {E}) is the smallest number of open balls of radius {\delta} with centres in {{\bf R}^n} needed to cover {E}.
  • {{\mathcal N}_\delta^{\hbox{int}}(E)} (the internal {\delta}-covering number of {E}) is the smallest number of open balls of radius {\delta} with centres in {E} needed to cover {E}.
  • {{\mathcal N}_\delta^{\hbox{net}}(E)} (the {\delta}-metric entropy) is the cardinality of the largest {\delta}-net in {E}, i.e. the largest set {x_1,\ldots,x_k} in {E} such that {|x_i-x_j| \geq \delta} for every {1 \leq i < j \leq k}.
  • {{\mathcal N}_\delta^{\hbox{pack}}(E)} (the {\delta}-packing number of {E}) is the largest number of disjoint open balls one can find of radius {\delta} with centres in {E}.

These four quantities are closely related to each other, and to the volumes {\hbox{vol}^n( E_\delta )}:

Exercise 4 For any bounded set {E \subset {\bf R}^n} and any {\delta > 0}, show that

\displaystyle  {\mathcal N}_{2\delta}^{\hbox{net}}(E) = {\mathcal N}_\delta^{\hbox{pack}}(E) \leq \frac{\hbox{vol}^n( E_\delta )}{\hbox{vol}^n(B^n(0,\delta))} \leq 2^n {\mathcal N}_\delta^{\hbox{ext}}(E)

and

\displaystyle  {\mathcal N}_\delta^{\hbox{ext}}(E) \leq {\mathcal N}_\delta^{\hbox{int}}(E) \leq {\mathcal N}_\delta^{\hbox{net}}(E).

As a consequence of this exercise, we see that

\displaystyle  \overline{\hbox{dim}}_M(E) = \limsup_{\delta \rightarrow 0} \frac{ \log {\mathcal N}_\delta^*(E) }{ \log {1/\delta} } \ \ \ \ \ (4)

and

\displaystyle  \underline{\hbox{dim}}_M(E) = \liminf_{\delta \rightarrow 0} \frac{ \log {\mathcal N}_\delta^*(E) }{ \log {1/\delta} } \ \ \ \ \ (5)

where {*} is any of {\hbox{ext}, \hbox{int}, \hbox{net}, \hbox{pack}}.

One can now take the formulae (4), (5) as the definition of Minkowski dimension for bounded sets (and then use (2), (3) to extend to unbounded sets). The formulations (4), (5) for {* = \hbox{int}, \hbox{net}, \hbox{pack}} have the advantage of being intrinsic – they only involve {E}, rather than the ambient space {{\bf R}^n}. For metric spaces, one still has a partial analogue of Exercise 4, namely

\displaystyle  {\mathcal N}_{2\delta}^{\hbox{net}}(E) \leq {\mathcal N}_\delta^{\hbox{pack}}(E) \leq {\mathcal N}_\delta^{\hbox{int}}(E) \leq {\mathcal N}_\delta^{\hbox{net}}(E).

As such, these formulations of Minkowski dimension extend without any difficulty to arbitrary bounded metric spaces {(E,d)} (at least when the covering numbers involved are finite, e.g. when {E} is totally bounded), and then to unbounded metric spaces by (2), (3).
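To illustrate the intrinsic formulation (4) with {* = \hbox{net}}: the sketch below (Python; {\tt net\_size} is a hypothetical greedy construction of ours) builds a {\delta}-separated subset of the unit circle in {{\bf R}^2} by walking once around it, and uses {\log {\mathcal N}_\delta^{\hbox{net}}/\log(1/\delta)} as a dimension estimate; the ratios creep down towards the true dimension {1}, again slowed by the constant {2\pi}.

```python
import math

def net_size(delta, samples=200000):
    """Size of a greedy delta-separated subset of the unit circle in R^2."""
    pts = [(math.cos(2 * math.pi * k / samples),
            math.sin(2 * math.pi * k / samples)) for k in range(samples)]
    chosen = [pts[0]]
    for p in pts[1:]:
        if math.dist(p, chosen[-1]) >= delta:
            chosen.append(p)
    if math.dist(chosen[-1], chosen[0]) < delta:
        chosen.pop()       # keep the set delta-separated at the seam
    return len(chosen)

ratios = [math.log(net_size(2.0 ** (-k))) / math.log(2.0 ** k) for k in (3, 5, 7)]
```

Since the walk visits the circle in angular order, consecutive chosen points (including the wrap-around pair, after the final pop) are at chord distance at least {\delta}, which for small {\delta} forces {\delta}-separation of the whole set.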

Exercise 5 If {\phi: (X,d_X) \rightarrow (Y,d_Y)} is a Lipschitz map between metric spaces, show that {\overline{\hbox{dim}}_M(\phi(E)) \leq \overline{\hbox{dim}}_M(E)} and {\underline{\hbox{dim}}_M(\phi(E)) \leq \underline{\hbox{dim}}_M(E)} for all {E \subset X}. Conclude in particular that the graph {\{ (x,\phi(x)): x \in {\bf R}^d \}} of any Lipschitz function {\phi: {\bf R}^d \rightarrow {\bf R}^{n-d}} has Minkowski dimension {d}, and the graph of any measurable function {\phi: {\bf R}^d \rightarrow {\bf R}^{n-d}} has Minkowski dimension at least {d}.

Note however that the dimension of graphs can become larger than that of the base in the non-Lipschitz case:

Exercise 6 Show that the graph {\{ (x,\sin\frac{1}{x}): 0 < x < 1 \}} has Minkowski dimension {3/2}.
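One can probe Exercise 6 numerically. The sketch below (Python; {\tt box\_count\_sin\_graph} is a hypothetical counting routine of ours) counts the {\delta}-grid boxes met by the graph over each column of width {\delta}, using the critical points of {\sin} to get the exact oscillation on each column; at {\delta = 2^{-12}} the ratio {\log N/\log(1/\delta)} is already in the vicinity of the predicted {3/2}.

```python
import math

def box_count_sin_graph(delta):
    """Count delta-boxes meeting the graph of sin(1/x) over delta <= x < 1."""
    cols = int(1.0 / delta)
    total = 0
    for j in range(1, cols):
        a, b = j * delta, (j + 1) * delta
        lo_u, hi_u = 1.0 / b, 1.0 / a        # range of u = 1/x on this column
        if hi_u - lo_u >= 2.0 * math.pi:
            ymin, ymax = -1.0, 1.0           # the column sees a full oscillation
        else:
            vals = [math.sin(lo_u), math.sin(hi_u)]
            k = math.ceil((lo_u - math.pi / 2.0) / math.pi)
            u = math.pi / 2.0 + k * math.pi  # critical points of sin in the column
            while u <= hi_u:
                vals.append(math.sin(u))
                u += math.pi
            ymin, ymax = min(vals), max(vals)
        total += math.floor(ymax / delta) - math.floor(ymin / delta) + 1
    return total

delta = 2.0 ** (-12)
estimate = math.log(box_count_sin_graph(delta)) / math.log(1.0 / delta)
```

The columns with {x \lesssim \sqrt{\delta}} each contribute a full strip of {\approx 2/\delta} boxes, and this region alone already accounts for the {\delta^{-3/2}} growth.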

Exercise 7 Let {(X,d)} be a bounded metric space. For each {n \geq 0}, let {E_n} be a maximal {2^{-n}}-net of {X} (thus the cardinality of {E_n} is {{\mathcal N}_{2^{-n}}^{net}(X)}). Show that for any continuous function {f: X \rightarrow {\bf R}}, one has the inequality

\displaystyle  \sup_{x \in X} f(x) \leq \sup_{x_0 \in E_0} f(x_0) + \sum_{n=0}^\infty \sup_{x_n \in E_n, x_{n+1} \in E_{n+1}: |x_n-x_{n+1}| \leq \frac{3}{2} 2^{-n}} (f(x_{n+1})-f(x_n)).

(Hint: For any {x \in X}, define {x_n \in E_n} to be the nearest point in {E_n} to {x}, and use a telescoping series.) This inequality (and variants thereof), which replaces a continuous supremum of a function {f(x)} by a sum of discrete suprema of differences {f(x_{n+1})-f(x_n)} of that function, is the basis of the generic chaining technique in probability, used to estimate the supremum of a continuous family of random processes. It is particularly effective when combined with bounds on the metric entropy {{\mathcal N}_{2^{-n}}^{net}(X)}, which of course is closely related to the Minkowski dimension of {X}, and with large deviation bounds on the differences {f(x_{n+1})-f(x_n)}. A good reference for generic chaining is the text by Talagrand.

Exercise 8 If {E \subset {\bf R}^n} and {F \subset {\bf R}^m} are bounded sets, show that

\displaystyle  \underline{\hbox{dim}}_M(E) + \underline{\hbox{dim}}_M(F) \leq \underline{\hbox{dim}}_M(E \times F)

and

\displaystyle  \overline{\hbox{dim}}_M(E \times F) \leq \overline{\hbox{dim}}_M(E) + \overline{\hbox{dim}}_M(F).

Give a counterexample that shows that either of the inequalities here can be strict. (Hint: There are many possible constructions; one of them is a modification of Exercise 1(ii).)

It is easy to see that Minkowski dimension reacts well to finite unions, and more precisely that

\displaystyle  \overline{\hbox{dim}}_M(E \cup F) = \max( \overline{\hbox{dim}}_M(E), \overline{\hbox{dim}}_M(F) )

and

\displaystyle  \underline{\hbox{dim}}_M(E \cup F) = \max( \underline{\hbox{dim}}_M(E), \underline{\hbox{dim}}_M(F) )

for any {E, F \subset {\bf R}^n}. However, it does not respect countable unions. For instance, the rationals {{\bf Q}} have Minkowski dimension {1}, despite being the countable union of points, each of which of course has Minkowski dimension {0}. More generally, it is not difficult to see that any set {E \subset {\bf R}^n} has the same upper and lower Minkowski dimensions as its topological closure {\overline{E}}, since both sets have the same {\delta}-neighbourhoods. Thus we see that the notion of Minkowski dimension misses some of the fine structure of a set {E}, in particular the presence of “holes” within the set. We now turn to the notion of Hausdorff dimension, which rectifies some of these defects.

— 2. Hausdorff measure —

The Hausdorff approach to dimension begins by noting that {d}-dimensional objects in {{\bf R}^n} tend to carry a natural {d}-dimensional measure. For instance, the {1}-dimensional boundary of a polygon has a perimeter, the {0}-dimensional vertices of that polygon have a cardinality, and the polygon itself has an area. So, to define the notion of a set of Hausdorff dimension {d}, we will first define the notion of the {d}-dimensional Hausdorff measure {{\mathcal H}^d(E)} of a set {E}.

To do this, let us quickly review one of the (many) constructions of {n}-dimensional Lebesgue measure, which we are denoting here by {\hbox{vol}^n}. One way to build this measure is to work with half-open boxes {B = \prod_{i=1}^n [a_i,b_i)} in {{\bf R}^n}, which we assign a volume of {|B| := \prod_{i=1}^n (b_i - a_i)}. Given this notion of volume for boxes, we can then define the outer Lebesgue measure {(\hbox{vol}^n)^*(E)} of any set {E \subset {\bf R}^n} by the formula

\displaystyle  (\hbox{vol}^n)^*(E) := \inf \{ \sum_{k=1}^\infty |B_k|: B_k \hbox{ covers } E \}

where the infimum ranges over all at most countable collections {B_1,B_2,\ldots} of boxes that cover {E}. One easily verifies that {(\hbox{vol}^n)^*} is indeed an outer measure (i.e. it is monotone, countably subadditive, and assigns zero to the empty set). We then define a set {A \subset {\bf R}^n} to be {(\hbox{vol}^n)^*}-measurable if one has the additivity property

\displaystyle (\hbox{vol}^n)^*(E) = (\hbox{vol}^n)^*(E \cap A) + (\hbox{vol}^n)^*(E \backslash A)

for all {E \subset {\bf R}^n}. By Carathéodory’s theorem, the space of {(\hbox{vol}^n)^*}-measurable sets is a {\sigma}-algebra, and outer Lebesgue measure is a countably additive measure on this {\sigma}-algebra, which we denote {\hbox{vol}^n}. Furthermore, one easily verifies that every box {B} is {(\hbox{vol}^n)^*}-measurable, which soon implies that every Borel set is also; thus Lebesgue measure is a Borel measure (though it can of course measure some non-Borel sets also).

Finally, one needs to verify that the Lebesgue measure {\hbox{vol}^n(B)} of a box is equal to its classical volume {|B|}; the above construction trivially gives {\hbox{vol}^n(B) \leq |B|} but the converse is not as obvious. This is in fact a rather delicate matter, relying in particular on the completeness of the reals; if one replaced {{\bf R}} by the rationals {{\bf Q}}, for instance, then all the above constructions go through but now boxes have Lebesgue measure zero (why?). See Chapter 1 of Folland’s book, for instance, for details.

Anyway, we can use this construction of Lebesgue measure as a model for building {d}-dimensional Hausdorff measure. Instead of using half-open boxes as the building blocks, we will instead work with the open balls {B(x,r)}. For {d}-dimensional measure, we will assign each ball {B(x,r)} a measure {r^d} (cf. (1)). We can then define the unlimited Hausdorff content {h_{d,\infty}(E)} of a set {E \subset {\bf R}^n} by the formula

\displaystyle  h_{d,\infty}(E) := \inf \{ \sum_{k=1}^\infty r_k^d: B(x_k,r_k) \hbox{ covers } E \}

where the infimum ranges over all at most countable families of balls that cover {E}. (Note that if {E} is compact, then it would suffice to use finite coverings, since every open cover of {E} has a finite subcover. But in general, for non-compact {E} we must allow the use of infinitely many balls.)

As with Lebesgue measure, {h_{d,\infty}} is easily seen to be an outer measure, and one could define the notion of a {h_{d,\infty}}-measurable set on which Carathéodory’s theorem applies to build a countably additive measure. Unfortunately, a key problem arises: once {d} is less than {n}, most sets cease to be {h_{d,\infty}}-measurable! We illustrate this in the one-dimensional case with {n=1} and {d=1/2}, and consider the problem of computing the unlimited Hausdorff content {h_{1/2,\infty}([a,b])}. On the one hand, this content is at most {|\frac{b-a}{2}|^{1/2}}, since one can cover {[a,b]} by the ball of radius {\frac{b-a}{2}+\epsilon} centred at {\frac{a+b}{2}} for any {\epsilon > 0}. On the other hand, the content is also at least {|\frac{b-a}{2}|^{1/2}}. To see this, suppose we cover {[a,b]} by a finite or countable family of balls {B(x_k,r_k)} (one can reduce to the finite case by compactness, though it isn’t necessary to do so here). The total one-dimensional Lebesgue measure {\sum_k 2r_k} of these balls must equal or exceed the Lebesgue measure of the entire interval {|b-a|}, thus

\displaystyle \sum_k r_k \geq \frac{|b-a|}{2}.

From the inequality {\sum_k r_k \leq (\sum_k r_k^{1/2})^2} (which is obvious after expanding the RHS and discarding cross-terms) we see that

\displaystyle  \sum_k r_k^{1/2} \geq (\frac{|b-a|}{2})^{1/2}

and the claim follows.

We now see some serious breakdown of additivity: for instance, the unlimited {1/2}-dimensional content of {[0,2]} is {1}, despite being the disjoint union of {[0,1]} and {(1,2]}, which each have an unlimited content of {1/\sqrt{2}}. In particular, this shows that {[0,1]} (for instance) is not measurable with respect to the unlimited content. The basic problem here is that the most efficient cover of a union such as {[0,1] \cup (1,2]} for the purposes of unlimited {1/2}-dimensional content is not coming from covers of the separate components {[0,1]} and {(1,2]} of that union, but is instead coming from one giant ball that covers {[0,2]} directly.
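The numerology of this breakdown can be checked directly. The sketch below (Python; the greedy random covers are an illustrative device of ours) first records the exact contents computed above, then tries a couple of thousand random covers of {[0,1]} by intervals and observes that none of them achieves a {1/2}-content below the single-ball value {(1/2)^{1/2}}, in line with the lower bound argument.

```python
import random

# exact contents from the text: h_{1/2,infty}([a,b]) = ((b-a)/2)^{1/2}
whole = (2.0 / 2.0) ** 0.5           # [0,2] has content 1
halves = 2 * (1.0 / 2.0) ** 0.5      # [0,1] and (1,2] have content 1/sqrt(2) each
assert whole < halves                # additivity fails: 1 < sqrt(2)

# random covers of [0,1] never beat the single-ball value (1/2)^{1/2}
random.seed(0)
best = float("inf")
for _ in range(2000):
    x, cost = 0.0, 0.0
    while x < 1.0:
        r = random.uniform(0.01, 0.5)
        cost += r ** 0.5             # this ball contributes r^{1/2}
        x += 2.0 * r                 # and covers a stretch of length 2r
    best = min(best, cost)
assert best >= 0.5 ** 0.5 - 1e-9
```

The second assertion is just the inequality {\sum_k r_k^{1/2} \geq (\sum_k r_k)^{1/2}} in action: every cover has {\sum_k 2r_k \geq 1}, hence cost at least {(1/2)^{1/2}}.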

To fix this, we will limit the Hausdorff content by working only with small balls. More precisely, for any {r > 0}, we define the Hausdorff content {h_{d,r}(E)} of a set {E \subset {\bf R}^n} by the formula

\displaystyle  h_{d,r}(E) := \inf \{ \sum_{k=1}^\infty r_k^d: B(x_k,r_k) \hbox{ covers } E; r_k \leq r \}

where the balls {B(x_k,r_k)} are now restricted to be less than or equal to {r} in radius. This quantity is non-increasing in {r} (shrinking {r} restricts the available covers, which can only increase the infimum), so we may define the Hausdorff outer measure {({\mathcal H}^d)^*(E)} as the (possibly infinite) limit

\displaystyle  ({\mathcal H}^d)^*(E) := \lim_{r \rightarrow 0} h_{d,r}(E).

(This is analogous to the Riemann integral approach to volume of sets, covering them by balls, boxes, or rectangles of increasingly small size; this latter approach is also closely connected to the Minkowski dimension concept studied earlier. The key difference between the Lebesgue/Hausdorff approach and the Riemann/Minkowski approach is that in the former approach one allows the balls or boxes to be countable in number, and to be variable in size, whereas in the latter approach the cover is finite and uniform in size.)

Exercise 9 Show that if {d > n}, then {({\mathcal H}^d)^*(E) = 0} for all {E \subset {\bf R}^n}. Thus {d}-dimensional Hausdorff measure is only a non-trivial concept for subsets of {{\bf R}^n} in the regime {0 \leq d \leq n}.

Since each of the {h_{d,r}} are outer measures, {({\mathcal H}^d)^*} is also. But the key advantage of moving to the Hausdorff measure rather than Hausdorff content is that we obtain a lot more additivity. For instance:

Exercise 10 Let {E, F} be subsets of {{\bf R}^n} which have a non-zero separation, i.e. the quantity {\hbox{dist}(E,F) = \inf \{ |x-y|: x \in E, y \in F \}} is strictly positive. Show that {({\mathcal H}^d)^*(E \cup F) = ({\mathcal H}^d)^*(E) + ({\mathcal H}^d)^*(F)}. (Hint: one inequality is easy. For the other, observe that any small ball can intersect {E} or intersect {F}, but not both.)

One consequence of this is that there is a large class of measurable sets:

Proposition 2 Let {d \geq 0}. Then every Borel subset of {{\bf R}^n} is {({\mathcal H}^d)^*}-measurable.

Proof: Since the collection of {({\mathcal H}^d)^*}-measurable sets is a {\sigma}-algebra, it suffices to show the claim for closed sets {A}. (It will be slightly more convenient technically to work with closed sets rather than open ones here.) Thus, we take an arbitrary set {E \subset {\bf R}^n} and seek to show that

\displaystyle  ({\mathcal H}^d)^*(E) = ({\mathcal H}^d)^*(E \cap A) + ({\mathcal H}^d)^*(E \backslash A).

We may assume that {({\mathcal H}^d)^*(E \cap A)} and {({\mathcal H}^d)^*(E \backslash A)} are both finite, since the claim is obvious otherwise from monotonicity.

From Exercise 10 and the fact that {({\mathcal H}^d)^*} is an outer measure, we already have

\displaystyle  ({\mathcal H}^d)^*(E \cap A) + ({\mathcal H}^d)^*(E \backslash A_{1/m}) \leq ({\mathcal H}^d)^*(E) \leq ({\mathcal H}^d)^*(E \cap A) + ({\mathcal H}^d)^*(E \backslash A),

where {A_{1/m}} is the {1/m}-neighbourhood of {A}. So it suffices to show that

\displaystyle  \lim_{m \rightarrow \infty} ({\mathcal H}^d)^*(E \backslash A_{1/m}) = ({\mathcal H}^d)^*(E \backslash A).

For any {m}, we have the decomposition {E \backslash A = (E \backslash A_{1/m}) \cup \bigcup_{l \geq m} F_l}, where {F_l := (E \backslash A_{1/(l+1)}) \cap A_{1/l}}, and thus by countable subadditivity and monotonicity

\displaystyle  ({\mathcal H}^d)^*(E \backslash A_{1/m}) \leq ({\mathcal H}^d)^*(E \backslash A) \leq ({\mathcal H}^d)^*(E \backslash A_{1/m}) + \sum_{l \geq m} ({\mathcal H}^d)^*(F_l)

so it suffices to show that the sum {\sum_{l=1}^\infty ({\mathcal H}^d)^*(F_l)} is absolutely convergent.

Consider the even-indexed sets {F_2, F_4, F_6, \ldots}. These sets are separated from each other, so by many applications of Exercise 10 followed by monotonicity we have

\displaystyle  \sum_{l=1}^L ({\mathcal H}^d)^*(F_{2l}) = ({\mathcal H}^d)^*(\bigcup_{l=1}^L F_{2l}) \leq ({\mathcal H}^d)^*(E \backslash A) < \infty

for all {L}, and thus {\sum_{l=1}^\infty ({\mathcal H}^d)^*(F_{2l})} is absolutely convergent. Similarly for {\sum_{l=1}^\infty ({\mathcal H}^d)^*(F_{2l-1})}, and the claim follows. \Box

On the {({\mathcal H}^d)^*}-measurable sets {E}, we write {{\mathcal H}^d(E)} for {({\mathcal H}^d)^*(E)}, thus {{\mathcal H}^d} is a Borel measure on {{\bf R}^n}. We now study what this measure looks like for various values of {d}. The case {d=0} is easy:

Exercise 11 Show that every subset of {{\bf R}^n} is {({\mathcal H}^0)^*}-measurable, and that {{\mathcal H}^0} is counting measure.

Now we look at the opposite case {d=n}. It is easy to see that any Lebesgue-null set of {{\bf R}^n} has {n}-dimensional Hausdorff measure zero (since it may be covered by balls of arbitrarily small total content). Thus {n}-dimensional Hausdorff measure is absolutely continuous with respect to Lebesgue measure, and we thus have {\frac{d {\mathcal H}^n}{d \hbox{vol}^n} = c} for some locally integrable function {c}. As Hausdorff measure and Lebesgue measure are clearly translation-invariant, {c} must also be translation-invariant and thus constant. We therefore have

\displaystyle  {\mathcal H}^n = c \hbox{vol}^n

for some constant {c \geq 0}.

We now compute what this constant is. If {\omega_n} denotes the volume of the unit ball {B(0,1)}, then we have

\displaystyle  \sum_k r_k^n = \frac{1}{\omega_n} \sum_k \hbox{vol}^n(B(x_k,r_k)) \geq \frac{1}{\omega_n} \hbox{vol}^n( \bigcup_k B(x_k,r_k) )

for any at most countable collection of balls {B(x_k,r_k)}. Taking infima, we conclude that

\displaystyle {\mathcal H}^n \geq \frac{1}{\omega_n} \hbox{vol}^n

and so {c \geq \frac{1}{\omega_n}}.

In the opposite direction, observe from Exercise 4 that given any {0 < r < 1}, one can cover the unit cube {[0,1]^n} by at most {C_n r^{-n}} balls of radius {r}, where {C_n} depends only on {n}; thus

\displaystyle  {\mathcal H}^n( [0,1]^n ) \leq C_n

and so {c \leq C_n}; in particular, {c} is finite.

We can in fact compute {c} explicitly (although knowing that {c} is finite and non-zero already suffices for many applications):

Lemma 3 We have {c = \frac{1}{\omega_n}}, or in other words {{\mathcal H}^n = \frac{1}{\omega_n} \hbox{vol}^n}. (In particular, a ball {B^n(x,r)} has {n}-dimensional Hausdorff measure {r^n}.)

Proof: Let us consider the Hausdorff measure {{\mathcal H}^n( [0,1]^n )} of the unit cube. By definition, for any {\epsilon > 0} one can find an {0 < r < 1/2} such that

\displaystyle  h_{n,r}( [0,1]^n ) \geq {\mathcal H}^n( [0,1]^n ) - \epsilon.

Observe (using Exercise 4) that we can find at least {c_n r^{-n}} disjoint balls {B(x_1,r),\ldots,B(x_k,r)} of radius {r} inside the unit cube. We then observe that

\displaystyle  h_{n,r}( [0,1]^n ) \leq k r^n + {\mathcal H}^n( [0,1]^n \backslash \bigcup_{i=1}^k B(x_i,r) ).

On the other hand,

\displaystyle  {\mathcal H}^n( [0,1]^n \backslash \bigcup_{i=1}^k B(x_i,r) ) = c \hbox{vol}^n( [0,1]^n \backslash \bigcup_{i=1}^k B(x_i,r) ) = c( 1 - k \omega_n r^n );

putting all this together, we obtain

\displaystyle  c = {\mathcal H}^n( [0,1]^n ) \leq k r^n + c (1-k \omega_n r^n) + \epsilon

which rearranges as

\displaystyle  1-c\omega_n \geq -\frac{\epsilon}{k r^n}.

Since {kr^n} is bounded below by {c_n}, we can then send {\epsilon \rightarrow 0} and conclude that {c \leq \frac{1}{\omega_n}}; since we already showed {c \geq \frac{1}{\omega_n}}, the claim follows. \Box

Thus {n}-dimensional Hausdorff measure is an explicit constant multiple of {n}-dimensional Lebesgue measure. The same argument shows that for integers {0 < d < n}, the restriction of {d}-dimensional Hausdorff measure to any {d}-dimensional linear subspace (or affine subspace) {V} is equal to the constant {\frac{1}{\omega_d}} times {d}-dimensional Lebesgue measure on {V}. (This shows, by the way, that {{\mathcal H}^d} is not a {\sigma}-finite measure on {{\bf R}^n} in general, since one can partition {{\bf R}^n} into uncountably many {d}-dimensional affine subspaces. In particular, it is not a Radon measure in general).
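The normalising constant {\omega_n} in Lemma 3 can be made concrete: {\omega_n = \pi^{n/2}/\Gamma(\frac{n}{2}+1)}, so {\omega_1 = 2}, {\omega_2 = \pi}, and {\omega_3 = \frac{4\pi}{3}}. A quick Monte Carlo sketch (Python; the sample sizes and seed are arbitrary choices of ours) confirms the formula:

```python
import math
import random

def omega_mc(n, trials=200000):
    """Monte Carlo estimate of the volume of the unit ball in R^n."""
    hits = sum(1 for _ in range(trials)
               if sum(random.uniform(-1.0, 1.0) ** 2 for _ in range(n)) <= 1.0)
    return (2.0 ** n) * hits / trials   # fraction of the cube [-1,1]^n in the ball

random.seed(1)
for n in (1, 2, 3):
    exact = math.pi ** (n / 2.0) / math.gamma(n / 2.0 + 1.0)
    assert abs(omega_mc(n) - exact) < 0.05
```

For {n=1} the estimate is exactly {2}, since every sample of {[-1,1]} lies in the unit ball; for {n=2,3} the agreement is statistical.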

One can then compute {d}-dimensional Hausdorff measure for other sets than subsets of {d}-dimensional affine subspaces by changes of variable. For instance:

Exercise 12 Let {0 \leq d \leq n} be an integer, let {\Omega} be an open subset of {{\bf R}^d}, and let {\phi: \Omega \rightarrow {\bf R}^n} be a smooth injective map which is non-degenerate in the sense that the derivative {D \phi} (which is a {d \times n} matrix) has full rank at every point of {\Omega}. For any compact subset {E} of {\Omega}, establish the formula

\displaystyle  {\mathcal H}^d( \phi(E) ) = \int_E J\ d{\mathcal H}^d = \frac{1}{\omega_d} \int_E J\ d\hbox{vol}^d

where the Jacobian {J} is the square root of the sum of squares of all the determinants of the {d \times d} minors of the {d \times n} matrix {D \phi}. (Hint: By working locally, one can assume that {\phi} is the graph of some map from {\Omega} to {{\bf R}^{n-d}}, and so can be inverted by the projection function; by working even more locally, one can assume that the Jacobian is within an epsilon of being constant. The image of a small ball in {\Omega} then resembles a small ellipsoid in {\phi(\Omega)}, and conversely the projection of a small ball in {\phi(\Omega)} is a small ellipsoid in {\Omega}. Use some linear algebra and several variable calculus to relate the content of these ellipsoids to the radius of the ball.) It is possible to extend this formula to Lipschitz maps {\phi: \Omega \rightarrow {\bf R}^n} that are not necessarily injective, leading to the area formula

\displaystyle  \int_{\phi(E)} \#(\phi^{-1}(y) \cap E)\ d{\mathcal H}^d(y) = \frac{1}{\omega_d} \int_E J\ d\hbox{vol}^d

for such maps, but we will not prove this formula here.
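The Jacobian {J} of Exercise 12 can be checked numerically in a simple case. The Python sketch below (the helper {\tt jacobian\_J} is ours, and the row/column convention for {D\phi} is immaterial when forming the minors) takes the graph map {\phi(u,v) = (u,v,u^2+v^2)}, whose classical surface-area element is {\sqrt{1+4u^2+4v^2}}, and confirms that the sum-of-squared-minors formula reproduces it.

```python
import itertools
import math

def jacobian_J(Dphi):
    """J = sqrt of the sum of squares of the determinants of all 2 x 2
    minors of an n x 2 derivative matrix Dphi (we specialise to d = 2)."""
    total = 0.0
    for p, q in itertools.combinations(range(len(Dphi)), 2):
        det = Dphi[p][0] * Dphi[q][1] - Dphi[p][1] * Dphi[q][0]
        total += det * det
    return math.sqrt(total)

# phi(u, v) = (u, v, u^2 + v^2) is the graph of f(u, v) = u^2 + v^2, whose
# classical area element is sqrt(1 + |grad f|^2) = sqrt(1 + 4u^2 + 4v^2).
for u, v in [(0.0, 0.0), (0.3, -1.2), (2.0, 0.7)]:
    Dphi = [[1.0, 0.0], [0.0, 1.0], [2 * u, 2 * v]]
    assert abs(jacobian_J(Dphi) - math.sqrt(1 + 4 * u * u + 4 * v * v)) < 1e-12
```

For a graph map the three {2 \times 2} minors are {1}, {2v}, and {-2u}, so the agreement here is an exact algebraic identity rather than a numerical coincidence.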

From this exercise we see that {d}-dimensional Hausdorff measure coincides, up to the normalising constant {\frac{1}{\omega_d}}, with the classical {d}-dimensional notion of surface area; for instance, for a simple smooth curve {\gamma: [a,b] \rightarrow {\bf R}^n} with everywhere non-vanishing derivative, the {{\mathcal H}^1} measure of {\gamma([a,b])} is equal to {\frac{1}{\omega_1}} times its classical length {|\gamma| = \int_a^b |\gamma'(t)|\ dt}. One can also handle a certain amount of singularity (e.g. piecewise smooth non-degenerate curves rather than everywhere smooth non-degenerate curves) by exploiting the countable additivity of {{\mathcal H}^1} measure, or by using the area formula alluded to earlier.

Now we see how Hausdorff measure varies in {d}.

Exercise 13 Let {0 \leq d < d'}, and let {E \subset {\bf R}^n} be a Borel set. Show that if {{\mathcal H}^{d}(E)} is finite, then {{\mathcal H}^{d'}(E)} is zero; equivalently, if {{\mathcal H}^{d'}(E)} is positive, then {{\mathcal H}^{d}(E)} is infinite.

Example 1 Let {0 \leq d \leq n} be integers. The unit ball {B^d(0,1) \subset {\bf R}^d \subset {\bf R}^n} has a {d}-dimensional Hausdorff measure of {1} (by Lemma 3), and so it has zero {d'}-dimensional Hausdorff measure for {d' > d} and infinite {d'}-dimensional Hausdorff measure for {d' < d}.

On the other hand, we know from Exercise 11 that {{\mathcal H}^0(E)} is positive for any non-empty set {E}, and that {{\mathcal H}^d(E) = 0} for every {d > n}. We conclude (from the least upper bound property of the reals) that for any non-empty Borel set {E \subset {\bf R}^n}, there exists a unique number in {[0,n]}, called the Hausdorff dimension {\hbox{dim}_H(E)} of {E}, such that {{\mathcal H}^d(E) = 0} for all {d > \hbox{dim}_H(E)} and {{\mathcal H}^d(E) = \infty} for all {d < \hbox{dim}_H(E)}. Note that at the critical dimension {d = \hbox{dim}_H(E)} itself, we allow {{\mathcal H}^d(E)} to be zero, finite, or infinite, and we shall shortly see that all three possibilities can in fact occur. By convention, we give the empty set a Hausdorff dimension of {-\infty}. One can also assign Hausdorff dimension to non-Borel sets, but we shall not do so in order to avoid some (very minor) technicalities.

Example 2 The unit ball {B^d(0,1) \subset {\bf R}^d \subset {\bf R}^n} has Hausdorff dimension {d}, as does {{\bf R}^d} itself. Note that the former set has finite {d}-dimensional Hausdorff measure, while the latter has an infinite measure. More generally, any {d}-dimensional smooth manifold in {{\bf R}^n} has Hausdorff dimension {d}.

Exercise 14 Show that the graph {\{ (x,\sin\frac{1}{x}): 0 < x < 1 \}} has Hausdorff dimension {1}; compare this with Exercise 6.

It is clear that Hausdorff dimension is monotone: if {E \subset F} are Borel sets, then {\hbox{dim}_H(E) \leq \hbox{dim}_H(F)}. Since Hausdorff measure is countably additive, it is also not hard to see that Hausdorff dimension interacts well with countable unions:

\displaystyle  \hbox{dim}_H( \bigcup_{i=1}^\infty E_i ) = \sup_{1 \leq i < \infty} \hbox{dim}_H( E_i ).

Thus for instance the rationals, being a countable union of {0}-dimensional points, have Hausdorff dimension {0}, in contrast to their Minkowski dimension of {1}. On the other hand, we at least have an inequality between Hausdorff and Minkowski dimension:

Exercise 15 For any Borel set {E \subset {\bf R}^n}, show that {\hbox{dim}_H(E) \leq \underline{\hbox{dim}}_M(E) \leq\overline{\hbox{dim}}_M(E)}. (Hint: use (5). Which of the choices of {*} is most convenient to use here?)

It is instructive to compare Hausdorff dimension and Minkowski dimension as follows.

Exercise 16 Let {E} be a bounded Borel subset of {{\bf R}^n}, and let {d \geq 0}.

  • Show that {\overline{\hbox{dim}}_M(E) \leq d} if and only if, for every {\epsilon > 0} and all sufficiently small {r > 0}, one can cover {E} by finitely many balls {B(x_1,r_1),\ldots,B(x_k,r_k)} of radii {r_i} all equal to {r} such that {\sum_{i=1}^k r_i^{d+\epsilon} \leq \epsilon}.
  • Show that {\underline{\hbox{dim}}_M(E) \leq d} if and only if, for every {\epsilon > 0} and arbitrarily small {r > 0}, one can cover {E} by finitely many balls {B(x_1,r_1),\ldots,B(x_k,r_k)} of radii {r_i} all equal to {r} such that {\sum_{i=1}^k r_i^{d+\epsilon} \leq \epsilon}.
  • Show that {\hbox{dim}_H(E) \leq d} if and only if, for every {\epsilon > 0} and {r > 0}, one can cover {E} by countably many balls {B(x_1,r_1), B(x_2,r_2), \ldots} of radii {r_i} at most {r} such that {\sum_{i=1}^\infty r_i^{d+\epsilon} \leq \epsilon}.
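The covering-sum criteria can be made concrete for a self-similar set of the kind in Exercise 2, which at stage {m} is covered by {k^m} balls of radius {r^m}. The Python sketch below (the function name is ours) checks that these covering sums can be driven to zero exactly when the exponent exceeds {\frac{\log k}{\log 1/r}}.

```python
import math

def covering_sum(k, r, s, m):
    """Sum of (radius)^s over the natural stage-m cover of a self-similar set
    made of k translates of itself scaled by r: k^m balls of radius r^m."""
    return (k * r ** s) ** m

k, r = 2, 1 / 3                          # middle-thirds Cantor set
d_star = math.log(k) / math.log(1 / r)   # critical exponent, about 0.6309

# For s > d_star the covering sums tend to 0 (so H^s(E) = 0, and hence
# dim_H(E) <= d_star); for s < d_star these particular covers are useless.
assert covering_sum(k, r, d_star + 0.1, 200) < 1e-6
assert covering_sum(k, r, d_star - 0.1, 200) > 1e6
```

Note that the failure of these covers for {s < \frac{\log k}{\log 1/r}} does not by itself give a lower bound on the dimension; that requires the measure-theoretic arguments below.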

The previous two exercises give ways to upper-bound the Hausdorff dimension; for instance, we see from Exercise 2 that self-similar fractals {E} of the type in that exercise (i.e. {E} is the union of {k} translates of {r \cdot E}) have Hausdorff dimension at most {\frac{\log k}{\log 1/r}}. To lower-bound the Hausdorff dimension of a set {E}, one convenient method is to find a measure with a certain “dimension” property (analogous to (1)) that assigns a positive mass to {E}:

Exercise 17 Let {d \geq 0}. A Borel measure {\mu} on {{\bf R}^n} is said to be a Frostman measure of dimension at least {d} if it is compactly supported and there exists a constant {C} such that {\mu(B(x,r)) \leq C r^d} for all balls {B(x,r)} of radius {0 < r < 1}. Show that if {\mu} is a Frostman measure of dimension at least {d}, then any Borel set {E} with {\mu(E) > 0} has positive {d}-dimensional Hausdorff content; in particular, {\hbox{dim}_H(E) \geq d}.
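As a numerical illustration (a sketch, not part of the exercise): the standard Cantor measure on the middle-thirds Cantor set satisfies the Frostman bound with {d = \log 2/\log 3}, by the usual argument that an interval of length {\ell} meets at most two of the {2^j} stage-{j} intervals. The Python snippet below (the function {\tt cantor\_mass} and the generous constant {C = 8} are ours) evaluates the measure of intervals via its self-similarity and checks the bound at a few points of the set.

```python
import math

def cantor_mass(a, b, depth=60):
    """mu([a,b] ∩ [0,1]) for the standard Cantor measure mu, using the
    self-similar identity mu(A) = mu(3A)/2 + mu(3A - 2)/2."""
    a, b = max(a, 0.0), min(b, 1.0)
    if b <= a:
        return 0.0
    if a == 0.0 and b == 1.0:
        return 1.0
    if depth == 0:
        return b - a          # crude cutoff at an extremely fine scale
    return 0.5 * cantor_mass(3 * a, 3 * b, depth - 1) \
         + 0.5 * cantor_mass(3 * a - 2, 3 * b - 2, depth - 1)

d = math.log(2) / math.log(3)
for x in (0.0, 1.0, 2 / 3, 1 / 4):        # points of the Cantor set
    for j in range(1, 12):
        r = 3.0 ** -j
        # Frostman condition mu(B(x, r)) <= C r^d with C = 8
        assert cantor_mass(x - r, x + r) <= 8 * r ** d
```

By Exercise 17 (applied to this measure) the Cantor set has Hausdorff dimension at least {\log 2/\log 3}, matching the covering-sum upper bound.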

Note that this gives an alternate way to justify the fact that smooth {d}-dimensional manifolds have Hausdorff dimension {d}, since on the one hand they have Minkowski dimension {d}, and on the other hand they (locally) support a non-trivial measure obeying the bound {\mu(B(x,r)) \leq C r^d}, namely {d}-dimensional surface measure.

Exercise 18 Show that the Cantor set in Exercise 1(i) has Hausdorff dimension {1/2}. More generally, establish the analogue of the first part of Exercise 2 for Hausdorff measure.

Exercise 19 Construct a subset of {{\bf R}} of Hausdorff dimension {1} that has zero Lebesgue measure. (Hint: A modified Cantor set, vaguely reminiscent of Exercise 1(ii), can work here.)
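The hint can be quantified as follows (a sketch with hypothetical parameters, not the only choice): remove from each surviving interval at stage {m} a middle piece occupying a fraction {\frac{1}{m+1}} of its length. The total length then telescopes to {\frac{1}{m+1} \rightarrow 0}, giving Lebesgue measure zero, while the covering exponent {\log(2^m)/\log(1/\ell_m)} tends to {1}; applying Exercise 17 to the natural measure on the limit set upgrades this to Hausdorff dimension {1}. The bookkeeping in Python:

```python
import math

# Stage m keeps 2^m intervals of common length ell_m, removing from each
# stage-(m-1) interval a middle piece occupying a fraction 1/(m+1):
#     ell_m = ell_{m-1} * (1 - 1/(m+1)) / 2,    ell_0 = 1.
ell = 1.0
for m in range(1, 201):
    ell *= (1 - 1 / (m + 1)) / 2

total = 2 ** m * ell                              # total length of stage m
exponent = m * math.log(2) / math.log(1 / ell)    # covering exponent

assert abs(total - 1 / (m + 1)) < 1e-12   # telescoping product: measure -> 0
assert total < 0.01
assert exponent > 0.9                     # exponent approaches 1 as m grows
```

In contrast, removing a fixed fraction at every stage (as in the middle-thirds construction) drives the exponent strictly below {1}.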

A useful fact is that Exercise 17 can be reversed:

Lemma 4 (Frostman’s lemma) Let {d \geq 0}, and let {E \subset {\bf R}^n} be a compact set with {{\mathcal H}^d(E) > 0}. Then there exists a non-trivial Frostman measure of dimension at least {d} supported on {E} (thus {\mu(E) > 0} and {\mu({\bf R}^n \backslash E)=0}).

Proof: Without loss of generality we may place the compact set {E} in the half-open unit cube {[0,1)^n}. It is convenient to work dyadically. For each integer {k \geq 0}, we subdivide {[0,1)^n} into {2^{kn}} half-open cubes {Q_{k,1},\ldots,Q_{k,2^{nk}}} of sidelength {\ell(Q_{k,i}) = 2^{-k}} in the usual manner, and refer to such cubes as dyadic cubes. For each {k} and any {F \subset [0,1)^n}, we can define the dyadic Hausdorff content {h^\Delta_{d,2^{-k}}(F)} to be the quantity

\displaystyle h^\Delta_{d,2^{-k}}(F) := \inf \{ \sum_j \ell( Q_{k_j,i_j} )^d: Q_{k_j,i_j} \hbox{ cover } F; k_j \geq k \}

where the {Q_{k_j,i_j}} range over all at most countable families of dyadic cubes of sidelength at most {2^{-k}} that cover {F}. By covering cubes by balls and vice versa, it is not hard to see that

\displaystyle  c h_{d,C2^{-k}}(F) \leq h^\Delta_{d,2^{-k}}(F) \leq C h_{d,c2^{-k}}(F)

for some constants {c, C > 0} depending only on {d} and {n}. Thus, if we define the dyadic Hausdorff measure

\displaystyle  ({\mathcal H}^d)^\Delta(F) := \lim_{k \rightarrow \infty} h^\Delta_{d,2^{-k}}(F)

then we see that the dyadic and non-dyadic Hausdorff measures are comparable:

\displaystyle  c {\mathcal H}^d(F) \leq ({\mathcal H}^d)^\Delta(F) \leq C {\mathcal H}^d(F).

In particular, the dyadic Hausdorff content of {E} at scale {1} is strictly positive: {\sigma := h^\Delta_{d,2^{0}}(E) > 0}. (If this content vanished, then for every {\epsilon > 0} one could cover {E} by dyadic cubes with {\sum_j \ell(Q_{k_j,i_j})^d < \epsilon}; every cube in such a cover has sidelength less than {\epsilon^{1/d}}, so letting {\epsilon \rightarrow 0} we would conclude that {h^\Delta_{d,2^{-k}}(E) = 0} for every {k}, and hence {({\mathcal H}^d)^\Delta(E) = 0}, a contradiction.)

Given any dyadic cube {Q} of sidelength {\ell(Q) = 2^{-k}}, define the upper Frostman content {\mu^+(Q)} to be the quantity

\displaystyle  \mu^+(Q) := h^\Delta_{d,2^{-k}}(E \cap Q).

Then {\mu^+([0,1)^n) = \sigma}. By covering {E \cap Q} by {Q}, we also have the bound

\displaystyle  \mu^+(Q) \leq \ell(Q)^d.

Finally, by the subadditivity property of Hausdorff content, if we decompose {Q} into {2^n} cubes {Q'} of sidelength {\ell(Q') = 2^{-k-1}}, we have

\displaystyle  \mu^+(Q) \leq \sum_{Q'} \mu^+(Q').

The quantity {\mu^+} behaves like a measure, but is subadditive rather than additive. Nevertheless, one can easily find another quantity {\mu(Q)} to assign to each dyadic cube such that

\displaystyle  \mu([0,1)^n) = \mu^+([0,1)^n)

and

\displaystyle  \mu(Q) \leq \mu^+(Q)

for all dyadic cubes, and such that

\displaystyle  \mu(Q) = \sum_{Q'} \mu(Q')

whenever a dyadic cube is decomposed into {2^n} sub-cubes of half the sidelength. Indeed, such a {\mu} can be constructed by a greedy algorithm starting at the largest cube {[0,1)^n} and working downward; we omit the details. One can then use this “measure” {\mu} to integrate any continuous compactly supported function on {{\bf R}^n} (by approximating such a function by one which is constant on dyadic cubes of a certain scale), and so by the Riesz representation theorem, it extends to a Radon measure {\mu} supported on {[0,1]^n}. (One could also have used the Carathéodory extension theorem at this point.) Since {\mu([0,1)^n) \geq \sigma}, {\mu} is non-trivial; since {\mu(Q) \leq \mu^+(Q) \leq \ell(Q)^d} for all dyadic cubes {Q}, it is not hard to see that {\mu} is a Frostman measure of dimension at least {d}, as desired. \Box
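The greedy step that the proof leaves to the reader is easy to carry out in code. The one-dimensional Python sketch below (all names and the toy setup are ours) uses, as a computable stand-in for {\mu^+}, an exact dyadic-content computation for a stage-{6} approximation to the self-similar set {E = \frac{1}{4} \cdot E \cup (\frac{3}{4} + \frac{1}{4} \cdot E)} (which has similarity dimension {1/2}), and then checks the three required properties: the total mass is preserved, mass splits additively among children, and {\mu(Q) \leq \mu^+(Q) \leq \ell(Q)^d} throughout.

```python
import itertools

# Stage-6 approximation of the set E = E/4 ∪ (3/4 + E/4).
E = [sum(b * (3 / 4) * 4.0 ** -j for j, b in enumerate(bits))
     for bits in itertools.product([0, 1], repeat=6)]
d, FINEST = 0.5, 12   # exponent, and finest dyadic scale 2^-FINEST for covers

def mu_plus(k, i, pts=None):
    """Dyadic-content stand-in for mu^+(Q_{k,i}), Q_{k,i} = [i/2^k,(i+1)/2^k):
    either cover the cube by itself (cost l(Q)^d) or split into its children."""
    pts = [x for x in (E if pts is None else pts)
           if i * 2.0 ** -k <= x < (i + 1) * 2.0 ** -k]
    if not pts:
        return 0.0
    self_cost = (2.0 ** -k) ** d
    if k >= FINEST:
        return self_cost
    return min(self_cost,
               mu_plus(k + 1, 2 * i, pts) + mu_plus(k + 1, 2 * i + 1, pts))

def greedy_measure(depth):
    """Push mass downward: give each left child as much as mu^+ allows, and
    the remainder (legal, by subadditivity of mu^+) to the right child."""
    mu = {(0, 0): mu_plus(0, 0)}
    for k in range(depth):
        for i in range(2 ** k):
            left = min(mu[(k, i)], mu_plus(k + 1, 2 * i))
            mu[(k + 1, 2 * i)] = left
            mu[(k + 1, 2 * i + 1)] = mu[(k, i)] - left
    return mu

depth = 8
mu = greedy_measure(depth)
assert mu[(0, 0)] > 0.3                       # non-trivial total mass
for (k, i), v in mu.items():
    assert v <= (2.0 ** -k) ** d + 1e-9       # mu(Q) <= mu^+(Q) <= l(Q)^d
    if k < depth:
        assert abs(v - mu[(k + 1, 2 * i)] - mu[(k + 1, 2 * i + 1)]) < 1e-9
```

The key invariant is exactly the one used in the proof: since {\mu(Q) \leq \mu^+(Q) \leq \mu^+(Q'_{left}) + \mu^+(Q'_{right})}, whatever mass is not absorbed by the left child fits under the right child's budget.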

The study of Hausdorff dimension is then intimately tied to the study of the dimensional properties of various measures. We give some examples in the next few exercises.

Exercise 20 Let {0 < d \leq n}, and let {E \subset {\bf R}^n} be a compact set. Show that {\hbox{dim}_H(E) \geq d} if and only if, for every {0 < \epsilon < d}, there exists a Borel probability measure {\mu} supported on {E} with

\displaystyle  \int_{{\bf R}^n} \int_{{\bf R}^n} \frac{1}{|x-y|^{d-\epsilon}}\ d\mu(x) d\mu(y) < \infty.

Show that this condition is also equivalent to {\mu} lying in the Sobolev space {H^{-(n-d+\epsilon)/2}({\bf R}^n)}. Thus we see a link here between Hausdorff dimension and Sobolev norms: the lower the dimension of a set, the rougher the measures that it can support, where the Sobolev scale is used to measure roughness.
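The energy integral can be probed numerically for the Cantor measure. In the discrete approximation below (a sketch; the function names are ours), each stage-{m} left endpoint carries mass {2^{-m}}; the resulting {s}-energies stay bounded as {m} grows for {s} below {\log 2/\log 3}, and diverge geometrically (with successive ratios approaching {3^s/2 > 1}) above it, in accordance with the exercise.

```python
import itertools

def cantor_points(m):
    """Left endpoints of the 2^m stage-m intervals of the middle-thirds Cantor set."""
    return [sum(b * 2 * 3.0 ** -(j + 1) for j, b in enumerate(bits))
            for bits in itertools.product([0, 1], repeat=m)]

def discrete_energy(m, s):
    """Discrete s-energy: average of |x - y|^(-s) over ordered pairs of
    distinct stage-m points, each point carrying mass 2^(-m)."""
    pts = cantor_points(m)
    return sum(abs(x - y) ** -s for x, y in itertools.combinations(pts, 2)) \
        * 2.0 / len(pts) ** 2

# Critical exponent log 2 / log 3 is about 0.6309: below it the energies
# stay bounded in m; above it they grow with ratio tending to 3^s / 2.
assert discrete_energy(8, 0.9) / discrete_energy(7, 0.9) > 1.2
assert discrete_energy(8, 0.4) / discrete_energy(7, 0.4) < 1.1
```

The geometric growth for {s > \log 2/\log 3} comes from the self-similar recursion {I_m(s) = \frac{3^s}{2} I_{m-1}(s) + O(1)}, with the {O(1)} term coming from pairs in different halves of the set.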

Exercise 21 Let {E} be a compact subset of {{\bf R}^n}, and let {\mu} be a Borel probability measure supported on {E}. Let {0 \leq d \leq n}.

  • Suppose that for every {\epsilon > 0}, every {0 < \delta < 1/10}, and every subset {E'} of {E} with {\mu(E') \geq \frac{1}{\log^2 (1/\delta)}}, one could establish the bound {{\mathcal N}_\delta^*(E') \geq c_\epsilon (\frac{1}{\delta})^{d-\epsilon}} for {*} equal to any of {\hbox{ext}, \hbox{int}, \hbox{net}, \hbox{pack}} (the exact choice of {*} is irrelevant thanks to Exercise 4). Show that {E} has Hausdorff dimension at least {d}. (Hint: cover {E} by small balls, then round the radius of each ball to the nearest power of {2}. Now use countable additivity and the observation that the sum {\sum_\delta \frac{1}{\log^2(1/\delta)}} is small when {\delta} ranges over sufficiently small powers of {2}.)
  • Show that one can replace {\mu(E') \geq \frac{1}{\log^2 (1/\delta)}} with {\mu(E') \geq \frac{1}{\log \log^2 (1/\delta)}} in the previous statement. (Hint: instead of rounding the radius to the nearest power of {2}, round instead to radii of the form {1/2^{2^{\epsilon n}}} for integers {n}.) This trick of using a hyper-dyadic range of scales rather than a dyadic range of scales is due to Bourgain. The exponent {2} in the double logarithm can be replaced by any other exponent strictly greater than {1}.
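The summability claim in the first hint is elementary but worth recording: over dyadic scales {\delta = 2^{-k}} the masses {\frac{1}{\log^2(1/\delta)}} form the convergent series {\sum_k \frac{1}{(k \log 2)^2}}, whose tails over sufficiently small scales are as small as desired. A quick check (the helper name is ours):

```python
import math

def dyadic_tail(K, terms=10 ** 5):
    """Sum of 1/log^2(1/delta) over dyadic scales delta = 2^-k, k >= K."""
    return sum(1.0 / (k * math.log(2)) ** 2 for k in range(K, K + terms))

assert dyadic_tail(1) < 4        # full series is (pi^2/6)/log(2)^2, about 3.42
assert dyadic_tail(100) < 0.03   # tails over fine scales are small
```

With the hyper-dyadic scales {1/2^{2^{\epsilon n}}} of the second part, the corresponding series {\sum_n \frac{1}{\log\log^2}} converges for the same reason, with even more room to spare.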

This should be compared with the task of lower bounding the lower Minkowski dimension, which only requires control on the entropy of {E} itself, rather than of large subsets {E'} of {E}. The results of this exercise are exploited to establish lower bounds on the Hausdorff dimension of Kakeya sets (and in particular, to conclude such bounds from the Kakeya maximal function conjecture, as discussed in this previous post).

Exercise 22 Let {E \subset {\bf R}^n} be a Borel set, and let {\phi: E \rightarrow {\bf R}^m} be a locally Lipschitz map. Show that {\hbox{dim}_H(\phi(E)) \leq \hbox{dim}_H(E)}, and that if {E} has zero {d}-dimensional Hausdorff measure then so does {\phi(E)}.
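The mechanism behind Exercise 22 is a covering argument: an {L}-Lipschitz map sends each ball {B(x,r)} into {B(\phi(x), Lr)}, so a cover of {E} pushes forward to a cover of {\phi(E)} whose covering sums {\sum (L r_i)^d} grow only by the factor {L^d}. A toy verification in Python (the particular map and net are our choices):

```python
import math

def phi(x):
    """A concrete Lipschitz map R -> R^2, with constant L = sqrt(2)."""
    return (x, math.sin(x))

L = math.sqrt(2)
r = 0.05
E = [k / 100 for k in range(101)]        # sample points of a set E in [0, 1]
centers = [j * r for j in range(21)]     # centers of an r-ball cover of [0, 1]

for x in E:
    c = min(centers, key=lambda c: abs(x - c))  # a cover ball containing x
    assert abs(x - c) <= r
    fx, fc = phi(x), phi(c)
    # phi maps B(c, r) into B(phi(c), L*r), so the pushed-forward balls
    # cover phi(E); covering sums grow by at most the factor L^d.
    assert math.hypot(fx[0] - fc[0], fx[1] - fc[1]) <= L * r + 1e-12
```

Since Hausdorff measure and dimension are defined through arbitrarily fine covers, the factor {L^d} is harmless, which is why Lipschitz maps cannot raise Hausdorff dimension.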

Exercise 23 Let {\phi: {\bf R}^n \rightarrow {\bf R}} be a smooth function, and let {g: {\bf R}^n \rightarrow {\bf R}} be a test function such that {|\nabla \phi| > 0} on the support of {g}. Establish the co-area formula

\displaystyle  \int_{{\bf R}^n} g(x) |\nabla \phi(x)|\ dx = \int_{\bf R} (\int_{\phi^{-1}(t)} g(x)\ d{\mathcal H}^{n-1}(x))\ dt. \ \ \ \ \ (6)

(Hint: Subdivide the support of {g} into small pieces, and then apply a change of variables to make {\phi} linear, e.g. {\phi(x) = x_1}.) This formula is in fact valid for all absolutely integrable {g} and Lipschitz {\phi}, but is difficult to prove at this level of generality, requiring a version of Sard’s theorem.
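The {n=1} case of (6) is easy to check numerically: there {{\mathcal H}^0} is counting measure, and for {\phi(x) = x^2} with a bump {g} supported in {(1,2)} (so that {\phi' = 2x \neq 0} on the support, and {\phi^{-1}(t) = \{\sqrt{t}\}} there), both sides become ordinary integrals. A sketch, with our choices of {g} and {\phi}; a midpoint rule evaluates both sides to high accuracy:

```python
import math

def g(x):
    """A smooth bump supported in (1, 2)."""
    return math.exp(-1.0 / ((x - 1.0) * (2.0 - x))) if 1.0 < x < 2.0 else 0.0

N = 20000
# LHS of the coarea formula: integral of g(x) |phi'(x)| dx with phi(x) = x^2.
lhs = sum(g(1 + (i + 0.5) / N) * 2 * (1 + (i + 0.5) / N) for i in range(N)) / N
# RHS: integral over t in (1, 4) of the sum of g over phi^{-1}(t) = {sqrt(t)}.
rhs = sum(g(math.sqrt(1 + 3 * (i + 0.5) / N)) for i in range(N)) * 3 / N

assert lhs > 1e-3                 # the common value is genuinely non-zero
assert abs(lhs - rhs) < 1e-6      # the two sides agree
```

Of course, in this injective one-dimensional setting the identity is just the change of variables {t = \phi(x)}; the content of (6) is that the same bookkeeping persists for level sets of codimension one in higher dimensions.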

The coarea formula (6) can be used to link geometric inequalities to analytic ones. For instance, the sharp isoperimetric inequality

\displaystyle  \hbox{vol}^n(\Omega)^{\frac{n-1}{n}} \leq \frac{1}{n \omega_n^{1/n}} {\mathcal H}^{n-1}(\partial \Omega),

valid for bounded open sets {\Omega} in {{\bf R}^n}, can be combined with the coarea formula (with {g := 1}) to give the sharp Sobolev inequality

\displaystyle  \| \phi\|_{L^{\frac{n}{n-1}}({\bf R}^n)} \leq \frac{1}{n \omega_n^{1/n}} \int_{{\bf R}^n} |\nabla \phi(x)|\ dx

for any test function {\phi}, the main point being that {\phi^{-1}(t) \cup \phi^{-1}(-t)} is the boundary of {\{ |\phi| \geq t \}} (one also needs to do some manipulations relating the volume of those level sets to {\| \phi\|_{L^{\frac{n}{n-1}}({\bf R}^n)}}). We omit the details.

Further discussion of Hausdorff dimension can be found in the books of Falconer, of Mattila and of Wolff, as well as in many other places.