
If $f: {\bf R}^d \rightarrow {\bf C}$ is a locally integrable function, we define the Hardy-Littlewood maximal function $Mf: {\bf R}^d \rightarrow {\bf R}^+$ by the formula

$$ Mf(x) := \sup_{r > 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\ dy,$$

where $B(x,r)$ is the ball of radius $r$ centred at $x$, and $|E|$ denotes the measure of a set $E$. The *Hardy-Littlewood maximal inequality* asserts that

$$ |\{ x \in {\bf R}^d: Mf(x) \geq \lambda \}| \leq \frac{C_d}{\lambda} \|f\|_{L^1({\bf R}^d)} \ \ \ \ \ (1)$$

for all $f \in L^1({\bf R}^d)$, all $\lambda > 0$, and some constant $C_d > 0$ depending only on $d$. By a standard density argument, this implies in particular that we have the *Lebesgue differentiation theorem*

$$ \lim_{r \rightarrow 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} f(y)\ dy = f(x)$$

for all $f \in L^1({\bf R}^d)$ and almost every $x \in {\bf R}^d$. See for instance my lecture notes on this topic.

By combining the Hardy-Littlewood maximal inequality with the Marcinkiewicz interpolation theorem (and the trivial inequality $\|Mf\|_{L^\infty({\bf R}^d)} \leq \|f\|_{L^\infty({\bf R}^d)}$) we see that

$$ \| Mf \|_{L^p({\bf R}^d)} \leq C_{d,p} \|f\|_{L^p({\bf R}^d)} \ \ \ \ \ (2)$$

for all $p > 1$ and $f \in L^p({\bf R}^d)$, and some constant $C_{d,p}$ depending on $d$ and $p$.

The exact dependence of $C_{d,p}$ on $d$ and $p$ is still not completely understood. The standard Vitali-type covering argument used to establish (1) has an exponential dependence on dimension, giving a constant $C_d$ of the form $C^d$ for some absolute constant $C > 1$. Inserting this into the Marcinkiewicz theorem, one obtains a constant $C_{d,p}$ of the form $O( \frac{C^{d/p}}{p-1} )$ for some $C > 1$ (and taking $p$ bounded away from infinity, for simplicity). The dependence on $p$ is about right, but the dependence on $d$ should not be exponential.
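For the record, here is the standard real-interpolation computation behind this claim, written out in display form (a textbook argument supplied for completeness, not taken from the original post):

```latex
% Split f at height \lambda/2; by sublinearity of M, only the large part of f
% can make Mf exceed \lambda, so the weak (1,1) bound gives
\left|\{ Mf > \lambda \}\right| \leq \frac{2 C_d}{\lambda} \int_{|f| > \lambda/2} |f(y)|\ dy.
% Integrating the distribution function and exchanging the order of integration:
\|Mf\|_{L^p({\bf R}^d)}^p = p \int_0^\infty \lambda^{p-1} \left|\{ Mf > \lambda \}\right|\ d\lambda
 \leq 2 p C_d \int_{{\bf R}^d} |f(y)| \int_0^{2|f(y)|} \lambda^{p-2}\ d\lambda\ dy
 = \frac{2^p p}{p-1} C_d \|f\|_{L^p({\bf R}^d)}^p,
% so that C_{d,p} \leq (2^p p/(p-1))^{1/p} C_d^{1/p} = O( C^{d/p} / (p-1) )
% for p bounded away from infinity, when C_d = C^d.
```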

In 1982, Stein gave an elegant argument (with full details appearing in a subsequent paper of Stein and Strömberg), based on the Calderón-Zygmund method of rotations, to eliminate the dependence of $C_{d,p}$ on $d$:

Theorem 1 (Stein-Strömberg) For any $p > 1$, one can take $C_{d,p} = C_p$ for some constant $C_p$ depending only on $p$.

The argument is based on an earlier bound of Stein from 1976 on the *spherical maximal function*

$$ M_S f(x) := \sup_{r > 0} A_r |f|(x),$$

where $A_r$ are the spherical averaging operators

$$ A_r f(x) := \int_{S^{d-1}} f(x + r\omega)\ d\sigma^{d-1}(\omega)$$

and $d\sigma^{d-1}$ is normalised surface measure on the sphere $S^{d-1}$. Because this is an uncountable supremum, and the averaging operators $A_r$ do not have good continuity properties in $r$, it is not *a priori* obvious that $M_S f$ is even a measurable function for, say, locally integrable $f$; but we can avoid this technical issue, at least initially, by restricting attention to continuous functions $f$. The Stein maximal theorem for the spherical maximal function then asserts that if $d \geq 3$ and $p > \frac{d}{d-1}$, then we have

$$ \| M_S f \|_{L^p({\bf R}^d)} \leq C_{d,p} \|f\|_{L^p({\bf R}^d)} \ \ \ \ \ (3)$$

for all (continuous) $f \in L^p({\bf R}^d)$. We will sketch a proof of this theorem below the fold. (Among other things, one can use this bound to show the pointwise convergence $\lim_{r \rightarrow 0} A_r f(x) = f(x)$ of the spherical averages for almost every $x$ and any $f \in L^p({\bf R}^d)$ when $d \geq 3$ and $p > \frac{d}{d-1}$, although we will not focus on this application here.)

The condition $p > \frac{d}{d-1}$ can be seen to be necessary as follows. Take $f$ to be any fixed bump function. A brief calculation then shows that $M_S f(x)$ decays like $|x|^{1-d}$ as $x \rightarrow \infty$, and hence $M_S f$ does not lie in $L^p({\bf R}^d)$ unless $p > \frac{d}{d-1}$. By taking $f$ to be a rescaled bump function supported on a small ball, one can show that the condition is necessary even if we replace ${\bf R}^d$ with a compact region (and similarly restrict the radius parameter $r$ to be bounded). The condition $d \geq 3$ however is not quite necessary; the result is also true when $d = 2$, but this turned out to be a more difficult result, obtained first by Bourgain, with a simplified proof (based on the local smoothing properties of the wave equation) later given by Mockenhaupt-Seeger-Sogge.

The Hardy-Littlewood maximal operator $Mf$, which involves averaging over balls, is clearly related to the spherical maximal operator $M_S f$, which averages over spheres. Indeed, by using polar co-ordinates, one easily verifies the pointwise inequality

$$ Mf(x) \leq M_S f(x)$$

for any (continuous) $f$, which intuitively reflects the fact that one can think of a ball as an average of spheres. Thus, we see that the spherical maximal inequality (3) implies the Hardy-Littlewood maximal inequality (2) with the same constant $C_{d,p}$. (This implication is initially only valid for continuous functions, but one can then extend the inequality (2) to the rest of $L^p({\bf R}^d)$ by a standard limiting argument.)
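The polar co-ordinates verification can be written out as follows (a standard computation under the normalisations above, supplied for completeness):

```latex
% Polar coordinates: with d\sigma^{d-1} normalised and |B(x,r)| = c_d r^d,
% the sphere of radius t contributes surface area d c_d t^{d-1}, so
\frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\ dy
 = \frac{d}{r^d} \int_0^r A_t |f|(x)\, t^{d-1}\ dt
 \leq \left( \frac{d}{r^d} \int_0^r t^{d-1}\ dt \right) \sup_{t > 0} A_t |f|(x)
 = M_S f(x);
% taking suprema over all r > 0 gives the pointwise bound Mf \leq M_S f.
```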

At first glance, this observation does not immediately establish Theorem 1 for two reasons. Firstly, Stein’s spherical maximal theorem is restricted to the case when $d \geq 3$ and $p > \frac{d}{d-1}$; and secondly, the constant $C_{d,p}$ in that theorem still depends on dimension $d$. The first objection can be easily disposed of, for if $p > 1$, then the hypotheses $d \geq 3$ and $p > \frac{d}{d-1}$ will automatically be satisfied for $d$ sufficiently large (depending on $p$); note that the case when $d$ is bounded (with a bound depending on $p$) is already handled by the classical maximal inequality (2).

We still have to deal with the second objection, namely that the constant $C_{d,p}$ in (3) depends on $d$. However, here we can use the method of rotations to show that the constants $C_{d,p}$ can be taken to be non-increasing (and hence bounded) in $d$. The idea is to view high-dimensional spheres as an average of rotated low-dimensional spheres. We illustrate this with a demonstration that $C_{d+1,p} \leq C_{d,p}$, in the sense that any bound of the form

$$ \| M_S f \|_{L^p({\bf R}^d)} \leq C_{d,p} \|f\|_{L^p({\bf R}^d)} \ \ \ \ \ (4)$$

for the $d$-dimensional spherical maximal function, implies the same bound

$$ \| M_S f \|_{L^p({\bf R}^{d+1})} \leq C_{d,p} \|f\|_{L^p({\bf R}^{d+1})} \ \ \ \ \ (5)$$

for the $(d+1)$-dimensional spherical maximal function, with exactly the same constant $C_{d,p}$. For any direction $\omega_0 \in S^d \subset {\bf R}^{d+1}$, consider the averaging operators

$$ M_S^{\omega_0} f(x) := \sup_{r > 0} A_r^{\omega_0} |f|(x)$$

for any continuous $f: {\bf R}^{d+1} \rightarrow {\bf C}$, where

$$ A_r^{\omega_0} f(x) := \int_{S^{d-1}} f( x + r U_{\omega_0} \omega )\ d\sigma^{d-1}(\omega)$$

where $U_{\omega_0}$ is some orthogonal transformation mapping the sphere $S^{d-1}$ to the sphere $S^{d-1}_{\omega_0} := \{ \omega \in S^d: \omega \perp \omega_0 \}$; the exact choice of orthogonal transformation is irrelevant due to the rotation-invariance of surface measure $d\sigma^{d-1}$ on the sphere $S^{d-1}$. A simple application of Fubini’s theorem (after first rotating $\omega_0$ to be, say, the standard unit vector $e_{d+1}$) using (4) then shows that

$$ \| M_S^{\omega_0} f \|_{L^p({\bf R}^{d+1})} \leq C_{d,p} \|f\|_{L^p({\bf R}^{d+1})} \ \ \ \ \ (6)$$

uniformly in $\omega_0$. On the other hand, by viewing the $d$-dimensional sphere $S^d$ as an average of the rotated spheres $S^{d-1}_{\omega_0}$, we have the identity

$$ A_r f(x) = \int_{S^d} A_r^{\omega_0} f(x)\ d\sigma^d(\omega_0);$$

indeed, one can deduce this from the uniqueness of Haar measure by noting that both the left-hand side and right-hand side are invariant means of $f$ on the sphere $\{ y \in {\bf R}^{d+1}: |y-x| = r \}$. This implies that

$$ M_S f(x) \leq \int_{S^d} M_S^{\omega_0} f(x)\ d\sigma^d(\omega_0)$$

and thus by Minkowski’s inequality for integrals, we may deduce (5) from (6). $\Box$
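For reference, the Minkowski step at the end of the argument can be displayed as follows (my own write-out of the standard inequality, in the notation used above):

```latex
% Minkowski's inequality for integrals applied to the pointwise bound
% M_S f \leq \int_{S^d} M_S^{\omega_0} f \, d\sigma^d(\omega_0):
\| M_S f \|_{L^p({\bf R}^{d+1})}
 \leq \Big\| \int_{S^d} M_S^{\omega_0} f\ d\sigma^d(\omega_0) \Big\|_{L^p({\bf R}^{d+1})}
 \leq \int_{S^d} \| M_S^{\omega_0} f \|_{L^p({\bf R}^{d+1})}\ d\sigma^d(\omega_0)
 \leq C_{d,p} \| f \|_{L^p({\bf R}^{d+1})},
% which is exactly (5), the constant C_{d,p} coming uniformly from (6).
```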

Remark 1 Unfortunately, the method of rotations does not work to show that the constant $C_d$ for the weak $(1,1)$ inequality (1) is independent of dimension, as the weak $L^1$ quasinorm $\| \cdot \|_{L^{1,\infty}}$ is not a genuine norm and does not obey the Minkowski inequality for integrals. Indeed, the question of whether $C_d$ in (1) can be taken to be independent of dimension remains open. The best known positive result is due to Stein and Strömberg, who showed that one can take $C_d = C d \log d$ for some absolute constant $C$, by comparing the Hardy-Littlewood maximal function with the heat kernel maximal function

$$ \sup_{t > 0} e^{t\Delta} |f|(x).$$

The abstract semigroup maximal inequality of Dunford and Schwartz (discussed for instance in these lecture notes of mine) shows that the heat kernel maximal function is of weak-type $(1,1)$ with a constant of $O(1)$, and this can be used, together with a comparison argument, to give the Stein-Strömberg bound. In the converse direction, it is a recent result of Aldaz that if one replaces the balls $B(x,r)$ with cubes, then the weak $(1,1)$ constant $C_d$ must go to infinity as $d \rightarrow \infty$.

Let $[a,b]$ be a compact interval of positive length (thus $-\infty < a < b < \infty$). Recall that a function $F: [a,b] \rightarrow {\bf R}$ is said to be *differentiable* at a point $x \in [a,b]$ if the limit

$$ F'(x) := \lim_{y \rightarrow x; y \in [a,b] \backslash \{x\}} \frac{F(y) - F(x)}{y - x} \ \ \ \ \ (1)$$

exists. In that case, we call $F'(x)$ the *strong derivative*, *classical derivative*, or just *derivative* for short, of $F$ at $x$. We say that $F$ is *everywhere differentiable*, or differentiable for short, if it is differentiable at all points $x \in [a,b]$, and *differentiable almost everywhere* if it is differentiable at almost every point $x \in [a,b]$. If $F$ is differentiable everywhere and its derivative $F'$ is continuous, then we say that $F$ is *continuously differentiable*.

Remark 1 Much later in this sequence, when we cover the theory of distributions, we will see the notion of a *weak derivative* or *distributional derivative*, which can be applied to a much rougher class of functions and is in many ways more suitable than the classical derivative for doing “Lebesgue” type analysis (i.e. analysis centred around the Lebesgue integral, and in particular allowing functions to be uncontrolled, infinite, or even undefined on sets of measure zero). However, for now we will stick with the classical approach to differentiation.

Exercise 1 If $F: [a,b] \rightarrow {\bf R}$ is everywhere differentiable, show that $F$ is continuous and $F'$ is measurable. If $F$ is almost everywhere differentiable, show that the (almost everywhere defined) function $F'$ is measurable (i.e. it is equal to an everywhere defined measurable function on $[a,b]$ outside of a null set), but give an example to demonstrate that $F'$ need not be continuous.

Exercise 2 Give an example of a function $F: [a,b] \rightarrow {\bf R}$ which is everywhere differentiable, but not continuously differentiable. (Hint: choose an $F$ that vanishes quickly at some point, say at the origin $0$, but which also oscillates rapidly near that point.)
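As a numerical illustration of the kind of example the hint points to, here is a quick check with the classical choice $F(x) = x^2 \sin(1/x)$, $F(0) = 0$ (my own choice of function, not one made in the text): the difference quotients at the origin tend to zero, so $F'(0)$ exists, yet $F'$ oscillates without a limit as $x \rightarrow 0$.

```python
import math

def F(x):
    # F(x) = x^2 sin(1/x) for x != 0, F(0) = 0: everywhere differentiable,
    # but F' oscillates near 0 and so is not continuous there.
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

def Fprime(x):
    # For x != 0: F'(x) = 2x sin(1/x) - cos(1/x); at 0 the difference
    # quotient F(h)/h = h sin(1/h) tends to 0, so F'(0) = 0.
    return 2 * x * math.sin(1.0 / x) - math.cos(1.0 / x) if x != 0 else 0.0

# Difference quotients at 0 are bounded by |h|, hence tend to F'(0) = 0 ...
quotients = [abs(F(h) / h) for h in (1e-3, 1e-6, 1e-9)]
print(quotients)

# ... yet F'(x_n) stays near -1 along x_n = 1/(2*pi*n), so F' has no limit at 0.
samples = [Fprime(1.0 / (2 * math.pi * n)) for n in (10, 100, 1000)]
print(samples)
```

Along the points $x_n = 1/(2\pi n)$ the $\cos(1/x)$ term equals $1$ while the $2x\sin(1/x)$ term vanishes, which is why the sampled derivatives cluster near $-1$ rather than near $F'(0) = 0$.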

In single-variable calculus, the operations of integration and differentiation are connected by a number of basic theorems, starting with Rolle’s theorem.

Theorem 1 (Rolle’s theorem) Let $[a,b]$ be a compact interval of positive length, and let $F: [a,b] \rightarrow {\bf R}$ be a differentiable function such that $F(a) = F(b)$. Then there exists $x \in (a,b)$ such that $F'(x) = 0$.

*Proof:* By subtracting a constant from $F$ (which does not affect differentiability or the derivative) we may assume that $F(a) = F(b) = 0$. If $F$ is identically zero then the claim is trivial, so assume that $F$ is non-zero somewhere. By replacing $F$ with $-F$ if necessary, we may assume that $F$ is positive somewhere, thus $\sup_{x \in [a,b]} F(x) > 0$. On the other hand, as $F$ is continuous and $[a,b]$ is compact, $F$ must attain its maximum somewhere, thus there exists $x \in [a,b]$ such that $F(x) \geq F(y)$ for all $y \in [a,b]$. Then $F(x)$ must be positive and so $x$ cannot equal either $a$ or $b$, and thus must lie in the interior $(a,b)$. From the right limit of (1) we see that $F'(x) \leq 0$, while from the left limit we have $F'(x) \geq 0$. Thus $F'(x) = 0$ and the claim follows. $\Box$

Remark 2 Observe that the same proof also works if $F$ is only differentiable in the interior $(a,b)$ of the interval $[a,b]$, so long as it is continuous all the way up to the boundary of $[a,b]$.

Exercise 3 Give an example to show that Rolle’s theorem can fail if $F$ is merely assumed to be almost everywhere differentiable, even if one adds the additional hypothesis that $F$ is continuous. This example illustrates that everywhere differentiability is a significantly stronger property than almost everywhere differentiability. We will see further evidence of this fact later in these notes; there are many theorems that assert in their conclusion that a function is almost everywhere differentiable, but few that manage to conclude *everywhere* differentiability.

Remark 3 It is important to note that Rolle’s theorem only works in the real scalar case when $F$ is real-valued, as it relies heavily on the least upper bound property for the domain ${\bf R}$. If, for instance, we consider complex-valued scalar functions $F: [a,b] \rightarrow {\bf C}$, then the theorem can fail; for instance, the function $F: [0,1] \rightarrow {\bf C}$ defined by $F(x) := e^{2\pi i x} - 1$ vanishes at both endpoints and is differentiable, but its derivative $F'(x) = 2\pi i e^{2\pi i x}$ is never zero. (Rolle’s theorem does imply that the real and imaginary parts of the derivative $F'$ both vanish somewhere, but the problem is that they don’t *simultaneously* vanish at the same point.) Similar remarks apply to functions taking values in a finite-dimensional vector space, such as ${\bf R}^n$.
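A quick numerical sanity check of this counterexample, using the standard instance $F(x) = e^{2\pi i x} - 1$ on $[0,1]$ (my own write-out of the function described in the remark):

```python
import cmath

# Complex counterexample to Rolle's theorem: F(0) = F(1) = 0, F is
# differentiable, yet |F'(x)| = 2*pi is bounded away from zero.
def F(x):
    return cmath.exp(2j * cmath.pi * x) - 1

def Fprime(x):
    return 2j * cmath.pi * cmath.exp(2j * cmath.pi * x)

print(abs(F(0)), abs(F(1)))  # both endpoint values vanish (up to rounding)

# |F'| is constant at 2*pi on a fine grid of [0,1], so F' never vanishes:
min_deriv = min(abs(Fprime(k / 1000)) for k in range(1001))
print(min_deriv)
```

The real part of $F'$ vanishes at some points and the imaginary part at others, but as the remark notes, never both at once, which is exactly what the constant modulus $|F'| = 2\pi$ reflects.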

One can easily amplify Rolle’s theorem to the mean value theorem:

Corollary 2 (Mean value theorem) Let $[a,b]$ be a compact interval of positive length, and let $F: [a,b] \rightarrow {\bf R}$ be a differentiable function. Then there exists $x \in (a,b)$ such that $F'(x) = \frac{F(b)-F(a)}{b-a}$.

*Proof:* Apply Rolle’s theorem to the function $x \mapsto F(x) - \frac{F(b)-F(a)}{b-a}(x-a)$. $\Box$

Remark 4 As Rolle’s theorem is only applicable to real scalar-valued functions, the more general mean value theorem is also only applicable to such functions.

Exercise 4 (Uniqueness of antiderivatives up to constants) Let $[a,b]$ be a compact interval of positive length, and let $F: [a,b] \rightarrow {\bf R}$ and $G: [a,b] \rightarrow {\bf R}$ be differentiable functions. Show that $F'(x) = G'(x)$ for every $x \in [a,b]$ if and only if $F(x) = G(x) + C$ for some constant $C \in {\bf R}$ and all $x \in [a,b]$.

We can use the mean value theorem to deduce one of the fundamental theorems of calculus:

Theorem 3 (Second fundamental theorem of calculus) Let $F: [a,b] \rightarrow {\bf R}$ be a differentiable function, such that $F'$ is Riemann integrable. Then the Riemann integral $\int_a^b F'(x)\ dx$ of $F'$ is equal to $F(b) - F(a)$. In particular, we have $\int_a^b F'(x)\ dx = F(b) - F(a)$ whenever $F$ is continuously differentiable.

*Proof:* Let $\varepsilon > 0$. By the definition of Riemann integrability, there exists a finite partition $a = t_0 < t_1 < \ldots < t_k = b$ such that

$$ \left| \sum_{j=1}^k F'(t_j^*) (t_j - t_{j-1}) - \int_a^b F'(x)\ dx \right| \leq \varepsilon$$

for every choice of $t_j^* \in [t_{j-1}, t_j]$.

Fix this partition. From the mean value theorem, for each $1 \leq j \leq k$ one can find $t_j^* \in [t_{j-1}, t_j]$ such that

$$ F'(t_j^*) (t_j - t_{j-1}) = F(t_j) - F(t_{j-1}),$$

and thus by telescoping series

$$ \left| F(b) - F(a) - \int_a^b F'(x)\ dx \right| \leq \varepsilon.$$

Since $\varepsilon > 0$ was arbitrary, the claim follows. $\Box$
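To see the telescoping mechanism concretely, here is a small numerical sketch (with $F(x) = x^3$ on $[0,1]$ as an arbitrary test function of my own choosing): at the mean-value points the Riemann sum of $F'$ collapses exactly to $F(b) - F(a)$, even on a coarse partition, while generic sample points only approximate it.

```python
# Second FTC via telescoping: F(x) = x^3, F'(x) = 3x^2 on [0, 1].
F = lambda x: x ** 3
Fp = lambda x: 3 * x ** 2

a, b, k = 0.0, 1.0, 10
t = [a + (b - a) * j / k for j in range(k + 1)]

# Mean value theorem points: solve 3 t*^2 = (t_j^3 - t_{j-1}^3)/(t_j - t_{j-1}),
# i.e. t* = sqrt((t_j^2 + t_j t_{j-1} + t_{j-1}^2) / 3).
star = [((t[j]**2 + t[j]*t[j-1] + t[j-1]**2) / 3) ** 0.5 for j in range(1, k + 1)]

# With these points each term F'(t*)(t_j - t_{j-1}) equals F(t_j) - F(t_{j-1}),
# so the sum telescopes to F(b) - F(a) despite the coarse partition (k = 10):
mvt_sum = sum(Fp(s) * (t[j] - t[j-1]) for j, s in zip(range(1, k + 1), star))
print(mvt_sum, F(b) - F(a))

# Generic (left-endpoint) sample points only approximate the integral:
left_sum = sum(Fp(t[j-1]) * (t[j] - t[j-1]) for j in range(1, k + 1))
print(left_sum)
```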

Remark 5 Even though the mean value theorem only holds for real scalar functions, the fundamental theorem of calculus holds for complex or vector-valued functions, as one can simply apply that theorem to each component of that function separately.

Of course, we also have the other half of the fundamental theorem of calculus:

Theorem 4 (First fundamental theorem of calculus) Let $[a,b]$ be a compact interval of positive length. Let $f: [a,b] \rightarrow {\bf C}$ be a continuous function, and let $F: [a,b] \rightarrow {\bf C}$ be the indefinite integral $F(x) := \int_a^x f(t)\ dt$. Then $F$ is differentiable on $[a,b]$, with derivative $F'(x) = f(x)$ for all $x \in [a,b]$. In particular, $F$ is continuously differentiable.

*Proof:* It suffices to show that

$$ \lim_{h \rightarrow 0^+} \frac{F(x+h) - F(x)}{h} = f(x)$$

for all $x \in [a,b)$, and

$$ \lim_{h \rightarrow 0^-} \frac{F(x+h) - F(x)}{h} = f(x)$$

for all $x \in (a,b]$. After a change of variables, we can write

$$ \frac{F(x+h) - F(x)}{h} = \int_0^1 f(x + ht)\ dt$$

for any $x \in [a,b)$ and any sufficiently small $h > 0$, or any $x \in (a,b]$ and any sufficiently small $h < 0$. As $f$ is continuous, the function $t \mapsto f(x + ht)$ converges uniformly to $f(x)$ on $[0,1]$ as $h \rightarrow 0$ (keeping $x$ fixed). As the interval $[0,1]$ is bounded, $\int_0^1 f(x+ht)\ dt$ thus converges to $\int_0^1 f(x)\ dt = f(x)$, and the claim follows. $\Box$
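A numerical sanity check of the theorem (with $f(x) = \cos x$ as my own test function): computing the indefinite integral $F$ by quadrature, the difference quotients of $F$ recover $f$ as $h \rightarrow 0$.

```python
import math

# First FTC check: F(x) = integral of f from 0 to x, computed by a fine
# midpoint rule; difference quotients of F should converge to f.
f = math.cos

def F(x, n=100000):
    h = x / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h  # midpoint rule

x = 0.7
for h in (1e-1, 1e-2, 1e-3):
    dq = (F(x + h) - F(x)) / h
    print(h, dq, f(x))  # dq approaches cos(0.7) as h shrinks
```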

Corollary 5 (Differentiation theorem for continuous functions) Let $f: [a,b] \rightarrow {\bf C}$ be a continuous function on a compact interval. Then we have

$$ \lim_{h \rightarrow 0^+} \frac{1}{h} \int_{[x,x+h]} f(t)\ dt = f(x)$$

for all $x \in [a,b)$,

$$ \lim_{h \rightarrow 0^+} \frac{1}{h} \int_{[x-h,x]} f(t)\ dt = f(x)$$

for all $x \in (a,b]$, and thus

$$ \lim_{h \rightarrow 0^+} \frac{1}{2h} \int_{[x-h,x+h]} f(t)\ dt = f(x)$$

for all $x \in (a,b)$.

In these notes we explore the question of the extent to which these theorems continue to hold when the differentiability or integrability conditions on the various functions are relaxed. Among the results proven in these notes are

- The *Lebesgue differentiation theorem*, which roughly speaking asserts that Corollary 5 continues to hold for *almost* every $x$ if $f$ is merely absolutely integrable, rather than continuous;
- A number of *differentiation theorems*, which assert for instance that monotone, Lipschitz, or bounded variation functions in one dimension are almost everywhere differentiable; and
- The *second fundamental theorem of calculus* for absolutely continuous functions.

The material here is loosely based on Chapter 3 of Stein-Shakarchi.

Assaf Naor and I have just uploaded to the arXiv our paper “Random Martingales and localization of maximal inequalities“, to be submitted shortly. This paper investigates the best constant $C_d$ in generalisations of the classical Hardy-Littlewood maximal inequality

$$ \Big\| \sup_{r > 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\ dy \Big\|_{L^{1,\infty}_x({\bf R}^d)} \leq C_d \|f\|_{L^1({\bf R}^d)}$$

for any absolutely integrable $f: {\bf R}^d \rightarrow {\bf C}$, where $B(x,r)$ is the Euclidean ball of radius $r$ centred at $x$, and $|E|$ denotes the Lebesgue measure of a subset $E$ of ${\bf R}^d$. This inequality is fundamental to a large part of real-variable harmonic analysis, and in particular to *Calderón-Zygmund theory*. A similar inequality in fact holds with the Euclidean norm replaced by any other convex norm on ${\bf R}^d$.

The exact value of the constant $C_d$ is only known in the one-dimensional case $d = 1$, with a remarkable result of Melas establishing that $C_1 = \frac{11 + \sqrt{61}}{12}$. Classical covering lemma arguments give the exponential upper bound $C_d \leq C^d$ for some absolute constant $C$ when properly optimised (a direct application of the Vitali covering lemma gives $C = 5$, but one can reduce this to $C = 3$ by being careful). In an important paper of Stein and Strömberg, the improved bound $C_d = O(d \log d)$ was obtained for any convex norm by a more intricate covering lemma argument, and the slight improvement $C_d = O(d)$ obtained in the Euclidean case by another argument more adapted to the Euclidean setting that relied on heat kernels. In the other direction, a recent result of Aldaz shows that $C_d \rightarrow \infty$ as $d \rightarrow \infty$ in the case of the $\ell^\infty$ norm, and in fact in an even more recent preprint of Aubrun, the lower bound $C_d \gg_\varepsilon \log^{1-\varepsilon} d$ for any $\varepsilon > 0$ has been obtained in this case. However, these lower bounds do not apply in the Euclidean case, and one may still conjecture that $C_d$ is in fact uniformly bounded in this case.

Unfortunately, we do not make direct progress on these problems here. However, we do show that the Stein-Strömberg bound $C_d = O(d \log d)$ is extremely general, applying to a wide class of metric measure spaces obeying a certain “microdoubling condition at dimension $d$”; and conversely, in such level of generality, it is essentially the best estimate possible, even with additional metric measure hypotheses on the space. Thus, if one wants to improve this bound for a specific maximal inequality, one has to use specific properties of the geometry (such as the connections between Euclidean balls and heat kernels). Furthermore, in the general setting of metric measure spaces, one has a general *localisation principle*, which roughly speaking asserts that in order to prove a maximal inequality over all scales $0 < r < \infty$, it suffices to prove such an inequality in a smaller range $R \leq r \leq d^{O(1)} R$ uniformly in $R > 0$. It is this localisation which ultimately explains the significance of the $d \log d$ growth in the Stein-Strömberg result (there are essentially $\log d$ distinct scales in any range $R \leq r \leq d^{O(1)} R$). It also shows that if one restricts the radii $r$ to a lacunary range (such as powers of $2$), the best constant improves to $O(\log d)$; if one restricts the radii to an even sparser range such as powers of $d$, the best constant becomes $O(1)$.
