
In harmonic analysis and PDE, one often wants to place a function f: {\bf R}^d \to {\bf C} on some domain (let’s take a Euclidean space {\bf R}^d for simplicity) in one or more function spaces in order to quantify its “size” in some sense.  Examples include

  • The Lebesgue spaces L^p of functions f whose norm \|f\|_{L^p} := (\int_{{\bf R}^d} |f|^p)^{1/p} is finite, as well as their relatives such as the weak L^p spaces L^{p,\infty} (and more generally the Lorentz spaces L^{p,q}) and Orlicz spaces such as L \log L and e^L;
  • The classical regularity spaces C^k, together with their Hölder continuous counterparts C^{k,\alpha};
  • The Sobolev spaces W^{s,p} of functions f whose norm \|f\|_{W^{s,p}} = \|f\|_{L^p} + \| |\nabla|^s f\|_{L^p} is finite (other equivalent definitions of this norm exist, and there are technicalities if s is negative or p \not \in (1,\infty)), as well as relatives such as homogeneous Sobolev spaces \dot W^{s,p}, Besov spaces B^{s,p}_q, and Triebel-Lizorkin spaces F^{s,p}_q.  (The conventions for the superscripts and subscripts here are highly variable.)
  • Hardy spaces {\mathcal H}^p, the space BMO of functions of bounded mean oscillation (and the subspace VMO of functions of vanishing mean oscillation);
  • The Wiener algebra A;
  • Morrey spaces M^p_q;
  • The space M of finite measures;
  • etc., etc.

As the above partial list indicates, there is an entire zoo of function spaces one could consider, and it can be difficult at first to see how they are organised with respect to each other.  However, one can get some clarity in this regard by drawing a type diagram for the function spaces one is trying to study.  A type diagram assigns a tuple (usually a pair) of relevant exponents to each function space.  For function spaces X on Euclidean space, two such exponents are the regularity s of the space, and the integrability p of the space.  These two quantities are somewhat fuzzy in nature (and are not easily defined for all possible function spaces), but can basically be described as follows.  We test the function space norm \|f\|_X of a modulated rescaled bump function

f(x) := A e^{i x \cdot \xi} \phi( \frac{x-x_0}{R} ) (1)

where A > 0 is an amplitude, R > 0 is a radius, \phi \in C^\infty_c({\bf R}^d) is a test function, x_0 is a position, and \xi \in {\bf R}^d is a frequency of some magnitude |\xi| \sim N.  One then studies how the norm \|f\|_X depends on the parameters A, R, N.  Typically, one has a relationship of the form

\|f\|_X \sim A N^s R^{d/p} (2)

for some exponents s, p, at least in the high-frequency case when N is large (in particular, from the uncertainty principle it is natural to require N \gtrsim 1/R, and when dealing with inhomogeneous norms it is also natural to require N \gtrsim 1).  The exponent s measures how sensitive the X norm is to oscillation, and thus controls regularity: if s is large, then oscillating functions will have large X norm, and so functions in X will tend not to oscillate too much and will thus be smooth.  Similarly, the exponent p measures how sensitive the X norm is to the function f spreading out to large scales; if p is small, then slowly decaying functions will have large norm, so that functions in X tend to decay quickly; conversely, if p is large, then singular functions will tend to have large norm, so that functions in X will tend not to have high peaks.
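As a quick sanity check of (2), consider the Lebesgue space X = L^p.  For the bump (1) one has |f(x)| = A |\phi(\frac{x-x_0}{R})|, so the change of variables x = x_0 + Ry gives

\|f\|_{L^p} = A R^{d/p} \|\phi\|_{L^p} \sim A R^{d/p},

which is (2) with regularity s = 0 and integrability p (the modulation e^{i x \cdot \xi} has no effect on this particular norm).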

Note that the exponent s in (2) could be positive, zero, or negative; however, the exponent p should be non-negative, since intuitively enlarging R should always lead to a larger (or at least comparable) norm.  Finally, the exponent of the A parameter should always be 1, since norms are by definition homogeneous.  Note also that the position x_0 plays no role in (2); this reflects the fact that most of the popular function spaces in analysis are translation-invariant.

The type diagram below plots the s, 1/p indices of various spaces.  The black dots indicate those spaces for which the s, 1/p indices are fixed; the blue dots are those spaces for which at least one of the s, 1/p indices is variable (and so, depending on the value chosen for these parameters, these spaces may end up in a different location on the type diagram than the typical location indicated here).

(There are some minor cheats in this diagram; for instance, for the Orlicz spaces L \log L and e^L one has to adjust the norm (2) by a logarithmic factor.  Also, the norms for the Schwartz space {\mathcal S} are not translation-invariant and thus not perfectly describable by this formalism.  This picture should be viewed as a visual aid only, and not as a genuinely rigorous mathematical statement.)

The type diagram can be used to clarify some of the relationships between function spaces, such as Sobolev embedding.  For instance, when working with inhomogeneous spaces (which basically identifies low frequencies N \ll 1 with medium frequencies N \sim 1, so that one is effectively always in the regime N \gtrsim 1), then decreasing the s parameter results in decreasing the right-hand side of (2).  Thus, one expects the function space norms to get smaller (and the function spaces to get larger) if one decreases s while keeping p fixed.  Thus, for instance, W^{k,p} should be contained in W^{k-1,p}, and so forth.  Note however that this inclusion is not available for homogeneous function spaces such as \dot W^{k,p}, in which the frequency parameter N can be either much larger than 1 or much smaller than 1.
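(This can be seen directly from (2): in the inhomogeneous regime N \gtrsim 1 one has N^{k-1} \leq N^k, so the heuristic W^{k-1,p} norm A N^{k-1} R^{d/p} of the bump (1) is dominated by the W^{k,p} norm A N^k R^{d/p}; in the homogeneous setting, low frequencies N \ll 1 reverse this inequality, which is why the inclusion fails there.)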

Similarly, if one is working in a compact domain rather than in {\bf R}^d, then one has effectively capped the radius parameter R to be bounded, and so we expect the function space norms to get smaller (and the function spaces to get larger) as one increases 1/p, thus for instance L^2 will be contained in L^1.  Conversely, if one is working in a discrete domain such as {\Bbb Z}^d, then the radius parameter R has now effectively been bounded from below, and the reverse should occur: the function spaces should get larger as one decreases 1/p.  (If the domain is both compact and discrete, then it is finite, and on a finite-dimensional space all norms are equivalent.)
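For instance, on a domain \Omega of finite measure, the Cauchy-Schwarz inequality

\|f\|_{L^1(\Omega)} = \int_\Omega |f| \cdot 1 \leq |\Omega|^{1/2} \|f\|_{L^2(\Omega)}

is the rigorous counterpart of the heuristic inclusion L^2 \subset L^1; the constant |\Omega|^{1/2} reflects the capped radius R.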

As mentioned earlier, the uncertainty principle suggests that one has the restriction N \gtrsim 1/R.  From this and (2), we expect to be able to enlarge the function space by trading in the regularity parameter s for the integrability parameter p, keeping the dimensional quantity d/p - s fixed.  This is indeed how Sobolev embedding works.  Note that in some cases one runs out of regularity before p goes all the way to infinity (thus ending up at an L^p space), while in other cases p hits infinity first.  In the latter case, one can embed the Sobolev space into a Hölder space such as C^{k,\alpha}.
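As a concrete instance of this bookkeeping: in d = 3 dimensions, the space \dot W^{1,2} has d/p - s = 3/2 - 1 = 1/2, which matches the value 3/6 - 0 = 1/2 for L^6, and indeed one has the Sobolev embedding \dot W^{1,2}({\bf R}^3) \subset L^6({\bf R}^3).  Heuristically, setting R \sim 1/N in (2), the norm of the bump (1) in either space scales as A N^{s - d/p} = A N^{-1/2}.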

On continuous domains, one can send the frequency N off to infinity, keeping the amplitude A and radius R fixed.  From this and (2) we see that norms with a lower regularity s can never hope to control norms with a higher regularity s' > s, no matter what one does with the integrability parameter.  Note however that in discrete settings this obstruction disappears; when working on, say, {\bf Z}^d, one can in fact gain as much regularity as one wishes for free, and there is no distinction between a Lebesgue space \ell^p and its Sobolev counterparts W^{k,p} in such a setting.
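Indeed, on {\bf Z}^d one can take the discrete derivative \partial_j f(n) := f(n+e_j) - f(n), and the triangle inequality already gives \|\partial_j f\|_{\ell^p} \leq 2 \|f\|_{\ell^p}; iterating, \|f\|_{W^{k,p}({\bf Z}^d)} \lesssim_{k,d} \|f\|_{\ell^p({\bf Z}^d)}.  In the language of the type diagram, the frequency N is now bounded above as well as below, so the N^s factor in (2) is always O(1).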

When interpolating between two spaces (using either the real or complex interpolation method), the interpolated space usually has regularity and integrability exponents on the line segment between the corresponding exponents of the endpoint spaces.  (This can be heuristically justified from the formula (2) by thinking about how the real or complex interpolation methods actually work.)  Typically, one can control the norm of the interpolated space by the geometric mean of the endpoint norms that is indicated by this line segment; again, this is plausible from looking at (2).
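The model case is Hölder's inequality: if 1/p = (1-\theta)/p_0 + \theta/p_1 for some 0 \leq \theta \leq 1, then

\|f\|_{L^p} \leq \|f\|_{L^{p_0}}^{1-\theta} \|f\|_{L^{p_1}}^{\theta},

so the norm at the intermediate point of the segment joining L^{p_0} and L^{p_1} is controlled by the corresponding geometric mean of the endpoint norms.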

The space L^2 is self-dual.  More generally, the dual of a function space X will generally have type exponents that are the reflection of the original exponents around the L^2 origin.  Consider for instance the dual pairs H^s, H^{-s} or {\mathcal H}^1, BMO in the above diagram.
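For instance, the dual of L^p for 1 \leq p < \infty is L^{p'} with 1/p + 1/p' = 1, which in type diagram coordinates is the reflection 1/p \mapsto 1 - 1/p around 1/2 (with the regularity s = 0 unchanged); similarly, the duality (H^s)^* \equiv H^{-s} reflects s \mapsto -s while fixing 1/p = 1/2.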

Spaces whose integrability exponent p is larger than 1 (i.e. which lie to the left of the dotted line) tend to be Banach spaces, while spaces whose integrability exponent is less than 1 are almost never Banach spaces.  (This can be justified by covering a large ball by small balls and considering how (1) would interact with the triangle inequality in this case.)  The case p=1 is borderline; some spaces at this level of integrability, such as L^1, are Banach spaces, while other spaces, such as L^{1,\infty}, are not.
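To see the failure concretely for p < 1: if f, g are bumps of the form (1) with disjoint supports and \|f\|_{L^p} = \|g\|_{L^p} = 1, then

\|f+g\|_{L^p} = (\|f\|_{L^p}^p + \|g\|_{L^p}^p)^{1/p} = 2^{1/p} > 2 = \|f\|_{L^p} + \|g\|_{L^p},

so the triangle inequality fails, and one only has a quasi-norm.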

While the regularity s and integrability p are usually the most important exponents in a function space (because amplitude, width, and frequency are usually the most important features of a function in analysis), they do not tell the entire story.  One major reason for this is that the modulated bump functions (1), while an important class of test examples of functions, are by no means the only functions that one would wish to study.  For instance, one could also consider sums of bump functions (1) at different scales.  The behaviour of the function space norms on such sums is often controlled by secondary exponents, such as the second exponent q that arises in Lorentz spaces, Besov spaces, or Triebel-Lizorkin spaces.  For instance, consider the function

f_M(x) := \sum_{m=1}^M 2^{-md} \phi(x/2^m), (3)

where M is a large integer, representing the number of distinct scales present in f_M.  Any function space with regularity s=0 and p=1 should assign each summand 2^{-md} \phi(x/2^m) in (3) a norm of O(1), so the norm of f_M could be as large as O(M) if one assumes the triangle inequality.  This is indeed the case for the L^1 norm, but for the weak L^1 norm, i.e. the L^{1,\infty} norm, f_M only has size O(1).  More generally, for the Lorentz spaces L^{1,q}, f_M will have a norm of about O(M^{1/q}).  Thus we see that such secondary exponents can influence the norm of a function by an amount which is polynomial in the number of scales.  In many applications, though, the number of scales is a “logarithmic” quantity, and thus of lower order interest when compared against “polynomial” exponents such as s and p.  So the fine distinction between, say, strong L^1 and weak L^1 is only of interest in “critical” situations in which one cannot afford to lose any logarithmic factors (this is for instance the case in much of Calderón-Zygmund theory).
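One can also test this multiscale behaviour numerically.  The following short Python sketch (in dimension d = 1, with an illustrative tent function standing in for \phi, and with arbitrarily chosen grid parameters; this is a heuristic check rather than a rigorous computation) confirms that the L^1 norm of f_M grows like M while the weak L^1 quasinorm \sup_{t > 0} t\, |\{|f_M| > t\}| stays bounded:

```python
import numpy as np

# Sanity check of the multiscale example (3) in dimension d = 1:
#   f_M(x) = sum_{m=1}^M 2^{-m} * phi(x / 2^m).
# Expectation: ||f_M||_{L^1} grows like M, while the weak L^1
# quasinorm sup_t t * |{ |f_M| > t }| stays O(1).
# The tent bump below is an illustrative stand-in for a smooth phi.

def phi(x):
    return np.maximum(0.0, 1.0 - np.abs(x))  # tent function supported on [-1, 1]

def l1_and_weak_l1(M, dx=1e-3):
    L = 2.0 ** M                      # f_M is supported in [-2^M, 2^M]
    x = np.arange(-L, L, dx)
    f = sum(2.0 ** -m * phi(x / 2.0 ** m) for m in range(1, M + 1))
    l1 = f.sum() * dx                 # Riemann sum for the L^1 norm (f >= 0)
    # Weak L^1 quasinorm: sup over a grid of thresholds t of t * |{ f > t }|.
    thresholds = np.logspace(-6, 0, 100)
    weak = max(t * (f > t).sum() * dx for t in thresholds)
    return l1, weak

for M in (2, 4, 6, 8):
    l1, weak = l1_and_weak_l1(M)
    print(f"M = {M}: L^1 norm = {l1:.2f} (expect ~ M), weak L^1 = {weak:.2f} (expect O(1))")
```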

We have cheated somewhat by only working in the high frequency regime.  When dealing with inhomogeneous spaces, one often has a different set of exponents for (2) in the low-frequency regime than in the high-frequency regime.  In such cases, one sometimes has to use a more complicated type diagram to genuinely model the situation, e.g. by assigning to each space a convex set of type exponents rather than a single exponent, or perhaps having two separate type diagrams, one for the high frequency regime and one for the low frequency regime.  Such diagrams can get quite complicated, and will probably not be much use to a beginner in the subject, though in the hands of an expert who knows what he or she is doing, they can still be an effective visual aid.

Now that we have reviewed the foundations of measure theory, let us now put it to work to set up the basic theory of one of the fundamental families of function spaces in analysis, namely the L^p spaces (also known as Lebesgue spaces). These spaces serve as important model examples for the general theory of topological and normed vector spaces, which we will discuss a little bit in this lecture and then in much greater detail in later lectures. (See also my previous blog post on function spaces.)

Just as scalar quantities live in the space of real or complex numbers, and vector quantities live in vector spaces, functions f: X \to {\Bbb C} (or other objects closely related to functions, such as measures) live in function spaces. Like other spaces in mathematics (e.g. vector spaces, metric spaces, topological spaces, etc.), a function space V is not just a mere set of objects (in this case, the objects are functions), but also comes with various important structures that allow one to do some useful operations inside the space, and from one space to another. For example, function spaces tend to have several (though usually not all) of the following types of structures, which are usually related to each other by various compatibility conditions (a small numerical sketch after the list illustrates several of these structures at once):

  1. Vector space structure. One can often add two functions f, g in a function space V, and expect to get another function f+g in that space V; similarly, one can multiply a function f in V by a scalar c and get another function cf in V. Usually, these operations obey the axioms of a vector space, though it is important to caution that the dimension of a function space is typically infinite. (In some cases, the space of scalars is a more complicated ring than the real or complex field, in which case we need the notion of a module rather than a vector space, but we will not use this more general notion in this course.) Virtually all of the function spaces we shall encounter in this course will be vector spaces. Because the field of scalars is real or complex, vector spaces also come with the notion of convexity, which turns out to be crucial in many aspects of analysis. As a consequence (and in marked contrast to algebra or number theory), much of the theory in real analysis does not seem to extend to other fields of scalars (in particular, real analysis fails spectacularly in the finite characteristic setting).
  2. Algebra structure. Sometimes (though not always), we also wish to multiply two functions f, g in V and get another function fg in V; when combined with the vector space structure and assuming some compatibility conditions (e.g. the distributive law), this makes V an algebra. This multiplication operation is often just pointwise multiplication, but there are other important multiplication operations on function spaces too, such as convolution. (One sometimes sees other algebraic structures than multiplication appear in function spaces, most notably derivations, but again we will not encounter those in this course. Another common algebraic operation for function spaces is conjugation or adjoint, leading to the notion of a *-algebra.)
  3. Norm structure. We often want to distinguish “large” functions in V from “small” ones, especially in analysis, in which “small” terms in an expression are routinely discarded or deemed to be acceptable errors. One way to do this is to assign a magnitude or norm \|f\|_V to each function that measures its size. Unlike the situation with scalars, where there is basically a single notion of magnitude, functions have a wide variety of useful notions of size, each measuring a different aspect (or combination of aspects) of the function, such as height, width, oscillation, regularity, decay, and so forth. Typically, each such norm gives rise to a separate function space (although sometimes it is useful to consider a single function space with multiple norms on it). We usually require the norm to be compatible with the vector space structure (and algebra structure, if present), for instance by demanding that the triangle inequality hold.
  4. Metric structure. We also want to tell whether two functions f, g in a function space V are “near together” or “far apart”. A typical way to do this is to impose a metric d: V \times V \to {\Bbb R}^+ on the space V. If both a norm \| \|_V and a vector space structure are available, there is an obvious way to do this: define the distance between two functions f, g in V to be d( f, g ) := \|f-g\|_V. (This will be the only type of metric on function spaces encountered in this course. But there are some nonlinear function spaces of importance in nonlinear analysis (e.g. spaces of maps from one manifold to another) which have no vector space structure or norm, but still have a metric.) It is often important to know if the vector space is complete with respect to the given metric; this allows one to take limits of Cauchy sequences, and (with a norm and vector space structure) sum absolutely convergent series, as well as use some useful results from point set topology such as the Baire category theorem. All of these operations are of course vital in analysis. [Compactness would be an even better property than completeness to have, but function spaces unfortunately tend to be non-compact in various rather nasty ways, although there are useful partial substitutes for compactness that are available, see e.g. this blog post of mine.]
  5. Topological structure. It is often important to know when a sequence (or, occasionally, a net) of functions f_n in V “converges” in some sense to a limit f (which, hopefully, is still in V); there are often many distinct modes of convergence (e.g. pointwise convergence, uniform convergence, etc.) that one wishes to carefully distinguish from each other. Also, in order to apply various powerful topological theorems (or to justify various formal operations involving limits, suprema, etc.), it is important to know when certain subsets of V enjoy key topological properties (most notably compactness and connectedness), and to know which operations on V are continuous. For all of this, one needs a topology on V. If one already has a metric, then one of course has a topology generated by the open balls of that metric; but there are many important topologies on function spaces in analysis that do not arise from metrics. We also often require the topology to be compatible with the other structures on the function space; for instance, we usually require the vector space operations of addition and scalar multiplication to be continuous. In some cases, the topology on V extends to some natural superspace W of more general functions that contain V; in such cases, it is often important to know whether V is closed in W, so that limits of sequences in V stay in V.
  6. Functional structures. Since numbers are easier to understand and deal with than functions, it is not surprising that we often study functions f in a function space V by first applying some functional \lambda: V \to {\Bbb C} to f to identify some key numerical quantity \lambda(f) associated to f. Norms f \mapsto \|f\|_V are of course one important example of a functional; integration f \mapsto \int_X f\ d\mu provides another; and evaluation f \mapsto f(x) at a point x provides a third important class. (Note, though, that while evaluation is the fundamental feature of a function in set theory, it is often a quite minor operation in analysis; indeed, in many function spaces, evaluation is not even defined at all, for instance because the functions in the space are only defined almost everywhere!) An inner product \langle,\rangle on V (see below) also provides a large family f \mapsto \langle f, g \rangle of useful functionals. It is of particular interest to study functionals that are compatible with the vector space structure (i.e. are linear) and with the topological structure (i.e. are continuous); this will give rise to the important notion of duality on function spaces.
  7. Inner product structure. One often would like to pair a function f in a function space V with another object g (which is often, though not always, another function in the same function space V) and obtain a number \langle f, g \rangle, that typically measures the amount of “interaction” or “correlation” between f and g. Typical examples include inner products arising from integration, such as \langle f, g\rangle := \int_X f \overline{g}\ d\mu; integration itself can also be viewed as a pairing, \langle f, \mu \rangle := \int_X f\ d\mu. Of course, we usually require such inner products to be compatible with the other structures present on the space (e.g., to be compatible with the vector space structure, we usually require the inner product to be bilinear or sesquilinear). Inner products, when available, are incredibly useful in understanding the metric and norm geometry of a space, due to such fundamental facts as the Cauchy-Schwarz inequality and the parallelogram law. They also give rise to the important notion of orthogonality between functions.
  8. Group actions. We often expect our function spaces to enjoy various symmetries; we might wish to rotate, reflect, translate, modulate, or dilate our functions and expect to preserve most of the structure of the space when doing so. In modern mathematics, symmetries are usually encoded by group actions (or actions of other group-like objects, such as semigroups or groupoids; one also often upgrades groups to more structured objects such as Lie groups). As usual, we typically require the group action to preserve the other structures present on the space, e.g. one often restricts attention to group actions that are linear (to preserve the vector space structure), continuous (to preserve topological structure), unitary (to preserve inner product structure), isometric (to preserve metric structure), and so forth. Besides giving us useful symmetries to spend, the presence of such group actions allows one to apply the powerful techniques of representation theory, Fourier analysis, and ergodic theory. However, as this is a foundational real analysis class, we will not discuss these important topics much here (and in fact will not deal with group actions much at all).
  9. Order structure. In some cases, we want to utilise the notion of a function f being “non-negative”, or “dominating” another function g. One might also want to take the “max” or “supremum” of two or more functions in a function space V, or split a function into “positive” and “negative” components. Such order structures interact with the other structures on a space in many useful ways (e.g. via the Stone-Weierstrass theorem). Much like convexity, order structure is specific to the real line and is another reason why much of real analysis breaks down over other fields. (The complex plane is of course an extension of the real line and so is able to exploit the order structure of that line, usually by treating the real and imaginary components separately.)
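To see several of these structures side by side in the simplest possible setting, here is a minimal Python sketch using the toy model (a hypothetical stand-in, not a genuine function space) of functions on the finite cyclic domain {\Bbb Z}/n{\Bbb Z}, represented as arrays; this is only a finite-dimensional cartoon, but the vector space, algebra, norm, metric, inner product, and group action structures all already make sense there:

```python
import numpy as np

n = 8
f = np.random.randn(n)          # a "function" on the n-point domain {0, ..., n-1}
g = np.random.randn(n)

# 1. Vector space structure: pointwise addition and scalar multiplication.
h = 2.0 * f + g

# 2. Algebra structure: pointwise multiplication (convolution is another choice).
prod = f * g

# 3. Norm structure: the (normalized) L^2 norm.
def norm(u):
    return np.sqrt(np.mean(np.abs(u) ** 2))

# 4. Metric structure: the distance induced by the norm.
def dist(u, v):
    return norm(u - v)

# 7. Inner product structure: <u, v> = mean of u * conj(v); note norm(u)^2 = <u, u>.
def inner(u, v):
    return np.mean(u * np.conj(v))

# 8. Group action: the translation action of Z/nZ, which is isometric for this norm.
def translate(u, k):
    return np.roll(u, k)

assert np.isclose(norm(translate(f, 3)), norm(f))   # translations preserve the norm
assert np.isclose(inner(f, f).real, norm(f) ** 2)   # the norm comes from the inner product
```

Of course, the distinctive analytic phenomena (completeness issues, failure of compactness, many inequivalent norms and modes of convergence) only genuinely emerge in the infinite-dimensional setting.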

There are of course many ways to combine various flavours of these structures together, and there are entire subfields of mathematics that are devoted to studying particularly common and useful categories of such combinations (e.g. topological vector spaces, normed vector spaces, Banach spaces, Banach algebras, von Neumann algebras, C^* algebras, Fréchet spaces, Hilbert spaces, group algebras, etc.). The study of these sorts of spaces is known collectively as functional analysis. We will study some (but certainly not all) of these combinations in an abstract and general setting later in this course, but to begin with we will focus on the L^p spaces, which are very good model examples for many of the above general classes of spaces, and also of importance in many applications of analysis (such as probability or PDE).
