
In harmonic analysis and PDE, one often wants to place a function on some domain (let’s take a Euclidean space for simplicity) in one or more function spaces in order to quantify its “size” in some sense. Examples include

- The Lebesgue spaces $L^p(\mathbb{R}^d)$ of functions $f$ whose norm $\|f\|_{L^p(\mathbb{R}^d)} := (\int_{\mathbb{R}^d} |f|^p)^{1/p}$ is finite, as well as their relatives such as the weak $L^p$ spaces $L^{p,\infty}(\mathbb{R}^d)$ (and more generally the Lorentz spaces $L^{p,q}(\mathbb{R}^d)$) and Orlicz spaces such as $L \log L$ and $e^L$;
- The classical regularity spaces $C^k(\mathbb{R}^d)$, together with their Hölder continuous counterparts $C^{k,\alpha}(\mathbb{R}^d)$;
- The Sobolev spaces $W^{s,p}(\mathbb{R}^d)$ of functions $f$ whose norm $\|f\|_{L^p} + \| |\nabla|^s f \|_{L^p}$ is finite (other equivalent definitions of this norm exist, and there are technicalities if $s$ is negative or $p \not\in (1,\infty)$), as well as relatives such as the homogeneous Sobolev spaces $\dot W^{s,p}(\mathbb{R}^d)$, Besov spaces $B^s_{p,q}(\mathbb{R}^d)$, and Triebel-Lizorkin spaces $F^s_{p,q}(\mathbb{R}^d)$. (The conventions for the superscripts and subscripts here are highly variable.)
- Hardy spaces $\mathcal{H}^p(\mathbb{R}^d)$, the space $BMO(\mathbb{R}^d)$ of functions of bounded mean oscillation (and the subspace $VMO(\mathbb{R}^d)$ of functions of vanishing mean oscillation);
- The Wiener algebra $A(\mathbb{R}^d)$;
- Morrey spaces $\mathcal{M}^p_q(\mathbb{R}^d)$;
- The space $M(\mathbb{R}^d)$ of finite measures;
- etc., etc.

As the above partial list indicates, there is an entire zoo of function spaces one could consider, and it can be difficult at first to see how they are organised with respect to each other. However, one can get some clarity in this regard by drawing a *type diagram* for the function spaces one is trying to study. A type diagram assigns a tuple (usually a pair) of relevant exponents to each function space. For function spaces $X$ on Euclidean space $\mathbb{R}^d$, two such exponents are the *regularity* $s$ of the space, and the *integrability* $p$ of the space. These two quantities are somewhat fuzzy in nature (and are not easily defined for all possible function spaces), but can basically be described as follows. We test the function space norm of a modulated rescaled bump function

$$ f(x) = A e^{i x \cdot \xi} \phi\left( \frac{x - x_0}{R} \right) \qquad (1) $$

where $A > 0$ is an amplitude, $R > 0$ is a radius, $\phi$ is a test function, $x_0 \in \mathbb{R}^d$ is a position, and $\xi \in \mathbb{R}^d$ is a frequency of some magnitude $N \sim |\xi|$. One then studies how the norm $\|f\|_X$ depends on the parameters $A$, $N$, $R$. Typically, one has a relationship of the form

$$ \|f\|_X \approx A N^{s} R^{d/p} \qquad (2) $$

for some exponents $s$ and $p$, at least in the high-frequency case when $N$ is large (in particular, from the uncertainty principle it is natural to require $RN \gtrsim 1$, and when dealing with inhomogeneous norms it is also natural to require $N \gtrsim 1$). The exponent $s$ measures how sensitive the norm is to oscillation, and thus controls regularity; if $s$ is large, then oscillating functions will have large norm, and thus functions in $X$ will tend not to oscillate too much and thus be smooth. Similarly, the exponent $1/p$ measures how sensitive the norm is to the function spreading out to large scales; if $p$ is small, then slowly decaying functions will have large norm, so that functions in $X$ tend to decay quickly; conversely, if $p$ is large, then singular functions will tend to have large norm, so that functions in $X$ will tend to not have high peaks.

Note that the exponent $s$ in (2) could be positive, zero, or negative; however, the exponent $d/p$ of the radius $R$ should be non-negative, since intuitively enlarging $R$ should always lead to a larger (or at least comparable) norm. Finally, the exponent of the amplitude parameter $A$ should always be $1$, since norms are by definition homogeneous. Note also that the position $x_0$ plays no role in (2); this reflects the fact that most of the popular function spaces in analysis are translation-invariant.
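As a quick sanity check of (2) (an elementary computation, included here for concreteness), one can evaluate the $L^p$ norm of a modulated rescaled bump $f(x) = A e^{i x \cdot \xi} \phi((x-x_0)/R)$ exactly:

```latex
\|f\|_{L^p(\mathbb{R}^d)}
  = A \Big( \int_{\mathbb{R}^d} \big| \phi\big(\tfrac{x - x_0}{R}\big) \big|^p \, dx \Big)^{1/p}
  = A \, R^{d/p} \, \|\phi\|_{L^p(\mathbb{R}^d)},
```

using $|e^{i x \cdot \xi}| = 1$ and the change of variables $y = (x - x_0)/R$. Thus for $X = L^p$ the heuristic (2) holds with regularity exponent $s = 0$ (no dependence on the frequency $N$) and radius exponent $d/p$, which is why the Lebesgue spaces sit at zero regularity on the type diagram.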

The type diagram below plots the regularity and integrability indices of various spaces. The black dots indicate those spaces for which the indices are fixed; the blue dots are those spaces for which at least one of the indices is variable (and so, depending on the value chosen for these parameters, these spaces may end up in a different location on the type diagram than the typical location indicated here).

(There are some minor cheats in this diagram; for instance, for the Orlicz spaces $L \log L$ and $e^L$ one has to adjust (2) by a logarithmic factor. Also, the seminorms for the Schwartz space $\mathcal{S}(\mathbb{R}^d)$ are not translation-invariant and thus not perfectly describable by this formalism. This picture should be viewed as a visual aid only, and not as a genuinely rigorous mathematical statement.)

The type diagram can be used to clarify some of the relationships between function spaces, such as Sobolev embedding. For instance, when working with inhomogeneous spaces (which basically identify the low-frequency regime $N \ll 1$ with the medium-frequency regime $N \sim 1$, so that one is effectively always in the regime $N \gtrsim 1$), decreasing the regularity parameter $s$ results in decreasing the right-hand side of (2). Thus, one expects the function space norms to get smaller (and the function spaces to get larger) if one decreases $s$ while keeping $p$ fixed. Thus, for instance, $W^{1,p}(\mathbb{R}^d)$ should be contained in $L^p(\mathbb{R}^d)$, and so forth. Note however that this inclusion is not available for homogeneous function spaces such as $\dot W^{s,p}(\mathbb{R}^d)$, in which the frequency parameter $N$ can be either much larger than $1$ or much smaller than $1$.

Similarly, if one is working on a compact domain $\Omega$ rather than on $\mathbb{R}^d$, then one has effectively capped the radius parameter to be bounded, $R \lesssim 1$, and so we expect the function space norms to get smaller (and the function spaces to get larger) as one increases $1/p$; thus, for instance, $L^\infty(\Omega)$ will be contained in $L^1(\Omega)$. Conversely, if one is working on a discrete domain such as $\mathbb{Z}^d$, then the radius parameter has now effectively been bounded from below, $R \gtrsim 1$, and the reverse should occur: the function spaces should get larger as one *decreases* $1/p$. (If the domain is both compact *and* discrete, then it is finite, and on a finite-dimensional space all norms are equivalent.)
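The compact-domain inclusion can be made rigorous by Hölder's inequality (a standard computation, included here for concreteness): if $\Omega$ has finite measure and $1 \le p \le q \le \infty$, then applying Hölder with the conjugate exponents $q/p$ and $(q/p)'$ gives

```latex
\|f\|_{L^p(\Omega)}
  = \Big( \int_\Omega |f|^p \cdot 1 \, dx \Big)^{1/p}
  \le \Big( \int_\Omega |f|^q \, dx \Big)^{1/q} \, |\Omega|^{\frac{1}{p} - \frac{1}{q}}
  = |\Omega|^{\frac{1}{p} - \frac{1}{q}} \, \|f\|_{L^q(\Omega)},
```

so $L^q(\Omega) \subset L^p(\Omega)$, with the constant $|\Omega|^{1/p - 1/q}$ quantifying the role of the capped radius.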

As mentioned earlier, the uncertainty principle suggests that one has the restriction $RN \gtrsim 1$. From this and (2), we expect to be able to enlarge the function space by trading in the regularity parameter $s$ for the integrability parameter $1/p$, keeping the dimensional quantity $s - \frac{d}{p}$ fixed. This is indeed how Sobolev embedding works. Note that in some cases one runs out of regularity before $p$ goes all the way to infinity (thus ending up at an $L^q$ space for some finite $q$), while in other cases $p$ hits infinity first. In the latter case, one can embed the Sobolev space into a Hölder space such as $C^{0,\alpha}$.
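For instance (two standard embeddings, given here as illustrations of the numerology rather than taken from the diagram above), in $d = 3$ one can trade one derivative for integrability down to $L^6$, while for $p > d$ one instead lands in a Hölder space; in both cases the quantity $s - d/p$ is conserved:

```latex
\dot W^{1,2}(\mathbb{R}^3) \subset L^6(\mathbb{R}^3):
  \quad 1 - \tfrac{3}{2} \;=\; 0 - \tfrac{3}{6} \;=\; -\tfrac{1}{2};
\qquad
W^{1,p}(\mathbb{R}^d) \subset C^{0,\alpha}(\mathbb{R}^d), \ \alpha = 1 - \tfrac{d}{p} \ (p > d):
  \quad 1 - \tfrac{d}{p} \;=\; \alpha - \tfrac{d}{\infty}.
```

The first is the endpoint Sobolev embedding; the second is Morrey's inequality, the case where $p$ "hits infinity first" and the leftover regularity becomes a Hölder exponent.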

On continuous domains, one can send the frequency $N$ off to infinity, keeping the amplitude $A$ and radius $R$ fixed. From this and (2) we see that norms with a lower regularity $s$ can never hope to control norms with a higher regularity $s' > s$, no matter what one does with the integrability parameter. Note however that in discrete settings this obstruction disappears; when working on, say, $\mathbb{Z}^d$, the frequency parameter $N$ is effectively bounded, and in fact one can gain as much regularity as one wishes for free, so there is no distinction between a Lebesgue space $\ell^p(\mathbb{Z}^d)$ and its Sobolev counterparts in such a setting.
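A quick numerical illustration of the discrete phenomena (my own toy check, with ad hoc sample data, not part of the original discussion): on $\mathbb{Z}$ the $\ell^p$ norms of a fixed sequence decrease as $p$ increases (so the spaces grow as $1/p$ decreases), and the discrete derivative is bounded on every $\ell^p$ by the triangle inequality, so no regularity is lost.

```python
# Toy check: l^p norms on the discrete domain Z (finitely supported sequences).

def lp_norm(x, p):
    """l^p norm of a finitely supported sequence; p = float('inf') allowed."""
    if p == float('inf'):
        return max(abs(t) for t in x)
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def discrete_derivative(x):
    """Forward difference (Dx)_n = x_{n+1} - x_n, padding with 0 at the end."""
    return [b - a for a, b in zip(x, x[1:] + [0.0])]

x = [0.3, -1.2, 0.7, 2.5, -0.4, 0.05]   # arbitrary sample sequence

# l^p inclusions: norms decrease as p increases, so l^p is contained in l^q for p <= q.
norms = [lp_norm(x, p) for p in (1, 2, 4, float('inf'))]
assert all(a >= b for a, b in zip(norms, norms[1:]))

# "Free regularity": the discrete derivative is bounded on every l^p
# (||Dx||_p <= 2 ||x||_p by the triangle inequality).
for p in (1, 2, float('inf')):
    assert lp_norm(discrete_derivative(x), p) <= 2 * lp_norm(x, p) + 1e-12
```

The same assertions fail dramatically on continuous domains, where differentiation is unbounded on every $L^p$.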

When interpolating between two spaces (using either the real or complex interpolation method), the interpolated space usually has regularity and integrability exponents on the line segment between the corresponding exponents of the endpoint spaces. (This can be heuristically justified from the formula (2) by thinking about how the real or complex interpolation methods actually work.) Typically, one can control the norm of the interpolated space by the geometric mean of the endpoint norms that is indicated by this line segment; again, this is plausible from looking at (2).
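In the Lebesgue scale, this geometric-mean control is a one-line consequence of Hölder's inequality (a standard computation, spelled out here): if $\frac{1}{p} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1}$ with $0 < \theta < 1$, then

```latex
\|f\|_{L^p}^p = \int |f|^{(1-\theta) p} \, |f|^{\theta p}
  \le \Big( \int |f|^{p_0} \Big)^{\frac{(1-\theta) p}{p_0}}
      \Big( \int |f|^{p_1} \Big)^{\frac{\theta p}{p_1}},
\qquad\text{i.e.}\qquad
\|f\|_{L^p} \le \|f\|_{L^{p_0}}^{1-\theta} \, \|f\|_{L^{p_1}}^{\theta},
```

where Hölder was applied with the exponents $\frac{p_0}{(1-\theta)p}$ and $\frac{p_1}{\theta p}$ (conjugate precisely because of the defining relation for $p$). This is the log-convexity of the $L^p$ norms along the line segment of the type diagram.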

The space $L^2$ is self-dual. More generally, the dual of a function space will generally have type exponents that are the reflection of the original exponents around those of $L^2$. Consider for instance the dual pairs $(L^1, L^\infty)$, $(L^p, L^{p'})$, or $(\mathcal{H}^1, BMO)$ in the above diagram.

Spaces whose integrability exponent $p$ is larger than $1$ (i.e. which lie to the left of the dotted line) tend to be Banach spaces, while spaces whose integrability exponent is less than $1$ are almost never Banach spaces. (This can be justified by covering a large ball by small balls and considering how (1) would interact with the triangle inequality in this case.) The case $p = 1$ is borderline; some spaces at this level of integrability, such as $L^1$, are Banach spaces, while other spaces, such as $L^{1,\infty}$, are not.
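The failure of the triangle inequality for $p < 1$ can be seen on a two-point example (my own toy illustration, the discrete analogue of the disjointly supported indicators $1_{[0,1/2)}$ and $1_{[1/2,1)}$):

```python
# For p < 1 the "L^p triangle inequality" fails, so ||.||_p is only a quasi-norm.

def lp_quasinorm(x, p):
    """(sum |x_i|^p)^(1/p); a genuine norm only for p >= 1."""
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

f = [1.0, 0.0]   # supported on the first "half"
g = [0.0, 1.0]   # supported on the second "half"
fg = [a + b for a, b in zip(f, g)]

# p = 1/2: ||f + g|| = 2^2 = 4, but ||f|| + ||g|| = 1 + 1 = 2, so the
# triangle inequality fails by a factor of 2.
p = 0.5
assert lp_quasinorm(fg, p) > lp_quasinorm(f, p) + lp_quasinorm(g, p)

# For p >= 1 the triangle inequality is restored.
for p in (1, 2):
    assert lp_quasinorm(fg, p) <= lp_quasinorm(f, p) + lp_quasinorm(g, p) + 1e-12
```

Chopping a large ball into many small balls amplifies this loss, which is the heuristic reason spaces with $p < 1$ fail to be Banach.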

While the regularity $s$ and integrability $p$ are usually the most important exponents in a function space (because amplitude, width, and frequency are usually the most important features of a function in analysis), they do not tell the entire story. One major reason for this is that the modulated bump functions (1), while an important class of test examples of functions, are by no means the only functions that one would wish to study. For instance, one could also consider sums of bump functions (1) at different scales. The behaviour of the function space norms on such sums is often controlled by secondary exponents, such as the second exponent $q$ that arises in Lorentz spaces $L^{p,q}$, Besov spaces $B^s_{p,q}$, or Triebel-Lizorkin spaces $F^s_{p,q}$. For instance, consider the function

$$ f = \sum_{j=1}^{k} 2^{-jd/p} \, \phi(x / 2^j), \qquad (3) $$

where $k$ is a large integer, representing the number of distinct scales present in $f$. Any function space with regularity $0$ and integrability $p$ should assign each summand in (3) a norm of $O(1)$, so the norm of $f$ could be as large as $O(k)$ if one only uses the triangle inequality. The strong $L^p$ norm of $f$ is in fact of size about $k^{1/p}$, but the weak $L^p$ norm, i.e. the $L^{p,\infty}$ norm, of $f$ only has size $O(1)$. More generally, for the Lorentz spaces $L^{p,q}$, $f$ will have a norm of about $k^{1/q}$. Thus we see that such secondary exponents can influence the norm of a function by an amount which is polynomial in the number of scales. In many applications, though, the number of scales is a "logarithmic" quantity and thus of lower order interest when compared against the "polynomial" exponents such as $s$ and $1/p$. So the fine distinctions between, say, strong $L^p$ and weak $L^p$, are only of interest in "critical" situations in which one cannot afford to lose any logarithmic factors (this is for instance the case in much of Calderón-Zygmund theory).
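To see these sizes, take for concreteness $f = \sum_{j=1}^{k} 2^{-jd/p} \phi(x/2^j)$ with $\phi$ a bump adapted to the unit ball, so that each summand has $L^p$ norm comparable to $1$ (this explicit choice is an illustrative reconstruction; any sum of unit-normalised bumps at $k$ separated scales behaves similarly). The summands telescope to give $|f(x)| \sim |x|^{-d/p}$ for $1 \lesssim |x| \lesssim 2^k$, whence

```latex
\int_{\mathbb{R}^d} |f|^p
  \sim \sum_{m=0}^{k} \int_{|x| \sim 2^m} |x|^{-d} \, dx
  \sim \sum_{m=0}^{k} 1 \sim k,
\qquad
|\{ |f| > \lambda \}| \sim \lambda^{-p} \ \ (2^{-kd/p} \lesssim \lambda \lesssim 1),
```

so the strong norm is $\|f\|_{L^p} \sim k^{1/p}$, while $\|f\|_{L^{p,\infty}} = \sup_{\lambda > 0} \lambda \, |\{ |f| > \lambda \}|^{1/p} \sim 1$ uniformly in the number of scales.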

We have cheated somewhat by only working in the high frequency regime. When dealing with inhomogeneous spaces, one often has a different set of exponents for (1) in the low-frequency regime than in the high-frequency regime. In such cases, one sometimes has to use a more complicated type diagram to genuinely model the situation, e.g. by assigning to each space a convex set of type exponents rather than a single exponent, or perhaps having two separate type diagrams, one for the high frequency regime and one for the low frequency regime. Such diagrams can get quite complicated, and will probably not be much use to a beginner in the subject, though in the hands of an expert who knows what he or she is doing, they can still be an effective visual aid.

Now that we have reviewed the foundations of measure theory, let us now put it to work to set up the basic theory of one of the fundamental families of function spaces in analysis, namely the $L^p$ spaces (also known as *Lebesgue spaces*). These spaces serve as important model examples for the general theory of topological and normed vector spaces, which we will discuss a little bit in this lecture and then in much greater detail in later lectures. (See also my previous blog post on function spaces.)

Just as scalar quantities live in the space of real or complex numbers, and vector quantities live in vector spaces, functions $f$ (or other objects closely related to functions, such as measures) live in *function spaces*. Like other spaces in mathematics (e.g. vector spaces, metric spaces, topological spaces, etc.), a function space $V$ is not merely a set of objects (in this case, the objects are functions), but also comes with various important *structures* that allow one to perform some useful *operations* inside these spaces, and from one space to another. For example, function spaces tend to have several (though usually not *all*) of the following types of structures, which are usually related to each other by various compatibility conditions:

- **Vector space structure.** One can often add two functions $f, g$ in a function space $V$, and expect to get another function $f + g$ in that space; similarly, one can multiply a function $f$ in $V$ by a scalar $c$ and get another function $cf$ in $V$. Usually, these operations obey the axioms of a vector space, though it is important to caution that the dimension of a function space is typically infinite. (In some cases, the space of scalars is a more complicated ring than the real or complex field, in which case we need the notion of a module rather than a vector space, but we will not use this more general notion in this course.) Virtually all of the function spaces we shall encounter in this course will be vector spaces. Because the field of scalars is real or complex, vector spaces also come with the notion of *convexity*, which turns out to be crucial in many aspects of analysis. As a consequence (and in marked contrast to algebra or number theory), much of the theory in real analysis does not seem to extend to other fields of scalars (in particular, real analysis fails spectacularly in the finite characteristic setting).
- **Algebra structure.** Sometimes (though not always), we also wish to multiply two functions $f, g$ in $V$ and get another function $fg$ in $V$; when combined with the vector space structure and assuming some compatibility conditions (e.g. the distributive law), this makes $V$ an algebra. This multiplication operation is often just pointwise multiplication, but there are other important multiplication operations on function spaces too, such as convolution. (One sometimes sees other algebraic structures than multiplication appear in function spaces, most notably derivations, but again we will not encounter those in this course. Another common algebraic operation for function spaces is *conjugation* or *adjoint*, leading to the notion of a *-algebra.)
- **Norm structure.** We often want to distinguish "large" functions in $V$ from "small" ones, especially in analysis, in which "small" terms in an expression are routinely discarded or deemed to be acceptable errors. One way to do this is to assign a *magnitude* or *norm* $\|f\|_V$ to each function $f$ in $V$ that measures its size. Unlike the situation with scalars, where there is basically a single notion of magnitude, functions have a wide variety of useful notions of size, each measuring a different aspect (or combination of aspects) of the function, such as height, width, oscillation, regularity, decay, and so forth. Typically, each such norm gives rise to a separate function space (although sometimes it is useful to consider a single function space with multiple norms on it). We usually require the norm to be compatible with the vector space structure (and algebra structure, if present), for instance by demanding that the triangle inequality hold.
- **Metric structure.** We also want to tell whether two functions $f, g$ in a function space $V$ are "near together" or "far apart". A typical way to do this is to impose a metric $d: V \times V \to [0, +\infty)$ on the space $V$. If both a norm and a vector space structure are available, there is an obvious way to do this: define the distance between two functions $f, g$ in $V$ to be $d(f, g) := \|f - g\|_V$. (This will be the only type of metric on function spaces encountered in this course. But there are some nonlinear function spaces of importance in nonlinear analysis (e.g. spaces of maps from one manifold to another) which have no vector space structure or norm, but still have a metric.) It is often important to know if the vector space is complete with respect to the given metric; this allows one to take limits of Cauchy sequences, and (with a norm and vector space structure) sum absolutely convergent series, as well as use some useful results from point set topology such as the Baire category theorem. All of these operations are of course vital in analysis. [Compactness would be an even better property than completeness to have, but function spaces unfortunately tend to be non-compact in various rather nasty ways, although there are useful partial substitutes for compactness that are available; see e.g. this blog post of mine.]
- **Topological structure.** It is often important to know when a sequence $f_n$ (or, occasionally, a net) of functions in $V$ "converges" in some sense to a limit $f$ (which, hopefully, is still in $V$); there are often many distinct modes of convergence (e.g. pointwise convergence, uniform convergence, etc.) that one wishes to carefully distinguish from each other. Also, in order to apply various powerful topological theorems (or to justify various formal operations involving limits, suprema, etc.), it is important to know when certain subsets of $V$ enjoy key topological properties (most notably *compactness* and *connectedness*), and to know which operations on $V$ are continuous. For all of this, one needs a topology on $V$. If one already has a metric, then one of course has a topology generated by the open balls of that metric; but there are many important topologies on function spaces in analysis that do not arise from metrics. We also often require the topology to be compatible with the other structures on the function space; for instance, we usually require the vector space operations of addition and scalar multiplication to be continuous. In some cases, the topology on $V$ extends to some natural superspace $W$ of more general functions that contains $V$; in such cases, it is often important to know whether $V$ is closed in $W$, so that limits of sequences in $V$ stay in $V$.
- **Functional structures.** Since numbers are easier to understand and deal with than functions, it is not surprising that we often study functions $f$ in a function space $V$ by first applying some functional $\lambda: V \to \mathbb{R}$ (or $\mathbb{C}$) to identify some key numerical quantity $\lambda(f)$ associated to $f$. *Norms* are of course one important example of a functional; *integration* $f \mapsto \int f$ provides another; and *evaluation* $f \mapsto f(x)$ at a point $x$ provides a third important class. (Note, though, that while evaluation is the fundamental feature of a function in set theory, it is often a quite minor operation in analysis; indeed, in many function spaces, evaluation is not even defined at all, for instance because the functions in the space are only defined almost everywhere!) An *inner product* on $V$ (see below) also provides a large family of useful functionals. It is of particular interest to study functionals that are compatible with the vector space structure (i.e. are linear) and with the topological structure (i.e. are *continuous*); this will give rise to the important notion of duality on function spaces.
- **Inner product structure.** One often would like to pair a function $f$ in a function space $V$ with another object $g$ (which is often, though not always, another function in the same function space $V$) and obtain a number $\langle f, g \rangle$ that typically measures the amount of "interaction" or "correlation" between $f$ and $g$. Typical examples include inner products arising from integration, such as $\langle f, g \rangle := \int f \overline{g}$; integration itself can also be viewed as a pairing, $f \mapsto \langle f, 1 \rangle = \int f$. Of course, we usually require such inner products to be compatible with the other structures present on the space (e.g., to be compatible with the vector space structure, we usually require the inner product to be bilinear or sesquilinear). Inner products, when available, are incredibly useful in understanding the metric and norm geometry of a space, due to such fundamental facts as the Cauchy-Schwarz inequality and the parallelogram law. They also give rise to the important notion of *orthogonality* between functions.
- **Group actions.** We often expect our function spaces to enjoy various *symmetries*; we might wish to rotate, reflect, translate, modulate, or dilate our functions and expect to preserve most of the structure of the space when doing so. In modern mathematics, symmetries are usually encoded by group actions (or actions of other group-like objects, such as semigroups or groupoids; one also often upgrades groups to more structured objects such as Lie groups). As usual, we typically require the group action to preserve the other structures present on the space, e.g. one often restricts attention to group actions that are linear (to preserve the vector space structure), continuous (to preserve topological structure), unitary (to preserve inner product structure), isometric (to preserve metric structure), and so forth. Besides giving us useful symmetries to spend, the presence of such group actions allows one to apply the powerful techniques of representation theory, Fourier analysis, and ergodic theory. However, as this is a foundational real analysis class, we will not discuss these important topics much here (and in fact will not deal with group actions much at all).
- **Order structure.** In some cases, we want to utilise the notion of a function $f$ being "non-negative", or "dominating" another function $g$. One might also want to take the "max" or "supremum" of two or more functions in a function space $V$, or split a function into "positive" and "negative" components. Such order structures interact with the other structures on a space in many useful ways (e.g. via the Stone-Weierstrass theorem). Much like convexity, order structure is specific to the real line and is another reason why much of real analysis breaks down over other fields. (The complex plane is of course an extension of the real line and so is able to exploit the order structure of that line, usually by treating the real and imaginary components separately.)
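Several of these structures can be seen coexisting already in a toy finite-dimensional model, where functions on $[0,1)$ are replaced by their samples at $n$ points (a numerical illustration of my own, not part of the lecture; the normalisation $\langle f, g \rangle := \frac{1}{n} \sum_i f_i g_i$ approximates the integral pairing):

```python
# Toy model: sampled functions on [0,1) carry vector space, algebra, norm,
# metric, and inner product structures simultaneously.
import math

n = 1000
xs = [i / n for i in range(n)]            # sample points of [0, 1)

def sample(fn):
    return [fn(x) for x in xs]

def add(f, g):   return [a + b for a, b in zip(f, g)]          # vector space
def mul(f, g):   return [a * b for a, b in zip(f, g)]          # (pointwise) algebra
def inner(f, g): return sum(a * b for a, b in zip(f, g)) / n   # <f, g> ~ integral of f*g
def norm(f):     return math.sqrt(inner(f, f))                 # L^2-type norm
def dist(f, g):  return norm(add(f, [-b for b in g]))          # metric d(f,g) = ||f - g||

f = sample(lambda x: math.sin(2 * math.pi * x))
g = sample(lambda x: math.cos(2 * math.pi * x))

assert dist(f, g) <= norm(f) + norm(g) + 1e-9      # triangle inequality
assert abs(inner(f, g)) < 1e-9                     # sin and cos are orthogonal
assert abs(norm(f) - 1 / math.sqrt(2)) < 1e-3      # ||sin(2 pi x)||_2 = 1/sqrt(2)
```

Of course, the analytic subtleties discussed above (completeness, distinct modes of convergence, functionals undefined at points) only appear in the genuinely infinite-dimensional setting.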

There are of course many ways to combine various flavours of these structures together, and there are entire subfields of mathematics that are devoted to studying particularly common and useful categories of such combinations (e.g. topological vector spaces, normed vector spaces, Banach spaces, Banach algebras, von Neumann algebras, $C^*$-algebras, Fréchet spaces, Hilbert spaces, group algebras, etc.). The study of these sorts of spaces is known collectively as *functional analysis*. We will study some (but certainly not all) of these combinations in an abstract and general setting later in this course, but to begin with we will focus on the $L^p$ spaces, which are very good model examples for many of the above general classes of spaces, and are also of importance in many applications of analysis (such as probability or PDE).
