On Thursday, Charlie Fefferman continued his lecture series on interpolation of functions. Here, he stated the main technical theorem about bundles that underlies all the results, answering the “cliffhanger” question from the last lecture, and broadly outlined the proof, except for a major technical wrinkle about “Whitney convexity” which he will discuss on Friday. Last time, Charlie introduced the notion of a bundle ${\mathcal H} = (H_x)_{x \in E}$ on a compact set $E \subset {\bf R}^n$, as a collection of subsets $H_x$ of the ring ${\mathcal R}_x$ of Taylor polynomials up to some order $m$ based at $x$, such that each $H_x$ is either empty, or is the coset of an ideal in ${\mathcal R}_x$. A section of that bundle is then a function $F \in C^m({\bf R}^n)$ such that the jet $J_x(F)$ of $F$ at $x$ (i.e. the Taylor expansion of $F$ at $x$ up to order $m$) lies in $H_x$ for every $x \in E$. The set of all sections $F$ of a bundle ${\mathcal H}$ will then be denoted $\Gamma({\mathcal H})$. We are interested in the following basic questions, given $m$, $n$, $E$, and ${\mathcal H}$:
- When is $\Gamma({\mathcal H})$ non-empty?
- What can we say about the jets $J_x(F)$ of sections $F$ of ${\mathcal H}$? We know they lie in $H_x$, but can they take on every value in that space?
- What is the best (i.e. smallest) value of $\|F\|_{C^m({\bf R}^n)}$, as $F$ ranges over sections of ${\mathcal H}$?
As mentioned earlier, this problem generalises the problem of how to extend a function $f$ on $E$ (possibly with some derivatives prescribed) to a function $F$ on all of ${\bf R}^n$; the bundle describes all the constraints that $F$ has to satisfy. Later on, we will also discuss a more general version of these results, in which the jet $J_x(F)$ is supposed to lie in a convex set, rather than a coset of an ideal. As stated, this question is too general (it includes as a special case the existence problem for arbitrary linear partial differential equations with linear boundary conditions, which is clearly hopeless to solve in complete generality), but Charlie mentioned that there is a special additional condition, which he called Whitney convexity, which makes everything solvable, and which ends up being key to the whole argument.
As mentioned in the first lecture, if $E$ is a finite set, then the first two questions here are trivial, and the last one is answered by the finiteness theorem. However, there is a compactness problem when moving from the finite world to the infinite world, basically because the uniform limit of functions bounded in $C^m$ need not lie in $C^m$: consider for instance how the one-dimensional functions $f_\varepsilon(x) := \sqrt{x^2 + \varepsilon^2}$, which are bounded in $C^1$ uniformly in $\varepsilon$, converge to the merely Lipschitz function $|x|$ as $\varepsilon \to 0$. However, as mentioned before, no such difficulty arises when one works in more compact spaces such as $C^{m,1}({\bf R}^n)$ (in which the $m^{th}$ derivatives are Lipschitz): in that case, the answer to all the above problems is that the task of finding a section on an infinite set $E$ is equivalent to that of finding sections on each finite subset $S$ of $E$ of a bounded size $O(1)$, with bounds on the resulting extensions that are uniform in $S$, and that the norm of the best extension on the whole set $E$ is comparable to the supremum of the norms of the best extensions on each of the sets $S$. A little more generally, one can replace the space $C^{m,1}({\bf R}^n)$ with spaces such as $C^{m,\omega}({\bf R}^n)$, where $\omega$ is a modulus of continuity; this is the space of functions $F$ whose $m^{th}$ derivative $\nabla^m F$ is not only continuous, but is continuous with modulus $\omega$, in the sense that $|\nabla^m F(x) - \nabla^m F(y)| \leq M \omega(|x-y|)$ for all $x, y$ and some finite $M$. (This generalisation will be important later.)
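To make the compactness failure concrete, here is a quick numerical check (my illustration, not code from the lecture) of the standard example: the functions $f_\varepsilon(x) := \sqrt{x^2+\varepsilon^2}$ stay within $\varepsilon$ of $|x|$ while their derivatives stay bounded by $1$, uniformly in $\varepsilon$.

```python
import math

def f(x, eps):
    # smooth approximant to |x|
    return math.sqrt(x * x + eps * eps)

def fprime(x, eps):
    # exact derivative of f; bounded by 1 in absolute value
    return x / math.sqrt(x * x + eps * eps)

xs = [i / 1000.0 for i in range(-1000, 1001)]
for eps in (0.1, 0.01, 0.001):
    sup_dist = max(abs(f(x, eps) - abs(x)) for x in xs)
    sup_deriv = max(abs(fprime(x, eps)) for x in xs)
    # uniform convergence to |x| ...
    assert sup_dist <= eps
    # ... with C^1 norms bounded uniformly in eps
    assert sup_deriv <= 1.0
```

The limit $|x|$ fails to be differentiable at the origin, so the uniform $C^1$ bound does not survive the limit.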
Anyway, let us return to questions 1-3 above. There is an obvious obstruction to solvability of Q1: if the bundle has any empty fibre, thus $H_x = \emptyset$ for at least one $x \in E$, then clearly the bundle admits no sections. Is this the only obstruction – or in other words, does solvability at each point imply global solvability? The answer is rather clearly no, even in the easy case $m=0$ and $n=1$. For instance, take $E := [-1,1]$, let the fibre $H_0$ at the origin be the set of all constant polynomials, and for each $x \neq 0$ let $H_x$ consist of a single constant polynomial, namely $\mathrm{sgn}(x)$. Finding a section of this bundle then amounts to finding a continuous (i.e. $C^0$) function $F$ with $F(x) = \mathrm{sgn}(x)$ for $x \neq 0$, and $F(0)$ apparently unconstrained. Each fibre is non-empty, and so the problem is solvable pointwise. But it is not solvable globally. This is because the constraints for $x > 0$, together with the qualitative requirement that $F$ be continuous, force a new constraint on $F(0)$, namely that $F(0) = +1$. On the other hand, the constraints coming from $x < 0$ force $F(0) = -1$. Together they force $F(0)$ to take values in the empty set, which is absurd.
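The incompatibility can be displayed numerically (a toy illustration of the argument above, not from the lecture): sampling the prescribed values along sequences approaching the origin from either side yields two different forced values for $F(0)$.

```python
def prescribed(x):
    # the fibre at x != 0 is the single constant sgn(x)
    return 1.0 if x > 0 else -1.0

# sequences in E approaching 0 from the right and from the left
right = [1.0 / j for j in range(1, 100)]
left = [-1.0 / j for j in range(1, 100)]

# the prescribed values are constant along each sequence, so continuity
# of F would force F(0) to equal the common value of each tail
limit_from_right = prescribed(right[-1])
limit_from_left = prescribed(left[-1])

assert limit_from_right == 1.0
assert limit_from_left == -1.0
# the two one-sided limits disagree, so no continuous section exists
assert limit_from_right != limit_from_left
```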
What has gone on here is that while the original bundle looked prima facie like it could support sections, the qualitative nature of the solution space showed that the constraints in that bundle actually forced sections to lie in a smaller bundle (which, in the above simple example, is the same as the original bundle except that the fibre at 0 is replaced with the empty set). The smaller bundle did not admit sections, and so the original bundle did not either.
It turns out that there is a natural way to generalise the idea of “using continuity to deduce new constraints from old” to the $C^m$ problem and not just the $C^0$ problem, known as Glaeser refinement. To explain it, let us give a more sophisticated example, now in the category of $C^1$ rather than $C^0$. Let $\phi: [-1,1] \to {\bf R}$ be a continuous function with $|\phi(x)| \leq |x|$ for all $x$, and with $\phi(x)/x$ oscillating infinitely often between -1 and 1 as $x \to 0$. (A simple example of such a function is $\phi(x) := x \sin(1/x)$, with $\phi(0) := 0$.) Let $E := \{ (x, \phi(x)): 0 < |x| \leq 1 \} \cup \{(0,0)\}$ be the graph of $\phi$, with the origin adjoined. Consider the problem of extending a given function $f: E \to {\bf R}$ in a $C^1$ manner to a function $F$ defined on the entire plane. At any point on $E$ which is not the origin, the value of $F$ at this point is prescribed; it is equal to $f$. Differentiating this along $x$, we also see that the tangential derivative of $F$ is prescribed (and may in fact be forced to live in the empty set, if $f$ is not differentiable tangentially), but the normal derivative is not. Thus we see that the jet of $F$ at such a point is constrained to a coset of a one-dimensional ideal of the three-dimensional space of first-order jets at that point, or possibly to the empty set if $f$ is not differentiable.
But what is happening at the origin $(0,0)$? The value of $F(0,0)$ is of course prescribed to equal $f(0,0)$, but what about the first derivatives $F_x(0,0)$ and $F_y(0,0)$? The curve $E$ is not differentiable at $(0,0)$, so one cannot directly differentiate $f$ to recover any of these first derivatives. However, we are requiring $F$ to be not just differentiable, but continuously differentiable. Because of this, we can infer constraints on $F_x(0,0)$ and $F_y(0,0)$ from the constraints on the tangential derivatives as $x$ approaches 0. Indeed, it is not hard to use the infinite oscillation of $\phi$ to conclude that if all of these tangential derivatives are prescribed, then both first derivatives of $F$ at the origin must be prescribed also.
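To spell out the limiting argument at the origin (a sketch of the standard computation; the concrete choice $\phi(x) = x\sin(1/x)$ is an assumption of this illustration): the chain rule expresses the prescribed tangential derivative in terms of $\nabla F$, and the oscillation of $\phi'$ lets one solve for both components in the limit.

```latex
% Chain rule along the curve (x, \phi(x)), with \phi(x) = x \sin(1/x):
\frac{d}{dx} F(x,\phi(x))
  = F_x(x,\phi(x)) + \phi'(x)\, F_y(x,\phi(x)),
\qquad
\phi'(x) = \sin(1/x) - \tfrac{1}{x}\cos(1/x).

% Along x_j := 1/(\pi j) we have \phi'(x_j) = -(-1)^j \pi j, which is
% unbounded, so dividing the (prescribed) tangential derivative by
% \phi'(x_j) and using the continuity of F_x, F_y gives
F_y(0,0) = \lim_{j\to\infty} \frac{1}{\phi'(x_j)}
  \left.\frac{d}{dx} F(x,\phi(x))\right|_{x=x_j}.

% Along x'_j := 1/(\pi(j+1/2)) we have \phi'(x'_j) = (-1)^j, so
F_x(0,0) = \lim_{j\to\infty} \left(
  \left.\frac{d}{dx} F(x,\phi(x))\right|_{x=x'_j} - (-1)^j F_y(0,0) \right).
```

In particular, once all the tangential derivatives near the origin are prescribed, both $F_x(0,0)$ and $F_y(0,0)$ are forced, which is exactly the new constraint produced by the limiting process.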
One can even concoct iterated examples in which the continuity is used once to deduce new constraints from old, and then used again to deduce even more constraints from the ones just established. Consider for instance the compact set $E := \{(0,0)\} \cup \bigcup_{j=1}^\infty E_j$, where each $E_j$ is a small, increasingly flattened copy of the graph of $\phi$, translated so that its singular point sits at $p_j := (1/j, 0)$ (for instance, one can take $E_j$ to be the curve $y = 2^{-j} \phi(x - 1/j)$ for $|x - 1/j| \leq 1/j^2$, say), and $\phi$ is the function defined earlier. ($E$ is thus a collection of infinitely oscillating curves which become increasingly horizontal as they accumulate at the origin.) If one prescribes a function $f: E \to {\bf R}$ and tries to extend it to a $C^1$ function $F$ on the entire plane, then to begin with there are no constraints on the first derivatives of $F$ at the origin, or at the accumulation points $p_j$, though as before one has constraints on tangential first derivatives of $F$ on the rest of $E$. But by using the continuously differentiable nature of $F$ to take limits, one can infer that both first derivatives of $F$ are prescribed at each of the points $p_j$, and that the horizontal derivative $F_x(0,0)$ is also prescribed. The vertical derivative $F_y(0,0)$ is not prescribed by such a limiting process directly, but can instead be prescribed by an iteration of the limiting process, using the control on the $F_y(p_j)$ obtained from the preceding limiting process.
Now we formalise the most general version of this “limiting process” for general $m$. The key observation is the following: if $F \in C^m({\bf R}^n)$, $x_0 \in {\bf R}^n$, and $\varepsilon > 0$, then for all $x$ sufficiently close to $x_0$, the jets $J_x(F)$ are all close to each other in the sense that

$\displaystyle |\partial^\alpha ( J_x(F) - J_{x'}(F) )(x')| \leq \varepsilon |x-x'|^{m-|\alpha|}$

for all $x, x'$ sufficiently close to $x_0$ and all multi-indices $\alpha$ with $|\alpha| \leq m$. (This fact is essentially just Taylor’s theorem with remainder.)
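As a sanity check on this inequality (a numerical illustration, not from the lecture; the window size $\delta = \varepsilon/2$ is an ad hoc choice that works for this example), take $F = \sin$ in one dimension with $m = 2$ and compare both sides for all pairs of sample points near a base point $x_0$:

```python
import math

def jet_derivs_at(x, t):
    # derivatives of order 0, 1, 2 at the point t of the order-2 Taylor jet
    # J_x(sin), i.e. of h -> sin(x) + cos(x) h - sin(x) h^2 / 2 with h = t - x
    h = t - x
    return (math.sin(x) + math.cos(x) * h - 0.5 * math.sin(x) * h * h,
            math.cos(x) - math.sin(x) * h,
            -math.sin(x))

m = 2
eps = 0.01
x0 = 0.3
delta = eps / 2                      # window shrinking with eps
pts = [x0 + delta * (k / 10.0 - 1.0) for k in range(21)]
for x in pts:
    for xp in pts:
        jx, jxp = jet_derivs_at(x, xp), jet_derivs_at(xp, xp)
        for j in range(m + 1):
            # |d^j (J_x F - J_{x'} F)(x')| <= eps |x - x'|^{m-j}
            assert abs(jx[j] - jxp[j]) <= eps * abs(x - xp) ** (m - j) + 1e-15
```

Shrinking $\varepsilon$ requires shrinking the window accordingly, matching the order of quantifiers in the statement.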
From the above observation, we conclude that if $F$ is a section of a bundle ${\mathcal H} = (H_x)_{x \in E}$, $x_0 \in E$, and $k \geq 1$ is an integer, then the jet $P_0 := J_{x_0}(F) \in H_{x_0}$ must have the following property: for every $\varepsilon > 0$ there must exist a neighbourhood of $x_0$ such that for all $x_1,\ldots,x_k \in E$ in this neighbourhood, there exist polynomials $P_i \in H_{x_i}$ for $1 \leq i \leq k$ such that

$\displaystyle |\partial^\alpha (P_i - P_j)(x_j)| \leq \varepsilon |x_i - x_j|^{m-|\alpha|}$

for all $0 \leq i,j \leq k$ and all $|\alpha| \leq m$. Let us say that $P_0 \in H_{x_0}$ is a limit polynomial of ${\mathcal H}$ at $x_0$ if it has the above property (with $k$ set equal to some sufficiently large integer depending only on $n$ and $m$). We then define the Glaeser refinement ${\mathcal H}' = (H'_x)_{x \in E}$ of ${\mathcal H}$ by declaring each $H'_{x_0}$ to be the set of all $P_0 \in H_{x_0}$ which are limit polynomials. One easily checks that ${\mathcal H}'$ is a sub-bundle of ${\mathcal H}$, and the above discussion shows that every section of ${\mathcal H}$ must in fact be a section of ${\mathcal H}'$, thus $\Gamma({\mathcal H}) = \Gamma({\mathcal H}')$. Note that Glaeser refinement is quite a constructive process; it is possible to describe it using only a finite number of algebraic operations, together with the limit operation. It can certainly produce bundles with empty fibres, even when the original bundle had no empty fibres; this means that the Glaeser process has discovered that the original constraints on $F$ were incompatible with $F$ being $m$-times continuously differentiable.
One then quickly sees that to answer any of questions 1,2,3 for a bundle, it suffices to do so for its Glaeser refinement, which can be viewed as the same bundle but with some additional constraints that have been “deduced” from the original constraints by limiting arguments. It is then natural to iterate this process, taking Glaeser refinements of Glaeser refinements (as was implicitly done in the examples discussed before), and see whether the bundle stabilises or not.
In the case of m=0 (i.e. for continuous extensions), this process stabilises immediately; a Glaeser refinement is its own Glaeser refinement, for much the same reason that the closure of a set is its own closure. For $m \geq 1$, the process can be non-trivial for a little longer, as we have already seen, but it is actually not hard to show that it still stabilises after O(1) many steps. (If you divide the bundle into strata, depending on the dimension of the fibre, the stratum with the highest-dimensional fibres will stop generating new constraints fairly quickly, then the next highest stratum will stop doing so soon after that, and so forth.) Thus one quickly reduces matters to Glaeser-stable bundles – bundles which are their own Glaeser refinement.
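For $m=0$ the limit-polynomial condition degenerates to plain closeness of values, so the refinement-and-iterate procedure can be sketched on a discretised version of the earlier $\mathrm{sgn}(x)$ example (an illustration under simplifying assumptions: a truncated point set stands in for $E$, the $\varepsilon$ and neighbourhood quantifiers are approximated by finite lists, and the candidate neighbourhood radii are taken to be the distances to the other points):

```python
# E: the origin together with points +-1/j accumulating at it (truncated)
N = 50
E = [0.0] + [1.0 / j for j in range(1, N + 1)] + [-1.0 / j for j in range(1, N + 1)]

# fibre at 0 is unconstrained (a grid of candidate constants);
# elsewhere the fibre is the single constant sgn(x)
fibres = {x: ({-1.0, -0.5, 0.0, 0.5, 1.0} if x == 0.0
              else ({1.0} if x > 0 else {-1.0})) for x in E}

def survives(v, x0, fibres, eps):
    # "there exists a neighbourhood of x0 such that every point of E in it
    #  carries an admissible value within eps of v" (for m = 0 the
    #  Taylor-type compatibility condition is just closeness of constants)
    dists = sorted(abs(x - x0) for x in E if x != x0)
    for r in dists:
        ball = [x for x in E if x != x0 and abs(x - x0) <= r]
        if all(any(abs(w - v) <= eps for w in fibres[x]) for x in ball):
            return True
    return False

def refine(fibres):
    # keep only the limit values of each fibre (eps -> 0 is approximated
    # by a fixed decreasing list)
    return {x0: {v for v in fibres[x0]
                 if all(survives(v, x0, fibres, eps) for eps in (1.0, 0.5, 0.1))}
            for x0 in fibres}

# iterate refinement until the bundle is Glaeser-stable
while True:
    refined = refine(fibres)
    if refined == fibres:
        break
    fibres = refined

assert fibres[0.0] == set()   # empty fibre: no continuous section exists
assert fibres[1.0] == {1.0}   # fibres away from the singularity survive
```

Running this, the fibre at the origin is emptied in one refinement step and the bundle is then stable, reproducing the conclusion that no continuous section exists.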
With this reduction, Charlie could now finally give the main theorem, which more or less completely answers Q1, Q2, Q3 above.
Theorem. Fix $m$, $n$, $E$, and let ${\mathcal H} = (H_x)_{x \in E}$ be a Glaeser-stable bundle.
- ${\mathcal H}$ admits sections if and only if every fibre $H_x$ of ${\mathcal H}$ is non-empty.
- If ${\mathcal H}$ admits sections, and $x \in E$, then every element of $H_x$ can arise as the jet $J_x(F)$ of a section $F$ of ${\mathcal H}$.
- If ${\mathcal H}$ admits sections, then there exists a finite subset $S$ of $E$ of cardinality $O(1)$ such that

$\displaystyle \inf \{ \|F\|_{C^m({\bf R}^n)}: F \in \Gamma({\mathcal H}) \} \sim \inf \{ \|F\|_{C^m({\bf R}^n)}: F \in \Gamma({\mathcal H}|_S) \},$

where ${\mathcal H}|_S := (H_x)_{x \in S}$ is the restriction of the bundle to $S$, and $X \sim Y$ denotes comparability up to multiplicative constants depending only on $m$ and $n$.
Informally, part 1 of this theorem asserts that after all deductions of Glaeser type have been made, there are no further obstructions to global solvability other than the pointwise ones. Part 2 is in a similar spirit, and says that after all deductions of Glaeser type have been made, no further constraints on the derivatives of the function $F$ can be deduced. The third is a finiteness theorem: in order to compute the quality of the best section of a bundle (up to multiplicative constants), it suffices to do so on a set of bounded cardinality. This can in fact be done in an explicitly computable manner: the quantity $\inf \{ \|F\|_{C^m({\bf R}^n)}: F \in \Gamma({\mathcal H}|_S) \}$ is, up to multiplicative constants, equal to the best (i.e. smallest) constant $M$ for which one can find polynomials $P_x \in H_x$ for all $x \in S$, with coefficients at most $M$ (when expanded around $x$), and such that

$\displaystyle |\partial^\alpha (P_x - P_{x'})(x')| \leq M |x-x'|^{m-|\alpha|}$

for all $x, x' \in S$ and all $|\alpha| \leq m$. This $M$ can be computed by linear programming methods. Thus the above result, combined with the computability of the Glaeser refinement process, gives about as computable and constructive a resolution to Q1, Q2, Q3 as one could hope for.
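To make the reduction concrete in the simplest non-trivial setting (an illustration with assumed toy data, not from the lecture): take $m = n = 1$ with only the values $f(x_i)$ prescribed, so that each fibre is a coset in which the derivative $d_i$ at $x_i$ is free. The conditions above become $|f(x_i)| \leq M$, $|d_i| \leq M$, $|d_i - d_j| \leq M$, and $|f(x_i) + d_i(x_j - x_i) - f(x_j)| \leq M|x_i - x_j|$, which is a linear program in $(d_i, M)$. The sketch below finds the best $M$ by bisection, with feasibility checked by brute force over a grid rather than a genuine LP solver:

```python
import itertools

def feasible(M, xs, vs, grid_steps=20):
    # check (approximately) whether derivatives d_i in [-M, M] exist making
    # all the Whitney-type compatibility conditions hold with constant M
    if any(abs(v) > M for v in vs):
        return False
    cands = [-M + 2.0 * M * k / grid_steps for k in range(grid_steps + 1)]
    pairs = [(i, j) for i in range(len(xs)) for j in range(len(xs)) if i != j]
    for d in itertools.product(cands, repeat=len(xs)):
        if all(abs(d[i] - d[j]) <= M and
               abs(vs[i] + d[i] * (xs[j] - xs[i]) - vs[j]) <= M * abs(xs[i] - xs[j])
               for (i, j) in pairs):
            return True
    return False

# a toy finite set S with prescribed values
xs = [0.0, 1.0, 2.0]
vs = [0.0, 1.0, 0.0]

lo, hi = 0.0, 10.0
for _ in range(24):              # bisection on the best constant M
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if feasible(mid, xs, vs) else (mid, hi)
print(round(hi, 2))              # best M for this data; prints 1.0
```

Here the answer is forced by the middle data point: $|f(1)| = 1$ requires $M \geq 1$, and $M = 1$ is attained by taking all derivatives equal to zero.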
(The bound on the cardinality of $S$ is actually rather explicit. In the $m=1$ case, one can take $S$ to have cardinality at most $3 \cdot 2^{n-1}$, and this is best possible. For higher $m$, the best known bound is equal to an absolute constant times $2^{\dim P}$, where $P$ denotes the space of polynomials of degree at most $m$ in $n$ variables.)
Charlie then briefly discussed how one proves the main theorem. The idea, which already appeared earlier in this post, is to stratify the set $E$ using the dimensions of the fibres (though Charlie indicated that the precise stratification used was slightly more complicated than this), and to induct on the number of non-trivial strata. The case of zero strata was rather vacuous, so suppose that there was at least one non-trivial stratum. One then looks at the stratum with the lowest-dimensional fibres; from Glaeser stability, this is a compact subset $E'$ of $E$. Furthermore, the bundle over $E'$ must exhibit a kind of “continuity” (since, by Glaeser stability, every point in every fibre is a limit polynomial of the nearby fibres). From continuity and compactness one can get “uniform continuity”, which means that on $E'$ one can replace $C^m$ by $C^{m,\omega}$ for some modulus of continuity $\omega$. By using the finiteness theorem for this modulus of continuity (which, as discussed before, extends without difficulty to infinite domains) we can then get everything we want regarding sections – but only on $E'$ instead of $E$. To get sections on all of $E$, we need to modify the sections just obtained, as they obey the right constraints on $E'$ but not necessarily on the rest of $E$. But this can be done by taking a Whitney decomposition of ${\bf R}^n \setminus E'$ into various cubes; in other words, a decomposition of the complement of $E'$ into a collection of finitely overlapping cubes, where the distance of each cube to $E'$ is comparable to the diameter of that cube. On each cube, there is at least one fewer stratum than in the general case, so by the inductive hypothesis we can create sections on each cube, and by use of smooth partitions of unity we can glue things together. However, if one wants to sum up and retain control all the way up to the set $E'$, it turns out that one needs quantitative control on all the pieces being glued together.
With our current setup, this is simply not available; the jets of the sections F are constrained to live in cosets of ideals, but otherwise do not obey any bounds.
To fix this, one in fact has to start all over again, working in a more general framework, in which the cosets of ideals are replaced by translates of symmetric convex bodies. As noted before, this framework is far too general to have any hope of a reasonable solution, but it turns out that all the above arguments can be carried over to this case provided that the convex bodies obey the additional property of Whitney convexity. But the time for the lecture was up, and so Charlie left off with another cliffhanger regarding the actual definition of Whitney convexity and how it affects the above arguments.
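As a coda to the proof sketch, the Whitney decomposition used in the gluing step can be illustrated in one dimension (a standard dyadic construction; the cutoff scale and the factor 4 in the comparability below are ad hoc choices for this sketch):

```python
def dist_to(Ep, a, b):
    # distance from the interval [a, b] to the finite set Ep
    return min(max(a - x, x - b, 0.0) for x in Ep)

def whitney(Ep, a, b, out):
    # dyadically subdivide until each retained interval I satisfies
    # diam(I) <= dist(I, Ep); pieces touching Ep are discarded once
    # they fall below the cutoff scale
    d = dist_to(Ep, a, b)
    if d >= b - a:              # far enough from Ep: keep this interval
        out.append((a, b))
    elif b - a > 1e-6:          # otherwise subdivide
        m = 0.5 * (a + b)
        whitney(Ep, a, m, out)
        whitney(Ep, m, b, out)
    return out

Ep = [0.0]                      # take E' to be the origin
cubes = whitney(Ep, -1.0, 1.0, [])
for (a, b) in cubes:            # distance comparable to diameter throughout
    assert (b - a) <= dist_to(Ep, a, b) <= 4 * (b - a)
```

In the actual argument the decomposition is of ${\bf R}^n \setminus E'$ into cubes, with slightly enlarged, finitely overlapping cubes supporting the smooth partition of unity.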