
Note: this post is not required reading for this course, or for the sequel course in the winter quarter.

In Notes 2, we reviewed the classical construction of Leray of global weak solutions to the Navier-Stokes equations. We did not quite follow Leray’s original proof, in that the notes relied more heavily on the machinery of Littlewood-Paley projections, which have become increasingly common tools in modern PDE. On the other hand, we did use the same “exploiting compactness to pass to weakly convergent subsequence” strategy that is the standard one in the PDE literature used to construct weak solutions.
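The compactness step in this strategy rests on a standard fact, recorded here informally as a reminder (the notation $X$, $u_n$ is generic and not tied to the notes): the sequential form of the Banach–Alaoglu theorem.

```latex
% Sequential Banach--Alaoglu: if $X$ is a separable normed space and
% $(u_n)$ is a bounded sequence in the dual space $X^*$, i.e.
% $\sup_n \|u_n\|_{X^*} \le C < \infty$, then some subsequence of
% $(u_n)$ converges in the weak-* sense:
\exists\, n_1 < n_2 < \cdots \text{ and } u \in X^* \text{ such that }
\lim_{j \to \infty} \langle u_{n_j}, x \rangle = \langle u, x \rangle
\quad \text{for every } x \in X.
```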

As I discussed in a previous post, the manipulation of sequences and their limits is analogous to a “cheap” version of nonstandard analysis in which one uses the Fréchet filter rather than an ultrafilter to construct the nonstandard universe. (The manipulation of generalised functions of Colombeau type can also be comfortably interpreted within this sort of cheap nonstandard analysis.) Augmenting the manipulation of sequences with the right to pass to subsequences whenever convenient is then analogous to a sort of “lazy” nonstandard analysis, in which the implied ultrafilter is never actually constructed as a “completed object”, but is instead lazily evaluated, in the sense that whenever membership of a given subsequence of the natural numbers in the ultrafilter needs to be determined, one either passes to that subsequence (thus placing it in the ultrafilter) or to its complement (placing it out of the ultrafilter). This process can be viewed as the initial portion of the transfinite induction that one usually uses to construct ultrafilters (as discussed using a voting metaphor in this post), except that there is generally no need in any given application to perform the induction for any uncountable ordinal (or indeed for most of the countable ordinals).
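To illustrate this lazy evaluation in a toy case (the example below is my own illustration, not taken from the notes):

```latex
% Toy example: a bounded sequence $(a_n)$ taking values in $\{0,1\}$.
% At least one of the index sets
%   $A_0 = \{ n : a_n = 0 \}, \qquad A_1 = \{ n : a_n = 1 \}$
% is infinite.  Passing to the subsequence indexed by, say, $A_1$
% "votes" $A_1$ into the partially constructed ultrafilter $p$,
% after which the ultralimit along $p$ is fully determined:
\lim_{n \to p} a_n = 1.
% No completed ultrafilter is ever needed; only the finitely many
% membership decisions actually demanded by the argument are made.
```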

On the other hand, it is also possible to work directly in the orthodox framework of nonstandard analysis when constructing weak solutions. This leads to an approach to the subject which is largely equivalent to the usual subsequence-based approach, though there are some minor technical differences (for instance, the subsequence approach occasionally requires one to work with separable function spaces, whereas in the ultrafilter approach the reliance on separability is largely eliminated, particularly if one imposes a strong notion of saturation on the nonstandard universe). The subject acquires a more “algebraic” flavour, as the quintessential analysis operation of taking a limit is replaced with the “standard part” operation, which is an algebra homomorphism. The notion of a sequence is replaced by the distinction between standard and nonstandard objects, and the need to pass to subsequences disappears entirely. Also, the distinction between “bounded sequences” and “convergent sequences” is largely eradicated, particularly when the space in which the sequences range enjoys some compactness properties on bounded sets. Moreover, in this framework, the notorious non-uniqueness features of weak solutions can be “blamed” on the non-uniqueness of the nonstandard extension of the standard universe (as well as on the multiple possible ways to construct nonstandard mollifications of the original standard PDE). However, many of these changes are largely cosmetic; switching from a subsequence-based theory to a nonstandard analysis-based theory does not, for instance, seem to bring one significantly closer to the global regularity problem for Navier-Stokes, but it could have been an alternate path for the historical development and presentation of the subject.
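As a concrete instance of the “algebraic” flavour just mentioned, the standard part map obeys homomorphism identities that have no unconditional analogue for limits of sequences (this is a standard fact of nonstandard analysis, sketched here for finite nonstandard reals):

```latex
% For finite (bounded) nonstandard reals $x, y \in {}^*\mathbb{R}$,
% write $\mathrm{st}(x)$ for the unique standard real infinitesimally
% close to $x$.  Then $\mathrm{st}$ is an algebra homomorphism:
\mathrm{st}(x + y) = \mathrm{st}(x) + \mathrm{st}(y), \qquad
\mathrm{st}(x y) = \mathrm{st}(x)\,\mathrm{st}(y).
% By contrast, for standard bounded sequences $(x_n), (y_n)$, the
% limits $\lim_n x_n$, $\lim_n y_n$ need not exist at all; one must
% first pass to subsequences, which is exactly the step that the
% standard part operation renders unnecessary.
```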

In any case, I would like to present below the fold this nonstandard analysis perspective, quickly translating into this perspective the relevant components of real analysis, functional analysis, and distribution theory that we need, and then use it to re-prove Leray’s theorem on the existence of global weak solutions to Navier-Stokes.


I’m continuing my series of articles for the Princeton Companion to Mathematics ahead of the winter quarter here at UCLA (during which I expect this blog to become dominated by ergodic theory posts) with my article on generalised solutions to PDE. (I have three more PCM articles to release here, but they will have to wait until spring break.) This article ties in to some extent with my previous PCM article on distributions, because distributional solutions are one good example of a “generalised solution” or “weak solution” to a PDE. They are not the only such notion though; one also has variational and stationary solutions, viscosity solutions, penalised solutions, solutions outside of a singular set, and so forth. These notions of generalised solution are necessary when dealing with PDE that can exhibit singularities, shocks, oscillations, or other non-smooth behaviour. Also, in the foundational existence theory for many PDE, it has often been profitable to first construct a fairly weak solution and then use additional arguments to upgrade that solution to a stronger solution (e.g. a “classical” or “smooth” solution), rather than attempt to construct the stronger solution directly. On the other hand, there is a tradeoff between how easy it is to construct a weak solution, and how easy it is to upgrade that solution; solution concepts which are so weak that they cannot be upgraded at all seem to be significantly less useful in the subject, even if (or especially if) existence of such solutions is a near-triviality. [This is one manifestation of the somewhat whimsical “law of conservation of difficulty”: in order to prove any genuinely non-trivial result, some hard work has to be done somewhere. In particular, it is often the case that the behaviour of PDE depends quite sensitively on the exact structure of that PDE (e.g. on the sign of various key terms), and so any result that captures such behaviour must, at some point, exploit that structure in a non-trivial manner; one usually cannot get very far in PDE by relying just on general-purpose theorems that apply to all PDE, regardless of structure.]

The Companion also has a section on the history of mathematics; for instance, here is Leo Corry’s PCM article “The development of the idea of proof”, covering the period from Euclid to Frege. We take for granted nowadays that we have precise, rigorous, and standard frameworks for proving things in set theory, number theory, geometry, analysis, probability, etc., but it is worth remembering that for the majority of the history of mathematics, this was not completely the case; even Euclid’s axiomatic approach to geometry contained some implicit assumptions about topology, order, and sets which were not fully formalised until the work of Hilbert in the modern era. (Even nowadays, there are a few parts of mathematics, such as mathematical quantum field theory, which still do not have a completely satisfactory formalisation, though hopefully the situation will improve in the future.)

[Update, Jan 4: bad link fixed.]
