Title: Use basic examples to calibrate exponents

Motivation: In the more quantitative areas of mathematics, such as analysis and combinatorics, one frequently has to keep track of a large number of exponents in one’s identities, inequalities, and estimates.  For instance, if one is studying a set of N elements, then many of the expressions one encounters will involve some power N^p of N; if one is instead studying a function f on a measure space X, then perhaps it is an L^p norm \|f\|_{L^p(X)} which will appear instead.  The exponent p involved will typically evolve slowly over the course of the argument, as various algebraic or analytic manipulations are applied.  In some cases, the exact value of this exponent is immaterial, but at other times it is crucial to have the correct value of p at hand.   One can (and should) of course carefully go through one’s arguments line by line to work out the exponents correctly, but it is all too easy to make a sign error or other misstep at one of the lines, causing all the exponents on subsequent lines to be incorrect.  However, one can guard against this (and avoid some tedious line-by-line exponent checking) by continually calibrating these exponents at key junctures of the argument, using basic examples of the object of study (sets, functions, graphs, etc.) as test cases.  This is a simple trick, but it lets one avoid many unforced errors with exponents, and also lets one compute more rapidly.

Quick description: When trying to quickly work out what an exponent p in an estimate, identity, or inequality should be without deriving that statement line-by-line, test that statement with a simple example which has non-trivial behaviour with respect to that exponent p, but trivial behaviour with respect to as many other components of that statement as one is able to manage.   The “non-trivial” behaviour should be parametrised by some very large or very small parameter.  By matching the dependence on this parameter on both sides of the estimate, identity, or inequality, one should recover p (or at least a good prediction as to what p should be).
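
For instance (to give a standard illustration of the trick, rather than an example drawn from any particular result): suppose one vaguely recalls an inequality of Cauchy-Schwarz type

\displaystyle \Big|\sum_{i=1}^N a_i\Big| \leq N^p \Big(\sum_{i=1}^N |a_i|^2\Big)^{1/2}

but has forgotten the exponent p.  Testing the inequality with the constant sequence a_1 = \ldots = a_N = 1 (which is non-trivial with respect to the parameter N, but trivial in every other respect), the left-hand side is N and the right-hand side is N^p \cdot N^{1/2}; matching the powers of N then predicts p = 1/2, which is indeed the correct exponent.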

General discussion: The test examples should be as basic as possible; ideally they should have trivial behaviour in all aspects except for one feature that relates to the exponent p that one is trying to calibrate, thus being only “barely” non-trivial.   When the object of study is a function, then (appropriately rescaled, or otherwise modified) bump functions are very typical test objects, as are Dirac masses, constant functions, Gaussians, or other functions that are simple and easy to compute with.  In additive combinatorics, when the object of study is a subset of a group, then subgroups, arithmetic progressions, or random sets are typical test objects.  In graph theory, typical examples of test objects include complete graphs, complete bipartite graphs, and random graphs. And so forth.

This trick is closely related to that of using dimensional analysis to recover exponents; indeed, one can view dimensional analysis as the special case of exponent calibration when using test objects which are non-trivial in one dimensional aspect (e.g. they exist at a single very large or very small length scale) but are otherwise of a trivial or “featureless” nature.   But the calibration trick is more general, as it can involve parameters (such as probabilities, angles, or eccentricities) which are not commonly associated with the physical concept of a dimension.  And personally, I find example-based calibration to be a much more satisfying (and convincing) explanation of an exponent than a calibration arising from formal dimensional analysis.
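
To illustrate the scaling form of the trick (again with a standard example inserted here purely for illustration): suppose one wishes to recover the exponent q in the Sobolev inequality

\displaystyle \|f\|_{L^q({\bf R}^d)} \leq C_{p,d} \|\nabla f\|_{L^p({\bf R}^d)}

for some 1 \leq p < d.  Testing with the rescaled bump functions f_\lambda(x) := f(\lambda x), where f is a fixed bump function, one computes \|f_\lambda\|_{L^q({\bf R}^d)} = \lambda^{-d/q} \|f\|_{L^q({\bf R}^d)} and \|\nabla f_\lambda\|_{L^p({\bf R}^d)} = \lambda^{1-d/p} \|\nabla f\|_{L^p({\bf R}^d)}.  Since the constant C_{p,d} does not depend on \lambda, the powers of \lambda on both sides must match (as \lambda can be taken arbitrarily large or small), forcing -d/q = 1-d/p, i.e. 1/q = 1/p - 1/d, which is the familiar Sobolev exponent.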

When one is trying to calibrate an inequality or estimate, one should try to pick a basic example which one expects to saturate that inequality or estimate, i.e. an example for which the inequality is close to being an equality.  Otherwise, one would only expect to obtain some partial information on the desired exponent p (e.g. a lower bound or an upper bound only).  Knowing the examples that saturate an estimate that one is trying to prove is also useful for several other reasons; for instance, it strongly suggests that any technique which is not efficient when applied to the saturating example is unlikely to be strong enough to prove the estimate in general, thus eliminating fruitless approaches to a problem and (hopefully) refocusing one’s attention on those strategies which actually have a chance of working.
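
Returning to the Cauchy-Schwarz illustration above: the constant sequence a_1 = \ldots = a_N = 1 in fact attains equality when p = 1/2 (both sides equal N), so it pins down the exponent exactly.  By contrast, a “spike” such as a = (1, 0, \ldots, 0) only yields the condition 1 \leq N^p, which merely tells us that p \geq 0; this is the sort of partial information one expects from a test example which is far from saturating the estimate.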

Calibration is best used for the type of quick-and-dirty calculations one uses when trying to rapidly map out an argument that one has roughly worked out already, but without precise details; in particular, I find it especially useful when writing up a rapid prototype.  When the time comes to write out the paper in full detail, then of course one should instead carefully work things out line by line, but if all goes well, the exponents obtained in that process should match up with the preliminary guesses for those exponents obtained by calibration, which adds confidence that no exponent errors have been committed.

Prerequisites: Undergraduate analysis and combinatorics.

Read the rest of this entry »

We now set aside our discussion of the finite time extinction results for Ricci flow with surgery (Theorem 4 from Lecture 2), and turn instead to the main portion of Perelman’s argument, which is to establish the global existence result for Ricci flow with surgery (Theorem 2 from Lecture 2), as well as the discreteness of the surgery times (Theorem 3 from Lecture 2).

As mentioned in Lecture 1, local existence of the Ricci flow is a fairly standard application of nonlinear parabolic theory, once one uses de Turck’s trick to transform Ricci flow into an explicitly parabolic equation. The trouble is, of course, that Ricci flow can and does develop singularities (indeed, we have just spent several lectures showing that singularities must inevitably develop when certain topological hypotheses (e.g. simple connectedness) or geometric hypotheses (e.g. positive scalar curvature) hold). In principle, one can use surgery to remove the most singular parts of the manifold at every singularity time and then restart the Ricci flow, but in order to do this one needs some rather precise control on the geometry and topology of these singular regions. (In particular, there are some hypothetical bad singularity scenarios which cannot be easily removed by surgery, due to topological obstructions; a major difficulty in the Perelman program is to show that such scenarios in fact cannot occur in a Ricci flow.)

In order to analyse these singularities, Hamilton and then Perelman employed the standard nonlinear PDE technique of “blowing up” the singularity using the scaling symmetry, and then exploiting as much “compactness” as is available in order to extract an “asymptotic profile” of that singularity from a sequence of such blowups, which has better properties than the original Ricci flow. [The PDE notion of blowing up a solution around a singularity, by the way, is vaguely analogous to the algebraic geometry notion of blowing up a variety around a singularity, though the two notions are certainly not identical.] A sufficiently good classification of all the possible asymptotic profiles will, in principle, lead to enough structural properties of general singularities of Ricci flow that one can see how to perform surgery in a manner which controls both the geometry and the topology.

However, in order to carry out this program it is necessary to obtain geometric control on the Ricci flow which does not deteriorate when one blows up the solution; in the jargon of nonlinear PDE, we need to obtain bounds on some quantity which is both coercive (it bounds the geometry) and either critical (it is essentially invariant under rescaling) or subcritical (it becomes more powerful when one blows up the solution) with respect to the scaling symmetry. The discovery of controlled quantities for Ricci flow which were simultaneously coercive and critical was Perelman’s first major breakthrough in the subject (previously known controlled quantities were either supercritical or only partially coercive); it made it possible, at least in principle, to analyse general singularities of Ricci flow and thus to begin the surgery program discussed above. (In contrast, the main reason why questions such as Navier-Stokes global regularity are so difficult is that no controlled quantity which is both coercive and critical or subcritical is known.) The mere existence of such a quantity does not by any means establish global existence of Ricci flow with surgery immediately, but it does give one a non-trivial starting point from which one can hope to make progress.
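
To make the scaling terminology concrete, here is a rough sketch of the basic scaling computation (it is standard, and included here purely for orientation).  The Ricci flow equation \partial_t g = -2 \hbox{Ric}(g) enjoys the parabolic scaling symmetry g(t) \mapsto g_\lambda(t) := \lambda^2 g(t/\lambda^2), since the Ricci tensor is unchanged when the metric is multiplied by a constant.  Under this rescaling, distances scale like \lambda, volumes like \lambda^3 (in three dimensions), and curvatures like \lambda^{-2}; blowing up at a singularity corresponds to taking \lambda large (comparable to the square root of the maximal curvature), so that the rescaled flow has curvature of size O(1).  A controlled quantity which is invariant under this rescaling is critical: a bound on it survives the blowup unchanged.  A quantity which scales like a negative power of \lambda is subcritical (the bound improves as \lambda \to \infty), while one which scales like a positive power of \lambda, such as the total volume, is supercritical (the bound becomes useless in the blowup limit).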

Read the rest of this entry »

It is always dangerous to venture an opinion as to why a problem is hard (cf. Clarke’s first law), but I’m going to stick my neck out on this one, because (a) it seems that there has been a lot of effort expended on this problem recently, sometimes perhaps without full awareness of the main difficulties, and (b) I would love to be proved wrong on this opinion :-) .

The global regularity problem for Navier-Stokes is of course a Clay Millennium Prize problem and it would be redundant to describe it again here. I will note, however, that it asks for existence of global smooth solutions to a Cauchy problem for a nonlinear PDE. There are countless other global regularity results of this type for many (but certainly not all) other nonlinear PDE; for instance, global regularity is known for Navier-Stokes in two spatial dimensions rather than three (this result essentially dates all the way back to Leray’s thesis in 1933!). Why is the three-dimensional Navier-Stokes global regularity problem considered so hard, when global regularity for so many other equations is easy, or at least achievable?
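
One way to quantify the difficulty is via the standard scaling heuristic (sketched here only roughly).  The Navier-Stokes equations enjoy the scaling symmetry u(t,x) \mapsto u_\lambda(t,x) := \lambda u(\lambda^2 t, \lambda x) (with the pressure rescaled accordingly), so that a feature of the solution living at a small spatial scale r can be renormalised to unit scale by taking \lambda = r.  The main coercive quantity available, the kinetic energy \frac{1}{2} \int_{{\bf R}^d} |u(t,x)|^2\ dx, transforms under this rescaling by a factor of \lambda^{2-d}.  In two spatial dimensions this factor is 1, so the energy is critical and its bound retains full strength at every scale, which is closely tied to why the two-dimensional problem is tractable; in three dimensions the factor is \lambda^{-1}, which blows up as \lambda \to 0, so the energy bound becomes increasingly weak at fine scales (it is supercritical) and cannot by itself rule out the formation of singularities.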

(For this post, I am only considering the global regularity problem for Navier-Stokes, from a purely mathematical viewpoint, and in the precise formulation given by the Clay Institute; I will not discuss at all the question as to what implications a rigorous solution (either positive or negative) to this problem would have for physics, computational fluid dynamics, or other disciplines, as these are beyond my area of expertise. But if anyone qualified in these fields wants to make a comment along these lines, by all means do so.)

Read the rest of this entry »
