Today I’d like to discuss (part of) a cute and surprising theorem of Fritz John in the area of non-linear wave equations, and specifically for the equation

$\partial_{tt} u - \Delta u = |u|^p$ (1)

where $u: {\Bbb R} \times {\Bbb R}^3 \to {\Bbb R}$ is a scalar function of one time and three spatial dimensions.

The evolution of this type of non-linear wave equation can be viewed as a “race” between the dispersive tendency of the linear wave equation

$\partial_{tt} u - \Delta u = 0$ (2)

and the positive feedback tendencies of the nonlinear ODE

$\partial_{tt} u = |u|^p$. (3)

More precisely, solutions to (2) tend to decay in time as $t \to +\infty$, as can be seen from the presence of the $\frac{1}{t}$ term in the explicit formula

$u(t,x) = \frac{1}{4\pi t} \int_{|y-x|=t} \partial_t u(0,y)\ dS(y) + \partial_t[\frac{1}{4\pi t} \int_{|y-x|=t} u(0,y)\ dS(y)],$ (4)

for such solutions in terms of the initial position $u(0,y)$ and initial velocity $\partial_t u(0,y)$, where $t > 0$, $x \in {\Bbb R}^3$, and dS is the area element of the sphere $\{ y \in {\Bbb R}^3: |y-x|=t \}$. (For this post I will ignore the technical issues regarding how smooth the solution has to be in order for the above formula to be valid.) On the other hand, solutions to (3) tend to blow up in finite time from data with positive initial position and initial velocity, even if this data is very small, as can be seen from the family of solutions

$u_T(t,x) := c (T-t)^{-2/(p-1)}$

for $T > 0$, $0 < t < T$, and $x \in {\Bbb R}^3$, where c is the positive constant $c := (\frac{2(p+1)}{(p-1)^2})^{1/(p-1)}$. For T large, this gives a family of solutions which starts out very small at time zero, but still manages to go to infinity in finite time.
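(As a quick numerical sanity check, not needed for the argument: one can confirm with a centred finite difference that this family indeed solves (3). The sketch below is illustrative only, and the function names are my own.)

```python
# Sanity check (illustration only) that
#   u_T(t) = c (T - t)^(-2/(p-1)),  c = (2(p+1)/(p-1)^2)^(1/(p-1)),
# solves the ODE u'' = |u|^p, by comparing a centred second
# difference of u_T against |u_T|^p.

def u_T(t, p, T):
    c = (2 * (p + 1) / (p - 1) ** 2) ** (1 / (p - 1))
    return c * (T - t) ** (-2 / (p - 1))

def residual(t, p, T, h=1e-4):
    # (u(t-h) - 2u(t) + u(t+h)) / h^2 approximates u''(t) to O(h^2)
    u_tt = (u_T(t - h, p, T) - 2 * u_T(t, p, T) + u_T(t + h, p, T)) / h ** 2
    return u_tt - abs(u_T(t, p, T)) ** p

for p in (2.0, 3.0, 1 + 2 ** 0.5):
    # residual is zero up to finite-difference error
    print(f"p = {p:.4f}: residual = {residual(0.5, p, T=1.0):.2e}")
```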

The equation (1) can be viewed as a combination of equations (2) and (3) and should thus inherit a mix of the behaviours of both its “parents”. As a general rule, when the initial data $u(0,\cdot), \partial_t u(0,\cdot)$ of the solution is small, one expects the dispersion to “win” and send the solution to zero as $t \to \infty$, because the nonlinear effects are weak; conversely, when the initial data is large, one expects the nonlinear effects to “win” and cause blowup, or at least large amounts of instability. This division is particularly pronounced when p is large (since then the nonlinearity is very strong for large data and very weak for small data), but not so much for p small (for instance, when p=1, the equation becomes essentially linear, and one can easily show that blowup does not occur from reasonable data).

The theorem of John formalises this intuition, with a remarkable threshold value for p:

Theorem. Let $1 < p < \infty$.

1. If $p < 1+\sqrt{2}$, then there exist solutions which are arbitrarily small (both in size and in support) and smooth at time zero, but which blow up in finite time.
2. If $p > 1+\sqrt{2}$, then for every initial data which is sufficiently small in size and support, and sufficiently smooth, one has a global solution (which goes to zero uniformly as $t \to \infty$).

[At the critical threshold $p = 1 + \sqrt{2}$ one also has blowup from arbitrarily small data, as was shown subsequently by Schaeffer.]

The ostensible purpose of this post is to try to explain why the curious exponent $1+\sqrt{2}$ should make an appearance here, by sketching out the proof of part 1 of John’s theorem (I will not discuss part 2 here); but another reason I am writing this post is to illustrate how to make quick “back-of-the-envelope” calculations in harmonic analysis and PDE which can obtain the correct numerology for such a problem much faster than a fully rigorous approach. These calculations can be a little tricky to handle properly at first, but with practice they can be done very swiftly.

The first step, which is standard in nonlinear evolution equations, is to rewrite the differential equation (1) as an integral equation. Just as the basic ODE

$\partial_t u = F$

can be rewritten via the fundamental theorem of calculus in the integral form

$u(t) = u(0) + \int_0^t F(s)\ ds,$

it turns out that the inhomogeneous wave equation

$\partial_{tt} u - \Delta u = F$

can be rewritten via the fundamental solution (4) of the homogeneous equation (together with Duhamel’s principle) in the integral form

$u(t,x) = u_{\rm lin}(t,x) + \frac{1}{4\pi} \int_0^t \int_{|y-x|=|t-s|} \frac{1}{t-s} F(s,y)\ dS(y) ds$

where $u_{\rm lin}$ is the solution to the homogeneous wave equation (2) with initial position $u(0,x)$ and initial velocity $\partial_t u(0,x)$ (and is given using (4)). [I plan to write more about this formula in a later post, but today I will just treat it as a miraculous identity. I will note however that the formula generalises Newton’s formula $u(x) = \frac{1}{4\pi} \int_{{\Bbb R}^3} \frac{1}{|x-y|} F(y)\ dy$ for the standard solution to Poisson’s equation $-\Delta u = F$.]
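As an aside, Newton’s formula can itself be sanity-checked numerically: for the concrete radial source $F(y) = e^{-|y|^2}$ the Newtonian potential evaluates in closed form to $u(r) = \sqrt{\pi}\, {\rm erf}(r)/(4r)$, and a finite-difference computation of the radial Laplacian $\Delta u = u'' + \frac{2}{r} u'$ confirms $-\Delta u = F$. (This sketch is my own illustration, not part of John’s argument.)

```python
import math

def u(r):
    # Newtonian potential (1/4 pi) \int e^{-|y|^2}/|x-y| dy, evaluated in
    # closed form for the radial source F(y) = exp(-|y|^2)
    return math.sqrt(math.pi) * math.erf(r) / (4.0 * r)

def neg_laplacian(f, r, h=1e-4):
    # radial Laplacian of a spherically symmetric f: Δf = f'' + (2/r) f'
    f_rr = (f(r - h) - 2.0 * f(r) + f(r + h)) / h ** 2
    f_r = (f(r + h) - f(r - h)) / (2.0 * h)
    return -(f_rr + 2.0 * f_r / r)

for r in (0.5, 1.0, 2.0):
    # the two columns agree up to finite-difference error
    print(r, neg_laplacian(u, r), math.exp(-r * r))
```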

Using the fundamental solution, the nonlinear wave equation (1) can be rewritten in integral form as

$u(t,x) = u_{\rm lin}(t,x) + \frac{1}{4\pi} \int_0^t \int_{|y-x|=|t-s|} \frac{1}{t-s} |u(s,y)|^p\ dS(y) ds.$ (5)

[Strictly speaking, one needs to first show that the solution exists and is sufficiently smooth before (5) can be rigorously applied, but this turns out to be a routine technical detail and I will not discuss it here.]

John’s argument now exploits a remarkable feature of the fundamental solution of the three-dimensional wave equation, namely that it is non-negative; combining this with the non-negativity of the forcing term $|u|^p$, we see that the integral in (5), which represents the cumulative effect of the nonlinearity, is always non-negative. Thus we have the pointwise inequality

$u(t,x) \geq u_{\rm lin}(t,x),$ (6)

but also we see that any lower bound for u of the form $u(t,x) \geq v(t,x)$ can be immediately bootstrapped via (5) to a new lower bound

$u(t,x) \geq u_{\rm lin}(t,x) + \frac{1}{4\pi} \int_0^t \int_{|y-x|=|t-s|} \frac{1}{t-s} |v(s,y)|^p\ dS(y) ds$. (7)

This gives a way to iteratively generate lower bounds on a solution u, by starting with the lower bound (6) (and computing $u_{\rm lin}(t,x)$ explicitly using (4)) and then feeding this bound repeatedly into (7) to see what one gets. (This iteration procedure is closely related to the method of Picard iteration for constructing solutions to nonlinear ODE or PDE, which is still widely used today in the modern theory.)

What will transpire is that this iterative process will yield successively larger lower bounds when $p < 1 + \sqrt{2}$, but will yield successively smaller lower bounds when $p > 1 + \sqrt{2}$; this is the main driving force behind John’s theorem. (To actually establish blowup in finite time when $p < 1 + \sqrt{2}$, there is an auxiliary step that uses energy inequalities to show that once the solution gets sufficiently large, it will be guaranteed to develop singularities within a finite amount of additional time. To establish global solutions when $p > 1 + \sqrt{2}$, one needs to show that the lower bounds constructed by this scheme in fact converge to the actual solution, and establish uniform control on all of these lower bounds.)

The remaining task is a computational one, to evaluate the various lower bounds for u arising from (6) and (7) from some given initial data. In principle, this is just an application of undergraduate several variable calculus, but if one sets about working out the relevant integrals exactly (using polar coordinates, etc.), the computations quickly become tediously complicated. But we don’t actually need exact, closed-form expressions for these integrals; just knowing the order of magnitude of these integrals is enough. For that task, much faster computational techniques are available.

Let’s see how. We begin with the computation of the linear solution $u_{\rm lin}(t,x)$. This is given in terms of the initial data $u(0,x), \partial_t u(0,x)$ via the formula (4). Now, for the purpose of establishing John’s theorem in the form stated above, we have the freedom to pick the initial data as we please, as long as it is smooth, small, and compactly supported. To make our life easier, we pick initial data with vanishing initial position and non-negative initial velocity, thus $u(0,x) = 0$ and $\partial_t u(0,x) \geq 0$; this eliminates the pesky partial derivative in (4) and makes $u_{\rm lin}$ non-negative. More concretely, let us take

$\partial_t u(0,x) = \varepsilon \psi( x / \varepsilon )$

for some fixed non-negative bump function $\psi$ (the exact form is not relevant) and some small $\varepsilon > 0$, thus the initial velocity has very small amplitude and width. To simplify the notation we shall work with macroscopic values of $\varepsilon$, thus $\varepsilon \sim 1$, but it will not be hard to see that the arguments below also work for very small $\varepsilon$ (though of course the smaller $\varepsilon$ is, the longer it will take for blowup to occur).

As I said before, we only need an order of magnitude computation. Let us reflect this by describing the initial velocity $\partial_t u(0,x)$ in fuzzier notation:

$\partial_t u(0,x) \sim 1 \hbox{ when } x = O(1)$.

Geometrically, $\partial_t u$ has “height” $\sim 1$ on a ball of radius O(1) centred at the origin. We will retain this sort of fuzzy notation throughout the rest of the argument; it is not fully rigorous, but we can always go back and make the computations formal (and much lengthier) after we have performed the quick informal calculations to show the way ahead.

Thus we see from (4) that the linear solution $u_{\rm lin}(t,x)$ can be expressed somewhat fuzzily in the form

$u_{\rm lin}(t,x) \sim \frac{1}{t} \int_{|y-x|=t; y = O(1)} 1 dS(y)$.

Note that the factor $\frac{1}{4\pi}$ can be discarded for the purposes of order of magnitude computation. Geometrically, the integral is measuring the area of the portion of the sphere $\{|y-x|=t\}$ which intersects the ball $\{y = O(1)\}$. A little bit of geometric visualisation will reveal that for large times $t \gg 1$, this portion of the sphere will be empty unless $|x|=t+O(1)$, in which case it is a spherical cap of diameter O(1), and thus area O(1). Thus we are led to the back-of-the-envelope computation

$u_{\rm lin}(t,x) \sim \frac{1}{t} \hbox{ when } |x|=t+O(1) \hbox { and } t \gg 1$

with $u_{\rm lin}(t,x)$ zero when $|x| \neq t + O(1)$. (This vanishing outside of a neighbourhood of the light cone $\{ |x|=t \}$ is a manifestation of the sharp Huygens principle.)
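In fact, for this particular cap the O(1) bound can be checked exactly: by the law of cosines, a point y on the sphere $\{|y-x|=t\}$ at angle $\theta$ from the direction of x has $|y|^2 = |x|^2 + t^2 - 2|x| t \cos \theta$, and the cap $\{\cos\theta \geq c\}$ has area $2\pi t^2 (1-c)$. The short script below (my own illustration) shows that with $|x|=t$ the cap inside the unit ball has area exactly $\pi$, uniformly in t, and that the cap vanishes once $|x|$ deviates from t by much more than O(1).

```python
import math

def cap_area(R, d, r):
    """Area of the portion of the sphere {|y - x| = R}, with |x| = d,
    lying inside the ball {|y| <= r}.  By the law of cosines, the point
    at angle theta has |y|^2 = d^2 + R^2 - 2 d R cos(theta), and the cap
    {cos(theta) >= c} has area 2 pi R^2 (1 - c)."""
    c = (d * d + R * R - r * r) / (2 * d * R)
    c = max(-1.0, min(1.0, c))  # clip: empty cap, or whole sphere inside
    return 2 * math.pi * R * R * (1 - c)

# With |x| = t and the unit ball, the cap area is pi for every t > 1:
# O(1), uniformly in t.
for t in (10.0, 100.0, 1000.0):
    print(t, cap_area(t, t, 1.0))

# The cap is empty once |x| differs from t by much more than O(1):
print(cap_area(100.0, 103.0, 1.0))
```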

In particular, from (6) we obtain the initial lower bound

$u(t,x) \gg \frac{1}{t} \hbox{ when } |x|=t+O(1) \hbox { and } t \gg 1$.

If we then insert this bound into (7) and discard the linear term $u_{\rm lin}$ (which we already know to be positive, and which we have already “used up” in some sense) we obtain the lower bound

$u(t,x) \gg \int_0^t \int_{|y-x|=|t-s|; |y|=s+O(1); s \gg 1} \frac{1}{t-s} \frac{1}{s^p}\ dS(y) ds.$

This is a moderately scary looking integral. But we can get a handle on it by first looking at it geometrically. For a fixed point (t,x) in spacetime, the region of integration is the intersection of a backwards light cone $\{ (s,y): 0 \leq s \leq t; |y-x|=|t-s| \}$ with a thickened forwards light cone $\{ (s,y): |y| = s + O(1); s \gg 1 \}$. If |x| is much larger than t, then these cones will not intersect. If |x| is close to t, the intersection looks complicated, so let us consider the spacelike case when |x| is much less than t, say $|x| \leq t/2$; we also continue working in the asymptotic regime $t \gg 1$. In this case, a bit of geometry or algebra shows that the intersection of the two light cones is a two-dimensional ellipsoid in spacetime of radii $\sim t$ (in particular, its surface area is $\sim t^2$), and living at times s in the interior of ${}[0,t]$, thus s and t-s are both comparable to t. Thickening the forward cone, it is then geometrically intuitive that the intersection of the backwards light cone with the thickened forwards light cone is an angled strip around that ellipsoid of thickness $\sim 1$; thus the total measure of this strip is roughly $\sim t^2$. Meanwhile, since s and t-s are both comparable to t, the integrand is of magnitude $\sim \frac{1}{t} \frac{1}{t^p}$. Putting all of this together, we conclude that

$u(t,x) \gg t^2 \frac{1}{t} \frac{1}{t^p} = t^{1-p}$ (8)

whenever we are in the interior cone region $\{ (t,x): t \gg 1; |x| \leq t/2 \}$.
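The $\sim t^2$ measure of this strip can be corroborated numerically. The sketch below (my own illustration; the function names are hypothetical) integrates over s the exact area of the sphere $\{|y-x| = t-s\}$ lying inside the shell $\{s - 1 \leq |y| \leq s+1\}$, taking $|x| = t/2$, and checks that doubling t multiplies the measure by roughly four.

```python
import math

def band_area(R, d, a, b):
    # Exact area of {|y - x| = R} (with |x| = d) in the shell a <= |y| <= b:
    # |y|^2 = d^2 + R^2 - 2 d R cos(theta), and the cap {cos(theta) >= c}
    # has area 2 pi R^2 (1 - c).
    if R <= 0.0 or d <= 0.0:
        return 0.0
    def cosang(rho):
        return max(-1.0, min(1.0, (d * d + R * R - rho * rho) / (2 * d * R)))
    return 2 * math.pi * R * R * (cosang(a) - cosang(b))

def strip_measure(t, n=4000):
    # midpoint rule in s; |x| = t/2 places (t,x) in the interior region
    d, ds, total = t / 2, t / n, 0.0
    for i in range(n):
        s = (i + 0.5) * ds
        total += band_area(t - s, d, max(s - 1.0, 0.0), s + 1.0) * ds
    return total

m1, m2 = strip_measure(100.0), strip_measure(200.0)
print(m1 / 100.0 ** 2, m2 / m1)  # ratio close to 4, i.e. measure ~ t^2
```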

To summarise so far, the linear evolution filled out the light cone $\{ (t,x): t \gg 1; |x| = t + O(1) \}$ with a decay $t^{-1}$, and then the nonlinearity caused a secondary wave that filled out the interior region $\{ (t,x): t \gg 1; |x| < t/2 \}$ with a decay $t^{1-p}$. We now compute the tertiary wave by inserting the secondary wave bound back into (7), to get

$u(t,x) \gg \int_0^t \int_{|y-x|=|t-s|; |y| < s/2; s \gg 1} \frac{1}{t-s} \frac{1}{s^{p(p-1)}}\ dS(y) ds.$

Let us continue working in an interior region, say $\{ (t,x): t \gg 1; |x| < t/4 \}$. The region of integration is the intersection of the backwards light cone $\{ (s,y): 0 \leq s \leq t; |y-x|=t-s \}$ with an interior region $\{ (s,y): s \gg 1; |y| < s/2 \}$. A brief sketch of the situation reveals that this intersection basically consists of the portion of the backwards light cone in which s is comparable in size to t. In particular, this intersection has a three-dimensional measure of $\sim t^3$, and on the bulk of this intersection, s and t-s are both comparable to t. So we obtain a lower bound

$u(t,x) \gg t^3 \frac{1}{t} \frac{1}{t^{p(p-1)}} = t^{1-p} t^{2 - (p-1)^2}$ (9)

whenever $t \gg 1$ and $|x| < t/4$.

Now we finally see where the condition $p < 1 + \sqrt{2}$ will come in; if this condition is true, then $2 - (p-1)^2$ is positive, and so the tertiary wave is stronger than the secondary wave, and also situated in essentially the same location of spacetime. This is the beginning of a positive feedback loop; the quaternary wave will be even stronger still, and so on and so forth. Indeed, it is not hard to show that if $p < 1 + \sqrt{2}$, then for any constant A, one will have a lower bound of the form $u(t,x) \gg t^A$ in the interior of the light cone. This does not quite demonstrate blowup per se – merely superpolynomial growth instead – but actually one can amplify this growth into blowup with a little bit more effort (e.g. integrating (1) in space to eliminate the Laplacian term and investigating the dynamics of the spatial integral $\int_{{\Bbb R}^3} u(t,x)\ dx$, taking advantage of finite speed of propagation for this equation, which limits the support of u to the cone $\{ |x| \leq t + O(1) \}$). A refinement of these arguments, taking into account more of the components of the various waves in the iteration, also gives blowup for the endpoint $p = 1 + \sqrt{2}$.
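At the level of exponents, the interior-to-interior step of this iteration is simple bookkeeping: a lower bound $u \gg t^a$ in the interior cone feeds through (7) (measure $\sim t^3$, kernel $\sim 1/t$, source $\sim t^{pa}$) to give $u \gg t^{pa+2}$. The sketch below (my own illustration) iterates the map $a \mapsto pa + 2$ starting from the secondary-wave exponent $a = 1-p$:

```python
def wave_exponents(p, steps=6):
    # a_0 = 1 - p: the secondary wave decays like t^(1-p) in the interior.
    # Each pass through the Duhamel integral (measure ~ t^3, kernel ~ 1/t,
    # source ~ t^(p*a)) sends the exponent a to p*a + 2.
    exps = [1.0 - p]
    for _ in range(steps):
        exps.append(p * exps[-1] + 2.0)
    return exps

print(wave_exponents(2.0))           # p < 1 + sqrt(2): exponents climb without bound
print(wave_exponents(3.0))           # p > 1 + sqrt(2): each wave decays faster
print(wave_exponents(1 + 2 ** 0.5))  # threshold: a = -sqrt(2) is a fixed point
```

Since the map has slope $p > 1$, the iterates move away from its repelling fixed point $a = -2/(p-1)$; the threshold $p = 1 + \sqrt{2}$ is precisely the value at which the starting exponent $1-p$ lands on that fixed point, which is one way to see where the curious exponent comes from.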

In the other direction, if $p > 1 + \sqrt{2}$, the tertiary wave appears to be smaller than the secondary wave (though to fully check this, one has to compute a number of other components of these waves which we have discarded in the above computations). This sets up a negative feedback loop, with each new wave in the iteration scheme being smaller or decaying faster than the previous, and thus suggests global existence of the solution, at least when the size of the initial data (which was represented by $\varepsilon$) was sufficiently small. This heuristic prediction can be made rigorous by controlling these iterates in various function space norms that capture these sorts of decay, but I will not detail them here.

[More generally, any analysis of a semilinear equation that requires one to compute the tertiary wave tends to give conditions on the exponents which are quadratic in nature; if the quaternary wave was involved also, then cubic constraints might be involved, and so forth. In this particular case, an analysis of the primary and secondary waves alone (which would lead just to linear constraints on p) is not enough, because these waves live in very different regions of spacetime and so do not fully capture the feedback mechanism.]

[Update, Oct 27: typo corrected.]

[Update, Nov 6: typo corrected.]