Today I’d like to discuss (part of) a cute and surprising theorem of Fritz John in the area of non-linear wave equations, and specifically for the equation

\partial_{tt} u - \Delta u = |u|^p (1)

where u: {\Bbb R} \times {\Bbb R}^3 \to {\Bbb R} is a scalar function of one time and three spatial dimensions.

The evolution of this type of non-linear wave equation can be viewed as a “race” between the dispersive tendency of the linear wave equation

\partial_{tt} u - \Delta u = 0 (2)

and the positive feedback tendencies of the nonlinear ODE

\partial_{tt} u = |u|^p. (3)

More precisely, solutions to (2) tend to decay in time as t \to +\infty, as can be seen from the presence of the \frac{1}{t} term in the explicit formula

u(t,x) =  \frac{1}{4\pi t} \int_{|y-x|=t} \partial_t u(0,y)\ dS(y) + \partial_t[\frac{1}{4\pi t} \int_{|y-x|=t} u(0,y)\ dS(y)], (4)

for such solutions in terms of the initial position u(0,y) and initial velocity \partial_t u(0,y), where t > 0, x \in {\Bbb R}^3, and dS is the area element of the sphere \{ y \in {\Bbb R}^3: |y-x|=t \}. (For this post I will ignore the technical issues regarding how smooth the solution has to be in order for the above formula to be valid.) On the other hand, solutions to (3) tend to blow up in finite time from data with positive initial position and initial velocity, even if this data is very small, as can be seen by the family of solutions

u_T(t,x) := c (T-t)^{-2/(p-1)}

for T > 0, 0 < t < T, and x \in {\Bbb R}^3, where c is the positive constant c := (\frac{2(p+1)}{(p-1)^2})^{1/(p-1)}. For T large, this gives a family of solutions which starts out very small at time zero, but still manages to go to infinity in finite time.
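That the family u_T really solves (3) can be checked symbolically; the following sympy sketch substitutes s = T - t (so that s > 0, u_T > 0, and |u_T|^p = u_T^p, while \partial_{tt} = \partial_{ss}) and verifies that the residual vanishes for a few sample exponents:

```python
import sympy as sp

# Quick symbolic check that u_T = c (T-t)^{-2/(p-1)}, with
# c = (2(p+1)/(p-1)^2)^{1/(p-1)}, solves u'' = |u|^p.
# Work in the variable s = T - t (positive), where u > 0.
s = sp.symbols('s', positive=True)

for p in [sp.Integer(2), sp.Integer(3), sp.Rational(5, 2)]:
    c = (2 * (p + 1) / (p - 1) ** 2) ** (sp.Rational(1) / (p - 1))
    u = c * s ** (-2 / (p - 1))
    residual = sp.diff(u, s, 2) - u ** p
    assert sp.simplify(residual) == 0

print("u_T solves u'' = |u|^p for the tested exponents")
```

(For instance, when p = 3 one gets c = \sqrt{2} and u_T = \sqrt{2}/(T-t), for which both \partial_{tt} u_T and u_T^3 equal 2\sqrt{2} (T-t)^{-3}.)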

The equation (1) can be viewed as a combination of equations (2) and (3) and should thus inherit a mix of the behaviours of both its “parents”. As a general rule, when the initial data u(0,\cdot), \partial_t u(0,\cdot) of the solution is small, one expects the dispersion to “win” and send the solution to zero as t \to \infty, because the nonlinear effects are weak; conversely, when the initial data is large, one expects the nonlinear effects to “win” and cause blowup, or at least large amounts of instability. This division is particularly pronounced when p is large (since then the nonlinearity is very strong for large data and very weak for small data), but not so much for p small (for instance, when p=1, the equation becomes essentially linear, and one can easily show that blowup does not occur from reasonable data).

The theorem of John formalises this intuition, with a remarkable threshold value for p:

Theorem. Let 1 < p < \infty.

  1. If p < 1+\sqrt{2}, then there exist solutions which are arbitrarily small (both in size and in support) and smooth at time zero, but which blow up in finite time.
  2. If p > 1+\sqrt{2}, then for every initial data which is sufficiently small in size and support, and sufficiently smooth, one has a global solution (which goes to zero uniformly as t \to \infty).

[At the critical threshold p = 1 + \sqrt{2} one also has blowup from arbitrarily small data, as was shown subsequently by Schaeffer.]
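For orientation, 1+\sqrt{2} can be recognised as the positive root of the quadratic p^2 - 2p - 1 = 0; as I recall, this is the three-dimensional case of the Strauss exponent, the positive root of (n-1)p^2 - (n+1)p - 2 = 0 in n spatial dimensions. A one-line numerical check:

```python
import math

# The claimed threshold is the positive root of p^2 - 2p - 1 = 0,
# i.e. (as I recall) the n = 3 case of the Strauss exponent, which
# solves (n-1) p^2 - (n+1) p - 2 = 0.
n = 3
a, b, c = n - 1, -(n + 1), -2
p_crit = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
print(p_crit)  # 2.41421..., i.e. 1 + sqrt(2)
```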

The ostensible purpose of this post is to try to explain why the curious exponent 1+\sqrt{2} should make an appearance here, by sketching out the proof of part 1 of John’s theorem (I will not discuss part 2 here); but another reason I am writing this post is to illustrate how to make quick “back-of-the-envelope” calculations in harmonic analysis and PDE which can obtain the correct numerology for such a problem much faster than a fully rigorous approach. These calculations can be a little tricky to handle properly at first, but with practice they can be done very swiftly.
