The summer continues to allow some clearing of the backlog of projects accumulated during the academic year: Helge Holden, Kenneth Karlsen, Nils Risebro, and I have uploaded to the arXiv the paper “Operator splitting for the KdV equation”, submitted to Math. Comp. This paper is concerned with accurate numerical schemes for solving initial value problems for the Korteweg-de Vries equation

$\displaystyle u_t + u u_x + u_{xxx} = 0, \ \ \ \ \ (1)$

though the analysis here would be valid for a wide range of other semilinear dispersive models as well. In principle, this equation, being completely integrable, can be solved exactly by the inverse scattering method, but fast and accurate implementations of that method are still not as satisfactory as one would like. On the other hand, the linear Korteweg-de Vries (Airy) equation

$\displaystyle u_t + u_{xxx} = 0 \ \ \ \ \ (2)$

can be solved exactly (with accurate and fast numerics) via the (fast) Fourier transform, while the (inviscid) Burgers equation

$\displaystyle u_t + u u_x = 0 \ \ \ \ \ (3)$

can also be solved exactly (and quickly) by the method of characteristics. Since the KdV equation is in some sense a combination of the equations (2) and (3), it is then reasonable to hope that some combination of the solution schemes for (2) and (3) can be used to solve (1), at least in some approximate sense.
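To make the FFT remark concrete, here is a minimal numpy sketch (an illustration, not code from the paper) of the exact solver for the Airy equation (2) on a periodic grid: each Fourier mode $e^{ikx}$ simply acquires the phase $e^{ik^3 t}$, so the evolution is unitary on $L^2$.

```python
import numpy as np

def airy_step(u, dt, L=2*np.pi):
    """Advance u_t + u_xxx = 0 exactly on a periodic grid via the FFT.
    In Fourier space each mode e^{ikx} evolves by the multiplier e^{i k^3 dt}."""
    k = 2*np.pi*np.fft.fftfreq(u.size, d=L/u.size)   # integer wavenumbers
    return np.real(np.fft.ifft(np.exp(1j * k**3 * dt) * np.fft.fft(u)))

N = 256
x = np.linspace(0, 2*np.pi, N, endpoint=False)
u0 = np.cos(3*x)
u1 = airy_step(u0, dt=0.1)
# the mode cos(3x) is transported to cos(3x + 27*0.1): pure dispersion, no decay
print(np.max(np.abs(u1 - np.cos(3*x + 2.7))))
```

Since the multiplier has modulus one, the discrete $L^2$ norm is preserved exactly, mirroring the conservation law of the continuous Airy flow.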

One way to do this is by the method of *operator splitting*. Observe from the formal approximation $e^{tA} \approx 1 + tA$ (where $t$ should be thought of as small, and $A$ is some matrix or linear operator), that one has

$\displaystyle e^{tA} e^{tB} = e^{t(A+B)} + O(t^2), \ \ \ \ \ (4)$

[we do not assume A and B to commute here] and thus we formally have

$\displaystyle e^{T(A+B)} = \left( e^{\frac{T}{n} A} e^{\frac{T}{n} B} \right)^n + O(1/n) \ \ \ \ \ (5)$

if $t = T/n$ for some fixed time $T$ (thus the number of steps is $n = T/t$). As a consequence, if one wants to solve the linear ODE

$\displaystyle \partial_t u = (A+B) u \ \ \ \ \ (1')$

for time $T$, one can achieve an approximate solution (accurate to order $1/n$) by alternating $n$ times between evolving the ODE

$\displaystyle \partial_t u = A u \ \ \ \ \ (2')$

for time $T/n$, and evolving the ODE

$\displaystyle \partial_t u = B u \ \ \ \ \ (3')$

for time $T/n$, starting with the initial data $u(0) = u_0$.
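In the matrix model, this alternation is easy to demonstrate numerically. The following numpy sketch (with a hypothetical non-commuting pair $A, B$, not taken from the paper) confirms the $O(1/n)$ accuracy in (5): the error roughly halves each time $n$ doubles.

```python
import numpy as np

def expm(M, terms=30):
    """Plain Taylor-series matrix exponential; adequate for these small matrices."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for j in range(1, terms):
        term = term @ M / j
        out = out + term
    return out

# a hypothetical non-commuting pair (so that the splitting error is visible)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
T = 1.0
u0 = np.array([1.0, 1.0])
exact = expm(T * (A + B)) @ u0      # the true solution of u' = (A+B)u at time T

errs = []
for n in (10, 20, 40):
    step = expm(T/n * A) @ expm(T/n * B)   # one Godunov step: B-flow, then A-flow
    u = u0.copy()
    for _ in range(n):
        u = step @ u
    errs.append(np.linalg.norm(u - exact))
    print(n, errs[-1])   # error shrinks like O(1/n)
```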

It turns out that this scheme can be formalised, and furthermore generalised to nonlinear settings such as that of the KdV equation (1). More precisely, we show that if the initial data $u_0$ lies in a Sobolev space $H^s$ of sufficiently high order, then one can solve (1) to accuracy $O(\Delta t)$ in a (lower-order) Sobolev norm for any fixed time $T$ by alternating between evolving (2) and (3) for time steps of size $\Delta t$ (this scheme is known as *Godunov splitting*).
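For illustration only, here is a pseudospectral sketch of Godunov splitting for (1) on a periodic grid. The Airy step is solved exactly by a Fourier multiplier, but, unlike the scheme analysed in the paper, the Burgers step below is merely approximated by a few Runge-Kutta substeps rather than solved exactly by characteristics; the grid size and step sizes are arbitrary choices.

```python
import numpy as np

def kdv_godunov(u0, dt, nsteps, L=2*np.pi):
    """Godunov splitting for u_t + u u_x + u_xxx = 0 on a periodic grid.
    Airy step: exact Fourier multiplier e^{i k^3 dt}.
    Burgers step: RK4 substeps on u_t = -u u_x (a pseudospectral stand-in
    for the exact method-of-characteristics solve)."""
    N = u0.size
    k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
    airy = np.exp(1j * k**3 * dt)
    dealias = np.abs(k) < N//3          # 2/3-rule mask for the quadratic term

    def burgers_rhs(u):
        ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
        return np.real(np.fft.ifft(dealias * np.fft.fft(-u * ux)))

    u = u0.copy()
    h = dt / 5                          # RK4 substep within each Burgers step
    for _ in range(nsteps):
        for _ in range(5):
            k1 = burgers_rhs(u)
            k2 = burgers_rhs(u + h/2 * k1)
            k3 = burgers_rhs(u + h/2 * k2)
            k4 = burgers_rhs(u + h * k3)
            u = u + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        u = np.real(np.fft.ifft(airy * np.fft.fft(u)))   # exact Airy step
    return u

x = np.linspace(0, 2*np.pi, 128, endpoint=False)
u = kdv_godunov(np.cos(x), dt=0.01, nsteps=50)   # evolve to time T = 0.5
```

Both component flows conserve the mean and (in exact arithmetic) the $L^2$ norm, so these quantities provide a cheap sanity check on any implementation of this kind.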

Actually, one can obtain faster convergence by modifying the scheme, at the cost of requiring higher regularity on the data; the situation is similar to that of numerical integration (or quadrature), in which the midpoint rule or Simpson’s rule provides more accuracy than a simple Riemann sum if the integrand is smooth. For instance, one has the variant

$\displaystyle e^{T(A+B)} = \left( e^{\frac{T}{2n} A} e^{\frac{T}{n} B} e^{\frac{T}{2n} A} \right)^n + O(1/n^2) \ \ \ \ \ (6)$

of (5), which can be seen by expansion to second order in $T/n$ (or by playing around with the Baker-Campbell-Hausdorff formula). For KdV, we can rigorously show that the analogous scheme (known as *Strang splitting*), involving the indicated combination of evolutions of (2) and (3), will also converge, now to accuracy $O(\Delta t^2)$ in Sobolev norm, provided that the initial data enjoys some additional Sobolev regularity.
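The gain from Strang splitting is again easy to see in the matrix model. In the numpy sketch below (using a hypothetical non-commuting pair of matrices, not taken from the paper), the Strang errors drop by a factor of about four when $n$ doubles, versus a factor of about two for Godunov.

```python
import numpy as np

def expm(M, terms=30):
    """Plain Taylor-series matrix exponential; adequate for these small matrices."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for j in range(1, terms):
        term = term @ M / j
        out = out + term
    return out

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # hypothetical non-commuting pair
B = np.array([[0.0, 0.0], [1.0, 0.0]])
T = 1.0
u0 = np.array([1.0, 1.0])
exact = expm(T * (A + B)) @ u0

def split_error(n, strang):
    h = T / n
    if strang:   # half step of A, full step of B, half step of A, as in (6)
        step = expm(h/2 * A) @ expm(h * B) @ expm(h/2 * A)
    else:        # Godunov step, as in (5)
        step = expm(h * A) @ expm(h * B)
    u = u0.copy()
    for _ in range(n):
        u = step @ u
    return np.linalg.norm(u - exact)

for n in (10, 20):
    print(n, split_error(n, strang=False), split_error(n, strang=True))
```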

Our main tools are the energy method (i.e. computations of how $L^2$-norm type energies of the solution evolve in time via differentiation and integration by parts), and the use of two time variables rather than one. To describe the second method, let us give a PDE-style proof of (4) (assuming that $A$, $B$ are finite-dimensional matrices and $u$ is vector-valued, to avoid the technical complications associated with infinite-dimensional vector spaces). We introduce two time variables $t, t'$ and consider the PDE problem

$\displaystyle \partial_{t'} u(t,t') = A u(t,t'); \quad \partial_t u(t,0) = B u(t,0); \quad u(0,0) = u_0.$

Thus, one first solves the equation (3′) along the $t$ axis with initial data $u_0$, and then solves the equation (2′) in the $t'$ direction, thus leading to the explicit solution $u(t,t') = e^{t' A} e^{t B} u_0$. In particular, the left-hand side of (4), applied to $u_0$, is equal to $u(t,t)$.

Now we look at the evolution of $u$ along the diagonal axis $\{ t = t' \}$. Observe from the chain rule that

$\displaystyle \frac{d}{dt} u(t,t) = (A+B) u(t,t) + F(t,t) \ \ \ \ \ (7)$

where $F(t,t') := \partial_t u(t,t') - B u(t,t')$. On the other hand, $F$ itself solves an ODE in the $t'$ direction:

$\displaystyle \partial_{t'} F(t,t') = A F(t,t') + [A,B] u(t,t'); \quad F(t,0) = 0.$

We thus have $F(t,t) = O(t)$ for small $t$, and then by Duhamel's formula and (7) we have $u(t,t) = e^{t(A+B)} u_0 + O(t^2)$, which implies (4) as claimed. Note that this analysis also recovers the classical formula $e^{tA} e^{tB} = e^{t(A+B)}$ in the special case that $A$ and $B$ commute.
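For the record, here is the one-line computation (a consistency check, not taken verbatim from the paper) showing that the quantity $F := \partial_t u - B u$ solves an ODE in the $t'$ direction, using only the relation $\partial_{t'} u = A u$:

```latex
\begin{align*}
\partial_{t'} F
  &= \partial_{t'} \partial_t u - B\, \partial_{t'} u
   = \partial_t (A u) - B A u \\
  &= A (\partial_t u - B u) + (AB - BA)\, u
   = A F + [A,B]\, u ,
\end{align*}
```

together with the initial condition $F(t,0) = \partial_t u(t,0) - B u(t,0) = 0$; Gronwall's inequality then gives $F(t,t') = O(t')$ for bounded $t, t'$.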

## 8 comments


28 June, 2009 at 12:38 pm

demagogue: What a wonderful synthesis of ideas!

On a philosophical note, though, isn’t it strange that there aren’t better numerical methods available to solve completely integrable differential equations, given their infinite-dimensional symmetries? Surely, given the way integrability was discovered by numerical work (Fermi-Pasta-Ulam, Kruskal-Zabusky, and in an even cruder sense, even John Scott Russell), there should be more numerical work in this field?

Although not an expert by any means, I know there has been a burgeoning field of numerical analysis which respects any given symmetries (in all sorts of problems) in recent years. I don’t know the details but most methods boil down to adapting standard numerically stable algorithms to a suitable notion of “orbit space”.

In a sense, this is “obvious”. If an analytically intractable problem has an ‘exact subpart’, then why throw it away? But the challenges in practice are difficult.

Here is a very interesting special issue of the Journal of Physics A dedicated to the topic of Geometric Numerical Integration of Differential Equations

http://www.iop.org/EJ/toc/0305-4470/39/19

28 June, 2009 at 5:07 pm

Terence Tao: Dear demagogue,

There is a substantial literature on numerical solutions of KdV, though this is more my co-authors’ area of expertise than mine. In addition to operator splitting, Galerkin-based schemes appear to be rather popular. In contrast, the inverse scattering transform, despite being so fundamental to the theoretical solvability of KdV, seems to be very rarely used in the numerical solution of this equation (except when dealing with very special solutions, e.g. multisoliton solutions), perhaps because there is currently no known FFT-like way to quickly invert the scattering transform. Similarly, the multiple conservation laws enjoyed by KdV do not seem to be exploited much in the numerical schemes that are actually used in practice. (Operator splitting will preserve the L^2 norm, because both of the component equations preserve this norm, and also preserves the mass, but it does not respect the other symmetries of the equation.)

Dear asinop, I am not exactly sure what type of equations you would be seeking to solve here. The energy method (which underlies our approach here) can be applied to quite general background metrics, though if the equations become quasilinear or fully nonlinear rather than semilinear then one would need to proceed rather carefully in order not to lose so many derivatives that one can no longer close a continuity argument. If the background is evolving with the solution (as is the case with Ricci flow or the Einstein equations, for instance) then I suspect the method would not work well at all (though there is no obvious operator splitting for such equations in any case).

28 June, 2009 at 2:31 pm

asinop: If you want to solve similar equations on Riemannian manifolds other than $[0,\infty)\times R$, with very small expansion (Cheeger constant), would the operator splitting part still work?

7 October, 2009 at 3:31 am

yasser2339: Dear Prof. Tao,

I want to know more about using splitting methods.

How can I solve the Riccati equation using the operator splitting method?

Thanks,

Yasser

15 December, 2009 at 2:58 pm

Student: Dear Professor Tao:

You wrote “the nonlinear Burgers equation generically develops shocks in finite time while the linear Airy equation preserves all Sobolev norms.”

If I understand correctly, then, there must be some threshold for the power of the fractional Laplacian between 1 (Burgers) and 3/2 (Airy) on which the blowup depends. Is that true, and if so, what is the fractional power; e.g. 5/4?

16 December, 2009 at 4:20 pm

Terence Tao: It seems that the Benjamin-Ono equation is the critical threshold between Burgers and KdV. Benjamin-Ono does not develop singularities for real data, but is badly behaved for complex data; the dispersive effect of the linear part and the singularity-forming effect of the nonlinear part of the equation appear to be in almost perfect balance.

16 December, 2009 at 5:23 pm

Student: Dear Professor Tao:

Thank you very much for your response.

I found that you actually wrote “Global well-posedness of the Benjamin-Ono Equation in H^{1}(R).” I will read that with much interest.

If I may, I have another question about another paper that I cannot ask elsewhere (I cannot find your blog about it).

In “Ill-posedness for one-dimensional wave maps at the critical regularity,” you introduce null-coordinates

u = x + t

and

v = x – t

so that

\partial_{u} = 1/2(\partial_{x} + \partial_{t})

\partial_{v} = 1/2(\partial_{x} – \partial_{t})

and you continue to construct \phi(t, x) which is a constant in three of the five regions, etc.

This is a very elegant, simple proof that seems to depend on that common decomposition of the D’Alembert operator. Hence, I suppose this method, at least in its direct form, is not applicable to other PDEs without the D’Alembert operator, like Burgers’. Or is there some modification of this method for other PDEs without the D’Alembert operator? If so, a reference will be very much appreciated.

24 September, 2011 at 12:55 am

afflatus66: Dear Prof. Tao,

Suppose the problem is periodic KdV or NLS on the 1D torus, with a step function as initial data. What do you think about the convergence issue for the operator splitting method? J. Bourgain has already proved well-posedness for the nonlinear Schrodinger equation and the KdV equation on a periodic domain, and we know that for smooth initial data the splitting method has good convergence properties. The geometric structure of the numerical solution generated by the splitting method for smooth initial data has also been considered. Can we approximate the step-function initial data by smooth functions to obtain the convergence property in the step-function case?