Camil Muscalu, Christoph Thiele and I have just uploaded to the arXiv our joint paper, “Multi-linear multipliers associated to simplexes of arbitrary length”, submitted to Analysis & PDE. This paper grew out of our project from many years ago to attempt to prove the nonlinear (or “scattering”) version of Carleson’s theorem on the almost everywhere convergence of Fourier series. This version is still open; our original approach was to handle the nonlinear Carleson operator by multilinear expansions in terms of the potential function V, but while the first three terms of this expansion were well behaved, the fourth term was unfortunately divergent, due to the unhelpful location of a certain minus sign. [This survey by Michael Lacey, as well as this paper of ours, covers some of these topics.]

However, what we did find out was that if we modified the nonlinear Carleson operator slightly, by replacing the underlying Schrödinger equation by a more general AKNS system, then for “generic” choices of this system, the problem of the ill-placed minus sign goes away, and each term in the multilinear series is, in fact, convergent (we have not yet verified that the series as a whole actually converges, though in view of the earlier work of Christ and Kiselev on this topic, this seems likely). The verification of this convergence (at least with regard to the scattering data, rather than the more difficult analysis of the eigenfunctions) is the main result of our current paper. It builds upon our earlier estimates for the trilinear term in the expansion (which we dubbed the “biest”, as a multilingual pun). The main new idea in our earlier paper was to decompose the relevant region of frequency space \{ (\xi_1,\xi_2,\xi_3) \in {\Bbb R}^3: \xi_1 < \xi_2 < \xi_3 \} into more tractable regions, a typical one being the region in which \xi_2 is much closer to \xi_1 than to \xi_3. The contribution of each region can then be “parafactored” into a “paracomposition” of simpler operators, such as the bilinear Hilbert transform, which can be treated by standard time-frequency analysis methods. (Much as a paraproduct is a frequency-restricted version of a product, the paracompositions that arise here are frequency-restricted versions of composition.)
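To give a flavour of the “paracomposition” phenomenon (this is only a heuristic; the rigorous decomposition in our papers is more involved): on the region where |\xi_2 - \xi_1| is much smaller than |\xi_3 - \xi_2|, the symbol morally factorises as

1_{\xi_1 < \xi_2 < \xi_3} \approx 1_{\xi_1 < \xi_2} \cdot 1_{\frac{\xi_1+\xi_2}{2} < \xi_3},

with the first factor only sensitive to the fine scale |\xi_2 - \xi_1| and the second only to the coarse scale |\xi_3 - \xi_2|; accordingly, the contribution of this region to the trilinear operator behaves like a paraproduct-type pairing of B(V_1,V_2) with V_3, where B is a bilinear Hilbert transform type operator with symbol 1_{\xi_1 < \xi_2}.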

A similar analysis happens to work for the multilinear operators associated to the frequency region S := \{ (\xi_1,\ldots,\xi_n): \xi_1 < \ldots < \xi_n \}, but the combinatorics are more complicated; each of the component frequency regions has to be indexed by a tree (in a manner reminiscent of the well-separated pairs decomposition), and a certain key “weak Bessel inequality” becomes considerably more delicate. Our ultimate conclusion is that the multilinear operator

T(V_1,\ldots,V_n)(x) := \int_{(\xi_1,\ldots,\xi_n) \in S} \hat V_1(\xi_1) \ldots \hat V_n(\xi_n) e^{2i (\xi_1+\ldots+\xi_n) x}\ d\xi_1 \ldots d\xi_n (1)

(which generalises the bilinear Hilbert transform and the biest) obeys Hölder-type L^p estimates (note that Hölder’s inequality corresponds to the situation in which the (projective) simplex S is replaced by the entire frequency space {\Bbb R}^n).
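Schematically, “Hölder-type L^p estimates” here means bounds of the form

\| T(V_1,\ldots,V_n) \|_{L^p({\Bbb R})} \leq C_{p_1,\ldots,p_n} \| V_1 \|_{L^{p_1}({\Bbb R})} \ldots \| V_n \|_{L^{p_n}({\Bbb R})} \hbox{ whenever } \frac{1}{p} = \frac{1}{p_1} + \ldots + \frac{1}{p_n},

for tuples of exponents p_1,\ldots,p_n in a suitable range (we refer to the paper for the precise range). Note that when S is replaced by all of {\Bbb R}^n, the operator T is just the pointwise product V_1 \ldots V_n (up to normalising constants), and such bounds reduce to Hölder’s inequality.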

For the remainder of this post, I thought I would describe the “nonlinear Carleson theorem” conjecture, which is still one of my favourite open problems, being an excellent benchmark for measuring progress in the (still nascent) field of “nonlinear Fourier analysis“, while also being of interest in its own right in scattering and spectral theory.

My starting point will be the one-dimensional time-independent Schrödinger equation

- u_{xx}(k,x) + V(x) u(k,x) = k^2 u(k,x) (2)

where V: {\Bbb R} \to {\Bbb R} is a given potential function, k \in {\Bbb R} is a frequency parameter, and u: {\Bbb R} \times {\Bbb R} \to {\Bbb C} is the wave function. This equation (after reinstating constants such as Planck’s constant \hbar, which we have normalised away) describes the instantaneous state of a quantum particle with energy k^2 in the presence of the potential V. To avoid technicalities let us assume that V is smooth and compactly supported (say in the interval [-R,R]) for now, though the eventual conjecture will concern potentials V that are merely square-integrable.

For each fixed frequency k, the equation (2) is a linear homogeneous second order ODE, and so has a two-dimensional space of solutions. In the free case V=0, the solution space is given by

u(k,x) = \alpha(k) e^{ikx} + \beta(k) e^{-ikx} (3)

where \alpha(k) and \beta(k) are arbitrary complex numbers; physically, these numbers represent the amplitudes of the rightward and leftward propagating components of the solution respectively.

Now suppose that V is non-zero, but is still compactly supported on an interval [-R,+R]. Then for a fixed frequency k, a solution to (2) will still behave like (3) in the regions x > R and x < -R, where the potential vanishes; however, the amplitudes on either side of the potential may be different. Thus we would have

u(k,x) = \alpha_+(k) e^{ikx} + \beta_+(k) e^{-ikx}

for x > R and

u(k,x) = \alpha_-(k) e^{ikx} + \beta_-(k) e^{-ikx}

for x < -R. Since there is only a two-dimensional linear space of solutions, the four complex numbers \alpha_-(k), \beta_-(k), \alpha_+(k), \beta_+(k) must be related to each other by a linear relationship of the form

\begin{pmatrix} \alpha_+(k) \\ \beta_+(k) \end{pmatrix} = \overbrace{V}(k)  \begin{pmatrix} \alpha_-(k) \\ \beta_-(k) \end{pmatrix}

where \overbrace{V}(k) is a 2 \times 2 matrix depending on V and k, known as the scattering matrix of V at frequency k. (We choose this notation to deliberately invoke a resemblance to the Fourier transform \hat V(k) := \int_{-\infty}^\infty V(x) e^{-2ikx}\ dx of V; more on this later.) Physically, this matrix determines how much of an incoming wave at frequency k gets reflected by the potential, and how much gets transmitted.
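As a purely illustrative aside, this definition is easy to explore numerically. The following minimal sketch (not from the paper; the sample bump potential, the frequency k=1, the step count, and the use of a classical fourth-order Runge-Kutta integrator are all arbitrary choices) integrates (2) across the support of the potential, starting from plane wave data at x = -R, and reads off \alpha_+(k), \beta_+(k) by matching to plane waves at x = +R.

import numpy as np

R = 5.0

def V(x):
    # sample smooth bump potential supported on [-R, R] (arbitrary illustrative choice)
    return 0.3 * np.exp(-1.0 / (1.0 - (x / R) ** 2)) if abs(x) < R else 0.0

def scattering_matrix(k, steps=20000):
    # integrate u'' = (V - k^2) u from x = -R to x = +R by classical RK4,
    # once for each of the two basis choices of (alpha_-, beta_-)
    xs = np.linspace(-R, R, steps + 1)
    h = xs[1] - xs[0]
    def rhs(x, y):
        u, up = y
        return np.array([up, (V(x) - k * k) * u])
    columns = []
    for am, bm in [(1.0, 0.0), (0.0, 1.0)]:
        # initial data matching alpha_- e^{ikx} + beta_- e^{-ikx} at x = -R
        u0 = am * np.exp(-1j * k * R) + bm * np.exp(1j * k * R)
        up0 = 1j * k * (am * np.exp(-1j * k * R) - bm * np.exp(1j * k * R))
        y = np.array([u0, up0], dtype=complex)
        for x in xs[:-1]:
            k1 = rhs(x, y)
            k2 = rhs(x + h / 2, y + h / 2 * k1)
            k3 = rhs(x + h / 2, y + h / 2 * k2)
            k4 = rhs(x + h, y + h * k3)
            y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        u, up = y
        # match to alpha_+ e^{ikx} + beta_+ e^{-ikx} at x = +R
        ap = 0.5 * (u + up / (1j * k)) * np.exp(-1j * k * R)
        bp = 0.5 * (u - up / (1j * k)) * np.exp(1j * k * R)
        columns.append([ap, bp])
    return np.array(columns).T  # columns are the images of (1,0) and (0,1)

M = scattering_matrix(1.0)
print(M)                 # the 2x2 scattering matrix at frequency k = 1
print(np.linalg.det(M))  # numerically close to 1, as discussed in the next paragraph

Up to discretisation error, the output matrix should exhibit the determinant and symmetry properties described next.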

What can we say about the matrix \overbrace{V}(k)? By using the Wronskian of two solutions to (2) (or by viewing (2) as a Hamiltonian flow in phase space) we can show that \overbrace{V}(k) must have determinant 1. Also, by using the observation that the solution space to (2) is closed under complex conjugation u(k,x) \mapsto \overline{u(k,x)}, one sees that each coefficient of the matrix \overbrace{V}(k) is the complex conjugate of the diagonally opposite coefficient. Combining the two, we see that \overbrace{V}(k) takes values in the Lie group

SU(1,1) := \{ \begin{pmatrix} a & \overline{b} \\ b & \overline{a} \end{pmatrix}: a,b \in {\Bbb C}, |a|^2-|b|^2 = 1 \}

(which, incidentally, is isomorphic to SL_2({\Bbb R})), thus we have

\overbrace{V}(k) =  \begin{pmatrix} a(k) & \overline{b(k)} \\ b (k) & \overline{a(k)} \end{pmatrix}

for some functions a: {\Bbb R} \to {\Bbb C} and b: {\Bbb R} \to {\Bbb C} obeying the constraint |a(k)|^2 - |b(k)|^2 = 1. (The functions \frac{1}{a(k)} and \frac{b(k)}{a(k)} are sometimes known as the transmission coefficient and reflection coefficient respectively; note that they square-sum to 1, a fact related to the law of conservation of energy.) These coefficients evolve in a beautifully simple manner if V evolves via the Korteweg-de Vries (KdV) equation V_t + V_{xxx} = 6VV_x (indeed, one has \partial_t a = 0 and \partial_t b = 8ik^3 b), being part of the fascinating subject of completely integrable systems, but that is a long story which we will not discuss here. This connection does however provide one important source of motivation for studying the scattering transform V \mapsto \overbrace{V} and its inverse.
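(Incidentally, the determinant claim can be seen by a one-line Wronskian computation. The Wronskian W(u_1,u_2) := u_1 \partial_x u_2 - u_2 \partial_x u_1 of any two solutions u_1, u_2 of (2) is independent of x, since its x-derivative vanishes identically. Taking u_1, u_2 to be the solutions with left asymptotics (\alpha_-,\beta_-) = (1,0) and (0,1) respectively, for x < -R we have

W(u_1,u_2) = e^{ikx} (-ik e^{-ikx}) - e^{-ikx} (ik e^{ikx}) = -2ik,

while for x > R, expanding both solutions in the plane waves e^{\pm ikx} and using the bilinearity of the Wronskian, one computes W(u_1,u_2) = -2ik \det \overbrace{V}(k); comparing the two expressions gives \det \overbrace{V}(k) = 1.)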

What are the values of the coefficients a(k), b(k)? In the free case V=0, one has a(k)=1 and b(k)=0. When V is non-zero but very small, one can linearise in V (discarding all terms of order O(V^2) or higher), and obtain the approximation

a(k) \approx 1 -\frac{i}{2k}\int_{-\infty}^\infty V; \quad b(k) \approx \frac{-i}{2k} \hat V(k)

known as the Born approximation; this helps explain why we think of \overbrace{V}(k) as a nonlinear variant of the Fourier transform. A slightly more precise approximation, known as the WKB approximation, is

a(k) \approx e^{-\frac{i}{2k}\int_{-\infty}^\infty V}; \quad b(k) \approx \frac{-i}{2k} e^{-\frac{i}{2k}\int_{-\infty}^\infty V} \int_{-\infty}^{\infty} V(x) e^{-2ikx + \frac{i}{k} \int_{-\infty}^x V}\ dx.
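To see where the Born approximation comes from (a back-of-the-envelope computation; the precise signs and conjugations depend on how one normalises the scattering matrix), rewrite (2) as u_{xx} + k^2 u = V u and apply Duhamel’s formula to the solution which equals e^{ikx} to the left of the support of V:

u(k,x) = e^{ikx} + \int_{-\infty}^x \frac{\sin(k(x-y))}{k} V(y) u(k,y)\ dy.

Substituting the zeroth order approximation u(k,y) \approx e^{iky} into the integral and sending x \to +\infty, the coefficient of e^{ikx} becomes 1 + \frac{1}{2ik} \int_{-\infty}^\infty V = 1 - \frac{i}{2k} \int_{-\infty}^\infty V, while the coefficient of e^{-ikx} becomes a multiple of \frac{1}{2k} \int_{-\infty}^\infty V(y) e^{2iky}\ dy, recovering the Born approximation (up to the conjugation conventions). Iterating the Duhamel formula further, and resumming the non-oscillating factors, leads (roughly speaking) to the WKB approximation and to the multilinear expansion discussed below.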

(One can avoid the additional technicalities caused by the WKB phase correction by working with the Dirac equation instead of the Schrödinger equation; this formulation is in fact cleaner in many respects, but we shall stick with the more traditional Schrödinger formulation here. More generally, one can consider analogous scattering transforms for AKNS systems.) One can in fact expand a(k) and b(k) as formal power series of multilinear integrals in V (distorted slightly by the WKB phase correction e^{\frac{i}{k} \int_{-\infty}^x V}), whose terms resemble the multilinear expression (1) except for some (crucial) sign changes and some WKB phase corrections. It is relatively easy to show that this multilinear series is absolutely convergent for every k when the potential V is absolutely integrable (this is the nonlinear analogue of the obvious fact that the Fourier integral \hat V(k) = \int_{-\infty}^\infty V(x) e^{-2ikx}\ dx is absolutely convergent when V is absolutely integrable; it can also be deduced without recourse to multilinear series by using Levinson’s theorem). If V is not absolutely integrable, but instead lies in L^p({\Bbb R}) for some p > 1, then the series can diverge for some k; this fact is closely related to a classic result of Wigner and von Neumann that the Schrödinger operator can contain embedded pure point spectrum. However, Christ and Kiselev showed that the series is absolutely convergent for almost every k in the case 1 < p < 2 (this is a non-linear version of the Hausdorff-Young inequality). In fact they proved a stronger statement, namely that for almost every k, the eigenfunctions x \mapsto u(k,x) are bounded (and converge asymptotically to plane waves \alpha_\pm(k) e^{ikx} + \beta_\pm(k) e^{-ikx} as x \to \pm\infty). There is an analogue of the Born and WKB approximations for these eigenfunctions, which shows that the Christ-Kiselev result is the nonlinear analogue of a classical result of Menshov, Paley and Zygmund showing the conditional convergence of the Fourier integral \int_{-\infty}^\infty V(x) e^{-2ikx}\ dx for almost every k when V \in L^p({\Bbb R}) for some 1 < p < 2.
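Schematically, and in the cleaner Dirac-type formulation alluded to above (in which the transfer matrix solves a first-order ODE with an off-diagonal matrix potential; the precise signs, conjugations and normalisations below should not be taken too literally), these expansions take the form

a(k) = 1 + \sum_{n \geq 1} \int_{x_1 < \ldots < x_{2n}} V(x_1) \overline{V(x_2)} \ldots V(x_{2n-1}) \overline{V(x_{2n})} e^{-2ik(x_1 - x_2 + \ldots - x_{2n})}\ dx_1 \ldots dx_{2n}

b(k) = \sum_{n \geq 0} \int_{x_1 < \ldots < x_{2n+1}} V(x_1) \overline{V(x_2)} \ldots V(x_{2n+1}) e^{-2ik(x_1 - x_2 + \ldots + x_{2n+1})}\ dx_1 \ldots dx_{2n+1}

(in the Schrödinger setting one also acquires factors of \frac{1}{2ik} and the WKB phase corrections). After taking Fourier transforms, each term becomes a multilinear simplex multiplier of the shape (1), except that the frequencies enter the phase with alternating signs (and the potentials with alternating conjugations); this alternation is the “unhelpful minus sign” alluded to at the beginning of this post, and it is precisely what disappears for generic AKNS systems.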

The analogue of the Menshov-Paley-Zygmund theorem at the endpoint p=2 is the celebrated theorem of Carleson on almost everywhere convergence of Fourier series of L^2 functions. (The claim fails for p > 2, as can be seen by investigating random Fourier series, though I don’t recall the reference for this fact.) The nonlinear version of this would assert that for square-integrable potentials V, the eigenfunctions x \mapsto u(k,x) are bounded for almost every k. This is the nonlinear Carleson theorem conjecture. Unfortunately, it cannot be established by multilinear series, because of a divergence in the trilinear term of the expansion; but other methods may succeed instead. For instance, the weaker statement that the coefficients a(k) and b(k) (defined by density) are well defined and finite almost everywhere for square-integrable V (which is a nonlinear analogue of Plancherel’s theorem that the Fourier transform can be defined by density on L^2({\Bbb R})) was essentially established by Deift and Killip, using a trace formula (a nonlinear analogue to Plancherel’s formula). Also, the “dyadic” or “function field” model of the conjecture is known, by a modification of Carleson’s original argument. But the general case still seems to require more tools; for instance, we still do not have a good nonlinear Littlewood-Paley theory (except in the dyadic case), which is preventing time-frequency type arguments from being extended directly to the nonlinear setting.
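To spell out the analogy with the linear theory a little further: in an essentially equivalent formulation, Carleson’s theorem (for Fourier integrals) asserts that for V \in L^2({\Bbb R}), the partial Fourier integrals \int_{-\infty}^x V(y) e^{-2iky}\ dy are bounded uniformly in x for almost every k. The nonlinear Carleson conjecture replaces these partial Fourier integrals by the scattering data of the truncated potentials V 1_{(-\infty,x]} (or equivalently, by the eigenfunctions u(k,x) themselves), again asking for bounds which are uniform in x for almost every k. (This informal reformulation glosses over the density issues, mentioned above, that are needed to define the scattering data of an arbitrary square-integrable potential.)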