
The Schrödinger equation

$\displaystyle i \hbar \partial_t |\psi \rangle = H |\psi\rangle$

is the fundamental equation of motion for (non-relativistic) quantum mechanics, modeling both one-particle systems and ${N}$-particle systems for ${N>1}$. Remarkably, despite being a linear equation, solutions ${|\psi\rangle}$ to this equation can be governed by a non-linear equation in the large particle limit ${N \rightarrow \infty}$. In particular, when modeling a Bose-Einstein condensate with a suitably scaled interaction potential ${V}$ in the large particle limit, the solution can be governed by the cubic nonlinear Schrödinger equation

$\displaystyle i \partial_t \phi = -\Delta \phi + \lambda |\phi|^2 \phi. \ \ \ \ \ (1)$

I recently attended a talk by Natasa Pavlovic on the rigorous derivation of this type of limiting behaviour, which was initiated by the pioneering work of Hepp and Spohn, and has now attracted a vast recent literature. The rigorous details here are rather sophisticated; but the heuristic explanation of the phenomenon is fairly simple, and actually rather pretty in my opinion, involving the foundational quantum mechanics of ${N}$-particle systems. I am recording this heuristic derivation here, partly for my own benefit, but perhaps it will be of interest to some readers.

This discussion will be purely formal, in the sense that (important) analytic issues such as differentiability, existence and uniqueness, etc. will be largely ignored.

Title: Use basic examples to calibrate exponents

Motivation: In the more quantitative areas of mathematics, such as analysis and combinatorics, one has to frequently keep track of a large number of exponents in one’s identities, inequalities, and estimates.  For instance, if one is studying a set of N elements, then many expressions that one is faced with will often involve some power $N^p$ of N; if one is instead studying a function f on a measure space X, then perhaps it is an $L^p$ norm $\|f\|_{L^p(X)}$ which will appear instead.  The exponent $p$ involved will typically evolve slowly over the course of the argument, as various algebraic or analytic manipulations are applied.  In some cases, the exact value of this exponent is immaterial, but at other times it is crucial to have the correct value of $p$ at hand.   One can (and should) of course carefully go through one’s arguments line by line to work out the exponents correctly, but it is all too easy to make a sign error or other mis-step at one of the lines, causing all the exponents on subsequent lines to be incorrect.  However, one can guard against this (and avoid some tedious line-by-line exponent checking) by continually calibrating these exponents at key junctures of the arguments by using basic examples of the object of study (sets, functions, graphs, etc.) as test cases.  This is a simple trick, but it lets one avoid many unforced errors with exponents, and also lets one compute more rapidly.

Quick description: When trying to quickly work out what an exponent p in an estimate, identity, or inequality should be without deriving that statement line-by-line, test that statement with a simple example which has non-trivial behaviour with respect to that exponent p, but trivial behaviour with respect to as many other components of that statement as one is able to manage.   The “non-trivial” behaviour should be parametrised by some very large or very small parameter.  By matching the dependence on this parameter on both sides of the estimate, identity, or inequality, one should recover p (or at least a good prediction as to what p should be).

General discussion: The test examples should be as basic as possible; ideally they should have trivial behaviour in all aspects except for one feature that relates to the exponent p that one is trying to calibrate, thus being only “barely” non-trivial.   When the object of study is a function, then (appropriately rescaled, or otherwise modified) bump functions are very typical test objects, as are Dirac masses, constant functions, Gaussians, or other functions that are simple and easy to compute with.  In additive combinatorics, when the object of study is a subset of a group, then subgroups, arithmetic progressions, or random sets are typical test objects.  In graph theory, typical examples of test objects include complete graphs, complete bipartite graphs, and random graphs. And so forth.
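One can even carry out this sort of calibration numerically. As a minimal sketch (in Python, with the Gaussian test bump and the parameter values being arbitrary choices for illustration), the scaling identity $\|f(\cdot/\lambda)\|_{L^p({\Bbb R}^d)} = \lambda^{d/p} \|f\|_{L^p({\Bbb R}^d)}$ can be recovered by testing with a single bump function and reading off the exponent from the dependence on the large parameter $\lambda$:

```python
import numpy as np

def lp_norm(f_vals, dx, p):
    """Approximate the L^p norm of a sampled function on R by a Riemann sum."""
    return (np.sum(np.abs(f_vals) ** p) * dx) ** (1.0 / p)

def calibrated_exponent(p, lam=100.0):
    """Estimate alpha in ||f(./lam)||_{L^p} = lam^alpha ||f||_{L^p},
    using a Gaussian bump as the test function (d = 1, so alpha should be 1/p)."""
    x = np.linspace(-5.0 * lam, 5.0 * lam, 1_000_001)
    dx = x[1] - x[0]
    f = np.exp(-x ** 2)                # the test bump f
    f_lam = np.exp(-(x / lam) ** 2)    # the rescaled bump f(x/lam)
    ratio = lp_norm(f_lam, dx, p) / lp_norm(f, dx, p)
    return np.log(ratio) / np.log(lam)
```

For $p = 2$ this returns a value very close to $1/2$, and for $p = 4$ a value close to $1/4$, matching the predicted exponent $d/p$ in dimension $d = 1$.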

This trick is closely related to that of using dimensional analysis to recover exponents; indeed, one can view dimensional analysis as the special case of exponent calibration when using test objects which are non-trivial in one dimensional aspect (e.g. they exist at a single very large or very small length scale) but are otherwise of a trivial or “featureless” nature.   But the calibration trick is more general, as it can involve parameters (such as probabilities, angles, or eccentricities) which are not commonly associated with the physical concept of a dimension.  And personally, I find example-based calibration to be a much more satisfying (and convincing) explanation of an exponent than a calibration arising from formal dimensional analysis.

When one is trying to calibrate an inequality or estimate, one should try to pick a basic example which one expects to saturate that inequality or estimate, i.e. an example for which the inequality is close to being an equality.  Otherwise, one would only expect to obtain some partial information on the desired exponent p (e.g. a lower bound or an upper bound only).  Knowing the examples that saturate an estimate that one is trying to prove is also useful for several other reasons – for instance, it strongly suggests that any technique which is not efficient when applied to the saturating example, is unlikely to be strong enough to prove the estimate in general, thus eliminating fruitless approaches to a problem and (hopefully) refocusing one’s attention on those strategies which actually have a chance of working.
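To illustrate the point about saturation with a toy example (a Python sketch; the choice of sequences and of $N$ is arbitrary): suppose one wishes to calibrate the exponent $p$ in the Cauchy-Schwarz-type bound $|\sum_{i=1}^N a_i| \leq N^p (\sum_{i=1}^N a_i^2)^{1/2}$. The constant sequence saturates this inequality and recovers the correct exponent $p = 1/2$, whereas a random-sign sequence is far from saturating it and only yields the partial information $p \geq 0$:

```python
import numpy as np

def required_exponent(a):
    """Given a test sequence a_1,...,a_N, return the exponent p that makes
    |sum_i a_i| = N^p * (sum_i a_i^2)^{1/2} an equality for this sequence."""
    N = len(a)
    lhs = abs(np.sum(a))
    rhs_base = np.sqrt(np.sum(a ** 2))
    return np.log(lhs / rhs_base) / np.log(N)

N = 10 ** 6
const_seq = np.ones(N)                          # saturates Cauchy-Schwarz: p = 1/2
rng = np.random.default_rng(0)
sign_seq = rng.choice([-1.0, 1.0], size=N)      # random signs: far from saturating
```

Here `required_exponent(const_seq)` is $1/2$, while `required_exponent(sign_seq)` is close to $0$, giving only a lower bound on $p$ rather than its true value.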

Calibration is best used for the type of quick-and-dirty calculations one uses when trying to rapidly map out an argument that one has roughly worked out already, but without precise details; in particular, I find it particularly useful when writing up a rapid prototype.  When the time comes to write out the paper in full detail, then of course one should instead carefully work things out line by line, but if all goes well, the exponents obtained in that process should match up with the preliminary guesses for those exponents obtained by calibration, which adds confidence that no exponent errors have been committed.

Prerequisites: Undergraduate analysis and combinatorics.

Jim Colliander, Mark Keel, Gigliola Staffilani, Hideo Takaoka, and I have just uploaded to the arXiv the paper “Weakly turbulent solutions for the cubic defocusing nonlinear Schrödinger equation“, which we have submitted to Inventiones Mathematicae. This paper concerns the numerically observed phenomenon of weak turbulence for the periodic defocusing cubic non-linear Schrödinger equation

$-i u_t + \Delta u = |u|^2 u$ (1)

in two spatial dimensions, thus u is a function from ${\Bbb R} \times {\Bbb T}^2$ to ${\Bbb C}$.  This equation has three important conserved quantities: the mass

$M(u) = M(u(t)) := \int_{{\Bbb T}^2} |u(t,x)|^2\ dx$

the momentum

$\vec p(u) = \vec p(u(t)) = \int_{{\Bbb T}^2} \hbox{Im}( \nabla u(t,x) \overline{u(t,x)} )\ dx$

and the energy

$E(u) = E(u(t)) := \int_{{\Bbb T}^2} \frac{1}{2} |\nabla u(t,x)|^2 + \frac{1}{4} |u(t,x)|^4\ dx$.

(These conservation laws, incidentally, are related to the basic symmetries of phase rotation, spatial translation, and time translation, via Noether’s theorem.) Using these conservation laws and some standard PDE technology (specifically, some Strichartz estimates for the periodic Schrödinger equation), one can establish global wellposedness for the initial value problem for this equation in (say) the smooth category; thus for every smooth $u_0: {\Bbb T}^2 \to {\Bbb C}$ there is a unique global smooth solution $u: {\Bbb R} \times {\Bbb T}^2 \to {\Bbb C}$ to (1) with initial data $u(0,x) = u_0(x)$, whose mass, momentum, and energy remain constant for all time.

However, the mass, momentum, and energy only control three of the infinitely many degrees of freedom available to a function on the torus, and so the above result does not fully describe the dynamics of solutions over time.  In particular, the three conserved quantities inhibit, but do not fully prevent the possibility of a low-to-high frequency cascade, in which the mass, momentum, and energy of the solution remain conserved, but shift to increasingly higher frequencies (or equivalently, to finer spatial scales) as time goes to infinity.  This phenomenon has been observed numerically, and is sometimes referred to as weak turbulence (in contrast to strong turbulence, which is similar but happens within a finite time span rather than asymptotically).

To illustrate how this can happen, let us normalise the torus as ${\Bbb T}^2 = ({\Bbb R}/2\pi {\Bbb Z})^2$.  A simple example of a frequency cascade would be a scenario in which the solution $u(t,x) = u(t,x_1,x_2)$ starts off at a low frequency at time zero, e.g. $u(0,x) = A e^{i x_1}$ for some constant amplitude A, and ends up at a high frequency at a later time T, e.g. $u(T,x) = A e^{i N x_1}$ for some large frequency N. This scenario is consistent with conservation of mass, but not with conservation of energy or momentum, and thus does not actually occur for solutions to (1).  A more complicated example would be a solution which starts off supported on two low frequencies at time zero, e.g. $u(0,x) = A e^{ix_1} + A e^{-ix_1}$, and ends up at two high frequencies later, e.g. $u(T,x) = A e^{iNx_1} + A e^{-iNx_1}$.  This scenario is consistent with conservation of mass and momentum, but not energy.  Finally, consider the scenario which starts off at $u(0,x) = A e^{i Nx_1} + A e^{iNx_2}$ and ends up at $u(T,x) = A + A e^{i(N x_1 + N x_2)}$.  This scenario is consistent with all three conservation laws, and exhibits a mild example of a low-to-high frequency cascade, in which the solution starts off at frequency N and ends up with half of its mass at the slightly higher frequency $\sqrt{2} N$, with the other half of its mass at the zero frequency.  More generally, given four frequencies $n_1, n_2, n_3, n_4 \in {\Bbb Z}^2$ which form the four vertices of a rectangle in order, one can concoct a similar scenario, compatible with all conservation laws, in which the solution starts off at frequencies $n_1, n_3$ and propagates to frequencies $n_2, n_4$.
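One can verify on a discrete grid that the third scenario really is compatible with all three conservation laws. The following Python sketch (the values of $A$, $N$, and the grid size are arbitrary choices; the grid is taken fine enough that the sums below integrate these trigonometric polynomials exactly) computes the mass, momentum, and energy of the two states spectrally and confirms that they agree:

```python
import numpy as np

# Sample the torus (R / 2 pi Z)^2 on an M x M grid.
A, N, M = 1.3, 5, 64
x = np.arange(M) * 2 * np.pi / M
X1, X2 = np.meshgrid(x, x, indexing="ij")
dA = (2 * np.pi / M) ** 2                      # area element

def spectral_grad(u):
    """Gradient of a periodic grid function, computed via the FFT."""
    k = np.fft.fftfreq(M, d=1.0 / M)           # integer frequencies
    uh = np.fft.fft2(u)
    return (np.fft.ifft2(1j * k[:, None] * uh),
            np.fft.ifft2(1j * k[None, :] * uh))

def mass(u):
    return np.sum(np.abs(u) ** 2) * dA

def momentum(u):
    u1, u2 = spectral_grad(u)
    return np.array([np.sum((u1 * np.conj(u)).imag),
                     np.sum((u2 * np.conj(u)).imag)]) * dA

def energy(u):
    u1, u2 = spectral_grad(u)
    kinetic = 0.5 * (np.abs(u1) ** 2 + np.abs(u2) ** 2)
    return np.sum(kinetic + 0.25 * np.abs(u) ** 4) * dA

u0 = A * np.exp(1j * N * X1) + A * np.exp(1j * N * X2)   # state at time zero
uT = A + A * np.exp(1j * (N * X1 + N * X2))              # cascaded state at time T
```

Up to floating-point roundoff, `mass`, `momentum`, and `energy` return the same values for `u0` and `uT`, as the heuristic computation above predicts.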

One way to measure a frequency cascade quantitatively is to use the Sobolev norms $H^s({\Bbb T}^2)$ for $s > 1$; roughly speaking, a low-to-high frequency cascade occurs precisely when these Sobolev norms get large.  (Note that mass and energy conservation ensure that the $H^s({\Bbb T}^2)$ norms stay bounded for $0 \leq s \leq 1$.)  For instance, in the cascade from $u(0,x) = A e^{i Nx_1} + A e^{iNx_2}$ to $u(T,x) = A + A e^{i(N x_1 + N x_2)}$, the $H^s({\Bbb T}^2)$ norm is roughly $2^{1/2} A N^s$ at time zero and $2^{s/2} A N^s$ at time T, leading to a slight increase in that norm for $s > 1$.  Numerical evidence then suggests the following

Conjecture. (Weak turbulence) There exist smooth solutions $u(t,x)$ to (1) such that $\|u(t)\|_{H^s({\Bbb T}^2)}$ goes to infinity as $t \to \infty$ for any $s > 1$.
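As a sanity check on the Sobolev-norm bookkeeping above, one can compute the $H^s$ norms of the two states in the mild cascade scenario directly from their Fourier coefficients. A minimal Python sketch follows ($A$, $N$, and $s$ are arbitrary choices; the homogeneous convention $\|u\|_{H^s}^2 = \sum_n |n|^{2s} |\hat u(n)|^2$ is used, under which the zero mode drops out, matching the word "roughly" above):

```python
import numpy as np

def hs_norm(coeffs, s):
    """Homogeneous H^s norm of a trigonometric polynomial on T^2,
    given its Fourier coefficients as a dict {(n1, n2): amplitude}."""
    return np.sqrt(sum((n1 ** 2 + n2 ** 2) ** s * abs(c) ** 2
                       for (n1, n2), c in coeffs.items()))

A, N, s = 1.0, 32, 1.5
u0 = {(N, 0): A, (0, N): A}     # frequencies at time zero
uT = {(0, 0): A, (N, N): A}     # frequencies after the mild cascade

ratio = hs_norm(uT, s) / hs_norm(u0, s)   # predicted: 2^{(s-1)/2} > 1 for s > 1
```

Here `hs_norm(u0, s)` equals $2^{1/2} A N^s$ and the ratio equals $2^{(s-1)/2}$, confirming the slight norm increase for $s > 1$ (and the absence of any increase at $s = 1$).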

We were not able to establish this conjecture, but we have the following partial result (“weak weak turbulence”, if you will):

Theorem. Given any $\varepsilon > 0, K > 0, s > 1$, there exists a smooth solution $u(t,x)$ to (1) such that $\|u(0)\|_{H^s({\Bbb T}^2)} \leq \varepsilon$ and $\|u(T)\|_{H^s({\Bbb T}^2)} > K$ for some time T.

This is in marked contrast to (1) in one spatial dimension ${\Bbb T}$, which is completely integrable and has an infinite number of conservation laws beyond the mass, energy, and momentum which serve to keep all $H^s({\Bbb T})$ norms bounded in time.  It is also in contrast to the linear Schrödinger equation, in which all Sobolev norms are preserved, and to the non-periodic analogue of (1), which is conjectured to disperse to a linear solution (i.e. to scatter) from any finite mass data (see this earlier post for the current status of that conjecture).  Thus our theorem can be viewed as evidence that the 2D periodic cubic NLS does not behave at all like a completely integrable system or a linear equation, even for small data.  (An earlier result of Kuksin gives (in our notation) the weaker result that the ratio $\|u(T)\|_{H^s({\Bbb T}^2)} / \|u(0)\|_{H^s({\Bbb T}^2)}$ can be made arbitrarily large when $s > 1$, thus showing that large initial data can exhibit movement to higher frequencies; the point of our paper is that we can achieve the same for arbitrarily small data.) Intuitively, the problem is that the torus is compact and so there is no place for the solution to disperse its mass; instead, it must continually interact nonlinearly with itself, which is what eventually causes the weak turbulence.

I’ve just uploaded to the arXiv the paper “The cubic nonlinear Schrödinger equation in two dimensions with radial data“, joint with Rowan Killip and Monica Visan, and submitted to the Annals of Mathematics. This is a sequel of sorts to my paper with Monica and Xiaoyi Zhang, in which we established global well-posedness and scattering for the defocusing mass-critical nonlinear Schrödinger equation (NLS) $iu_t + \Delta u = |u|^{4/d} u$ in three and higher dimensions $d \geq 3$ assuming spherically symmetric data. (This is another example of the recently active field of critical dispersive equations, in which both coarse and fine scales are (just barely) nonlinearly active, and propagate at different speeds, leading to significant technical difficulties.)

In this paper we obtain the same result for the defocusing two-dimensional mass-critical NLS $iu_t + \Delta u= |u|^2 u$, as well as in the focusing case $iu_t + \Delta u= -|u|^2 u$ under the additional assumption that the mass of the initial data is strictly less than the mass of the ground state. (When the mass equals that of the ground state, there is an explicit example, built using the pseudoconformal transformation, which shows that solutions can blow up in finite time.) In fact we can show a slightly stronger statement: for spherically symmetric focusing solutions with arbitrary mass, we can show that the first singularity that forms concentrates at least as much mass as the ground state.

My paper “Resonant decompositions and the I-method for the cubic nonlinear Schrodinger equation on ${\Bbb R}^2$“, with Jim Colliander, Mark Keel, Gigliola Staffilani, and Hideo Takaoka (aka the “I-team“), has just been uploaded to the arXiv, and submitted to DCDS-A. In this (long-delayed!) paper, we improve our previous result on the global well-posedness of the cubic defocusing nonlinear Schrödinger equation

$i u_t+ \Delta u = |u|^2 u$

in two spatial dimensions, thus $u: {\Bbb R} \times {\Bbb R}^2 \to {\Bbb C}$. In that paper we used the “first generation I-method” (centred around an almost conservation law for a mollified energy $E(Iu)$) to obtain global well-posedness in $H^s({\Bbb R}^2)$ for $s > 4/7$ (improving on an earlier result of $s > 2/3$ by Bourgain). Here we use the “second generation I-method”, in which the mollified energy $E(Iu)$ is adjusted by a correction term to damp out “non-resonant interactions” and thus lead to an improved almost conservation law, and ultimately to an improvement of the well-posedness range to $s > 1/2$. (The conjectured region is $s \geq 0$; below that threshold, the solution becomes unstable and even local well-posedness is not known.) A similar result (but using Morawetz estimates instead of correction terms) has recently been established by Colliander-Grillakis-Tzirakis; this attains the superior range of $s > 2/5$, but in the focusing case it does not give global existence all the way up to the ground state due to a slight inefficiency in the Morawetz estimate approach. Our method is in fact rather robust and indicates that the “second generation” I-method can be pushed further for a large class of dispersive PDE.