Consider the free Schrödinger equation in $d$ spatial dimensions, which I will normalise as
where is the unknown field and
is the spatial Laplacian. To avoid irrelevant technical issues I will restrict attention to smooth (classical) solutions to this equation, and will work locally in spacetime avoiding issues of decay at infinity (or at other singularities); I will also avoid issues involving branch cuts of functions such as
(if one wishes, one can restrict the dimension $d$
to be even in order to safely ignore all branch cut issues). The space of solutions to (1) enjoys a number of symmetries. A particularly non-obvious symmetry is the pseudoconformal symmetry: if
solves (1), then the pseudoconformal solution
defined by
for $t \neq 0$ can be seen after some computation to also solve (1). (If
has suitable decay at spatial infinity and one chooses a suitable branch cut for
, one can extend
continuously to the $t=0$ spatial slice, whereupon it becomes essentially the spatial Fourier transform of
, but we will not need this fact for the current discussion.)
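As a quick sanity check on computations of this type, here is a minimal SymPy sketch in one spatial dimension. It assumes the common normalisation $i \partial_t u + \partial_{xx} u = 0$ and one standard form of the pseudoconformal transformation (both of which may differ from the conventions used above), and verifies that the pseudoconformal image of a plane-wave solution again solves the free equation.

```python
# Sketch: verify that the pseudoconformal image of a plane-wave solution of
# i u_t + u_xx = 0 (one common normalisation; the normalisation used in the
# post may differ) again solves the equation.  We work with t > 0 to sidestep
# the branch-cut issues for t**(-1/2) mentioned above.
import sympy as sp

t = sp.symbols('t', positive=True)
x, xi = sp.symbols('x xi', real=True)

def schrodinger(f):
    """Free Schrödinger operator i d/dt + d^2/dx^2 applied to f(t, x)."""
    return sp.I * sp.diff(f, t) + sp.diff(f, x, 2)

def u(tt, xx):
    """A plane-wave solution exp(i(x*xi - xi^2 t)) of the free equation."""
    return sp.exp(sp.I * (xx * xi - xi**2 * tt))

assert sp.simplify(schrodinger(u(t, x))) == 0

# Pseudoconformal image (in this normalisation):
#   v(t, x) = t^{-1/2} * exp(i x^2 / (4 t)) * u(-1/t, x/t)
v = t**sp.Rational(-1, 2) * sp.exp(sp.I * x**2 / (4 * t)) * u(-1/t, x/t)

print(sp.simplify(schrodinger(v)))   # expect 0
```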
An analogous symmetry exists for the free wave equation in $d+1$ spatial dimensions, which I will write as
where is the unknown field. In analogy to pseudoconformal symmetry, we have conformal symmetry: if
solves (3), then the function
, defined in the interior
of the light cone by the formula
also solves (3).
There are also some direct links between the Schrödinger equation in $d$ dimensions and the wave equation in $d+1$ dimensions. This can be easily seen on the spacetime Fourier side: solutions to (1) have spacetime Fourier transform (formally) supported on a $d$-dimensional paraboloid, while solutions to (3) have spacetime Fourier transform formally supported on a $(d+1)$-dimensional cone. To link the two, one then observes that the $d$-dimensional paraboloid can be viewed as a conic section (i.e. hyperplane slice) of the $(d+1)$-dimensional cone. In physical space, this link is manifested as follows: if
solves (1), then the function
defined by
solves (3). More generally, for any non-zero scaling parameter , the function
defined by
solves (3).
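The lifting from Schrödinger solutions to wave solutions can also be checked symbolically. The following SymPy sketch assumes the normalisations $i\partial_t u + \partial_{xx} u = 0$ and $-\partial_{tt} U + \Delta U = 0$, together with a particular choice of null coordinate and oscillating factor (which need not match the precise embedding above): it lifts a one-dimensional plane-wave Schrödinger solution to a solution of the wave equation in two spatial dimensions.

```python
# Sketch: lift a plane-wave solution of i u_t + u_xx = 0 (one spatial
# dimension) to a solution of the wave equation -U_tt + U_xx + U_yy = 0
# (two spatial dimensions), using a null combination of the wave time and the
# extra spatial variable as the Schrödinger time.  The exact normalisation and
# scaling parameter in the post may differ; this is only an illustration.
import sympy as sp

t, x, y, xi = sp.symbols('t x y xi', real=True)

def u(ss, xx):
    """Plane-wave solution exp(i(x*xi - xi^2 s)) of i u_s + u_xx = 0."""
    return sp.exp(sp.I * (xx * xi - xi**2 * ss))

def wave(F):
    """d'Alembertian -d^2/dt^2 + d^2/dx^2 + d^2/dy^2 applied to F(t, x, y)."""
    return -sp.diff(F, t, 2) + sp.diff(F, x, 2) + sp.diff(F, y, 2)

# Candidate embedding: U(t, x, y) = exp(-i (t + y) / 4) * u(t - y, x)
U = sp.exp(-sp.I * (t + y) / 4) * u(t - y, x)

print(sp.simplify(wave(U)))   # expect 0
```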
As an “extra challenge” posed in an exercise in one of my books (Exercise 2.28, to be precise), I asked the reader to use the embeddings (or more generally
) to explicitly connect together the pseudoconformal transformation
and the conformal transformation
. It turns out that this connection is a little bit unusual, with the “obvious” guess (namely, that the embeddings
intertwine
and
) being incorrect, and as such this particular task was perhaps too difficult even for a challenge question. I’ve been asked a couple times to provide the connection more explicitly, so I will do so below the fold.
Let be a self-adjoint operator on a finite-dimensional Hilbert space
. The behaviour of this operator can be completely described by the spectral theorem for finite-dimensional self-adjoint operators (i.e. Hermitian matrices, when viewed in coordinates), which provides a sequence
of eigenvalues and an orthonormal basis
of eigenfunctions such that
for all
. In particular, given any function
on the spectrum
of
, one can then define the linear operator
by the formula
which then gives a functional calculus, in the sense that the map is a $*$-algebra isometric homomorphism from the algebra
of bounded continuous functions from
to
, to the algebra
of bounded linear operators on
. Thus, for instance, one can define heat operators
for
, Schrödinger operators
for
, resolvents
for
, and (if
is positive) wave operators
for
. These will be bounded operators (and, in the case of the Schrödinger and wave operators, unitary operators, and in the case of the heat operators with
positive, they will be contractions). Among other things, this functional calculus can then be used to solve differential equations such as the heat equation
The functional calculus can also be associated to a spectral measure. Indeed, for any vectors , there is a complex measure
on
with the property that
indeed, one can set to be the discrete measure on
defined by the formula
One can also view this complex measure as a coefficient
of a projection-valued measure on
, defined by setting
Finally, one can view as unitarily equivalent to a multiplication operator
on
, where
is the real-valued function
, and the intertwining map
is given by
so that .
It is an important fact in analysis that many of these above assertions extend to operators on an infinite-dimensional Hilbert space , so long as one is careful about what “self-adjoint operator” means; these facts are collectively referred to as the spectral theorem. For instance, it turns out that most of the above claims have analogues for bounded self-adjoint operators
. However, in the theory of partial differential equations, one often needs to apply the spectral theorem to unbounded, densely defined linear operators
, which (initially, at least), are only defined on a dense subspace
of the Hilbert space
. A very typical situation arises when
is the square-integrable functions on some domain or manifold
(which may have a boundary or be otherwise “incomplete”), and
are the smooth compactly supported functions on
, and
is some linear differential operator. It is then of interest to obtain the spectral theorem for such operators, so that one can build operators such as
or to solve equations such as (1), (2), (3), (4).
In order to do this, some necessary conditions on the densely defined operator must be imposed. The most obvious is that of symmetry, which asserts that
for all . In some applications, one also wants to impose positive definiteness, which asserts that
for all . These hypotheses are sufficient in the case when
is bounded, and in particular when
is finite dimensional. However, as it turns out, for unbounded operators these conditions are not, by themselves, enough to obtain a good spectral theory. For instance, one consequence of the spectral theorem should be that the resolvents
are well-defined for any strictly complex
, which by duality implies that the image of
should be dense in
. However, this can fail if one just assumes symmetry, or symmetry and positive definiteness. A well-known example occurs when
is the Hilbert space
,
is the space of test functions, and
is the one-dimensional Laplacian
. Then
is symmetric and positive, but the operator
does not have dense image for any complex
, since
for all test functions , as can be seen from a routine integration by parts. As such, the resolvent map is not everywhere uniquely defined. There is also a lack of uniqueness for the wave, heat, and Schrödinger equations for this operator (note that there are no spatial boundary conditions specified in these equations).
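To see the obstruction concretely, here is a small symbolic check under some illustrative assumptions (the domain is taken to be the half-line $(0,+\infty)$, the spectral parameter is $z = 2i$, and the test function is a concrete $C^2$ bump; the precise formula in the text may differ): integration by parts forces $(-\frac{d^2}{dx^2} - z)u$ to be orthogonal to the $L^2$ function $e^{-(1+i)x}$, so the image cannot be dense.

```python
# Sketch: the integration-by-parts obstruction for L = -d^2/dx^2 with test
# functions on the half-line (0, infinity) -- an illustrative choice of domain.
# For the non-real spectral parameter z = 2i, the L^2 function
# phi(x) = exp(-(1+i) x) satisfies -phi'' = conj(z) phi, so <(L - z)u, phi> = 0
# for every test function u, and hence L - z cannot have dense image.
import sympy as sp

x = sp.symbols('x', real=True)
z = 2 * sp.I

# A concrete C^2 bump supported in [1, 2] (u and u' vanish at the endpoints).
u = (x - 1)**3 * (2 - x)**3

phi_bar = sp.exp(-(1 - sp.I) * x)      # complex conjugate of exp(-(1+i) x)
inner = sp.integrate((-sp.diff(u, x, 2) - z * u) * phi_bar, (x, 1, 2))

print(sp.simplify(inner))   # expect 0
```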
Another example occurs when ,
,
is the momentum operator
. Then the resolvent
can be uniquely defined for
in the upper half-plane, but not in the lower half-plane, due to the obstruction
for all test functions (note that the function
lies in
when
is in the lower half-plane). For related reasons, the translation operators
have a problem with either uniqueness or existence (depending on whether
is positive or negative), due to the unspecified boundary behaviour at the origin.
The key property that lets one avoid this bad behaviour is that of essential self-adjointness. Once is essentially self-adjoint, then the spectral theorem becomes applicable again, leading to all the expected behaviour (e.g. existence and uniqueness for the various PDE given above).
Unfortunately, the concept of essential self-adjointness is defined rather abstractly, and is difficult to verify directly; unlike the symmetry condition (5) or the positive definiteness condition (6), it is not a “local” condition that can be easily verified just by testing on various inputs, but is instead a more “global” condition. In practice, to verify this property, one needs to invoke one of a number of partial converses to the spectral theorem, which roughly speaking assert that if at least one of the expected consequences of the spectral theorem is true for some symmetric densely defined operator
, then
is self-adjoint. Examples of “expected consequences” include:
- Existence of resolvents
(or equivalently, dense image for
);
- Existence of a contractive heat propagator semigroup
(in the positive case);
- Existence of a unitary Schrödinger propagator group
;
- Existence of a unitary wave propagator group
(in the positive case);
- Existence of a “reasonable” functional calculus.
- Unitary equivalence with a multiplication operator.
Thus, to actually verify essential self-adjointness of a differential operator, one typically has to first solve a PDE (such as the wave, Schrödinger, heat, or Helmholtz equation) by some non-spectral method (e.g. by a contraction mapping argument, or a perturbation argument based on an operator already known to be essentially self-adjoint). Once one can solve one of the PDEs, then one can apply one of the known converse spectral theorems to obtain essential self-adjointness, and then by the forward spectral theorem one can then solve all the other PDEs as well. But there is no getting out of that first step, which requires some input (typically of an ODE, PDE, or geometric nature) that is external to what abstract spectral theory can provide. For instance, if one wants to establish essential self-adjointness of the Laplace-Beltrami operator on a smooth Riemannian manifold
(using
as the domain space), it turns out (under reasonable regularity hypotheses) that essential self-adjointness is equivalent to geodesic completeness of the manifold, which is a global ODE condition rather than a local one: one needs geodesics to continue indefinitely in order to be able to (unitarily) solve PDEs such as the wave equation, which in turn leads to essential self-adjointness. (Note that the domains
and
in the previous examples were not geodesically complete.) For this reason, essential self-adjointness of a differential operator is sometimes referred to as quantum completeness (with the completeness of the associated Hamilton-Jacobi flow then being the analogous classical completeness).
In these notes, I wanted to record (mostly for my own benefit) the forward and converse spectral theorems, and to verify essential self-adjointness of the Laplace-Beltrami operator on geodesically complete manifolds. This is extremely standard analysis (covered, for instance, in the texts of Reed and Simon), but I wanted to write it down myself to make sure that I really understood this foundational material properly.
A recurring theme in mathematics is that of duality: a mathematical object can either be described internally (or in physical space, or locally), by describing what
physically consists of (or what kind of maps exist into
), or externally (or in frequency space, or globally), by describing what
globally interacts or resonates with (or what kind of maps exist out of
). These two fundamentally opposed perspectives on the object
are often dual to each other in various ways: performing an operation on
may transform it one way in physical space, but in a dual way in frequency space, with the frequency space description often being an “inversion” of the physical space description. In several important cases, one is fortunate enough to have some sort of fundamental theorem connecting the internal and external perspectives. Here are some (closely inter-related) examples of this perspective:
- Vector space duality A vector space
over a field
can be described either by the set of vectors inside
, or dually by the set of linear functionals
from
to the field
(or equivalently, the set of vectors inside the dual space
). (If one is working in the category of topological vector spaces, one would work instead with continuous linear functionals; and so forth.) A fundamental connection between the two is given by the Hahn-Banach theorem (and its relatives).
- Vector subspace duality In a similar spirit, a subspace
of
can be described either by listing a basis or a spanning set, or dually by a list of linear functionals that cut out that subspace (i.e. a spanning set for the orthogonal complement
). Again, the Hahn-Banach theorem provides a fundamental connection between the two perspectives.
- Convex duality More generally, a (closed, bounded) convex body
in a vector space
can be described either by listing a set of (extreme) points whose convex hull is
, or else by listing a set of (irreducible) linear inequalities that cut out
. The fundamental connection between the two is given by the Farkas lemma.
- Ideal-variety duality In a slightly different direction, an algebraic variety
in an affine space
can be viewed either “in physical space” or “internally” as a collection of points in
, or else “in frequency space” or “externally” as a collection of polynomials on
whose simultaneous zero locus cuts out
. The fundamental connection between the two perspectives is given by the nullstellensatz, which then leads to many of the basic fundamental theorems in classical algebraic geometry.
- Hilbert space duality An element
in a Hilbert space
can either be thought of in physical space as a vector in that space, or in momentum space as a covector
on that space. The fundamental connection between the two is given by the Riesz representation theorem for Hilbert spaces.
- Semantic-syntactic duality Much more generally still, a mathematical theory can either be described internally or syntactically via its axioms and theorems, or externally or semantically via its models. The fundamental connection between the two perspectives is given by the Gödel completeness theorem.
- Intrinsic-extrinsic duality A (Riemannian) manifold
can either be viewed intrinsically (using only concepts that do not require an ambient space, such as the Levi-Civita connection), or extrinsically, for instance as the level set of some defining function in an ambient space. Some important connections between the two perspectives include the Nash embedding theorem and the theorema egregium.
- Group duality A group
can be described either via presentations (lists of generators, together with relations between them) or representations (realisations of that group in some more concrete group of transformations). A fundamental connection between the two is Cayley’s theorem. Unfortunately, in general it is difficult to build upon this connection (except in special cases, such as the abelian case), and one cannot always pass effortlessly from one perspective to the other.
- Pontryagin group duality A (locally compact Hausdorff) abelian group
can be described either by listing its elements
, or by listing the characters
(i.e. continuous homomorphisms from
to the unit circle, or equivalently elements of
). The connection between the two is the focus of abstract harmonic analysis.
- Pontryagin subgroup duality A subgroup
of a locally compact abelian group
can be described either by generators in
, or generators in the orthogonal complement
. One of the fundamental connections between the two is the Poisson summation formula.
- Fourier duality A (sufficiently nice) function
on a locally compact abelian group
(equipped with a Haar measure
) can either be described in physical space (by its values
at each element
of
) or in frequency space (by the values
at elements
of the Pontryagin dual
). The fundamental connection between the two is the Fourier inversion formula.
- The uncertainty principle The behaviour of a function
at physical scales above (resp. below) a certain scale
is almost completely controlled by the behaviour of its Fourier transform
at frequency scales below (resp. above) the dual scale
and vice versa, thanks to various mathematical manifestations of the uncertainty principle. (The Poisson summation formula can also be viewed as a variant of this principle, using subgroups instead of scales.)
- Stone/Gelfand duality A (locally compact Hausdorff) topological space
can be viewed in physical space (as a collection of points), or dually, via the $C^*$-algebra of continuous complex-valued functions on that space, or (in the case when
is compact and totally disconnected) via the boolean algebra of clopen sets (or equivalently, the idempotents of
). The fundamental connection between the two is given by the Stone representation theorem or the (commutative) Gelfand-Naimark theorem.
I have discussed a fair number of these examples in previous blog posts (indeed, most of the links above are to my own blog). In this post, I would like to discuss the uncertainty principle, that describes the dual relationship between physical space and frequency space. There are various concrete formalisations of this principle, most famously the Heisenberg uncertainty principle and the Hardy uncertainty principle – but in many situations, it is the heuristic formulation of the principle that is more useful and insightful than any particular rigorous theorem that attempts to capture that principle. Unfortunately, it is a bit tricky to formulate this heuristic in a succinct way that covers all the various applications of that principle; the Heisenberg inequality is a good start, but it only captures a portion of what the principle tells us. Consider for instance the following (deliberately vague) statements, each of which can be viewed (heuristically, at least) as a manifestation of the uncertainty principle:
- A function which is band-limited (restricted to low frequencies) is featureless and smooth at fine scales, but can be oscillatory (i.e. containing plenty of cancellation) at coarse scales. Conversely, a function which is smooth at fine scales will be almost entirely restricted to low frequencies.
- A function which is restricted to high frequencies is oscillatory at fine scales, but is negligible at coarse scales. Conversely, a function which is oscillatory at fine scales will be almost entirely restricted to high frequencies.
- Projecting a function to low frequencies corresponds to averaging out (or spreading out) that function at fine scales, leaving only the coarse scale behaviour.
- Projecting a function to high frequencies corresponds to removing the averaged coarse scale behaviour, leaving only the fine scale oscillation.
- The number of degrees of freedom of a function is bounded by the product of its spatial uncertainty and its frequency uncertainty (or more generally, by the volume of the phase space uncertainty). In particular, there are not enough degrees of freedom for a non-trivial function to be simultaneously localised to both very fine scales and very low frequencies.
- To control the coarse scale (or global) averaged behaviour of a function, one essentially only needs to know the low frequency components of the function (and vice versa).
- To control the fine scale (or local) oscillation of a function, one only needs to know the high frequency components of the function (and vice versa).
- Localising a function to a region of physical space will cause its Fourier transform (or inverse Fourier transform) to resemble a plane wave on every dual region of frequency space.
- Averaging a function along certain spatial directions or at certain scales will cause the Fourier transform to become localised to the dual directions and scales. The smoother the averaging, the sharper the localisation.
- The smoother a function is, the more rapidly decreasing its Fourier transform (or inverse Fourier transform) is (and vice versa).
- If a function is smooth or almost constant in certain directions or at certain scales, then its Fourier transform (or inverse Fourier transform) will decay away from the dual directions or beyond the dual scales.
- If a function has a singularity spanning certain directions or certain scales, then its Fourier transform (or inverse Fourier transform) will decay slowly along the dual directions or within the dual scales.
- Localisation operations in position approximately commute with localisation operations in frequency so long as the product of the spatial uncertainty and the frequency uncertainty is significantly larger than one.
- In the high frequency (or large scale) limit, position and frequency asymptotically behave like a pair of classical observables, and partial differential equations asymptotically behave like classical ordinary differential equations. At lower frequencies (or finer scales), the former becomes a “quantum mechanical perturbation” of the latter, with the strength of the quantum effects increasing as one moves to increasingly lower frequencies and finer spatial scales.
- Etc., etc.
- Almost all of the above statements generalise to other locally compact abelian groups than
or
, in which the concept of a direction or scale is replaced by that of a subgroup or an approximate subgroup. (In particular, as we will see below, the Poisson summation formula can be viewed as another manifestation of the uncertainty principle.)
I think of all of the above (closely related) assertions as being instances of “the uncertainty principle”, but it seems difficult to combine them all into a single unified assertion, even at the heuristic level; they seem to be better arranged as a cloud of tightly interconnected assertions, each of which is reinforced by several of the others. The famous inequality is at the centre of this cloud, but is by no means the only aspect of it.
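As a crude numerical illustration of the scaling aspect of this principle (an illustration only, not any particular theorem from the literature), one can check with a discrete Fourier transform that squeezing a Gaussian in physical space by some factor dilates its Fourier transform by essentially the same factor, so that the product of the spatial and frequency widths stays roughly constant:

```python
# Sketch: numerically illustrate the uncertainty principle for Gaussians.
# Squeezing a Gaussian by a factor 'a' in physical space widens its Fourier
# transform by roughly the same factor, so the product of the two standard
# deviations is roughly independent of 'a'.  (Illustration only; the constant
# depends on the Fourier normalisation.)
import numpy as np

N = 4096
x = np.linspace(-40, 40, N, endpoint=False)
dx = x[1] - x[0]
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=dx))   # frequencies in cycles per unit length

def width(grid, density):
    """Standard deviation of a (possibly unnormalised) discrete density."""
    density = density / density.sum()
    mean = (grid * density).sum()
    return np.sqrt(((grid - mean) ** 2 * density).sum())

for a in [0.5, 1.0, 2.0, 4.0]:
    f = np.exp(-(a * x) ** 2)                      # Gaussian of spatial width ~ 1/a
    fhat = np.fft.fftshift(np.fft.fft(f))
    sx = width(x, np.abs(f) ** 2)
    sxi = width(freqs, np.abs(fhat) ** 2)
    print(f"a = {a:4.1f}   sigma_x = {sx:.3f}   sigma_xi = {sxi:.3f}   product = {sx * sxi:.3f}")
```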
The uncertainty principle (as interpreted in the above broad sense) is one of the most fundamental principles in harmonic analysis (and more specifically, to the subfield of time-frequency analysis), second only to the Fourier inversion formula (and more generally, Plancherel’s theorem) in importance; understanding this principle is a key piece of intuition in the subject that one has to internalise before one can really get to grips with this subject (and also with closely related subjects, such as semi-classical analysis and microlocal analysis). Like many fundamental results in mathematics, the principle is not actually that difficult to understand, once one sees how it works; and when one needs to use it rigorously, it is usually not too difficult to improvise a suitable formalisation of the principle for the occasion. But, given how vague this principle is, it is difficult to present this principle in a traditional “theorem-proof-remark” manner. Even in the more informal format of a blog post, I was surprised by how challenging it was to describe my own understanding of this piece of mathematics in a linear fashion, despite (or perhaps because of) it being one of the most central and basic conceptual tools in my own personal mathematical toolbox. In the end, I chose to give below a cloud of interrelated discussions about this principle rather than a linear development of the theory, as this seemed to more closely align with the nature of this principle.
is the fundamental equation of motion for (non-relativistic) quantum mechanics, modeling both one-particle systems and $N$-particle systems for
. Remarkably, despite being a linear equation, solutions
to this equation can be governed by a non-linear equation in the large particle limit $N \to \infty$
. In particular, when modeling a Bose-Einstein condensate with a suitably scaled interaction potential
in the large particle limit, the solution can be governed by the cubic nonlinear Schrödinger equation
I recently attended a talk by Natasa Pavlovic on the rigorous derivation of this type of limiting behaviour, which was initiated by the pioneering work of Hepp and Spohn, and has now attracted a vast recent literature. The rigorous details here are rather sophisticated; but the heuristic explanation of the phenomenon is fairly simple, and actually rather pretty in my opinion, involving the foundational quantum mechanics of $N$-particle systems. I am recording this heuristic derivation here, partly for my own benefit, but perhaps it will be of interest to some readers.
This discussion will be purely formal, in the sense that (important) analytic issues such as differentiability, existence and uniqueness, etc. will be largely ignored.
I’m continuing my series of articles for the Princeton Companion to Mathematics with my article on the Schrödinger equation – the fundamental equation of motion of quantum particles, possibly in the presence of an external field. My focus here is on the relationship between the Schrödinger equation of motion for wave functions (and the closely related Heisenberg equation of motion for quantum observables), and Hamilton’s equations of motion for classical particles (and the closely related Poisson equation of motion for classical observables). There is also some brief material on semiclassical analysis, scattering theory, and spectral theory, though with only a little more than 5 pages to work with in all, I could not devote much detail to these topics. (In particular, nonlinear Schrödinger equations, a favourite topic of mine, are not covered at all.)
As I said before, I will try to link to at least one other PCM article in every post in this series. Today I would like to highlight Madhu Sudan‘s delightful article on information and coding theory, “Reliable transmission of information“.
[Update, Oct 3: typos corrected.]
[Update, Oct 9: more typos corrected.]
I’ve just uploaded to the arXiv the paper “The cubic nonlinear Schrödinger equation in two dimensions with radial data“, joint with Rowan Killip and Monica Visan, and submitted to the Annals of Mathematics. This is a sequel of sorts to my paper with Monica and Xiaoyi Zhang, in which we established global well-posedness and scattering for the defocusing mass-critical nonlinear Schrödinger equation (NLS) in three and higher dimensions
assuming spherically symmetric data. (This is another example of the recently active field of critical dispersive equations, in which both coarse and fine scales are (just barely) nonlinearly active, and propagate at different speeds, leading to significant technical difficulties.)
In this paper we obtain the same result for the defocusing two-dimensional mass-critical NLS , as well as in the focusing case
under the additional assumption that the mass of the initial data is strictly less than the mass of the ground state. (When the mass equals that of the ground state, there is an explicit example, built using the pseudoconformal transformation, which shows that solutions can blow up in finite time.) In fact we can show a slightly stronger statement: for spherically symmetric focusing solutions with arbitrary mass, the first singularity that forms concentrates at least as much mass as the ground state.
My paper “Resonant decompositions and the I-method for the cubic nonlinear Schrodinger equation on “, with Jim Colliander, Mark Keel, Gigliola Staffilani, and Hideo Takaoka (aka the “I-team“), has just been uploaded to the arXiv, and submitted to DCDS-A. In this (long-delayed!) paper, we improve our previous result on the global well-posedness of the cubic non-linear defocusing Schrödinger equation
in two spatial dimensions, thus . In that paper we used the “first generation I-method” (centred around an almost conservation law for a mollified energy
) to obtain global well-posedness in
for
(improving on an earlier result of
by Bourgain). Here we use the “second generation I-method”, in which the mollified energy
is adjusted by a correction term to damp out “non-resonant interactions” and thus lead to an improved almost conservation law, and ultimately to an improvement of the well-posedness range to
. (The conjectured region is
; beyond that, the solution becomes unstable and even local well-posedness is not known.) A similar result (but using Morawetz estimates instead of correction terms) has recently been established by Colliander-Grillakis-Tzirakis; this attains the superior range of
, but in the focusing case it does not give global existence all the way up to the ground state due to a slight inefficiency in the Morawetz estimate approach. Our method is in fact rather robust and indicates that the “first-generation” I-method can be pushed further for a large class of dispersive PDE.
This is a well-known problem (see for instance this survey) in the area of “quantum chaos” or “quantum unique ergodicity”; I am attracted to it both for its simplicity of statement (which I will get to eventually), and also because it focuses on one of the key weaknesses in our current understanding of the Laplacian, namely that it is difficult with the tools we know to distinguish between eigenfunctions (exact solutions to ) and quasimodes (approximate solutions to the same equation), unless one is willing to work with generic energy levels rather than specific energy levels.
The Bunimovich stadium is the name given to any planar domain consisting of a rectangle bounded at both ends by semicircles. Thus the stadium has two flat edges (which are traditionally drawn horizontally) and two round edges, as the picture on Wikipedia shows.
Despite the simple nature of this domain, the stadium enjoys some interesting classical and quantum dynamics. The classical dynamics, or billiard dynamics on is ergodic (as shown by Bunimovich) but not uniquely ergodic. In more detail: we say the dynamics is ergodic because a billiard ball with randomly chosen initial position and velocity (as depicted above) will, over time, be uniformly distributed across the billiard (as well as in the energy surface of the phase space of the billiard). On the other hand, we say that the dynamics is not uniquely ergodic because there do exist some exceptional choices of initial position and velocity for which one does not have uniform distribution, namely the vertical trajectories in which the billiard reflects orthogonally off of the two flat edges indefinitely.
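To get a feel for these two behaviours, here is a small self-contained simulation sketch (illustrative code with an arbitrary choice of stadium dimensions, not taken from the original discussion): it traces exact billiard reflections in a stadium built from a $2a \times 2$ rectangle capped by unit semicircles, and compares a generic trajectory with one of the exceptional vertical bouncing trajectories.

```python
# Sketch: exact billiard dynamics in a Bunimovich stadium (flat edges y = ±1
# for |x| <= A, capped by unit semicircles centred at (±A, 0)).  Illustrative
# code with arbitrary parameters.  A generic initial velocity wanders over the
# whole table; the vertical trajectory bounces between the flat edges forever.
import numpy as np

A = 1.0          # half-length of the flat edges
EPS = 1e-9

def step(p, v):
    """Advance from position p with unit velocity v to the next bounce; return
    the bounce point and the reflected velocity."""
    candidates = []
    # flat edges y = +1 and y = -1 (only valid for |x| <= A)
    for ysign in (+1.0, -1.0):
        if ysign * v[1] > EPS:
            t = (ysign - p[1]) / v[1]
            q = p + t * v
            if t > EPS and abs(q[0]) <= A + EPS:
                candidates.append((t, q, v * np.array([1.0, -1.0])))
    # circular caps centred at (±A, 0), radius 1 (only valid beyond the flats)
    for xc in (+A, -A):
        c = np.array([xc, 0.0])
        d = p - c
        b = np.dot(d, v)
        disc = b * b - (np.dot(d, d) - 1.0)
        if disc > 0:
            for t in (-b - np.sqrt(disc), -b + np.sqrt(disc)):
                q = p + t * v
                if t > EPS and np.sign(xc) * (q[0] - xc) >= -EPS:
                    n = q - c                      # unit normal (radius 1)
                    candidates.append((t, q, v - 2 * np.dot(v, n) * n))
    t, q, w = min(candidates, key=lambda item: item[0])
    return q, w

def trajectory(p0, v0, bounces=2000):
    p, v = np.array(p0, float), np.array(v0, float)
    v = v / np.linalg.norm(v)
    pts = [p]
    for _ in range(bounces):
        p, v = step(p, v)
        pts.append(p)
    return np.array(pts)

generic = trajectory([0.1, 0.0], [1.0, 0.37])    # wanders over the whole stadium
vertical = trajectory([0.3, 0.0], [0.0, 1.0])    # exceptional: bounces between the flats
print(generic[-3:])
print(vertical[-3:])
```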