This month I am at MSRI, for the programs in Ergodic Theory and Additive Combinatorics and in Analysis on Singular Spaces, which are currently ongoing here. This week I am giving three lectures on the correspondence principle, and on finitary versions of ergodic theory, for the introductory workshop in the former program. The article here broadly describes the content of these talks (which are slightly different in theme from what was announced in the abstract, due to some recent developments). [These lectures were also recorded on video and should be available from the MSRI web site within a few months.]
As many readers may already know, my good friend and fellow mathematical blogger Tim Gowers, having wrapped up work on the Princeton Companion to Mathematics (which I believe is now in press), has begun another mathematical initiative, namely a “Tricks Wiki” to act as a repository for mathematical tricks and techniques. Tim has already started the ball rolling with several seed articles on his own blog, and asked me to also contribute some articles. (As I understand it, these articles will be migrated to the Wiki in a few months, once it is fully set up, and then they will evolve with edits and contributions by anyone who wishes to pitch in, in the spirit of Wikipedia; in particular, articles are not intended to be permanently authored or signed by any single contributor.)
So today I’d like to start by extracting some material from an old post of mine on “Amplification, arbitrage, and the tensor power trick” (as well as from some of the comments), and converting it to the Tricks Wiki format, while also taking the opportunity to add a few more examples.
Title: The tensor power trick
Quick description: If one wants to prove an inequality $X \le Y$ for some non-negative quantities $X$, $Y$, but can only see how to prove a quasi-inequality $X \le CY$ that loses a multiplicative constant $C$, then try to replace all objects involved in the problem by "tensor powers" of themselves and apply the quasi-inequality to those powers. If all goes well, one can show that $X^{\otimes M} \le C\, Y^{\otimes M}$ for all $M \ge 1$, with a constant $C$ which is independent of $M$, which implies that $X \le Y$ as desired by taking $M^{\mathrm{th}}$ roots and then taking limits as $M \to \infty$.
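To spell out why the constant disappears in the limit (a standard verification, included here in the wiki spirit), here is a minimal sketch, assuming for simplicity that the quantities simply raise to powers under tensoring:

```latex
% A sketch of the limiting argument, assuming X and Y tensorise
% multiplicatively, i.e. X^{\otimes M} = X^M and Y^{\otimes M} = Y^M.
% The quasi-inequality applied to the M-th tensor powers gives
\[
X^M \le C\, Y^M \qquad \text{for all } M \ge 1,
\]
% and taking M-th roots of both sides yields
\[
X \le C^{1/M}\, Y .
\]
% Since C is independent of M, we have C^{1/M} \to 1 as M \to \infty,
% and hence X \le Y, with the constant eliminated "for free".
```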
Jim Colliander, Markus Keel, Gigliola Staffilani, Hideo Takaoka, and I have just uploaded to the arXiv the paper "Weakly turbulent solutions for the cubic defocusing nonlinear Schrödinger equation", which we have submitted to Inventiones Mathematicae. This paper concerns the numerically observed phenomenon of weak turbulence for the periodic defocusing cubic non-linear Schrödinger equation
$\displaystyle -i u_t + \Delta u = |u|^2 u \ \ \ \ \ (1)$

in two spatial dimensions, thus $u$ is a function from $\mathbb{R} \times \mathbb{T}^2$ to $\mathbb{C}$. This equation has three important conserved quantities: the mass

$\displaystyle M[u(t)] := \int_{\mathbb{T}^2} |u(t,x)|^2\, dx,$

the momentum

$\displaystyle \vec p[u(t)] := \int_{\mathbb{T}^2} \mathrm{Im}\big( \overline{u(t,x)}\, \nabla u(t,x) \big)\, dx,$

and the energy

$\displaystyle E[u(t)] := \int_{\mathbb{T}^2} \frac{1}{2} |\nabla u(t,x)|^2 + \frac{1}{4} |u(t,x)|^4\, dx.$
(These conservation laws, incidentally, are related to the basic symmetries of phase rotation, spatial translation, and time translation, via Noether's theorem.) Using these conservation laws and some standard PDE technology (specifically, some Strichartz estimates for the periodic Schrödinger equation), one can establish global wellposedness for the initial value problem for this equation in (say) the smooth category; thus for every smooth $u_0: \mathbb{T}^2 \to \mathbb{C}$ there is a unique global smooth solution $u: \mathbb{R} \times \mathbb{T}^2 \to \mathbb{C}$ to (1) with initial data $u(0) = u_0$, whose mass, momentum, and energy remain constant for all time.
However, the mass, momentum, and energy only control three of the infinitely many degrees of freedom available to a function on the torus, and so the above result does not fully describe the dynamics of solutions over time. In particular, the three conserved quantities inhibit, but do not fully prevent the possibility of a low-to-high frequency cascade, in which the mass, momentum, and energy of the solution remain conserved, but shift to increasingly higher frequencies (or equivalently, to finer spatial scales) as time goes to infinity. This phenomenon has been observed numerically, and is sometimes referred to as weak turbulence (in contrast to strong turbulence, which is similar but happens within a finite time span rather than asymptotically).
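As an aside, this kind of cascade is easy to probe experimentally. Below is a minimal numerical sketch (not the methods of the paper): a standard split-step Fourier integrator for (1) on the normalised torus, monitoring the conserved mass and energy together with an $H^s$ norm. The grid size, time step, amplitude, and frequencies are illustrative choices only.

```python
import numpy as np

# Split-step Fourier sketch for  -i u_t + Delta u = |u|^2 u  on (R/2piZ)^2,
# i.e. u_t = -i Delta u + i |u|^2 u.  Resolution, step size, and initial
# data are illustrative assumptions, unrelated to the paper's constructions.

n = 64                                    # grid points per dimension (assumption)
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X1, X2 = np.meshgrid(x, x, indexing="ij")
k = np.fft.fftfreq(n, d=1.0 / n)          # integer frequencies on the torus
K1, K2 = np.meshgrid(k, k, indexing="ij")
ksq = K1**2 + K2**2

A, N = 0.1, 5                             # small amplitude, moderate frequency
u = A * np.exp(1j * N * X1) + A * np.exp(1j * X2)   # two-mode data, as in the text

dt, steps = 1e-3, 5000
dx2 = (2 * np.pi / n) ** 2                # area element for the quadratures

def mass(u):
    return np.sum(np.abs(u) ** 2) * dx2

def energy(u):
    u1 = np.fft.ifft2(1j * K1 * np.fft.fft2(u))     # d/dx1 via Fourier
    u2 = np.fft.ifft2(1j * K2 * np.fft.fft2(u))     # d/dx2 via Fourier
    return np.sum(0.5 * (np.abs(u1)**2 + np.abs(u2)**2)
                  + 0.25 * np.abs(u)**4) * dx2

def hs_norm(u, s=1.5):
    # discrete H^s norm: weight each Fourier coefficient by (1+|k|^2)^(s/2)
    uh = np.fft.fft2(u) / n**2
    return np.sqrt(np.sum((1 + ksq) ** s * np.abs(uh) ** 2))

lin_half = np.exp(1j * ksq * dt / 2)      # exact linear flow e^{i|k|^2 dt/2}
for step in range(steps):
    u = np.fft.ifft2(lin_half * np.fft.fft2(u))     # half linear step
    u = u * np.exp(1j * np.abs(u) ** 2 * dt)        # full nonlinear step (phase rotation)
    u = np.fft.ifft2(lin_half * np.fft.fft2(u))     # half linear step
    if step % 1000 == 0:
        print(f"t={step*dt:6.2f}  mass={mass(u):.6f}  "
              f"energy={energy(u):.6f}  H^1.5={hs_norm(u):.4f}")
```

Strang splitting alternates the exactly-solvable linear flow (a Fourier multiplier) with the exactly-solvable nonlinear flow (a pointwise phase rotation); both substeps are $L^2$-isometries, so the mass is conserved exactly and the energy approximately.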
To illustrate how this can happen, let us normalise the torus as $\mathbb{T}^2 = (\mathbb{R}/2\pi\mathbb{Z})^2$. A simple example of a frequency cascade would be a scenario in which the solution $u(t,x)$ starts off at a low frequency at time zero, e.g. $u(0,x) = A e^{ix_1}$ for some constant amplitude $A$, and ends up at a high frequency at a later time $T$, e.g. $u(T,x) = A e^{iNx_1}$ for some large frequency $N$. This scenario is consistent with conservation of mass, but not conservation of energy or momentum, and thus does not actually occur for solutions to (1). A more complicated example would be a solution supported on two low frequencies at time zero, e.g. $u(0,x) = A e^{ix_1} + A e^{-ix_1}$, which ends up at two high frequencies later, e.g. $u(T,x) = A e^{iNx_2} + A e^{-iNx_2}$. This scenario is consistent with conservation of mass and momentum, but not energy. Finally, consider the scenario which starts off at $u(0,x) = A e^{iNx_1} + A e^{ix_2}$ and ends up at $u(T,x) = A + A e^{i(Nx_1 + x_2)}$. This scenario is consistent with all three conservation laws, and exhibits a mild example of a low-to-high frequency cascade, in which the solution starts off at frequency $N$ and ends up with half of its mass at the slightly higher frequency $\sqrt{N^2+1}$, with the other half of its mass at the zero frequency. More generally, given four frequencies $\xi_1, \xi_2, \xi_3, \xi_4 \in \mathbb{Z}^2$ which form the four vertices of a rectangle in order, one can concoct a similar scenario, compatible with all conservation laws, in which the solution starts off at frequencies $\xi_1, \xi_3$ and propagates to frequencies $\xi_2, \xi_4$.
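To see why the rectangle condition is exactly what the conservation laws require, note first that for two-mode data $A e^{i\xi \cdot x} + A e^{i\eta \cdot x}$ with $\xi \neq \eta$, the quartic part of the energy takes the same value regardless of the choice of frequencies, so only the kinetic part of the energy constrains the geometry. The bookkeeping, in the notation above, is then:

```latex
% Moving half the mass from frequencies \xi_1, \xi_3 to \xi_2, \xi_4
% (equal amplitudes A throughout) is compatible with the conservation laws iff
\begin{align*}
\text{(momentum)} \quad & \xi_1 + \xi_3 = \xi_2 + \xi_4, \\
\text{(kinetic energy)} \quad & |\xi_1|^2 + |\xi_3|^2 = |\xi_2|^2 + |\xi_4|^2,
\end{align*}
% mass being automatic.  The first identity says the diagonals \xi_1\xi_3 and
% \xi_2\xi_4 share a midpoint (a parallelogram); the second then forces the
% diagonals to have equal length, i.e. the parallelogram is a rectangle.
% For the explicit example above:
\[
(N,0) + (0,1) = (N,1) + (0,0), \qquad N^2 + 1 = (N^2 + 1) + 0 .
\]
```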
One way to measure a frequency cascade quantitatively is to use the Sobolev norms $\|u(t)\|_{H^s(\mathbb{T}^2)}$ for $s > 1$; roughly speaking, a low-to-high frequency cascade occurs precisely when these Sobolev norms get large. (Note that mass and energy conservation ensure that the $H^s$ norms stay bounded for $0 \le s \le 1$.) For instance, in the cascade from $u(0,x) = A e^{iNx_1} + A e^{ix_2}$ to $u(T,x) = A + A e^{i(Nx_1+x_2)}$, the $H^s$ norm is roughly $A N^s$ at time zero and $A (N^2+1)^{s/2}$ at time $T$, leading to a slight increase in that norm for $s > 1$.
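The computation behind these asymptotics is short once one recalls how $H^s$ norms act on plane waves; here is a sketch using the Fourier weights $(1+|k|^2)^{s/2}$:

```latex
% H^s norm of two-mode data u = A e^{i\xi\cdot x} + A e^{i\eta\cdot x}, \xi \ne \eta:
\[
\|u\|_{H^s(\mathbb{T}^2)}^2 \sim A^2 (1+|\xi|^2)^s + A^2 (1+|\eta|^2)^s .
\]
% At time zero (\xi = (N,0), \eta = (0,1)):
\[
\|u(0)\|_{H^s}^2 \sim A^2 (1+N^2)^s + A^2\, 2^s \approx A^2 N^{2s},
\]
% and at time T (\xi = (N,1), \eta = (0,0)):
\[
\|u(T)\|_{H^s}^2 \sim A^2 (2+N^2)^s + A^2 \approx A^2 N^{2s} .
\]
% For s > 1 and N large, the gain (2+N^2)^s - (1+N^2)^s \sim s N^{2(s-1)}
% beats the loss 2^s - 1, giving the slight increase; at s = 1 the two
% balance exactly, as energy conservation demands.
```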
Numerical evidence then suggests the following

Conjecture. (Weak turbulence) There exist smooth solutions $u(t,x)$ to (1) such that $\|u(t)\|_{H^s(\mathbb{T}^2)}$ goes to infinity as $t \to \infty$ for any $s > 1$.
We were not able to establish this conjecture, but we have the following partial result (“weak weak turbulence”, if you will):
Theorem. Given any $s > 1$, $K > 0$, and $\varepsilon > 0$, there exists a smooth solution $u(t,x)$ to (1) such that $\|u(0)\|_{H^s(\mathbb{T}^2)} \le \varepsilon$ and $\|u(T)\|_{H^s(\mathbb{T}^2)} \ge K$ for some time $T$.
This is in marked contrast to (1) in one spatial dimension ($x \in \mathbb{T}$), which is completely integrable and has an infinite number of conservation laws beyond the mass, energy, and momentum, which serve to keep all $H^s$ norms bounded in time. It is also in contrast to the linear Schrödinger equation, in which all Sobolev norms are preserved, and to the non-periodic analogue of (1), which is conjectured to disperse to a linear solution (i.e. to scatter) from any finite mass data (see this earlier post for the current status of that conjecture). Thus our theorem can be viewed as evidence that the 2D periodic cubic NLS does not behave at all like a completely integrable system or a linear solution, even for small data. (An earlier result of Kuksin gives (in our notation) the weaker result that the ratio $\|u(T)\|_{H^s(\mathbb{T}^2)} / \|u(0)\|_{H^s(\mathbb{T}^2)}$ can be made arbitrarily large when the initial data is sufficiently large, thus showing that large initial data can exhibit movement to higher frequencies; the point of our paper is that we can achieve the same for arbitrarily small data.) Intuitively, the problem is that the torus is compact, and so there is no place for the solution to disperse its mass; instead, it must continually interact nonlinearly with itself, which is what eventually causes the weak turbulence.
Prodded by several comments, I have finally decided to write up some of my thoughts on time management here. I have actually been drafting something about this subject for a while, but I soon realised that my own experience with time management is still very much a work in progress (you should see my backlog of papers that need writing up), and I don't yet have a coherent or definitive philosophy on this topic (other than my advice on writing papers, for instance my page on rapid prototyping). Also, I can only talk about my own personal experiences, which probably do not generalise to all personality types or work situations, though perhaps readers may wish to contribute their own thoughts, experiences, or suggestions in the comments here. [I should also add that I don't always follow my own advice on these matters, often to my own regret.]
I can maybe make some unorganised comments, though. Firstly, I am very lucky to have some excellent collaborators who put a lot of effort into our joint papers; many of the papers appearing recently on this blog, for instance, were to a large extent handled by co-authors. Generally, I find that papers written in collaboration take longer than singly-authored papers, but the net effort expended per author is significantly less (and the quality of writing higher). Also, I find that I can work on many joint papers in parallel (since the ball is often in another co-author’s court, or is pending some other development), but only on one single-authored paper at a time.
[For reasons having to do with the academic calendar, many more of these papers get finished during the summer than at any other time of year, but many of these projects have actually been gestating for quite some time. (There should be a joint paper appearing shortly which we have been working on for about three or four years, for instance; and I have been thinking about the global regularity problem for wave maps on and off (mostly off) since about 2000.) So a paper being released every week does not actually mean that a week is the time needed to conceive and then write up a paper; there is in fact quite a long pipeline of development, most of which happens out of public view.]
I have just uploaded to the arXiv the third installment of my "heatwave" project, entitled "Global regularity of wave maps V. Large data local well-posedness in the energy class". This (rather technical) paper establishes another of the key ingredients necessary to establish the global existence of smooth wave maps from 2+1-dimensional spacetime $\mathbb{R}^{1+2}$ to hyperbolic space $\mathbb{H} = \mathbb{H}^m$. Specifically, a large data local well-posedness result is established, constructing a local solution from any initial data with finite (but possibly quite large) energy, and furthermore showing that the solution depends continuously on the initial data in the energy topology. (This topology was constructed in my previous paper.) Once one has this result, the only remaining task is to show a "Palais-Smale property" for wave maps: if singularities form in the wave maps equation, then there exists a non-trivial minimal-energy blowup solution, whose orbit is almost periodic modulo the symmetries of the equation. I anticipate this to be the most difficult component of the whole project, and it is the subject of the fourth (and hopefully final) installment of this series.
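For readers who have not seen the equation written out (this is the standard formulation, not notation lifted from the paper): in local coordinates on the target, a wave map solves

```latex
% Wave maps \phi: \mathbb{R}^{1+2} \to (M, g), written in local coordinates
% (\phi^1, ..., \phi^m) on the target, with Christoffel symbols \Gamma^i_{jk}
% and d'Alembertian \Box = -\partial_t^2 + \Delta_x:
\[
\Box \phi^i + \Gamma^i_{jk}(\phi)\, \partial_\alpha \phi^j\, \partial^\alpha \phi^k = 0 ,
\]
% a system of semilinear wave equations whose conserved energy
\[
E[\phi] = \int_{\mathbb{R}^2} \tfrac{1}{2}\, g_{ij}(\phi)
\left( \partial_t \phi^i\, \partial_t \phi^j + \nabla_x \phi^i \cdot \nabla_x \phi^j \right) dx
\]
% defines the "energy class" in which the local theory is developed.
```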
This local result is closely related to the small energy global regularity theory developed in recent years by myself, by Krieger, and by Tataru; in particular, it relies on the complicated function spaces used in that theory (which ultimately originate from a precursor paper of Tataru). The main new difficulties here are to extend the small energy theory to large energy (by localising time suitably), and to establish continuous dependence on the data (i.e. two solutions which are initially close in the energy topology need to stay close in that topology). The former difficulty is in principle manageable by exploiting finite speed of propagation (together with the fact, arising from the monotone convergence theorem, that large energy data becomes small energy data at sufficiently small spatial scales), but for technical reasons (having to do with my choice of gauge) I was not able to do this and had to deal with the large energy case directly (and in any case, a genuinely large energy theory is going to be needed to construct the minimal energy blowup solution in the next paper). The latter difficulty is in principle resolvable by adapting the existence theory to differences of solutions, rather than to individual solutions, but the nonlinear choice of gauge adds a rather tedious amount of complexity to the task of making this rigorous. (It may be that simpler gauges, such as the Coulomb gauge, are usable here, at least in the case of the hyperbolic plane (cf. the work of Krieger), but such gauges cause additional analytic problems, as they do not renormalise the nonlinearity as strongly as the caloric gauge. The paper of Tataru establishes these goals, but assumes an isometric embedding of the target manifold into a Euclidean space, which is unfortunately not available for hyperbolic space targets.)
The main technical difficulty that had to be overcome in the paper was that there were two different time variables $t$, $s$ (one for the wave maps equation and one for the heat flow), and three types of PDE (hyperbolic, parabolic, and ODE) that one has to solve forward in $t$, forward in $s$, and backwards in $s$ respectively. In order to close the argument in the large energy case, this necessitated a rather complicated iteration-type scheme, in which one solved for the caloric gauge, established parabolic regularity estimates for that gauge, propagated a "wave-tension field" by the heat flow, and then solved a wave maps type equation using that field as a forcing term. The argument can eventually be closed using mostly "off-the-shelf" function space estimates from previous papers, but is remarkably lengthy, especially when analysing differences of two solutions. (One drawback of using off-the-shelf estimates, though, is that one does not get particularly good control of the solution over extended periods of time; in particular, the spaces used here cannot detect the decay of the solution over extended periods of time (unlike, say, Strichartz spaces for the wave equation), and so will not be able to supply the long-time perturbation theory that will be needed in the next paper in this series. I believe I know how to re-engineer these spaces to achieve this, though, and the details should follow in the forthcoming paper.)
Van Vu and I have just uploaded to the arXiv our new paper, "Random matrices: Universality of ESDs and the circular law", with an appendix by Manjunath Krishnapur (and some numerical data and graphs by Philip Wood). One of the things we do in this paper (which was our original motivation for this project) was to finally establish the endpoint case of the circular law (in both strong and weak forms) for random iid matrices $A_n = (a_{ij})_{1 \le i,j \le n}$, where the coefficients $a_{ij}$ are iid random variables with mean zero and unit variance. (The strong circular law says that with probability 1, the empirical spectral distribution (ESD) of the normalised eigenvalues $\lambda_1/\sqrt{n}, \ldots, \lambda_n/\sqrt{n}$ of the matrix $A_n$ converges to the uniform distribution on the unit disk as $n \to \infty$. The weak circular law asserts the same thing, but with convergence in probability rather than almost sure convergence; this is in complete analogy with the weak and strong law of large numbers, and in fact this law is used in the proof.) In a previous paper, we had established the same claim but under the additional assumption that the $(2+\eta)^{\mathrm{th}}$ moment $\mathbf{E} |a_{ij}|^{2+\eta}$ was finite for some $\eta > 0$; this builds upon a significant body of earlier work by Mehta, Girko, Bai, Bai-Silverstein, Gotze-Tikhomirov, and Pan-Zhou, as discussed in the blog article for the previous paper.
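The strong circular law is easy to illustrate numerically. Here is a minimal sketch (the matrix size, random seed, and Bernoulli choice of entry distribution are illustrative assumptions, and the radial check is just a crude proxy for convergence of the ESD):

```python
import numpy as np

# Quick numerical illustration of the circular law for a Bernoulli iid matrix
# (entries +1/-1 with equal probability: mean zero, unit variance).
# The size n below is an arbitrary illustrative choice.

rng = np.random.default_rng(0)
n = 1000
A = rng.choice([-1.0, 1.0], size=(n, n))

lam = np.linalg.eigvals(A) / np.sqrt(n)   # normalised eigenvalues

# Under the circular law, the ESD approaches the uniform measure on the
# unit disk, for which P(|z| <= r) = r^2.  Compare the empirical radial CDF:
for r in (0.25, 0.5, 0.75, 1.0):
    print(f"r={r:4.2f}  empirical={np.mean(np.abs(lam) <= r):.3f}  limit={r**2:.3f}")
```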
As it turned out, though, in the course of this project we found a more general universality principle (or invariance principle) which implied our results about the circular law, but is perhaps more interesting in its own right. Observe that the statement of the circular law can be split into two sub-statements:
- (Universality for iid ensembles) In the asymptotic limit $n \to \infty$, the ESD of the random matrix $\frac{1}{\sqrt{n}} A_n$ is independent of the choice of distribution of the coefficients, so long as they are normalised in mean and variance. In particular, the ESD of such a matrix is asymptotically the same as that of a (real or complex) gaussian matrix $G_n$ with the same mean and variance.
- (Circular law for gaussian matrices) In the asymptotic limit $n \to \infty$, the ESD of the gaussian matrix $\frac{1}{\sqrt{n}} G_n$ converges to the circular law.
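For concreteness, the limiting object in Statement 2 is the uniform probability measure on the unit disk:

```latex
\[
\mu_{\mathrm{circ}}(dz) = \frac{1}{\pi}\, 1_{|z| \le 1}\, dA(z),
\]
% where dA denotes Lebesgue measure on the complex plane; in particular
% the mass inside radius r is r^2 for 0 <= r <= 1.
```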
The reason we single out the gaussian matrix ensemble is that it has a much richer algebraic structure (for instance, the real (resp. complex) gaussian ensemble is invariant under right and left multiplication by the orthogonal group O(n) (resp. the unitary group U(n))). Because of this, it is possible to compute the eigenvalue distribution very explicitly by algebraic means (for instance, using the machinery of orthogonal polynomials). In particular, the circular law for complex gaussian matrices (Statement 2 above) was established all the way back in 1967 by Mehta, using an explicit formula for the distribution of the ESD in this case due to Ginibre.
These highly algebraic techniques completely break down for more general iid ensembles, such as the Bernoulli ensemble of matrices whose entries are $+1$ or $-1$ with an equal probability of each. Nevertheless, it is a remarkable phenomenon – which has been referred to as universality in the literature, for instance in this survey by Deift – that the spectral properties of random matrices for non-algebraic ensembles are in many cases asymptotically indistinguishable from those of algebraic ensembles with the same mean and variance (i.e. Statement 1 above). One might view this as a sort of "non-Hermitian, non-commutative" analogue of the universality phenomenon represented by the central limit theorem, in which the limiting distribution of the normalised average

$\displaystyle \frac{X_1 + \cdots + X_n}{\sqrt{n}} \ \ \ \ \ (1)$

of an iid sequence depends only on the mean and variance of the elements of that sequence (assuming of course that these quantities are finite), and not on the underlying distribution. (The Hermitian non-commutative analogue of the CLT is known as Wigner's semicircular law.)
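To make the analogy concrete, the two classical statements being invoked are (in standard normalisations, not notation from the paper):

```latex
% Central limit theorem: if X_1, X_2, ... are iid with mean 0 and variance 1, then
\[
\frac{X_1 + \cdots + X_n}{\sqrt{n}} \to N(0,1) \quad \text{in distribution as } n \to \infty,
\]
% regardless of the underlying distribution.  Wigner's semicircular law is the
% Hermitian matrix analogue: the ESD of a Wigner matrix (Hermitian, iid entries
% above the diagonal, mean 0, variance 1), normalised by 1/\sqrt{n}, converges
% to the semicircular density
\[
\rho_{\mathrm{sc}}(x) = \frac{1}{2\pi} \sqrt{4 - x^2}\; 1_{[-2,2]}(x) .
\]
```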
Previous approaches to the circular law did not build upon the gaussian case, but instead proceeded directly, in particular controlling the ESD of a random matrix via estimates on the Stieltjes transform

$\displaystyle s_n(z) := \frac{1}{n} \operatorname{tr} \left( \frac{1}{\sqrt{n}} A_n - zI \right)^{-1} \ \ \ \ \ (2)$

of that matrix for complex numbers $z$. This method required a combination of delicate analysis (in particular, a bound on the least singular values of $\frac{1}{\sqrt{n}} A_n - zI$), and algebra (in order to compute and then invert the Stieltjes transform). [As a general rule, and oversimplifying somewhat, algebra tends to be used to control main terms in a computation, while analysis is used to control error terms.]
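The reason estimates on $s_n$ control the ESD is the following distributional identity, stated loosely here (signs and normalisations depend on one's conventions):

```latex
% Writing \mu_n = n^{-1} \sum_j \delta_{\lambda_j/\sqrt{n}} for the ESD, the
% Stieltjes transform (2) is the Cauchy transform of \mu_n,
\[
s_n(z) = \int_{\mathbb{C}} \frac{d\mu_n(w)}{w - z},
\]
% and since (\pi z)^{-1} is the fundamental solution of \partial_{\bar z},
% the ESD can be recovered distributionally:
\[
\mu_n = -\frac{1}{\pi}\, \partial_{\bar z}\, s_n .
\]
% The least singular value of n^{-1/2} A_n - zI is what controls the size of
% s_n(z) when z is close to the spectrum.
```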
What we discovered while working on our paper was that the algebra and analysis could be largely decoupled from each other: one could establish a universality principle (Statement 1 above) by relying primarily on tools from analysis (most notably the bound on least singular values mentioned earlier, but also Talagrand's concentration of measure inequality, and a universality principle for the singular value distribution of random matrices due to Dozier and Silverstein), so that the algebraic heavy lifting only needs to be done in the gaussian case (Statement 2 above), where the task is greatly simplified by all the additional algebraic structure available in that setting. This suggests a possible strategy for proving other conjectures in random matrix theory (for instance concerning the eigenvalue spacing distribution of random iid matrices): first establish universality to swap the general random matrix ensemble with an algebraic ensemble (without fully understanding the limiting behaviour of either), and then use highly algebraic tools to understand the latter ensemble. (There is now a sophisticated theory in place to deal with the latter task, but the former task – understanding universality – is still only poorly understood in many cases.)