You are currently browsing the monthly archive for December 2010.

One of my favourite unsolved problems in harmonic analysis is the *restriction problem*. This problem, first posed explicitly by Elias Stein, can take many equivalent forms, but one of them is this: one starts with a smooth compact hypersurface (possibly with boundary) in , such as the unit sphere in , and equips it with surface measure . One then takes a bounded measurable function on this surface, and then computes the (inverse) Fourier transform

of the measure . As is bounded and is a finite measure, this is a bounded function on ; from the dominated convergence theorem, it is also continuous. The restriction problem asks whether this Fourier transform also decays in space, and specifically whether lies in for some . (This is a natural space to control decay because it is translation invariant, which is compatible on the frequency space side with the modulation invariance of .) By the closed graph theorem, this is the case if and only if there is an estimate of the form

for some constant that can depend on but not on . By a limiting argument, to provide such an estimate, it suffices to prove such an estimate under the additional assumption that is smooth.
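(The two displays above did not survive transcription; in the standard notation for this problem — a reconstruction, so the original's exact normalisation may differ — they presumably read as follows.)

```latex
% Extension (inverse Fourier transform) of the measure f d\sigma:
\[
\widehat{f\,d\sigma}(x) := \int_S e^{2\pi i x\cdot\xi}\, f(\xi)\,d\sigma(\xi),
\qquad x \in \mathbb{R}^n,
\]
% and the extension estimate (1):
\[
\|\widehat{f\,d\sigma}\|_{L^p(\mathbb{R}^n)} \;\le\; C_{p,S}\,\|f\|_{L^\infty(S,\,d\sigma)}. \tag{1}
\]
```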

Strictly speaking, the above problem should be called the *extension problem*, but it is dual to the original formulation of the *restriction problem*, which asks to find those exponents for which the Fourier transform of an function can be meaningfully restricted to a hypersurface , in the sense that the map can be continuously defined from to, say, . A duality argument shows that the exponents for which the restriction property holds are the dual exponents to the exponents for which the extension problem holds.

There are several motivations for studying the restriction problem. The problem is connected to the classical question of determining the nature of the convergence of various Fourier summation methods (and specifically, Bochner-Riesz summation); very roughly speaking, if one wishes to perform a partial Fourier transform by restricting the frequencies (possibly using a well-chosen weight) to some region (such as a ball), then one expects this operation to be well behaved if the boundary of this region has good restriction (or extension) properties. More generally, the restriction problem for a surface is connected to the behaviour of Fourier multipliers whose symbols are singular at . The problem is also connected to the analysis of various linear PDE such as the Helmholtz equation, Schrödinger equation, wave equation, and the (linearised) Korteweg-de Vries equation, because solutions to such equations can be expressed via the Fourier transform in the form for various surfaces (the sphere, paraboloid, light cone, and cubic for the Helmholtz, Schrödinger, wave, and linearised Korteweg-de Vries equations respectively). A particular family of restriction-type theorems for such surfaces, known as *Strichartz estimates*, plays a foundational role in the study of nonlinear perturbations of these linear equations (e.g. the nonlinear Schrödinger equation, the nonlinear wave equation, and the Korteweg-de Vries equation). Last, but not least, there is a fundamental connection between the restriction problem and the Kakeya problem, which roughly speaking concerns how tubes that point in different directions can overlap. Indeed, by superimposing special functions of the type , known as *wave packets*, which are concentrated on tubes in various directions, one can “encode” the Kakeya problem inside the restriction problem; in particular, the conjectured solution to the restriction problem implies the conjectured solution to the Kakeya problem.
Finally, the restriction problem serves as a simplified toy model for studying discrete exponential sums whose coefficients do not have a well controlled phase; this perspective was, for instance, used by Ben Green when he established Roth’s theorem in the primes by Fourier-analytic methods, which was in turn one of the main inspirations for our later work establishing arbitrarily long progressions in the primes, although we ended up using ergodic-theoretic arguments instead of Fourier-analytic ones and so did not directly use restriction theory in that paper.

The estimate (1) is trivial for and becomes harder for smaller . The geometry, and more precisely the *curvature*, of the surface , plays a key role: if contains a portion which is completely flat, then it is not difficult to concoct an for which fails to decay in the normal direction to this flat portion, and so there are no restriction estimates for any finite . Conversely, if is not infinitely flat at any point, then from the method of stationary phase, the Fourier transform can be shown to decay at a power rate at infinity, and this together with a standard method known as the argument can be used to give non-trivial restriction estimates for finite . However, these arguments fall somewhat short of obtaining the best possible exponents . For instance, in the case of the sphere , the Fourier transform is known to decay at the rate and no better as , which shows that the condition is necessary in order for (1) to hold for this surface. The *restriction conjecture for * asserts that this necessary condition is also sufficient. However, the -based argument gives only the Tomas-Stein theorem, which in this context gives (1) in the weaker range . (On the other hand, by the nature of the method, the Tomas-Stein theorem does allow the norm on the right-hand side to be relaxed to , at which point the Tomas-Stein exponent becomes best possible. The fact that the Tomas-Stein theorem has an norm on the right-hand side is particularly valuable for applications to PDE, leading in particular to the Strichartz estimates mentioned earlier.)

Over the last two decades, there has been a fair amount of work in pushing past the Tomas-Stein barrier. For the sake of concreteness let us work just with the restriction problem for the unit sphere in . Here, the restriction conjecture asserts that (1) holds for all , while the Tomas-Stein theorem gives only . By combining a multiscale analysis approach with some new progress on the Kakeya conjecture, Bourgain was able to obtain the first improvement on this range, establishing the restriction conjecture for . The methods were steadily refined over the years; until recently, the best result (due to myself) was that the conjecture held for all , which proceeded by analysing a “bilinear ” variant of the problem studied previously by Bourgain and by Wolff. This is essentially the limit of that method; the relevant bilinear estimate fails for . (This estimate was recently established at the endpoint by Jungjin Lee (personal communication), though this does not quite improve the range of exponents in (1) due to a logarithmic inefficiency in converting the bilinear estimate to a linear one.)
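For reference, the exponent ranges mentioned in this paragraph (whose inline values were lost) are, as usually stated for the unit sphere in three dimensions — these are the generally cited values rather than quotations from the original:

```latex
\begin{align*}
\text{Restriction conjecture for } S^2:\quad & p > 3,\\
\text{Tomas--Stein theorem:}\quad & p \ge 4,\\
\text{Bilinear method:}\quad & p > 10/3,\\
\text{Bourgain--Guth (with Wolff's Kakeya estimate):}\quad & p > 33/10.
\end{align*}
```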

On the other hand, the full range of exponents in (1) was obtained by Bennett, Carbery, and myself (with an alternate proof later given by Guth), but only under the additional assumption of *non-coplanar interactions*. In three dimensions, this assumption was enforced by replacing (1) with the weaker trilinear (and localised) variant

where and are arbitrary, is the ball of radius in , and are compact portions of whose unit normals are never coplanar, thus there is a uniform lower bound

for some and all . If it were not for this non-coplanarity restriction, (2) would be equivalent to (1) (by setting and , with the converse implication coming from Hölder’s inequality; the loss can be removed by a lemma from a paper of mine). At the time we wrote this paper, we tried fairly hard to remove this non-coplanarity restriction in order to recover progress on the original restriction conjecture, but without much success.

A few weeks ago, though, Bourgain and Guth found a new way to use multiscale analysis to “interpolate” between the result of Bennett, Carbery and myself (which has optimal exponents, but requires non-coplanar interactions) and a more classical square function estimate of Córdoba that handles the coplanar case. A direct application of this interpolation method already matches the previous best known result in three dimensions (i.e. that (1) holds for ). But it also allows for the insertion of additional input, such as the best Kakeya estimate currently known in three dimensions, due to Wolff. This enlarges the range slightly to . The method can also be extended to variable-coefficient settings, and in some of these cases (where there is so much “compression” going on that no additional Kakeya estimates are available) the estimates are best possible.

As is often the case in this field, there is a lot of technical book-keeping and juggling of parameters in the formal arguments of Bourgain and Guth, but the main ideas and “numerology” can be expressed fairly readily. (In mathematics, *numerology* refers to the empirically observed relationships between various key exponents and other numerical parameters; in many cases, one can use shortcuts such as dimensional analysis or informal heuristics to compute these exponents long before the formal argument is completely in place.) Below the fold, I would like to record this numerology for the simplest of the Bourgain-Guth arguments, namely a reproof of (1) for . This is primarily for my own benefit, but may be of interest to other experts in this particular topic. (See also my 2003 lecture notes on the restriction conjecture.)

In order to focus on the ideas in the paper (rather than on the technical details), I will adopt an informal, heuristic approach, for instance by interpreting the uncertainty principle and the pigeonhole principle rather liberally, and by focusing on main terms in a decomposition and ignoring secondary terms. I will also be somewhat vague with regard to asymptotic notation such as . Making the arguments rigorous requires a certain amount of standard but tedious effort (and is one of the main reasons why the Bourgain-Guth paper is as long as it is), which I will not focus on here.

I’ve just uploaded to the arXiv my paper “Outliers in the spectrum of iid matrices with bounded rank perturbations“, submitted to Probability Theory and Related Fields. This paper is concerned with outliers to the *circular law* for iid random matrices. Recall that if is an matrix whose entries are iid complex random variables with mean zero and variance one, then the complex eigenvalues of the normalised matrix will almost surely be distributed according to the circular law distribution in the limit . (See these lecture notes for further discussion of this law.)

The circular law is also stable under bounded rank perturbations: if is a deterministic rank matrix of polynomial size (i.e. of operator norm ), then the circular law also holds for (this is proven in a paper of myself, Van Vu, and Manjunath Krishnapur). In particular, the bulk of the eigenvalues (i.e. of the eigenvalues) will lie inside the unit disk .

However, this leaves open the possibility of one or more *outlier* eigenvalues that lie significantly outside the unit disk; the arguments in the paper cited above give some upper bound on the number of such eigenvalues (of the form for some absolute constant ) but do not exclude them entirely. And indeed, numerical data shows that such outliers can exist for certain bounded rank perturbations.

In this paper, some results are given as to when outliers exist, and how they are distributed. The easiest case is of course when there is no bounded rank perturbation: . In that case, an old result of Bai and Yin and of Geman shows that the spectral radius of is almost surely , thus all eigenvalues will be contained in a neighbourhood of the unit disk, and so there are no significant outliers. The proof is based on the moment method.

Now we consider a bounded rank perturbation which is nonzero, but which has a bounded operator norm: . In this case, it turns out that the matrix will have outliers if the deterministic component has outliers. More specifically (and under the technical hypothesis that the entries of have bounded fourth moment), if is an eigenvalue of with , then (for large enough), will almost surely have an eigenvalue at , and furthermore these will be the only outlier eigenvalues of .

Thus, for instance, adding a bounded nilpotent low rank matrix to will not create any outliers, because the nilpotent matrix only has eigenvalues at zero. On the other hand, adding a bounded Hermitian low rank matrix will create outliers as soon as this matrix has an operator norm greater than .
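This dichotomy is easy to see numerically (an illustrative sketch, not from the paper; the matrix size, the perturbation strength, and the random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.standard_normal((n, n)) / np.sqrt(n)  # normalised iid matrix

# Rank-one nilpotent perturbation of operator norm 2 (only eigenvalue: 0).
C_nil = np.zeros((n, n))
C_nil[0, 1] = 2.0

# Rank-one Hermitian perturbation with eigenvalue 2.
C_herm = np.zeros((n, n))
C_herm[0, 0] = 2.0

rad_nil = np.abs(np.linalg.eigvals(X + C_nil)).max()
rad_herm = np.abs(np.linalg.eigvals(X + C_herm)).max()

print(f"nilpotent perturbation:  spectral radius {rad_nil:.3f}")   # stays near 1
print(f"Hermitian perturbation:  spectral radius {rad_herm:.3f}")  # outlier near 2
```

The nilpotent perturbation leaves the spectrum inside (a neighbourhood of) the unit disk, while the Hermitian one produces a single outlier close to its eigenvalue 2, as the theorem predicts.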

When I first thought about this problem (which was communicated to me by Larry Abbott), I believed that it was quite difficult, because I knew that the eigenvalues of non-Hermitian matrices were quite unstable with respect to general perturbations (as discussed in this previous blog post), and that there were no interlacing inequalities in this case to control bounded rank perturbations (as discussed in this post). However, as it turns out, I had arrived at the wrong conclusion: in the exterior of the unit disk, the resolvent is actually well controlled, and so there is no pseudospectrum present to cause instability. This was pointed out to me by Alice Guionnet at an AIM workshop last week, after I had posed the above question during an open problems session. Furthermore, at the same workshop, Percy Deift emphasised the point that the basic determinantal identity
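(The display was lost in transcription; the identity in question is presumably the standard determinantal identity, labelled (1) in what follows:)

```latex
\[
\det(1_n + AB) \;=\; \det(1_k + BA) \tag{1}
\]
```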

for matrices and matrices was a particularly useful identity in random matrix theory, as it converted problems about large () matrices into problems about small () matrices, which was particularly convenient in the regime when and was fixed. (Percy was speaking in the context of invariant ensembles, but the point is in fact more general than this.)

From this, it turned out to be a relatively simple matter to transform what appeared to be an intractable matrix problem into quite a well-behaved matrix problem for bounded . Specifically, suppose that had rank , so that one can factor for some (deterministic) matrix and matrix . To find an eigenvalue of , one has to solve the characteristic polynomial equation

This is an determinantal equation, which looks difficult to control analytically. But we can manipulate it using (1). If we make the assumption that is outside the spectrum of (which we can do as long as is well away from the unit disk, as the unperturbed matrix has no outliers), we can divide by to arrive at

Now we apply the crucial identity (1) to rearrange this as
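(The displays in this derivation were lost; the following is a reconstruction in standard notation, writing the perturbation as C = AB for an n×k matrix A and a k×n matrix B:)

```latex
% Characteristic equation for an eigenvalue z of X/sqrt(n) + C, with C = AB:
\[
\det\Bigl(\tfrac{1}{\sqrt n}X + AB - z\Bigr) = 0.
\]
% For z outside the spectrum of X/sqrt(n), divide by det(X/sqrt(n) - z):
\[
\det\Bigl(1_n + \bigl(\tfrac{1}{\sqrt n}X - z\bigr)^{-1}AB\Bigr) = 0,
\]
% and the identity (1) converts this n x n equation into a k x k one:
\[
\det\Bigl(1_k + B\bigl(\tfrac{1}{\sqrt n}X - z\bigr)^{-1}A\Bigr) = 0.
\]
% In the rank one case C = u v^*, this becomes the scalar equation
\[
1 + v^*\bigl(\tfrac{1}{\sqrt n}X - z\bigr)^{-1}u = 0.
\]
```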

The crucial point is that this is now an equation involving only a determinant, rather than an one, and is thus much easier to solve. The situation is particularly simple for rank one perturbations

in which case the eigenvalue equation is now just a scalar equation

that involves what is basically a single coefficient of the resolvent . (It is also an instructive exercise to derive this eigenvalue equation directly, rather than through (1).) There is by now a very well-developed theory for how to control such coefficients (particularly for in the exterior of the unit disk, in which case such basic tools as Neumann series work just fine); in particular, one has precise enough control on these coefficients to obtain the result on outliers mentioned above.

The same method can handle some other bounded rank perturbations. One basic example comes from looking at iid matrices with a non-zero mean and variance ; this can be modeled by where is the unit vector . Here, the bounded rank perturbation has a large operator norm (equal to ), so the previous result does not directly apply. Nevertheless, the self-adjoint nature of the perturbation has a stabilising effect, and I was able to show that there is still only one outlier, and that it is at the expected location of .
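Numerically (again an illustrative sketch with arbitrary parameters, not taken from the paper), the single large outlier in the non-zero-mean model is clearly visible:

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu = 400, 0.5

# iid entries with mean mu and unit variance, then the usual 1/sqrt(n)
# normalisation; equivalently X/sqrt(n) plus a rank-one perturbation of
# operator norm mu*sqrt(n).
M = (rng.standard_normal((n, n)) + mu) / np.sqrt(n)

outlier = np.abs(np.linalg.eigvals(M)).max()
print(f"largest eigenvalue modulus: {outlier:.2f}")
print(f"predicted location mu*sqrt(n): {mu * np.sqrt(n):.2f}")
```

The bulk of the spectrum fills the unit disk, while one eigenvalue sits near the predicted location despite the large operator norm of the perturbation.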

If one moves away from the case of self-adjoint perturbations, though, the situation changes. Let us now consider a matrix of the form , where is a randomised version of , e.g. , where the are iid Bernoulli signs; such models were proposed recently by Rajan and Abbott as a model for neural networks in which some nodes are excitatory (and give columns with positive mean) and some are inhibitory (leading to columns with negative mean). Despite the superficial similarity with the previous example, the outlier behaviour is now quite different. Instead of having one extremely large outlier (of size ) at an essentially deterministic location, we now have a number of eigenvalues of size , scattered according to a random process. Indeed, (in the case when the entries of were real and bounded) I was able to show that the outlier point process converged (in the sense of converging -point correlation functions) to the zeroes of a random Laurent series

where are iid real Gaussians. This is basically because the coefficients of the resolvent have a Neumann series whose coefficients enjoy a central limit theorem.

On the other hand, as already observed numerically (and rigorously, in the gaussian case) by Rajan and Abbott, if one projects such matrices to have row sum zero, then the outliers all disappear. This can be explained by another appeal to (1); this projection amounts to right-multiplying by the projection matrix to the zero-sum vectors. But by (1), the non-zero eigenvalues of the resulting matrix are the same as those for . Since annihilates , we thus see that in this case the bounded rank perturbation plays no role, and the question reduces to obtaining a circular law with no outliers for . As it turns out, this can be done by invoking the machinery of Van Vu and myself that we used to prove the circular law for various random matrix models.

The first volume of my 2009 blog book, “An epsilon of room“, has now been published by the AMS, as part of the Graduate Studies in Mathematics series. (So I finally have a book whose cover is at least partially in yellow, which for some reason seems to be the traditional colour for mathematics texts.) This volume contains the material from my 245B and 245C classes, and can thus be viewed as a second text in graduate real analysis. (I plan for one volume of the 2010 blog book to be devoted to the material from the 245A class I just taught, which would thus serve as a first text in graduate real analysis to complement this volume.)

The second volume, which covers a wide range of other topics, should also be published in the near future.

This week I am at the American Institute of Mathematics, as an organiser on a workshop on the universality phenomenon in random matrices. There have been a number of interesting discussions so far in this workshop. Percy Deift, in a lecture on universality for invariant ensembles, gave some applications of what he only half-jokingly termed “the most important identity in mathematics”, namely the formula

whenever are and matrices respectively (or more generally, and could be linear operators with sufficiently good spectral properties that make both sides equal). Note that the left-hand side is an determinant, while the right-hand side is a determinant; this formula is particularly useful when computing determinants of large matrices (or of operators), as one can often use it to transform such determinants into much smaller determinants. In particular, the asymptotic behaviour of determinants as can be converted via this formula to determinants of a fixed size (independent of ), which is often a more favourable situation to analyse. Unsurprisingly, this trick is particularly useful for understanding the asymptotic behaviour of determinantal processes.
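As a quick sanity check (not from the post; the dimensions below are arbitrary choices), one can verify the identity numerically for random rectangular matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3  # a "large" and a "small" dimension

A = rng.standard_normal((n, k))
B = rng.standard_normal((k, n))

lhs = np.linalg.det(np.eye(n) + A @ B)  # n x n determinant
rhs = np.linalg.det(np.eye(k) + B @ A)  # k x k determinant

print(np.isclose(lhs, rhs))  # True
```

The 50×50 determinant on the left collapses to a 3×3 determinant on the right, which is exactly the dimension-reduction trick described above.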

There are many ways to prove the identity. One is to observe first that when are invertible square matrices of the same size, and are conjugate to each other and thus clearly have the same determinant; a density argument then removes the invertibility hypothesis, and a padding-by-zeroes argument then extends the square case to the rectangular case. Another is to proceed via the spectral theorem, noting that and have the same non-zero eigenvalues.

By rescaling, one obtains the variant identity

which essentially relates the characteristic polynomial of with that of . When , a comparison of coefficients already gives important basic identities such as and ; when is not equal to , an inspection of the coefficient similarly gives the Cauchy-Binet formula (which, incidentally, is also useful when performing computations on determinantal processes).
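(The rescaled variant, reconstructed in standard notation — the original display did not survive:)

```latex
\[
\det(z\,1_n - AB) \;=\; z^{\,n-k}\,\det(z\,1_k - BA).
\]
% Comparing the z^{n-1} coefficients gives tr(AB) = tr(BA); comparing the
% z^{n-k} coefficients gives the Cauchy--Binet formula
\[
\det(BA) \;=\; \sum_{|S| = k} \det(B_{*S})\,\det(A_{S*}),
\]
% where S ranges over the k-element subsets of {1, ..., n}, and B_{*S}, A_{S*}
% denote the corresponding k x k column and row minors.
```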

Thanks to this formula (and with a crucial insight of Alice Guionnet), I was able to solve a problem (on outliers for the circular law) that I had in the back of my mind for a few months, and initially posed to me by Larry Abbott; I hope to talk more about this in a future post.

Today, though, I wish to talk about another piece of mathematics that emerged from an afternoon of free-form discussion that we managed to schedule within the AIM workshop. Specifically, we hammered out a heuristic model of the *mesoscopic* structure of the eigenvalues of the Gaussian Unitary Ensemble (GUE), where is a large integer. As is well known, the probability density of these eigenvalues is given by the *Ginibre distribution*

where is Lebesgue measure on the Weyl chamber , is a constant, and the Hamiltonian is given by the formula

At the macroscopic scale of , the eigenvalues are distributed according to the Wigner semicircle law
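(Reconstruction of the lost display — the semicircle density, in the normalisation where the eigenvalues have size O(√n):)

```latex
\[
\rho_{sc}\Bigl(\frac{x}{\sqrt n}\Bigr)\,\frac{dx}{\sqrt n},
\qquad
\rho_{sc}(x) := \frac{1}{2\pi}\,(4 - x^2)_+^{1/2},
\]
```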

Indeed, if one defines the *classical location* of the eigenvalue to be the unique solution in to the equation

then it is known that the random variable is quite close to . Indeed, a result of Gustavsson shows that, in the bulk region when , is distributed asymptotically as a gaussian random variable with mean and variance . Note that from the semicircular law, the factor is the mean eigenvalue spacing.

At the other extreme, at the microscopic scale of the mean eigenvalue spacing (which is comparable to in the bulk, but can be as large as at the edge), the eigenvalues are asymptotically distributed with respect to a special determinantal point process, namely the Dyson sine process in the bulk (and the Airy process on the edge), as discussed in this previous post.
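As a numerical sanity check of the macroscopic behaviour (a sketch, not from the post; the matrix size and tolerances are arbitrary choices), one can sample a GUE matrix and compare against the semicircle predictions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# GUE sample: Hermitian, with complex gaussian entries of variance 1 above
# the diagonal and real gaussian entries of variance 1 on the diagonal.
A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
H = (A + A.conj().T) / np.sqrt(2)

lam = np.linalg.eigvalsh(H) / np.sqrt(n)  # normalised to the scale of [-2, 2]

# Semicircle predictions: support [-2, 2]; the mass of [-1, 1] equals
# 1/3 + sqrt(3)/(2*pi), about 0.609.
print(f"largest |eigenvalue|: {np.abs(lam).max():.3f}")
print(f"fraction in [-1, 1]:  {np.mean(np.abs(lam) <= 1.0):.3f}")
```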

Here, I wish to discuss the *mesoscopic* structure of the eigenvalues, in which one involves scales that are intermediate between the microscopic scale and the macroscopic scale , for instance in correlating the eigenvalues and in the regime for some . Here, there is a surprising phenomenon; there is quite a long-range correlation between such eigenvalues. The result of Gustavsson shows that both and behave asymptotically like gaussian random variables, but a further result from the same paper shows that the correlation between these two random variables is asymptotic to (in the bulk, at least); thus, for instance, adjacent eigenvalues and are almost perfectly correlated (which makes sense, as their spacing is much less than either of their standard deviations), but that even very distant eigenvalues, such as and , have a correlation comparable to . One way to get a sense of this is to look at the trace

This is also the sum of the diagonal entries of a GUE matrix, and is thus normally distributed with a variance of . In contrast, each of the (in the bulk, at least) has a variance comparable to . In order for these two facts to be consistent, the average correlation between pairs of eigenvalues then has to be of the order of .
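(The back-of-the-envelope computation, with the lost quantities filled in using the standard normalisation — a reconstruction:)

```latex
% The trace is a sum of n iid real gaussian diagonal entries, so
\[
\operatorname{Var}\Bigl(\sum_{i=1}^n \lambda_i\Bigr)
= \operatorname{Var}\bigl(\operatorname{tr} M_n\bigr) = n.
\]
% If each bulk eigenvalue has variance of order (log n)/n (Gustavsson) and the
% average pair correlation is rho, then consistency forces
\[
n \;\sim\; n\cdot\frac{\log n}{n} \;+\; n^2\,\rho\,\frac{\log n}{n}
\quad\Longrightarrow\quad
\rho \;\sim\; \frac{1}{\log n}.
\]
```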

Below the fold, I give a heuristic way to see this correlation, based on Taylor expansion of the convex Hamiltonian around the minimum , which gives a conceptual probabilistic model for the mesoscopic structure of the GUE eigenvalues. While this heuristic is in no way rigorous, it does seem to explain many of the features currently known or conjectured about GUE, and looks likely to extend also to other models.

Tanja Eisner and I have just uploaded to the arXiv our paper “Large values of the Gowers-Host-Kra seminorms“, submitted to Journal d’Analyse Mathematique. This paper is concerned with the properties of three closely related families of (semi)norms, indexed by a positive integer :

- The *Gowers uniformity norms* of a (bounded, measurable, compactly supported) function taking values on a locally compact abelian group , equipped with a Haar measure ;
- The Gowers uniformity norms of a function on a discrete interval ; and
- The Gowers-Host-Kra seminorms of a function on an ergodic measure-preserving system .

These norms have been discussed in depth in previous blog posts, so I will just quickly review the definition of the first norm here (the other two (semi)norms are defined similarly). The norm is defined recursively by setting

and

where . Equivalently, one has

Informally, the Gowers uniformity norm measures the extent to which (the phase of ) behaves like a polynomial of degree less than . Indeed, if and is compact with normalised Haar measure , it is not difficult to show that is at most , with equality if and only if takes the form almost everywhere, where is a polynomial of degree less than (which means that for all ).
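(The lost displays defining these norms, reconstructed in the standard notation:)

```latex
% Base case and recursion:
\[
\|f\|_{U^1(G)} := \Bigl|\int_G f\,d\mu\Bigr|,
\qquad
\|f\|_{U^{k+1}(G)}^{2^{k+1}} := \int_G \|\Delta_h f\|_{U^k(G)}^{2^k}\,d\mu(h),
\]
% where \Delta_h f(x) := f(x+h)\,\overline{f(x)}.  Equivalently,
\[
\|f\|_{U^k(G)}^{2^k}
= \int_{G^{k+1}} \prod_{\omega\in\{0,1\}^k}
\mathcal{C}^{|\omega|} f\bigl(x + \omega_1 h_1 + \dots + \omega_k h_k\bigr)
\,d\mu(x)\,d\mu(h_1)\cdots d\mu(h_k),
\]
% with \mathcal{C} the complex conjugation operator and
% |\omega| := \omega_1 + \dots + \omega_k.
```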

Our first result is to show that this result is robust, uniformly over all choices of group :

Theorem 1 (-near extremisers) Let be a compact abelian group with normalised Haar measure , and let be such that and for some and . Then there exists a polynomial of degree at most such that , where is bounded by a quantity that goes to zero as for fixed .

The quantity can be described effectively (it is of polynomial size in ), but we did not seek to optimise it here. This result was already known in the case of vector spaces over a fixed finite field (where it is essentially equivalent to the assertion that the property of being a polynomial of degree at most is locally testable); the extension to general groups turns out to be fairly routine. The basic idea is to use the recursive structure of the Gowers norms, which tells us in particular that if is close to one, then is close to one for most , which by induction implies that is close to for some polynomials of degree at most and for most . (Actually, it is not difficult to use cocycle equations such as (when ) to upgrade “for most ” to “for all “.) To finish the job, one would like to express the as derivatives of a polynomial of degree at most . This turns out to be equivalent to requiring that the obey the cocycle equation

where is the translate of by . (In the paper, the sign conventions are reversed, so that , in order to be compatible with ergodic theory notation, but this makes no substantial difference to the arguments or results.) However, one does not quite get this right away; instead, by using some separation properties of polynomials, one can show the weaker statement that

where the are small real constants. To eliminate these constants, one exploits the trivial cohomology of the real line. From (1) one soon concludes that the obey the -cocycle equation

and an averaging argument then shows that is a -coboundary in the sense that

for some small scalar depending on . Subtracting from then gives the claim.

Similar results and arguments also hold for the and norms, which we will not detail here.

Dimensional analysis reveals that the norm is not actually the most natural norm with which to compare the norms against. An application of Young’s convolution inequality in fact reveals that one has the inequality
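(The lost display — the inequality (2) and its critical exponent, filled in with the standard values:)

```latex
\[
\|f\|_{U^k(G)} \;\le\; \|f\|_{L^{p_k}(G)},
\qquad
p_k := \frac{2^k}{k+1}, \tag{2}
\]
```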

where is the critical exponent , without any compactness or normalisation hypothesis on the group and the Haar measure . This allows us to extend the norm to all of . There is then a stronger inverse theorem available:

Theorem 2 (-near extremisers) Let be a locally compact abelian group, and let be such that and for some and . Then there exists a coset of a compact open subgroup of , and a polynomial of degree at most such that .

Conversely, it is not difficult to show that equality in (2) is attained when takes the form as above. The main idea of the proof is to use an inverse theorem for Young’s inequality due to Fournier to reduce matters to the case that was already established. An analogous result is also obtained for the norm on an ergodic system, but for technical reasons, the methods do not seem to apply easily to the norm. (This norm is essentially equivalent to the norm up to constants, with comparable to , but when working with near-extremisers, norms that are only equivalent up to constants can have quite different near-extremal behaviour.)

In the case when is a Euclidean group , it is possible to use the sharp Young inequality of Beckner and of Brascamp-Lieb to improve (2) somewhat. For instance, when , one has

with equality attained if and only if is a gaussian modulated by a quadratic polynomial phase. This additional gain of allows one to pinpoint the threshold for the previous near-extremiser results in the case of norms. For instance, by using the Host-Kra machinery of characteristic factors for the norm, combined with an explicit and concrete analysis of the -step nilsystems generated by that machinery, we can show that

whenever is a *totally* ergodic system and is orthogonal to all linear and quadratic eigenfunctions (which would otherwise form immediate counterexamples to the above inequality), with the factor being best possible. We can also establish analogous results for the and norms (using the inverse theorem of Ben Green and myself, in place of the Host-Kra machinery), although it is not clear to us whether the threshold remains best possible in this case.
