
One of my favourite unsolved problems in harmonic analysis is the restriction problem. This problem, first posed explicitly by Elias Stein, can take many equivalent forms, but one of them is this: one starts with a smooth compact hypersurface {S} (possibly with boundary) in {{\bf R}^d}, such as the unit sphere {S = S^2} in {{\bf R}^3}, and equips it with surface measure {d\sigma}. One then takes a bounded measurable function {f \in L^\infty(S,d\sigma)} on this surface, and then computes the (inverse) Fourier transform

\displaystyle  \widehat{fd\sigma}(x) = \int_S e^{2\pi i x \cdot \omega} f(\omega) d\sigma(\omega)

of the measure {fd\sigma}. As {f} is bounded and {d\sigma} is a finite measure, this is a bounded function on {{\bf R}^d}; from the dominated convergence theorem, it is also continuous. The restriction problem asks whether this Fourier transform also decays in space, and specifically whether {\widehat{fd\sigma}} lies in {L^q({\bf R}^d)} for some {q < \infty}. (This is a natural space to control decay because it is translation invariant, which is compatible on the frequency space side with the modulation invariance of {L^\infty(S,d\sigma)}.) By the closed graph theorem, this is the case if and only if there is an estimate of the form

\displaystyle  \| \widehat{f d\sigma} \|_{L^q({\bf R}^d)} \leq C_{q,d,S} \|f\|_{L^\infty(S,d\sigma)} \ \ \ \ \ (1)

for some constant {C_{q,d,S}} that can depend on {q,d,S} but not on {f}. By a limiting argument, to provide such an estimate it suffices to prove it under the additional assumption that {f} is smooth.

Strictly speaking, the above problem should be called the extension problem, but it is dual to the original formulation of the restriction problem, which asks to find those exponents {1 \leq q' \leq \infty} for which the Fourier transform of an {L^{q'}({\bf R}^d)} function {g} can be meaningfully restricted to a hypersurface {S}, in the sense that the map {g \mapsto \hat g|_{S}} can be continuously defined from {L^{q'}({\bf R}^d)} to, say, {L^1(S,d\sigma)}. A duality argument shows that the exponents {q'} for which the restriction property holds are the dual exponents to the exponents {q} for which the extension estimate (1) holds.

There are several motivations for studying the restriction problem. The problem is connected to the classical question of determining the nature of the convergence of various Fourier summation methods (and specifically, Bochner-Riesz summation); very roughly speaking, if one wishes to perform a partial Fourier transform by restricting the frequencies (possibly using a well-chosen weight) to some region {B} (such as a ball), then one expects this operation to be well behaved if the boundary {\partial B} of this region has good restriction (or extension) properties. More generally, the restriction problem for a surface {S} is connected to the behaviour of Fourier multipliers whose symbols are singular at {S}. The problem is also connected to the analysis of various linear PDE such as the Helmholtz equation, Schrödinger equation, wave equation, and the (linearised) Korteweg-de Vries equation, because solutions to such equations can be expressed via the Fourier transform in the form {\widehat{fd\sigma}} for various surfaces {S} (the sphere, paraboloid, light cone, and cubic for the Helmholtz, Schrödinger, wave, and linearised Korteweg-de Vries equation respectively). A particular family of restriction-type theorems for such surfaces, known as Strichartz estimates, plays a foundational role in the study of nonlinear perturbations of these linear equations (e.g. the nonlinear Schrödinger equation, the nonlinear wave equation, and the Korteweg-de Vries equation). There is also a fundamental connection between the restriction problem and the Kakeya problem, which roughly speaking concerns how tubes that point in different directions can overlap. Indeed, by superimposing special functions of the type {\widehat{fd\sigma}}, known as wave packets, which are concentrated on tubes in various directions, one can “encode” the Kakeya problem inside the restriction problem; in particular, the conjectured solution to the restriction problem implies the conjectured solution to the Kakeya problem. Finally, the restriction problem serves as a simplified toy model for studying discrete exponential sums whose coefficients do not have a well controlled phase; this perspective was, for instance, used by Ben Green when he established Roth’s theorem in the primes by Fourier-analytic methods, which was in turn one of the main inspirations for our later work establishing arbitrarily long progressions in the primes, although we ended up using ergodic-theoretic arguments instead of Fourier-analytic ones and so did not directly use restriction theory in that paper.

The estimate (1) is trivial for {q=\infty} and becomes harder for smaller {q}. The geometry, and more precisely the curvature, of the surface {S}, plays a key role: if {S} contains a portion which is completely flat, then it is not difficult to concoct an {f} for which {\widehat{f d\sigma}} fails to decay in the normal direction to this flat portion, and so there are no restriction estimates for any finite {q}. Conversely, if {S} is not infinitely flat at any point, then from the method of stationary phase, the Fourier transform {\widehat{d\sigma}} can be shown to decay at a power rate at infinity, and this together with a standard method known as the {TT^*} argument can be used to give non-trivial restriction estimates for finite {q}. However, these arguments fall somewhat short of obtaining the best possible exponents {q}. For instance, in the case of the sphere {S = S^{d-1} \subset {\bf R}^d}, the Fourier transform {\widehat{d\sigma}(x)} is known to decay at the rate {O(|x|^{-(d-1)/2})} and no better as {x \rightarrow \infty}, which shows that the condition {q > \frac{2d}{d-1}} is necessary in order for (1) to hold for this surface. The restriction conjecture for {S^{d-1}} asserts that this necessary condition is also sufficient. However, the {TT^*}-based argument gives only the Tomas-Stein theorem, which in this context gives (1) in the weaker range {q \geq \frac{2(d+1)}{d-1}}. (On the other hand, by the nature of the {TT^*} method, the Tomas-Stein theorem does allow the {L^\infty(S,d\sigma)} norm on the right-hand side to be relaxed to {L^2(S,d\sigma)}, at which point the Tomas-Stein exponent {\frac{2(d+1)}{d-1}} becomes best possible. The fact that the Tomas-Stein theorem has an {L^2} norm on the right-hand side is particularly valuable for applications to PDE, leading in particular to the Strichartz estimates mentioned earlier.)
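
As a quick numerical sanity check on this stationary phase numerology (a purely illustrative computation, not taken from any of the papers discussed here; the function names and discretisation are arbitrary choices), one can evaluate the extension {\widehat{d\sigma}} of {f=1} in the simplest case of the unit circle in {{\bf R}^2}, where the predicted decay {|x|^{-(d-1)/2}} manifests as an oscillation envelope of size about {2/|x|^{1/2}}:

```python
# Illustrative sketch (not from the papers discussed above): the extension of
# f = 1 from the unit circle is \widehat{d\sigma}(x) = \int_0^{2 pi}
# e^{2 pi i |x| cos(theta)} d theta = 2 pi J_0(2 pi |x|), whose oscillation
# envelope is about 2 / sqrt(|x|), i.e. the |x|^{-(d-1)/2} rate with d = 2.
import numpy as np

def extension_of_one(r, n=100_000):
    """Numerically evaluate the extension of f = 1 at a point of magnitude r."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Rectangle rule over a full period (very accurate for periodic integrands).
    return (2.0 * np.pi / n) * np.sum(np.exp(2j * np.pi * r * np.cos(theta)))

def envelope(R, samples=50):
    """Approximate the oscillation envelope of the extension near |x| = R."""
    return max(abs(extension_of_one(r)) for r in np.linspace(R, R + 1.0, samples))

for R in [10.0, 40.0, 160.0]:
    env = envelope(R)
    # If the |x|^{-1/2} decay is correct, sqrt(R) * envelope should stay near 2.
    print(f"R = {R:6.1f}   envelope = {env:.4f}   sqrt(R) * envelope = {np.sqrt(R) * env:.3f}")
```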

Over the last two decades, there has been a fair amount of work in pushing past the Tomas-Stein barrier. For sake of concreteness let us work just with the restriction problem for the unit sphere {S^2} in {{\bf R}^3}. Here, the restriction conjecture asserts that (1) holds for all {q > 3}, while the Tomas-Stein theorem gives only {q \geq 4}. By combining a multiscale analysis approach with some new progress on the Kakeya conjecture, Bourgain was able to obtain the first improvement on this range, establishing the restriction conjecture for {q > 4 - \frac{2}{15}}. The methods were steadily refined over the years; until recently, the best result (due to myself) was that the conjecture held for all {q > 3 \frac{1}{3}}, which proceeded by analysing a “bilinear {L^2}” variant of the problem studied previously by Bourgain and by Wolff. This is essentially the limit of that method; the relevant bilinear {L^2} estimate fails for {q < 3 + \frac{1}{3}}. (This estimate was recently established at the endpoint {q=3+\frac{1}{3}} by Jungjin Lee (personal communication), though this does not quite improve the range of exponents in (1) due to a logarithmic inefficiency in converting the bilinear estimate to a linear one.)

On the other hand, the full range {q>3} of exponents in (1) was obtained by Bennett, Carbery, and myself (with an alternate proof later given by Guth), but only under the additional assumption of non-coplanar interactions. In three dimensions, this assumption was enforced by replacing (1) with the weaker trilinear (and localised) variant

\displaystyle  \| \widehat{f_1 d\sigma_1} \widehat{f_2 d\sigma_2} \widehat{f_3 d\sigma_3} \|_{L^{q/3}(B(0,R))} \leq C_{q,d,S_1,S_2,S_3,\epsilon} R^\epsilon \|f_1\|_{L^\infty(S_1,d\sigma_1)} \|f_2\|_{L^\infty(S_2,d\sigma_2)} \|f_3\|_{L^\infty(S_3,d\sigma_3)} \ \ \ \ \ (2)

where {\epsilon>0} and {R \geq 1} are arbitrary, {B(0,R)} is the ball of radius {R} in {{\bf R}^3}, and {S_1,S_2,S_3} are compact portions of {S} whose unit normals {n_1(\cdot),n_2(\cdot),n_3(\cdot)} are never coplanar, thus there is a uniform lower bound

\displaystyle  |n_1(\omega_1) \wedge n_2(\omega_2) \wedge n_3(\omega_3)| \geq c

for some {c>0} and all {\omega_1 \in S_1, \omega_2 \in S_2, \omega_3 \in S_3}. If it were not for this non-coplanarity restriction, (2) would be equivalent to (1) (by setting {S_1=S_2=S_3} and {f_1=f_2=f_3}, with the converse implication coming from Hölder’s inequality; the {R^\epsilon} loss can be removed by a lemma from a paper of mine). At the time we wrote this paper, we tried fairly hard to remove this non-coplanarity restriction in order to recover progress on the original restriction conjecture, but without much success.
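
To see the non-coplanarity condition in a concrete instance (again a toy illustration, not anything from the papers just cited; the cap size and sampling scheme are arbitrary), one can take three small caps on {S^2} centred at the coordinate directions and check numerically that the triple wedge product, which is just a {3 \times 3} determinant, stays uniformly bounded away from zero:

```python
# Toy illustration (not from the papers above): for three caps on S^2 centred
# at the coordinate directions e_1, e_2, e_3, the quantity |n_1 ^ n_2 ^ n_3| is
# |det(n_1, n_2, n_3)|, and it admits a uniform lower bound c > 0 over the caps.
# The cap radius 0.2 and the crude sampling scheme are arbitrary choices.
import itertools
import numpy as np

def cap_sample(centre, radius, steps=5):
    """Crudely sample unit normals lying within roughly `radius` of `centre`."""
    centre = np.asarray(centre, dtype=float)
    u, w = np.roll(centre, 1), np.roll(centre, 2)   # orthonormal frame at the centre
    grid = np.linspace(-radius, radius, steps)
    normals = []
    for a, b in itertools.product(grid, repeat=2):
        v = centre + a * u + b * w
        normals.append(v / np.linalg.norm(v))
    return normals

caps = [cap_sample(e, radius=0.2) for e in np.eye(3)]
c = min(abs(np.linalg.det(np.vstack(triple)))
        for triple in itertools.product(*caps))
print(f"lower bound over the sampled normals: c = {c:.3f}")   # comfortably above 0
```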

A few weeks ago, though, Bourgain and Guth found a new way to use multiscale analysis to “interpolate” between the result of Bennett, Carbery and myself (which has optimal exponents, but requires non-coplanar interactions) and a more classical square function estimate of Córdoba that handles the coplanar case. A direct application of this interpolation method already matches the previous best known result in three dimensions (i.e. that (1) holds for {q > 3 \frac{1}{3}}). But it also allows for the insertion of additional input, such as the best Kakeya estimate currently known in three dimensions, due to Wolff. This enlarges the range slightly to {q > 3.3}. The method can also be extended to variable-coefficient settings, and in some of these cases (where there is so much “compression” going on that no additional Kakeya estimates are available) the resulting estimates are best possible.

As is often the case in this field, there is a lot of technical book-keeping and juggling of parameters in the formal arguments of Bourgain and Guth, but the main ideas and “numerology” can be expressed fairly readily. (In mathematics, numerology refers to the empirically observed relationships between various key exponents and other numerical parameters; in many cases, one can use shortcuts such as dimensional analysis or informal heuristics to compute these exponents long before the formal argument is completely in place.) Below the fold, I would like to record this numerology for the simplest of the Bourgain-Guth arguments, namely a reproof of (1) for {q > 3 \frac{1}{3}}. This is primarily for my own benefit, but may be of interest to other experts in this particular topic. (See also my 2003 lecture notes on the restriction conjecture.)

In order to focus on the ideas in the paper (rather than on the technical details), I will adopt an informal, heuristic approach, for instance by interpreting the uncertainty principle and the pigeonhole principle rather liberally, and by focusing on main terms in a decomposition and ignoring secondary terms. I will also be somewhat vague with regard to asymptotic notation such as {\ll}. Making the arguments rigorous requires a certain amount of standard but tedious effort (and is one of the main reasons why the Bourgain-Guth paper is as long as it is), which I will not focus on here.

Read the rest of this entry »

Combinatorial incidence geometry is the study of the possible combinatorial configurations between geometric objects such as lines and circles. One of the basic open problems in the subject has been the Erdős distance problem, posed in 1946:

Problem 1 (Erdős distance problem) Let {N} be a large natural number. What is the least number {\# \{ |x_i-x_j|: 1 \leq i < j \leq N \}} of distances that are determined by {N} points {x_1,\ldots,x_N} in the plane?

Erdős called this least number {g(N)}. For instance, one can check that {g(3)=1} and {g(4)=2}, although the precise computation of {g} rapidly becomes more difficult after this. By considering {N} points in arithmetic progression, we see that {g(N) \leq N-1}. By considering the slightly more sophisticated example of a {\sqrt{N} \times \sqrt{N}} lattice grid (assuming that {N} is a square number for simplicity), and using some analytic number theory, one can obtain the slightly better asymptotic bound {g(N) = O( N / \sqrt{\log N} )}.
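
For readers who would like to see the lattice example numerically (a small illustrative experiment, not from the Guth-Katz paper; the implied constant in the {O()} bound is of course not being tracked here), one can count the distinct distances in a {k \times k} grid directly and compare against {N/\sqrt{\log N}}:

```python
# Quick experiment (illustrative only): count the distinct distances determined
# by a k x k integer grid, so N = k^2 points.  Every distance has the form
# sqrt(dx^2 + dy^2) with 0 <= dx, dy <= k-1 and (dx, dy) != (0, 0), so it
# suffices to count the distinct values of dx^2 + dy^2.
import math

def distinct_distances_in_grid(k):
    squares = {dx * dx + dy * dy for dx in range(k) for dy in range(k)}
    squares.discard(0)
    return len(squares)

for k in [10, 30, 100, 300]:
    N = k * k
    d = distinct_distances_in_grid(k)
    print(f"k = {k:4d}   N = {N:6d}   distinct distances = {d:7d}"
          f"   N / sqrt(log N) = {N / math.sqrt(math.log(N)):10.1f}")
```

(The counts grow visibly more slowly than {N}, consistent with the {O(N/\sqrt{\log N})} bound, though the logarithmic savings is naturally slow to manifest at these small scales.)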

On the other hand, lower bounds are more difficult to obtain. As observed by Erdős, an easy argument, ultimately based on the incidence geometry fact that any two circles intersect in at most two points, gives the lower bound {g(N) \gg N^{1/2}}. The exponent {1/2} has been slowly increasing over the years by a series of increasingly intricate arguments combining incidence geometry facts with other known results in combinatorial incidence geometry (most notably the Szemerédi-Trotter theorem) and also some tools from additive combinatorics; however, these methods seemed to fall quite short of getting to the optimal exponent of {1}. (Indeed, until last week, the best lower bound known was approximately {N^{0.8641}}, due to Katz and Tardos.)

Very recently, though, Guth and Katz have obtained a near-optimal result:

Theorem 2 One has {g(N) \gg N / \log N}.

The proof neatly combines together several powerful and modern tools in a new way: a recent geometric reformulation of the problem due to Elekes and Sharir; the polynomial method as used recently by Dvir, Guth, and Guth-Katz on related incidence geometry problems (and discussed previously on this blog); and the somewhat older method of cell decomposition (also discussed on this blog). A key new insight is that the polynomial method (and more specifically, the polynomial Ham Sandwich theorem, also discussed previously on this blog) can be used to efficiently create cells.

In this post, I thought I would sketch some of the key ideas used in the proof, though I will not give the full argument here (the paper itself is largely self-contained, well motivated, and of only moderate length). In particular I will not go through all the various cases of configuration types that one has to deal with in the full argument, but only some illustrative special cases.

To simplify the exposition, I will repeatedly rely on “pigeonholing cheats”. A typical such cheat: if I have {n} objects (e.g. {n} points or {n} lines), each of which could be of one of two types, I will assume that either all {n} of the objects are of the first type, or all {n} of the objects are of the second type. (In truth, I can only assume that at least {n/2} of the objects are of the first type, or at least {n/2} of the objects are of the second type; but in practice, having {n/2} instead of {n} only ends up costing an unimportant multiplicative constant in the type of estimates used here.) A related such cheat: if one has {n} objects {A_1,\ldots,A_n} (again, think of {n} points or {n} circles), and to each object {A_i} one can associate some natural number {k_i} (e.g. some sort of “multiplicity” for {A_i}) that is of “polynomial size” (of size {O(N^{O(1)})}), then I will assume in fact that all the {k_i} are in a fixed dyadic range {[k,2k]} for some {k}. (In practice, the dyadic pigeonhole principle can only achieve this after throwing away all but about {n/\log N} of the original {n} objects; it is this type of logarithmic loss that eventually leads to the logarithmic factor in the main theorem.) Using the notation {X \sim Y} to denote the assertion that {C^{-1} Y \leq X \leq CY} for an absolute constant {C}, we thus have {k_i \sim k} for all {i}, so that {k_i} is morally constant.
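
As a toy numerical illustration of this dyadic pigeonholing cheat (purely illustrative, and not from the Guth-Katz paper; the random multiplicities below are just a stand-in for whatever the {k_i} happen to be), one can bucket the {k_i} into dyadic ranges and keep the most popular bucket:

```python
# Toy illustration of dyadic pigeonholing: bucket the multiplicities k_i into
# dyadic ranges [2^j, 2^{j+1}) and keep the largest bucket.  There are only
# O(log N) buckets, so the kept class retains at least n / O(log N) of the
# objects, and within it every k_i lies in a single dyadic range [k, 2k).
import math
import random
from collections import defaultdict

random.seed(0)
N = 10_000
multiplicities = [random.randint(1, N) for _ in range(N)]   # stand-in for the k_i

buckets = defaultdict(list)
for k_i in multiplicities:
    buckets[int(math.log2(k_i))].append(k_i)

j, kept = max(buckets.items(), key=lambda item: len(item[1]))
print(f"{len(buckets)} buckets (about log2(N) = {math.log2(N):.1f}); "
      f"largest bucket keeps {len(kept)} of {N} objects, "
      f"all with k_i in [{2 ** j}, {2 ** (j + 1)})")
```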

I will also use asymptotic notation rather loosely, to avoid cluttering the exposition with a certain amount of routine but tedious bookkeeping of constants. In particular, I will use the informal notation {X \lll Y} or {Y \ggg X} to denote the statement that {X} is “much less than” {Y} or {Y} is “much larger than” {X}, by some large constant factor.

See also Janos Pach’s recent reaction to the Guth-Katz paper on Kalai’s blog.

Read the rest of this entry »

One of my favourite families of conjectures (and one that has preoccupied a significant fraction of my own research) is the family of Kakeya conjectures in geometric measure theory and harmonic analysis.  There are many (not quite equivalent) conjectures in this family.  The cleanest one to state is the set conjecture:

Kakeya set conjecture: Let n \geq 1, and let E \subset {\Bbb R}^n contain a unit line segment in every direction (such sets are known as Kakeya sets or Besicovitch sets).  Then E has Hausdorff dimension and Minkowski dimension equal to n.

One reason why I find these conjectures fascinating is the sheer variety of mathematical fields that arise both in the partial results towards this conjecture, and in the applications of those results to other problems.  See for instance this survey of Wolff, my Notices article and this article of Łaba on the connections between this problem and other problems in Fourier analysis, PDE, and additive combinatorics; there have even been some connections to number theory and to cryptography.  At the other end of the pipeline, the mathematical tools that have gone into the proofs of various partial results have included:

[This list is not exhaustive.]

Very recently, I was pleasantly surprised to see yet another mathematical tool used to obtain new progress on the Kakeya conjecture, namely (a generalisation of) the famous Ham Sandwich theorem from algebraic topology.  This was recently used by Guth to establish a certain endpoint multilinear Kakeya estimate left open by the work of Bennett, Carbery, and myself.  With regard to the Kakeya set conjecture, Guth’s arguments assert, roughly speaking, that the only Kakeya sets that can fail to have full dimension are those which obey a certain “planiness” property, which informally means that the line segments that pass through a typical point in the set must be essentially coplanar. (This property first surfaced in my paper with Katz and Łaba.)  Guth’s arguments can be viewed as a partial analogue of Dvir’s arguments in the finite field setting (which I discussed in this blog post) to the Euclidean setting; in particular, both arguments rely crucially on the ability to create a polynomial of controlled degree that vanishes at or near a large number of points.  Unfortunately, while these arguments fully settle the Kakeya conjecture in the finite field setting, it appears that some new ideas are still needed to finish off the problem in the Euclidean setting.  Nevertheless this is an interesting new development in the long history of this conjecture, in particular demonstrating that the polynomial method can be successfully applied to continuous Euclidean problems (i.e. it is not confined to the finite field setting).
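
To give a flavour of the linear-algebra fact behind “creating a polynomial of controlled degree that vanishes at many points”, here is a minimal sketch (a generic dimension-counting illustration, not the actual argument of Dvir or Guth): if the number of monomials of degree at most {d} in {n} variables, namely {\binom{d+n}{n}}, exceeds the number of points, then the evaluation matrix has a nontrivial null vector, i.e. there is a nonzero polynomial of degree at most {d} vanishing at all the points.

```python
# Minimal sketch of the dimension-counting step of the polynomial method (an
# illustration, not the argument of Dvir or Guth): when the number of monomials
# of degree <= d in n variables exceeds the number of points, the evaluation
# matrix (points x monomials) has a nonzero null vector, i.e. a nonzero
# polynomial of degree <= d vanishing at every point.  Found here via an SVD.
import itertools
import numpy as np

def vanishing_polynomial(points, degree):
    """Return (coefficients, monomial exponents) of a nonzero polynomial of the
    given degree vanishing (up to roundoff) at every point, when one exists."""
    n = len(points[0])
    monomials = [e for e in itertools.product(range(degree + 1), repeat=n)
                 if sum(e) <= degree]
    # Evaluation matrix: one row per point, one column per monomial.
    A = np.array([[np.prod([p[i] ** e[i] for i in range(n)]) for e in monomials]
                  for p in points], dtype=float)
    coeffs = np.linalg.svd(A)[2][-1]   # right singular vector lying in the null space
    return coeffs, monomials

# Five points in R^2; degree 2 gives 6 monomials > 5 points, so some conic
# passes through all of them.
points = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 1.0), (4.0, 3.0)]
coeffs, monomials = vanishing_polynomial(points, degree=2)
residuals = [abs(sum(c * np.prod([p[i] ** e[i] for i in range(2)])
                     for c, e in zip(coeffs, monomials))) for p in points]
print("max |P(point)| =", max(residuals))   # zero up to roundoff
```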

In this post I would like to sketch some of the key ideas in Guth’s paper, in particular the role of the Ham Sandwich theorem (or more precisely, a polynomial generalisation of this theorem first observed by Gromov).

Read the rest of this entry »
