You are currently browsing the tag archive for the ‘Larry Guth’ tag.

Just a brief post to record some notable papers in my fields of interest that appeared on the arXiv recently.

- “A sharp square function estimate for the cone in $\mathbb{R}^3$”, by Larry Guth, Hong Wang, and Ruixiang Zhang. This paper establishes an optimal (up to epsilon losses) square function estimate for the three-dimensional light cone that was essentially conjectured by Mockenhaupt, Seeger, and Sogge. It has a number of other consequences, including Sogge’s local smoothing conjecture for the wave equation in two spatial dimensions, which in turn implies the (already known) Bochner-Riesz, restriction, and Kakeya conjectures in two dimensions. Interestingly, modern techniques such as polynomial partitioning and decoupling estimates are not used in this argument; instead, the authors mostly rely on an induction on scales argument and Kakeya type estimates. Many previous authors (including myself) were able to get weaker estimates of this type by an induction on scales method, but there were always significant inefficiencies in doing so; in particular, knowing the sharp square function estimate at smaller scales did not imply the sharp square function estimate at the given larger scale. The authors here get around this issue by finding an even stronger estimate that implies the square function estimate, but behaves significantly better with respect to induction on scales.
- “On the Chowla and twin primes conjectures over $\mathbb{F}_q[T]$”, by Will Sawin and Mark Shusterman. This paper resolves a number of well known open conjectures in analytic number theory, such as the Chowla conjecture and the twin prime conjecture (in the strong form conjectured by Hardy and Littlewood), in the case of function fields $\mathbb{F}_q[T]$, where the characteristic $p$ of the field is a fixed prime (in contrast to a number of existing results in the “large $q$” limit) but the field size $q = p^j$ has a large exponent $j$. The techniques here are orthogonal to those used in recent progress towards the Chowla conjecture over the integers (e.g., in this previous paper of mine); the starting point is an algebraic observation that in certain function fields, the Möbius function behaves like a quadratic Dirichlet character along certain arithmetic progressions. In principle, this reduces problems such as Chowla’s conjecture to problems about estimating sums of Dirichlet characters, for which more is known; but the task is still far from trivial.
- “Bounds for sets with no polynomial progressions”, by Sarah Peluse. This paper can be viewed as part of a larger project to obtain quantitative density Ramsey theorems of Szemerédi type. For instance, Gowers famously established a relatively good quantitative bound for Szemerédi’s theorem that all dense subsets of the integers contain arbitrarily long arithmetic progressions $a, a+r, \dots, a+(k-1)r$. The corresponding question for polynomial progressions $a, a+P_1(r), \dots, a+P_k(r)$ is considered more difficult for a number of reasons. One of them is that dilation invariance is lost; a dilation of an arithmetic progression is again an arithmetic progression, but a dilation of a polynomial progression will in general not be a polynomial progression with the same polynomials $P_1, \dots, P_k$. Another issue is that the ranges of the two parameters $a, r$ are now at different scales. Peluse gets around these difficulties in the case when all the polynomials have distinct degrees, which is in some sense the opposite case to that considered by Gowers (in particular, she avoids the need to obtain quantitative inverse theorems for high order Gowers norms, which were recently obtained in the integer setting by Manners but with bounds that are probably not strong enough for Peluse’s purposes, thanks to a degree lowering argument that is available in this case). To resolve the first difficulty one has to make all the estimates rather uniform in the coefficients of the polynomials $P_1, \dots, P_k$, so that one can still run a density increment argument efficiently. To resolve the second difficulty one needs to find a quantitative concatenation theorem for Gowers uniformity norms. Many of these ideas were developed in previous papers of Peluse and Peluse-Prendiville in simpler settings.
- “On blow up for the energy super critical defocusing non linear Schrödinger equations”, by Frank Merle, Pierre Raphael, Igor Rodnianski, and Jeremie Szeftel. This paper (when combined with two companion papers) resolves a long-standing problem as to whether finite time blowup occurs for the defocusing supercritical nonlinear Schrödinger equation (at least in certain dimensions and nonlinearities). I had a previous paper establishing a result like this if one “cheated” by replacing the nonlinear Schrödinger equation by a system of such equations, but remarkably they are able to tackle the original equation itself without any such cheating. Given the very analogous situation with Navier-Stokes, where again one can create finite time blowup by “cheating” and modifying the equation, it does raise hope that finite time blowup for the incompressible Navier-Stokes and Euler equations can also be established… In fact the connection may not just be at the level of analogy; a surprising key ingredient in the proofs here is the observation that a certain blowup ansatz for the nonlinear Schrödinger equation is governed by solutions to the (compressible) Euler equation, and finite time blowup examples for the latter can be used to construct finite time blowup examples for the former.

Given any finite collection of elements $(f_i)_{i \in I}$ in some Banach space $X$, the triangle inequality tells us that

$$\Big\| \sum_{i \in I} f_i \Big\|_X \leq \sum_{i \in I} \| f_i \|_X.$$
However, when the $f_i$ all “oscillate in different ways”, one expects to improve substantially upon the triangle inequality. For instance, if $X$ is a Hilbert space and the $f_i$ are mutually orthogonal, we have the Pythagorean theorem

$$\Big\| \sum_{i \in I} f_i \Big\|_X = \Big( \sum_{i \in I} \| f_i \|_X^2 \Big)^{1/2}.$$
For sake of comparison, from the triangle inequality and Cauchy-Schwarz one has the general inequality

$$\Big\| \sum_{i \in I} f_i \Big\|_X \leq |I|^{1/2} \Big( \sum_{i \in I} \| f_i \|_X^2 \Big)^{1/2}$$

for any finite collection $(f_i)_{i \in I}$ in any Banach space $X$, where $|I|$ denotes the cardinality of $I$. Thus orthogonality in a Hilbert space yields “square root cancellation”, saving a factor of $|I|^{1/2}$ or so over the trivial bound coming from the triangle inequality.

More generally, let us somewhat informally say that a collection $(f_i)_{i \in I}$ exhibits *decoupling in $X$* if one has the Pythagorean-like inequality

$$\Big\| \sum_{i \in I} f_i \Big\|_X \ll_\varepsilon |I|^\varepsilon \Big( \sum_{i \in I} \| f_i \|_X^2 \Big)^{1/2}$$

for any $\varepsilon > 0$, thus one obtains almost the full square root cancellation in the $X$ norm. The theory of *almost orthogonality* can then be viewed as the theory of decoupling in Hilbert spaces such as $L^2$. In $L^p$ spaces for $p < 2$ one usually does not expect this sort of decoupling; for instance, if the $f_i$ are disjointly supported one has

$$\Big\| \sum_{i \in I} f_i \Big\|_{L^p} = \Big( \sum_{i \in I} \| f_i \|_{L^p}^p \Big)^{1/p},$$

and the right-hand side can be much larger than $( \sum_{i \in I} \| f_i \|_{L^p}^2 )^{1/2}$ when $p < 2$. At the opposite extreme, one usually does not expect to get decoupling in $L^\infty$, since one could conceivably align the $f_i$ to all attain a maximum magnitude at the same location with the same phase, at which point the triangle inequality in $L^\infty$ becomes sharp.

However, in some cases one can get decoupling for certain $p > 2$. For instance, suppose we are in $L^4$, and that $f_1, \dots, f_n$ are *bi-orthogonal* in the sense that the products $f_i f_j$ for $i < j$ are pairwise orthogonal in $L^2$. Then we have

$$\Big\| \sum_{i=1}^n f_i \Big\|_{L^4}^2 = \Big\| \sum_{i,j=1}^n f_i f_j \Big\|_{L^2} \ll \Big( \sum_{i < j} \| f_i f_j \|_{L^2}^2 \Big)^{1/2} + \sum_{i=1}^n \| f_i^2 \|_{L^2} \ll \sum_{i=1}^n \| f_i \|_{L^4}^2,$$

giving decoupling in $L^4$. (Similarly if each of the $f_i f_j$ is orthogonal to all but $O(1)$ of the other $f_{i'} f_{j'}$.) A similar argument also gives decoupling in $L^6$ when one has tri-orthogonality (with the products $f_i f_j f_k$ mostly orthogonal to each other), and so forth. As a slight variant, Khintchine’s inequality also indicates that decoupling should occur in $L^p$ for any fixed $2 \leq p < \infty$ if one multiplies each of the $f_i$ by an independent random sign $\varepsilon_i \in \{-1, +1\}$.
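As a quick numerical illustration of the bi-orthogonality heuristic (a sketch I am adding here, not part of the original argument): for exponentials $f_j(x) = e(\lambda_j x)$ on $[0,1]$, the quantity $\| \sum_j f_j \|_{L^4}^4$ is exactly the *additive energy* of the frequency set, i.e. the number of quadruples with $\lambda_{j_1} + \lambda_{j_2} = \lambda_{j_3} + \lambda_{j_4}$. Frequencies in an arithmetic progression have energy comparable to $N^3$ (no $L^4$ decoupling), while the squares $1^2, \dots, N^2$ have far fewer coincidences, consistent with near square-root cancellation in $L^4$:

```python
from collections import Counter

def additive_energy(A):
    """Number of quadruples (a, b, c, d) in A^4 with a + b = c + d;
    this equals the L^4[0,1] norm (to the 4th power) of sum_j e(a_j x)."""
    reps = Counter(a + b for a in A for b in A)
    return sum(m * m for m in reps.values())

N = 40
ap = list(range(1, N + 1))                  # frequencies in arithmetic progression
squares = [j * j for j in range(1, N + 1)]  # frequencies on the parabola

E_ap = additive_energy(ap)
E_sq = additive_energy(squares)
print(E_ap, E_sq)   # E_ap grows like N^3, E_sq much more slowly
```

The exact count for the progression is $(2N^3+N)/3$, while the squares give only $O(N^{2+\varepsilon})$; this gap is a baby case of the $L^4$ decoupling phenomenon for the parabola.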

In recent years, Bourgain and Demeter have been establishing *decoupling theorems* in $L^p$ spaces for various key exponents $p$, in the “restriction theory” setting in which the $f_i$ are Fourier transforms of measures supported on different portions of a given surface or curve; this builds upon the earlier decoupling theorems of Wolff. In a recent paper with Guth, they established the following decoupling theorem for the curve $\gamma([0,1])$ parameterised by the polynomial curve

$$\gamma(t) := (t, t^2, \dots, t^k).$$
For any ball $B = B(x_0, r)$ in $\mathbb{R}^k$, let $w_B: \mathbb{R}^k \to \mathbb{R}^+$ denote the weight

$$w_B(x) := \frac{1}{(1 + \frac{|x - x_0|}{r})^{100k}},$$

which should be viewed as a smoothed out version of the indicator function $1_B$ of $B$. In particular, the space $L^p(w_B) := L^p(\mathbb{R}^k, w_B(x)\,dx)$ can be viewed as a smoothed out version of the space $L^p(B)$. For future reference we observe a fundamental self-similarity of the curve $\gamma$: any arc $\gamma(I)$ in this curve, with $I$ a compact subinterval of $[0,1]$, is affinely equivalent to the standard arc $\gamma([0,1])$.

Theorem 1 (Decoupling theorem) Let $k \geq 2$. Subdivide the unit interval $[0,1]$ into $N$ equal subintervals $I_i$ of length $1/N$, and for each such $I_i$, let $f_i: \mathbb{R}^k \to \mathbb{C}$ be the Fourier transform
$$f_i(x) := \int_{\gamma(I_i)} e(x \cdot \xi)\, d\mu_i(\xi)$$
of a finite Borel measure $\mu_i$ on the arc $\gamma(I_i)$, where $e(\theta) := e^{2\pi i \theta}$. Then the $f_i$ exhibit decoupling in $L^{k(k+1)}(w_B)$ for any ball $B$ of radius $N^k$.

Orthogonality gives the $k=1$ case of this theorem. The bi-orthogonality type arguments sketched earlier only give decoupling in $L^p$ up to $p = 4$ or so; the point here is that we can now get a much larger value of $p$. The $k=2$ case of this theorem was previously established by Bourgain and Demeter (who obtained in fact an analogous theorem for any curved hypersurface). The exponent $k(k+1)$ (and the radius $N^k$) is best possible, as can be seen by the following basic example. If

$$f_i(x) := \int_{I_i} e(x \cdot \gamma(t))\, \psi_i(t)\, dt,$$

where $\psi_i$ is a bump function adapted to $I_i$, then standard Fourier-analytic computations show that $|f_i|$ will be comparable to $1/N$ on a rectangular box of dimensions $N \times N^2 \times \cdots \times N^k$ (and thus volume $N^{k(k+1)/2}$) centred at the origin, and will exhibit decay away from this box, with $\| f_i \|_{L^{k(k+1)}(w_B)}$ comparable to

$$\frac{1}{N} \Big( N^{1 + 2 + \cdots + k} \Big)^{\frac{1}{k(k+1)}} = N^{-1/2}.$$

On the other hand, $\sum_{i=1}^N f_i$ is comparable to $1$ on a ball of radius comparable to $1$ centred at the origin, so $\| \sum_{i=1}^N f_i \|_{L^{k(k+1)}(w_B)}$ is $\gg 1$, which is just barely consistent with decoupling. This calculation shows that decoupling will fail if $k(k+1)$ is replaced by any larger exponent, and also if the radius of the ball $B$ is reduced to be significantly smaller than $N^k$.

This theorem has the following consequence of importance in analytic number theory:

Corollary 2 (Vinogradov main conjecture) Let $s, k, N \geq 1$ be integers, and let $\varepsilon > 0$. Then

$$\int_{[0,1]^k} \Big| \sum_{j=1}^N e( j x_1 + j^2 x_2 + \cdots + j^k x_k ) \Big|^{2s}\, dx_1 \cdots dx_k \ll_{\varepsilon, s, k} N^{s + \varepsilon} + N^{2s - \frac{k(k+1)}{2} + \varepsilon}.$$
*Proof:* By the Hölder inequality (and the trivial bound of $N$ for the exponential sum), it suffices to treat the critical case $s = \frac{k(k+1)}{2}$, that is to say to show that

$$\int_{[0,1]^k} \Big| \sum_{j=1}^N e( j x_1 + \cdots + j^k x_k ) \Big|^{k(k+1)}\, dx_1 \cdots dx_k \ll_\varepsilon N^{\frac{k(k+1)}{2} + \varepsilon}.$$
We can rescale this as

$$\int_{[0,N] \times [0,N^2] \times \cdots \times [0,N^k]} \Big| \sum_{j=1}^N e( x \cdot \gamma(j/N) ) \Big|^{k(k+1)}\, dx \ll_\varepsilon N^{k(k+1) + \varepsilon}.$$
As the integrand is periodic along the lattice $N\mathbb{Z} \times N^2 \mathbb{Z} \times \cdots \times N^k \mathbb{Z}$, this is equivalent to

$$\int_{B(0, N^k)} \Big| \sum_{j=1}^N e( x \cdot \gamma(j/N) ) \Big|^{k(k+1)}\, dx \ll_\varepsilon N^{k(k+1) + \frac{k(k-1)}{2} + \varepsilon}.$$
The left-hand side may be bounded by $\ll \| \sum_{i=1}^N f_i \|_{L^{k(k+1)}(w_B)}^{k(k+1)}$, where $B := B(0, N^k)$ and $f_i(x) := e( x \cdot \gamma(i/N) )$ (the Fourier transform of a unit point mass on the arc $\gamma(I_i)$). Since

$$\| f_i \|_{L^{k(k+1)}(w_B)} \ll ( N^{k^2} )^{\frac{1}{k(k+1)}},$$
the claim now follows from the decoupling theorem and a brief calculation.

Using the Plancherel formula, one may equivalently (when $s$ is an integer) write the Vinogradov main conjecture in terms of the number of solutions to the system of equations

$$j_1^l + \cdots + j_s^l = j_{s+1}^l + \cdots + j_{2s}^l \quad (l = 1, \dots, k)$$

with $j_1, \dots, j_{2s} \in \{1, \dots, N\}$,
but we will not use this formulation here.
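To make the counting formulation concrete, here is a small brute-force computation (an illustration I am adding, with toy parameter choices; not from the original post). It counts solutions of the Vinogradov system in the first nontrivial critical case $k = 2$, $s = k(k+1)/2 = 3$, where the conjectured bound is $O(N^{3+\varepsilon})$:

```python
from collections import Counter
from itertools import product

def vinogradov_count(N, s, k):
    """Brute-force count of solutions to
       j_1^l + ... + j_s^l = j_{s+1}^l + ... + j_{2s}^l  for l = 1..k,
    with all variables in {1, ..., N}.  Each s-tuple is keyed by its vector
    of power sums; the solution count is the sum of squared multiplicities."""
    power_sums = Counter(
        tuple(sum(j ** l for j in t) for l in range(1, k + 1))
        for t in product(range(1, N + 1), repeat=s)
    )
    return sum(m * m for m in power_sums.values())

N, s, k = 12, 3, 2          # critical case s = k(k+1)/2 = 3
J = vinogradov_count(N, s, k)
print(J)                    # the main conjecture predicts J = O(N^{3 + eps}) here
```

The diagonal solutions alone already contribute $\gg N^3$ (take the second triple to be a permutation of the first), so the conjectured exponent $3$ is sharp in this case.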

A history of the Vinogradov main conjecture may be found in this survey of Wooley; prior to the Bourgain-Demeter-Guth theorem, the conjecture was solved completely for $k \leq 3$, or for $k > 3$ and $s$ either well below or well above the critical value $\frac{k(k+1)}{2}$, with the bulk of recent progress coming from the *efficient congruencing* technique of Wooley. It has numerous applications to exponential sums, Waring’s problem, and the zeta function; to give just one application, the main conjecture implies the predicted asymptotic for the number of ways to express a large number as the sum of a given number of fifth powers, with fewer fifth powers needed than in the previous best result. The Bourgain-Demeter-Guth approach to the Vinogradov main conjecture, based on decoupling, is ostensibly very different from the efficient congruencing technique, which relies heavily on the arithmetic structure of the problem, but it appears (as I have been told from second-hand sources) that the two methods are actually closely related, with the former being a sort of “Archimedean” version of the latter (with the intervals $I_i$ in the decoupling theorem being analogous to congruence classes in the efficient congruencing method); hopefully there will be some future work making this connection more precise. One advantage of the decoupling approach is that it generalises to non-arithmetic settings in which the set $\{1, \dots, N\}$ that $j$ is drawn from is replaced by some other similarly separated set of real numbers. (A random thought – could this allow the Vinogradov-Korobov bounds on the zeta function to extend to Beurling zeta functions?)

Below the fold we sketch the Bourgain-Demeter-Guth argument proving Theorem 1.

I thank Jean Bourgain and Andrew Granville for helpful discussions.

One of my favourite unsolved problems in harmonic analysis is the *restriction problem*. This problem, first posed explicitly by Elias Stein, can take many equivalent forms, but one of them is this: one starts with a smooth compact hypersurface $S$ (possibly with boundary) in $\mathbb{R}^n$, such as the unit sphere $S^2$ in $\mathbb{R}^3$, and equips it with surface measure $d\sigma$. One then takes a bounded measurable function $f \in L^\infty(S, d\sigma)$ on this surface, and then computes the (inverse) Fourier transform

$$\widehat{f d\sigma}(x) := \int_S e^{2\pi i x \cdot \xi} f(\xi)\, d\sigma(\xi)$$

of the measure $f d\sigma$. As $f$ is bounded and $d\sigma$ is a finite measure, this is a bounded function on $\mathbb{R}^n$; from the dominated convergence theorem, it is also continuous. The restriction problem asks whether this Fourier transform also decays in space, and specifically whether $\widehat{f d\sigma}$ lies in $L^q(\mathbb{R}^n)$ for some $q < \infty$. (This is a natural space to control decay because it is translation invariant, which is compatible on the frequency space side with the modulation invariance of $L^\infty(S, d\sigma)$.) By the closed graph theorem, this is the case if and only if there is an estimate of the form

$$\| \widehat{f d\sigma} \|_{L^q(\mathbb{R}^n)} \leq C_{q,S} \| f \|_{L^\infty(S, d\sigma)} \qquad (1)$$

for some constant $C_{q,S}$ that can depend on $q, S$ but not on $f$. By a limiting argument, to provide such an estimate, it suffices to prove such an estimate under the additional assumption that $f$ is smooth.

Strictly speaking, the above problem should be called the *extension problem*, but it is dual to the original formulation of the *restriction problem*, which asks to find those exponents $1 \leq p \leq 2$ for which the Fourier transform of an $L^p(\mathbb{R}^n)$ function can be meaningfully restricted to a hypersurface $S$, in the sense that the map $f \mapsto \hat{f}|_S$ can be continuously defined from $L^p(\mathbb{R}^n)$ to, say, $L^1(S, d\sigma)$. A duality argument shows that the exponents $p$ for which the restriction property holds are the dual exponents to the exponents $q$ for which the extension estimate (1) holds.

There are several motivations for studying the restriction problem. The problem is connected to the classical question of determining the nature of the convergence of various Fourier summation methods (and specifically, Bochner-Riesz summation); very roughly speaking, if one wishes to perform a partial Fourier transform by restricting the frequencies (possibly using a well-chosen weight) to some region $\Omega$ (such as a ball), then one expects this operation to be well behaved if the boundary $\partial\Omega$ of this region has good restriction (or extension) properties. More generally, the restriction problem for a surface $S$ is connected to the behaviour of Fourier multipliers whose symbols are singular at $S$. The problem is also connected to the analysis of various linear PDE such as the Helmholtz equation, Schrödinger equation, wave equation, and the (linearised) Korteweg-de Vries equation, because solutions to such equations can be expressed via the Fourier transform in the form $\widehat{f d\sigma}$ for various surfaces $S$ (the sphere, paraboloid, light cone, and cubic curve for the Helmholtz, Schrödinger, wave, and linearised Korteweg-de Vries equations respectively). A particular family of restriction-type theorems for such surfaces, known as *Strichartz estimates*, plays a foundational role in the nonlinear perturbations of these linear equations (e.g. the nonlinear Schrödinger equation, the nonlinear wave equation, and the Korteweg-de Vries equation). Last, but not least, there is a fundamental connection between the restriction problem and the Kakeya problem, which roughly speaking concerns how tubes that point in different directions can overlap. Indeed, by superimposing special functions of the type $\widehat{f d\sigma}$, known as *wave packets*, which are concentrated on tubes in various directions, one can “encode” the Kakeya problem inside the restriction problem; in particular, the conjectured solution to the restriction problem implies the conjectured solution to the Kakeya problem. 
Finally, the restriction problem serves as a simplified toy model for studying discrete exponential sums whose coefficients do not have a well controlled phase; this perspective was, for instance, used by Ben Green when he established Roth’s theorem in the primes by Fourier-analytic methods, which was in turn one of the main inspirations for our later work establishing arbitrarily long progressions in the primes, although we ended up using ergodic-theoretic arguments instead of Fourier-analytic ones and so did not directly use restriction theory in that paper.

The estimate (1) is trivial for $q = \infty$ and becomes harder as $q$ decreases. The geometry, and more precisely the *curvature*, of the surface $S$ plays a key role: if $S$ contains a portion which is completely flat, then it is not difficult to concoct an $f$ for which $\widehat{f d\sigma}$ fails to decay in the normal direction to this flat portion, and so there are no restriction estimates for any finite exponent $q$. Conversely, if $S$ is not infinitely flat at any point, then from the method of stationary phase, the Fourier transform $\widehat{d\sigma}$ can be shown to decay at a power rate at infinity, and this together with a standard method known as the $TT^*$ argument can be used to give non-trivial restriction estimates for finite $q$. However, these arguments fall somewhat short of obtaining the best possible exponents $q$. For instance, in the case of the sphere $S = S^{n-1}$, the Fourier transform $\widehat{d\sigma}(x)$ is known to decay at the rate $|x|^{-(n-1)/2}$ and no better as $|x| \to \infty$, which shows that the condition $q > \frac{2n}{n-1}$ is necessary in order for (1) to hold for this surface. The *restriction conjecture for $S^{n-1}$* asserts that this necessary condition is also sufficient. However, the $TT^*$-based argument gives only the Tomas-Stein theorem, which in this context gives (1) in the weaker range $q \geq \frac{2(n+1)}{n-1}$. (On the other hand, by the nature of the method, the Tomas-Stein theorem does allow the $L^\infty$ norm on the right-hand side to be relaxed to $L^2$, at which point the Tomas-Stein exponent $\frac{2(n+1)}{n-1}$ becomes best possible. The fact that the Tomas-Stein theorem has an $L^2$ norm on the right-hand side is particularly valuable for applications to PDE, leading in particular to the Strichartz estimates mentioned earlier.)
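The stationary phase decay for the sphere can be checked numerically. The following sketch (my own illustration, not from the post) uses rotational symmetry to reduce $\widehat{d\sigma}(x)$ for the unit sphere $S^2 \subset \mathbb{R}^3$ to a one-dimensional integral, and compares it against the known closed form $2\sin(2\pi r)/r$, exhibiting the decay rate $|x|^{-(n-1)/2} = |x|^{-1}$:

```python
import math

def sphere_ft(r, steps=100_000):
    """Fourier transform of surface measure on the unit sphere S^2 in R^3,
    evaluated at a point of radius r.  By rotational symmetry this reduces
    to 2*pi * int_{-1}^{1} cos(2*pi*r*u) du, computed by the midpoint rule."""
    h = 2.0 / steps
    total = sum(math.cos(2 * math.pi * r * (-1 + (i + 0.5) * h))
                for i in range(steps))
    return 2 * math.pi * total * h

for r in (1.3, 5.2, 20.8):
    # exact value is 2*sin(2*pi*r)/r, so the magnitude decays like 1/r
    print(r, sphere_ft(r), 2 * math.sin(2 * math.pi * r) / r)
```

Note that the decay is only a power rate, not exponential: the sphere's curvature spreads the mass of $d\sigma$ over all frequencies, which is exactly what restriction theory quantifies.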

Over the last two decades, there was a fair amount of work in pushing past the Tomas-Stein barrier. For sake of concreteness let us work just with the restriction problem for the unit sphere $S^2$ in $\mathbb{R}^3$. Here, the restriction conjecture asserts that (1) holds for all $q > 3$, while the Tomas-Stein theorem gives only $q \geq 4$. By combining a multiscale analysis approach with some new progress on the Kakeya conjecture, Bourgain was able to obtain the first improvement on this range, establishing the restriction conjecture for $q > 4 - \frac{2}{15}$. The methods were steadily refined over the years; until recently, the best result (due to myself) was that the conjecture held for all $q > 10/3$, which proceeded by analysing a “bilinear $L^2$” variant of the problem studied previously by Bourgain and by Wolff. This is essentially the limit of that method; the relevant bilinear estimate fails for $q < 10/3$. (This estimate was recently established at the endpoint $q = 10/3$ by Jungjin Lee (personal communication), though this does not quite improve the range of exponents in (1) due to a logarithmic inefficiency in converting the bilinear estimate to a linear one.)

On the other hand, the full range of exponents $q > 3$ in (1) was obtained by Bennett, Carbery, and myself (with an alternate proof later given by Guth), but only under the additional assumption of *non-coplanar interactions*. In three dimensions, this assumption was enforced by replacing (1) with the weaker trilinear (and localised) variant

$$\| \widehat{f_1 d\sigma_1}\, \widehat{f_2 d\sigma_2}\, \widehat{f_3 d\sigma_3} \|_{L^{q/3}(B(0,R))} \leq C_{q, S_1, S_2, S_3, \varepsilon} R^\varepsilon \| f_1 \|_{L^\infty} \| f_2 \|_{L^\infty} \| f_3 \|_{L^\infty} \qquad (2)$$

where $\varepsilon > 0$ and $R \geq 1$ are arbitrary, $B(0,R)$ is the ball of radius $R$ in $\mathbb{R}^3$, and $S_1, S_2, S_3$ are compact portions of $S$ whose unit normals $n_1(\cdot), n_2(\cdot), n_3(\cdot)$ are never coplanar, thus there is a uniform lower bound

$$| n_1(\xi_1) \wedge n_2(\xi_2) \wedge n_3(\xi_3) | \geq c$$

for some $c > 0$ and all $\xi_1 \in S_1$, $\xi_2 \in S_2$, $\xi_3 \in S_3$. If it were not for this non-coplanarity restriction, (2) would be equivalent to (1) (by setting $f_1 = f_2 = f_3$ and $S_1 = S_2 = S_3$, with the converse implication coming from Hölder’s inequality; the $R^\varepsilon$ loss can be removed by a lemma from a paper of mine). At the time we wrote this paper, we tried fairly hard to remove this non-coplanarity restriction in order to recover progress on the original restriction conjecture, but without much success.

A few weeks ago, though, Bourgain and Guth found a new way to use multiscale analysis to “interpolate” between the result of Bennett, Carbery, and myself (which has optimal exponents, but requires non-coplanar interactions), and a more classical square function estimate of Córdoba that handles the coplanar case. A direct application of this interpolation method already ties with the previous best known result in three dimensions (i.e. that (1) holds for $q > 10/3$). But it also allows for the insertion of additional input, such as the best Kakeya estimate currently known in three dimensions, due to Wolff. This enlarges the range slightly to $q > 3.3$. The method can also extend to variable-coefficient settings, and in some of these cases (where there is so much “compression” going on that no additional Kakeya estimates are available) the estimates obtained are best possible.

As is often the case in this field, there is a lot of technical book-keeping and juggling of parameters in the formal arguments of Bourgain and Guth, but the main ideas and “numerology” can be expressed fairly readily. (In mathematics, *numerology* refers to the empirically observed relationships between various key exponents and other numerical parameters; in many cases, one can use shortcuts such as dimensional analysis or informal heuristics to compute these exponents long before the formal argument is completely in place.) Below the fold, I would like to record this numerology for the simplest of the Bourgain-Guth arguments, namely a reproof of (1) for $q > 10/3$. This is primarily for my own benefit, but may be of interest to other experts in this particular topic. (See also my 2003 lecture notes on the restriction conjecture.)

In order to focus on the ideas in the paper (rather than on the technical details), I will adopt an informal, heuristic approach, for instance by interpreting the uncertainty principle and the pigeonhole principle rather liberally, and by focusing on main terms in a decomposition and ignoring secondary terms. I will also be somewhat vague with regard to asymptotic notation such as $\lesssim$ or $\sim$. Making the arguments rigorous requires a certain amount of standard but tedious effort (and is one of the main reasons why the Bourgain-Guth paper is as long as it is), which I will not focus on here.

Combinatorial incidence geometry is the study of the possible combinatorial configurations between geometric objects such as lines and circles. One of the basic open problems in the subject has been the Erdős distance problem, posed in 1946:

Problem 1 (Erdős distance problem) Let $N$ be a large natural number. What is the least number $g(N)$ of distances that are determined by $N$ points in the plane?

Erdős called this least number $g(N)$. For instance, one can check that $g(3) = 1$ and $g(4) = 2$, although the precise computation of $g(N)$ rapidly becomes more difficult after this. By considering $N$ points in arithmetic progression, we see that $g(N) \leq N - 1$. By considering the slightly more sophisticated example of a lattice grid $\{1, \dots, \sqrt{N}\}^2$ (assuming that $N$ is a square number for simplicity), and using some analytic number theory, one can obtain the slightly better asymptotic bound $g(N) = O(N / \sqrt{\log N})$.
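The two upper-bound examples can be compared directly by brute force (a small illustration of my own, not from the post; squared distances are kept as exact integers to avoid floating point):

```python
from itertools import combinations

def distinct_sq_distances(points):
    """Number of distinct squared distances among a set of integer points."""
    return len({(p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                for p, q in combinations(points, 2)})

m = 20                                    # grid side, so N = m*m points
grid = [(x, y) for x in range(m) for y in range(m)]
line = [(j, 0) for j in range(m * m)]     # N collinear points in progression

print(distinct_sq_distances(line))   # exactly N - 1 distances
print(distinct_sq_distances(grid))   # far fewer: sums of two small squares
```

The grid does better than the progression because many pairs $(a, b)$ share the same value of $a^2 + b^2$; the analytic number theory alluded to above (Landau-Ramanujan type counting of sums of two squares) turns this coincidence into the $N/\sqrt{\log N}$ bound.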

On the other hand, lower bounds are more difficult to obtain. As observed by Erdős, an easy argument, ultimately based on the incidence geometry fact that any two circles intersect in at most two points, gives the lower bound $g(N) \gg N^{1/2}$. The exponent $1/2$ has been slowly increased over the years by a series of increasingly intricate arguments, combining this incidence geometry fact with other known results in combinatorial incidence geometry (most notably the Szemerédi-Trotter theorem) and also some tools from additive combinatorics; however, these methods seemed to fall quite short of getting to the optimal exponent of $1$. (Indeed, prior to last week, the best lower bound known was approximately $g(N) \gg N^{0.8641}$, due to Katz and Tardos.)

Very recently, though, Guth and Katz have obtained a near-optimal result:

Theorem 2 (Guth-Katz theorem) One has $g(N) \gg N / \log N$.

The proof neatly combines together several powerful and modern tools in a new way: a recent geometric reformulation of the problem due to Elekes and Sharir; the polynomial method as used recently by Dvir, Guth, and Guth-Katz on related incidence geometry problems (and discussed previously on this blog); and the somewhat older method of cell decomposition (also discussed on this blog). A key new insight is that the polynomial method (and more specifically, the *polynomial Ham Sandwich theorem*, also discussed previously on this blog) can be used to efficiently create cells.

In this post, I thought I would sketch some of the key ideas used in the proof, though I will not give the full argument here (the paper itself is largely self-contained, well motivated, and of only moderate length). In particular I will not go through all the various cases of configuration types that one has to deal with in the full argument, but only some illustrative special cases.

To simplify the exposition, I will repeatedly rely on “pigeonholing cheats”. A typical such cheat: if I have $n$ objects (e.g. points or lines), each of which could be of one of two types, I will assume that either all $n$ of the objects are of the first type, or all $n$ of the objects are of the second type. (In truth, I can only assume that at least $n/2$ of the objects are of the first type, or at least $n/2$ of the objects are of the second type; but in practice, having $n/2$ instead of $n$ only ends up costing an unimportant multiplicative constant in the type of estimates used here.) A related such cheat: if one has $n$ objects (again, think of points or circles), and to each object $x$ one can associate some natural number $k_x$ (e.g. some sort of “multiplicity” for $x$) that is of “polynomial size” (of size $O(n^{O(1)})$), then I will assume in fact that all the $k_x$ lie in a fixed dyadic range $[k, 2k]$ for some $k$. (In practice, the dyadic pigeonhole principle can only achieve this after throwing away all but about $1/\log n$ of the original objects; it is this type of logarithmic loss that eventually leads to the logarithmic factor in the main theorem.) Using the notation $X \sim Y$ to denote the assertion that $C^{-1} Y \leq X \leq C Y$ for an absolute constant $C$, we thus have $k_x \sim k$ for all $x$, thus $k_x$ is morally constant.
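The dyadic pigeonhole cheat is easy to demonstrate concretely (a toy illustration I am adding, with made-up "multiplicities"): numbers of polynomial size fall into only $O(\log n)$ dyadic ranges $[2^{b-1}, 2^b)$, so one range must capture at least $n / O(\log n)$ of them:

```python
import random

random.seed(1)
n = 10_000
# hypothetical multiplicities of polynomial size (at most n^2)
vals = [random.randint(1, n ** 2) for _ in range(n)]

buckets = {}
for v in vals:
    # v lies in the dyadic range [2^(b-1), 2^b) where b = v.bit_length()
    buckets.setdefault(v.bit_length(), []).append(v)

num_ranges = (n ** 2).bit_length()   # only O(log n) possible dyadic ranges
largest = max(buckets.values(), key=len)
print(len(buckets), len(largest))    # some range holds >= n / num_ranges values
```

Passing to the largest bucket throws away all but a $1/O(\log n)$ fraction of the objects, which is exactly the logarithmic loss mentioned above.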

I will also use asymptotic notation rather loosely, to avoid cluttering the exposition with a certain amount of routine but tedious bookkeeping of constants. In particular, I will use the informal notation $X \lll Y$ or $X \ggg Y$ to denote the statement that $X$ is “much less than” $Y$ or $X$ is “much larger than” $Y$, by some large constant factor.

See also Janos Pach’s recent reaction to the Guth-Katz paper on Kalai’s blog.

One of my favourite family of conjectures (and one that has preoccupied a significant fraction of my own research) is the family of Kakeya conjectures in geometric measure theory and harmonic analysis. There are many (not quite equivalent) conjectures in this family. The cleanest one to state is the set conjecture:

Kakeya set conjecture: Let $n \geq 2$, and let $E \subset \mathbb{R}^n$ contain a unit line segment in every direction (such sets are known as *Kakeya sets* or *Besicovitch sets*). Then $E$ has Hausdorff dimension and Minkowski dimension equal to $n$.

One reason why I find these conjectures fascinating is the sheer variety of mathematical fields that arise both in the partial results towards this conjecture, and in the applications of those results to other problems. See for instance this survey of Wolff, my Notices article and this article of Łaba on the connections between this problem and other problems in Fourier analysis, PDE, and additive combinatorics; there have even been some connections to number theory and to cryptography. At the other end of the pipeline, the mathematical tools that have gone *into* the proofs of various partial results have included:

- Maximal function and covering lemma methods (Cordoba, Strömberg, Cordoba-Fefferman);
- Fourier analysis (Nagel-Stein-Wainger);
- Multilinear integration (Drury, Christ);
- Paraproducts (Katz);
- Combinatorial incidence geometry (Bourgain, Wolff);
- Multi-scale analysis (Barrionuevo, Katz-Łaba-Tao, Łaba-Tao, Alfonseca-Soria-Vargas);
- Probabilistic constructions (Bateman-Katz, Bateman);
- Additive combinatorics and graph theory (Bourgain, Katz-Łaba-Tao, Katz-Tao, Katz-Tao);
- Sum-product theorems (Bourgain-Katz-Tao);
- Bilinear estimates (Tao-Vargas-Vega);
- Perron trees (Perron, Schoenberg, Keich);
- Group theory (Katz);
- Low-degree algebraic geometry (Schlag, Tao, Mockenhaupt-Tao);
- High-degree algebraic geometry (Dvir, Saraf-Sudan);
- Heat flow monotonicity formulae (Bennett-Carbery-Tao)

[This list is not exhaustive.]

Very recently, I was pleasantly surprised to see yet another mathematical tool used to obtain new progress on the Kakeya conjecture, namely (a generalisation of) the famous Ham Sandwich theorem from algebraic topology. This was recently used by Guth to establish a certain endpoint multilinear Kakeya estimate left open by the work of Bennett, Carbery, and myself. With regards to the Kakeya set conjecture, Guth’s arguments assert, roughly speaking, that the only Kakeya sets that can fail to have full dimension are those which obey a certain “planiness” property, which informally means that the line segments that pass through a typical point in the set must be essentially coplanar. (This property first surfaced in my paper with Katz and Łaba.) Guth’s arguments can be viewed as a partial analogue of Dvir’s arguments in the finite field setting (which I discussed in this blog post) to the Euclidean setting; in particular, both arguments rely crucially on the ability to create a polynomial of controlled degree that vanishes at or near a large number of points. Unfortunately, while these arguments fully settle the Kakeya conjecture in the finite field setting, it appears that some new ideas are still needed to finish off the problem in the Euclidean setting. Nevertheless this is an interesting new development in the long history of this conjecture, in particular demonstrating that the polynomial method can be successfully applied to continuous Euclidean problems (i.e. it is not confined to the finite field setting).
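The linear-algebra step common to Dvir's and Guth's arguments — producing a nonzero polynomial of controlled degree vanishing on a given finite set — can be sketched in a few lines of exact arithmetic (an illustration of my own, not code from either paper): the space of polynomials of degree at most $d$ in $n$ variables has dimension $\binom{d+n}{n}$, so whenever this exceeds the number of points, the evaluation map has a nontrivial kernel.

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def monomials(n_vars, max_deg):
    """All exponent tuples with total degree <= max_deg."""
    mons = []
    for deg in range(max_deg + 1):
        for combo in combinations_with_replacement(range(n_vars), deg):
            e = [0] * n_vars
            for v in combo:
                e[v] += 1
            mons.append(tuple(e))
    return mons

def evaluate(exponents, point):
    """Evaluate the monomial with the given exponent tuple at a point."""
    val = Fraction(1)
    for x, e in zip(point, exponents):
        val *= Fraction(x) ** e
    return val

def vanishing_poly(points, max_deg):
    """Nonzero polynomial of degree <= max_deg vanishing at every point,
    returned as (exponent tuple, coefficient) pairs.  One exists whenever
    len(points) < number of monomials; found by exact Gaussian elimination."""
    mons = monomials(len(points[0]), max_deg)
    rows = [[evaluate(e, p) for e in mons] for p in points]
    ncols, pivots, r = len(mons), [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        pivots.append((r, c))
        r += 1
        if r == len(rows):
            break
    pivot_cols = {c for _, c in pivots}
    free = next(c for c in range(ncols) if c not in pivot_cols)
    coeffs = [Fraction(0)] * ncols
    coeffs[free] = Fraction(1)     # set one free variable to 1 ...
    for pr, pc in pivots:
        coeffs[pc] = -rows[pr][free]   # ... and back-substitute the pivots
    return [(m, c) for m, c in zip(mons, coeffs) if c != 0]

# 9 grid points in the plane, but 10 monomials of degree <= 3, so a
# nonzero vanishing cubic must exist (e.g. x(x-1)(x-2) works here).
pts = [(x, y) for x in range(3) for y in range(3)]
poly = vanishing_poly(pts, 3)
print(poly)
```

In the finite field and Euclidean arguments alike, everything then hinges on playing this low degree off against the many lines the Kakeya set must contain.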

In this post I would like to sketch some of the key ideas in Guth’s paper, in particular the role of the Ham Sandwich theorem (or more precisely, a polynomial generalisation of this theorem first observed by Gromov).
