
The square root cancellation heuristic, briefly mentioned in the preceding set of notes, predicts that if a collection ${z_1,\dots,z_n}$ of complex numbers has phases that are sufficiently “independent” of each other, then

$\displaystyle |\sum_{j=1}^n z_j| \approx (\sum_{j=1}^n |z_j|^2)^{1/2};$

similarly, if ${f_1,\dots,f_n}$ are a collection of functions in a Lebesgue space ${L^p(X,\mu)}$ that oscillate “independently” of each other, then we expect

$\displaystyle \| \sum_{j=1}^n f_j \|_{L^p(X,\mu)} \approx \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p(X,\mu)}.$
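(As a quick numerical illustration of the heuristic, not part of the original notes: the snippet below, which assumes the numpy library, randomises the phases of some fixed complex numbers with independent signs and compares the expected magnitude of the sum against the square root cancellation prediction.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed complex numbers z_1,...,z_n with arbitrary phases.
n = 1000
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Randomise the phases with independent signs eps_j = +-1 and estimate
# E |sum_j eps_j z_j| by Monte Carlo.
trials = 4000
eps = rng.choice([-1.0, 1.0], size=(trials, n))
lhs = np.abs(eps @ z).mean()

# The square root cancellation prediction (sum_j |z_j|^2)^{1/2}.
rhs = np.sqrt(np.sum(np.abs(z) ** 2))

# Khintchine's inequality guarantees that this ratio is bounded above and
# below by absolute constants (independent of n and of the z_j).
print(lhs / rhs)
```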

We have already seen one instance in which this heuristic can be made precise, namely when the phases of ${z_j,f_j}$ are randomised by a random sign, so that Khintchine’s inequality (Lemma 4 from Notes 1) can be applied. There are other contexts in which a square function estimate

$\displaystyle \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p(X,\mu)} \lesssim \| \sum_{j=1}^n f_j \|_{L^p(X,\mu)}$

or a reverse square function estimate

$\displaystyle \| \sum_{j=1}^n f_j \|_{L^p(X,\mu)} \lesssim \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p(X,\mu)}$

(or both) are known or conjectured to hold. For instance, the useful Littlewood-Paley inequality implies (among other things) that for any ${1 < p < \infty}$, we have the reverse square function estimate

$\displaystyle \| \sum_{j=1}^n f_j \|_{L^p({\bf R}^d)} \lesssim_{p,d} \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p({\bf R}^d)}, \ \ \ \ \ (1)$

whenever the Fourier transforms ${\hat f_j}$ of the ${f_j}$ are supported on disjoint annuli ${\{ \xi \in {\bf R}^d: 2^{k_j} \leq |\xi| < 2^{k_j+1} \}}$, and we also have the matching square function estimate

$\displaystyle \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p({\bf R}^d)} \lesssim_{p,d} \| \sum_{j=1}^n f_j \|_{L^p({\bf R}^d)}$

if there is some separation between the annuli (for instance if the ${k_j}$ are ${2}$-separated). We recall the proofs of these facts below the fold. In the ${p=2}$ case, we of course have Pythagoras’ theorem, which tells us that if the ${f_j}$ are all orthogonal elements of ${L^2(X,\mu)}$, then

$\displaystyle \| \sum_{j=1}^n f_j \|_{L^2(X,\mu)} = (\sum_{j=1}^n \| f_j \|_{L^2(X,\mu)}^2)^{1/2} = \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^2(X,\mu)}.$

In particular, this identity holds if the ${f_j \in L^2({\bf R}^d)}$ have disjoint Fourier supports in the sense that their Fourier transforms ${\hat f_j}$ are supported on disjoint sets. For ${p=4}$, the technique of bi-orthogonality can also give square function and reverse square function estimates in some cases, as we shall also see below the fold.
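(The disjoint Fourier support case of this identity is easy to witness numerically in the discrete model; the toy check below, assuming numpy, splits a vector into pieces with disjoint discrete Fourier supports and verifies the Pythagoras identity.)

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
f = rng.standard_normal(N)

# Decompose f = f_1 + ... + f_8, where f_j is the part of f whose discrete
# Fourier transform is supported on the j-th block of 32 frequencies; the
# blocks are disjoint, so the pieces are orthogonal by Plancherel.
fhat = np.fft.fft(f)
pieces = []
for j in range(8):
    mask = np.zeros(N)
    mask[32 * j : 32 * (j + 1)] = 1.0
    pieces.append(np.fft.ifft(fhat * mask))

lhs = np.linalg.norm(sum(pieces))                           # || sum_j f_j ||
rhs = np.sqrt(sum(np.linalg.norm(p) ** 2 for p in pieces))  # (sum_j ||f_j||^2)^{1/2}
print(lhs, rhs)  # equal up to rounding
```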
In recent years, it has begun to be realised that in the regime ${p > 2}$, a variant of reverse square function estimates such as (1) is also useful, namely decoupling estimates such as

$\displaystyle \| \sum_{j=1}^n f_j \|_{L^p({\bf R}^d)} \lesssim_{p,d} (\sum_{j=1}^n \|f_j\|_{L^p({\bf R}^d)}^2)^{1/2} \ \ \ \ \ (2)$

(actually in practice we often permit small losses such as ${n^\varepsilon}$ on the right-hand side). An estimate such as (2) is weaker than (1) when ${p\geq 2}$ (or equal when ${p=2}$), as can be seen by starting with the triangle inequality

$\displaystyle \| \sum_{j=1}^n |f_j|^2 \|_{L^{p/2}({\bf R}^d)} \leq \sum_{j=1}^n \| |f_j|^2 \|_{L^{p/2}({\bf R}^d)},$

and taking the square root of both sides to conclude that

$\displaystyle \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p({\bf R}^d)} \leq (\sum_{j=1}^n \|f_j\|_{L^p({\bf R}^d)}^2)^{1/2}. \ \ \ \ \ (3)$
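(As a sanity check of this last inequality, here is a small numerical experiment on a discrete measure space, numpy assumed; the inequality is just Minkowski's inequality in ${L^{p/2}}$, so it holds exactly.)

```python
import numpy as np

rng = np.random.default_rng(2)
p = 4.0          # any exponent p >= 2 works here
n, npts = 10, 1000

# Random "functions" f_1,...,f_n sampled on a common grid, with counting
# measure playing the role of the underlying measure.
f = rng.standard_normal((n, npts))

def lp_norm(g, p):
    return np.sum(np.abs(g) ** p) ** (1.0 / p)

lhs = lp_norm(np.sqrt(np.sum(f ** 2, axis=0)), p)            # || (sum_j |f_j|^2)^{1/2} ||_p
rhs = np.sqrt(sum(lp_norm(f[j], p) ** 2 for j in range(n)))  # (sum_j ||f_j||_p^2)^{1/2}
print(lhs <= rhs)  # True, by Minkowski's inequality in L^{p/2}
```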

However, the flip side of this weakness is that (2) can be easier to prove. One key reason for this is the ability to iterate decoupling estimates such as (2), in a way that does not seem to be possible with reverse square function estimates such as (1). For instance, suppose that one has a decoupling inequality such as (2), and furthermore each ${f_j}$ can be split further into components ${f_j= \sum_{k=1}^m f_{j,k}}$ for which one has the decoupling inequalities

$\displaystyle \| \sum_{k=1}^m f_{j,k} \|_{L^p({\bf R}^d)} \lesssim_{p,d} (\sum_{k=1}^m \|f_{j,k}\|_{L^p({\bf R}^d)}^2)^{1/2}.$

Then by inserting these bounds back into (2) we see that we have the combined decoupling inequality

$\displaystyle \| \sum_{j=1}^n\sum_{k=1}^m f_{j,k} \|_{L^p({\bf R}^d)} \lesssim_{p,d} (\sum_{j=1}^n \sum_{k=1}^m \|f_{j,k}\|_{L^p({\bf R}^d)}^2)^{1/2}.$

This iterative feature of decoupling inequalities means that such inequalities work well with the method of induction on scales that we introduced in the previous set of notes.
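(The bookkeeping behind this iteration is just the nesting property of ${\ell^2}$ norms; the illustrative snippet below, numpy assumed, checks that the two-scale ${\ell^2}$ aggregation collapses to a single ${\ell^2}$ sum, so that the two decoupling constants simply multiply.)

```python
import numpy as np

rng = np.random.default_rng(3)

# a[j, k] stands for the norm ||f_{j,k}||_{L^p}.
a = np.abs(rng.standard_normal((5, 7)))

# First decouple in k (constant D2), then in j (constant D1): the right-hand
# sides combine via the l^2 norm of the per-j l^2 norms...
inner = np.sqrt(np.sum(a ** 2, axis=1))   # (sum_k ||f_{j,k}||^2)^{1/2} for each j
nested = np.sqrt(np.sum(inner ** 2))
# ...which equals a single l^2 sum over all pairs (j, k), so the combined
# decoupling constant is D1 * D2.
flat = np.sqrt(np.sum(a ** 2))
print(nested, flat)  # equal up to rounding
```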
In fact, decoupling estimates share many features in common with restriction theorems; in addition to induction on scales, there are several other techniques that first emerged in the restriction theory literature, such as wave packet decompositions, rescaling, and bilinear or multilinear reductions, that turned out to also be well suited to proving decoupling estimates. As with restriction, the curvature or transversality of the different Fourier supports of the ${f_j}$ will be crucial in obtaining non-trivial estimates.
Strikingly, in many important model cases, the optimal decoupling inequalities (except possibly for epsilon losses in the exponents) are now known. These estimates have in turn had a number of important applications, such as establishing certain discrete analogues of the restriction conjecture, or the first proof of the main conjecture for Vinogradov mean value theorems in analytic number theory.
These notes only serve as a brief introduction to decoupling. A systematic exploration of this topic can be found in this recent text of Demeter.

This set of notes focuses on the restriction problem in Fourier analysis. Introduced by Elias Stein in the 1970s, the restriction problem is a key model problem for understanding more general oscillatory integral operators, and it has turned out to be connected to many questions in geometric measure theory, harmonic analysis, combinatorics, number theory, and PDE. Only partial results on the problem are known, but these partial results have already proven to be very useful or influential in many applications.
We work in a Euclidean space ${{\bf R}^d}$. Recall that ${L^p({\bf R}^d)}$ is the space of ${p^{th}}$-power integrable functions ${f: {\bf R}^d \rightarrow {\bf C}}$, quotiented out by almost everywhere equivalence, with the usual modifications when ${p=\infty}$. If ${f \in L^1({\bf R}^d)}$ then the Fourier transform ${\hat f: {\bf R}^d \rightarrow {\bf C}}$ will be defined in this course by the formula

$\displaystyle \hat f(\xi) := \int_{{\bf R}^d} f(x) e^{-2\pi i x \cdot \xi}\ dx. \ \ \ \ \ (1)$

From the dominated convergence theorem we see that ${\hat f}$ is a continuous function; from the Riemann-Lebesgue lemma we see that it goes to zero at infinity. Thus ${\hat f}$ lies in the space ${C_0({\bf R}^d)}$ of continuous functions that go to zero at infinity, which is a subspace of ${L^\infty({\bf R}^d)}$. Indeed, from the triangle inequality it is obvious that

$\displaystyle \|\hat f\|_{L^\infty({\bf R}^d)} \leq \|f\|_{L^1({\bf R}^d)}. \ \ \ \ \ (2)$

If ${f \in L^1({\bf R}^d) \cap L^2({\bf R}^d)}$, then Plancherel’s theorem tells us that we have the identity

$\displaystyle \|\hat f\|_{L^2({\bf R}^d)} = \|f\|_{L^2({\bf R}^d)}. \ \ \ \ \ (3)$

Because of this, there is a unique way to extend the Fourier transform ${f \mapsto \hat f}$ from ${L^1({\bf R}^d) \cap L^2({\bf R}^d)}$ to ${L^2({\bf R}^d)}$, in such a way that it becomes a unitary map from ${L^2({\bf R}^d)}$ to itself. By abuse of notation we continue to denote this extension of the Fourier transform by ${f \mapsto \hat f}$. Strictly speaking, this extension is no longer defined in a pointwise sense by the formula (1) (indeed, the integral on the RHS ceases to be absolutely integrable once ${f}$ leaves ${L^1({\bf R}^d)}$); we will return to the (surprisingly difficult) question of whether pointwise convergence continues to hold (at least in an almost everywhere sense) later in this course, when we discuss Carleson’s theorem. On the other hand, the formula (1) remains valid in the sense of distributions, and in practice most of the identities and inequalities one can show about the Fourier transform of “nice” functions (e.g., functions in ${L^1({\bf R}^d) \cap L^2({\bf R}^d)}$, or in the Schwartz class ${{\mathcal S}({\bf R}^d)}$, or the test function class ${C^\infty_c({\bf R}^d)}$) can be extended to functions in “rough” function spaces such as ${L^2({\bf R}^d)}$ by standard limiting arguments.
By (2), (3), and the Riesz-Thorin interpolation theorem, we also obtain the Hausdorff-Young inequality

$\displaystyle \|\hat f\|_{L^{p'}({\bf R}^d)} \leq \|f\|_{L^p({\bf R}^d)} \ \ \ \ \ (4)$

for all ${1 \leq p \leq 2}$ and ${f \in L^1({\bf R}^d) \cap L^2({\bf R}^d)}$, where ${2 \leq p' \leq \infty}$ is the dual exponent to ${p}$, defined by the usual formula ${\frac{1}{p} + \frac{1}{p'} = 1}$. (One can improve this inequality by a constant factor, with the optimal constant worked out by Beckner, but the focus in these notes will not be on optimal constants.) As a consequence, the Fourier transform can also be uniquely extended as a continuous linear map from ${L^p({\bf R}^d) \rightarrow L^{p'}({\bf R}^d)}$. (The situation with ${p>2}$ is much worse; see below the fold.)
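(As an aside, the discrete model of this inequality is easy to test numerically: the unitary discrete Fourier transform obeys the same two endpoint bounds (2) and (3), with an even better constant at ${p=1}$, and hence an analogue of (4) by the same interpolation. A quick check with numpy:)

```python
import numpy as np

rng = np.random.default_rng(4)
N = 128
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Unitary DFT, the discrete model of the Fourier transform (1).
xhat = np.fft.fft(x, norm="ortho")

def lp_norm(v, p):
    return np.abs(v).max() if p == np.inf else np.sum(np.abs(v) ** p) ** (1.0 / p)

# Hausdorff-Young: || xhat ||_{p'} <= || x ||_p for 1 <= p <= 2.
for p in (1.0, 1.5, 2.0):
    p_dual = np.inf if p == 1.0 else p / (p - 1.0)
    print(p, lp_norm(xhat, p_dual) <= lp_norm(x, p) + 1e-9)  # True for each p
```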
The restriction problem asks, for a given exponent ${1 \leq p \leq 2}$ and a subset ${S}$ of ${{\bf R}^d}$, whether it is possible to meaningfully restrict the Fourier transform ${\hat f}$ of a function ${f \in L^p({\bf R}^d)}$ to the set ${S}$. If the set ${S}$ has positive Lebesgue measure, then the answer is yes, since ${\hat f}$ lies in ${L^{p'}({\bf R}^d)}$ and therefore has a meaningful restriction to ${S}$ even though functions in ${L^{p'}}$ are only defined up to sets of measure zero. But what if ${S}$ has measure zero? If ${p=1}$, then ${\hat f \in C_0({\bf R}^d)}$ is continuous and therefore can be meaningfully restricted to any set ${S}$. At the other extreme, if ${p=2}$ and ${f}$ is an arbitrary function in ${L^2({\bf R}^d)}$, then by Plancherel’s theorem, ${\hat f}$ is also an arbitrary function in ${L^2({\bf R}^d)}$, and thus has no well-defined restriction to any set ${S}$ of measure zero.
It was observed by Stein (as reported in the Ph.D. thesis of Charlie Fefferman) that for certain measure zero subsets ${S}$ of ${{\bf R}^d}$, such as the sphere ${S^{d-1} := \{ \xi \in {\bf R}^d: |\xi| = 1\}}$, one can obtain meaningful restrictions of the Fourier transforms of functions ${f \in L^p({\bf R}^d)}$ for certain ${p}$ between ${1}$ and ${2}$, thus demonstrating that the Fourier transform of such functions retains more structure than a typical element of ${L^{p'}({\bf R}^d)}$:

Theorem 1 (Preliminary ${L^2}$ restriction theorem) If ${d \geq 2}$ and ${1 \leq p < \frac{4d}{3d+1}}$, then one has the estimate

$\displaystyle \| \hat f \|_{L^2(S^{d-1}, d\sigma)} \lesssim_{d,p} \|f\|_{L^p({\bf R}^d)}$

for all Schwartz functions ${f \in {\mathcal S}({\bf R}^d)}$, where ${d\sigma}$ denotes surface measure on the sphere ${S^{d-1}}$. In particular, the restriction ${\hat f|_{S^{d-1}}}$ can be meaningfully defined by continuous linear extension to an element of ${L^2(S^{d-1},d\sigma)}$.

Proof: Fix ${d,p,f}$. We expand out

$\displaystyle \| \hat f \|_{L^2(S^{d-1}, d\sigma)}^2 = \int_{S^{d-1}} |\hat f(\xi)|^2\ d\sigma(\xi).$

From (1) and Fubini’s theorem, the right-hand side may be expanded as

$\displaystyle \int_{{\bf R}^d} \int_{{\bf R}^d} f(x) \overline{f}(y) (d\sigma)^\vee(y-x)\ dx dy$

where the inverse Fourier transform ${(d\sigma)^\vee}$ of the measure ${d\sigma}$ is defined by the formula

$\displaystyle (d\sigma)^\vee(x) := \int_{S^{d-1}} e^{2\pi i x \cdot \xi}\ d\sigma(\xi).$

In other words, we have the identity

$\displaystyle \| \hat f \|_{L^2(S^{d-1}, d\sigma)}^2 = \langle f, f * (d\sigma)^\vee \rangle_{L^2({\bf R}^d)}, \ \ \ \ \ (5)$

using the Hermitian inner product ${\langle f, g\rangle_{L^2({\bf R}^d)} := \int_{{\bf R}^d} \overline{f(x)} g(x)\ dx}$. Since the sphere ${S^{d-1}}$ has bounded measure, we have from the triangle inequality that

$\displaystyle (d\sigma)^\vee(x) \lesssim_d 1. \ \ \ \ \ (6)$

Also, from the method of stationary phase (as covered in the previous class 247A), or Bessel function asymptotics, we have the decay

$\displaystyle (d\sigma)^\vee(x) \lesssim_d |x|^{-(d-1)/2} \ \ \ \ \ (7)$

for any ${x \in {\bf R}^d}$ (note that the bound already follows from (6) unless ${|x| \geq 1}$). We remark that the exponent ${-\frac{d-1}{2}}$ here can be seen geometrically from the following considerations. For ${|x|>1}$, the phase ${e^{2\pi i x \cdot \xi}}$ on the sphere is stationary at the two antipodal points ${x/|x|, -x/|x|}$ of the sphere, and constant on the tangent hyperplanes to the sphere at these points. The wavelength of this phase is proportional to ${1/|x|}$, so the phase would be approximately stationary on a cap formed by intersecting the sphere with a ${\sim 1/|x|}$ neighbourhood of the tangent hyperplane to one of the stationary points. As the sphere deviates from its tangent hyperplane only to second order at these points, this cap will have diameter ${\sim 1/|x|^{1/2}}$ in the directions of the ${d-1}$-dimensional tangent space, so the cap will have surface measure ${\sim |x|^{-(d-1)/2}}$, which leads to the prediction (7). We combine (6), (7) into the unified estimate

$\displaystyle (d\sigma)^\vee(x) \lesssim_d \langle x\rangle^{-(d-1)/2}, \ \ \ \ \ (8)$

where the “Japanese bracket” ${\langle x\rangle}$ is defined as ${\langle x \rangle := (1+|x|^2)^{1/2}}$. Since ${\langle x \rangle^{-\alpha}}$ lies in ${L^p({\bf R}^d)}$ precisely when ${p > \frac{d}{\alpha}}$, we conclude that

$\displaystyle (d\sigma)^\vee \in L^q({\bf R}^d) \hbox{ iff } q > \frac{d}{(d-1)/2} = \frac{2d}{d-1}.$

Applying Young’s convolution inequality, we conclude (after some arithmetic) that

$\displaystyle \| f * (d\sigma)^\vee \|_{L^{p'}({\bf R}^d)} \lesssim_{p,d} \|f\|_{L^p({\bf R}^d)}$

whenever ${1 \leq p < \frac{4d}{3d+1}}$, and the claim now follows from (5) and Hölder’s inequality. $\Box$
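For the reader's convenience, here is one way to carry out the “some arithmetic” (a routine exponent count, spelled out here as a supplement):

```latex
% Young's inequality \|f * g\|_{L^{p'}} \lesssim \|f\|_{L^p} \|g\|_{L^q}
% requires 1 + \frac{1}{p'} = \frac{1}{p} + \frac{1}{q}, i.e. (as \frac{1}{p'} = 1 - \frac{1}{p})
\frac{1}{q} = 2 - \frac{2}{p}.
% Taking g = (d\sigma)^\vee, which lies in L^q precisely when \frac{1}{q} < \frac{d-1}{2d},
% the condition on p becomes
2 - \frac{2}{p} < \frac{d-1}{2d}
\iff \frac{2}{p} > \frac{3d+1}{2d}
\iff p < \frac{4d}{3d+1}.
```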

Remark 2 By using the Hardy-Littlewood-Sobolev inequality in place of Young’s convolution inequality, one can also establish this result for ${p = \frac{4d}{3d+1}}$.
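(A further aside, not in the original notes: the decay (7) is easy to witness numerically when ${d=2}$, where ${(d\sigma)^\vee(x) = 2\pi J_0(2\pi |x|)}$ is a Bessel function. The quadrature below, numpy assumed, checks that ${|x|^{1/2} |(d\sigma)^\vee(x)|}$ stays bounded.)

```python
import numpy as np

def surface_measure_inverse_ft(r, npts=20000):
    # (d sigma)^vee(x) for the circle S^1 in R^2 and |x| = r, by direct
    # quadrature of int_0^{2pi} e^{2 pi i r cos(theta)} d theta (the
    # rectangle rule is spectrally accurate for periodic integrands).
    theta = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    return 2.0 * np.pi * np.mean(np.exp(2j * np.pi * r * np.cos(theta)))

# Check the stationary phase prediction |(d sigma)^vee(x)| <~ |x|^{-1/2}
# (the exponent -(d-1)/2 with d = 2) at a few radii.
scaled = [np.sqrt(r) * abs(surface_measure_inverse_ft(r)) for r in (4.0, 16.0, 64.0)]
print(scaled)  # all bounded (by about 2, the Bessel envelope)
```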

Motivated by this result, given any Radon measure ${\mu}$ on ${{\bf R}^d}$ and any exponents ${1 \leq p,q \leq \infty}$, we use ${R_\mu(p \rightarrow q)}$ to denote the claim that the restriction estimate

$\displaystyle \| \hat f \|_{L^q({\bf R}^d, \mu)} \lesssim_{d,p,q,\mu} \|f\|_{L^p({\bf R}^d)} \ \ \ \ \ (9)$

holds for all Schwartz functions ${f}$; if ${S}$ is a ${k}$-dimensional submanifold of ${{\bf R}^d}$ (possibly with boundary), we write ${R_S(p \rightarrow q)}$ for ${R_\mu(p \rightarrow q)}$ where ${\mu}$ is the ${k}$-dimensional surface measure on ${S}$. Thus, for instance, we trivially always have ${R_S(1 \rightarrow \infty)}$, while Theorem 1 asserts that ${R_{S^{d-1}}(p \rightarrow 2)}$ holds whenever ${1 \leq p < \frac{4d}{3d+1}}$. We will not give a comprehensive survey of restriction theory in these notes, but instead focus on some model results that showcase some of the basic techniques in the field. (I have a more detailed survey on this topic from 2003, but it is somewhat out of date.)

Just a brief post to record some notable papers in my fields of interest that appeared on the arXiv recently.

• A sharp square function estimate for the cone in ${\bf R}^3$“, by Larry Guth, Hong Wang, and Ruixiang Zhang.  This paper establishes an optimal (up to epsilon losses) square function estimate for the three-dimensional light cone that was essentially conjectured by Mockenhaupt, Seeger, and Sogge, which has a number of other consequences including Sogge’s local smoothing conjecture for the wave equation in two spatial dimensions, which in turn implies the (already known) Bochner-Riesz, restriction, and Kakeya conjectures in two dimensions.   Interestingly, modern techniques such as polynomial partitioning and decoupling estimates are not used in this argument; instead, the authors mostly rely on an induction on scales argument and Kakeya type estimates.  Many previous authors (including myself) were able to get weaker estimates of this type by an induction on scales method, but there were always significant inefficiencies in doing so; in particular knowing the sharp square function estimate at smaller scales did not imply the sharp square function estimate at the given larger scale.  The authors here get around this issue by finding an even stronger estimate that implies the square function estimate, but behaves significantly better with respect to induction on scales.
• “On the Chowla and twin primes conjectures over ${\mathbb F}_q[T]$“, by Will Sawin and Mark Shusterman.  This paper resolves a number of well known open conjectures in analytic number theory, such as the Chowla conjecture and the twin prime conjecture (in the strong form conjectured by Hardy and Littlewood), in the case of function fields where the order of the field is a prime power $q=p^j$ whose characteristic $p$ is fixed (in contrast to a number of existing results in the “large $q$” limit) but whose exponent $j$ is large.  The techniques here are orthogonal to those used in recent progress towards the Chowla conjecture over the integers (e.g., in this previous paper of mine); the starting point is an algebraic observation that in certain function fields, the Mobius function behaves like a quadratic Dirichlet character along certain arithmetic progressions.  In principle, this reduces problems such as Chowla’s conjecture to problems about estimating sums of Dirichlet characters, for which more is known; but the task is still far from trivial.
• “Bounds for sets with no polynomial progressions“, by Sarah Peluse.  This paper can be viewed as part of a larger project to obtain quantitative density Ramsey theorems of Szemeredi type.  For instance, Gowers famously established a relatively good quantitative bound for Szemeredi’s theorem that all dense subsets of integers contain arbitrarily long arithmetic progressions $a, a+r, \dots, a+(k-1)r$.  The corresponding question for polynomial progressions $a+P_1(r), \dots, a+P_k(r)$ is considered more difficult for a number of reasons.  One of them is that dilation invariance is lost; a dilation of an arithmetic progression is again an arithmetic progression, but a dilation of a polynomial progression will in general not be a polynomial progression with the same polynomials $P_1,\dots,P_k$.  Another issue is that the ranges of the two parameters $a,r$ are now at different scales.  Peluse gets around these difficulties in the case when all the polynomials $P_1,\dots,P_k$ have distinct degrees, which is in some sense the opposite case to that considered by Gowers (in particular, she avoids the need to obtain quantitative inverse theorems for high order Gowers norms, thanks to a degree lowering argument that is available in this case; such inverse theorems were recently obtained in the integer setting by Manners, but with bounds that are probably not strong enough to recover the bounds in Peluse’s results).  To resolve the first difficulty one has to make all the estimates rather uniform in the coefficients of the polynomials $P_j$, so that one can still run a density increment argument efficiently.  To resolve the second difficulty one needs to find a quantitative concatenation theorem for Gowers uniformity norms.  Many of these ideas were developed in previous papers of Peluse and Peluse-Prendiville in simpler settings.
• On blow up for the energy super critical defocusing non linear Schrödinger equations“, by Frank Merle, Pierre Raphael, Igor Rodnianski, and Jeremie Szeftel.  This paper (when combined with two companion papers) resolves a long-standing problem as to whether finite time blowup occurs for the defocusing supercritical nonlinear Schrödinger equation (at least in certain dimensions and nonlinearities).  I had a previous paper establishing a result like this if one “cheated” by replacing the nonlinear Schrodinger equation by a system of such equations, but remarkably they are able to tackle the original equation itself without any such cheating.  Given the very analogous situation with Navier-Stokes, where again one can create finite time blowup by “cheating” and modifying the equation, it does raise hope that finite time blowup for the incompressible Navier-Stokes and Euler equations can be established…  In fact the connection may not just be at the level of analogy; a surprising key ingredient in the proofs here is the observation that a certain blowup ansatz for the nonlinear Schrodinger equation is governed by solutions to the (compressible) Euler equation, and finite time blowup examples for the latter can be used to construct finite time blowup examples for the former.

I have just uploaded to the arXiv my paper “Sharp bounds for multilinear curved Kakeya, restriction and oscillatory integral estimates away from the endpoint“, submitted to Mathematika. In this paper I return (after more than a decade’s absence) to one of my first research interests, namely the Kakeya and restriction family of conjectures. The starting point is the following “multilinear Kakeya estimate” first established in the non-endpoint case by Bennett, Carbery, and myself, and then in the endpoint case by Guth (with further proofs and extensions by Bourgain-Guth and Carbery-Valdimarsson):

Theorem 1 (Multilinear Kakeya estimate) Let ${\delta > 0}$ be a radius. For each ${j = 1,\dots,d}$, let ${\mathbb{T}_j}$ denote a finite family of infinite tubes ${T_j}$ in ${{\bf R}^d}$ of radius ${\delta}$. Assume the following axiom:

• (i) (Transversality) whenever ${T_j \in \mathbb{T}_j}$ is oriented in the direction of a unit vector ${n_j}$ for ${j =1,\dots,d}$, we have

$\displaystyle \left|\bigwedge_{j=1}^d n_j\right| \geq A^{-1}$

for some ${A>0}$, where we use the usual Euclidean norm on the wedge product ${\bigwedge^d {\bf R}^d}$.

Then, for any ${p \geq \frac{1}{d-1}}$, one has

$\displaystyle \left\| \prod_{j=1}^d \sum_{T_j \in \mathbb{T}_j} 1_{T_j} \right\|_{L^p({\bf R}^d)} \lesssim_{A,p} \delta^{\frac{d}{p}} \prod_{j \in [d]} \# \mathbb{T}_j, \ \ \ \ \ (1)$

where ${L^p({\bf R}^d)}$ are the usual Lebesgue norms with respect to Lebesgue measure, ${1_{T_j}}$ denotes the indicator function of ${T_j}$, and ${\# \mathbb{T}_j}$ denotes the cardinality of ${\mathbb{T}_j}$.
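To get a feel for the estimate, note that in the easiest case ${d=2}$, ${p=1}$, with one family of vertical strips and one of horizontal strips (so that transversality is as favourable as possible), (1) in fact holds with equality by Fubini's theorem. The toy discretisation below (my own illustration, numpy assumed) verifies this on a grid:

```python
import numpy as np

# Toy check of (1) with d = 2, p = 1: the first family consists of vertical
# strips {|x_1 - c| < delta/2} and the second of horizontal strips
# {|x_2 - c| < delta/2}, so the integrand is a tensor product and Fubini
# gives  int (sum 1_{T_1})(sum 1_{T_2}) = (delta #T_1)(delta #T_2).
delta, M = 0.05, 4000
xs = (np.arange(M) + 0.5) / M               # midpoint grid on [0, 1]
rng = np.random.default_rng(5)
centers1 = rng.uniform(0.1, 0.9, size=8)    # vertical strip centres
centers2 = rng.uniform(0.1, 0.9, size=11)   # horizontal strip centres

f = sum((np.abs(xs - c) < delta / 2).astype(float) for c in centers1)
g = sum((np.abs(xs - c) < delta / 2).astype(float) for c in centers2)

lhs = (f.sum() / M) * (g.sum() / M)                      # L^1 norm of the product
rhs = (delta * len(centers1)) * (delta * len(centers2))  # delta^{d/p} prod #T_j
print(lhs, rhs)  # approximately equal
```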

The original proof of this proceeded using a heat flow monotonicity method, which in my previous post I reinterpreted using a “virtual integration” concept on a fractional Cartesian product space. It turns out that this machinery is somewhat flexible, and can be used to establish some other estimates of this type. The first result of this paper is to extend the above theorem to the curved setting, in which one localises to a ball of radius ${O(1)}$ (and sets ${\delta}$ to be small), but allows the tubes ${T_j}$ to be curved in a ${C^2}$ fashion. If one runs the heat flow monotonicity argument, one now picks up some additional error terms arising from the curvature, but as the spatial scale approaches zero, the tubes become increasingly linear, and as such the error terms end up being an integrable multiple of the main term, at which point one can conclude by Gronwall’s inequality (actually for technical reasons we use a bootstrap argument instead of Gronwall). A key point in this approach is that one obtains optimal bounds (not losing factors of ${\delta^{-\varepsilon}}$ or ${\log^{O(1)} \frac{1}{\delta}}$), so long as one stays away from the endpoint case ${p=\frac{1}{d-1}}$ (which does not seem to be easily treatable by the heat flow methods). Previously, the paper of Bennett, Carbery, and myself was able to use an induction on scale argument to obtain a curved multilinear Kakeya estimate losing a factor of ${\log^{O(1)} \frac{1}{\delta}}$ (after optimising the argument); later arguments of Bourgain-Guth and Carbery-Valdimarsson, based on algebraic topology methods, could also obtain a curved multilinear Kakeya estimate without such losses, but only in the algebraic case when the tubes were neighbourhoods of algebraic curves of bounded degree.

Perhaps more interestingly, we are also able to extend the heat flow monotonicity method to apply directly to the multilinear restriction problem, giving the following global multilinear restriction estimate:

Theorem 2 (Multilinear restriction theorem) Let ${\frac{1}{d-1} < p \leq \infty}$ be an exponent, and let ${A \geq 2}$ be a parameter. Let ${M}$ be a sufficiently large natural number, depending only on ${d}$. For ${j \in [d]}$, let ${U_j}$ be an open subset of ${B^{d-1}(0,A)}$, and let ${h_j: U_j \rightarrow {\bf R}}$ be a smooth function obeying the following axioms:

• (i) (Regularity) For each ${j \in [d]}$ and ${\xi \in U_j}$, one has

$\displaystyle |\nabla_\xi^{\otimes m} h_j(\xi)| \leq A \ \ \ \ \ (2)$

for all ${1 \leq m \leq M}$.

• (ii) (Transversality) One has

$\displaystyle \left| \bigwedge_{j \in [d]} (-\nabla_\xi h_j(\xi_j),1) \right| \geq A^{-1}$

whenever ${\xi_j \in U_j}$ for ${j \in [d]}$.

Let ${U_{j,1/A} \subset U_j}$ be the sets

$\displaystyle U_{j,1/A} := \{ \xi \in U_j: B^{d-1}(\xi,1/A) \subset U_j \}. \ \ \ \ \ (3)$

Then one has

$\displaystyle \left\| \prod_{j \in [d]} {\mathcal E}_j f_j \right\|_{L^{2p}({\bf R}^d)} \leq A^{O(1)} \left(d-1-\frac{1}{p}\right)^{-O(1)} \prod_{j \in [d]} \|f_j \|_{L^2(U_{j,1/A})}$

for any ${f_j \in L^2(U_{j,1/A} \rightarrow {\bf C})}$, ${j \in [d]}$, extended by zero outside of ${U_{j,1/A}}$, where ${{\mathcal E}_j}$ denotes the extension operator

$\displaystyle {\mathcal E}_j f_j( x', x_d ) := \int_{U_j} e^{2\pi i (x' \xi^T + x_d h_j(\xi))} f_j(\xi)\ d\xi.$

Local versions of such estimates, in which ${L^{2p}({\bf R}^d)}$ is replaced with ${L^{2p}(B^d(0,R))}$ for some ${R \geq 2}$, and one accepts a loss of the form ${\log^{O(1)} R}$, were already established by Bennett, Carbery, and myself using an induction on scale argument. In a later paper of Bourgain-Guth these losses were removed by “epsilon removal lemmas” to recover Theorem 2, but only in the case when all the hypersurfaces involved had curvatures bounded away from zero.

There are two main new ingredients in the proof of Theorem 2. The first is to replace the usual induction on scales scheme to establish multilinear restriction by a “ball inflation” induction on scales scheme that more closely resembles the proof of decoupling theorems. In particular, we actually prove the more general family of estimates

$\displaystyle \left\| \prod_{j \in [d]} E_{r}[{\mathcal E}_j f_j] \right\|_{L^{p}({\bf R}^d)} \leq A^{O(1)} \left(d-1 - \frac{1}{p}\right)^{-O(1)} r^{\frac{d}{p}} \prod_{j \in [d]} \| f_j \|_{L^2(U_{j,1/A})}^2$

where ${E_r}$ denotes the local energies

$\displaystyle E_{r}[f](x',x_d) := \int_{B^{d-1}(x',r)} |f(y',x_d)|^2\ dy'$

(actually for technical reasons it is more convenient to use a smoother weight than the strict cutoff to the disk ${B^{d-1}(x',r)}$). With logarithmic losses, it is not difficult to establish this estimate by an upward induction on ${r}$. To avoid such losses we use the heat flow monotonicity method. Here we run into the issue that the extension operators ${{\mathcal E}_j f_j}$ are complex-valued rather than non-negative, and thus would not be expected to obey many good monotonicity properties. However, the local energies ${E_r[{\mathcal E}_j f_j]}$ can be expressed in terms of the magnitude squared of what is essentially the Gabor transform of ${{\mathcal E}_j f_j}$, and these are non-negative; furthermore, the dispersion relation associated to the extension operators ${{\mathcal E}_j f_j}$ implies that these Gabor transforms propagate along tubes, so that the situation becomes quite similar (up to several additional lower order error terms) to that in the multilinear Kakeya problem. (This can be viewed as a continuous version of the usual wave packet decomposition method used to relate restriction and Kakeya problems, which when combined with the heat flow monotonicity method allows one to use a continuous version of induction on scales methods that do not concede any logarithmic factors.)

Finally, one can combine the curved multilinear Kakeya result with the multilinear restriction result to obtain estimates for multilinear oscillatory integrals away from the endpoint. Again, this sort of implication was already established in the previous paper of Bennett, Carbery, and myself, but the arguments there had some epsilon losses in the exponents; here we were able to run the argument more carefully and avoid these losses.

The following situation is very common in modern harmonic analysis: one has a large scale parameter ${N}$ (sometimes written as ${N=1/\delta}$ in the literature for some small scale parameter ${\delta}$, or as ${N=R}$ for some large radius ${R}$), which ranges over some unbounded subset of ${[1,+\infty)}$ (e.g. all sufficiently large real numbers ${N}$, or all powers of two), and one has some positive quantity ${D(N)}$ depending on ${N}$ that is known to be of polynomial size in the sense that

$\displaystyle C^{-1} N^{-C} \leq D(N) \leq C N^C \ \ \ \ \ (1)$

for all ${N}$ in the range and some constant ${C>0}$, and one wishes to obtain a subpolynomial upper bound for ${D(N)}$, by which we mean an upper bound of the form

$\displaystyle D(N) \leq C_\varepsilon N^\varepsilon \ \ \ \ \ (2)$

for all ${\varepsilon>0}$ and all ${N}$ in the range, where ${C_\varepsilon>0}$ can depend on ${\varepsilon}$ but is independent of ${N}$. In many applications, this bound is nearly tight in the sense that one can easily establish a matching lower bound

$\displaystyle D(N) \geq C_\varepsilon N^{-\varepsilon}$

in which case the property of having a subpolynomial upper bound is equivalent to that of being of subpolynomial size in the sense that

$\displaystyle C_\varepsilon N^{-\varepsilon} \leq D(N) \leq C_\varepsilon N^\varepsilon \ \ \ \ \ (3)$

for all ${\varepsilon>0}$ and all ${N}$ in the range. It would naturally be of interest to tighten these bounds further, for instance to show that ${D(N)}$ is polylogarithmic or even bounded in size, but a subpolynomial bound is already sufficient for many applications.
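(As a throwaway numerical illustration of the gap between polylogarithmic and subpolynomial bounds: taking, say, ${D(N) = \log^{10} N}$ and a fixed ${\varepsilon}$, elementary calculus locates the worst ${N}$ in (2) at ${\log N = 10/\varepsilon}$, after which the ratio decays to zero, confirming that polylogarithmic quantities are of subpolynomial size. A quick check with numpy:)

```python
import numpy as np

# For D(N) = log(N)^10 and fixed eps, the ratio D(N) / N^eps is maximised
# where d/dt (10 log t - eps t) = 0 with t = log N, i.e. at log N = 10/eps;
# beyond that point the ratio decays to zero, so D(N) <= C_eps N^eps.
eps = 0.1
t = np.linspace(1.0, 300.0, 5000)          # t = log N, up to N = e^300
log_ratio = 10.0 * np.log(t) - eps * t     # log of  log(N)^10 / N^eps
t_star = t[np.argmax(log_ratio)]
print(t_star)  # close to 10 / eps = 100
```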

Let us give some illustrative examples of this type of problem:

Example 1 (Kakeya conjecture) Here ${N}$ ranges over all of ${[1,+\infty)}$. Let ${d \geq 2}$ be a fixed dimension. For each ${N \geq 1}$, we pick a maximal ${1/N}$-separated set of directions ${\Omega_N \subset S^{d-1}}$. We let ${D(N)}$ be the smallest constant for which one has the Kakeya inequality

$\displaystyle \| \sum_{\omega \in \Omega_N} 1_{T_\omega} \|_{L^{\frac{d}{d-1}}({\bf R}^d)} \leq D(N),$

where ${T_\omega}$ is a ${1/N \times 1}$-tube oriented in the direction ${\omega}$. The Kakeya maximal function conjecture is then equivalent to the assertion that ${D(N)}$ has a subpolynomial upper bound (or equivalently, is of subpolynomial size). Currently this is only known in dimension ${d=2}$.

Example 2 (Restriction conjecture for the sphere) Here ${N}$ ranges over all of ${[1,+\infty)}$. Let ${d \geq 2}$ be a fixed dimension. We let ${D(N)}$ be the smallest constant for which one has the restriction inequality

$\displaystyle \| \widehat{fd\sigma} \|_{L^{\frac{2d}{d-1}}(B(0,N))} \leq D(N) \| f \|_{L^\infty(S^{d-1})}$

for all bounded measurable functions ${f}$ on the unit sphere ${S^{d-1}}$ equipped with surface measure ${d\sigma}$, where ${B(0,N)}$ is the ball of radius ${N}$ centred at the origin. The restriction conjecture of Stein for the sphere is then equivalent to the assertion that ${D(N)}$ has a subpolynomial upper bound (or equivalently, is of subpolynomial size). Currently this is only known in dimension ${d=2}$.

Example 3 (Multilinear Kakeya inequality) Again ${N}$ ranges over all of ${[1,+\infty)}$. Let ${d \geq 2}$ be a fixed dimension, and let ${S_1,\dots,S_d}$ be compact subsets of the sphere ${S^{d-1}}$ which are transverse in the sense that there is a uniform lower bound ${|\omega_1 \wedge \dots \wedge \omega_d| \geq c > 0}$ for the wedge product of directions ${\omega_i \in S_i}$ for ${i=1,\dots,d}$ (equivalently, there is no hyperplane through the origin that intersects all of the ${S_i}$). For each ${N \geq 1}$, we let ${D(N)}$ be the smallest constant for which one has the multilinear Kakeya inequality

$\displaystyle \| \mathrm{geom} \sum_{T \in {\mathcal T}_i} 1_{T} \|_{L^{\frac{d}{d-1}}(B(0,N))} \leq D(N) \mathrm{geom} \# {\mathcal T}_i,$

where for each ${i=1,\dots,d}$, ${{\mathcal T}_i}$ is a collection of infinite tubes in ${{\bf R}^d}$ of radius ${1}$ oriented in a direction in ${S_i}$, which are separated in the sense that for any two tubes ${T,T'}$ in ${{\mathcal T}_i}$, either the directions of ${T,T'}$ differ by an angle of at least ${1/N}$, or ${T,T'}$ are disjoint; and ${\mathrm{geom} = \mathrm{geom}_{1 \leq i \leq d}}$ is our notation for the geometric mean

$\displaystyle \mathrm{geom} a_i := (a_1 \dots a_d)^{1/d}.$

The multilinear Kakeya inequality of Bennett, Carbery, and myself establishes that ${D(N)}$ is of subpolynomial size; a later argument of Guth improves this further by showing that ${D(N)}$ is bounded (and in fact comparable to ${1}$).

Example 4 (Multilinear restriction theorem) Once again ${N}$ ranges over all of ${[1,+\infty)}$. Let ${d \geq 2}$ be a fixed dimension, and let ${S_1,\dots,S_d}$ be compact subsets of the sphere ${S^{d-1}}$ which are transverse as in the previous example. For each ${N \geq 1}$, we let ${D(N)}$ be the smallest constant for which one has the multilinear restriction inequality

$\displaystyle \| \mathrm{geom} \widehat{f_id\sigma} \|_{L^{\frac{2d}{d-1}}(B(0,N))} \leq D(N) \mathrm{geom} \| f_i \|_{L^2(S_i)}$

for all bounded measurable functions ${f_i}$ on ${S_i}$ for ${i=1,\dots,d}$. Then the multilinear restriction theorem of Bennett, Carbery, and myself establishes that ${D(N)}$ is of subpolynomial size; it is known to be bounded for ${d=2}$ (as can be easily verified from Plancherel’s theorem), but it remains open whether it is bounded for any ${d>2}$.

Example 5 (Decoupling for the paraboloid) ${N}$ now ranges over the square numbers. Let ${d \geq 2}$, and subdivide the unit cube ${[0,1]^{d-1}}$ into ${N^{(d-1)/2}}$ cubes ${Q}$ of sidelength ${1/N^{1/2}}$. For any ${g \in L^1([0,1]^{d-1})}$, define the extension operators

$\displaystyle E_{[0,1]^{d-1}} g( x', x_d ) := \int_{[0,1]^{d-1}} e^{2\pi i (x' \cdot \xi + x_d |\xi|^2)} g(\xi)\ d\xi$

and

$\displaystyle E_Q g( x', x_d ) := \int_{Q} e^{2\pi i (x' \cdot \xi + x_d |\xi|^2)} g(\xi)\ d\xi$

for ${x' \in {\bf R}^{d-1}}$ and ${x_d \in {\bf R}}$. We also introduce the weight function

$\displaystyle w_{B(0,N)}(x) := (1 + \frac{|x|}{N})^{-100d}.$

For any ${p}$, let ${D_p(N)}$ be the smallest constant for which one has the decoupling inequality

$\displaystyle \| E_{[0,1]^{d-1}} g \|_{L^p(w_{B(0,N)})} \leq D_p(N) (\sum_Q \| E_Q g \|_{L^p(w_{B(0,N)})}^2)^{1/2}.$

The decoupling theorem of Bourgain and Demeter asserts that ${D_p(N)}$ is of subpolynomial size for all ${p}$ in the optimal range ${2 \leq p \leq \frac{2(d+1)}{d-1}}$.

Example 6 (Decoupling for the moment curve) ${N}$ now ranges over the natural numbers. Let ${d \geq 2}$, and subdivide ${[0,1]}$ into ${N}$ intervals ${J}$ of length ${1/N}$. For any ${g \in L^1([0,1])}$, define the extension operators

$\displaystyle E_{[0,1]} g(x_1,\dots,x_d) = \int_{[0,1]} e^{2\pi i ( x_1 \xi + x_2 \xi^2 + \dots + x_d \xi^d )} g(\xi)\ d\xi$

and more generally

$\displaystyle E_J g(x_1,\dots,x_d) = \int_{J} e^{2\pi i ( x_1 \xi + x_2 \xi^2 + \dots + x_d \xi^d )} g(\xi)\ d\xi$

for ${(x_1,\dots,x_d) \in {\bf R}^d}$. For any ${p}$, let ${D_p(N)}$ be the smallest constant for which one has the decoupling inequality

$\displaystyle \| E_{[0,1]} g \|_{L^p(w_{B(0,N^d)})} \leq D_p(N) (\sum_J \| E_J g \|_{L^p(w_{B(0,N^d)})}^2)^{1/2}.$

It was shown by Bourgain, Demeter, and Guth that ${D_p(N)}$ is of subpolynomial size for all ${p}$ in the optimal range ${2 \leq p \leq d(d+1)}$, which among other things implies the Vinogradov main conjecture (as discussed in this previous post).

It is convenient to use asymptotic notation to express these estimates. We write ${X \lesssim Y}$, ${X = O(Y)}$, or ${Y \gtrsim X}$ to denote the inequality ${|X| \leq CY}$ for some constant ${C}$ independent of the scale parameter ${N}$, and write ${X \sim Y}$ for ${X \lesssim Y \lesssim X}$. We write ${X = o(Y)}$ to denote a bound of the form ${|X| \leq c(N) Y}$ where ${c(N) \rightarrow 0}$ as ${N \rightarrow \infty}$ along the given range of ${N}$. We then write ${X \lessapprox Y}$ for ${X \lesssim N^{o(1)} Y}$, and ${X \approx Y}$ for ${X \lessapprox Y \lessapprox X}$. Then the statement that ${D(N)}$ is of polynomial size can be written as

$\displaystyle D(N) \sim N^{O(1)},$

while the statement that ${D(N)}$ has a subpolynomial upper bound can be written as

$\displaystyle D(N) \lessapprox 1$

and similarly the statement that ${D(N)}$ is of subpolynomial size is simply

$\displaystyle D(N) \approx 1.$

Many modern approaches to bounding quantities like ${D(N)}$ in harmonic analysis rely on some sort of induction on scales approach in which ${D(N)}$ is bounded using quantities such as ${D(N^\theta)}$ for some exponents ${0 < \theta < 1}$. For instance, suppose one is somehow able to establish the inequality

$\displaystyle D(N) \lessapprox D(\sqrt{N}) \ \ \ \ \ (4)$

for all ${N \geq 1}$, and suppose that ${D}$ is also known to be of polynomial size. Then this implies that ${D}$ has a subpolynomial upper bound. Indeed, one can iterate this inequality to show that

$\displaystyle D(N) \lessapprox D(N^{1/2^k})$

for any fixed ${k}$; using the polynomial size hypothesis one thus has

$\displaystyle D(N) \lessapprox N^{C/2^k}$

for some constant ${C}$ independent of ${k}$. As ${k}$ can be arbitrarily large, we conclude that ${D(N) \lesssim_\varepsilon N^\varepsilon}$ for any ${\varepsilon>0}$, and hence ${D}$ has a subpolynomial upper bound. (This sort of iteration is used for instance in my paper with Bennett and Carbery to derive the multilinear restriction theorem from the multilinear Kakeya theorem.)
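As a toy numerical illustration of this iteration (a sketch under assumptions: the constant ${10}$ and the extremal recursion ${D(N) = 10 D(\sqrt{N})}$ are invented for demonstration, and we work with ${\log_2 N}$ to avoid floating-point overflow), one can watch the effective exponent ${\log D(N)/\log N}$ decay to zero:

```python
import math

def log10_D(log2_N: float) -> float:
    """log10 of the extremal solution of the toy recursion
    D(N) = 10 * D(sqrt(N)), D(N) = 1 for N <= 2, with N given via log2(N)."""
    steps = 0
    L = log2_N
    while L > 1.0:       # while N > 2
        L /= 2.0         # N -> sqrt(N) halves log2(N)
        steps += 1
    return float(steps)  # D(N) = 10^steps, i.e. roughly (log N)^(log2 10)

# Effective exponent log D(N) / log N for N = 2^(2^e): it tends to 0,
# so this D has a subpolynomial upper bound despite the lossy constant 10.
for e in [2, 4, 8, 16, 32]:
    log2_N = float(2 ** e)
    expo = log10_D(log2_N) / (log2_N * math.log10(2))
    print(f"N = 2^{2**e:<11} exponent ~ {expo:.6f}")
```

The point of the sketch is that the recursion closes after about ${\log_2 \log_2 N}$ steps, so the loss is only quasi-logarithmic in ${N}$.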

Exercise 7 If ${D}$ is of polynomial size, and obeys the inequality

$\displaystyle D(N) \lessapprox D(N^{1-\varepsilon}) + N^{O(\varepsilon)}$

for any fixed ${\varepsilon>0}$, where the implied constant in the ${O(\varepsilon)}$ notation is independent of ${\varepsilon}$, show that ${D}$ has a subpolynomial upper bound. This type of inequality is used to equate various linear estimates in harmonic analysis with their multilinear counterparts; see for instance this paper of myself, Vargas, and Vega for an early example of this method.

In more recent years, more sophisticated induction on scales arguments have emerged in which one or more auxiliary quantities besides ${D(N)}$ also come into play. Here is one example, this time being an abstraction of a short proof of the multilinear Kakeya inequality due to Guth. Let ${D(N)}$ be the quantity in Example 3. We define ${D(N,M)}$ similarly to ${D(N)}$ for any ${M \geq 1}$, except that we now also require that the diameter of each set ${S_i}$ is at most ${1/M}$. One can then observe the following estimates, valid for all ${N, N_1, N_2, M \geq 1}$:

$\displaystyle D(N,M) \leq D(N) \lessapprox M^{O(1)} D(N,M) \ \ \ \ \ (5)$

$\displaystyle D(N_1 N_2, M) \lessapprox D(N_1, M) D(N_2, M) \ \ \ \ \ (6)$

$\displaystyle D(N,N) \lessapprox 1. \ \ \ \ \ (7)$

These inequalities now imply that ${D}$ has a subpolynomial upper bound, as we now demonstrate. Let ${k}$ be a large natural number (independent of ${N}$) to be chosen later. From many iterations of (6) we have

$\displaystyle D(N, N^{1/k}) \lessapprox D(N^{1/k},N^{1/k})^k$

and hence by (7) (with ${N}$ replaced by ${N^{1/k}}$) and (5)

$\displaystyle D(N) \lessapprox N^{O(1/k)}$

where the implied constant in the ${O(1/k)}$ exponent does not depend on ${k}$. As ${k}$ can be arbitrarily large, the claim follows. We remark that a nearly identical scheme lets one deduce decoupling estimates for the three-dimensional cone from that of the two-dimensional paraboloid; see the final section of this paper of Bourgain and Demeter.

Now we give a slightly more sophisticated example, abstracted from the proof of ${L^p}$ decoupling of the paraboloid by Bourgain and Demeter, as described in this study guide after specialising the dimension to ${2}$ and the exponent ${p}$ to the endpoint ${p=6}$ (the argument is also more or less summarised in this previous post). (In the cited papers, the argument was phrased only for the non-endpoint case ${p<6}$, but it has been observed independently by many experts that the argument extends with only minor modifications to the endpoint ${p=6}$.) Here we have a quantity ${D_p(N)}$ that we wish to show is of subpolynomial size. For any ${0 < \varepsilon < 1}$ and ${0 \leq u \leq 1}$, one can define an auxiliary quantity ${A_{p,u,\varepsilon}(N)}$. The precise definitions of ${D_p(N)}$ and ${A_{p,u,\varepsilon}(N)}$ are given in the study guide (where they are called ${\mathrm{Dec}_2(1/N,p)}$ and ${A_p(u, B(0,N^2), u, g)}$ respectively, setting ${\delta = 1/N}$ and ${\nu = \delta^\varepsilon}$) but will not be of importance to us for this discussion. Suffice it to say that the following estimates are known:

$\displaystyle D_p(N) \lessapprox D_p(N^{1-\varepsilon}) + N^{O(\varepsilon)+O(u)} A_{p,u,\varepsilon}(N) \ \ \ \ \ (8)$

$\displaystyle A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)+O(u)} D_p(N) \ \ \ \ \ (9)$

$\displaystyle A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{p,2u,\varepsilon}(N)^{1/2} D_p(N^{1-u})^{1/2} \ \ \ \ \ (10)$

for all ${0 < \varepsilon < 1}$ and all sufficiently small ${0 < u < 1}$.

In all of these bounds the implied constant exponents such as ${O(\varepsilon)}$ or ${O(u)}$ are independent of ${\varepsilon}$ and ${u}$, although the implied constants in the ${\lessapprox}$ notation can depend on both ${\varepsilon}$ and ${u}$. Here we gloss over an annoying technicality: quantities such as ${N^{1-\varepsilon}}$, ${N^{1-u}}$, or ${N^u}$ might not be integers (and might not divide evenly into ${N}$), as is needed for the application to decoupling theorems; this can be resolved by restricting the scales involved to powers of two and restricting the values of ${\varepsilon, u}$ to certain rational values, which introduces some complications to the later arguments that we shall simply ignore, as they do not significantly affect the numerology.

It turns out that these estimates imply that ${D_p(N)}$ is of subpolynomial size. We give the argument as follows. As ${D_p(N)}$ is known to be of polynomial size, we have some ${\eta>0}$ for which we have the bound

$\displaystyle D_p(N) \lessapprox N^\eta \ \ \ \ \ (11)$

for all ${N}$. We can pick ${\eta}$ to be the minimal exponent for which this bound is attained: thus

$\displaystyle \eta = \limsup_{N \rightarrow \infty} \frac{\log D_p(N)}{\log N}. \ \ \ \ \ (12)$

We will call this the upper exponent of ${D_p(N)}$. We need to show that ${\eta \leq 0}$. We assume for contradiction that ${\eta > 0}$. Let ${\varepsilon>0}$ be a sufficiently small quantity depending on ${\eta}$ to be chosen later. From (10) we then have

$\displaystyle A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{p,2u,\varepsilon}(N)^{1/2} N^{\eta (\frac{1}{2} - \frac{u}{2})}$

for any sufficiently small ${u}$. A routine iteration then gives

$\displaystyle A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{p,2^k u,\varepsilon}(N)^{1/2^k} N^{\eta (1 - \frac{1}{2^k} - k\frac{u}{2})}$

for any ${k \geq 1}$ that is independent of ${N}$, if ${u}$ is sufficiently small depending on ${k}$. A key point here is that the implied constant in the exponent ${O(\varepsilon)}$ is uniform in ${k}$ (the constant comes from summing a convergent geometric series). We now use the crude bound (9) followed by (11) and conclude that

$\displaystyle A_{p,u,\varepsilon}(N) \lessapprox N^{\eta (1 - k\frac{u}{2}) + O(\varepsilon) + O(u)}.$

Applying (8) we then have

$\displaystyle D_p(N) \lessapprox N^{\eta(1-\varepsilon)} + N^{\eta (1 - k\frac{u}{2}) + O(\varepsilon) + O(u)}.$

If we choose ${k}$ sufficiently large depending on ${\eta}$ (which was assumed to be positive), then the negative term ${-\eta k \frac{u}{2}}$ will dominate the ${O(u)}$ term. If we then pick ${u}$ sufficiently small depending on ${k}$, and finally ${\varepsilon}$ sufficiently small depending on all previous quantities, we will obtain ${D_p(N) \lessapprox N^{\eta'}}$ for some ${\eta'}$ strictly less than ${\eta}$, contradicting the definition of ${\eta}$. Thus ${\eta}$ cannot be positive, and hence ${D_p(N)}$ has a subpolynomial upper bound as required.
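The order of quantifiers here (first ${k}$, then ${u}$, then ${\varepsilon}$) can be checked with a toy computation; in the sketch below the constants in the ${O(\cdot)}$ notation are assumed to equal ${10}$ (an invented stand-in, not part of the actual argument):

```python
# Toy check that k, then u, then eps can be chosen to beat a putative
# positive exponent eta. The O(.) constants are assumed to be C = 10.
eta = 0.5              # counterfactual positive upper exponent
C = 10.0

k = int(4 * C / eta) + 1        # large enough that eta * k / 2 > 2 * C
u = 0.01                        # then a small u
eps = eta * k * u / (100 * C)   # finally eps, small depending on k and u

# The two exponents in the final bound D_p(N) <~ N^expo1 + N^expo2:
expo1 = eta * (1 - eps)
expo2 = eta * (1 - k * u / 2) + C * eps + C * u
eta_prime = max(expo1, expo2)
print(f"eta' = {eta_prime:.4f} < eta = {eta}")   # strictly smaller exponent
```

With these choices the gain ${-\eta k u/2}$ dominates the losses ${O(\varepsilon)+O(u)}$, producing the strictly smaller exponent ${\eta'}$.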

Exercise 8 Show that one still obtains a subpolynomial upper bound if the estimate (10) is replaced with

$\displaystyle A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{p,2u,\varepsilon}(N)^{1-\theta} D_p(N)^{\theta}$

for some constant ${0 \leq \theta < 1/2}$, so long as we also improve (9) to

$\displaystyle A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} D_p(N^{1-u}).$

(This variant of the argument lets one handle the non-endpoint cases ${2 < p < 6}$ of the decoupling theorem for the paraboloid.)

To establish decoupling estimates for the moment curve, restricting to the endpoint case ${p = d(d+1)}$ for sake of discussion, an even more sophisticated induction on scales argument was deployed by Bourgain, Demeter, and Guth. The proof is discussed in this previous blog post, but let us just describe an abstract version of the induction on scales argument. To bound the quantity ${D_p(N) = D_{d(d+1)}(N)}$, some auxiliary quantities ${A_{t,q,s,\varepsilon}(N)}$ are introduced for various exponents ${1 \leq t \leq \infty}$ and ${0 \leq q,s \leq 1}$ and ${\varepsilon>0}$, with the following bounds:

$\displaystyle D_p(N) \lessapprox D_p(N^{1-\varepsilon}) + N^{O(\varepsilon)+O(q)+O(s)} A_{t,q,s,\varepsilon}(N) \ \ \ \ \ (13)$

$\displaystyle A_{t,q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)+O(q)+O(s)} D_p(N) \ \ \ \ \ (14)$

$\displaystyle A_{t_0,q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{t_1,q,s,\varepsilon}(N) \ \ \ \ \ (15)$

$\displaystyle A_{t_\theta,q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{t_0,q,s,\varepsilon}(N)^{1-\theta} A_{t_1,q,s,\varepsilon}(N)^{\theta} \ \ \ \ \ (16)$

$\displaystyle A_{d(d+1),q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} D_p(N^{1-q}) \ \ \ \ \ (17)$

for all ${0 \leq q,s \leq 1}$, ${1 \leq t \leq \infty}$, ${1 \leq t_0 \leq t_1 \leq \infty}$ and ${0 \leq \theta \leq 1}$ (with ${\frac{1}{t_\theta} := \frac{1-\theta}{t_0} + \frac{\theta}{t_1}}$), together with the lower dimensional decoupling inequality

$\displaystyle A_{k(k+1),q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{k(k+1),s/k,s,\varepsilon}(N) \ \ \ \ \ (18)$

for ${1 \leq k \leq d-1}$ and ${q \leq s/k}$, and the multilinear Kakeya inequality

$\displaystyle A_{k(d+1),q,kq,\varepsilon}(N) \lessapprox A_{k(d+1),q,(k+1)q,\varepsilon}(N) \ \ \ \ \ (19)$

for ${1 \leq k \leq d-1}$ and ${0 \leq q \leq 1}$.

It is now substantially less obvious that these estimates can be combined to demonstrate that ${D_p(N)}$ is of subpolynomial size; nevertheless this can be done. A somewhat complicated arrangement of the argument (involving some rather unmotivated choices of expressions to induct over) appears in my previous blog post; I give an alternate proof later in this post.

These examples indicate a general strategy to establish that some quantity ${D(N)}$ is of subpolynomial size, by

• (i) Introducing some family of related auxiliary quantities, often parameterised by several further parameters;
• (ii) establishing as many bounds between these quantities and the original quantity ${D(N)}$ as possible; and then
• (iii) appealing to some sort of “induction on scales” to conclude.

The first two steps (i), (ii) depend very much on the harmonic analysis nature of the quantities ${D(N)}$ and the related auxiliary quantities, and the estimates in (ii) will typically be proven from various harmonic analysis inputs such as Hölder’s inequality, rescaling arguments, decoupling estimates, or Kakeya type estimates. The final step (iii) requires no knowledge of where these quantities come from in harmonic analysis, but the iterations involved can become extremely complicated.

In this post I would like to observe that one can clean up and make more systematic this final step (iii) by passing to upper exponents (12) to eliminate the role of the parameter ${N}$ (and also “tropicalising” all the estimates), and then taking similar limit superiors to eliminate some other less important parameters, until one is left with a simple linear programming problem (which, among other things, could be amenable to computer-assisted proving techniques). This method is analogous to that of passing to a simpler asymptotic limit object in many other areas of mathematics (for instance using the Furstenberg correspondence principle to pass from a combinatorial problem to an ergodic theory problem, as discussed in this previous post). We use the limit superior exclusively in this post, but many of the arguments here would also apply with one of the other generalised limit functionals discussed in this previous post, such as ultrafilter limits.

For instance, if ${\eta}$ is the upper exponent of a quantity ${D(N)}$ of polynomial size obeying (4), then by comparing the upper exponents of both sides of (4) one arrives at the scalar inequality

$\displaystyle \eta \leq \frac{1}{2} \eta$

from which it is immediate that ${\eta \leq 0}$, giving the required subpolynomial upper bound. Notice how the passage to upper exponents converts the ${\lessapprox}$ estimate to a simpler inequality ${\leq}$.

Exercise 9 Repeat Exercise 7 using this method.

Similarly, given the quantities ${D(N,M)}$ obeying the axioms (5), (6), (7), and assuming that ${D(N)}$ is of polynomial size (which is easily verified for the application at hand), we see that for any real numbers ${a, u \geq 0}$, the quantity ${D(N^a,N^u)}$ is also of polynomial size and hence has some upper exponent ${\eta(a,u)}$; meanwhile ${D(N)}$ itself has some upper exponent ${\eta}$. By reparameterising we have the homogeneity

$\displaystyle \eta(\lambda a, \lambda u) = \lambda \eta(a,u)$

for any ${\lambda \geq 0}$. Also, comparing the upper exponents of both sides of the axioms (5), (6), (7) we arrive at the inequalities

$\displaystyle \eta(1,u) = \eta + O(u)$

$\displaystyle \eta(a_1+a_2,u) \leq \eta(a_1,u) + \eta(a_2,u)$

$\displaystyle \eta(1,1) \leq 0.$

For any natural number ${k}$, the third inequality combined with homogeneity gives ${\eta(1/k,1/k) \leq 0}$, which when combined with the second inequality gives ${\eta(1,1/k) \leq k \eta(1/k,1/k) \leq 0}$, which on combination with the first estimate gives ${\eta \leq O(1/k)}$. Sending ${k}$ to infinity we obtain ${\eta \leq 0}$ as required.

Now suppose that ${D_p(N)}$, ${A_{p,u,\varepsilon}(N)}$ obey the axioms (8), (9), (10). For any fixed ${u,\varepsilon}$, the quantity ${A_{p,u,\varepsilon}(N)}$ is of polynomial size (thanks to (9) and the polynomial size of ${D_p}$), and hence has some upper exponent ${\eta(u,\varepsilon)}$; similarly ${D_p(N)}$ has some upper exponent ${\eta}$. (Actually, strictly speaking our axioms only give an upper bound on ${A_{p,u,\varepsilon}}$ so we have to temporarily admit the possibility that ${\eta(u,\varepsilon)=-\infty}$, though this will soon be eliminated anyway.) Taking upper exponents of all the axioms we then conclude that

$\displaystyle \eta \leq \max( (1-\varepsilon) \eta, \eta(u,\varepsilon) + O(\varepsilon) + O(u) ) \ \ \ \ \ (20)$

$\displaystyle \eta(u,\varepsilon) \leq \eta + O(\varepsilon) + O(u)$

$\displaystyle \eta(u,\varepsilon) \leq \frac{1}{2} \eta(2u,\varepsilon) + \frac{1}{2} (1-u) \eta + O(\varepsilon)$

for all ${0 \leq u \leq 1}$ and ${0 \leq \varepsilon \leq 1}$.

Assume for contradiction that ${\eta>0}$. Then ${(1-\varepsilon) \eta < \eta}$, and so the statement (20) simplifies to

$\displaystyle \eta \leq \eta(u,\varepsilon) + O(\varepsilon) + O(u).$

At this point we can eliminate the role of ${\varepsilon}$ and simplify the system by taking a second limit superior. If we write

$\displaystyle \eta(u) := \limsup_{\varepsilon \rightarrow 0} \eta(u,\varepsilon)$

then on taking limit superiors of the previous inequalities we conclude that

$\displaystyle \eta(u) \leq \eta + O(u)$

$\displaystyle \eta(u) \leq \frac{1}{2} \eta(2u) + \frac{1}{2} (1-u) \eta \ \ \ \ \ (21)$

$\displaystyle \eta \leq \eta(u) + O(u)$

for all ${u}$; in particular ${\eta(u) = \eta + O(u)}$. We take advantage of this by taking a further limit superior (or “upper derivative”) in the limit ${u \rightarrow 0}$ to eliminate the role of ${u}$ and simplify the system further. If we define

$\displaystyle \alpha := \limsup_{u \rightarrow 0^+} \frac{\eta(u)-\eta}{u},$

so that ${\alpha}$ is the best constant for which ${\eta(u) \leq \eta + \alpha u + o(u)}$ as ${u \rightarrow 0}$, then ${\alpha}$ is finite, and by inserting this “Taylor expansion” into the right-hand side of (21) we conclude that

$\displaystyle \alpha \leq \alpha - \frac{1}{2} \eta.$

This leads to a contradiction when ${\eta>0}$, and hence ${\eta \leq 0}$ as desired.
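For the record, here is a sketch of the substitution step, written along a sequence ${u_n \rightarrow 0^+}$ that attains the limit superior defining ${\alpha}$ (the ${o(\cdot)}$ bookkeeping is kept informal):

```latex
% Along u_n -> 0 attaining the limsup: eta(u_n) = eta + alpha u_n + o(u_n),
% while by definition of alpha one also has eta(2u_n) <= eta + 2 alpha u_n + o(u_n).
% Inserting both into (21):
\eta + \alpha u_n + o(u_n) = \eta(u_n)
  \leq \tfrac{1}{2} \eta(2u_n) + \tfrac{1}{2}(1-u_n)\eta
  \leq \tfrac{1}{2}\bigl(\eta + 2\alpha u_n\bigr) + \tfrac{1}{2}(1-u_n)\eta + o(u_n)
  = \eta + \alpha u_n - \tfrac{1}{2} \eta u_n + o(u_n).
% Cancelling eta + alpha u_n and dividing by u_n > 0, then letting n -> infty:
0 \leq -\tfrac{1}{2}\eta, \qquad \text{i.e.} \qquad \alpha \leq \alpha - \tfrac{1}{2}\eta.
```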

Exercise 10 Redo Exercise 8 using this method.

The same strategy now clarifies how to proceed with the more complicated system of quantities ${A_{t,q,s,\varepsilon}(N)}$ obeying the axioms (13)–(19) with ${D_p(N)}$ of polynomial size. Let ${\eta}$ be the upper exponent of ${D_p(N)}$. From (14) we see that for fixed ${t,q,s,\varepsilon}$, each ${A_{t,q,s,\varepsilon}(N)}$ is also of polynomial size (at least in upper bound) and so has some upper exponent ${a( t,q,s,\varepsilon)}$ (which for now we can permit to be ${-\infty}$). Taking upper exponents of all the various axioms we can now eliminate ${N}$ and arrive at the simpler axioms

$\displaystyle \eta \leq \max( (1-\varepsilon) \eta, a(t,q,s,\varepsilon) + O(\varepsilon) + O(q) + O(s) )$

$\displaystyle a(t,q,s,\varepsilon) \leq \eta + O(\varepsilon) + O(q) + O(s)$

$\displaystyle a(t_0,q,s,\varepsilon) \leq a(t_1,q,s,\varepsilon) + O(\varepsilon)$

$\displaystyle a(t_\theta,q,s,\varepsilon) \leq (1-\theta) a(t_0,q,s,\varepsilon) + \theta a(t_1,q,s,\varepsilon) + O(\varepsilon)$

$\displaystyle a(d(d+1),q,s,\varepsilon) \leq (1-q) \eta + O(\varepsilon)$

for all ${0 \leq q,s \leq 1}$, ${1 \leq t \leq \infty}$, ${1 \leq t_0 \leq t_1 \leq \infty}$ and ${0 \leq \theta \leq 1}$, with the lower dimensional decoupling inequality

$\displaystyle a(k(k+1),q,s,\varepsilon) \leq a(k(k+1),s/k,s,\varepsilon) + O(\varepsilon)$

for ${1 \leq k \leq d-1}$ and ${q \leq s/k}$, and the multilinear Kakeya inequality

$\displaystyle a(k(d+1),q,kq,\varepsilon) \leq a(k(d+1),q,(k+1)q,\varepsilon)$

for ${1 \leq k \leq d-1}$ and ${0 \leq q \leq 1}$.

As before, if we assume for sake of contradiction that ${\eta>0}$ then the first inequality simplifies to

$\displaystyle \eta \leq a(t,q,s,\varepsilon) + O(\varepsilon) + O(q) + O(s).$

We can then again eliminate the role of ${\varepsilon}$ by taking a second limit superior as ${\varepsilon \rightarrow 0}$, introducing

$\displaystyle a(t,q,s) := \limsup_{\varepsilon \rightarrow 0} a(t,q,s,\varepsilon)$

and thus getting the simplified axiom system

$\displaystyle a(t,q,s) \leq \eta + O(q) + O(s) \ \ \ \ \ (22)$

$\displaystyle a(t_0,q,s) \leq a(t_1,q,s)$

$\displaystyle a(t_\theta,q,s) \leq (1-\theta) a(t_0,q,s) + \theta a(t_1,q,s)$

$\displaystyle a(d(d+1),q,s) \leq (1-q) \eta$

$\displaystyle \eta \leq a(t,q,s) + O(q) + O(s) \ \ \ \ \ (23)$

and also

$\displaystyle a(k(k+1),q,s) \leq a(k(k+1),s/k,s)$

for ${1 \leq k \leq d-1}$ and ${q \leq s/k}$, and

$\displaystyle a(k(d+1),q,kq) \leq a(k(d+1),q,(k+1)q)$

for ${1 \leq k \leq d-1}$ and ${0 \leq q \leq 1}$.

In view of the latter two estimates it is natural to restrict attention to the quantities ${a(t,q,kq)}$ for ${1 \leq k \leq d+1}$. By the axioms (22), (23), these quantities are of the form ${\eta + O(q)}$. We can then eliminate the role of ${q}$ by taking another limit superior

$\displaystyle \alpha_k(t) := \limsup_{q \rightarrow 0} \frac{a(t,q,kq)-\eta}{q}.$

The axioms now simplify to

$\displaystyle \alpha_k(t) = O(1)$

$\displaystyle \alpha_k(t_0) \leq \alpha_k(t_1) \ \ \ \ \ (24)$

$\displaystyle \alpha_k(t_\theta) \leq (1-\theta) \alpha_k(t_0) + \theta \alpha_k(t_1) \ \ \ \ \ (25)$

$\displaystyle \alpha_k(d(d+1)) \leq -\eta \ \ \ \ \ (26)$

and

$\displaystyle \alpha_j(k(k+1)) \leq \frac{j}{k} \alpha_k(k(k+1)) \ \ \ \ \ (27)$

for ${1 \leq k \leq d-1}$ and ${k \leq j \leq d}$, and

$\displaystyle \alpha_k(k(d+1)) \leq \alpha_{k+1}(k(d+1)) \ \ \ \ \ (28)$

for ${1 \leq k \leq d-1}$.

It turns out that the inequality (27) is strongest when ${j=k+1}$, thus

$\displaystyle \alpha_{k+1}(k(k+1)) \leq \frac{k+1}{k} \alpha_k(k(k+1)) \ \ \ \ \ (29)$

for ${1 \leq k \leq d-1}$.

From the last two inequalities (28), (29) we see that a special role is likely to be played by the exponents

$\displaystyle \beta_k := \alpha_k(k(k-1))$

for ${2 \leq k \leq d}$ and

$\displaystyle \gamma_k := \alpha_k(k(d+1))$

for ${1 \leq k \leq d}$. From the convexity (25) and a brief calculation we have

$\displaystyle \alpha_{k+1}(k(d+1)) \leq \frac{1}{d-k+1} \alpha_{k+1}(k(k+1))$

$\displaystyle + \frac{d-k}{d-k+1} \alpha_{k+1}((k+1)(d+1)),$

for ${1 \leq k \leq d-1}$, hence from (28) we have

$\displaystyle \gamma_k \leq \frac{1}{d-k+1} \beta_{k+1} + \frac{d-k}{d-k+1} \gamma_{k+1}. \ \ \ \ \ (30)$

Similarly, from (25) and a brief calculation we have

$\displaystyle \alpha_k(k(k+1)) \leq \frac{(d-k)(k-1)}{(k+1)(d-k+2)} \alpha_k( k(k-1))$

$\displaystyle + \frac{2(d+1)}{(k+1)(d-k+2)} \alpha_k(k(d+1))$

for ${2 \leq k \leq d-1}$; the same bound holds for ${k=1}$ if we drop the term with the ${(k-1)}$ factor, thanks to (24). Thus from (29) we have

$\displaystyle \beta_{k+1} \leq \frac{(d-k)(k-1)}{k(d-k+2)} \beta_k + \frac{2(d+1)}{k(d-k+2)} \gamma_k, \ \ \ \ \ (31)$

for ${1 \leq k \leq d-1}$, again with the understanding that we omit the first term on the right-hand side when ${k=1}$. Finally, (26) gives

$\displaystyle \gamma_d \leq -\eta.$
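Both “brief calculations” can be verified mechanically. The sketch below assumes (as in Hölder interpolation) that the exponent ${t_\theta}$ in (25) is given by ${1/t_\theta = (1-\theta)/t_0 + \theta/t_1}$, and uses exact rational arithmetic to confirm the coefficients in (30) and (31) over a range of test dimensions:

```python
from fractions import Fraction as F

def theta(t, t0, t1):
    """Solve 1/t = (1-theta)/t0 + theta/t1 for the Holder weight theta."""
    return (F(1, t) - F(1, t0)) / (F(1, t1) - F(1, t0))

for d in range(3, 10):
    # (30): interpolate t = k(d+1) between t0 = k(k+1) and t1 = (k+1)(d+1).
    for k in range(1, d):
        th = theta(k * (d + 1), k * (k + 1), (k + 1) * (d + 1))
        assert 1 - th == F(1, d - k + 1) and th == F(d - k, d - k + 1)
    # (31): interpolate t = k(k+1) between t0 = k(k-1) and t1 = k(d+1).
    for k in range(2, d):
        th = theta(k * (k + 1), k * (k - 1), k * (d + 1))
        assert 1 - th == F((d - k) * (k - 1), (k + 1) * (d - k + 2))
        assert th == F(2 * (d + 1), (k + 1) * (d - k + 2))
print("interpolation weights for (30), (31) verified")
```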

Let us write out the system of equations we have obtained in full:

$\displaystyle \beta_2 \leq 2 \gamma_1 \ \ \ \ \ (32)$

$\displaystyle \gamma_1 \leq \frac{1}{d} \beta_2 + \frac{d-1}{d} \gamma_2 \ \ \ \ \ (33)$

$\displaystyle \beta_3 \leq \frac{d-2}{2d} \beta_2 + \frac{2(d+1)}{2d} \gamma_2 \ \ \ \ \ (34)$

$\displaystyle \gamma_2 \leq \frac{1}{d-1} \beta_3 + \frac{d-2}{d-1} \gamma_3 \ \ \ \ \ (35)$

$\displaystyle \beta_4 \leq \frac{2(d-3)}{3(d-1)} \beta_3 + \frac{2(d+1)}{3(d-1)} \gamma_3$

$\displaystyle \gamma_3 \leq \frac{1}{d-2} \beta_4 + \frac{d-3}{d-2} \gamma_4$

$\displaystyle ...$

$\displaystyle \beta_d \leq \frac{d-2}{3(d-1)} \beta_{d-1} + \frac{2(d+1)}{3(d-1)} \gamma_{d-1}$

$\displaystyle \gamma_{d-1} \leq \frac{1}{2} \beta_d + \frac{1}{2} \gamma_d \ \ \ \ \ (36)$

$\displaystyle \gamma_d \leq -\eta. \ \ \ \ \ (37)$

We can then eliminate the variables one by one. Inserting (33) into (32) we obtain

$\displaystyle \beta_2 \leq \frac{2}{d} \beta_2 + \frac{2(d-1)}{d} \gamma_2$

which simplifies to

$\displaystyle \beta_2 \leq \frac{2(d-1)}{d-2} \gamma_2.$

Inserting this into (34) gives

$\displaystyle \beta_3 \leq 2 \gamma_2$

which when combined with (35) gives

$\displaystyle \beta_3 \leq \frac{2}{d-1} \beta_3 + \frac{2(d-2)}{d-1} \gamma_3$

which simplifies to

$\displaystyle \beta_3 \leq \frac{2(d-2)}{d-3} \gamma_3.$

Iterating this we get

$\displaystyle \beta_{k+1} \leq 2 \gamma_k$

for all ${1 \leq k \leq d-1}$ and

$\displaystyle \beta_k \leq \frac{2(d-k+1)}{d-k} \gamma_k$

for all ${2 \leq k \leq d-1}$. In particular

$\displaystyle \beta_d \leq 2 \gamma_{d-1}$

which on insertion into (36), (37) gives

$\displaystyle \beta_d \leq \beta_d - \eta$

which is absurd if ${\eta>0}$. Thus ${\eta \leq 0}$ and so ${D_p(N)}$ has a subpolynomial upper bound, as required.
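The elimination just performed can be automated; the following sketch (exact arithmetic; the tested dimensions are arbitrary) verifies that inserting each bound into the next always reproduces ${\beta_{k+1} \leq 2\gamma_k}$ and ${\beta_k \leq \frac{2(d-k+1)}{d-k} \gamma_k}$:

```python
from fractions import Fraction as F

def check_elimination(d):
    """Run the variable elimination from the system (32)-(37) in exact
    arithmetic: c is the current coefficient in beta_k <= c * gamma_k."""
    c = F(2 * (d - 1), d - 2)        # from inserting (33) into (32)
    for k in range(2, d):
        # (31): beta_{k+1} <= A*beta_k + B*gamma_k; insert beta_k <= c*gamma_k.
        A = F((d - k) * (k - 1), k * (d - k + 2))
        B = F(2 * (d + 1), k * (d - k + 2))
        assert A * c + B == 2        # i.e. beta_{k+1} <= 2 * gamma_k
        if k + 1 <= d - 1:
            # (30) then converts this into beta_{k+1} <= c' * gamma_{k+1}
            # with c' = 2(d-k)/(d-k-1) = 2(d-j+1)/(d-j) for j = k+1.
            c = F(2 * (d - k), d - k - 1)
    return True                      # chain ends with beta_d <= 2 * gamma_{d-1}

for d in range(3, 12):
    assert check_elimination(d)
print("elimination chain verified for 3 <= d <= 11")
```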

Remark 11 (This observation is essentially due to Heath-Brown.) If we let ${x}$ denote the column vector with entries ${\beta_2,\dots,\beta_d,\gamma_1,\dots,\gamma_{d-1}}$ (arranged in whatever order one pleases), then the above system of inequalities (32)–(36) (using (37) to handle the appearance of ${\gamma_d}$ in (36)) reads

$\displaystyle x \leq Px + \eta v \ \ \ \ \ (38)$

for some explicit square matrix ${P}$ with non-negative coefficients, where the inequality denotes pointwise domination, and ${v}$ is an explicit vector with non-positive coefficients that reflects the effect of (37). It is possible to show (using (24), (26)) that all the coefficients of ${x}$ are negative (assuming the counterfactual situation ${\eta>0}$ of course). Then we can iterate this to obtain

$\displaystyle x \leq P^k x + \eta \sum_{j=0}^{k-1} P^j v$

for any natural number ${k}$. This would lead to an immediate contradiction if the Perron-Frobenius eigenvalue of ${P}$ exceeds ${1}$ because ${P^k x}$ would now grow exponentially in magnitude; this is typically the situation for “non-endpoint” applications such as proving decoupling inequalities away from the endpoint. In the endpoint situation discussed above, the Perron-Frobenius eigenvalue is ${1}$, with ${v}$ having a non-trivial projection to this eigenspace, so the sum ${\sum_{j=0}^{k-1} \eta P^j v}$ now grows at least linearly in magnitude, which still gives the required contradiction for any ${\eta>0}$. So it is important to gather “enough” inequalities so that the relevant matrix ${P}$ has a Perron-Frobenius eigenvalue greater than or equal to ${1}$ (and in the latter case one also needs ${v}$ to have a non-trivial projection onto the eigenspace of the eigenvalue ${1}$). More specifically, if ${\rho}$ is the spectral radius of ${P}$ and ${w^T}$ is a left Perron-Frobenius eigenvector, that is to say a non-negative vector, not identically zero, such that ${w^T P = \rho w^T}$, then by taking inner products of (38) with ${w}$ we obtain

$\displaystyle w^T x \leq \rho w^T x + \eta w^T v.$

If ${\rho > 1}$ this leads to a contradiction since ${w^T x}$ is negative and ${w^T v}$ is non-positive. When ${\rho = 1}$ one still gets a contradiction as long as ${w^T v}$ is strictly negative.

Remark 12 (This calculation is essentially due to Guo and Zorin-Kranich.) Here is a concrete application of the Perron-Frobenius strategy outlined above to the system of inequalities (32)–(37). Consider the weighted sum

$\displaystyle W := \sum_{k=2}^d (k-1) \beta_k + \sum_{k=1}^{d-1} 2k \gamma_k;$

I had secretly calculated the weights ${k-1}$, ${2k}$ as coming from the left Perron-Frobenius eigenvector of the matrix ${P}$ described in the previous remark, but for this calculation the precise provenance of the weights is not relevant. Applying the inequalities (31), (30) we see that ${W}$ is bounded by

$\displaystyle \sum_{k=2}^d (k-1) (\frac{(d-k+1)(k-2)}{(k-1)(d-k+3)} \beta_{k-1} + \frac{2(d+1)}{(k-1)(d-k+3)} \gamma_{k-1})$

$\displaystyle + \sum_{k=1}^{d-1} 2k(\frac{1}{d-k+1} \beta_{k+1} + \frac{d-k}{d-k+1} \gamma_{k+1})$

(with the convention that the ${\beta_1}$ term is absent); collecting the coefficient of each ${\beta_k}$ and ${\gamma_k}$, this simplifies after some calculation to the bound

$\displaystyle W \leq W + (d-1) \gamma_d,$

so that ${\gamma_d \geq 0}$; but (37) gives ${\gamma_d \leq -\eta}$, so this furnishes the required contradiction when ${\eta > 0}$.
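The coefficient bookkeeping in this remark can also be checked mechanically; the sketch below (exact arithmetic; the tested dimensions are arbitrary) applies (31) and (30) to ${W}$ and confirms that every weight of ${W}$ is reproduced exactly, with only a non-negative multiple of ${\gamma_d}$ left over (the code computes its coefficient):

```python
from fractions import Fraction as F
from collections import defaultdict

def gamma_d_coefficient(d):
    """Substitute (31) for each beta_k and (30) for each gamma_k into
    W = sum_{k=2}^d (k-1) beta_k + sum_{k=1}^{d-1} 2k gamma_k, check that
    the weights of W are reproduced, and return the leftover gamma_d weight."""
    rhs = defaultdict(F)
    for k in range(2, d + 1):                  # weight (k-1) times (31)
        denom = (k - 1) * (d - k + 3)
        if k >= 3:                             # the beta_1 term is absent
            rhs[('b', k - 1)] += (k - 1) * F((d - k + 1) * (k - 2), denom)
        rhs[('g', k - 1)] += (k - 1) * F(2 * (d + 1), denom)
    for k in range(1, d):                      # weight 2k times (30)
        rhs[('b', k + 1)] += 2 * k * F(1, d - k + 1)
        rhs[('g', k + 1)] += 2 * k * F(d - k, d - k + 1)
    for k in range(2, d + 1):
        assert rhs[('b', k)] == k - 1          # beta weights reproduced
    for k in range(1, d):
        assert rhs[('g', k)] == 2 * k          # gamma weights reproduced
    return rhs[('g', d)]

for d in range(2, 10):
    print(f"d = {d}: W <= W + {gamma_d_coefficient(d)} * gamma_d")
```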

Exercise 13

• (i) Extend the above analysis to also cover the non-endpoint case ${d^2 < p < d(d+1)}$. (One will need to establish the claim ${\alpha_k(t) \leq -\eta}$ for ${t \leq p}$.)
• (ii) Modify the argument to deal with the remaining cases ${2 < p \leq d^2}$ by dropping some of the steps.

Given any finite collection of elements ${(f_i)_{i \in I}}$ in some Banach space ${X}$, the triangle inequality tells us that

$\displaystyle \| \sum_{i \in I} f_i \|_X \leq \sum_{i \in I} \|f_i\|_X.$

However, when the ${f_i}$ all “oscillate in different ways”, one expects to improve substantially upon the triangle inequality. For instance, if ${X}$ is a Hilbert space and the ${f_i}$ are mutually orthogonal, we have the Pythagorean theorem

$\displaystyle \| \sum_{i \in I} f_i \|_X = (\sum_{i \in I} \|f_i\|_X^2)^{1/2}.$

For sake of comparison, from the triangle inequality and Cauchy-Schwarz one has the general inequality

$\displaystyle \| \sum_{i \in I} f_i \|_X \leq (\# I)^{1/2} (\sum_{i \in I} \|f_i\|_X^2)^{1/2} \ \ \ \ \ (1)$

for any finite collection ${(f_i)_{i \in I}}$ in any Banach space ${X}$, where ${\# I}$ denotes the cardinality of ${I}$. Thus orthogonality in a Hilbert space yields “square root cancellation”, saving a factor of ${(\# I)^{1/2}}$ or so over the trivial bound coming from the triangle inequality.

More generally, let us somewhat informally say that a collection ${(f_i)_{i \in I}}$ exhibits decoupling in ${X}$ if one has the Pythagorean-like inequality

$\displaystyle \| \sum_{i \in I} f_i \|_X \ll_\varepsilon (\# I)^\varepsilon (\sum_{i \in I} \|f_i\|_X^2)^{1/2}$

for any ${\varepsilon>0}$, thus one obtains almost the full square root cancellation in the ${X}$ norm. The theory of almost orthogonality can then be viewed as the theory of decoupling in Hilbert spaces such as ${L^2({\bf R}^n)}$. In ${L^p}$ spaces for ${p < 2}$ one usually does not expect this sort of decoupling; for instance, if the ${f_i}$ are disjointly supported one has

$\displaystyle \| \sum_{i \in I} f_i \|_{L^p} = (\sum_{i \in I} \|f_i\|_{L^p}^p)^{1/p}$

and the right-hand side can be much larger than ${(\sum_{i \in I} \|f_i\|_{L^p}^2)^{1/2}}$ when ${p < 2}$. At the opposite extreme, one usually does not expect to get decoupling in ${L^\infty}$, since one could conceivably align the ${f_i}$ to all attain a maximum magnitude at the same location with the same phase, at which point the triangle inequality in ${L^\infty}$ becomes sharp.

However, in some cases one can get decoupling for certain ${2 < p < \infty}$. For instance, suppose we are in ${L^4}$, and that ${f_1,\dots,f_N}$ are bi-orthogonal in the sense that the products ${f_i f_j}$ for ${1 \leq i < j \leq N}$ are pairwise orthogonal in ${L^2}$. Then we have

$\displaystyle \| \sum_{i = 1}^N f_i \|_{L^4}^2 = \| (\sum_{i=1}^N f_i)^2 \|_{L^2}$

$\displaystyle = \| \sum_{1 \leq i,j \leq N} f_i f_j \|_{L^2}$

$\displaystyle \ll (\sum_{1 \leq i,j \leq N} \|f_i f_j \|_{L^2}^2)^{1/2}$

$\displaystyle = \| (\sum_{1 \leq i,j \leq N} |f_i f_j|^2)^{1/2} \|_{L^2}$

$\displaystyle = \| \sum_{i=1}^N |f_i|^2 \|_{L^2}$

$\displaystyle \leq \sum_{i=1}^N \| |f_i|^2 \|_{L^2}$

$\displaystyle = \sum_{i=1}^N \|f_i\|_{L^4}^2$

giving decoupling in ${L^4}$. (Similarly if each of the ${f_i f_j}$ is orthogonal to all but ${O_\varepsilon( N^\varepsilon )}$ of the other ${f_{i'} f_{j'}}$.) A similar argument also gives ${L^6}$ decoupling when one has tri-orthogonality (with the ${f_i f_j f_k}$ mostly orthogonal to each other), and so forth. As a slight variant, Khintchine’s inequality also indicates that decoupling should occur for any fixed ${2 < p < \infty}$ if one multiplies each of the ${f_i}$ by an independent random sign ${\epsilon_i \in \{-1,+1\}}$.

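
As a concrete instance of this ${L^4}$ argument (a toy example of my own, not from the text): take ${f_i(x) = e(2^i x)}$ on ${[0,1]}$. The products ${f_i f_j}$ for ${i < j}$ have frequencies ${2^i + 2^j}$, which are all distinct by uniqueness of binary expansions, so bi-orthogonality holds and one can check the decoupling inequality numerically.

```python
import numpy as np

N = 6
x = np.arange(4096) / 4096                   # equispaced grid on [0, 1)
freqs = 2 ** np.arange(N)                    # lacunary frequencies 1, 2, 4, ..., 32
f = np.exp(2j * np.pi * np.outer(freqs, x))  # f_i(x) = e(2^i x), |f_i| = 1

# Grid averages below are exact for trigonometric polynomials of degree < 4096.
F = f.sum(axis=0)
L4_sq = np.mean(np.abs(F) ** 4) ** 0.5       # ||sum f_i||_{L^4}^2 = sqrt(2N^2 - N)
decoupled = sum(np.mean(np.abs(fi) ** 4) ** 0.5 for fi in f)  # sum ||f_i||_{L^4}^2 = N
triangle = float(N ** 2)                     # trivial bound (sum ||f_i||_{L^4})^2
```

Counting solutions of ${2^a + 2^b = 2^c + 2^d}$ gives ${\| \sum f_i \|_{L^4}^2 = \sqrt{2N^2 - N} \approx 8.12}$ here, within a bounded constant of the decoupled expression ${N = 6}$ and far below the triangle-inequality bound ${N^2 = 36}$.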

In recent years, Bourgain and Demeter have been establishing decoupling theorems in ${L^p({\bf R}^n)}$ spaces at various key exponents ${2 < p < \infty}$, in the “restriction theory” setting in which the ${f_i}$ are Fourier transforms of measures supported on different portions of a given surface or curve; this builds upon the earlier decoupling theorems of Wolff. In a recent paper with Guth, they established the following decoupling theorem for the curve ${\gamma({\bf R}) \subset {\bf R}^n}$ parameterised by the polynomial map

$\displaystyle \gamma: t \mapsto (t, t^2, \dots, t^n).$

For any ball ${B = B(x_0,r)}$ in ${{\bf R}^n}$, let ${w_B: {\bf R}^n \rightarrow {\bf R}^+}$ denote the weight

$\displaystyle w_B(x) := \frac{1}{(1 + \frac{|x-x_0|}{r})^{100n}},$

which should be viewed as a smoothed out version of the indicator function ${1_B}$ of ${B}$. In particular, the space ${L^p(w_B) = L^p({\bf R}^n, w_B(x)\ dx)}$ can be viewed as a smoothed out version of the space ${L^p(B)}$. For future reference we observe a fundamental self-similarity of the curve ${\gamma({\bf R})}$: any arc ${\gamma(I)}$ in this curve, with ${I}$ a compact interval, is affinely equivalent to the standard arc ${\gamma([0,1])}$.
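
This affine equivalence can be written down explicitly via the binomial theorem: for the arc ${\gamma([a, a+b])}$ one has ${\gamma(a + bt) = v + M \gamma(t)}$, where ${M_{k,m} = \binom{k}{m} a^{k-m} b^m}$ (a lower triangular matrix) and ${v_k = a^k}$. The following sketch (with arbitrarily chosen ${n, a, b, t}$; the helper `gamma` is my own) verifies this numerically:

```python
import numpy as np
from math import comb

def gamma(t, n):
    """The moment curve gamma(t) = (t, t^2, ..., t^n)."""
    return np.array([t ** k for k in range(1, n + 1)])

n, a, b = 4, 0.3, 0.25          # compare gamma on [a, a+b] with the standard arc
# Binomial theorem: (a+bt)^k = a^k + sum_{m=1}^{k} C(k,m) a^{k-m} b^m t^m,
# i.e. gamma(a + b t) = v + M gamma(t).  (comb(k, m) = 0 for m > k.)
M = np.array([[comb(k, m) * a ** (k - m) * b ** m for m in range(1, n + 1)]
              for k in range(1, n + 1)])
v = np.array([a ** k for k in range(1, n + 1)])

t = 0.7
lhs = gamma(a + b * t, n)
rhs = v + M @ gamma(t, n)
```

The lower triangularity of ${M}$ reflects the "parabolic rescaling" structure of the curve: rescaling ${t}$ by ${b}$ rescales the ${k}$-th coordinate by ${b^k}$ to top order.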

Theorem 1 (Decoupling theorem) Let ${n \geq 1}$. Subdivide the unit interval ${[0,1]}$ into ${N}$ equal subintervals ${I_i}$ of length ${1/N}$, and for each such ${I_i}$, let ${f_i: {\bf R}^n \rightarrow {\bf C}}$ be the Fourier transform

$\displaystyle f_i(x) = \int_{\gamma(I_i)} e(x \cdot \xi)\ d\mu_i(\xi)$

of a finite Borel measure ${\mu_i}$ on the arc ${\gamma(I_i)}$, where ${e(\theta) := e^{2\pi i \theta}}$. Then the ${f_i}$ exhibit decoupling in ${L^{n(n+1)}(w_B)}$ for any ball ${B}$ of radius ${N^n}$.

Orthogonality gives the ${n=1}$ case of this theorem. The bi-orthogonality type arguments sketched earlier only give decoupling in ${L^p}$ up to the range ${2 \leq p \leq 2n}$; the point here is that we can now get the much larger exponent ${p = n(n+1)}$. The ${n=2}$ case of this theorem was previously established by Bourgain and Demeter (who obtained in fact an analogous theorem for any curved hypersurface). The exponent ${n(n+1)}$ (and the radius ${N^n}$) is best possible, as can be seen by the following basic example. If

$\displaystyle f_i(x) := \int_{I_i} e(x \cdot \gamma(\xi)) g_i(\xi)\ d\xi$

where ${g_i}$ is a bump function adapted to ${I_i}$, then standard Fourier-analytic computations show that ${f_i}$ will be comparable to ${1/N}$ on a rectangular box of dimensions ${N \times N^2 \times \dots \times N^n}$ (and thus volume ${N^{n(n+1)/2}}$) centred at the origin, and exhibit decay away from this box, with ${\|f_i\|_{L^{n(n+1)}(w_B)}}$ comparable to

$\displaystyle 1/N \times (N^{n(n+1)/2})^{1/(n(n+1))} = 1/\sqrt{N}.$

On the other hand, ${\sum_{i=1}^N f_i}$ is comparable to ${1}$ on a ball of radius comparable to ${1}$ centred at the origin, so ${\|\sum_{i=1}^N f_i\|_{L^{n(n+1)}(w_B)}}$ is ${\gg 1}$, which is just barely consistent with decoupling (the right-hand side of the decoupling inequality is ${\ll_\varepsilon N^\varepsilon (N \times (1/\sqrt{N})^2)^{1/2} = N^\varepsilon}$). This calculation shows that decoupling will fail if ${n(n+1)}$ is replaced by any larger exponent, and also if the radius of the ball ${B}$ is reduced to be significantly smaller than ${N^n}$.

This theorem has the following consequence of importance in analytic number theory:

Corollary 2 (Vinogradov main conjecture) Let ${s, n, N \geq 1}$ be integers, and let ${\varepsilon > 0}$. Then

$\displaystyle \int_{[0,1]^n} |\sum_{j=1}^N e( j x_1 + j^2 x_2 + \dots + j^n x_n)|^{2s}\ dx_1 \dots dx_n$

$\displaystyle \ll_{\varepsilon,s,n} N^{s+\varepsilon} + N^{2s - \frac{n(n+1)}{2}+\varepsilon}.$

Proof: By the Hölder inequality (and the trivial bound of ${N}$ for the exponential sum), it suffices to treat the critical case ${s = n(n+1)/2}$, that is to say to show that

$\displaystyle \int_{[0,1]^n} |\sum_{j=1}^N e( j x_1 + j^2 x_2 + \dots + j^n x_n)|^{n(n+1)}\ dx_1 \dots dx_n \ll_{\varepsilon,n} N^{\frac{n(n+1)}{2}+\varepsilon}.$

We can rescale this as

$\displaystyle \int_{[0,N] \times [0,N^2] \times \dots \times [0,N^n]} |\sum_{j=1}^N e( x \cdot \gamma(j/N) )|^{n(n+1)}\ dx \ll_{\varepsilon,n} N^{n(n+1)+\varepsilon}.$

As the integrand is periodic along the lattice ${N{\bf Z} \times N^2 {\bf Z} \times \dots \times N^n {\bf Z}}$, this is equivalent to

$\displaystyle \int_{[0,N^n]^n} |\sum_{j=1}^N e( x \cdot \gamma(j/N) )|^{n(n+1)}\ dx \ll_{\varepsilon,n} N^{\frac{n(n+1)}{2}+n^2+\varepsilon}.$

The left-hand side may be bounded by ${\ll \| \sum_{j=1}^N f_j \|_{L^{n(n+1)}(w_B)}^{n(n+1)}}$, where ${B := B(0,N^n)}$ and ${f_j(x) := e(x \cdot \gamma(j/N))}$. Since

$\displaystyle \| f_j \|_{L^{n(n+1)}(w_B)} \ll (N^{n^2})^{\frac{1}{n(n+1)}},$

the decoupling theorem gives ${\| \sum_{j=1}^N f_j \|_{L^{n(n+1)}(w_B)} \ll_{\varepsilon,n} N^\varepsilon \times N^{1/2} \times (N^{n^2})^{\frac{1}{n(n+1)}}}$, and raising this to the power ${n(n+1)}$ (and adjusting ${\varepsilon}$) gives the claim. $\Box$

Using the Plancherel formula, one may equivalently (when ${s}$ is an integer) write the Vinogradov main conjecture in terms of the number of solutions ${j_1,\dots,j_s,k_1,\dots,k_s \in \{1,\dots,N\}}$ to the system of equations

$\displaystyle j_1^i + \dots + j_s^i = k_1^i + \dots + k_s^i \quad \hbox{for all } i=1,\dots,n,$

but we will not use this formulation here.
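
For small parameters this solution count can be computed directly. The sketch below (the function name and the sample parameters are my own choices) bins ${s}$-tuples by their vector of power sums and sums the squares of the bin sizes, which is exactly the combinatorial content of the Plancherel identity:

```python
from collections import Counter
from itertools import product

def vinogradov_count(N, s, n):
    """Number of solutions in {1,...,N}^{2s} of
    j_1^i + ... + j_s^i = k_1^i + ... + k_s^i for all i = 1,...,n,
    computed by binning s-tuples on their power-sum vector."""
    bins = Counter(tuple(sum(j ** i for j in t) for i in range(1, n + 1))
                   for t in product(range(1, N + 1), repeat=s))
    return sum(c * c for c in bins.values())

# Critical case for n = 2 is s = n(n+1)/2 = 3, where the main conjecture
# predicts a count of N^{3+eps} (in fact comparable to N^3 log N for n = 2).
J = vinogradov_count(8, 3, 2)
```

As a quick correctness check: when ${s = n = 2}$ the two equations force the multisets ${\{j_1,j_2\}}$ and ${\{k_1,k_2\}}$ to agree, so the count is exactly ${2N^2 - N}$; and the diagonal solutions ${(j_1,\dots,j_s) = (k_1,\dots,k_s)}$ always contribute at least ${N^s}$.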

A history of the Vinogradov main conjecture may be found in this survey of Wooley; prior to the Bourgain-Demeter-Guth theorem, the conjecture was solved completely for ${n \leq 3}$, or for ${n > 3}$ and ${s}$ either below ${n(n+1)/2 - n/3 + O(n^{2/3})}$ or above ${n(n-1)}$, with the bulk of recent progress coming from the efficient congruencing technique of Wooley. It has numerous applications to exponential sums, Waring’s problem, and the zeta function; to give just one application, the main conjecture implies the predicted asymptotic for the number of ways to express a large number as the sum of ${23}$ fifth powers (the previous best result required ${28}$ fifth powers). The Bourgain-Demeter-Guth approach to the Vinogradov main conjecture, based on decoupling, is ostensibly very different from the efficient congruencing technique, which relies heavily on the arithmetic structure of the problem. However, it appears (as I have been told from second-hand sources) that the two methods are actually closely related, with the former being a sort of “Archimedean” version of the latter (with the intervals ${I_i}$ in the decoupling theorem being analogous to congruence classes in the efficient congruencing method); hopefully there will be some future work making this connection more precise. One advantage of the decoupling approach is that it generalises to non-arithmetic settings in which the set ${\{1,\dots,N\}}$ that ${j}$ is drawn from is replaced by some other similarly separated set of real numbers. (A random thought – could this allow the Vinogradov-Korobov bounds on the zeta function to extend to Beurling zeta functions?)

Below the fold we sketch the Bourgain-Demeter-Guth argument proving Theorem 1.

I thank Jean Bourgain and Andrew Granville for helpful discussions.