
This set of notes focuses on the restriction problem in Fourier analysis. Introduced by Elias Stein in the 1970s, the restriction problem is a key model problem for understanding more general oscillatory integral operators, and has turned out to be connected to many questions in geometric measure theory, harmonic analysis, combinatorics, number theory, and PDE. Only partial results on the problem are known, but these partial results have already proven to be very useful and influential in many applications.
We work in a Euclidean space ${{\bf R}^d}$. Recall that ${L^p({\bf R}^d)}$ is the space of ${p^{th}}$-power integrable functions ${f: {\bf R}^d \rightarrow {\bf C}}$, quotiented out by almost everywhere equivalence, with the usual modifications when ${p=\infty}$. If ${f \in L^1({\bf R}^d)}$ then the Fourier transform ${\hat f: {\bf R}^d \rightarrow {\bf C}}$ will be defined in this course by the formula

$\displaystyle \hat f(\xi) := \int_{{\bf R}^d} f(x) e^{-2\pi i x \cdot \xi}\ dx. \ \ \ \ \ (1)$

From the dominated convergence theorem we see that ${\hat f}$ is a continuous function; from the Riemann-Lebesgue lemma we see that it goes to zero at infinity. Thus ${\hat f}$ lies in the space ${C_0({\bf R}^d)}$ of continuous functions that go to zero at infinity, which is a subspace of ${L^\infty({\bf R}^d)}$. Indeed, from the triangle inequality it is obvious that

$\displaystyle \|\hat f\|_{L^\infty({\bf R}^d)} \leq \|f\|_{L^1({\bf R}^d)}. \ \ \ \ \ (2)$

If ${f \in L^1({\bf R}^d) \cap L^2({\bf R}^d)}$, then Plancherel’s theorem tells us that we have the identity

$\displaystyle \|\hat f\|_{L^2({\bf R}^d)} = \|f\|_{L^2({\bf R}^d)}. \ \ \ \ \ (3)$

Because of this, there is a unique way to extend the Fourier transform ${f \mapsto \hat f}$ from ${L^1({\bf R}^d) \cap L^2({\bf R}^d)}$ to ${L^2({\bf R}^d)}$, in such a way that it becomes a unitary map from ${L^2({\bf R}^d)}$ to itself. By abuse of notation we continue to denote this extension of the Fourier transform by ${f \mapsto \hat f}$. Strictly speaking, this extension is no longer defined in a pointwise sense by the formula (1) (indeed, the integral on the RHS ceases to be absolutely integrable once ${f}$ leaves ${L^1({\bf R}^d)}$); we will return to the (surprisingly difficult) question of whether pointwise convergence continues to hold (at least in an almost everywhere sense) later in this course, when we discuss Carleson’s theorem. On the other hand, the formula (1) remains valid in the sense of distributions, and in practice most of the identities and inequalities one can show about the Fourier transform of “nice” functions (e.g., functions in ${L^1({\bf R}^d) \cap L^2({\bf R}^d)}$, or in the Schwartz class ${{\mathcal S}({\bf R}^d)}$, or the test function class ${C^\infty_c({\bf R}^d)}$) can be extended to functions in “rough” function spaces such as ${L^2({\bf R}^d)}$ by standard limiting arguments.
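As a quick numerical sanity check of (2) and (3) (not part of the notes proper; the grid sizes and tolerances below are ad hoc choices for illustration), one can discretise the integral (1) in dimension ${d=1}$ for the Gaussian ${e^{-\pi x^2}}$, which is well known to be its own Fourier transform:

```python
import numpy as np

# Discretize the Fourier integral (1) in dimension d = 1 for the Gaussian
# f(x) = exp(-pi x^2), which equals its own Fourier transform.
dx = 1e-3
x = np.arange(-20, 20, dx)
f = np.exp(-np.pi * x**2)

def fourier(xi):
    # Riemann-sum approximation of (1)
    return np.sum(f * np.exp(-2j * np.pi * x * xi)) * dx

xis = np.linspace(-3, 3, 61)
fhat = np.array([fourier(xi) for xi in xis])

# the numerical transform agrees with the exact answer exp(-pi xi^2)
assert np.max(np.abs(fhat - np.exp(-np.pi * xis**2))) < 1e-6

# (2): sup |fhat| <= ||f||_{L^1} (both sides are 1 here)
l1_norm = np.sum(np.abs(f)) * dx
assert np.max(np.abs(fhat)) <= l1_norm + 1e-9

# (3): Plancherel, with both L^2 norms computed on the sampled grids
l2_f = np.sqrt(np.sum(np.abs(f)**2) * dx)
l2_fhat = np.sqrt(np.sum(np.abs(fhat)**2) * (xis[1] - xis[0]))
assert abs(l2_f - l2_fhat) < 1e-3
```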
By (2), (3), and the Riesz-Thorin interpolation theorem, we also obtain the Hausdorff-Young inequality

$\displaystyle \|\hat f\|_{L^{p'}({\bf R}^d)} \leq \|f\|_{L^p({\bf R}^d)} \ \ \ \ \ (4)$

for all ${1 \leq p \leq 2}$ and ${f \in L^1({\bf R}^d) \cap L^2({\bf R}^d)}$, where ${2 \leq p' \leq \infty}$ is the dual exponent to ${p}$, defined by the usual formula ${\frac{1}{p} + \frac{1}{p'} = 1}$. (One can improve this inequality by a constant factor, with the optimal constant worked out by Beckner, but the focus in these notes will not be on optimal constants.) As a consequence, the Fourier transform can also be uniquely extended as a continuous linear map from ${L^p({\bf R}^d) \rightarrow L^{p'}({\bf R}^d)}$. (The situation with ${p>2}$ is much worse; see below the fold.)
The restriction problem asks, for a given exponent ${1 \leq p \leq 2}$ and a subset ${S}$ of ${{\bf R}^d}$, whether it is possible to meaningfully restrict the Fourier transform ${\hat f}$ of a function ${f \in L^p({\bf R}^d)}$ to the set ${S}$. If the set ${S}$ has positive Lebesgue measure, then the answer is yes, since ${\hat f}$ lies in ${L^{p'}({\bf R}^d)}$ and therefore has a meaningful restriction to ${S}$ even though functions in ${L^{p'}}$ are only defined up to sets of measure zero. But what if ${S}$ has measure zero? If ${p=1}$, then ${\hat f \in C_0({\bf R}^d)}$ is continuous and therefore can be meaningfully restricted to any set ${S}$. At the other extreme, if ${p=2}$ and ${f}$ is an arbitrary function in ${L^2({\bf R}^d)}$, then by Plancherel’s theorem, ${\hat f}$ is also an arbitrary function in ${L^2({\bf R}^d)}$, and thus has no well-defined restriction to any set ${S}$ of measure zero.
It was observed by Stein (as reported in the Ph.D. thesis of Charlie Fefferman) that for certain measure zero subsets ${S}$ of ${{\bf R}^d}$, such as the sphere ${S^{d-1} := \{ \xi \in {\bf R}^d: |\xi| = 1\}}$, one can obtain meaningful restrictions of the Fourier transforms of functions ${f \in L^p({\bf R}^d)}$ for certain ${p}$ between ${1}$ and ${2}$, thus demonstrating that the Fourier transform of such functions retains more structure than a typical element of ${L^{p'}({\bf R}^d)}$:

Theorem 1 (Preliminary ${L^2}$ restriction theorem) If ${d \geq 2}$ and ${1 \leq p < \frac{4d}{3d+1}}$, then one has the estimate

$\displaystyle \| \hat f \|_{L^2(S^{d-1}, d\sigma)} \lesssim_{d,p} \|f\|_{L^p({\bf R}^d)}$

for all Schwartz functions ${f \in {\mathcal S}({\bf R}^d)}$, where ${d\sigma}$ denotes surface measure on the sphere ${S^{d-1}}$. In particular, the restriction ${\hat f|_{S^{d-1}}}$ can be meaningfully defined by continuous linear extension to an element of ${L^2(S^{d-1},d\sigma)}$.

Proof: Fix ${d,p,f}$. We expand out

$\displaystyle \| \hat f \|_{L^2(S^{d-1}, d\sigma)}^2 = \int_{S^{d-1}} |\hat f(\xi)|^2\ d\sigma(\xi).$

From (1) and Fubini’s theorem, the right-hand side may be expanded as

$\displaystyle \int_{{\bf R}^d} \int_{{\bf R}^d} f(x) \overline{f}(y) (d\sigma)^\vee(y-x)\ dx dy$

where the inverse Fourier transform ${(d\sigma)^\vee}$ of the measure ${d\sigma}$ is defined by the formula

$\displaystyle (d\sigma)^\vee(x) := \int_{S^{d-1}} e^{2\pi i x \cdot \xi}\ d\sigma(\xi).$

In other words, we have the identity

$\displaystyle \| \hat f \|_{L^2(S^{d-1}, d\sigma)}^2 = \langle f, f * (d\sigma)^\vee \rangle_{L^2({\bf R}^d)}, \ \ \ \ \ (5)$

using the Hermitian inner product ${\langle f, g\rangle_{L^2({\bf R}^d)} := \int_{{\bf R}^d} \overline{f(x)} g(x)\ dx}$. Since the sphere ${S^{d-1}}$ has finite measure, we have from the triangle inequality that

$\displaystyle (d\sigma)^\vee(x) \lesssim_d 1. \ \ \ \ \ (6)$

Also, from the method of stationary phase (as covered in the previous class 247A), or Bessel function asymptotics, we have the decay

$\displaystyle (d\sigma)^\vee(x) \lesssim_d |x|^{-(d-1)/2} \ \ \ \ \ (7)$

for any ${x \in {\bf R}^d}$ (note that the bound already follows from (6) unless ${|x| \geq 1}$). We remark that the exponent ${-\frac{d-1}{2}}$ here can be seen geometrically from the following considerations. For ${|x|>1}$, the phase ${e^{2\pi i x \cdot \xi}}$ on the sphere is stationary at the two antipodal points ${x/|x|, -x/|x|}$ of the sphere, and constant on the tangent hyperplanes to the sphere at these points. The wavelength of this phase is proportional to ${1/|x|}$, so the phase would be approximately stationary on a cap formed by intersecting the sphere with a ${\sim 1/|x|}$ neighbourhood of the tangent hyperplane to one of the stationary points. As the sphere is tangent to second order at these points, this cap will have diameter ${\sim 1/|x|^{1/2}}$ in the directions of the ${d-1}$-dimensional tangent space, so the cap will have surface measure ${\sim |x|^{-(d-1)/2}}$, which leads to the prediction (7). We combine (6), (7) into the unified estimate

$\displaystyle (d\sigma)^\vee(x) \lesssim_d \langle x\rangle^{-(d-1)/2}, \ \ \ \ \ (8)$

where the “Japanese bracket” ${\langle x\rangle}$ is defined as ${\langle x \rangle := (1+|x|^2)^{1/2}}$. Since ${\langle x \rangle^{-\alpha}}$ lies in ${L^q({\bf R}^d)}$ precisely when ${q > \frac{d}{\alpha}}$, we conclude from (8) that

$\displaystyle (d\sigma)^\vee \in L^q({\bf R}^d) \hbox{ whenever } q > \frac{2d}{d-1}.$

Applying Young’s convolution inequality, we conclude (after some arithmetic) that

$\displaystyle \| f * (d\sigma)^\vee \|_{L^{p'}({\bf R}^d)} \lesssim_{p,d} \|f\|_{L^p({\bf R}^d)}$

whenever ${1 \leq p < \frac{4d}{3d+1}}$, and the claim now follows from (5) and Hölder’s inequality. $\Box$
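To illustrate the decay (7) concretely, note that in the case ${d=2}$ the quantity ${(d\sigma)^\vee(x)}$ depends only on ${r = |x|}$ by rotation invariance, and reduces to a one-dimensional integral (it equals ${2\pi J_0(2\pi r)}$, though only its magnitude matters here). Here is a small Python sketch, with ad hoc grid sizes and constants, checking that ${\langle x \rangle^{1/2} |(d\sigma)^\vee(x)|}$ stays bounded as predicted by (8):

```python
import numpy as np

# d = 2: by rotation invariance, (dsigma)^vee(x) depends only on r = |x|:
#   (dsigma)^vee(x) = int_0^{2 pi} exp(2 pi i r cos(theta)) dtheta.
def dsigma_vee(r, n=4096):
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    # equal-weight rule on a periodic integrand (spectrally accurate)
    return np.mean(np.exp(2j * np.pi * r * np.cos(theta))) * 2 * np.pi

rs = np.linspace(0.0, 50.0, 501)
vals = np.array([abs(dsigma_vee(r)) for r in rs])

# (8) with d = 2 predicts |(dsigma)^vee(x)| <~ <x>^{-1/2}
ratios = vals * (1 + rs**2) ** 0.25
assert ratios.max() < 10.0       # the normalized quantity stays bounded
assert vals[400:].max() < 0.5    # genuine decay in the region r >= 40
```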

Remark 2 By using the Hardy-Littlewood-Sobolev inequality in place of Young’s convolution inequality, one can also establish this result for ${p = \frac{4d}{3d+1}}$.

Motivated by this result, given any Radon measure ${\mu}$ on ${{\bf R}^d}$ and any exponents ${1 \leq p,q \leq \infty}$, we use ${R_\mu(p \rightarrow q)}$ to denote the claim that the restriction estimate

$\displaystyle \| \hat f \|_{L^q({\bf R}^d, \mu)} \lesssim_{d,p,q,\mu} \|f\|_{L^p({\bf R}^d)} \ \ \ \ \ (9)$

holds for all Schwartz functions ${f}$; if ${S}$ is a ${k}$-dimensional submanifold of ${{\bf R}^d}$ (possibly with boundary), we write ${R_S(p \rightarrow q)}$ for ${R_\mu(p \rightarrow q)}$ where ${\mu}$ is the ${k}$-dimensional surface measure on ${S}$. Thus, for instance, we trivially always have ${R_S(1 \rightarrow \infty)}$, while Theorem 1 asserts that ${R_{S^{d-1}}(p \rightarrow 2)}$ holds whenever ${1 \leq p < \frac{4d}{3d+1}}$. We will not give a comprehensive survey of restriction theory in these notes, but instead focus on some model results that showcase some of the basic techniques in the field. (I have a more detailed survey on this topic from 2003, but it is somewhat out of date.)

Let ${\Omega}$ be some domain (such as the real numbers). For any natural number ${p}$, let ${L(\Omega^p)_{sym}}$ denote the space of symmetric real-valued functions ${F^{(p)}: \Omega^p \rightarrow {\bf R}}$ on ${p}$ variables ${x_1,\dots,x_p \in \Omega}$, thus

$\displaystyle F^{(p)}(x_{\sigma(1)},\dots,x_{\sigma(p)}) = F^{(p)}(x_1,\dots,x_p)$

for any permutation ${\sigma: \{1,\dots,p\} \rightarrow \{1,\dots,p\}}$. For instance, for any natural numbers ${k,p}$, the elementary symmetric polynomials

$\displaystyle e_k^{(p)}(x_1,\dots,x_p) = \sum_{1 \leq i_1 < i_2 < \dots < i_k \leq p} x_{i_1} \dots x_{i_k}$

will be an element of ${L({\bf R}^p)_{sym}}$. With the pointwise product operation, ${L(\Omega^p)_{sym}}$ becomes a commutative real algebra. We include the case ${p=0}$, in which case ${L(\Omega^0)_{sym}}$ consists solely of the real constants.

Given two natural numbers ${k,p}$, one can “lift” a symmetric function ${F^{(k)} \in L(\Omega^k)_{sym}}$ of ${k}$ variables to a symmetric function ${[F^{(k)}]_{k \rightarrow p} \in L(\Omega^p)_{sym}}$ of ${p}$ variables by the formula

$\displaystyle [F^{(k)}]_{k \rightarrow p}(x_1,\dots,x_p) = \sum_{1 \leq i_1 < i_2 < \dots < i_k \leq p} F^{(k)}(x_{i_1}, \dots, x_{i_k})$

$\displaystyle = \frac{1}{k!} \sum_\pi F^{(k)}( x_{\pi(1)}, \dots, x_{\pi(k)} )$

where ${\pi}$ ranges over all injections from ${\{1,\dots,k\}}$ to ${\{1,\dots,p\}}$ (the latter formula making it clearer that ${[F^{(k)}]_{k \rightarrow p}}$ is symmetric). Thus for instance

$\displaystyle [F^{(1)}(x_1)]_{1 \rightarrow p} = \sum_{i=1}^p F^{(1)}(x_i)$

$\displaystyle [F^{(2)}(x_1,x_2)]_{2 \rightarrow p} = \sum_{1 \leq i < j \leq p} F^{(2)}(x_i,x_j)$

and

$\displaystyle e_k^{(p)}(x_1,\dots,x_p) = [x_1 \dots x_k]_{k \rightarrow p}.$

Also we have

$\displaystyle [1]_{k \rightarrow p} = \binom{p}{k} = \frac{p(p-1)\dots(p-k+1)}{k!}.$

With these conventions, we see that ${[F^{(k)}]_{k \rightarrow p}}$ vanishes for ${p=0,\dots,k-1}$, and is equal to ${F^{(k)}}$ if ${k=p}$. We also have the transitivity

$\displaystyle [F^{(k)}]_{k \rightarrow p} = \frac{1}{\binom{p-k}{p-l}} [[F^{(k)}]_{k \rightarrow l}]_{l \rightarrow p}$

if ${k \leq l \leq p}$.
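The lifting convention is straightforward to implement on a computer, which gives a quick way to sanity-check identities such as the transitivity formula above; here is a small Python sketch (the sample point and exponents are arbitrary choices for illustration):

```python
from itertools import combinations
from math import comb

# [F]_{k->p}: lift a symmetric function of k variables to p variables
# by summing F over all k-element subsets of the arguments.
def lift(F, k, p):
    def lifted(xs):
        assert len(xs) == p
        return sum(F(tuple(xs[i] for i in idx))
                   for idx in combinations(range(p), k))
    return lifted

xs = (1.5, -2.0, 3.0, 0.5, 2.0)   # a sample point with p = 5
k, l, p = 2, 3, 5
F2 = lambda v: v[0] * v[1]        # F^{(2)}(x_1, x_2) = x_1 x_2

# [1]_{k->p} = C(p, k)
assert lift(lambda v: 1, k, p)(xs) == comb(p, k)

# transitivity: [F]_{k->p} = (1 / C(p-k, p-l)) [[F]_{k->l}]_{l->p}
direct = lift(F2, k, p)(xs)
iterated = lift(lambda ys: lift(F2, k, l)(ys), l, p)(xs)
assert abs(direct - iterated / comb(p - k, p - l)) < 1e-9
```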

The lifting map ${[]_{k \rightarrow p}}$ is a linear map from ${L(\Omega^k)_{sym}}$ to ${L(\Omega^p)_{sym}}$, but it is not a ring homomorphism. For instance, when ${\Omega={\bf R}}$, one has

$\displaystyle [x_1]_{1 \rightarrow p} [x_1]_{1 \rightarrow p} = (\sum_{i=1}^p x_i)^2 \ \ \ \ \ (1)$

$\displaystyle = \sum_{i=1}^p x_i^2 + 2 \sum_{1 \leq i < j \leq p} x_i x_j$

$\displaystyle = [x_1^2]_{1 \rightarrow p} + 2 [x_1 x_2]_{1 \rightarrow p}$

$\displaystyle \neq [x_1^2]_{1 \rightarrow p}.$

In general, one has the identity

$\displaystyle [F^{(k)}(x_1,\dots,x_k)]_{k \rightarrow p} [G^{(l)}(x_1,\dots,x_l)]_{l \rightarrow p} = \sum_{k,l \leq m \leq k+l} \frac{1}{k! l!} \ \ \ \ \ (2)$

$\displaystyle [\sum_{\pi, \rho} F^{(k)}(x_{\pi(1)},\dots,x_{\pi(k)}) G^{(l)}(x_{\rho(1)},\dots,x_{\rho(l)})]_{m \rightarrow p}$

for all natural numbers ${k,l,p}$ and ${F^{(k)} \in L(\Omega^k)_{sym}}$, ${G^{(l)} \in L(\Omega^l)_{sym}}$, where ${\pi, \rho}$ range over all injections ${\pi: \{1,\dots,k\} \rightarrow \{1,\dots,m\}}$, ${\rho: \{1,\dots,l\} \rightarrow \{1,\dots,m\}}$ with ${\pi(\{1,\dots,k\}) \cup \rho(\{1,\dots,l\}) = \{1,\dots,m\}}$. Combinatorially, the identity (2) follows from the fact that given any injections ${\tilde \pi: \{1,\dots,k\} \rightarrow \{1,\dots,p\}}$ and ${\tilde \rho: \{1,\dots,l\} \rightarrow \{1,\dots,p\}}$ with total image ${\tilde \pi(\{1,\dots,k\}) \cup \tilde \rho(\{1,\dots,l\})}$ of cardinality ${m}$, one has ${k,l \leq m \leq k+l}$, and furthermore there exist precisely ${m!}$ triples ${(\pi, \rho, \sigma)}$ of injections ${\pi: \{1,\dots,k\} \rightarrow \{1,\dots,m\}}$, ${\rho: \{1,\dots,l\} \rightarrow \{1,\dots,m\}}$, ${\sigma: \{1,\dots,m\} \rightarrow \{1,\dots,p\}}$ such that ${\tilde \pi = \sigma \circ \pi}$ and ${\tilde \rho = \sigma \circ \rho}$.

Example 1 When ${\Omega = {\bf R}}$, one has

$\displaystyle [x_1 x_2]_{2 \rightarrow p} [x_1]_{1 \rightarrow p} = [\frac{1}{2! 1!}( 2 x_1^2 x_2 + 2 x_1 x_2^2 )]_{2 \rightarrow p} + [\frac{1}{2! 1!} 6 x_1 x_2 x_3]_{3 \rightarrow p}$

$\displaystyle = [x_1^2 x_2 + x_1 x_2^2]_{2 \rightarrow p} + [3x_1 x_2 x_3]_{3 \rightarrow p}$

which is just a restatement of the identity

$\displaystyle (\sum_{i < j} x_i x_j) (\sum_k x_k) = \sum_{i < j} (x_i^2 x_j + x_i x_j^2) + 3 \sum_{i < j < k} x_i x_j x_k.$
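For any fixed natural number ${p}$ one can check Example 1 by direct computation; the following Python sketch does so at an arbitrarily chosen point with ${p=6}$:

```python
from itertools import combinations

def lift(F, k, p):
    # [F]_{k->p}: sum of F over all k-element subsets of the p arguments
    return lambda xs: sum(F([xs[i] for i in idx])
                          for idx in combinations(range(p), k))

p = 6
xs = [0.5, 1.0, -1.5, 2.0, 3.0, -0.5]

# [x1 x2]_{2->p} [x1]_{1->p}
lhs = lift(lambda v: v[0] * v[1], 2, p)(xs) * lift(lambda v: v[0], 1, p)(xs)

# [x1^2 x2 + x1 x2^2]_{2->p} + [3 x1 x2 x3]_{3->p}
rhs = (lift(lambda v: v[0] ** 2 * v[1] + v[0] * v[1] ** 2, 2, p)(xs)
       + lift(lambda v: 3 * v[0] * v[1] * v[2], 3, p)(xs))

assert abs(lhs - rhs) < 1e-9
```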

Note that the coefficients appearing in (2) do not depend on the final number of variables ${p}$. We may therefore abstract the role of ${p}$ from the law (2) by introducing the real algebra ${L(\Omega^*)_{sym}}$ of formal sums

$\displaystyle F^{(*)} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}$

where for each ${k}$, ${F^{(k)}}$ is an element of ${L(\Omega^k)_{sym}}$ (with only finitely many of the ${F^{(k)}}$ being non-zero), and with the formal symbol ${[]_{k \rightarrow *}}$ being formally linear, thus

$\displaystyle [F^{(k)}]_{k \rightarrow *} + [G^{(k)}]_{k \rightarrow *} := [F^{(k)} + G^{(k)}]_{k \rightarrow *}$

and

$\displaystyle c [F^{(k)}]_{k \rightarrow *} := [cF^{(k)}]_{k \rightarrow *}$

for ${F^{(k)}, G^{(k)} \in L(\Omega^k)_{sym}}$ and scalars ${c \in {\bf R}}$, and with multiplication given by the analogue

$\displaystyle [F^{(k)}(x_1,\dots,x_k)]_{k \rightarrow *} [G^{(l)}(x_1,\dots,x_l)]_{l \rightarrow *} = \sum_{k,l \leq m \leq k+l} \frac{1}{k! l!} \ \ \ \ \ (3)$

$\displaystyle [\sum_{\pi, \rho} F^{(k)}(x_{\pi(1)},\dots,x_{\pi(k)}) G^{(l)}(x_{\rho(1)},\dots,x_{\rho(l)})]_{m \rightarrow *}$

of (2). Thus for instance, in this algebra ${L(\Omega^*)_{sym}}$ we have

$\displaystyle [x_1]_{1 \rightarrow *} [x_1]_{1 \rightarrow *} = [x_1^2]_{1 \rightarrow *} + 2 [x_1 x_2]_{2 \rightarrow *}$

and

$\displaystyle [x_1 x_2]_{2 \rightarrow *} [x_1]_{1 \rightarrow *} = [x_1^2 x_2 + x_1 x_2^2]_{2 \rightarrow *} + [3 x_1 x_2 x_3]_{3 \rightarrow *}.$

Informally, ${L(\Omega^*)_{sym}}$ is an abstraction (or “inverse limit”) of the concept of a symmetric function of an unspecified number of variables, which are formed by summing terms that each involve only a bounded number of these variables at a time. One can check (somewhat tediously) that ${L(\Omega^*)_{sym}}$ is indeed a commutative real algebra, with a unit ${[1]_{0 \rightarrow *}}$. (I do not know if this algebra has previously been studied in the literature; it is somewhat analogous to the abstract algebra of finite linear combinations of Schur polynomials, with multiplication given by a Littlewood-Richardson rule.)

For natural numbers ${p}$, there is an obvious specialisation map ${[]_{* \rightarrow p}}$ from ${L(\Omega^*)_{sym}}$ to ${L(\Omega^p)_{sym}}$, defined by the formula

$\displaystyle [\sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}]_{* \rightarrow p} := \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow p}.$

Thus, for instance, ${[]_{* \rightarrow p}}$ maps ${[x_1]_{1 \rightarrow *}}$ to ${[x_1]_{1 \rightarrow p}}$ and ${[x_1 x_2]_{2 \rightarrow *}}$ to ${[x_1 x_2]_{2 \rightarrow p}}$. From (2) and (3) we see that this map ${[]_{* \rightarrow p}: L(\Omega^*)_{sym} \rightarrow L(\Omega^p)_{sym}}$ is an algebra homomorphism, even though the maps ${[]_{k \rightarrow *}: L(\Omega^k)_{sym} \rightarrow L(\Omega^*)_{sym}}$ and ${[]_{k \rightarrow p}: L(\Omega^k)_{sym} \rightarrow L(\Omega^p)_{sym}}$ are not homomorphisms. By inspecting the ${p^{th}}$ component of ${L(\Omega^*)_{sym}}$ we see that the homomorphism ${[]_{* \rightarrow p}}$ is in fact surjective.
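One can make this homomorphism property concrete with a small amount of code. The sketch below (a toy model, with a three-point set standing in for ${\Omega}$ and elements of ${L(\Omega^*)_{sym}}$ stored as dictionaries from ${k}$ to ${F^{(k)}}$) implements the multiplication law (3) and then verifies that specialisation ${[]_{* \rightarrow p}}$ respects products for a natural number ${p}$:

```python
from itertools import combinations, permutations, product
from math import factorial

Omega = [0.0, 1.0, 2.0]          # a small finite stand-in for the domain

def lift(F, k, p):
    # [F]_{k->p} as an honest symmetric function of p variables
    return lambda xs: sum(F(tuple(xs[i] for i in idx))
                          for idx in combinations(range(p), k))

def multiply(A, B):
    # product in L(Omega^*)_{sym} via the law (3); A, B map k -> F^{(k)}
    C = {}
    for k, F in A.items():
        for l, G in B.items():
            for m in range(max(k, l), k + l + 1):
                def term(ys, F=F, G=G, k=k, l=l, m=m):
                    s = 0.0
                    for pi in permutations(range(m), k):      # injections
                        for rho in permutations(range(m), l):  # injections
                            if set(pi) | set(rho) == set(range(m)):
                                s += (F(tuple(ys[i] for i in pi))
                                      * G(tuple(ys[i] for i in rho)))
                    return s / (factorial(k) * factorial(l))
                prev = C.get(m)
                C[m] = term if prev is None else \
                    (lambda ys, a=prev, b=term: a(ys) + b(ys))
    return C

def specialise(A, p):
    # the specialisation homomorphism [.]_{* -> p}
    return lambda xs: sum(lift(F, k, p)(xs) for k, F in A.items())

A = {1: lambda v: v[0]}           # [x1]_{1 -> *}
B = {2: lambda v: v[0] * v[1]}    # [x1 x2]_{2 -> *}

p = 4
for xs in product(Omega, repeat=p):
    lhs = specialise(multiply(A, B), p)(xs)
    rhs = specialise(A, p)(xs) * specialise(B, p)(xs)
    assert abs(lhs - rhs) < 1e-9
```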

Now suppose that we have a measure ${\mu}$ on the space ${\Omega}$, which then induces a product measure ${\mu^p}$ on every product space ${\Omega^p}$. To avoid degeneracies we will assume that the integral ${\int_\Omega\ d\mu}$ is strictly positive. Assuming suitable measurability and integrability hypotheses, a function ${F \in L(\Omega^p)_{sym}}$ can then be integrated against this product measure to produce a number

$\displaystyle \int_{\Omega^p} F\ d\mu^p.$

In the event that ${F}$ arises as a lift ${[F^{(k)}]_{k \rightarrow p}}$ of another function ${F^{(k)} \in L(\Omega^k)_{sym}}$, then from Fubini’s theorem we obtain the formula

$\displaystyle \int_{\Omega^p} F\ d\mu^p = \binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ d\mu)^{p-k}.$

Thus for instance, if ${\Omega={\bf R}}$,

$\displaystyle \int_{{\bf R}^p} [x_1]_{1 \rightarrow p}\ d\mu^p = p (\int_{\bf R} x\ d\mu(x)) (\int_{\bf R}\ d\mu)^{p-1} \ \ \ \ \ (4)$

and

$\displaystyle \int_{{\bf R}^p} [x_1 x_2]_{2 \rightarrow p}\ d\mu^p = \binom{p}{2} (\int_{{\bf R}^2} x_1 x_2\ d\mu(x_1) d\mu(x_2)) (\int_{\bf R}\ d\mu)^{p-2}. \ \ \ \ \ (5)$

On summing, we see that if

$\displaystyle F^{(*)} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}$

is an element of the formal algebra ${L(\Omega^*)_{sym}}$, then

$\displaystyle \int_{\Omega^p} [F^{(*)}]_{* \rightarrow p}\ d\mu^p = \sum_{k=0}^\infty \binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ d\mu)^{p-k}. \ \ \ \ \ (6)$

Note that by hypothesis, only finitely many terms on the right-hand side are non-zero.
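For natural number ${p}$ the formula (6) (and hence (4), (5)) can be confirmed by brute-force enumeration on a finite measure space; here is a short Python sketch with an arbitrary three-atom measure:

```python
from itertools import combinations, product
from math import comb

# a finite measure space: three atoms with positive weights
points = [0.0, 1.0, 2.0]
mu = [0.5, 1.0, 0.25]
mass = sum(mu)                    # int_Omega dmu

def direct(F, k, p):
    # int_{Omega^p} [F]_{k->p} dmu^p by brute-force enumeration
    total = 0.0
    for idx in product(range(len(points)), repeat=p):
        w = 1.0
        for i in idx:
            w *= mu[i]
        xs = [points[i] for i in idx]
        total += w * sum(F([xs[j] for j in J])
                         for J in combinations(range(p), k))
    return total

def via_formula(F, k, p):
    # the right-hand side of (6): C(p,k) (int F dmu^k) (int dmu)^(p-k)
    intF = 0.0
    for idx in product(range(len(points)), repeat=k):
        w = 1.0
        for i in idx:
            w *= mu[i]
        intF += w * F([points[i] for i in idx])
    return comb(p, k) * intF * mass ** (p - k)

F2 = lambda v: v[0] * v[1]        # the integrand appearing in (5)
for p in range(2, 6):
    assert abs(direct(F2, 2, p) - via_formula(F2, 2, p)) < 1e-9
```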

Now for a key observation: whereas the left-hand side of (6) only makes sense when ${p}$ is a natural number, the right-hand side is meaningful when ${p}$ takes a fractional value (or even when it takes negative or complex values!), interpreting the binomial coefficient ${\binom{p}{k}}$ as a polynomial ${\frac{p(p-1) \dots (p-k+1)}{k!}}$ in ${p}$. As such, this suggests a way to introduce a “virtual” concept of a symmetric function on a fractional power space ${\Omega^p}$ for such values of ${p}$, and even to integrate such functions against product measures ${\mu^p}$, even if the fractional power ${\Omega^p}$ does not exist in the usual set-theoretic sense (and ${\mu^p}$ similarly does not exist in the usual measure-theoretic sense). More precisely, for arbitrary real or complex ${p}$, we now define ${L(\Omega^p)_{sym}}$ to be the space of abstract objects

$\displaystyle F^{(p)} = [F^{(*)}]_{* \rightarrow p} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow p}$

with ${F^{(*)} \in L(\Omega^*)_{sym}}$, and with ${[]_{* \rightarrow p}}$ (and ${[]_{k \rightarrow p}}$) now interpreted as formal symbols, with the structure of a commutative real algebra inherited from ${L(\Omega^*)_{sym}}$, thus

$\displaystyle [F^{(*)}]_{* \rightarrow p} + [G^{(*)}]_{* \rightarrow p} := [F^{(*)} + G^{(*)}]_{* \rightarrow p}$

$\displaystyle c [F^{(*)}]_{* \rightarrow p} := [c F^{(*)}]_{* \rightarrow p}$

$\displaystyle [F^{(*)}]_{* \rightarrow p} [G^{(*)}]_{* \rightarrow p} := [F^{(*)} G^{(*)}]_{* \rightarrow p}.$

In particular, the multiplication law (2) continues to hold for such values of ${p}$, thanks to (3). Given any measure ${\mu}$ on ${\Omega}$, we formally define a measure ${\mu^p}$ on ${\Omega^p}$ with regards to which we can integrate elements ${F^{(p)}}$ of ${L(\Omega^p)_{sym}}$ by the formula (6) (providing one has sufficient measurability and integrability to make sense of this formula), thus providing a sort of “fractional dimensional integral” for symmetric functions. Thus, for instance, with this formalism the identities (4), (5) now hold for fractional values of ${p}$, even though the formal space ${{\bf R}^p}$ no longer makes sense as a set, and the formal measure ${\mu^p}$ no longer makes sense as a measure. (The formalism here is somewhat reminiscent of the technique of dimensional regularisation employed in the physical literature in order to assign values to otherwise divergent integrals. See also this post for an unrelated abstraction of the integration concept involving integration over supercommutative variables (and in particular over fermionic variables).)

Example 2 Suppose ${\mu}$ is a probability measure on ${\Omega}$, and ${X: \Omega \rightarrow {\bf R}}$ is a random variable; on any power ${\Omega^k}$, we let ${X_1,\dots,X_k: \Omega^k \rightarrow {\bf R}}$ be the usual independent copies of ${X}$ on ${\Omega^k}$, thus ${X_j(\omega_1,\dots,\omega_k) := X(\omega_j)}$ for ${(\omega_1,\dots,\omega_k) \in \Omega^k}$. Then for any real or complex ${p}$, the formal integral

$\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p$

can be evaluated by first using the identity

$\displaystyle [X_1]_{1 \rightarrow p}^2 = [X_1^2]_{1 \rightarrow p} + 2[X_1 X_2]_{2 \rightarrow p}$

(cf. (1)) and then using (6) and the probability measure hypothesis ${\int_\Omega\ d\mu = 1}$ to conclude that

$\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p = \binom{p}{1} \int_{\Omega} X^2\ d\mu + 2 \binom{p}{2} \int_{\Omega^2} X_1 X_2\ d\mu^2$

$\displaystyle = p (\int_\Omega X^2\ d\mu - (\int_\Omega X\ d\mu)^2) + p^2 (\int_\Omega X\ d\mu)^2$

or in probabilistic notation

$\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p = p \mathbf{Var}(X) + p^2 \mathbf{E}(X)^2. \ \ \ \ \ (7)$

For ${p}$ a natural number, this identity has the probabilistic interpretation

$\displaystyle \mathbf{E}( X_1 + \dots + X_p)^2 = p \mathbf{Var}(X) + p^2 \mathbf{E}(X)^2 \ \ \ \ \ (8)$

whenever ${X_1,\dots,X_p}$ are jointly independent copies of ${X}$, which reflects the well known fact that the sum ${X_1 + \dots + X_p}$ has expectation ${p \mathbf{E} X}$ and variance ${p \mathbf{Var}(X)}$. One can thus view (7) as an abstract generalisation of (8) to the case when ${p}$ is fractional, negative, or even complex, despite the fact that there is no sensible way in this case to talk about ${p}$ independent copies ${X_1,\dots,X_p}$ of ${X}$ in the standard framework of probability theory.
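One can check the agreement between (7) and (8) for natural number ${p}$ by exact enumeration over a small discrete random variable, and then observe that the same polynomial in ${p}$ evaluates without difficulty at fractional ${p}$. A Python sketch (the distribution below is an arbitrary choice):

```python
from itertools import product

# a small discrete random variable X
xs = [-1.0, 0.0, 2.0]
ps = [0.25, 0.25, 0.5]

EX = sum(w * x for w, x in zip(ps, xs))
EX2 = sum(w * x * x for w, x in zip(ps, xs))

def rhs(p):
    # the right-hand side of (7): p Var(X) + p^2 E(X)^2, a polynomial in p
    return p * (EX2 - EX**2) + p**2 * EX**2

# for natural p this is E (X_1 + ... + X_p)^2, by exact enumeration as in (8)
for p in range(1, 5):
    second_moment = 0.0
    for idx in product(range(len(xs)), repeat=p):
        w = 1.0
        for i in idx:
            w *= ps[i]
        second_moment += w * sum(xs[i] for i in idx) ** 2
    assert abs(second_moment - rhs(p)) < 1e-9

# ...while the polynomial continues to make sense at fractional p
assert rhs(0.5) > 0
```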

In this particular case, the quantity (7) is non-negative for every nonnegative ${p}$, which looks plausible given the form of the left-hand side. Unfortunately, this sort of non-negativity does not always hold; for instance, if ${X}$ has mean zero, one can check that

$\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^4\ d\mu^p = p \mathbf{Var}(X^2) + p(3p-2) (\mathbf{E}(X^2))^2$

and the right-hand side can become negative for ${p < 2/3}$. This is a shame, because otherwise one could hope to start endowing ${L(\Omega^p)_{sym}}$ with some sort of commutative von Neumann algebra type structure (or the abstract probability structure discussed in this previous post) and then interpret it as a genuine measure space rather than as a virtual one. (This failure of positivity is related to the fact that the characteristic function of a random variable, when raised to the ${p^{th}}$ power, need not be a characteristic function of any random variable once ${p}$ is no longer a natural number: “fractional convolution” does not preserve positivity!) However, one vestige of positivity remains: if ${F: \Omega \rightarrow {\bf R}}$ is non-negative, then so is

$\displaystyle \int_{\Omega^p} [F]_{1 \rightarrow p}\ d\mu^p = p (\int_\Omega F\ d\mu) (\int_\Omega\ d\mu)^{p-1}.$
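The failure of positivity in the fourth-moment identity is already visible for a random sign, where ${\mathbf{Var}(X^2)=0}$ and ${\mathbf{E}(X^2)=1}$, so the formula reduces to ${p(3p-2)}$; a short Python check of both the natural-number agreement and the fractional negativity:

```python
from itertools import product

# a mean-zero sign variable: X = +1 or -1 with probability 1/2 each,
# so Var(X^2) = 0 and E(X^2) = 1
xs = [1.0, -1.0]

def rhs(p):
    # p Var(X^2) + p (3p - 2) (E X^2)^2, specialised to this X
    return p * 0.0 + p * (3 * p - 2) * 1.0

# matches E (X_1 + ... + X_p)^4 for natural p, by exact enumeration
for p in range(1, 6):
    fourth_moment = sum(
        0.5 ** p * sum(xs[i] for i in idx) ** 4
        for idx in product(range(2), repeat=p))
    assert abs(fourth_moment - rhs(p)) < 1e-9

# ...but the extrapolated value is negative once p < 2/3
assert rhs(0.5) < 0
```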

One can wonder what the point is of all of this abstract formalism and how it relates to the rest of mathematics. For me, this formalism originated implicitly in an old paper I wrote with Jon Bennett and Tony Carbery on the multilinear restriction and Kakeya conjectures, though we did not have a good language for working with it at the time, instead working first with the case of natural number exponents ${p}$ and appealing to a general extrapolation theorem to then obtain various identities in the fractional ${p}$ case. The connection between these fractional dimensional integrals and more traditional integrals ultimately arises from the simple identity

$\displaystyle (\int_\Omega\ d\mu)^p = \int_{\Omega^p}\ d\mu^p$

(where the right-hand side should be viewed as the fractional dimensional integral of the unit ${[1]_{0 \rightarrow p}}$ against ${\mu^p}$). As such, one can manipulate ${p^{th}}$ powers of ordinary integrals using the machinery of fractional dimensional integrals. A key lemma in this regard is

Lemma 3 (Differentiation formula) Suppose that a positive measure ${\mu = \mu(t)}$ on ${\Omega}$ depends on some parameter ${t}$ and varies by the formula

$\displaystyle \frac{d}{dt} \mu(t) = a(t) \mu(t) \ \ \ \ \ (9)$

for some function ${a(t): \Omega \rightarrow {\bf R}}$. Let ${p}$ be any real or complex number. Then, assuming sufficient smoothness and integrability of all quantities involved, we have

$\displaystyle \frac{d}{dt} \int_{\Omega^p} F^{(p)}\ d\mu(t)^p = \int_{\Omega^p} F^{(p)} [a(t)]_{1 \rightarrow p}\ d\mu(t)^p \ \ \ \ \ (10)$

for all ${F^{(p)} \in L(\Omega^p)_{sym}}$ that are independent of ${t}$. If we allow ${F^{(p)}(t)}$ to now depend on ${t}$ also, then we have the more general total derivative formula

$\displaystyle \frac{d}{dt} \int_{\Omega^p} F^{(p)}(t)\ d\mu(t)^p \ \ \ \ \ (11)$

$\displaystyle = \int_{\Omega^p} \frac{d}{dt} F^{(p)}(t) + F^{(p)}(t) [a(t)]_{1 \rightarrow p}\ d\mu(t)^p,$

again assuming sufficient amounts of smoothness and regularity.

Proof: We just prove (10), as (11) then follows by the same argument used to prove the usual product rule. By linearity it suffices to verify this identity in the case ${F^{(p)} = [F^{(k)}]_{k \rightarrow p}}$ for some symmetric function ${F^{(k)} \in L(\Omega^k)_{sym}}$ for a natural number ${k}$. By (6), the left-hand side of (10) is then

$\displaystyle \frac{d}{dt} [\binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu(t)^k) (\int_\Omega\ d\mu(t))^{p-k}]. \ \ \ \ \ (12)$

Differentiating under the integral sign using (9) we have

$\displaystyle \frac{d}{dt} \int_\Omega\ d\mu(t) = \int_\Omega\ a(t)\ d\mu(t)$

and similarly

$\displaystyle \frac{d}{dt} \int_{\Omega^k} F^{(k)}\ d\mu(t)^k = \int_{\Omega^k} F^{(k)}(a_1+\dots+a_k)\ d\mu(t)^k$

where ${a_1,\dots,a_k}$ are the standard ${k}$ copies of ${a = a(t)}$ on ${\Omega^k}$:

$\displaystyle a_j(\omega_1,\dots,\omega_k) := a(\omega_j).$

By the product rule, we can thus expand (12) as

$\displaystyle \binom{p}{k} (\int_{\Omega^k} F^{(k)}(a_1+\dots+a_k)\ d\mu^k ) (\int_\Omega\ d\mu)^{p-k}$

$\displaystyle + \binom{p}{k} (p-k) (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ a\ d\mu) (\int_\Omega\ d\mu)^{p-k-1}$

where we have suppressed the dependence on ${t}$ for brevity. Since ${\binom{p}{k} (p-k) = \binom{p}{k+1} (k+1)}$, we can write this expression using (6) as

$\displaystyle \int_{\Omega^p} [F^{(k)} (a_1 + \dots + a_k)]_{k \rightarrow p} + [ F^{(k)} \ast a ]_{k+1 \rightarrow p}\ d\mu^p$

where ${F^{(k)} \ast a \in L(\Omega^{k+1})_{sym}}$ is the symmetric function

$\displaystyle F^{(k)} \ast a(\omega_1,\dots,\omega_{k+1}) := \sum_{j=1}^{k+1} F^{(k)}(\omega_1,\dots,\omega_{j-1},\omega_{j+1}, \dots, \omega_{k+1}) a(\omega_j).$

But from (2) one has

$\displaystyle [F^{(k)} (a_1 + \dots + a_k)]_{k \rightarrow p} + [ F^{(k)} \ast a ]_{k+1 \rightarrow p} = [F^{(k)}]_{k \rightarrow p} [a]_{1 \rightarrow p}$

and the claim follows. $\Box$

Remark 4 It is also instructive to prove this lemma in the special case when ${p}$ is a natural number, in which case the fractional dimensional integral ${\int_{\Omega^p} F^{(p)}\ d\mu(t)^p}$ can be interpreted as a classical integral. In this case, the identity (10) is immediate from applying the product rule to (9) to conclude that

$\displaystyle \frac{d}{dt} d\mu(t)^p = [a(t)]_{1 \rightarrow p} d\mu(t)^p.$

One could in fact derive (10) for arbitrary real or complex ${p}$ from the case when ${p}$ is a natural number by an extrapolation argument; see the appendix of my paper with Bennett and Carbery for details.
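In the natural number case, Lemma 3 is also easy to test numerically: take a finite ${\Omega}$ with weights evolving as ${w(\omega) e^{a(\omega) t}}$, so that (9) holds exactly, and compare a finite-difference derivative of the left-hand side of (10) with the right-hand side. A Python sketch (the weights, exponents, and step size are arbitrary):

```python
from itertools import product
from math import exp

# finite Omega with mu(t)(omega) = w(omega) exp(a(omega) t), so that (9) holds
points = [0.0, 1.0]
w = [0.7, 1.3]
a = [0.5, -1.0]

p = 3                               # a natural number, for concreteness
F1 = lambda x: x * x                # F^{(p)} = [F^{(1)}]_{1->p}, F^{(1)}(x) = x^2

def integral(t, with_a=False):
    # int_{Omega^p} [F1]_{1->p} (optionally times [a]_{1->p}) dmu(t)^p
    total = 0.0
    for idx in product(range(2), repeat=p):
        wt = 1.0
        for i in idx:
            wt *= w[i] * exp(a[i] * t)
        val = sum(F1(points[i]) for i in idx)
        if with_a:
            val *= sum(a[i] for i in idx)
        total += wt * val
    return total

t, h = 0.3, 1e-5
lhs = (integral(t + h) - integral(t - h)) / (2 * h)  # d/dt of the LHS of (10)
rhs = integral(t, with_a=True)                       # the RHS of (10)
assert abs(lhs - rhs) < 1e-6
```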

Let us give a simple PDE application of this lemma as illustration:

Proposition 5 (Heat flow monotonicity) Let ${u: [0,+\infty) \times {\bf R}^d \rightarrow {\bf R}}$ be a solution to the heat equation ${u_t = \Delta u}$ with initial data ${\mu_0}$ a rapidly decreasing finite non-negative Radon measure, or more explicitly

$\displaystyle u(t,x) = \frac{1}{(4\pi t)^{d/2}} \int_{{\bf R}^d} e^{-|x-y|^2/4t}\ d\mu_0(y)$

for all ${t>0}$. Then for any ${p>0}$, the quantity

$\displaystyle Q_p(t) := t^{\frac{d}{2} (p-1)} \int_{{\bf R}^d} u(t,x)^p\ dx$

is monotone non-decreasing in ${t \in (0,+\infty)}$ for ${1 < p < \infty}$, constant for ${p=1}$, and monotone non-increasing for ${0 < p < 1}$.

Proof: By a limiting argument we may assume that ${d\mu_0}$ is absolutely continuous, with Radon-Nikodym derivative a test function; this is more than enough regularity to justify the arguments below.

For any ${(t,x) \in (0,+\infty) \times {\bf R}^d}$, let ${\mu(t,x)}$ denote the Radon measure

$\displaystyle d\mu(t,x)(y) := \frac{1}{(4\pi)^{d/2}} e^{-|x-y|^2/4t}\ d\mu_0(y).$

Then the quantity ${Q_p(t)}$ can be written as a fractional dimensional integral

$\displaystyle Q_p(t) = t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p}\ d\mu(t,x)^p\ dx.$

Observe that

$\displaystyle \frac{\partial}{\partial t} d\mu(t,x) = \frac{|x-y|^2}{4t^2} d\mu(t,x)$

and thus by Lemma 3 and the product rule

$\displaystyle \frac{d}{dt} Q_p(t) = -\frac{d}{2t} Q_p(t) + t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p} [\frac{|x-y|^2}{4t^2}]_{1 \rightarrow p} d\mu(t,x)^p\ dx \ \ \ \ \ (13)$

where we use ${y}$ for the variable of integration in the factor space ${{\bf R}^d}$ of ${({\bf R}^d)^p}$.

To simplify this expression we will take advantage of integration by parts in the ${x}$ variable. Specifically, in any direction ${x_j}$, we have

$\displaystyle \frac{\partial}{\partial x_j} d\mu(t,x) = -\frac{x_j-y_j}{2t} d\mu(t,x)$

and hence by Lemma 3

$\displaystyle \frac{\partial}{\partial x_j} \int_{({\bf R}^d)^p}\ d\mu(t,x)^p\ dx = - \int_{({\bf R}^d)^p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx.$

Multiplying by ${x_j}$ and integrating by parts, we see that

$\displaystyle d\, t^{d/2} Q_p(t) = \int_{{\bf R}^d} \int_{({\bf R}^d)^p} x_j [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx$

where we use the Einstein summation convention in ${j}$. Similarly, if ${F_j(y)}$ is any reasonable function depending only on ${y}$, we have

$\displaystyle \frac{\partial}{\partial x_j} \int_{({\bf R}^d)^p}[F_j(y)]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx$

$\displaystyle = - \int_{({\bf R}^d)^p} [F_j(y)]_{1 \rightarrow p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx$

and hence on integration by parts

$\displaystyle 0 = \int_{{\bf R}^d} \int_{({\bf R}^d)^p} [F_j(y)]_{1 \rightarrow p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx.$

We conclude that

$\displaystyle \frac{d}{2t} Q_p(t) = t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p} (x_j - [F_j(y)]_{1 \rightarrow p}) [\frac{x_j-y_j}{4t^2}]_{1 \rightarrow p} d\mu(t,x)^p\ dx$

and thus by (13)

$\displaystyle \frac{d}{dt} Q_p(t) = \frac{1}{4t^{\frac{d}{2}+2}} \int_{{\bf R}^d} \int_{({\bf R}^d)^p}$

$\displaystyle [(x_j-y_j)(x_j-y_j)]_{1 \rightarrow p} - (x_j - [F_j(y)]_{1 \rightarrow p}) [x_j - y_j]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx.$

The choice of ${F_j}$ that then achieves the most cancellation turns out to be ${F_j(y) = \frac{1}{p} y_j}$ (this cancels the terms that are linear or quadratic in the ${x_j}$), so that ${x_j - [F_j(y)]_{1 \rightarrow p} = \frac{1}{p} [x_j - y_j]_{1 \rightarrow p}}$. Repeating the calculations establishing (7), one has

$\displaystyle \int_{({\bf R}^d)^p} [(x_j-y_j)(x_j-y_j)]_{1 \rightarrow p}\ d\mu^p = p \mathop{\bf E} |x-Y|^2 (\int_{{\bf R}^d}\ d\mu)^{p}$

and

$\displaystyle \int_{({\bf R}^d)^p} [x_j-y_j]_{1 \rightarrow p} [x_j-y_j]_{1 \rightarrow p}\ d\mu^p$

$\displaystyle = (p \mathbf{Var}(x-Y) + p^2 |\mathop{\bf E}(x-Y)|^2) (\int_{{\bf R}^d}\ d\mu)^{p}$

where ${Y}$ is the random variable drawn from ${{\bf R}^d}$ with the normalised probability measure ${\mu / \int_{{\bf R}^d}\ d\mu}$. Since ${\mathop{\bf E} |x-Y|^2 = \mathbf{Var}(x-Y) + |\mathop{\bf E}(x-Y)|^2}$, one thus has

$\displaystyle \frac{d}{dt} Q_p(t) = (p-1) \frac{1}{4t^{\frac{d}{2}+2}} \int_{{\bf R}^d} \mathbf{Var}(x-Y) (\int_{{\bf R}^d}\ d\mu)^{p}\ dx. \ \ \ \ \ (14)$

This expression is clearly non-negative for ${p>1}$, equal to zero for ${p=1}$, and non-positive for ${0 < p < 1}$, giving the claim. (One could simplify ${\mathbf{Var}(x-Y)}$ here as ${\mathbf{Var}(Y)}$ if desired, though it is not strictly necessary to do so for the proof.) $\Box$
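As an illustration of the resulting monotonicity, one can compute ${Q_p(t)}$ numerically in a toy case, using the fact that the unbracketed fractional dimensional integral ${\int_{({\bf R}^d)^p}\ d\mu^p}$ is just ${(\int_{{\bf R}^d}\ d\mu)^p}$. The choices below (${d=1}$ and ${\mu_0 = \delta_0 + \delta_1}$) are again purely illustrative:

```python
import numpy as np

def Qp(p, t):
    """Q_p(t) = t^{-1/2} int m(t,x)^p dx in d = 1, where m(t,x) is the total
    mass of mu(t,x) and mu_0 = delta_0 + delta_1 (an illustrative choice)."""
    x = np.linspace(-60.0, 60.0, 200001)
    m = (np.exp(-x ** 2 / (4 * t)) + np.exp(-(x - 1) ** 2 / (4 * t))) / np.sqrt(4 * np.pi)
    return t ** -0.5 * (x[1] - x[0]) * np.sum(m ** p)

assert Qp(2, 1.0) < Qp(2, 4.0)              # increasing in t for p > 1
assert abs(Qp(1, 1.0) - Qp(1, 4.0)) < 1e-8  # constant for p = 1
assert Qp(0.5, 1.0) > Qp(0.5, 4.0)          # decreasing for 0 < p < 1
```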

Remark 6 As with Remark 4, one can also establish the identity (14) first for natural numbers ${p}$ by direct computation avoiding the theory of fractional dimensional integrals, and then extrapolate to the case of more general values of ${p}$. This particular identity is also simple enough that it can be directly established by integration by parts without much difficulty, even for fractional values of ${p}$.
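In that spirit, here is a symbolic spot check of the two moment identities used above in the natural number case ${p=2}$, with the law of ${Y}$ normalised to be a probability measure (the two-point distribution is an arbitrary illustrative choice):

```python
import sympy as sp

x = sp.symbols('x', real=True)
ys, ws = [0, 1], [sp.Rational(1, 2), sp.Rational(1, 2)]  # Y uniform on {0, 1}

E = lambda f: sum(w * f(y) for y, w in zip(ys, ws))  # expectation in Y
mean = E(lambda y: x - y)            # E(x - Y)
second = E(lambda y: (x - y) ** 2)   # E|x - Y|^2
var = sp.expand(second - mean ** 2)  # Var(x - Y)

# [(x_j-y_j)(x_j-y_j)]_{1->2} integrates to 2 E|x-Y|^2 (total mass is 1):
lhs1 = E(lambda y1: E(lambda y2: (x - y1) ** 2 + (x - y2) ** 2))
assert sp.simplify(lhs1 - 2 * second) == 0
# [x_j-y_j]_{1->2} [x_j-y_j]_{1->2} integrates to 2 Var(x-Y) + 4 |E(x-Y)|^2:
lhs2 = E(lambda y1: E(lambda y2: ((x - y1) + (x - y2)) ** 2))
assert sp.simplify(lhs2 - (2 * var + 4 * mean ** 2)) == 0
```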

A more complicated version of this argument establishes the non-endpoint multilinear Kakeya inequality (without any logarithmic loss in a scale parameter ${R}$); this was established in my previous paper with Jon Bennett and Tony Carbery, but using the “natural number ${p}$ first” approach rather than using the current formalism of fractional dimensional integration. However, the arguments can be translated into this formalism without much difficulty; we do so below the fold. (To simplify the exposition slightly we will not address issues of establishing enough regularity and integrability to justify all the manipulations, though in practice this can be done by standard limiting arguments.)

Given any finite collection of elements ${(f_i)_{i \in I}}$ in some Banach space ${X}$, the triangle inequality tells us that

$\displaystyle \| \sum_{i \in I} f_i \|_X \leq \sum_{i \in I} \|f_i\|_X.$

However, when the ${f_i}$ all “oscillate in different ways”, one expects to improve substantially upon the triangle inequality. For instance, if ${X}$ is a Hilbert space and the ${f_i}$ are mutually orthogonal, we have the Pythagorean theorem

$\displaystyle \| \sum_{i \in I} f_i \|_X = (\sum_{i \in I} \|f_i\|_X^2)^{1/2}.$

For sake of comparison, from the triangle inequality and Cauchy-Schwarz one has the general inequality

$\displaystyle \| \sum_{i \in I} f_i \|_X \leq (\# I)^{1/2} (\sum_{i \in I} \|f_i\|_X^2)^{1/2} \ \ \ \ \ (1)$

for any finite collection ${(f_i)_{i \in I}}$ in any Banach space ${X}$, where ${\# I}$ denotes the cardinality of ${I}$. Thus orthogonality in a Hilbert space yields “square root cancellation”, saving a factor of ${(\# I)^{1/2}}$ or so over the trivial bound coming from the triangle inequality.
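The gap between the Pythagorean identity and the general bound (1) is easy to see numerically in a finite dimensional Hilbert space; the family below (scaled standard basis vectors of ${{\bf R}^N}$) is a hypothetical example chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
# An orthogonal family: the standard basis of R^N with random positive amplitudes.
fs = np.eye(N) * rng.uniform(1.0, 2.0, size=N)[:, None]

lhs = np.linalg.norm(fs.sum(axis=0))                    # || sum_i f_i ||
rms = np.sqrt(sum(np.linalg.norm(f) ** 2 for f in fs))  # (sum_i ||f_i||^2)^{1/2}
triangle = sum(np.linalg.norm(f) for f in fs)           # sum_i ||f_i||

assert np.isclose(lhs, rms)  # Pythagoras: full square root cancellation
assert rms <= triangle       # a saving of about sqrt(N) over the triangle inequality
```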

More generally, let us somewhat informally say that a collection ${(f_i)_{i \in I}}$ exhibits decoupling in ${X}$ if one has the Pythagorean-like inequality

$\displaystyle \| \sum_{i \in I} f_i \|_X \ll_\varepsilon (\# I)^\varepsilon (\sum_{i \in I} \|f_i\|_X^2)^{1/2}$

for any ${\varepsilon>0}$, thus one obtains almost the full square root cancellation in the ${X}$ norm. The theory of almost orthogonality can then be viewed as the theory of decoupling in Hilbert spaces such as ${L^2({\bf R}^n)}$. In ${L^p}$ spaces for ${p < 2}$ one usually does not expect this sort of decoupling; for instance, if the ${f_i}$ are disjointly supported one has

$\displaystyle \| \sum_{i \in I} f_i \|_{L^p} = (\sum_{i \in I} \|f_i\|_{L^p}^p)^{1/p}$

and the right-hand side can be much larger than ${(\sum_{i \in I} \|f_i\|_{L^p}^2)^{1/2}}$ when ${p < 2}$. At the opposite extreme, one usually does not expect to get decoupling in ${L^\infty}$, since one could conceivably align the ${f_i}$ to all attain a maximum magnitude at the same location with the same phase, at which point the triangle inequality in ${L^\infty}$ becomes sharp.
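The disjoint support obstruction can be made concrete in a discrete ${\ell^p}$ model; the parameters below are arbitrary illustrative choices:

```python
import numpy as np

N, p = 100, 1.2         # an exponent below 2
fs = np.eye(N)          # N disjointly supported unit "bumps"

lp = lambda f: np.sum(np.abs(f) ** p) ** (1 / p)  # discrete L^p norm
lhs = lp(fs.sum(axis=0))                          # ||sum f_i||_p = N^{1/p}
ellp = sum(lp(f) ** p for f in fs) ** (1 / p)     # (sum ||f_i||_p^p)^{1/p}
l2side = np.sqrt(sum(lp(f) ** 2 for f in fs))     # (sum ||f_i||_p^2)^{1/2} = N^{1/2}

assert np.isclose(lhs, ellp)  # exact additivity for disjoint supports
assert lhs > 4 * l2side       # far exceeds the would-be decoupling bound when p < 2
```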

However, in some cases one can get decoupling for certain ${2 < p < \infty}$. For instance, suppose we are in ${L^4}$, and that ${f_1,\dots,f_N}$ are bi-orthogonal in the sense that the products ${f_i f_j}$ for ${1 \leq i < j \leq N}$ are pairwise orthogonal in ${L^2}$. Then we have

$\displaystyle \| \sum_{i = 1}^N f_i \|_{L^4}^2 = \| (\sum_{i=1}^N f_i)^2 \|_{L^2}$

$\displaystyle = \| \sum_{1 \leq i,j \leq N} f_i f_j \|_{L^2}$

$\displaystyle \ll (\sum_{1 \leq i,j \leq N} \|f_i f_j \|_{L^2}^2)^{1/2}$

$\displaystyle = \| (\sum_{1 \leq i,j \leq N} |f_i f_j|^2)^{1/2} \|_{L^2}$

$\displaystyle = \| \sum_{i=1}^N |f_i|^2 \|_{L^2}$

$\displaystyle \leq \sum_{i=1}^N \| |f_i|^2 \|_{L^2}$

$\displaystyle = \sum_{i=1}^N \|f_i\|_{L^4}^2$

giving decoupling in ${L^4}$. (Similarly if each of the ${f_i f_j}$ is orthogonal to all but ${O_\varepsilon( N^\varepsilon )}$ of the other ${f_{i'} f_{j'}}$.) A similar argument also gives ${L^6}$ decoupling when one has tri-orthogonality (with the ${f_i f_j f_k}$ mostly orthogonal to each other), and so forth. As a slight variant, Khintchine’s inequality also indicates that decoupling should occur for any fixed ${2 < p < \infty}$ if one multiplies each of the ${f_i}$ by an independent random sign ${\epsilon_i \in \{-1,+1\}}$.
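One can watch this mechanism in action numerically by taking ${f_i(x) := e(\lambda_i x)}$ on ${[0,1]}$ with the ${\lambda_i}$ drawn from a Sidon set (all pairwise sums ${\lambda_i+\lambda_j}$ distinct), so that the products ${f_i f_j}$ for ${i < j}$ are pairwise orthogonal in ${L^2([0,1])}$; the particular frequencies below are an illustrative choice:

```python
import numpy as np

lam = np.array([1, 2, 5, 11, 22])  # a Sidon set: all pairwise sums are distinct
Nfun = len(lam)
M = 1 << 12
x = np.arange(M) / M  # uniform grid on [0,1); grid averages of the
                      # trigonometric polynomials below are exact
fs = np.exp(2j * np.pi * np.outer(lam, x))  # f_i(x) = e(lam_i x)

S = fs.sum(axis=0)
L4sq = np.sqrt(np.mean(np.abs(S) ** 4))  # ||sum_i f_i||_{L^4([0,1])}^2
decoupled = sum(np.sqrt(np.mean(np.abs(f) ** 4)) for f in fs)  # sum ||f_i||_{L^4}^2

# ||S||_4^4 counts quadruples with lam_i+lam_j = lam_k+lam_l, i.e. 2 N^2 - N here:
assert np.isclose(L4sq ** 2, 2 * Nfun ** 2 - Nfun)
assert L4sq <= np.sqrt(2) * decoupled  # decoupling with a modest constant
```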

In recent years, Bourgain and Demeter have been establishing decoupling theorems in ${L^p({\bf R}^n)}$ spaces for various key exponents of ${2 < p < \infty}$, in the “restriction theory” setting in which the ${f_i}$ are Fourier transforms of measures supported on different portions of a given surface or curve; this builds upon the earlier decoupling theorems of Wolff. In a recent paper with Guth, they established the following decoupling theorem for the curve ${\gamma({\bf R}) \subset {\bf R}^n}$ parameterised by the polynomial map

$\displaystyle \gamma: t \mapsto (t, t^2, \dots, t^n).$

For any ball ${B = B(x_0,r)}$ in ${{\bf R}^n}$, let ${w_B: {\bf R}^n \rightarrow {\bf R}^+}$ denote the weight

$\displaystyle w_B(x) := \frac{1}{(1 + \frac{|x-x_0|}{r})^{100n}},$

which should be viewed as a smoothed out version of the indicator function ${1_B}$ of ${B}$. In particular, the space ${L^p(w_B) = L^p({\bf R}^n, w_B(x)\ dx)}$ can be viewed as a smoothed out version of the space ${L^p(B)}$. For future reference we observe a fundamental self-similarity of the curve ${\gamma({\bf R})}$: any arc ${\gamma(I)}$ in this curve, with ${I}$ a compact interval, is affinely equivalent to the standard arc ${\gamma([0,1])}$.
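This self-similarity amounts to the matrix identity ${\gamma(a+bt) = A \gamma(t) + \gamma(a)}$, where ${A}$ is the lower triangular matrix with entries ${A_{kj} = \binom{k}{j} a^{k-j} b^j}$ for ${1 \leq j \leq k \leq n}$; here is a symbolic spot check in the illustrative case ${n=3}$ (the value of ${n}$ is of course arbitrary):

```python
import sympy as sp

n = 3
t, a, b = sp.symbols('t a b')
gamma = lambda s: sp.Matrix([s ** k for k in range(1, n + 1)])

# A_{kj} = binomial(k, j) a^{k-j} b^j, written with 0-indexed rows and columns:
A = sp.Matrix(n, n, lambda i, j: sp.binomial(i + 1, j + 1) * a ** (i - j) * b ** (j + 1))
diff = sp.expand(gamma(a + b * t) - (A * gamma(t) + gamma(a)))
assert diff == sp.zeros(n, 1)  # gamma(a + b t) = A gamma(t) + gamma(a)
```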

Theorem 1 (Decoupling theorem) Let ${n \geq 1}$. Subdivide the unit interval ${[0,1]}$ into ${N}$ equal subintervals ${I_i}$ of length ${1/N}$, and for each such ${I_i}$, let ${f_i: {\bf R}^n \rightarrow {\bf C}}$ be the Fourier transform

$\displaystyle f_i(x) = \int_{\gamma(I_i)} e(x \cdot \xi)\ d\mu_i(\xi)$

of a finite Borel measure ${\mu_i}$ on the arc ${\gamma(I_i)}$, where ${e(\theta) := e^{2\pi i \theta}}$. Then the ${f_i}$ exhibit decoupling in ${L^{n(n+1)}(w_B)}$ for any ball ${B}$ of radius ${N^n}$.

Orthogonality gives the ${n=1}$ case of this theorem. The bi-orthogonality type arguments sketched earlier only give decoupling in ${L^p}$ up to the range ${2 \leq p \leq 2n}$; the point here is that we can now get a much larger value of ${p}$. The ${n=2}$ case of this theorem was previously established by Bourgain and Demeter (who obtained in fact an analogous theorem for any curved hypersurface). The exponent ${n(n+1)}$ (and the radius ${N^n}$) is best possible, as can be seen by the following basic example. If

$\displaystyle f_i(x) := \int_{I_i} e(x \cdot \gamma(\xi)) g_i(\xi)\ d\xi$

where ${g_i}$ is a bump function adapted to ${I_i}$, then standard Fourier-analytic computations show that ${f_i}$ will be comparable to ${1/N}$ on a rectangular box of dimensions ${N \times N^2 \times \dots \times N^n}$ (and thus volume ${N^{n(n+1)/2}}$) centred at the origin, and exhibit decay away from this box, with ${\|f_i\|_{L^{n(n+1)}(w_B)}}$ comparable to

$\displaystyle 1/N \times (N^{n(n+1)/2})^{1/(n(n+1))} = 1/\sqrt{N}.$

On the other hand, ${\sum_{i=1}^N f_i}$ is comparable to ${1}$ on a ball of radius comparable to ${1}$ centred at the origin, so ${\|\sum_{i=1}^N f_i\|_{L^{n(n+1)}(w_B)}}$ is ${\gg 1}$, which is just barely consistent with decoupling. This calculation shows that decoupling will fail if ${n(n+1)}$ is replaced by any larger exponent, and also if the radius of the ball ${B}$ is reduced to be significantly smaller than ${N^n}$.

This theorem has the following consequence of importance in analytic number theory:

Corollary 2 (Vinogradov main conjecture) Let ${s, n, N \geq 1}$ be integers, and let ${\varepsilon > 0}$. Then

$\displaystyle \int_{[0,1]^n} |\sum_{j=1}^N e( j x_1 + j^2 x_2 + \dots + j^n x_n)|^{2s}\ dx_1 \dots dx_n$

$\displaystyle \ll_{\varepsilon,s,n} N^{s+\varepsilon} + N^{2s - \frac{n(n+1)}{2}+\varepsilon}.$

Proof: By the Hölder inequality (and the trivial bound of ${N}$ for the exponential sum), it suffices to treat the critical case ${s = n(n+1)/2}$, that is to say to show that

$\displaystyle \int_{[0,1]^n} |\sum_{j=1}^N e( j x_1 + j^2 x_2 + \dots + j^n x_n)|^{n(n+1)}\ dx_1 \dots dx_n \ll_{\varepsilon,n} N^{\frac{n(n+1)}{2}+\varepsilon}.$

We can rescale this as

$\displaystyle \int_{[0,N] \times [0,N^2] \times \dots \times [0,N^n]} |\sum_{j=1}^N e( x \cdot \gamma(j/N) )|^{n(n+1)}\ dx \ll_{\varepsilon,n} N^{n(n+1)+\varepsilon}.$

As the integrand is periodic along the lattice ${N{\bf Z} \times N^2 {\bf Z} \times \dots \times N^n {\bf Z}}$, this is equivalent to

$\displaystyle \int_{[0,N^n]^n} |\sum_{j=1}^N e( x \cdot \gamma(j/N) )|^{n(n+1)}\ dx \ll_{\varepsilon,n} N^{\frac{n(n+1)}{2}+n^2+\varepsilon}.$

The left-hand side may be bounded by ${\ll \| \sum_{j=1}^N f_j \|_{L^{n(n+1)}(w_B)}^{n(n+1)}}$, where ${B := B(0,N^n)}$ and ${f_j(x) := e(x \cdot \gamma(j/N))}$. Since

$\displaystyle \| f_j \|_{L^{n(n+1)}(w_B)} \ll (N^{n^2})^{\frac{1}{n(n+1)}},$

the claim now follows from the decoupling theorem and a brief calculation. $\Box$
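The “brief calculation” here is pure exponent bookkeeping, which can be spot checked symbolically (this verifies only the arithmetic of the exponents, not the analysis):

```python
import sympy as sp

n = sp.symbols('n', positive=True, integer=True)
half = sp.Rational(1, 2)

# Rescaling x_i -> N^i x_i multiplies the measure by N^{1+2+...+n} = N^{n(n+1)/2}:
assert sp.simplify(n * (n + 1) * half + n * (n + 1) * half - n * (n + 1)) == 0
# Unfolding the periodicity to [0, N^n]^n costs a further factor N^{n^2 - n(n+1)/2}:
assert sp.simplify(n * (n + 1) + n ** 2 - n * (n + 1) * half
                   - (n * (n + 1) * half + n ** 2)) == 0
# Decoupling: (N * N^{2 n^2 / (n(n+1))})^{1/2}, raised to the power n(n+1),
# gives exactly N^{n(n+1)/2 + n^2}, matching the required bound:
assert sp.simplify(n * (n + 1) * half * (1 + 2 * n ** 2 / (n * (n + 1)))
                   - (n * (n + 1) * half + n ** 2)) == 0
```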

Using the Plancherel formula, one may equivalently (when ${s}$ is an integer) write the Vinogradov main conjecture in terms of solutions ${j_1,\dots,j_s,k_1,\dots,k_s \in \{1,\dots,N\}}$ to the system of equations

$\displaystyle j_1^i + \dots + j_s^i = k_1^i + \dots + k_s^i \quad \hbox{for all } i=1,\dots,n,$

but we will not use this formulation here.
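Nonetheless, for small parameters this counting formulation can be checked by brute force; the sketch below (with the illustrative choices ${n=2}$ and the critical ${s = n(n+1)/2 = 3}$) counts solutions via multiplicities of power-sum vectors:

```python
from collections import Counter
from itertools import product

def vinogradov_count(N, s=3, n=2):
    """Count j_1,...,j_s,k_1,...,k_s in {1,...,N} with matching power sums
    j_1^i+...+j_s^i = k_1^i+...+k_s^i for i = 1,...,n."""
    c = Counter(tuple(sum(j ** i for j in js) for i in range(1, n + 1))
                for js in product(range(1, N + 1), repeat=s))
    return sum(v * v for v in c.values())  # pairs of tuples with equal power sums

J = vinogradov_count(8)
assert J >= 8 ** 3  # at least the diagonal solutions j = k; the main conjecture
                    # predicts N^{3 + o(1)} in this critical case
```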

A history of the Vinogradov main conjecture may be found in this survey of Wooley; prior to the Bourgain-Demeter-Guth theorem, the conjecture was solved completely for ${n \leq 3}$, or for ${n > 3}$ and ${s}$ either below ${n(n+1)/2 - n/3 + O(n^{2/3})}$ or above ${n(n-1)}$, with the bulk of recent progress coming from the efficient congruencing technique of Wooley. It has numerous applications to exponential sums, Waring’s problem, and the zeta function; to give just one application, the main conjecture implies the predicted asymptotic for the number of ways to express a large number as the sum of ${23}$ fifth powers (the previous best result required ${28}$ fifth powers). The Bourgain-Demeter-Guth approach to the Vinogradov main conjecture, based on decoupling, is ostensibly very different from the efficient congruencing technique, which relies heavily on the arithmetic structure of the problem, but it appears (as I have been told from second-hand sources) that the two methods are actually closely related, with the former being a sort of “Archimedean” version of the latter (with the intervals ${I_i}$ in the decoupling theorem being analogous to congruence classes in the efficient congruencing method); hopefully there will be some future work making this connection more precise. One advantage of the decoupling approach is that it generalises to non-arithmetic settings in which the set ${\{1,\dots,N\}}$ that ${j}$ is drawn from is replaced by some other similarly separated set of real numbers. (A random thought – could this allow the Vinogradov-Korobov bounds on the zeta function to extend to Beurling zeta functions?)

Below the fold we sketch the Bourgain-Demeter-Guth argument proving Theorem 1.

I thank Jean Bourgain and Andrew Granville for helpful discussions.