I have just uploaded to the arXiv my paper "Sharp bounds for multilinear curved Kakeya, restriction and oscillatory integral estimates away from the endpoint", submitted to Mathematika. In this paper I return (after more than a decade's absence) to one of my first research interests, namely the Kakeya and restriction family of conjectures. The starting point is the following "multilinear Kakeya estimate", first established in the non-endpoint case by Bennett, Carbery, and myself, and then in the endpoint case by Guth (with further proofs and extensions by Bourgain-Guth and Carbery-Valdimarsson):
Theorem 1 (Multilinear Kakeya estimate) Let $\delta > 0$ be a radius. For each $j = 1, \dots, d$, let $\mathbb{T}_j$ denote a finite family of infinite tubes $T_j$ in $\mathbb{R}^d$ of radius $\delta$. Assume the following axiom:
- (i) (Transversality) whenever $T_j \in \mathbb{T}_j$ is oriented in the direction of a unit vector $n_j$ for $j = 1, \dots, d$, we have $|n_1 \wedge \dots \wedge n_d| \geq \nu$ for some $\nu > 0$, where we use the usual Euclidean norm on the wedge product $\bigwedge^d \mathbb{R}^d$.

Then, for any exponent $p \geq \frac{1}{d-1}$, one has
$$\Big\| \prod_{j=1}^d \sum_{T_j \in \mathbb{T}_j} 1_{T_j} \Big\|_{L^p(\mathbb{R}^d)} \lesssim_{d,p,\nu} \delta^{d/p} \prod_{j=1}^d \# \mathbb{T}_j,$$
where $\| \cdot \|_{L^p(\mathbb{R}^d)}$ are the usual Lebesgue norms with respect to Lebesgue measure, $1_{T_j}$ denotes the indicator function of $T_j$, and $\# \mathbb{T}_j$ denotes the cardinality of $\mathbb{T}_j$.
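(As a quick sanity check on the normalisation as written above, and not taken from the paper: if each family $\mathbb{T}_j$ consists of a single tube, then the transversality hypothesis forces the intersection $T_1 \cap \dots \cap T_d$ to have measure $O_{d,\nu}(\delta^d)$, so that
$$\Big\| \prod_{j=1}^d 1_{T_j} \Big\|_{L^p(\mathbb{R}^d)} = |T_1 \cap \dots \cap T_d|^{1/p} \lesssim_{d,p,\nu} \delta^{d/p},$$
which matches the right-hand side when $\# \mathbb{T}_1 = \dots = \# \mathbb{T}_d = 1$.)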
The original proof of this proceeded using a heat flow monotonicity method, which in my previous post I reinterpreted using a "virtual integration" concept on a fractional Cartesian product space. It turns out that this machinery is somewhat flexible, and can be used to establish some other estimates of this type. The first result of this paper is to extend the above theorem to the curved setting, in which one localises to a ball of bounded radius (and sets $\delta$ to be small), but allows the tubes $T_j$ to be curved in a $C^2$ fashion. If one runs the heat flow monotonicity argument, one now picks up some additional error terms arising from the curvature, but as the spatial scale approaches zero, the tubes become increasingly linear, and as such the error terms end up being an integrable multiple of the main term, at which point one can conclude by Gronwall's inequality (actually for technical reasons we use a bootstrap argument instead of Gronwall). A key point in this approach is that one obtains optimal bounds (without losing epsilon powers or logarithmic factors of $\frac{1}{\delta}$), so long as one stays away from the endpoint case $p = \frac{1}{d-1}$ (which does not seem to be easily treatable by the heat flow methods). Previously, the paper of Bennett, Carbery, and myself was able to use an induction on scale argument to obtain a curved multilinear Kakeya estimate with an additional loss in $\delta$ (even after optimising the argument); later arguments of Bourgain-Guth and Carbery-Valdimarsson, based on algebraic topology methods, could also obtain a curved multilinear Kakeya estimate without such losses, but only in the algebraic case when the tubes were neighbourhoods of algebraic curves of bounded degree.
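To give a rough idea of the heat flow method in the straight-tube case (a schematic only; the notation $G_{T_j}$ and $Q(t)$ here is mine rather than the paper's): one replaces each indicator $1_{T_j}$ by a Gaussian-type weight $G_{T_j}(t,x)$ adapted to the tube $T_j$, spreading out under a heat flow in the directions transverse to $T_j$, and studies a quantity of the shape
$$Q(t) := \int_{\mathbb{R}^d} \prod_{j=1}^d \Big( \sum_{T_j \in \mathbb{T}_j} G_{T_j}(t,x) \Big)^{p} \, dx.$$
Transversality makes $Q$ essentially monotone in $t$, which lets one compare the quantity at the spatial scale $\delta$ of the tubes (where it dominates the left-hand side of the Kakeya estimate) with its large-time asymptotics (which can be computed directly and gives the right-hand side). In the curved setting the monotonicity acquires error terms coming from the curvature, which is where the integrability and bootstrap considerations mentioned above enter.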
Perhaps more interestingly, we are also able to extend the heat flow monotonicity method to apply directly to the multilinear restriction problem, giving the following global multilinear restriction estimate:
Theorem 2 (Multilinear restriction theorem) Let $p > \frac{2}{d-1}$ be an exponent, and let $\nu > 0$ be a parameter. Let $M$ be a sufficiently large natural number, depending only on $d$. For $j = 1, \dots, d$, let $U_j$ be an open subset of $\mathbb{R}^{d-1}$, and let $h_j \colon U_j \to \mathbb{R}$ be a smooth function obeying the following axioms:
- (i) (Regularity) The derivatives of $h_j$ up to order $M$ are bounded in magnitude by $\frac{1}{\nu}$ on $U_j$.
- (ii) (Transversality) Whenever $\xi_j \in U_j$ for $j = 1, \dots, d$, the unit normals $n_j(\xi_j)$ to the graph of $h_j$ at $(\xi_j, h_j(\xi_j))$ obey $|n_1(\xi_1) \wedge \dots \wedge n_d(\xi_d)| \geq \nu$.

Then one has
$$\Big\| \prod_{j=1}^d \mathcal{E}_j f_j \Big\|_{L^p(\mathbb{R}^d)} \lesssim_{d,p,\nu} \prod_{j=1}^d \|f_j\|_{L^2(U_j)}$$
for any $f_j \in L^2(U_j)$, $j = 1, \dots, d$, extended by zero outside of $U_j$, and $\mathcal{E}_j$ denotes the extension operator
$$\mathcal{E}_j f(x', x_d) := \int_{U_j} e^{2\pi i (x' \cdot \xi + x_d h_j(\xi))} f(\xi)\, d\xi \qquad \text{for } (x', x_d) \in \mathbb{R}^{d-1} \times \mathbb{R}.$$
Local versions of this estimate, in which $\mathbb{R}^d$ is replaced by a ball $B(0,R)$ for some $R \geq 1$, and one accepts a loss of the form $R^\varepsilon$, were already established by Bennett, Carbery, and myself using an induction on scale argument. In a later paper of Bourgain-Guth these losses were removed by "epsilon removal lemmas" to recover Theorem 2, but only in the case when all the hypersurfaces involved had curvatures bounded away from zero.
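For comparison, the local estimates being referred to take the shape (in the notation of Theorem 2; the exponent normalisation here is mine and may differ slightly from the papers cited)
$$\Big\| \prod_{j=1}^d \mathcal{E}_j f_j \Big\|_{L^{p}(B(0,R))} \lesssim_{d,p,\nu,\varepsilon} R^{\varepsilon} \prod_{j=1}^d \|f_j\|_{L^2(U_j)} \qquad \text{for all } R \geq 1 \text{ and } \varepsilon > 0,$$
so the point of Theorem 2 is that the ball $B(0,R)$ can be replaced by all of $\mathbb{R}^d$ and the $R^\varepsilon$ loss removed, without any curvature hypothesis on the hypersurfaces.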
There are two main new ingredients in the proof of Theorem 2. The first is to replace the usual induction on scales scheme to establish multilinear restriction by a “ball inflation” induction on scales scheme that more closely resembles the proof of decoupling theorems. In particular, we actually prove the more general family of estimates
where denotes the local energies
(actually for technical reasons it is more convenient to use a smoother weight than the strict cutoff to the disk). With logarithmic losses, it is not difficult to establish this estimate by an upward induction on the scale. To avoid such losses we use the heat flow monotonicity method. Here we run into the issue that the extension operators $\mathcal{E}_j$
are complex-valued rather than non-negative, and thus would not be expected to obey many good monotonicity properties. However, the local energies
can be expressed in terms of the magnitude squared of what is essentially the Gabor transform of
, and these are non-negative; furthermore, the dispersion relation associated to the extension operators
implies that these Gabor transforms propagate along tubes, so that the situation becomes quite similar (up to several additional lower order error terms) to that in the multilinear Kakeya problem. (This can be viewed as a continuous version of the usual wave packet decomposition method used to relate restriction and Kakeya problems, which when combined with the heat flow monotonicity method allows one to use a continuous version of induction on scales that does not concede any logarithmic factors.)
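Schematically (with notation chosen here for illustration rather than taken from the paper): at spatial scale $R$ one can decompose
$$\mathcal{E}_j f_j \approx \sum_{T} c_T\, \varphi_T \quad \text{on } B(0,R),$$
where each wave packet $\varphi_T$ is essentially supported on a tube $T$ of radius about $R^{1/2}$ and length about $R$, oriented along the unit normal $\frac{(-\nabla h_j(\xi_T), 1)}{|(-\nabla h_j(\xi_T), 1)|}$ to the $j$-th hypersurface at a frequency $\xi_T$, and $\sum_T |c_T|^2 \lesssim \|f_j\|_{L^2(U_j)}^2$ under a suitable normalisation of the $\varphi_T$. The Gabor transform mentioned above plays the role of a continuous analogue of this decomposition: its magnitude squared is non-negative, is transported along the same family of tubes by the dispersion relation, and can therefore be fed into the heat flow monotonicity scheme much as in the multilinear Kakeya problem.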
Finally, one can combine the curved multilinear Kakeya result with the multilinear restriction result to obtain estimates for multilinear oscillatory integrals away from the endpoint. Again, this sort of implication was already established in the previous paper of Bennett, Carbery, and myself, but the arguments there had some epsilon losses in the exponents; here we were able to run the argument more carefully and avoid these losses.
28 comments
29 July, 2019 at 11:52 am
Anonymous
Is it possible to extend these oscillatory integral estimates to singular integrals?
30 July, 2019 at 9:38 am
Terence Tao
Well, there is some precedent for using heat flow monotonicity methods to control singular integrals, most notably by the work of Stefanie Petermichl and Alexander Volberg on sharp estimates for the Ahlfors-Beurling operator. On the other hand, the estimates here only work in a multilinear setting rather than a linear one, so there isn’t an obvious way to combine the two. I view heat flow monotonicity methods as a version of induction on scales (or as an analogue of the Bellman function technique in the dyadic setting, which I think was the motivation for the Petermichl-Volberg work), so if there are singular integral problems in which those techniques have already been indicated to be relevant, then a heat flow approach might be worth trying to see if one can sharpen the estimates somewhat.
3 August, 2019 at 7:03 pm
Anonymous
Dear elite Prof. Tao,
There are many good mathematicians in the world, but I only like you. Therefore, I hope that one day soon you will stand on the highest stage and receive a Clay Millennium award. For many reasons, including the far distance, I cannot meet you to tell you many secrets; you are very special among the 52 Fields medalists. Before I die, I want my wish to become true; if not, I am very sad.
4 August, 2019 at 8:29 pm
Anonymous
Dear Prof. Tao,
Nobody sees a light needle in a dark sea. I am very delighted to see that you solved one of the hardest problems in the world in 2019. I cannot reveal that problem except in disordered letters. Maybe you yourself know it well: N,T,E,M,N,E,I,C,J,T,R,R,C,U,W,E,O,I,P,S.
5 August, 2019 at 12:14 am
Descrambler
How does this solve TWINPRIMESCONJECTURE?
29 July, 2019 at 12:10 pm
Anonymous
Wow, awesome work, my lovely Prof. Tao
1 August, 2019 at 6:24 pm
Anonymous
Hello, I have a small question: why isn't the multilinear Kakeya estimate
a simple consequence of the triangle inequality and of proving the estimate for a single intersection of tubes? I.e., proving
where the supremum is taken over all collections of tubes
with
? Isn’t this last estimate immediate? I feel that the transversality condition should guarantee the tube intersection is small. What subtlety am I missing?
Given the opportunity, I’d also like to thank you (and others) who make their work available in the form of notes, blog posts, arxiv, etc.
1 August, 2019 at 9:48 pm
Terence Tao
This works in two dimensions, but in higher dimensions the most important values of the exponent $p$ are less than 1, and so the triangle inequality is no longer available.
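For instance (a standard toy example, not specific to the paper): taking $f = 1_{[0,1]}$ and $g = 1_{[1,2]}$ on the real line, for $0 < p < 1$ one has
$$\| f + g \|_{L^p(\mathbb{R})} = 2^{1/p} > 2 = \|f\|_{L^p(\mathbb{R})} + \|g\|_{L^p(\mathbb{R})},$$
and with $N$ disjoint intervals the discrepancy grows like $N^{1/p - 1}$, so one cannot simply sum a bound for each individual tuple of tubes over all tuples.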
8 August, 2019 at 10:34 am
Allan van Hulst
Sorry, but I am struggling a bit with "By construction of epsilon" (arxiv-link, page 36, line 4). Does this refer to the same restrictions on epsilon as those mentioned at the start of Theorem 3.4? Or something different? For example, the function phi is indeed defined in a constructive way (page 18, below the itemized list).
8 August, 2019 at 8:05 pm
Terence Tao
I am referring here to the definition of $\epsilon$ at the start of Section 4 on page 33. I will add a more explicit link to this in the next revision of the ms. (There is no relation here with the definition of
.)
12 August, 2019 at 12:54 am
Anonymous
My question is, why are the functions not identified up to zero sets in your Fock-type construction?
Some remarks (please take with a grain of salt in case they’re wrong):
1) The fractional Cartesian product space seems to have the property that
is not
.
2) Although
is defined for all fractional powers
, for integer powers the Fock-type construction is actually a subset of the space the symbol stands for (maybe dense; but as far as I understand it, definitely not equal).
3) The construction of the fractional Cartesian product reminded me of free probability, although it seems an element in the fractional Cartesian product is identified by what it does with every other element, whereas elements in free probability spaces are identified with their free cumulants, if I recall correctly. I have not yet read the section that proves the multilinear Kakeya estimate using this tool, but I would like to eventually read it.
12 August, 2019 at 7:22 am
Terence Tao
The main reason I did not identify functions that agree almost everywhere is that I will need to work with multiple measures
in later analysis. A priori I do not assume these measures to be mutually absolutely continuous, and so functions that agree almost everywhere with respect to one measure might not do so with respect to another measure. As it turns out, in the specific applications in this paper I do have mutual absolute continuity, but instead of spending a (small) amount of time on this technical point, it seemed simpler just to not identify a.e. equivalent functions in the first place.
The space
has a natural homomorphism into
when
is integer-valued (by mapping the virtual function
to its actual function interpretation, and extending by homomorphism), but the map is neither surjective nor injective in general. For instance, the image must take values in the space of symmetric functions on
(and in fact the image is dense in that space in weak topologies); in particular, the fractional power
is more like a quotient space
of the Cartesian power by the action of the relevant (product) symmetric group. I had thought about making the notation more complicated to more accurately reflect this feature, but ultimately decided that the notation was already heavy enough as it was. This also explains why
is “larger” than
, as the functions on the former space have less symmetry demanded of them. (But there is a natural homomorphism from
to
(mapping
to
and extending by homomorphism), which can be viewed as a “virtual” projection map from
to
).
Injectivity can also fail; for instance, if all the
are finite spaces, then
is finite dimensional, but
is infinite dimensional; there are “accidental” polynomial identities relating various functions
in this case that are not picked up by the abstract Fock space. One could potentially quotient out the Fock space by these additional identities to obtain a space that more accurately resembles
, but there didn’t seem to be any need for this in any of the applications I was considering (and there was some minor technical convenience in keeping the Fock space “free” of such identities in order to more easily define homomorphisms out of such spaces).
Certainly my exposure to free probability (or more generally noncommutative probability) influenced my thinking when trying to abstract out the key features of integration on product spaces that could extend to the fractional power setting. Indeed the Fock space constructed here could almost be interpreted as in the framework of noncommutative probability theory (though it remains commutative in this case), except for the fact that the positivity property
breaks down in the non-integer case (see the discussion before Theorem 3.4); related to this, I did not impose a topology on the Fock space with which to take a suitable weak closure to recover a von Neumann algebra. But one could still view
as a sort of “(non)commutative measure space” rather than a “(non)commutative probability space” if one wished, by abandoning the positivity axiom.
21 September, 2019 at 10:11 am
Johan Aspegren
Thomas Wolff proves in his lecture notes the decay estimate
So the restriction conjecture is proved for constant functions! How can the whole conjecture be so far away then? For any essentially bounded function
on the unit sphere we have then an estimate
Doesn't Wolff's result then show that
And we have proved the whole restriction conjecture, including the Kakeya conjectures?
21 September, 2019 at 1:42 pm
Terence Tao
Unfortunately, due to the oscillation in the $e^{2\pi i x \cdot \xi}$ factor, one cannot bound $|\widehat{f\,d\sigma}(\xi)|$ pointwise by $\|f\|_{L^\infty} |\widehat{d\sigma}(\xi)|$. For even exponents $q$ one can obtain a domination of $L^q$ norms through Plancherel's theorem, allowing one to prove the restriction conjecture in this case, but this method does not work in general due to the failure of the Hardy-Littlewood majorant conjecture. See for instance this paper of Ben Green for further discussion.
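For instance, sketching the standard computation behind the even-exponent case (with $q = 2k$ an even integer, and assuming the relevant convolutions are square-integrable): since the Fourier transform converts convolution to multiplication, Plancherel's theorem and the pointwise bound $|(f\,d\sigma)^{*k}| \leq (|f|\,d\sigma)^{*k}$ give
$$\| \widehat{f\,d\sigma} \|_{L^{2k}(\mathbb{R}^d)}^{2k} = \| (f\,d\sigma)^{*k} \|_{L^2(\mathbb{R}^d)}^2 \leq \| (|f|\,d\sigma)^{*k} \|_{L^2(\mathbb{R}^d)}^2 = \| \widehat{|f|\,d\sigma} \|_{L^{2k}(\mathbb{R}^d)}^{2k},$$
so the majorant property does hold at even exponents; it is precisely this comparison that has no analogue at other exponents.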
22 September, 2019 at 1:05 am
joas165
Ok, I see where I made a mistake. But we can easily decompose the spherical measure into positive and negative parts. Let
. For
non-negative valued we can represent
as
So
times the larger of the normed integrals dominates
and each of the normed integrals is dominated by
For example
So we have
at least for non-negative functions. I think we can extend this method. Where I was wrong is that a constant
or whatever will be involved in the general case.
22 September, 2019 at 1:41 am
Johan Aspegren
We can define
. I just noticed that
gets dominated even by

In my first comment I meant that the decay estimate is
22 September, 2019 at 7:45 am
Terence Tao
Unfortunately this does not work either, for the same reason: the triangle inequality does not let one bound (say)
by
. As a general rule of thumb, one can only hope to make comparisons like this when all the oscillation on the right-hand side has been removed using absolute value signs.
Another way to see why comparisons of this sort in general must fail is that the Fourier transform $\widehat{d\sigma}(\xi)$ vanishes for some choices of $\xi$ (when $|\xi|$ is basically a zero of a Bessel function), but there is no reason why all the other Fourier transforms $\widehat{f\,d\sigma}(\xi)$ have to vanish there also (e.g., if $f$ is a bump function concentrated on a sufficiently small cap then $|\widehat{f\,d\sigma}(\xi)|$ will be positive). So there is no hope of controlling the latter pointwise by the former.
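For instance (a standard computation, written here in three dimensions for concreteness): for the unit sphere $S^2 \subset \mathbb{R}^3$ with surface measure $\sigma$, one has
$$\widehat{d\sigma}(\xi) = \int_{S^2} e^{-2\pi i x \cdot \xi}\, d\sigma(x) = \frac{2 \sin(2\pi |\xi|)}{|\xi|},$$
which vanishes whenever $2|\xi|$ is a positive integer, whereas $\widehat{f\,d\sigma}$ for other choices of $f$ has no reason to vanish on those spheres.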
22 September, 2019 at 8:38 am
Johan Aspegren
Ok, in general it fails. Anyway, it works for example for even non-negative real functions, because then the Fourier transform is positive. But let
be non-negative. I didn't plan to use just the triangle inequality. You have control of the imaginary part of
by
. The
is an even measure, so you have
I thought that the above is controlled by
because I was almost sure that by symmetry
23 September, 2019 at 1:46 am
Johan Aspegren
This will probably be my last comment on the subject. In general the estimate fails. But I can show that my estimate works for even functions
, because then the Fourier transform is real valued.
and
such that
Now, there exists
and
Thus,
24 September, 2019 at 1:31 pm
Terence Tao
There appear to be several LaTeX issues with your comment, but in any event this approach is unlikely to work, because the non-oscillatory integrals
are considerably larger than the oscillatory integral
. In particular, they do not decay to zero in the limit $|\xi| \to \infty$, and so one will not get any non-trivial
estimates this way.
26 September, 2019 at 9:22 am
Johan Aspegren
Yes, there are issues in my LaTeX. I have been moving to another town, so I make more mistakes than usual, and I make mistakes a lot.
is small, it means that there is a lot of cancellation. However, that cancellation is *fixed* for each *pointwise* estimate! So the relevant cancellation comes from
I think it is better to make a somewhat longer comment, but I cannot prove the whole thing in these comments. I will try to run this comment through an editor, because I cannot see my comment otherwise.
Anyway, if
The worst cases are when
W.L.O.G., let us suppose that
Thus, there exists
so that
In other words
On the other hand we have
for some
What we want to prove is that
The bigger
gets for fixed
the more probable the estimate gets. IMO this estimate is a priori probable and it is clearly true for large
.
26 September, 2019 at 10:02 am
Johan Aspegren
I meant that the estimate is clearly true for large
if we don't care about universal small constants. More interesting would be to quantify that cancellation
as a function of
26 September, 2019 at 12:19 pm
Johan Aspegren
In any case we have pointwise
so
cannot decay slower than
if
is integrable.
26 September, 2019 at 1:10 pm
Johan Aspegren
I guess the Lord tests us all. I meant the following. In any case we have pointwise
so
cannot decay slower than
. Doesn't this prove the restriction conjecture?
3 October, 2019 at 3:57 pm
Terence Tao
No, since the factors of $|\widehat{\sigma}|^{q}$ on both sides cancel each other out and one is only left with the trivial bound
.
5 October, 2019 at 5:01 am
Johan Aspegren
Yeah, but don't you lose information that way? If we had slower asymptotic decay to zero at infinity for $|\widehat{f\sigma}|^{q}$, then we would have
$$C|\hat{\sigma}|^{q} \leq |\widehat{f\sigma}|^{q},$$ but this kind of contradicts the pointwise estimate. BTW, don't you divide by zero?
5 October, 2019 at 7:40 am
Johan Aspegren
Why wouldn't we scale $$||f||_{\infty} = 2$$ and divide my pointwise estimate by $|\widehat{\sigma}|^{q}$ twice? Doesn't that show that $$2^q|\hat{\sigma}|^{q} < |\widehat{f\sigma}|^{q}$$ is impossible?
5 October, 2019 at 5:54 pm
Anonymous
Stop wasting his time, Johan; you're not going to solve the restriction problem.