
I’ve just uploaded to the arXiv my paper “Almost all Collatz orbits attain almost bounded values“, submitted to the proceedings of the Forum of Mathematics, Pi. In this paper I returned to the topic of the notorious Collatz conjecture (also known as the {3x+1} conjecture), which I previously discussed in this blog post. This conjecture can be phrased as follows. Let {{\bf N}+1 = \{1,2,\dots\}} denote the positive integers (with {{\bf N} =\{0,1,2,\dots\}} the natural numbers), and let {\mathrm{Col}: {\bf N}+1 \rightarrow {\bf N}+1} be the map defined by setting {\mathrm{Col}(N)} equal to {3N+1} when {N} is odd and {N/2} when {N} is even. Let {\mathrm{Col}_{\min}(N) := \inf_{n \in {\bf N}} \mathrm{Col}^n(N)} be the minimal element of the Collatz orbit {N, \mathrm{Col}(N), \mathrm{Col}^2(N),\dots}. Then we have

Conjecture 1 (Collatz conjecture) One has {\mathrm{Col}_{\min}(N)=1} for all {N \in {\bf N}+1}.
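For concreteness, here is a minimal Python sketch of these definitions (purely illustrative, and not taken from the paper; note that the termination of the loop below for every input is precisely the content of the conjecture, though it has been verified numerically far beyond the range tested here):

```python
def col(n):
    """One step of the Collatz map Col."""
    return 3 * n + 1 if n % 2 else n // 2

def col_min(n):
    """Minimal element of the orbit n, Col(n), Col^2(n), ....
    Termination of the loop is exactly Conjecture 1, but it is known
    to hold numerically for all n in the range used below."""
    best = n
    while n != 1:
        n = col(n)
        best = min(best, n)
    return best

assert all(col_min(n) == 1 for n in range(1, 10**4))
```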

Establishing the conjecture for all {N} remains out of reach of current techniques (for instance, as discussed in the previous blog post, it is basically at least as difficult as Baker’s theorem, all known proofs of which are quite difficult). However, the situation is more promising if one is willing to settle for results which only hold for “most” {N} in some sense. For instance, it is a result of Krasikov and Lagarias that

\displaystyle  \{ N \leq x: \mathrm{Col}_{\min}(N) = 1 \} \gg x^{0.84}

for all sufficiently large {x}. In another direction, it was shown by Terras that for almost all {N} (in the sense of natural density), one has {\mathrm{Col}_{\min}(N) < N}. This was then improved by Allouche to {\mathrm{Col}_{\min}(N) < N^\theta} for almost all {N} and any fixed {\theta > 0.869}, and extended later by Korec to cover all {\theta > \frac{\log 3}{\log 4} \approx 0.7924}. In this paper we obtain the following further improvement (at the cost of weakening natural density to logarithmic density):

Theorem 2 Let {f: {\bf N}+1 \rightarrow {\bf R}} be any function with {\lim_{N \rightarrow \infty} f(N) = +\infty}. Then we have {\mathrm{Col}_{\min}(N) < f(N)} for almost all {N} (in the sense of logarithmic density).

Thus for instance one has {\mathrm{Col}_{\min}(N) < \log\log\log\log N} for almost all {N} (in the sense of logarithmic density).

The difficulty here is that one usually only expects to establish “local-in-time” results that control the evolution {\mathrm{Col}^n(N)} for times {n} that only get as large as a small multiple {c \log N} of {\log N}; the aforementioned results of Terras, Allouche, and Korec, for instance, are of this type. However, to get {\mathrm{Col}^n(N)} all the way down to {f(N)} one needs something more like an “(almost) global-in-time” result, where the evolution remains under control for so long that the orbit has nearly reached the bounded state {N=O(1)}.

However, as observed by Bourgain in the context of nonlinear Schrödinger equations, one can iterate “almost sure local wellposedness” type results (which give local control for almost all initial data from a given distribution) into “almost sure (almost) global wellposedness” type results if one is fortunate enough to draw one’s data from an invariant measure for the dynamics. To illustrate the idea, let us take Korec’s aforementioned result that if {\theta > \frac{\log 3}{\log 4}} one picks at random an integer {N} from a large interval {[1,x]}, then in most cases, the orbit of {N} will eventually move into the interval {[1,x^{\theta}]}. Similarly, if one picks an integer {M} at random from {[1,x^\theta]}, then in most cases, the orbit of {M} will eventually move into {[1,x^{\theta^2}]}. It is then tempting to concatenate the two statements and conclude that for most {N} in {[1,x]}, the orbit will eventually move into {[1,x^{\theta^2}]}. Unfortunately, this argument does not quite work, because by the time the orbit from a randomly drawn {N \in [1,x]} reaches {[1,x^\theta]}, the distribution of the final value is unlikely to be close to being uniformly distributed on {[1,x^\theta]}, and in particular could potentially concentrate almost entirely in the exceptional set of {M \in [1,x^\theta]} that do not make it into {[1,x^{\theta^2}]}. The point here is that the uniform measure on {[1,x]} is not transported by Collatz dynamics to anything resembling the uniform measure on {[1,x^\theta]}.

So, one now needs to locate a measure which has better invariance properties under the Collatz dynamics. It turns out to be technically convenient to work with a standard acceleration of the Collatz map known as the Syracuse map {\mathrm{Syr}: 2{\bf N}+1 \rightarrow 2{\bf N}+1}, defined on the odd numbers {2{\bf N}+1 = \{1,3,5,\dots\}} by setting {\mathrm{Syr}(N) = (3N+1)/2^a}, where {2^a} is the largest power of {2} that divides {3N+1}. (The advantage of using the Syracuse map over the Collatz map is that it performs precisely one multiplication by {3} at each iteration step, which makes the map better behaved when performing “{3}-adic” analysis.)
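In code, the Syracuse map is equally simple (again, an illustrative sketch only):

```python
def syr(n):
    """Syracuse map: (3n+1)/2^a, where 2^a is the largest power of 2 dividing 3n+1."""
    assert n % 2 == 1, "Syr is defined on the odd numbers"
    m = 3 * n + 1
    while m % 2 == 0:
        m //= 2
    return m

print([syr(n) for n in (1, 3, 5, 7, 9)])  # [1, 5, 1, 11, 7]
```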

When viewed {3}-adically, we soon see that iterations of the Syracuse map become somewhat irregular. Most obviously, {\mathrm{Syr}(N)} is never divisible by {3}. A little less obviously, {\mathrm{Syr}(N)} is twice as likely to equal {2} mod {3} as it is to equal {1} mod {3}. This is because for a randomly chosen odd {\mathbf{N}}, the number of times {\mathbf{a}} that {2} divides {3\mathbf{N}+1} can be seen to have a geometric distribution of mean {2} – it equals any given value {a \in{\bf N}+1} with probability {2^{-a}}. Such a geometric random variable is twice as likely to be odd as to be even, which is what gives the above irregularity. There are similar irregularities modulo higher powers of {3}. For instance, one can compute that for large random odd {\mathbf{N}}, {\mathrm{Syr}^2(\mathbf{N}) \hbox{ mod } 9} will take the residue classes {0,1,2,3,4,5,6,7,8 \hbox{ mod } 9} with probabilities

\displaystyle  0, \frac{8}{63}, \frac{16}{63}, 0, \frac{11}{63}, \frac{4}{63}, 0, \frac{2}{63}, \frac{22}{63}

respectively. More generally, for any {n}, {\mathrm{Syr}^n(N) \hbox{ mod } 3^n} will be distributed according to the law of a random variable {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} on {{\bf Z}/3^n{\bf Z}} that we call a Syracuse random variable, and can be described explicitly as

\displaystyle  \mathbf{Syrac}({\bf Z}/3^n{\bf Z}) = 2^{-\mathbf{a}_1} + 3^1 2^{-\mathbf{a}_1-\mathbf{a}_2} + \dots + 3^{n-1} 2^{-\mathbf{a}_1-\dots-\mathbf{a}_n} \hbox{ mod } 3^n, \ \ \ \ \ (1)

where {\mathbf{a}_1,\dots,\mathbf{a}_n} are iid copies of a geometric random variable of mean {2}.
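As a quick sanity check on the displayed mod {9} probabilities, one can compute the law of (1) for {n=2} exactly, using the facts that {2^{-1} \equiv 5 \hbox{ mod } 9}, that {5} has multiplicative order {6} modulo {9}, and that a geometric random variable of mean {2} takes the value {a} with probability {2^{-a}}. Here is a short exact computation in Python (illustrative only):

```python
from fractions import Fraction

# law of a geometric (mean 2) variable reduced mod 6 (the order of 5 mod 9):
# P(a = r mod 6) = 2^{-r} + 2^{-r-6} + ... = 2^{-r} * 64/63 for r = 1..6
p_mod6 = {r: Fraction(2) ** (-r) * Fraction(64, 63) for r in range(1, 7)}

dist = {y: Fraction(0) for y in range(9)}
for a1, p1 in p_mod6.items():
    for a2, p2 in p_mod6.items():
        # 2^{-a1} + 3 * 2^{-a1-a2} mod 9, with 2^{-1} replaced by 5
        y = (pow(5, a1, 9) + 3 * pow(5, a1 + a2, 9)) % 9
        dist[y] += p1 * p2

print([dist[y] for y in range(9)])
# [0, 8/63, 16/63, 0, 11/63, 4/63, 0, 2/63, 22/63], matching the table above
```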

In view of this, any proposed “invariant” (or approximately invariant) measure (or family of measures) for the Syracuse dynamics should take this {3}-adic irregularity of distribution into account. It turns out that one can use the Syracuse random variables {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} to construct such a measure, but only if these random variables stabilise in the limit {n \rightarrow \infty} in a certain total variation sense. More precisely, in the paper we establish the estimate

\displaystyle  \sum_{Y \in {\bf Z}/3^n{\bf Z}} | \mathbb{P}( \mathbf{Syrac}({\bf Z}/3^n{\bf Z})=Y) - 3^{m-n} \mathbb{P}( \mathbf{Syrac}({\bf Z}/3^m{\bf Z})=Y \hbox{ mod } 3^m)| \ \ \ \ \ (2)

\displaystyle  \ll_A m^{-A}

for any {1 \leq m \leq n} and any {A > 0}. This type of stabilisation is plausible from entropy heuristics – the tuple {(\mathbf{a}_1,\dots,\mathbf{a}_n)} of geometric random variables that generates {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} has Shannon entropy {n \log 4}, which is significantly larger than the total entropy {n \log 3} of the uniform distribution on {{\bf Z}/3^n{\bf Z}}, so we expect a lot of “mixing” and “collision” to occur when converting the tuple {(\mathbf{a}_1,\dots,\mathbf{a}_n)} to {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})}; these heuristics can be supported by numerics (which I was able to work out up to about {n=10} before running into memory and CPU issues), but it turns out to be surprisingly delicate to make this precise.
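For readers who wish to reproduce this sort of numerics, one convenient approach is to compute the law of {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} by dynamic programming, using the recursion {X_k = 2^{-\mathbf{a}} (1 + 3 X_{k-1}) \hbox{ mod } 3^n} implicit in (1). The following is a rough sketch (with the geometric variables truncated at a large cutoff; I make no claim that this is the code used for the numerics mentioned above):

```python
import numpy as np

def syracuse_dist(n, amax=60):
    """Approximate law of Syrac(Z/3^n Z); each geometric variable is
    truncated at amax, neglecting a tail of mass about 2^{-amax}."""
    mod = 3 ** n
    inv2 = pow(2, -1, mod)          # 2^{-1} mod 3^n (Python 3.8+)
    dist = np.zeros(mod)
    dist[0] = 1.0                   # X_0 = 0
    support = 3 * np.arange(mod) + 1
    for _ in range(n):              # apply X -> 2^{-a} (1 + 3 X) a total of n times
        new = np.zeros(mod)
        for a in range(1, amax + 1):
            np.add.at(new, (pow(inv2, a, mod) * support) % mod, 2.0 ** -a * dist)
        dist = new
    return dist

def tv_discrepancy(m, n):
    """Left-hand side of (2): distance between the mod 3^n law and the
    rescaled mod 3^m law."""
    dn, dm = syracuse_dist(n), syracuse_dist(m)
    return np.abs(dn - np.tile(dm, 3 ** (n - m)) / 3 ** (n - m)).sum()

print(tv_discrepancy(3, 6))  # (2) predicts this decays faster than any power of m
```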

A first hint of how to proceed comes from the elementary number theory observation (easily proven by induction) that the rational numbers

\displaystyle  2^{-a_1} + 3^1 2^{-a_1-a_2} + \dots + 3^{n-1} 2^{-a_1-\dots-a_n}

are all distinct as {(a_1,\dots,a_n)} vary over tuples in {({\bf N}+1)^n}. Unfortunately, the process of reducing mod {3^n} creates a lot of collisions (as must happen from the pigeonhole principle); however, by a simple “Lefschetz principle” type argument one can at least show that the reductions

\displaystyle  2^{-a_1} + 3^1 2^{-a_1-a_2} + \dots + 3^{m-1} 2^{-a_1-\dots-a_m} \hbox{ mod } 3^n \ \ \ \ \ (3)

are mostly distinct for “typical” {a_1,\dots,a_m} (as drawn using the geometric distribution) as long as {m} is a bit smaller than {\frac{\log 3}{\log 4} n} (basically because the rational number appearing in (3) then typically takes a form like {M/2^{2m}} with {M} an integer between {0} and {3^n}). This analysis of the component (3) of (1) is already enough to get quite a bit of spreading on { \mathbf{Syrac}({\bf Z}/3^n{\bf Z})} (roughly speaking, when the argument is optimised, it shows that this random variable cannot concentrate in any subset of {{\bf Z}/3^n{\bf Z}} of density less than {n^{-C}} for some large absolute constant {C>0}). To get from this to a stabilisation property (2) we have to exploit the mixing effects of the remaining portion of (1) that does not come from (3). After some standard Fourier-analytic manipulations, matters then boil down to obtaining non-trivial decay of the characteristic function of {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})}, and more precisely in showing that

\displaystyle  \mathbb{E} e^{-2\pi i \xi \mathbf{Syrac}({\bf Z}/3^n{\bf Z}) / 3^n} \ll_A n^{-A} \ \ \ \ \ (4)

for any {A > 0} and any {\xi \in {\bf Z}/3^n{\bf Z}} that is not divisible by {3}.

If the random variable (1) were the sum of independent terms, one could express this characteristic function as something like a Riesz product, which would be straightforward to estimate well. Unfortunately, the terms in (1) are loosely coupled together, and so the characteristic function does not immediately factor into a Riesz product. However, if one groups adjacent terms in (1) together, one can rewrite it (assuming {n} is even for sake of discussion) as

\displaystyle  (2^{\mathbf{a}_2} + 3) 2^{-\mathbf{b}_1} + (2^{\mathbf{a}_4}+3) 3^2 2^{-\mathbf{b}_1-\mathbf{b}_2} + \dots

\displaystyle  + (2^{\mathbf{a}_n}+3) 3^{n-2} 2^{-\mathbf{b}_1-\dots-\mathbf{b}_{n/2}} \hbox{ mod } 3^n

where {\mathbf{b}_j := \mathbf{a}_{2j-1} + \mathbf{a}_{2j}}. The point here is that after conditioning on the {\mathbf{b}_1,\dots,\mathbf{b}_{n/2}} to be fixed, the random variables {\mathbf{a}_2, \mathbf{a}_4,\dots,\mathbf{a}_n} remain independent (though the distribution of each {\mathbf{a}_{2j}} depends on the value that we conditioned {\mathbf{b}_j} to), and so the above expression is a conditional sum of independent random variables. This lets one express the characteristic function of (1) as an averaged Riesz product. One can use this to establish the bound (4) as long as one can show that the expression

\displaystyle  \frac{\xi 3^{2j-2} (2^{-\mathbf{b}_1-\dots-\mathbf{b}_j+1} \mod 3^n)}{3^n}

is not close to an integer for a moderately large number ({\gg A \log n}, to be precise) of indices {j = 1,\dots,n/2}. (Actually, for technical reasons we have to also restrict to those {j} for which {\mathbf{b}_j=3}, but let us ignore this detail here.) To put it another way, if we let {B} denote the set of pairs {(j,l)} for which

\displaystyle  \frac{\xi 3^{2j-2} (2^{-l+1} \mod 3^n)}{3^n} \in [-\varepsilon,\varepsilon] + {\bf Z},

we have to show that (with overwhelming probability) the random walk

\displaystyle (1,\mathbf{b}_1), (2, \mathbf{b}_1 + \mathbf{b}_2), \dots, (n/2, \mathbf{b}_1+\dots+\mathbf{b}_{n/2})

(which we view as a two-dimensional renewal process) contains at least a few points lying outside of {B}.

A little bit of elementary number theory and combinatorics allows one to describe the set {B} as the union of “triangles” with a certain non-zero separation between them. If the triangles were all fairly small, then one expects the renewal process to visit at least one point outside of {B} after passing through any given such triangle, and it then becomes relatively easy to show that the renewal process usually has the required number of points outside of {B}. The most difficult case is when the renewal process passes through a particularly large triangle in {B}. However, it turns out that large triangles enjoy particularly good separation properties, and in particular after passing through a large triangle one is likely to encounter nothing but small triangles for a while. After making these heuristics more precise, one is finally able to get enough points on the renewal process outside of {B} that one can finish the proof of (4), and thus Theorem 2.

In the fall quarter (starting Sep 27) I will be teaching a graduate course on analytic prime number theory.  This will be similar to a graduate course I taught in 2015, and in particular will reuse several of the lecture notes from that course, though it will also incorporate some new material (and omit some material covered in the previous course, to compensate).  I anticipate covering the following topics:

  1. Elementary multiplicative number theory
  2. Complex-analytic multiplicative number theory
  3. The entropy decrement argument
  4. Bounds for exponential sums
  5. Zero density theorems
  6. Halasz’s theorem and the Matomaki-Radziwill theorem
  7. The circle method
  8. (If time permits) Chowla’s conjecture and the Erdos discrepancy problem

Lecture notes for topics 3, 6, and 8 will be forthcoming.

 

William Banks, Kevin Ford, and I have just uploaded to the arXiv our paper “Large prime gaps and probabilistic models“. In this paper we introduce a random model to help understand the connection between two well known conjectures regarding the primes {{\mathcal P} := \{2,3,5,\dots\}}, the Cramér conjecture and the Hardy-Littlewood conjecture:

Conjecture 1 (Cramér conjecture) If {x} is a large number, then the largest prime gap {G_{\mathcal P}(x) := \sup_{p_n, p_{n+1} \leq x} p_{n+1}-p_n} in {[1,x]} is of size {\asymp \log^2 x}. (Granville refines this conjecture to {\gtrsim \xi \log^2 x}, where {\xi := 2e^{-\gamma} = 1.1229\dots}. Here we use the asymptotic notation {X \gtrsim Y} for {X \geq (1-o(1)) Y}, {X \sim Y} for {X \gtrsim Y \gtrsim X}, {X \gg Y} for {X \geq C^{-1} Y}, and {X \asymp Y} for {X \gg Y \gg X}.)

Conjecture 2 (Hardy-Littlewood conjecture) If {\mathcal{H} := \{h_1,\dots,h_k\}} are fixed distinct integers, then the number of numbers {n \in [1,x]} with {n+h_1,\dots,n+h_k} all prime is {({\mathfrak S}(\mathcal{H}) +o(1)) \int_2^x \frac{dt}{\log^k t}} as {x \rightarrow \infty}, where the singular series {{\mathfrak S}(\mathcal{H})} is defined by the formula

\displaystyle {\mathfrak S}(\mathcal{H}) := \prod_p \left( 1 - \frac{|{\mathcal H} \hbox{ mod } p|}{p}\right) (1-\frac{1}{p})^{-k}.

(One can view these conjectures as modern versions of two of the classical Landau problems, namely Legendre’s conjecture and the twin prime conjecture respectively.)
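The singular series is easy to approximate numerically by truncating the Euler product; for instance, the following sketch (illustrative, and not from the paper) computes {{\mathfrak S}(\{0,2\})}, which is twice the twin prime constant, {1.3203\dots}:

```python
from sympy import primerange

def singular_series(H, pmax=10**6):
    """Euler product for the singular series S(H), truncated at pmax."""
    k, s = len(H), 1.0
    for p in primerange(2, pmax):
        s *= (1 - len({h % p for h in H}) / p) * (1 - 1 / p) ** (-k)
    return s

print(singular_series((0, 2)))  # ~1.3203..., twice the twin prime constant
```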

A well known connection between the Hardy-Littlewood conjecture and prime gaps was made by Gallagher. Among other things, Gallagher showed that if the Hardy-Littlewood conjecture was true, then the prime gaps {p_{n+1}-p_n} with {n \leq x} were asymptotically distributed according to an exponential distribution of mean {\log x}, in the sense that

\displaystyle | \{ n: p_n \leq x, p_{n+1}-p_n \geq \lambda \log x \}| = (e^{-\lambda}+o(1)) \frac{x}{\log x} \ \ \ \ \ (1)

 

as {x \rightarrow \infty} for any fixed {\lambda \geq 0}. Roughly speaking, the way this is established is by using the Hardy-Littlewood conjecture to control the mean values of {\binom{|{\mathcal P} \cap (p_n, p_n + \lambda \log x)|}{k}} for fixed {k,\lambda}, where {p_n} ranges over the primes in {[1,x]}. The relevance of these quantities arises from the Bonferroni inequalities (or “Brun pure sieve“), which can be formulated as the assertion that

\displaystyle 1_{N=0} \leq \sum_{k=0}^K (-1)^k \binom{N}{k}

when {K} is even and

\displaystyle 1_{N=0} \geq \sum_{k=0}^K (-1)^k \binom{N}{k}

when {K} is odd, for any natural number {N}; setting {N := |{\mathcal P} \cap (p_n, p_n + \lambda \log x)|} and taking means, one then gets upper and lower bounds for the probability that the interval {(p_n, p_n + \lambda \log x)} is free of primes. The most difficult step is to control the mean values of the singular series {{\mathfrak S}(\mathcal{H})} as {{\mathcal H}} ranges over {k}-tuples in a fixed interval such as {[0, \lambda \log x]}.
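The Bonferroni inequalities themselves are elementary, and can be verified directly for small parameters; a purely illustrative check:

```python
from math import comb

def bonferroni(N, K):
    """Partial inclusion-exclusion sum sum_{k <= K} (-1)^k C(N, k)."""
    return sum((-1) ** k * comb(N, k) for k in range(K + 1))

for N in range(6):
    for K in range(7):
        s, ind = bonferroni(N, K), int(N == 0)
        # upper bound for 1_{N=0} when K is even, lower bound when K is odd
        assert s >= ind if K % 2 == 0 else s <= ind
```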

Heuristically, if one extrapolates the asymptotic (1) to the regime {\lambda \asymp \log x}, one is then led to Cramér’s conjecture, since the right-hand side of (1) falls below {1} when {\lambda} is significantly larger than {\log x}. However, this is not a rigorous derivation of Cramér’s conjecture from the Hardy-Littlewood conjecture, since Gallagher’s computations only establish (1) for fixed choices of {\lambda}, which is only enough to establish the far weaker bound {G_{\mathcal P}(x) / \log x \rightarrow \infty}, which was already known (see this previous paper for a discussion of the best known unconditional lower bounds on {G_{\mathcal P}(x)}). An inspection of the argument shows that if one wished to extend (1) to parameter choices {\lambda} that were allowed to grow with {x}, then one would need as input a stronger version of the Hardy-Littlewood conjecture in which the length {k} of the tuple {{\mathcal H} = (h_1,\dots,h_k)}, as well as the magnitudes of the shifts {h_1,\dots,h_k}, were also allowed to grow with {x}. Our initial objective in this project was then to quantify exactly what strengthening of the Hardy-Littlewood conjecture would be needed to rigorously imply Cramér’s conjecture. The precise results are technical, but roughly we show results of the following form:

Theorem 3 (Large gaps from Hardy-Littlewood, rough statement)

  • If the Hardy-Littlewood conjecture is uniformly true for {k}-tuples of length {k \ll \frac{\log x}{\log\log x}}, and with shifts {h_1,\dots,h_k} of size {O( \log^2 x )}, with a power savings in the error term, then {G_{\mathcal P}(x) \gg \frac{\log^2 x}{\log\log x}}.
  • If the Hardy-Littlewood conjecture is “true on average” for {k}-tuples of length {k \ll \frac{y}{\log x}} and shifts {h_1,\dots,h_k} of size {y} for all {\log x \leq y \leq \log^2 x \log\log x}, with a power savings in the error term, then {G_{\mathcal P}(x) \gg \log^2 x}.

In particular, we can recover Cramér’s conjecture given a sufficiently powerful version of the Hardy-Littlewood conjecture “on the average”.

Our proof of this theorem proceeds more or less along the same lines as Gallagher’s calculation, but now with {k} allowed to grow slowly with {x}. Again, the main difficulty is to accurately estimate average values of the singular series {{\mathfrak S}({\mathcal H})}. Here we found it useful to switch to a probabilistic interpretation of this series. For technical reasons it is convenient to work with a truncated, unnormalised version

\displaystyle V_{\mathcal H}(z) := \prod_{p \leq z} \left( 1 - \frac{|{\mathcal H} \hbox{ mod } p|}{p} \right)

of the singular series, for a suitable cutoff {z}; it turns out that when studying prime tuples of size {t}, the most convenient cutoff {z(t)} is the “Pólya magic cutoff“, defined as the largest prime for which

\displaystyle \prod_{p \leq z(t)}(1-\frac{1}{p}) \geq \frac{1}{\log t} \ \ \ \ \ (2)

 

(this is well defined for {t \geq e^2}); by Mertens’ theorem, we have {z(t) \sim t^{1/e^\gamma}}. One can interpret {V_{\mathcal H}(z)} probabilistically as

\displaystyle V_{\mathcal H}(z) = \mathbf{P}( {\mathcal H} \subset \mathcal{S}_z )

where {\mathcal{S}_z \subset {\bf Z}} is the randomly sifted set of integers formed by removing one residue class {a_p \hbox{ mod } p} uniformly at random for each prime {p \leq z}. The Hardy-Littlewood conjecture can be viewed as an assertion that the primes {{\mathcal P}} behave in some approximate statistical sense like the random sifted set {\mathcal{S}_z}, and one can prove the above theorem by using the Bonferroni inequalities both for the primes {{\mathcal P}} and for the random sifted set, and comparing the two (using an even {K} for the sifted set and an odd {K} for the primes in order to be able to combine the two together to get a useful bound).
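Pólya’s magic cutoff is also straightforward to compute numerically; here is a small sketch (not from the paper) that also compares it with the Mertens prediction {z(t) \sim t^{1/e^\gamma}}:

```python
import math
from sympy import primerange

def polya_cutoff(t):
    """Largest prime z with prod_{p <= z} (1 - 1/p) >= 1/log t, as in (2)."""
    prod, z = 1.0, None
    for p in primerange(2, 10**7):
        if prod * (1 - 1 / p) < 1 / math.log(t):
            return z
        prod *= 1 - 1 / p
        z = p

EULER_GAMMA = 0.5772156649015329
t = 10.0**6
print(polya_cutoff(t), t ** math.exp(-EULER_GAMMA))  # roughly comparable
```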

The proof of Theorem 3 ended up not using any properties of the set of primes {{\mathcal P}} other than that this set obeyed some form of the Hardy-Littlewood conjectures; the theorem remains true (with suitable notational changes) if this set were replaced by any other set. In order to convince ourselves that our theorem was not vacuous due to our version of the Hardy-Littlewood conjecture being too strong to be true, we then started exploring the question of coming up with random models of {{\mathcal P}} which obeyed various versions of the Hardy-Littlewood and Cramér conjectures.

This line of inquiry was started by Cramér, who introduced what we now call the Cramér random model {{\mathcal C}} of the primes, in which each natural number {n \geq 3} is selected for membership in {{\mathcal C}} with an independent probability of {1/\log n}. This model matches the primes well in some respects; for instance, it almost surely obeys the “Riemann hypothesis”

\displaystyle | {\mathcal C} \cap [1,x] | = \int_2^x \frac{dt}{\log t} + O( x^{1/2+o(1)})

and Cramér also showed that the largest gap {G_{\mathcal C}(x)} was almost surely {\sim \log^2 x}. On the other hand, it does not obey the Hardy-Littlewood conjecture; more precisely, it obeys a simplified variant of that conjecture in which the singular series {{\mathfrak S}({\mathcal H})} is absent.
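The Cramér model is also trivial to simulate, and single samples do exhibit largest gaps of the predicted order; a quick illustrative sketch:

```python
import math, random

def cramer_model(x, seed=0):
    """One sample of the Cramer random set C: each n >= 3 is included
    independently with probability 1/log n."""
    rng = random.Random(seed)
    return [n for n in range(3, x) if rng.random() < 1 / math.log(n)]

x = 10**6
C = cramer_model(x)
G = max(b - a for a, b in zip(C, C[1:]))
print(G / math.log(x) ** 2)  # almost surely tends to 1 (slowly) as x -> infinity
```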

Granville proposed a refinement {{\mathcal G}} to Cramér’s random model {{\mathcal C}} in which one first sieves out (in each dyadic interval {[x,2x]}) all residue classes {0 \hbox{ mod } p} for {p \leq A} for a certain threshold {A = \log^{1-o(1)} x = o(\log x)}, and then places each surviving natural number {n} in {{\mathcal G}} with an independent probability {\frac{1}{\log n} \prod_{p \leq A} (1-\frac{1}{p})^{-1}}. One can verify that this model obeys the Hardy-Littlewood conjectures, and Granville showed that the largest gap {G_{\mathcal G}(x)} in this model was almost surely {\gtrsim \xi \log^2 x}, leading to his conjecture that this bound also was true for the primes. (Interestingly, this conjecture is not yet borne out by numerics; calculations of prime gaps up to {10^{18}}, for instance, have shown that {G_{\mathcal P}(x)} never exceeds {0.9206 \log^2 x} in this range. This is not necessarily a conflict, however; Granville’s analysis relies on inspecting gaps in an extremely sparse region of natural numbers that are more devoid of primes than average, and this region is not well explored by existing numerics. See this previous blog post for more discussion of Granville’s argument.)

However, Granville’s model does not produce a power savings in the error term of the Hardy-Littlewood conjectures, mostly due to the need to truncate the singular series at the logarithmic cutoff {A}. After some experimentation, we were able to produce a tractable random model {{\mathcal R}} for the primes which obeyed the Hardy-Littlewood conjectures with power savings, and which reproduced Granville’s gap prediction of {\gtrsim \xi \log^2 x} (we also get an upper bound of {\lesssim \xi \log^2 x \frac{\log\log x}{2 \log\log\log x}} for both models, though we expect the lower bound to be closer to the truth); to us, this strengthens the case for Granville’s version of Cramér’s conjecture. The model can be described as follows. We select one residue class {a_p \hbox{ mod } p} uniformly at random for each prime {p}, and as before we let {\mathcal{S}_z} be the sifted set of integers formed by deleting the residue classes {a_p \hbox{ mod } p} with {p \leq z}. We then set

\displaystyle {\mathcal R} := \{ n \geq e^2: n \in \mathcal{S}_{z(n)}\}

with {z(n)} Pólya’s magic cutoff (this is the cutoff that gives {{\mathcal R}} a density consistent with the prime number theorem or the Riemann hypothesis). As stated above, we are able to show that almost surely one has

\displaystyle \xi \log^2 x \lesssim G_{\mathcal R}(x) \lesssim \xi \log^2 x \frac{\log\log x}{2 \log\log\log x} \ \ \ \ \ (3)

 

and that the Hardy-Littlewood conjectures hold with power savings for {k} up to {\log^c x} for any fixed {c < 1} and for shifts {h_1,\dots,h_k} of size {O(\log^c x)}. This is unfortunately a tiny bit weaker than what Theorem 3 requires (which more or less corresponds to the endpoint {c=1}), although there is a variant of Theorem 3 that can use this input to produce a lower bound on gaps in the model {{\mathcal R}} (but it is weaker than the one in (3)). In fact we prove a more precise almost sure asymptotic formula for {G_{\mathcal R}(x)} that involves the optimal bounds for the linear sieve (or interval sieve), in which one deletes one residue class modulo {p} from an interval {[0,y]} for all primes {p} up to a given threshold. The lower bound in (3) relates to the case of deleting the {0 \hbox{ mod } p} residue classes from {[0,y]}; the upper bound comes from the delicate analysis of the linear sieve by Iwaniec. Improving on either of the two bounds looks to be quite a difficult problem.
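For illustration, here is one crude way to draw a single sample of the model {{\mathcal R}} (a sketch only: it recomputes the Mertens product for each {n}, makes no attempt at efficiency, and should not be confused with the actual computations in the paper):

```python
import math, random
from sympy import primerange

def sample_model_R(x, seed=0):
    """One sample of R up to x: delete a random class a_p mod p for each
    prime p, and keep n >= 8 > e^2 iff n avoids a_p mod p for all p <= z(n)."""
    rng = random.Random(seed)
    primes = list(primerange(2, int(x ** 0.6) + 10))  # enough to cover z(x)
    a = {p: rng.randrange(p) for p in primes}
    R = []
    for n in range(8, x + 1):
        prod, keep = 1.0, True
        for p in primes:
            if prod * (1 - 1 / p) < 1 / math.log(n):  # p exceeds z(n): stop
                break
            prod *= 1 - 1 / p
            if n % p == a[p]:
                keep = False
                break
        if keep:
            R.append(n)
    return R

R = sample_model_R(10**5)
print(len(R), max(b - a for a, b in zip(R, R[1:])))  # density and largest gap
```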

The probabilistic analysis of {{\mathcal R}} is somewhat more complicated than that of {{\mathcal C}} or {{\mathcal G}}, as there is now non-trivial coupling between the events {n \in {\mathcal R}} as {n} varies, although moment methods such as the second moment method are still viable and allow one to verify the Hardy-Littlewood conjectures by a lengthy but fairly straightforward calculation. To analyse large gaps, one has to understand the statistical behaviour of a random linear sieve in which one starts with an interval {[0,y]} and randomly deletes a residue class {a_p \hbox{ mod } p} for each prime {p} up to a given threshold. For very small {p} this is handled by the deterministic theory of the linear sieve as discussed above. For medium sized {p}, it turns out that there is good concentration of measure thanks to tools such as Bennett’s inequality or Azuma’s inequality, as one can view the sieving process as a martingale or (approximately) as a sum of independent random variables. For larger primes {p}, in which only a small number of survivors are expected to be sieved out by each residue class, a direct combinatorial calculation of all possible outcomes (involving the random graph that connects interval elements {n \in [0,y]} to primes {p} if {n} falls in the random residue class {a_p \hbox{ mod } p}) turns out to give the best results.

I’ve just uploaded to the arXiv my paper “Quantitative bounds for critically bounded solutions to the Navier-Stokes equations“, submitted to the proceedings of the Linde Hall Inaugural Math Symposium. (I unfortunately had to cancel my physical attendance at this symposium for personal reasons, but was still able to contribute to the proceedings.) In recent years I have been interested in working towards establishing the existence of classical solutions for the Navier-Stokes equations

\displaystyle \partial_t u + (u \cdot \nabla) u = \Delta u - \nabla p

\displaystyle \nabla \cdot u = 0

that blow up in finite time, but this time for a change I took a look at the other side of the theory, namely the conditional regularity results for this equation. There are several such results that assert that if a certain norm of the solution stays bounded (or grows at a controlled rate), then the solution stays regular; taken in the contrapositive, they assert that if a solution blows up at a certain finite time {T_*}, then certain norms of the solution must also go to infinity. Here are some examples (not an exhaustive list) of such blowup criteria:

  • (Leray blowup criterion, 1934) If {u} blows up at a finite time {T_*}, and {3 < p \leq \infty}, then {\liminf_{t \rightarrow T_*} (T_* - t)^{\frac{1}{2}-\frac{3}{2p}} \| u(t) \|_{L^p_x({\bf R}^3)} \geq c} for an absolute constant {c>0}.
  • (Prodi-Serrin-Ladyzhenskaya blowup criterion, 1959-1967) If {u} blows up at a finite time {T_*}, and {3 < p \leq \infty}, then {\| u \|_{L^q_t L^p_x([0,T_*) \times {\bf R}^3)} =+\infty}, where {\frac{1}{q} := \frac{1}{2} - \frac{3}{2p}}.
  • (Beale-Kato-Majda blowup criterion, 1984) If {u} blows up at a finite time {T_*}, then {\| \omega \|_{L^1_t L^\infty_x([0,T_*) \times {\bf R}^3)} = +\infty}, where {\omega := \nabla \times u} is the vorticity.
  • (Kato blowup criterion, 1984) If {u} blows up at a finite time {T_*}, then {\liminf_{t \rightarrow T_*} \|u(t) \|_{L^3_x({\bf R}^3)} \geq c} for some absolute constant {c>0}.
  • (Escauriaza-Seregin-Sverak blowup criterion, 2003) If {u} blows up at a finite time {T_*}, then {\limsup_{t \rightarrow T_*} \|u(t) \|_{L^3_x({\bf R}^3)} = +\infty}.
  • (Seregin blowup criterion, 2012) If {u} blows up at a finite time {T_*}, then {\lim_{t \rightarrow T_*} \|u(t) \|_{L^3_x({\bf R}^3)} = +\infty}.
  • (Phuc blowup criterion, 2015) If {u} blows up at a finite time {T_*}, then {\limsup_{t \rightarrow T_*} \|u(t) \|_{L^{3,q}_x({\bf R}^3)} = +\infty} for any {q < \infty}.
  • (Gallagher-Koch-Planchon blowup criterion, 2016) If {u} blows up at a finite time {T_*}, then {\limsup_{t \rightarrow T_*} \|u(t) \|_{\dot B_{p,q}^{-1+3/p}({\bf R}^3)} = +\infty} for any {3 < p, q < \infty}.
  • (Albritton blowup criterion, 2016) If {u} blows up at a finite time {T_*}, then {\lim_{t \rightarrow T_*} \|u(t) \|_{\dot B_{p,q}^{-1+3/p}({\bf R}^3)} = +\infty} for any {3 < p, q < \infty}.

My current paper is most closely related to the Escauriaza-Seregin-Sverak blowup criterion, which was the first to show a critical (i.e., scale-invariant, or dimensionless) spatial norm, namely {L^3_x({\bf R}^3)}, had to become large. This result now has many proofs; for instance, many of the subsequent blowup criterion results imply the Escauriaza-Seregin-Sverak one as a special case, and there are also additional proofs by Gallagher-Koch-Planchon (building on ideas of Kenig-Koch), and by Dong-Du. However, all of these proofs rely on some form of a compactness argument: given a finite time blowup, one extracts some suitable family of rescaled solutions that converges in some weak sense to a limiting solution that has some additional good properties (such as almost periodicity modulo symmetries), which one can then rule out using additional qualitative tools, such as unique continuation and backwards uniqueness theorems for parabolic heat equations. In particular, all known proofs use some version of the backwards uniqueness theorem of Escauriaza, Seregin, and Sverak. Because of this reliance on compactness, the existing proofs of the Escauriaza-Seregin-Sverak blowup criterion are qualitative, in that they do not provide any quantitative information on how fast the {\|u(t)\|_{L^3_x({\bf R}^3)}} norm will go to infinity (along a subsequence of times).

On the other hand, it is a general principle that qualitative arguments established using compactness methods ought to have quantitative analogues that replace the use of compactness by more complicated substitutes that give effective bounds; see for instance these previous blog posts for more discussion. I therefore was interested in trying to obtain a quantitative version of this blowup criterion that gave reasonably good effective bounds (in particular, my objective was to avoid truly enormous bounds such as tower-exponential or Ackermann function bounds, which often arise if one “naively” tries to make a compactness argument effective). In particular, I obtained the following triple-exponential quantitative regularity bounds:

Theorem 1 If {u} is a classical solution to Navier-Stokes on {[0,T) \times {\bf R}^3} with

\displaystyle \| u \|_{L^\infty_t L^3_x([0,T) \times {\bf R}^3)} \leq A \ \ \ \ \ (1)

 

for some {A \geq 2}, then

\displaystyle | \nabla^j u(t,x)| \leq \exp\exp\exp(A^{O(1)}) t^{-\frac{j+1}{2}}

and

\displaystyle | \nabla^j \omega(t,x)| \leq \exp\exp\exp(A^{O(1)}) t^{-\frac{j+2}{2}}

for {(t,x) \in [0,T) \times {\bf R}^3} and {j=0,1}.

As a corollary, one can now improve the Escauriaza-Seregin-Sverak blowup criterion to

\displaystyle \limsup_{t \rightarrow T_*} \frac{\|u(t) \|_{L^3_x({\bf R}^3)}}{(\log\log\log \frac{1}{T_*-t})^c} = +\infty

for some absolute constant {c>0}, which to my knowledge is the first (very slightly) supercritical blowup criterion for Navier-Stokes in the literature.

The proof uses many of the same quantitative inputs as previous arguments, most notably the Carleman inequalities used to establish unique continuation and backwards uniqueness theorems for backwards heat equations, but also some additional techniques that make the quantitative bounds more efficient. The proof focuses initially on points of concentration of the solution, which we define as points {(t_0,x_0)} where there is a frequency {N_0} for which one has the bound

\displaystyle |N_0^{-1} P_{N_0} u(t_0,x_0)| \geq A^{-C_0} \ \ \ \ \ (2)

 

for a large absolute constant {C_0}, where {P_{N_0}} is a Littlewood-Paley projection to frequencies {\sim N_0}. (This can be compared with the upper bound of {O(A)} for the quantity on the left-hand side that follows from (1).) The factor of {N_0^{-1}} normalises the left-hand side of (2) to be dimensionless (i.e., critical). The main task is to show that the dimensionless quantity {t_0 N_0^2} cannot get too large; in particular, we end up establishing a bound of the form

\displaystyle t_0 N_0^2 \lesssim \exp\exp\exp A^{O(C_0^6)}

from which the above theorem ends up following from a routine adaptation of the local well-posedness and regularity theory for Navier-Stokes.
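For readers less familiar with the notation, {P_{N_0}} is a Fourier multiplier restricting to frequencies {|\xi| \sim N_0}; the following is a minimal sketch of such a projection on a periodic grid, using a sharp annular cutoff rather than the smooth bump used in the actual argument (illustrative only):

```python
import numpy as np

def lp_projection(u, N0):
    """Rough Littlewood-Paley projection of a periodic field u on [0, 2pi)^3
    to the frequency annulus N0/2 < |k| <= N0 (sharp cutoff, for illustration)."""
    n = u.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)  # integer frequencies
    KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    mask = (k2 > (N0 / 2) ** 2) & (k2 <= N0**2)
    return np.real(np.fft.ifftn(np.fft.fftn(u) * mask))

u = np.random.default_rng(0).standard_normal((32, 32, 32))
print(np.abs(lp_projection(u, 8)).max())
```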

The strategy is to show that any concentration such as (2) when {t_0 N_0^2} is large must force a significant component of the {L^3_x} norm of {u(t_0)} to also show up at many other locations than {x_0}, which eventually contradicts (1) if one can produce enough such regions of non-trivial {L^3_x} norm. (This can be viewed as a quantitative variant of the “rigidity” theorems in some of the previous proofs of the Escauriaza-Seregin-Sverak theorem that rule out solutions that exhibit too much “compactness” or “almost periodicity” in the {L^3_x} topology.) The chain of causality that leads from a concentration (2) at {(t_0,x_0)} to significant {L^3_x} norm at other regions of the time slice {t_0 \times {\bf R}^3} is somewhat involved (though simpler than the much more convoluted schemes I initially envisaged for this argument):

  1. Firstly, by using Duhamel’s formula, one can show that a concentration (2) can only occur (with {t_0 N_0^2} large) if there was also a preceding concentration

    \displaystyle |N_1^{-1} P_{N_1} u(t_1,x_1)| \geq A^{-C_0} \ \ \ \ \ (3)

     

    at some slightly previous point {(t_1,x_1)} in spacetime, with {N_1} also close to {N_0} (more precisely, we have {t_1 = t_0 - A^{-O(C_0^3)} N_0^{-2}}, {N_1 = A^{O(C_0^2)} N_0}, and {x_1 = x_0 + O( A^{O(C_0^4)} N_0^{-1})}). This can be viewed as a sort of contrapositive of a “local regularity theorem”, such as the ones established by Caffarelli, Kohn, and Nirenberg. A key point here is that the lower bound {A^{-C_0}} in the conclusion (3) is precisely the same as the lower bound in (2), so that this backwards propagation of concentration can be iterated.

  2. Iterating the previous step, one can find a sequence of concentration points

    \displaystyle |N_n^{-1} P_{N_n} u(t_n,x_n)| \geq A^{-C_0} \ \ \ \ \ (4)

     

    with the {(t_n,x_n)} propagating backwards in time; by using estimates ultimately resulting from the dissipative term in the energy identity, one can extract such a sequence in which the {t_0-t_n} increase geometrically with time, the {N_n} are comparable (up to polynomial factors in {A}) to the natural frequency scale {(t_0-t_n)^{-1/2}}, and one has {x_n = x_0 + O( |t_0-t_n|^{1/2} )}. Using the “epochs of regularity” theory that ultimately dates back to Leray, and tweaking the {t_n} slightly, one can also place the times {t_n} in intervals {I_n} (of length comparable to a small multiple of {|t_0-t_n|}) in which the solution is quite regular (in particular, {u, \nabla u, \omega, \nabla \omega} enjoy good {L^\infty_t L^\infty_x} bounds on {I_n \times {\bf R}^3}).

  3. The concentration (4) can be used to establish a lower bound for the {L^2_t L^2_x} norm of the vorticity {\omega} near {(t_n,x_n)}. As is well known, the vorticity obeys the vorticity equation

    \displaystyle \partial_t \omega = \Delta \omega - (u \cdot \nabla) \omega + (\omega \cdot \nabla) u.

    In the epoch of regularity {I_n \times {\bf R}^3}, the coefficients {u, \nabla u} of this equation obey good {L^\infty_x} bounds, allowing the machinery of Carleman estimates to come into play. Using a Carleman estimate that is used to establish unique continuation results for backwards heat equations, one can propagate this lower bound to also give lower {L^2} bounds on the vorticity (and its first derivative) in annuli of the form {\{ (t,x) \in I_n \times {\bf R}^3: R \leq |x-x_n| \leq R' \}} for various radii {R,R'}, although the lower bounds decay at a gaussian rate with {R}.

  4. Meanwhile, using an energy pigeonholing argument of Bourgain (which, in this Navier-Stokes context, is actually an enstrophy pigeonholing argument), one can locate some annuli {\{ x \in {\bf R}^3: R \leq |x-x_n| \leq R'\}} where (a slightly normalised form of) the enstrophy is small at time {t=t_n}; using a version of the localised enstrophy estimates from a previous paper of mine, one can then propagate this sort of control forward in time, obtaining an “annulus of regularity” of the form {\{ (t,x) \in [t_n,t_0] \times {\bf R}^3: R_n \leq |x-x_n| \leq R'_n\}} in which one has good estimates; in particular, one has {L^\infty_x} type bounds on {u, \nabla u, \omega, \nabla \omega} in this cylindrical annulus.
  5. By intersecting the previous epoch of regularity {I_n \times {\bf R}^3} with the above annulus of regularity, we have some lower bounds on the {L^2} norm of the vorticity (and its first derivative) in the annulus of regularity. Using a Carleman estimate first introduced by Escauriaza, Seregin, and Sverak, as well as a second application of the Carleman estimate used previously, one can then propagate this lower bound back up to time {t=t_0}, establishing a lower bound for the vorticity on the spatial annulus {\{ (t_0,x): R_n \leq |x-x_n| \leq R'_n \}}. By some basic Littlewood-Paley theory one can parlay this lower bound to a lower bound on the {L^3} norm of the velocity {u}; crucially, this lower bound is uniform in {n}.
  6. If {t_0 N_0^2} is very large (triple exponential in {A}!), one can then find enough scales {n} with disjoint {\{ (t_0,x): R_n \leq |x-x_n| \leq R'_n \}} annuli that the total lower bound on the {L^3_x} norm of {u(t_0)} provided by the above arguments is inconsistent with (1), thus establishing the claim.

The chain of causality is summarised in the following image:

[Figure: “scheme” — a schematic diagram summarising the chain of causality in the above argument.]

It seems natural to conjecture that similar triply logarithmic improvements can be made to several of the other blowup criteria listed above, but I have not attempted to pursue this question. It seems difficult to improve the triple logarithmic factor using only the techniques here; the Bourgain pigeonholing argument inevitably costs one exponential, the Carleman inequalities cost a second, and the stacking of scales at the end to contradict the {L^3} upper bound costs the third.

 

Peter Denton, Stephen Parke, Xining Zhang, and I have just uploaded to the arXiv the short unpublished note “Eigenvectors from eigenvalues“. This note gives two proofs of a general eigenvector identity observed recently by Denton, Parke and Zhang in the course of some quantum mechanical calculations. The identity is as follows:

Theorem 1 Let {A} be an {n \times n} Hermitian matrix, with eigenvalues {\lambda_1(A),\dots,\lambda_n(A)}. Let {v_i} be a unit eigenvector corresponding to the eigenvalue {\lambda_i(A)}, and let {v_{i,j}} be the {j^{th}} component of {v_i}. Then

\displaystyle  |v_{i,j}|^2 \prod_{k=1; k \neq i}^n (\lambda_i(A) - \lambda_k(A)) = \prod_{k=1}^{n-1} (\lambda_i(A) - \lambda_k(M_j))

where {M_j} is the {n-1 \times n-1} Hermitian matrix formed by deleting the {j^{th}} row and column from {A}.

For instance, if we have

\displaystyle  A = \begin{pmatrix} a & X^* \\ X & M \end{pmatrix}

for some real number {a}, {n-1}-dimensional vector {X}, and {n-1 \times n-1} Hermitian matrix {M}, then we have

\displaystyle  |v_{i,1}|^2 = \frac{\prod_{k=1}^{n-1} (\lambda_i(A) - \lambda_k(M))}{\prod_{k=1; k \neq i}^n (\lambda_i(A) - \lambda_k(A))} \ \ \ \ \ (1)

assuming that the denominator is non-zero.
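The identity is easy to test numerically; for instance, the following quick sketch (with arbitrary illustrative choices of {n}, {i}, {j}) checks it on a random Hermitian matrix:

```python
import numpy as np

def check_identity(n=6, i=2, j=0, seed=0):
    """Compare both sides of |v_{i,j}|^2 prod_{k != i} (l_i(A) - l_k(A))
    = prod_k (l_i(A) - l_k(M_j)) for a random Hermitian A."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = (X + X.conj().T) / 2
    lam, V = np.linalg.eigh(A)  # columns of V are unit eigenvectors
    M = np.delete(np.delete(A, j, axis=0), j, axis=1)
    mu = np.linalg.eigvalsh(M)
    lhs = abs(V[j, i]) ** 2 * np.prod([lam[i] - lam[k] for k in range(n) if k != i])
    rhs = np.prod(lam[i] - mu)
    return lhs, rhs

print(check_identity())  # the two values agree up to rounding error
```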

Once one is aware of the identity, it is not so difficult to prove it; we give two proofs, each about half a page long, one of which is based on a variant of the Cauchy-Binet formula, and the other based on properties of the adjugate matrix. But perhaps it is surprising that such a formula exists at all; one does not normally expect to learn much information about eigenvectors purely from knowledge of eigenvalues. In the random matrix theory literature, for instance in this paper of Erdos, Schlein, and Yau, or this later paper of Van Vu and myself, a related identity has been used, namely

\displaystyle  |v_{i,1}|^2 = \frac{1}{1 + \| (M-\lambda_i(A))^{-1} X \|^2}, \ \ \ \ \ (2)

but it is not immediately obvious that one can derive the former identity from the latter. (I do so below the fold; we ended up not putting this proof in the note as it was longer than the two other proofs we found. I also give two other proofs below the fold, one from a more geometric perspective and one proceeding via Cramer’s rule.) It was certainly something of a surprise to me that there is no explicit appearance of the {a,X} components of {A} in the formula (1) (though they do indirectly appear through their effect on the eigenvalues {\lambda_k(A)}; for instance from taking traces one sees that {a = \sum_{k=1}^n \lambda_k(A) - \sum_{k=1}^{n-1} \lambda_k(M)}).

One can get some feeling of the identity (1) by considering some special cases. Suppose for instance that {A} is a diagonal matrix with all distinct entries. The upper left entry {a} of {A} is one of the eigenvalues of {A}. If it is equal to {\lambda_i(A)}, then the eigenvalues of {M} are the other {n-1} eigenvalues of {A}, and now the left and right-hand sides of (1) are equal to {1}. At the other extreme, if {a} is equal to a different eigenvalue of {A}, then {\lambda_i(A)} now appears as an eigenvalue of {M}, and both sides of (1) now vanish. More generally, if we order the eigenvalues {\lambda_1(A) \leq \dots \leq \lambda_n(A)} and {\lambda_1(M) \leq \dots \leq \lambda_{n-1}(M)}, then the Cauchy interlacing inequalities tell us that

\displaystyle  0 \leq \lambda_i(A) - \lambda_k(M) \leq \lambda_i(A) - \lambda_k(A)

for {1 \leq k < i}, and

\displaystyle  \lambda_i(A) - \lambda_{k+1}(A) \leq \lambda_i(A) - \lambda_k(M) < 0

for {i \leq k \leq n-1}, so that the right-hand side of (1) lies between {0} and {1}, which is of course consistent with (1) as {v_i} is a unit vector. Thus the identity relates the coefficient sizes of an eigenvector with the extent to which the Cauchy interlacing inequalities are sharp.


I have just uploaded to the arXiv my paper “Sharp bounds for multilinear curved Kakeya, restriction and oscillatory integral estimates away from the endpoint“, submitted to Mathematika. In this paper I return (after more than a decade’s absence) to one of my first research interests, namely the Kakeya and restriction family of conjectures. The starting point is the following “multilinear Kakeya estimate” first established in the non-endpoint case by Bennett, Carbery, and myself, and then in the endpoint case by Guth (with further proofs and extensions by Bourgain-Guth and Carbery-Valdimarsson):

Theorem 1 (Multilinear Kakeya estimate) Let {\delta > 0} be a radius. For each {j = 1,\dots,d}, let {\mathbb{T}_j} denote a finite family of infinite tubes {T_j} in {{\bf R}^d} of radius {\delta}. Assume the following axiom:

  • (i) (Transversality) whenever {T_j \in \mathbb{T}_j} is oriented in the direction of a unit vector {n_j} for {j =1,\dots,d}, we have

    \displaystyle  \left|\bigwedge_{j=1}^d n_j\right| \geq A^{-1}

    for some {A>0}, where we use the usual Euclidean norm on the wedge product {\bigwedge^d {\bf R}^d}.

Then, for any {p \geq \frac{1}{d-1}}, one has

\displaystyle  \left\| \prod_{j=1}^d \sum_{T_j \in \mathbb{T}_j} 1_{T_j} \right\|_{L^p({\bf R}^d)} \lesssim_{A,p} \delta^{\frac{d}{p}} \prod_{j \in [d]} \# \mathbb{T}_j. \ \ \ \ \ (1)

where {L^p({\bf R}^d)} are the usual Lebesgue norms with respect to Lebesgue measure, {1_{T_j}} denotes the indicator function of {T_j}, and {\# \mathbb{T}_j} denotes the cardinality of {\mathbb{T}_j}.
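As an aside, for {d} vectors in {{\bf R}^d} the norm of the wedge product appearing in the transversality axiom is just the absolute value of the determinant of the matrix whose rows are the {n_j}; in code, for illustration:

```python
import numpy as np

def transversality(directions):
    """|n_1 ^ ... ^ n_d| for d unit vectors in R^d: |det| of the row matrix."""
    return abs(np.linalg.det(np.asarray(directions)))

print(transversality(np.eye(3)))  # orthonormal directions give the maximal value 1
```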

The original proof of this proceeded using a heat flow monotonicity method, which in my previous post I reinterpreted using a “virtual integration” concept on a fractional Cartesian product space. It turns out that this machinery is somewhat flexible, and can be used to establish some other estimates of this type. The first result of this paper is to extend the above theorem to the curved setting, in which one localises to a ball of radius {O(1)} (and sets {\delta} to be small), but allows the tubes {T_j} to be curved in a {C^2} fashion. If one runs the heat flow monotonicity argument, one now picks up some additional error terms arising from the curvature, but as the spatial scale approaches zero, the tubes become increasingly linear, and as such the error terms end up being an integrable multiple of the main term, at which point one can conclude by Gronwall’s inequality (actually for technical reasons we use a bootstrap argument instead of Gronwall). A key point in this approach is that one obtains optimal bounds (not losing factors of {\delta^{-\varepsilon}} or {\log^{O(1)} \frac{1}{\delta}}), so long as one stays away from the endpoint case {p=\frac{1}{d-1}} (which does not seem to be easily treatable by the heat flow methods). Previously, the paper of Bennett, Carbery, and myself was able to use an induction on scale argument to obtain a curved multilinear Kakeya estimate losing a factor of {\log^{O(1)} \frac{1}{\delta}} (after optimising the argument); later arguments of Bourgain-Guth and Carbery-Valdimarsson, based on algebraic topology methods, could also obtain a curved multilinear Kakeya estimate without such losses, but only in the algebraic case when the tubes were neighbourhoods of algebraic curves of bounded degree.

Perhaps more interestingly, we are also able to extend the heat flow monotonicity method to apply directly to the multilinear restriction problem, giving the following global multilinear restriction estimate:

Theorem 2 (Multilinear restriction theorem) Let {\frac{1}{d-1} < p \leq \infty} be an exponent, and let {A \geq 2} be a parameter. Let {M} be a sufficiently large natural number, depending only on {d}. For {j \in [d]}, let {U_j} be an open subset of {B^{d-1}(0,A)}, and let {h_j: U_j \rightarrow {\bf R}} be a smooth function obeying the following axioms:

  • (i) (Regularity) For each {j \in [d]} and {\xi \in U_j}, one has

    \displaystyle  |\nabla_\xi^{\otimes m} h_j(\xi)| \leq A \ \ \ \ \ (2)

    for all {1 \leq m \leq M}.

  • (ii) (Transversality) One has

    \displaystyle  \left| \bigwedge_{j \in [d]} (-\nabla_\xi h_j(\xi_j),1) \right| \geq A^{-1}

    whenever {\xi_j \in U_j} for {j \in [d]}.

Let {U_{j,1/A} \subset U_j} be the sets

\displaystyle  U_{j,1/A} := \{ \xi \in U_j: B^{d-1}(\xi,1/A) \subset U_j \}. \ \ \ \ \ (3)

Then one has

\displaystyle  \left\| \prod_{j \in [d]} {\mathcal E}_j f_j \right\|_{L^{2p}({\bf R}^d)} \leq A^{O(1)} \left(d-1-\frac{1}{p}\right)^{-O(1)} \prod_{j \in [d]} \|f_j \|_{L^2(U_{j,1/A})}

for any {f_j \in L^2(U_{j,1/A} \rightarrow {\bf C})}, {j \in [d]}, extended by zero outside of {U_{j,1/A}}, and {{\mathcal E}_j} denotes the extension operator

\displaystyle  {\mathcal E}_j f_j( x', x_d ) := \int_{U_j} e^{2\pi i (x' \xi^T + x_d h_j(\xi))} f_j(\xi)\ d\xi.

Local versions of such estimates, in which {L^{2p}({\bf R}^d)} is replaced with {L^{2p}(B^d(0,R))} for some {R \geq 2}, and one accepts a loss of the form {\log^{O(1)} R}, were already established by Bennett, Carbery, and myself using an induction on scale argument. In a later paper of Bourgain-Guth these losses were removed by “epsilon removal lemmas” to recover Theorem 2, but only in the case when all the hypersurfaces involved had curvatures bounded away from zero.

There are two main new ingredients in the proof of Theorem 2. The first is to replace the usual induction on scales scheme to establish multilinear restriction by a “ball inflation” induction on scales scheme that more closely resembles the proof of decoupling theorems. In particular, we actually prove the more general family of estimates

\displaystyle  \left\| \prod_{j \in [d]} E_{r}[{\mathcal E}_j f_j] \right\|_{L^{p}({\bf R}^d)} \leq A^{O(1)} \left(d-1 - \frac{1}{p}\right)^{-O(1)} r^{\frac{d}{p}} \prod_{j \in [d]} \| f_j \|_{L^2(U_{j,1/A})}^2

where {E_r} denotes the local energies

\displaystyle  E_{r}[f](x',x_d) := \int_{B^{d-1}(x',r)} |f(y',x_d)|^2\ dy'

(actually for technical reasons it is more convenient to use a smoother weight than the strict cutoff to the disk {B^{d-1}(x',r)}). With logarithmic losses, it is not difficult to establish this estimate by an upward induction on {r}. To avoid such losses we use the heat flow monotonicity method. Here we run into the issue that the extension operators {{\mathcal E}_j f_j} are complex-valued rather than non-negative, and thus would not be expected to obey many good monotonicity properties. However, the local energies {E_r[{\mathcal E}_j f_j]} can be expressed in terms of the magnitude squared of what is essentially the Gabor transform of {{\mathcal E}_j f_j}, and these are non-negative; furthermore, the dispersion relation associated to the extension operators {{\mathcal E}_j f_j} implies that these Gabor transforms propagate along tubes, so that the situation becomes quite similar (up to several additional lower order error terms) to that in the multilinear Kakeya problem. (This can be viewed as a continuous version of the usual wave packet decomposition method used to relate restriction and Kakeya problems, which when combined with the heat flow monotonicity method allows one to use a continuous version of induction on scales methods that do not concede any logarithmic factors.)

Finally, one can combine the curved multilinear Kakeya result with the multilinear restriction result to obtain estimates for multilinear oscillatory integrals away from the endpoint. Again, this sort of implication was already established in the previous paper of Bennett, Carbery, and myself, but the arguments there had some epsilon losses in the exponents; here we were able to run the argument more carefully and avoid these losses.

Earlier this month, Hao Huang (who, incidentally, was a graduate student here at UCLA) gave a remarkably short proof of a long-standing conjecture in theoretical computer science known as the sensitivity conjecture. See for instance this blog post of Gil Kalai for further discussion and links to many other online discussions of this result. One formulation of the theorem proved is as follows. Define the {n}-dimensional hypercube graph {Q_n} to be the graph with vertex set {({\bf Z}/2{\bf Z})^n}, and with every vertex {v \in ({\bf Z}/2{\bf Z})^n} joined to the {n} vertices {v + e_1,\dots,v+e_n}, where {e_1,\dots,e_n} is the standard basis of {({\bf Z}/2{\bf Z})^n}.

Theorem 1 (Lower bound on maximum degree of induced subgraphs of hypercube) Let {E} be a set of at least {2^{n-1}+1} vertices in {Q_n}. Then there is a vertex in {E} that is adjacent (in {Q_n}) to at least {\sqrt{n}} other vertices in {E}.

The bound {\sqrt{n}} (or more precisely, {\lceil \sqrt{n} \rceil}) is completely sharp, as shown by Chung, Furedi, Graham, and Seymour; we describe this example below the fold. When combined with earlier reductions of Gotsman-Linial and Nisan-Szegedy, this theorem implies the sensitivity conjecture; we give these reductions below the fold also.

Let {A = (a_{vw})_{v,w \in ({\bf Z}/2{\bf Z})^n}} be the adjacency matrix of {Q_n} (where we index the rows and columns directly by the vertices in {({\bf Z}/2{\bf Z})^n}, rather than selecting some enumeration {1,\dots,2^n}), thus {a_{vw}=1} when {w = v+e_i} for some {i=1,\dots,n}, and {a_{vw}=0} otherwise. The above theorem then asserts that if {E} is a set of at least {2^{n-1}+1} vertices, then the {E \times E} minor {(a_{vw})_{v,w \in E}} of {A} has a row (or column) that contains at least {\sqrt{n}} non-zero entries.

The key step to prove this theorem is the construction of a rather curious variant {\tilde A} of the adjacency matrix {A}:

Proposition 2 There exists a {({\bf Z}/2{\bf Z})^n \times ({\bf Z}/2{\bf Z})^n} matrix {\tilde A = (\tilde a_{vw})_{v,w \in ({\bf Z}/2{\bf Z})^n}} which is entrywise dominated by {A} in the sense that

\displaystyle  |\tilde a_{vw}| \leq a_{vw} \hbox{ for all } v,w \in ({\bf Z}/2{\bf Z})^n \ \ \ \ \ (1)

and such that {\tilde A} has {\sqrt{n}} as an eigenvalue with multiplicity {2^{n-1}}.

Assuming this proposition, the proof of Theorem 1 can now be quickly concluded. If we view {\tilde A} as a linear operator on the {2^n}-dimensional space {\ell^2(({\bf Z}/2{\bf Z})^n)} of functions of {({\bf Z}/2{\bf Z})^n}, then by hypothesis this space has a {2^{n-1}}-dimensional subspace {V} on which {\tilde A} acts by multiplication by {\sqrt{n}}. If {E} is a set of at least {2^{n-1}+1} vertices in {Q_n}, then the space {\ell^2(E)} of functions on {E} has codimension at most {2^{n-1}-1} in {\ell^2(({\bf Z}/2{\bf Z})^n)}, and hence intersects {V} non-trivially. Thus the {E \times E} minor {\tilde A_E} of {\tilde A} also has {\sqrt{n}} as an eigenvalue (this can also be derived from the Cauchy interlacing inequalities), and in particular this minor has operator norm at least {\sqrt{n}}. By Schur’s test, this implies that one of the rows or columns of this matrix has absolute values summing to at least {\sqrt{n}}, giving the claim.

Remark 3 The argument actually gives a strengthening of Theorem 1: there exists a vertex {v_0} of {E} with the property that for every natural number {k}, there are at least {n^{k/2}} paths of length {k} in the restriction {Q_n|_E} of {Q_n} to {E} that start from {v_0}. Indeed, if we let {(u_v)_{v \in E}} be an eigenfunction of {\tilde A} on {\ell^2(E)}, and let {v_0} be a vertex in {E} that maximises the value of {|u_{v_0}|}, then for any {k} we have that the {v_0} component of {\tilde A_E^k (u_v)_{v \in E}} is equal to {n^{k/2} |u_{v_0}|}; on the other hand, by the triangle inequality, this component is at most {|u_{v_0}|} times the number of length {k} paths in {Q_n|_E} starting from {v_0}, giving the claim.

This argument can be viewed as an instance of a more general “interlacing method” to try to control the behaviour of a graph {G} on all large subsets {E} by first generating a matrix {\tilde A} on {G} with very good spectral properties, which are then partially inherited by the {E \times E} minor of {\tilde A} by interlacing inequalities. In previous literature using this method (see e.g., this survey of Haemers, or this paper of Wilson), either the original adjacency matrix {A}, or some non-negatively weighted version of that matrix, was used as the controlling matrix {\tilde A}; the novelty here is the use of signed controlling matrices. It will be interesting to see what further variants and applications of this method emerge in the near future. (Thanks to Anurag Bishoi in the comments for these references.)

The “magic” step in the above argument is constructing {\tilde A}. In Huang’s paper, {\tilde A} is constructed recursively in the dimension {n} in a rather simple but mysterious fashion. Very recently, Roman Karasev gave an interpretation of this matrix in terms of the exterior algebra on {{\bf R}^n}. In this post I would like to give an alternate interpretation in terms of the operation of twisted convolution, which originated in the theory of the Heisenberg group in quantum mechanics.

Firstly note that the original adjacency matrix {A}, when viewed as a linear operator on {\ell^2(({\bf Z}/2{\bf Z})^n)}, is a convolution operator

\displaystyle  A f = f * \mu

where

\displaystyle \mu(x) := \sum_{i=1}^n 1_{x=e_i}

is the counting measure on the standard basis {e_1,\dots,e_n}, and {*} denotes the ordinary convolution operation

\displaystyle  f * g(x) := \sum_{y \in ({\bf Z}/2{\bf Z})^n} f(y) g(x-y) = \sum_{y_1+y_2 = x} f(y_1) g(y_2).

As is well known, this operation is commutative and associative. Thus for instance the square {A^2} of the adjacency operator {A} is also a convolution operator

\displaystyle  A^2 f = f * (\mu * \mu)(x)

where the convolution kernel {\mu * \mu} is moderately complicated:

\displaystyle  \mu*\mu(x) = n \times 1_{x=0} + \sum_{1 \leq i < j \leq n} 2 \times 1_{x = e_i + e_j}.

The factor {2} in this expansion comes from combining the two terms {1_{x=e_i} * 1_{x=e_j}} and {1_{x=e_j} * 1_{x=e_i}}, which both evaluate to {1_{x=e_i+e_j}}.

More generally, given any bilinear form {B: ({\bf Z}/2{\bf Z})^n \times ({\bf Z}/2{\bf Z})^n \rightarrow {\bf Z}/2{\bf Z}}, one can define the twisted convolution

\displaystyle  f *_B g(x) := \sum_{y \in ({\bf Z}/2{\bf Z})^n} (-1)^{B(y,x-y)} f(y) g(x-y)

\displaystyle  = \sum_{y_1+y_2=x} (-1)^{B(y_1,y_2)} f(y_1) g(y_2)

of two functions {f,g \in \ell^2(({\bf Z}/2{\bf Z})^n)}. This operation is no longer commutative (unless {B} is symmetric). However, it remains associative; indeed, one can easily compute that

\displaystyle  (f *_B g) *_B h(x) = f *_B (g *_B h)(x)

\displaystyle = \sum_{y_1+y_2+y_3=x} (-1)^{B(y_1,y_2)+B(y_1,y_3)+B(y_2,y_3)} f(y_1) g(y_2) h(y_3).
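As a quick numerical sanity check of this associativity, one can run the following short Python snippet (illustrative only; the random bilinear form, function names, and parameters are arbitrary choices, not part of the argument):

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)
n = 3
verts = list(itertools.product((0, 1), repeat=n))
idx = {v: k for k, v in enumerate(verts)}
M = rng.integers(0, 2, size=(n, n))   # matrix of an arbitrary bilinear form B

def B(x, y):
    return int(np.array(x) @ M @ np.array(y)) % 2

def xor(x, y):
    return tuple(a ^ b for a, b in zip(x, y))

def tconv(f, g):
    # twisted convolution f *_B g on (Z/2Z)^n
    out = np.zeros(2 ** n)
    for x in verts:
        for y in verts:
            out[idx[x]] += (-1) ** B(y, xor(x, y)) * f[idx[y]] * g[idx[xor(x, y)]]
    return out

f, g, h = (rng.standard_normal(2 ** n) for _ in range(3))
assert np.allclose(tconv(tconv(f, g), h), tconv(f, tconv(g, h)))
print("twisted convolution is associative for this B")
```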

In particular, if we define the twisted convolution operator

\displaystyle  A_B f(x) := f *_B \mu(x)

then the square {A_B^2} is also a twisted convolution operator

\displaystyle  A_B^2 f = f *_B (\mu *_B \mu)

and the twisted convolution kernel {\mu *_B \mu} can be computed as

\displaystyle  \mu *_B \mu(x) = (\sum_{i=1}^n (-1)^{B(e_i,e_i)}) 1_{x=0}

\displaystyle + \sum_{1 \leq i < j \leq n} ((-1)^{B(e_i,e_j)} + (-1)^{B(e_j,e_i)}) 1_{x=e_i+e_j}.

For general bilinear forms {B}, this twisted convolution is just as messy as {\mu * \mu} is. But if we take the specific bilinear form

\displaystyle  B(x,y) := \sum_{1 \leq i < j \leq n} x_i y_j \ \ \ \ \ (2)

then {B(e_i,e_i)=0} for {1 \leq i \leq n} and {B(e_i,e_j)=1, B(e_j,e_i)=0} for {1 \leq i < j \leq n}, and the above twisted convolution simplifies to

\displaystyle  \mu *_B \mu(x) = n 1_{x=0}

and now {A_B^2} is very simple:

\displaystyle  A_B^2 f = n f.

Thus the only eigenvalues of {A_B} are {+\sqrt{n}} and {-\sqrt{n}}. The matrix {A_B} is entrywise dominated by {A} in the sense of (1), and in particular has trace zero; thus the {+\sqrt{n}} and {-\sqrt{n}} eigenvalues must occur with equal multiplicity, so in particular the {+\sqrt{n}} eigenvalue occurs with multiplicity {2^{n-1}} since the matrix has dimensions {2^n \times 2^n}. This establishes Proposition 2.
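For the reader who wishes to see Proposition 2 in action, the following minimal Python/numpy sketch (all names and the choice {n=4} are illustrative only) builds {A} and {A_B} for the specific form (2) and confirms the entrywise domination (1), the identity {A_B^2 = n}, and the equal eigenvalue multiplicities:

```python
import itertools

import numpy as np

n = 4
verts = list(itertools.product((0, 1), repeat=n))
idx = {v: k for k, v in enumerate(verts)}

def B(x, y):
    # the specific bilinear form (2): B(x,y) = sum_{i<j} x_i y_j mod 2
    return sum(x[i] * y[j] for i in range(n) for j in range(i + 1, n)) % 2

A = np.zeros((2 ** n, 2 ** n))    # adjacency matrix of the hypercube Q_n
AB = np.zeros((2 ** n, 2 ** n))   # matrix of the twisted operator A_B
for x in verts:
    for i in range(n):
        e_i = tuple(int(j == i) for j in range(n))
        y = tuple(a ^ b for a, b in zip(x, e_i))      # the neighbour x + e_i
        A[idx[x], idx[y]] = 1
        AB[idx[x], idx[y]] = (-1) ** B(y, e_i)        # kernel of f -> f *_B mu

assert np.all(np.abs(AB) <= A)                        # entrywise domination (1)
assert np.allclose(AB @ AB, n * np.eye(2 ** n))       # A_B^2 = n
eigs = np.round(np.linalg.eigvalsh(AB), 6)            # AB is symmetric here
print({float(v): int((eigs == v).sum()) for v in np.unique(eigs)})
# expected output for n = 4: {-2.0: 8, 2.0: 8}
```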

Remark 4 Twisted convolution {*_B} is actually just a component of ordinary convolution, but not on the original group {({\bf Z}/2{\bf Z})^n}; instead it relates to convolution on a Heisenberg group extension of this group. More specifically, define the Heisenberg group {H} to be the set of pairs {(x, t) \in ({\bf Z}/2{\bf Z})^n \times ({\bf Z}/2{\bf Z})} with group law

\displaystyle  (x,t) \cdot (y,s) := (x+y, t+s+B(x,y))

and inverse operation

\displaystyle  (x,t)^{-1} = (-x, -t+B(x,x))

(one can dispense with the negative signs here if desired, since we are in characteristic two). Convolution on {H} is defined in the usual manner: one has

\displaystyle  F*G( (x,t) ) := \sum_{(y,s) \in H} F(y,s) G( (y,s)^{-1} (x,t) )

for any {F,G \in \ell^2(H)}. Now if {f \in \ell^2(({\bf Z}/2{\bf Z})^n)} is a function on the original group {({\bf Z}/2{\bf Z})^n}, we can define the lift {\tilde f \in \ell^2(H)} by the formula

\displaystyle  \tilde f(x,t) := (-1)^t f(x)

and then by chasing all the definitions one soon verifies that

\displaystyle  \tilde f * \tilde g = 2 \widetilde{f *_B g}

for any {f,g \in \ell^2(({\bf Z}/2{\bf Z})^n)}, thus relating twisted convolution {*_B} to Heisenberg group convolution {*}.

Remark 5 With the twisting by the specific bilinear form {B} given by (2), convolution by {1_{x=e_i}} and {1_{x=e_j}} now anticommute rather than commute. This makes the twisted convolution algebra {(\ell^2(({\bf Z}/2{\bf Z})^n), *_B)} isomorphic to a Clifford algebra {Cl({\bf R}^n,I_n)} (the real or complex algebra generated by formal generators {v_1,\dots,v_n} subject to the relations {(v_iv_j+v_jv_i)/2 = 1_{i=j}} for {i,j=1,\dots,n}) rather than the commutative algebra more familiar to abelian Fourier analysis. This connection to Clifford algebra (also observed independently by Tom Mrowka and by Daniel Matthews) may be linked to the exterior algebra interpretation of the argument in the recent preprint of Karasev mentioned above.

Remark 6 One could replace the form (2) in this argument by any other bilinear form {B'} that obeys the relations {B'(e_i,e_i)=0} and {B'(e_i,e_j) + B'(e_j,e_i)=1} for {i \neq j}. However, this additional level of generality does not add much; any such {B'} will differ from {B} by an antisymmetric form {C} (so that {C(x,x) = 0} for all {x}, which in characteristic two implies that {C(x,y) = C(y,x)} for all {x,y}), and such forms can always be decomposed as {C(x,y) = C'(x,y) + C'(y,x)}, where {C'(x,y) := \sum_{i<j} C(e_i,e_j) x_i y_j}. As such, the matrices {A_B} and {A_{B'}} are conjugate, with the conjugation operator being the diagonal matrix with entries {(-1)^{C'(x,x)}} at each vertex {x}.
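This conjugation can also be checked numerically; the following sketch (illustrative only; the random admissible {B'} and all names are arbitrary choices) verifies {A_{B'} = D A_B D^{-1}} for {n=4}:

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)
n = 4
verts = list(itertools.product((0, 1), repeat=n))
idx = {v: k for k, v in enumerate(verts)}

U = np.triu(np.ones((n, n), dtype=int), k=1)     # matrix of the form (2)
C = rng.integers(0, 2, size=(n, n))
C = (C + C.T) % 2                                # C symmetric ...
np.fill_diagonal(C, 0)                           # ... with zero diagonal
Up = (U + C) % 2                                 # matrix of an admissible B'
Cp = np.triu(C, k=1)                             # matrix of C'

def twisted_op(M):
    # the operator f -> f *_B mu for the form B(x,y) = x^T M y mod 2
    A = np.zeros((2 ** n, 2 ** n))
    for x in verts:
        for i in range(n):
            e = np.eye(n, dtype=int)[i]
            y = tuple(int(v) for v in (np.array(x) + e) % 2)
            A[idx[x], idx[y]] = (-1) ** int(np.array(y) @ M @ e % 2)
    return A

D = np.diag([(-1) ** int(np.array(x) @ Cp @ np.array(x) % 2) for x in verts])
assert np.allclose(twisted_op(Up), D @ twisted_op(U) @ D)   # note D = D^{-1}
print("A_{B'} and A_B are conjugate via the diagonal matrix D")
```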

Remark 7 (Added later) This remark combines the two previous remarks. One can view any of the matrices {A_{B'}} in Remark 6 as components of a single canonical matrix {A_{Cl}} that is still of dimensions {({\bf Z}/2{\bf Z})^n \times ({\bf Z}/2{\bf Z})^n}, but takes values in the Clifford algebra {Cl({\bf R}^n,I_n)} from Remark 5; with this “universal algebra” perspective, one no longer needs to make any arbitrary choices of form {B}. More precisely, let {\ell^2( ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n))} denote the vector space of functions {f: ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n)} from the hypercube to the Clifford algebra; as a real vector space, this is a {2^{2n}} dimensional space, isomorphic to the direct sum of {2^n} copies of {\ell^2(({\bf Z}/2{\bf Z})^n)}, as the Clifford algebra is itself {2^n} dimensional. One can then define a canonical Clifford adjacency operator {A_{Cl}} on this space by

\displaystyle  A_{Cl} f(x) := \sum_{i=1}^n f(x+e_i) v_i

where {v_1,\dots,v_n} are the generators of {Cl({\bf R}^n,I_n)}. This operator can either be identified with a Clifford-valued {2^n \times 2^n} matrix or as a real-valued {2^{2n} \times 2^{2n}} matrix. In either case one still has the key algebraic relations {A_{Cl}^2 = n} and {\mathrm{tr} A_{Cl} = 0}, ensuring that when viewed as a real {2^{2n} \times 2^{2n}} matrix, half of the eigenvalues are equal to {+\sqrt{n}} and half equal to {-\sqrt{n}}. One can then use this matrix in place of any of the {A_{B'}} to establish Theorem 1 (noting that Schur’s test continues to work for Clifford-valued matrices because of the norm structure on {Cl({\bf R}^n,I_n)}).

To relate {A_{Cl}} to the real {2^n \times 2^n} matrices {A_{B'}}, first observe that each point {x} in the hypercube {({\bf Z}/2{\bf Z})^n} can be associated with a one-dimensional real subspace {\ell_x} (i.e., a line) in the Clifford algebra {Cl({\bf R}^n,I_n)} by the formula

\displaystyle  \ell_{e_{i_1} + \dots + e_{i_k}} := \mathrm{span}_{\bf R}( v_{i_1} \dots v_{i_k} )

for any {i_1,\dots,i_k \in \{1,\dots,n\}} (note that this definition is well-defined even if the {i_1,\dots,i_k} are out of order or contain repetitions). This can be viewed as a discrete line bundle over the hypercube. Since {\ell_{x+e_i} = \ell_x e_i} for any {i}, we see that the {2^n}-dimensional real linear subspace {V} of {\ell^2( ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n))} of sections of this bundle, that is to say the space of functions {f: ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n)} such that {f(x) \in \ell_x} for all {x \in ({\bf Z}/2{\bf Z})^n}, is an invariant subspace of {A_{Cl}}. (Indeed, using the left-action of the Clifford algebra on {\ell^2( ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n))}, which commutes with {A_{Cl}}, one can naturally identify {\ell^2( ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n))} with {Cl({\bf R}^n,I_n) \otimes V}, with the left action of {Cl({\bf R}^n,I_n)} acting purely on the first factor and {A_{Cl}} acting purely on the second factor.) Any trivialisation of this line bundle lets us interpret the restriction {A_{Cl}|_V} of {A_{Cl}} to {V} as a real {2^n \times 2^n} matrix. In particular, given one of the bilinear forms {B'} from Remark 6, we can identify {V} with {\ell^2(({\bf Z}/2{\bf Z})^n)} by identifying any real function {f \in \ell^2( ({\bf Z}/2{\bf Z})^n)} with the lift {\tilde f \in V} defined by

\displaystyle  \tilde f(e_{i_1} + \dots + e_{i_k}) := (-1)^{\sum_{1 \leq j < j' \leq k} B'(e_{i_j}, e_{i_{j'}})}

\displaystyle f(e_{i_1} + \dots + e_{i_k}) v_{i_1} \dots v_{i_k}

whenever {1 \leq i_1 < \dots < i_k \leq n}. A somewhat tedious computation using the properties of {B'} then eventually gives the intertwining identity

\displaystyle  A_{Cl} \tilde f = \widetilde{A_{B'} f}

and so {A_{B'}} is conjugate to {A_{Cl}|_V}.


Let {\Omega} be some domain (such as the real numbers). For any natural number {p}, let {L(\Omega^p)_{sym}} denote the space of symmetric real-valued functions {F^{(p)}: \Omega^p \rightarrow {\bf R}} on {p} variables {x_1,\dots,x_p \in \Omega}, thus

\displaystyle F^{(p)}(x_{\sigma(1)},\dots,x_{\sigma(p)}) = F^{(p)}(x_1,\dots,x_p)

for any permutation {\sigma: \{1,\dots,p\} \rightarrow \{1,\dots,p\}}. For instance, for any natural numbers {k,p}, the elementary symmetric polynomials

\displaystyle e_k^{(p)}(x_1,\dots,x_p) = \sum_{1 \leq i_1 < i_2 < \dots < i_k \leq p} x_{i_1} \dots x_{i_k}

will be an element of {L({\bf R}^p)_{sym}}. With the pointwise product operation, {L(\Omega^p)_{sym}} becomes a commutative real algebra. We include the case {p=0}, in which case {L(\Omega^0)_{sym}} consists solely of the real constants.

Given two natural numbers {k,p}, one can “lift” a symmetric function {F^{(k)} \in L(\Omega^k)_{sym}} of {k} variables to a symmetric function {[F^{(k)}]_{k \rightarrow p} \in L(\Omega^p)_{sym}} of {p} variables by the formula

\displaystyle [F^{(k)}]_{k \rightarrow p}(x_1,\dots,x_p) = \sum_{1 \leq i_1 < i_2 < \dots < i_k \leq p} F^{(k)}(x_{i_1}, \dots, x_{i_k})

\displaystyle = \frac{1}{k!} \sum_\pi F^{(k)}( x_{\pi(1)}, \dots, x_{\pi(k)} )

where {\pi} ranges over all injections from {\{1,\dots,k\}} to {\{1,\dots,p\}} (the latter formula making it clearer that {[F^{(k)}]_{k \rightarrow p}} is symmetric). Thus for instance

\displaystyle [F^{(1)}(x_1)]_{1 \rightarrow p} = \sum_{i=1}^p F^{(1)}(x_i)

\displaystyle [F^{(2)}(x_1,x_2)]_{2 \rightarrow p} = \sum_{1 \leq i < j \leq p} F^{(2)}(x_i,x_j)

and

\displaystyle e_k^{(p)}(x_1,\dots,x_p) = [x_1 \dots x_k]_{k \rightarrow p}.

Also we have

\displaystyle [1]_{k \rightarrow p} = \binom{p}{k} = \frac{p(p-1)\dots(p-k+1)}{k!}.

With these conventions, we see that {[F^{(k)}]_{k \rightarrow p}} vanishes for {p=0,\dots,k-1}, and is equal to {F^{(k)}} if {k=p}. We also have the transitivity

\displaystyle [F^{(k)}]_{k \rightarrow p} = \frac{1}{\binom{p-k}{p-l}} [[F^{(k)}]_{k \rightarrow l}]_{l \rightarrow p}

if {k \leq l \leq p}.
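As a quick numerical illustration of this transitivity identity, the following Python sketch (with arbitrary illustrative choices of {F^{(2)}} and sample points) checks it for {k=2}, {l=3}, {p=5}:

```python
import itertools
import math

k, l, p = 2, 3, 5
points = [0.3, 1.2, -0.7, 2.0, 0.5]            # arbitrary sample values x_1,...,x_5

def F2(x, y):
    # an arbitrary symmetric function of two variables
    return x * y + math.sin(x + y)

def lift(F, k, p, xs):
    # [F]_{k -> p}(x_1,...,x_p): sum over increasing k-tuples of indices
    return sum(F(*(xs[i] for i in comb))
               for comb in itertools.combinations(range(p), k))

def F_lifted(*xs):
    # [F^{(2)}]_{2 -> 3}, viewed as a symmetric function of three variables
    return lift(F2, k, l, xs)

lhs = lift(F2, k, p, points)
rhs = lift(F_lifted, l, p, points) / math.comb(p - k, p - l)
assert abs(lhs - rhs) < 1e-9
print("transitivity verified:", lhs)
```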

The lifting map {[]_{k \rightarrow p}} is a linear map from {L(\Omega^k)_{sym}} to {L(\Omega^p)_{sym}}, but it is not a ring homomorphism. For instance, when {\Omega={\bf R}}, one has

\displaystyle [x_1]_{1 \rightarrow p} [x_1]_{1 \rightarrow p} = (\sum_{i=1}^p x_i)^2 \ \ \ \ \ (1)

 

\displaystyle = \sum_{i=1}^p x_i^2 + 2 \sum_{1 \leq i < j \leq p} x_i x_j

\displaystyle = [x_1^2]_{1 \rightarrow p} + 2 [x_1 x_2]_{1 \rightarrow p}

\displaystyle \neq [x_1^2]_{1 \rightarrow p}.

In general, one has the identity

\displaystyle [F^{(k)}(x_1,\dots,x_k)]_{k \rightarrow p} [G^{(l)}(x_1,\dots,x_l)]_{l \rightarrow p} = \sum_{k,l \leq m \leq k+l} \frac{1}{k! l!} \ \ \ \ \ (2)

 

\displaystyle [\sum_{\pi, \rho} F^{(k)}(x_{\pi(1)},\dots,x_{\pi(k)}) G^{(l)}(x_{\rho(1)},\dots,x_{\rho(l)})]_{m \rightarrow p}

for all natural numbers {k,l,p} and {F^{(k)} \in L(\Omega^k)_{sym}}, {G^{(l)} \in L(\Omega^l)_{sym}}, where {\pi, \rho} range over all injections {\pi: \{1,\dots,k\} \rightarrow \{1,\dots,m\}}, {\rho: \{1,\dots,l\} \rightarrow \{1,\dots,m\}} with {\pi(\{1,\dots,k\}) \cup \rho(\{1,\dots,l\}) = \{1,\dots,m\}}. Combinatorially, the identity (2) follows from the fact that given any injections {\tilde \pi: \{1,\dots,k\} \rightarrow \{1,\dots,p\}} and {\tilde \rho: \{1,\dots,l\} \rightarrow \{1,\dots,p\}} with total image {\tilde \pi(\{1,\dots,k\}) \cup \tilde \rho(\{1,\dots,l\})} of cardinality {m}, one has {k,l \leq m \leq k+l}, and furthermore there exist precisely {m!} triples {(\pi, \rho, \sigma)} of injections {\pi: \{1,\dots,k\} \rightarrow \{1,\dots,m\}}, {\rho: \{1,\dots,l\} \rightarrow \{1,\dots,m\}}, {\sigma: \{1,\dots,m\} \rightarrow \{1,\dots,p\}} such that {\tilde \pi = \sigma \circ \pi} and {\tilde \rho = \sigma \circ \rho}.

Example 1 When {\Omega = {\bf R}}, one has

\displaystyle [x_1 x_2]_{2 \rightarrow p} [x_1]_{1 \rightarrow p} = [\frac{1}{2! 1!}( 2 x_1^2 x_2 + 2 x_1 x_2^2 )]_{2 \rightarrow p} + [\frac{1}{2! 1!} 6 x_1 x_2 x_3]_{3 \rightarrow p}

\displaystyle = [x_1^2 x_2 + x_1 x_2^2]_{2 \rightarrow p} + [3x_1 x_2 x_3]_{3 \rightarrow p}

which is just a restatement of the identity

\displaystyle (\sum_{i < j} x_i x_j) (\sum_k x_k) = \sum_{i<j} (x_i^2 x_j + x_i x_j^2) + \sum_{i < j < k} 3 x_i x_j x_k.
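This instance of (2) is easy to confirm numerically; the following short sketch (the sample points and names are illustrative only) checks both sides for {p=6}:

```python
import itertools

p = 6
xs = [0.4, -1.1, 2.2, 0.9, -0.3, 1.7]      # arbitrary sample points

def lift(F, k):
    # [F]_{k -> p} evaluated at the fixed points xs
    return sum(F(*(xs[i] for i in comb))
               for comb in itertools.combinations(range(p), k))

lhs = lift(lambda a, b: a * b, 2) * lift(lambda a: a, 1)
rhs = (lift(lambda a, b: a * a * b + a * b * b, 2)
       + lift(lambda a, b, c: 3 * a * b * c, 3))
assert abs(lhs - rhs) < 1e-9
print("both sides equal", lhs)
```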

Note that the coefficients appearing in (2) do not depend on the final number of variables {p}. We may therefore abstract the role of {p} from the law (2) by introducing the real algebra {L(\Omega^*)_{sym}} of formal sums

\displaystyle F^{(*)} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}

where for each {k}, {F^{(k)}} is an element of {L(\Omega^k)_{sym}} (with only finitely many of the {F^{(k)}} being non-zero), and with the formal symbol {[]_{k \rightarrow *}} being formally linear, thus

\displaystyle [F^{(k)}]_{k \rightarrow *} + [G^{(k)}]_{k \rightarrow *} := [F^{(k)} + G^{(k)}]_{k \rightarrow *}

and

\displaystyle c [F^{(k)}]_{k \rightarrow *} := [cF^{(k)}]_{k \rightarrow *}

for {F^{(k)}, G^{(k)} \in L(\Omega^k)_{sym}} and scalars {c \in {\bf R}}, and with multiplication given by the analogue

\displaystyle [F^{(k)}(x_1,\dots,x_k)]_{k \rightarrow *} [G^{(l)}(x_1,\dots,x_l)]_{l \rightarrow *} = \sum_{k,l \leq m \leq k+l} \frac{1}{k! l!} \ \ \ \ \ (3)

 

\displaystyle [\sum_{\pi, \rho} F^{(k)}(x_{\pi(1)},\dots,x_{\pi(k)}) G^{(l)}(x_{\rho(1)},\dots,x_{\rho(l)})]_{m \rightarrow *}

of (2). Thus for instance, in this algebra {L(\Omega^*)_{sym}} we have

\displaystyle [x_1]_{1 \rightarrow *} [x_1]_{1 \rightarrow *} = [x_1^2]_{1 \rightarrow *} + 2 [x_1 x_2]_{2 \rightarrow *}

and

\displaystyle [x_1 x_2]_{2 \rightarrow *} [x_1]_{1 \rightarrow *} = [x_1^2 x_2 + x_1 x_2^2]_{2 \rightarrow *} + [3 x_1 x_2 x_3]_{3 \rightarrow *}.

Informally, {L(\Omega^*)_{sym}} is an abstraction (or “inverse limit”) of the concept of a symmetric function of an unspecified number of variables, which are formed by summing terms that each involve only a bounded number of these variables at a time. One can check (somewhat tediously) that {L(\Omega^*)_{sym}} is indeed a commutative real algebra, with a unit {[1]_{0 \rightarrow *}}. (I do not know if this algebra has previously been studied in the literature; it is somewhat analogous to the abstract algebra of finite linear combinations of Schur polynomials, with multiplication given by a Littlewood-Richardson rule.)

For natural numbers {p}, there is an obvious specialisation map {[]_{* \rightarrow p}} from {L(\Omega^*)_{sym}} to {L(\Omega^p)_{sym}}, defined by the formula

\displaystyle [\sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}]_{* \rightarrow p} := \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow p}.

Thus, for instance, {[]_{* \rightarrow p}} maps {[x_1]_{1 \rightarrow *}} to {[x_1]_{1 \rightarrow p}} and {[x_1 x_2]_{2 \rightarrow *}} to {[x_1 x_2]_{2 \rightarrow p}}. From (2) and (3) we see that this map {[]_{* \rightarrow p}: L(\Omega^*)_{sym} \rightarrow L(\Omega^p)_{sym}} is an algebra homomorphism, even though the maps {[]_{k \rightarrow *}: L(\Omega^k)_{sym} \rightarrow L(\Omega^*)_{sym}} and {[]_{k \rightarrow p}: L(\Omega^k)_{sym} \rightarrow L(\Omega^p)_{sym}} are not homomorphisms. By inspecting the {p^{th}} component of {L(\Omega^*)_{sym}} we see that the homomorphism {[]_{* \rightarrow p}} is in fact surjective.

Now suppose that we have a measure {\mu} on the space {\Omega}, which then induces a product measure {\mu^p} on every product space {\Omega^p}. To avoid degeneracies we will assume that the integral {\int_\Omega \mu} is strictly positive. Assuming suitable measurability and integrability hypotheses, a function {F \in L(\Omega^p)_{sym}} can then be integrated against this product measure to produce a number

\displaystyle \int_{\Omega^p} F\ d\mu^p.

In the event that {F} arises as a lift {[F^{(k)}]_{k \rightarrow p}} of another function {F^{(k)} \in L(\Omega^k)_{sym}}, then from Fubini’s theorem we obtain the formula

\displaystyle \int_{\Omega^p} F\ d\mu^p = \binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ d\mu)^{p-k}.

Thus for instance, if {\Omega={\bf R}},

\displaystyle \int_{{\bf R}^p} [x_1]_{1 \rightarrow p}\ d\mu^p = p (\int_{\bf R} x\ d\mu(x)) (\int_{\bf R} \mu)^{p-1} \ \ \ \ \ (4)

 

and

\displaystyle \int_{{\bf R}^p} [x_1 x_2]_{2 \rightarrow p}\ d\mu^p = \binom{p}{2} (\int_{{\bf R}^2} x_1 x_2\ d\mu(x_1) d\mu(x_2)) (\int_{\bf R} \mu)^{p-2}. \ \ \ \ \ (5)

 

On summing, we see that if

\displaystyle F^{(*)} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}

is an element of the formal algebra {L(\Omega^*)_{sym}}, then

\displaystyle \int_{\Omega^p} [F^{(*)}]_{* \rightarrow p}\ d\mu^p = \sum_{k=0}^\infty \binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ d\mu)^{p-k}. \ \ \ \ \ (6)

 

Note that by hypothesis, only finitely many terms on the right-hand side are non-zero.

Now for a key observation: whereas the left-hand side of (6) only makes sense when {p} is a natural number, the right-hand side is meaningful when {p} takes a fractional value (or even when it takes negative or complex values!), interpreting the binomial coefficient {\binom{p}{k}} as a polynomial {\frac{p(p-1) \dots (p-k+1)}{k!}} in {p}. As such, this suggests a way to introduce a “virtual” concept of a symmetric function on a fractional power space {\Omega^p} for such values of {p}, and even to integrate such functions against product measures {\mu^p}, even if the fractional power {\Omega^p} does not exist in the usual set-theoretic sense (and {\mu^p} similarly does not exist in the usual measure-theoretic sense). More precisely, for arbitrary real or complex {p}, we now define {L(\Omega^p)_{sym}} to be the space of abstract objects

\displaystyle F^{(p)} = [F^{(*)}]_{* \rightarrow p} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow p}

with {F^{(*)} \in L(\Omega^*)_{sym}}, and with {[]_{* \rightarrow p}} (and {[]_{k \rightarrow p}}) now interpreted as formal symbols, with the structure of a commutative real algebra inherited from {L(\Omega^*)_{sym}}, thus

\displaystyle [F^{(*)}]_{* \rightarrow p} + [G^{(*)}]_{* \rightarrow p} := [F^{(*)} + G^{(*)}]_{* \rightarrow p}

\displaystyle c [F^{(*)}]_{* \rightarrow p} := [c F^{(*)}]_{* \rightarrow p}

\displaystyle [F^{(*)}]_{* \rightarrow p} [G^{(*)}]_{* \rightarrow p} := [F^{(*)} G^{(*)}]_{* \rightarrow p}.

In particular, the multiplication law (2) continues to hold for such values of {p}, thanks to (3). Given any measure {\mu} on {\Omega}, we formally define a measure {\mu^p} on {\Omega^p} with regards to which we can integrate elements {F^{(p)}} of {L(\Omega^p)_{sym}} by the formula (6) (providing one has sufficient measurability and integrability to make sense of this formula), thus providing a sort of “fractional dimensional integral” for symmetric functions. Thus, for instance, with this formalism the identities (4), (5) now hold for fractional values of {p}, even though the formal space {{\bf R}^p} no longer makes sense as a set, and the formal measure {\mu^p} no longer makes sense as a measure. (The formalism here is somewhat reminiscent of the technique of dimensional regularisation employed in the physical literature in order to assign values to otherwise divergent integrals. See also this post for an unrelated abstraction of the integration concept involving integration over supercommutative variables (and in particular over fermionic variables).)
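For concreteness, here is a minimal Python sketch of this formalism for a finite domain {\Omega}: an element of {L(\Omega^*)_{sym}} is represented as a dictionary mapping {k} to a symmetric function of {k} variables, and the fractional dimensional integral is computed via (6) with {\binom{p}{k}} interpreted polynomially. All names and the particular measure below are illustrative choices, not part of the formalism itself:

```python
import itertools
from math import prod

Omega = [0.0, 1.0, 2.0]                  # a small finite domain
mu = {0.0: 0.5, 1.0: 0.3, 2.0: 0.2}      # a probability measure on Omega

def binom(p, k):
    # binomial coefficient p(p-1)...(p-k+1)/k!, a polynomial in p
    out = 1.0
    for j in range(k):
        out *= (p - j) / (j + 1)
    return out

def integral_k(F, k):
    # the classical integral of F over Omega^k against mu^k
    return sum(F(*xs) * prod(mu[x] for x in xs)
               for xs in itertools.product(Omega, repeat=k))

def fractional_integral(Fstar, p):
    # formula (6), valid for arbitrary real (or even complex) p
    mass = sum(mu.values())
    return sum(binom(p, k) * integral_k(F, k) * mass ** (p - k)
               for k, F in Fstar.items())

# F^(*) = [x_1^2]_{1->*} + 2 [x_1 x_2]_{2->*}, i.e. "[x_1]^2" as in (1)
Fstar = {1: (lambda x: x * x), 2: (lambda x, y: 2 * x * y)}

# cross-check against the classical integral at the natural number p = 3
direct = sum((x1 + x2 + x3) ** 2 * mu[x1] * mu[x2] * mu[x3]
             for x1, x2, x3 in itertools.product(Omega, repeat=3))
assert abs(direct - fractional_integral(Fstar, 3)) < 1e-12

print([fractional_integral(Fstar, p) for p in (3, 0.5, -1.0)])
```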

Example 2 Suppose {\mu} is a probability measure on {\Omega}, and {X: \Omega \rightarrow {\bf R}} is a random variable; on any power {\Omega^k}, we let {X_1,\dots,X_k: \Omega^k \rightarrow {\bf R}} be the usual independent copies of {X} on {\Omega^k}, thus {X_j(\omega_1,\dots,\omega_k) := X(\omega_j)} for {(\omega_1,\dots,\omega_k) \in \Omega^k}. Then for any real or complex {p}, the formal integral

\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p

can be evaluated by first using the identity

\displaystyle [X_1]_{1 \rightarrow p}^2 = [X_1^2]_{1 \rightarrow p} + 2[X_1 X_2]_{2 \rightarrow p}

(cf. (1)) and then using (6) and the probability measure hypothesis {\int_\Omega\ d\mu = 1} to conclude that

\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p = \binom{p}{1} \int_{\Omega} X^2\ d\mu + 2 \binom{p}{2} \int_{\Omega^2} X_1 X_2\ d\mu^2

\displaystyle = p (\int_\Omega X^2\ d\mu - (\int_\Omega X\ d\mu)^2) + p^2 (\int_\Omega X\ d\mu)^2

or in probabilistic notation

\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p = p \mathbf{Var}(X) + p^2 \mathbf{E}(X)^2. \ \ \ \ \ (7)

 

For {p} a natural number, this identity has the probabilistic interpretation

\displaystyle \mathbf{E}( X_1 + \dots + X_p)^2 = p \mathbf{Var}(X) + p^2 \mathbf{E}(X)^2 \ \ \ \ \ (8)

 

whenever {X_1,\dots,X_p} are jointly independent copies of {X}, which reflects the well known fact that the sum {X_1 + \dots + X_p} has expectation {p \mathbf{E} X} and variance {p \mathbf{Var}(X)}. One can thus view (7) as an abstract generalisation of (8) to the case when {p} is fractional, negative, or even complex, despite the fact that there is no sensible way in this case to talk about {p} independent copies {X_1,\dots,X_p} of {X} in the standard framework of probability theory.
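The identity (8) for natural number {p}, together with the formal evaluation of (7) at fractional {p}, can be checked by exact enumeration with the following short sketch (the particular two-point law of {X} is an arbitrary illustrative choice):

```python
import itertools
from math import prod

law = {-1.0: 0.25, 2.0: 0.75}              # an arbitrary two-point law for X
EX = sum(x * w for x, w in law.items())
EX2 = sum(x * x * w for x, w in law.items())

def formula(p):
    # the right-hand side of (7): p Var(X) + p^2 E(X)^2
    return p * (EX2 - EX ** 2) + p ** 2 * EX ** 2

for p in (1, 2, 3, 4):                     # exact check of (8) at natural p
    lhs = sum(sum(omega) ** 2 * prod(law[x] for x in omega)
              for omega in itertools.product(law, repeat=p))
    assert abs(lhs - formula(p)) < 1e-9

print(formula(0.5), formula(-1.0))         # formal values at fractional/negative p
```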

In this particular case, the quantity (7) is non-negative for every nonnegative {p}, which looks plausible given the form of the left-hand side. Unfortunately, this sort of non-negativity does not always hold; for instance, if {X} has mean zero, one can check that

\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^4\ d\mu^p = p \mathbf{Var}(X^2) + p(3p-2) (\mathbf{E}(X^2))^2

and the right-hand side can become negative for {p < 2/3}. This is a shame, because otherwise one could hope to start endowing {L(\Omega^p)_{sym}} with some sort of commutative von Neumann algebra type structure (or the abstract probability structure discussed in this previous post) and then interpret it as a genuine measure space rather than as a virtual one. (This failure of positivity is related to the fact that the characteristic function of a random variable, when raised to the {p^{th}} power, need not be a characteristic function of any random variable once {p} is no longer a natural number: “fractional convolution” does not preserve positivity!) However, one vestige of positivity remains: if {F: \Omega \rightarrow {\bf R}} is non-negative, then so is

\displaystyle \int_{\Omega^p} [F]_{1 \rightarrow p}\ d\mu^p = p (\int_\Omega F\ d\mu) (\int_\Omega\ d\mu)^{p-1}.
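The failure of positivity can be exhibited concretely: for a Rademacher variable one has {\mathbf{Var}(X^2)=0}, so the fourth moment expression above reduces to {p(3p-2)}, which is negative for {0 < p < 2/3}. The following sketch (illustrative only) verifies the identity by exact enumeration at natural {p} and evaluates it at {p=1/2}:

```python
import itertools
from math import prod

law = {-1.0: 0.5, 1.0: 0.5}                # X Rademacher: E X = 0, X^2 = 1
EX2 = sum(x * x * w for x, w in law.items())
EX4 = sum(x ** 4 * w for x, w in law.items())
varX2 = EX4 - EX2 ** 2                     # Var(X^2), zero in this example

def formula(p):
    # p Var(X^2) + p(3p-2) (E X^2)^2
    return p * varX2 + p * (3 * p - 2) * EX2 ** 2

for p in (1, 2, 3, 4):                     # agrees with E (X_1+...+X_p)^4 exactly
    lhs = sum(sum(omega) ** 4 * prod(law[x] for x in omega)
              for omega in itertools.product(law, repeat=p))
    assert abs(lhs - formula(p)) < 1e-9

print(formula(0.5))                        # -0.25: negative, so no positivity
```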

One can wonder what the point is to all of this abstract formalism and how it relates to the rest of mathematics. For me, this formalism originated implicitly in an old paper I wrote with Jon Bennett and Tony Carbery on the multilinear restriction and Kakeya conjectures, though we did not have a good language for working with it at the time, instead working first with the case of natural number exponents {p} and appealing to a general extrapolation theorem to then obtain various identities in the fractional {p} case. The connection between these fractional dimensional integrals and more traditional integrals ultimately arises from the simple identity

\displaystyle (\int_\Omega\ d\mu)^p = \int_{\Omega^p}\ d\mu^p

(where the right-hand side should be viewed as the fractional dimensional integral of the unit {[1]_{0 \rightarrow p}} against {\mu^p}). As such, one can manipulate {p^{th}} powers of ordinary integrals using the machinery of fractional dimensional integrals. A key lemma in this regard is

Lemma 3 (Differentiation formula) Suppose that a positive measure {\mu = \mu(t)} on {\Omega} depends on some parameter {t} and varies by the formula

\displaystyle \frac{d}{dt} \mu(t) = a(t) \mu(t) \ \ \ \ \ (9)

 

for some function {a(t): \Omega \rightarrow {\bf R}}. Let {p} be any real or complex number. Then, assuming sufficient smoothness and integrability of all quantities involved, we have

\displaystyle \frac{d}{dt} \int_{\Omega^p} F^{(p)}\ d\mu(t)^p = \int_{\Omega^p} F^{(p)} [a(t)]_{1 \rightarrow p}\ d\mu(t)^p \ \ \ \ \ (10)

 

for all {F^{(p)} \in L(\Omega^p)_{sym}} that are independent of {t}. If we allow {F^{(p)}(t)} to now depend on {t} also, then we have the more general total derivative formula

\displaystyle \frac{d}{dt} \int_{\Omega^p} F^{(p)}(t)\ d\mu(t)^p \ \ \ \ \ (11)

 

\displaystyle = \int_{\Omega^p} \frac{d}{dt} F^{(p)}(t) + F^{(p)}(t) [a(t)]_{1 \rightarrow p}\ d\mu(t)^p,

again assuming sufficient amounts of smoothness and regularity.

Proof: We just prove (10), as (11) then follows by the same argument used to prove the usual product rule. By linearity it suffices to verify this identity in the case {F^{(p)} = [F^{(k)}]_{k \rightarrow p}} for some symmetric function {F^{(k)} \in L(\Omega^k)_{sym}} for a natural number {k}. By (6), the left-hand side of (10) is then

\displaystyle \frac{d}{dt} [\binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu(t)^k) (\int_\Omega\ d\mu(t))^{p-k}]. \ \ \ \ \ (12)

 

Differentiating under the integral sign using (9) we have

\displaystyle \frac{d}{dt} \int_\Omega\ d\mu(t) = \int_\Omega\ a(t)\ d\mu(t)

and similarly

\displaystyle \frac{d}{dt} \int_{\Omega^k} F^{(k)}\ d\mu(t)^k = \int_{\Omega^k} F^{(k)}(a_1+\dots+a_k)\ d\mu(t)^k

where {a_1,\dots,a_k} are the standard {k} copies of {a = a(t)} on {\Omega^k}:

\displaystyle a_j(\omega_1,\dots,\omega_k) := a(\omega_j).

By the product rule, we can thus expand (12) as

\displaystyle \binom{p}{k} (\int_{\Omega^k} F^{(k)}(a_1+\dots+a_k)\ d\mu^k ) (\int_\Omega\ d\mu)^{p-k}

\displaystyle + \binom{p}{k} (p-k) (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ a\ d\mu) (\int_\Omega\ d\mu)^{p-k-1}

where we have suppressed the dependence on {t} for brevity. Since {\binom{p}{k} (p-k) = \binom{p}{k+1} (k+1)}, we can write this expression using (6) as

\displaystyle \int_{\Omega^p} [F^{(k)} (a_1 + \dots + a_k)]_{k \rightarrow p} + [ F^{(k)} \ast a ]_{k+1 \rightarrow p}\ d\mu^p

where {F^{(k)} \ast a \in L(\Omega^{k+1})_{sym}} is the symmetric function

\displaystyle F^{(k)} \ast a(\omega_1,\dots,\omega_{k+1}) := \sum_{j=1}^{k+1} F^{(k)}(\omega_1,\dots,\omega_{j-1},\omega_{j+1},\dots,\omega_{k+1}) a(\omega_j).

But from (2) one has

\displaystyle [F^{(k)} (a_1 + \dots + a_k)]_{k \rightarrow p} + [ F^{(k)} \ast a ]_{k+1 \rightarrow p} = [F^{(k)}]_{k \rightarrow p} [a]_{1 \rightarrow p}

and the claim follows. \Box

Remark 4 It is also instructive to prove this lemma in the special case when {p} is a natural number, in which case the fractional dimensional integral {\int_{\Omega^p} F^{(p)}\ d\mu(t)^p} can be interpreted as a classical integral. In this case, the identity (10) is immediate from applying the product rule to (9) to conclude that

\displaystyle \frac{d}{dt} d\mu(t)^p = [a(t)]_{1 \rightarrow p} d\mu(t)^p.

One could in fact derive (10) for arbitrary real or complex {p} from the case when {p} is a natural number by an extrapolation argument; see the appendix of my paper with Bennett and Carbery for details.
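In the same spirit as Remark 4, one can test (10) numerically at a natural number {p}, where both sides are classical objects. The following sketch does this on a three-point domain with arbitrary illustrative choices of {\mu_0}, {a}, and {F^{(p)}}, comparing a finite difference of the left-hand side with the right-hand side:

```python
import itertools
import math

Omega = [0, 1, 2]                          # a three-point domain
mu0 = {0: 0.5, 1: 0.3, 2: 0.2}             # arbitrary initial weights
a = {0: 1.0, 1: -0.5, 2: 2.0}              # arbitrary (t-independent) choice of a
p = 3

def mu(t):
    # solves d/dt mu(t) = a mu(t), i.e. equation (9)
    return {w: mu0[w] * math.exp(a[w] * t) for w in Omega}

def F(x, y, z):
    # an arbitrary symmetric function of p = 3 variables
    return x * y + y * z + z * x + 1.0

def integral(t, weight=lambda *xs: 1.0):
    m = mu(t)
    return sum(F(*xs) * weight(*xs) * math.prod(m[x] for x in xs)
               for xs in itertools.product(Omega, repeat=p))

t, h = 0.7, 1e-6
numeric = (integral(t + h) - integral(t - h)) / (2 * h)        # d/dt of the left side
exact = integral(t, weight=lambda *xs: sum(a[x] for x in xs))  # right side: F [a]_{1->p}
assert abs(numeric - exact) < 1e-6 * abs(exact)
print(numeric, exact)
```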

Let us give a simple PDE application of this lemma as illustration:

Proposition 5 (Heat flow monotonicity) Let {u: [0,+\infty) \times {\bf R}^d \rightarrow {\bf R}} be a solution to the heat equation {u_t = \Delta u} with initial data {\mu_0} a rapidly decreasing finite non-negative Radon measure, or more explicitly

\displaystyle u(t,x) = \frac{1}{(4\pi t)^{d/2}} \int_{{\bf R}^d} e^{-|x-y|^2/4t}\ d\mu_0(y)

for all {t>0}. Then for any {p>0}, the quantity

\displaystyle Q_p(t) := t^{\frac{d}{2} (p-1)} \int_{{\bf R}^d} u(t,x)^p\ dx

is monotone non-decreasing in {t \in (0,+\infty)} for {1 < p < \infty}, constant for {p=1}, and monotone non-increasing for {0 < p < 1}.

Proof: By a limiting argument we may assume that {d\mu_0} is absolutely continuous, with Radon-Nikodym derivative a test function; this is more than enough regularity to justify the arguments below.

For any {(t,x) \in (0,+\infty) \times {\bf R}^d}, let {\mu(t,x)} denote the Radon measure

\displaystyle d\mu(t,x)(y) := \frac{1}{(4\pi)^{d/2}} e^{-|x-y|^2/4t}\ d\mu_0(y).

Then the quantity {Q_p(t)} can be written as a fractional dimensional integral

\displaystyle Q_p(t) = t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p}\ d\mu(t,x)^p\ dx.

Observe that

\displaystyle \frac{\partial}{\partial t} d\mu(t,x) = \frac{|x-y|^2}{4t^2} d\mu(t,x)

and thus by Lemma 3 and the product rule

\displaystyle \frac{d}{dt} Q_p(t) = -\frac{d}{2t} Q_p(t) + t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p} [\frac{|x-y|^2}{4t^2}]_{1 \rightarrow p} d\mu(t,x)^p\ dx \ \ \ \ \ (13)

 

where we use {y} for the variable of integration in the factor space {{\bf R}^d} of {({\bf R}^d)^p}.

To simplify this expression we will take advantage of integration by parts in the {x} variable. Specifically, in any direction {x_j}, we have

\displaystyle \frac{\partial}{\partial x_j} d\mu(t,x) = -\frac{x_j-y_j}{2t} d\mu(t,x)

and hence by Lemma 3

\displaystyle \frac{\partial}{\partial x_j} \int_{({\bf R}^d)^p}\ d\mu(t,x)^p\ dx = - \int_{({\bf R}^d)^p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx.

Multiplying by {x_j} and integrating by parts, we see that

\displaystyle  d\, t^{d/2} Q_p(t) = \int_{{\bf R}^d} \int_{({\bf R}^d)^p} x_j [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx

where we use the Einstein summation convention in {j}. Similarly, if {F_j(y)} is any reasonable function depending only on {y}, we have

\displaystyle \frac{\partial}{\partial x_j} \int_{({\bf R}^d)^p}[F_j(y)]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx

\displaystyle = - \int_{({\bf R}^d)^p} [F_j(y)]_{1 \rightarrow p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx

and hence on integration by parts

\displaystyle  0 = \int_{{\bf R}^d} \int_{({\bf R}^d)^p} [F_j(y)]_{1 \rightarrow p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx.

We conclude that

\displaystyle \frac{d}{2t} Q_p(t) = t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p} (x_j - [F_j(y)]_{1 \rightarrow p}) [\frac{x_j-y_j}{4t^2}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx

and thus by (13)

\displaystyle \frac{d}{dt} Q_p(t) = \frac{1}{4t^{\frac{d}{2}+2}} \int_{{\bf R}^d} \int_{({\bf R}^d)^p}

\displaystyle [(x_j-y_j)(x_j-y_j)]_{1 \rightarrow p} - (x_j - [F_j(y)]_{1 \rightarrow p}) [x_j - y_j]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx.

The choice of {F_j} that then achieves the most cancellation turns out to be {F_j(y) = \frac{1}{p} y_j} (this cancels the terms that are linear or quadratic in the {x_j}), so that {x_j - [F_j(y)]_{1 \rightarrow p} = \frac{1}{p} [x_j - y_j]_{1 \rightarrow p}}. Repeating the calculations establishing (7), one has

\displaystyle \int_{({\bf R}^d)^p} [(x_j-y_j)(x_j-y_j)]_{1 \rightarrow p}\ d\mu^p = p \mathop{\bf E} |x-Y|^2 (\int_{{\bf R}^d}\ d\mu)^{p}

and

\displaystyle \int_{({\bf R}^d)^p} [x_j-y_j]_{1 \rightarrow p} [x_j-y_j]_{1 \rightarrow p}\ d\mu^p

\displaystyle = (p \mathbf{Var}(x-Y) + p^2 |\mathop{\bf E} x-Y|^2) (\int_{{\bf R}^d}\ d\mu)^{p}

where {Y} is the random variable drawn from {{\bf R}^d} with the normalised probability measure {\mu / \int_{{\bf R}^d}\ d\mu}. Since {\mathop{\bf E} |x-Y|^2 = \mathbf{Var}(x-Y) + |\mathop{\bf E} x-Y|^2}, one thus has

\displaystyle \frac{d}{dt} Q_p(t) = (p-1) \frac{1}{4t^{\frac{d}{2}+2}} \int_{{\bf R}^d} \mathbf{Var}(x-Y) (\int_{{\bf R}^d}\ d\mu)^{p}\ dx. \ \ \ \ \ (14)

 

This expression is clearly non-negative for {p>1}, equal to zero for {p=1}, and non-positive for {0 < p < 1}, giving the claim. (One could simplify {\mathbf{Var}(x-Y)} here as {\mathbf{Var}(Y)} if desired, though it is not strictly necessary to do so for the proof.) \Box

Remark 6 As with Remark 4, one can also establish the identity (14) first for natural numbers {p} by direct computation avoiding the theory of fractional dimensional integrals, and then extrapolate to the case of more general values of {p}. This particular identity is also simple enough that it can be directly established by integration by parts without much difficulty, even for fractional values of {p}.
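One can also see Proposition 5 numerically; the following sketch (in dimension {d=1}, with the arbitrary illustrative choice {\mu_0 = \delta_0 + \delta_1}) tabulates {Q_p(t)} at a few times for {p = 1/2, 1, 3}:

```python
import numpy as np

def u(t, x):
    # heat extension of mu_0 = delta_0 + delta_1 in dimension d = 1
    return (np.exp(-x ** 2 / (4 * t))
            + np.exp(-(x - 1) ** 2 / (4 * t))) / np.sqrt(4 * np.pi * t)

def Q(p, t):
    # Q_p(t) = t^{(p-1)/2} * integral of u(t,x)^p, by a Riemann sum on a wide grid
    x = np.linspace(-60.0, 60.0, 200001)
    dx = x[1] - x[0]
    return t ** ((p - 1) / 2) * float((u(t, x) ** p).sum() * dx)

for p in (0.5, 1.0, 3.0):
    print(p, [round(Q(p, t), 6) for t in (0.5, 1.0, 2.0, 4.0)])
# Q_p should be nonincreasing in t for p = 0.5, constant (= 2) for p = 1,
# and nondecreasing for p = 3
```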

A more complicated version of this argument establishes the non-endpoint multilinear Kakeya inequality (without any logarithmic loss in a scale parameter {R}); this was established in my previous paper with Jon Bennett and Tony Carbery, but using the “natural number {p} first” approach rather than using the current formalism of fractional dimensional integration. However, the arguments can be translated into this formalism without much difficulty; we do so below the fold. (To simplify the exposition slightly we will not address issues of establishing enough regularity and integrability to justify all the manipulations, though in practice this can be done by standard limiting arguments.)


The AMS and MAA have recently published (and made available online) a collection of essays entitled “Living Proof: Stories of Resilience Along the Mathematical Journey”.  Each author contributes a story of how they encountered some internal or external difficulty in advancing their mathematical career, and how they were able to deal with such difficulties.  I myself have contributed one of these essays; I was initially somewhat surprised when I was approached for a contribution, as my career trajectory has been somewhat of an outlier, and I have been very fortunate not to have experienced, to the same extent, many of the obstacles that other contributors write about in this text.  Nevertheless there was a turning point in my career, which I write about here, during my graduate years, when I found that the improvised and poorly disciplined study habits that had been able to get me into graduate school, due to an over-reliance on raw mathematical ability, were completely inadequate to handle the graduate qualifying exam.  With a combination of an astute advisor and some sheer luck, I was able to pass the exam and finally develop a more sustainable approach to learning and doing mathematics, but it could easily have gone quite differently.  (My 25-year-old writeup of this examination, complete with spelling errors, may be found here.)

 

 

The following situation is very common in modern harmonic analysis: one has a large scale parameter {N} (sometimes written as {N=1/\delta} in the literature for some small scale parameter {\delta}, or as {N=R} for some large radius {R}), which ranges over some unbounded subset of {[1,+\infty)} (e.g. all sufficiently large real numbers {N}, or all powers of two), and one has some positive quantity {D(N)} depending on {N} that is known to be of polynomial size in the sense that

\displaystyle  C^{-1} N^{-C} \leq D(N) \leq C N^C \ \ \ \ \ (1)

for all {N} in the range and some constant {C>0}, and one wishes to obtain a subpolynomial upper bound for {D(N)}, by which we mean an upper bound of the form

\displaystyle  D(N) \leq C_\varepsilon N^\varepsilon \ \ \ \ \ (2)

for all {\varepsilon>0} and all {N} in the range, where {C_\varepsilon>0} can depend on {\varepsilon} but is independent of {N}. In many applications, this bound is nearly tight in the sense that one can easily establish a matching lower bound

\displaystyle  D(N) \geq C_\varepsilon N^{-\varepsilon}

in which case the property of having a subpolynomial upper bound is equivalent to that of being subpolynomial size in the sense that

\displaystyle  C_\varepsilon N^{-\varepsilon} \leq D(N) \leq C_\varepsilon N^\varepsilon \ \ \ \ \ (3)

for all {\varepsilon>0} and all {N} in the range. It would naturally be of interest to tighten these bounds further, for instance to show that {D(N)} is polylogarithmic or even bounded in size, but a subpolynomial bound is already sufficient for many applications.

Let us give some illustrative examples of this type of problem:

Example 1 (Kakeya conjecture) Here {N} ranges over all of {[1,+\infty)}. Let {d \geq 2} be a fixed dimension. For each {N \geq 1}, we pick a maximal {1/N}-separated set of directions {\Omega_N \subset S^{d-1}}. We let {D(N)} be the smallest constant for which one has the Kakeya inequality

\displaystyle  \| \sum_{\omega \in \Omega_N} 1_{T_\omega} \|_{L^{\frac{d}{d-1}}({\bf R}^d)} \leq D(N),

where {T_\omega} is a {1/N \times 1}-tube oriented in the direction {\omega}. The Kakeya maximal function conjecture is then equivalent to the assertion that {D(N)} has a subpolynomial upper bound (or equivalently, is of subpolynomial size). Currently this is only known in dimension {d=2}.

Example 2 (Restriction conjecture for the sphere) Here {N} ranges over all of {[1,+\infty)}. Let {d \geq 2} be a fixed dimension. We let {D(N)} be the smallest constant for which one has the restriction inequality

\displaystyle  \| \widehat{fd\sigma} \|_{L^{\frac{2d}{d-1}}(B(0,N))} \leq D(N) \| f \|_{L^\infty(S^{d-1})}

for all bounded measurable functions {f} on the unit sphere {S^{d-1}} equipped with surface measure {d\sigma}, where {B(0,N)} is the ball of radius {N} centred at the origin. The restriction conjecture of Stein for the sphere is then equivalent to the assertion that {D(N)} has a subpolynomial upper bound (or equivalently, is of subpolynomial size). Currently this is only known in dimension {d=2}.

Example 3 (Multilinear Kakeya inequality) Again {N} ranges over all of {[1,+\infty)}. Let {d \geq 2} be a fixed dimension, and let {S_1,\dots,S_d} be compact subsets of the sphere {S^{d-1}} which are transverse in the sense that there is a uniform lower bound {|\omega_1 \wedge \dots \wedge \omega_d| \geq c > 0} for the wedge product of directions {\omega_i \in S_i} for {i=1,\dots,d} (equivalently, there is no hyperplane through the origin that intersects all of the {S_i}). For each {N \geq 1}, we let {D(N)} be the smallest constant for which one has the multilinear Kakeya inequality

\displaystyle  \| \mathrm{geom} \sum_{T \in {\mathcal T}_i} 1_{T} \|_{L^{\frac{d}{d-1}}(B(0,N))} \leq D(N) \mathrm{geom} \# {\mathcal T}_i,

where for each {i=1,\dots,d}, {{\mathcal T}_i} is a collection of infinite tubes in {{\bf R}^d} of radius {1} oriented in a direction in {S_i}, which are separated in the sense that for any two tubes {T,T'} in {{\mathcal T}_i}, either the directions of {T,T'} differ by an angle of at least {1/N}, or {T,T'} are disjoint; and {\mathrm{geom} = \mathrm{geom}_{1 \leq i \leq d}} is our notation for the geometric mean

\displaystyle  \mathrm{geom} a_i := (a_1 \dots a_d)^{1/d}.

The multilinear Kakeya inequality of Bennett, Carbery, and myself establishes that {D(N)} is of subpolynomial size; a later argument of Guth improves this further by showing that {D(N)} is bounded (and in fact comparable to {1}).

Example 4 (Multilinear restriction theorem) Once again {N} ranges over all of {[1,+\infty)}. Let {d \geq 2} be a fixed dimension, and let {S_1,\dots,S_d} be compact subsets of the sphere {S^{d-1}} which are transverse as in the previous example. For each {N \geq 1}, we let {D(N)} be the smallest constant for which one has the multilinear restriction inequality

\displaystyle  \| \mathrm{geom} \widehat{f_id\sigma} \|_{L^{\frac{2d}{d-1}}(B(0,N))} \leq D(N) \mathrm{geom} \| f_i \|_{L^2(S_i)}

for all bounded measurable functions {f_i} on {S_i} for {i=1,\dots,d}. Then the multilinear restriction theorem of Bennett, Carbery, and myself establishes that {D(N)} is of subpolynomial size; it is known to be bounded for {d=2} (as can be easily verified from Plancherel’s theorem), but it remains open whether it is bounded for any {d>2}.

Example 5 (Decoupling for the paraboloid) {N} now ranges over the square numbers. Let {d \geq 2}, and subdivide the unit cube {[0,1]^{d-1}} into {N^{(d-1)/2}} cubes {Q} of sidelength {1/N^{1/2}}. For any {g \in L^1([0,1]^{d-1})}, define the extension operators

\displaystyle  E_{[0,1]^{d-1}} g( x', x_d ) := \int_{[0,1]^{d-1}} e^{2\pi i (x' \cdot \xi + x_d |\xi|^2)} g(\xi)\ d\xi

and

\displaystyle  E_Q g( x', x_d ) := \int_{Q} e^{2\pi i (x' \cdot \xi + x_d |\xi|^2)} g(\xi)\ d\xi

for {x' \in {\bf R}^{d-1}} and {x_d \in {\bf R}}. We also introduce the weight function

\displaystyle  w_{B(0,N)}(x) := (1 + \frac{|x|}{N})^{-100d}.

For any {p}, let {D_p(N)} be the smallest constant for which one has the decoupling inequality

\displaystyle  \| E_{[0,1]^{d-1}} g \|_{L^p(w_{B(0,N)})} \leq D_p(N) (\sum_Q \| E_Q g \|_{L^p(w_{B(0,N)})}^2)^{1/2}.

The decoupling theorem of Bourgain and Demeter asserts that {D_p(N)} is of subpolynomial size for all {p} in the optimal range {2 \leq p \leq \frac{2(d+1)}{d-1}}.

Example 6 (Decoupling for the moment curve) {N} now ranges over the natural numbers. Let {d \geq 2}, and subdivide {[0,1]} into {N} intervals {J} of length {1/N}. For any {g \in L^1([0,1])}, define the extension operators

\displaystyle  E_{[0,1]} g(x_1,\dots,x_d) = \int_{[0,1]} e^{2\pi i ( x_1 \xi + x_2 \xi^2 + \dots + x_d \xi^d)} g(\xi)\ d\xi

and more generally

\displaystyle  E_J g(x_1,\dots,x_d) = \int_{J} e^{2\pi i ( x_1 \xi + x_2 \xi^2 + \dots + x_d \xi^d)} g(\xi)\ d\xi

for {(x_1,\dots,x_d) \in {\bf R}^d}. For any {p}, let {D_p(N)} be the smallest constant for which one has the decoupling inequality

\displaystyle  \| E_{[0,1]} g \|_{L^p(w_{B(0,N^d)})} \leq D_p(N) (\sum_J \| E_J g \|_{L^p(w_{B(0,N^d)})}^2)^{1/2}.

It was shown by Bourgain, Demeter, and Guth that {D_p(N)} is of subpolynomial size for all {p} in the optimal range {2 \leq p \leq d(d+1)}, which among other things implies the Vinogradov main conjecture (as discussed in this previous post).

It is convenient to use asymptotic notation to express these estimates. We write {X \lesssim Y}, {X = O(Y)}, or {Y \gtrsim X} to denote the inequality {|X| \leq CY} for some constant {C} independent of the scale parameter {N}, and write {X \sim Y} for {X \lesssim Y \lesssim X}. We write {X = o(Y)} to denote a bound of the form {|X| \leq c(N) Y} where {c(N) \rightarrow 0} as {N \rightarrow \infty} along the given range of {N}. We then write {X \lessapprox Y} for {X \lesssim N^{o(1)} Y}, and {X \approx Y} for {X \lessapprox Y \lessapprox X}. Then the statement that {D(N)} is of polynomial size can be written as

\displaystyle  D(N) \sim N^{O(1)},

while the statement that {D(N)} has a subpolynomial upper bound can be written as

\displaystyle  D(N) \lessapprox 1

and similarly the statement that {D(N)} is of subpolynomial size is simply

\displaystyle  D(N) \approx 1.

Many modern approaches to bounding quantities like {D(N)} in harmonic analysis rely on some sort of induction on scales approach in which {D(N)} is bounded using quantities such as {D(N^\theta)} for some exponents {0 < \theta < 1}. For instance, suppose one is somehow able to establish the inequality

\displaystyle  D(N) \lessapprox D(\sqrt{N}) \ \ \ \ \ (4)

for all {N \geq 1}, and suppose that {D} is also known to be of polynomial size. Then this implies that {D} has a subpolynomial upper bound. Indeed, one can iterate this inequality to show that

\displaystyle  D(N) \lessapprox D(N^{1/2^k})

for any fixed {k}; using the polynomial size hypothesis one thus has

\displaystyle  D(N) \lessapprox N^{C/2^k}

for some constant {C} independent of {k}. As {k} can be arbitrarily large, we conclude that {D(N) \lesssim N^\varepsilon} for any {\varepsilon>0}, and hence {D} is of subpolynomial size. (This sort of iteration is used for instance in my paper with Bennett and Carbery to derive the multilinear restriction theorem from the multilinear Kakeya theorem.)

Exercise 7 If {D} is of polynomial size, and obeys the inequality

\displaystyle  D(N) \lessapprox D(N^{1-\varepsilon}) + N^{O(\varepsilon)}

for any fixed {\varepsilon>0}, where the implied constant in the {O(\varepsilon)} notation is independent of {\varepsilon}, show that {D} has a subpolynomial upper bound. This type of inequality is used to equate various linear estimates in harmonic analysis with their multilinear counterparts; see for instance this paper of myself, Vargas, and Vega for an early example of this method.

In more recent years, more sophisticated induction on scales arguments have emerged in which one or more auxiliary quantities besides {D(N)} also come into play. Here is one example, this time being an abstraction of a short proof of the multilinear Kakeya inequality due to Guth. Let {D(N)} be the quantity in Example 3. We define {D(N,M)} similarly to {D(N)} for any {M \geq 1}, except that we now also require that the diameter of each set {S_i} is at most {1/M}. One can then observe the following estimates:

  • (Triangle inequality) For any {N,M \geq 1}, we have

    \displaystyle  D(N,M) = M^{O(1)} D(N). \ \ \ \ \ (5)

  • (Multiplicativity) For any {N_1,N_2 = N^{O(1)}}, one has

    \displaystyle  D(N_1 N_2, M) \lessapprox D(N_1, M) D(N_2, M). \ \ \ \ \ (6)

  • (Loomis-Whitney inequality) We have

    \displaystyle  D(N,N) \lessapprox 1. \ \ \ \ \ (7)

These inequalities now imply that {D} has a subpolynomial upper bound, as we now demonstrate. Let {k} be a large natural number (independent of {N}) to be chosen later. From many iterations of (6) we have

\displaystyle  D(N, N^{1/k}) \lessapprox D(N^{1/k},N^{1/k})^k

and hence by (7) (with {N} replaced by {N^{1/k}}) and (5)

\displaystyle  D(N) \lessapprox N^{O(1/k)}

where the implied constant in the {O(1/k)} exponent does not depend on {k}. As {k} can be arbitrarily large, the claim follows. We remark that a nearly identical scheme lets one deduce decoupling estimates for the three-dimensional cone from that of the two-dimensional paraboloid; see the final section of this paper of Bourgain and Demeter.

Now we give a slightly more sophisticated example, abstracted from the proof of {L^p} decoupling of the paraboloid by Bourgain and Demeter, as described in this study guide after specialising the dimension to {2} and the exponent {p} to the endpoint {p=6} (the argument is also more or less summarised in this previous post). (In the cited papers, the argument was phrased only for the non-endpoint case {p<6}, but it has been observed independently by many experts that the argument extends with only minor modifications to the endpoint {p=6}.) Here we have a quantity {D_p(N)} that we wish to show is of subpolynomial size. For any {0 < \varepsilon < 1} and {0 \leq u \leq 1}, one can define an auxiliary quantity {A_{p,u,\varepsilon}(N)}. The precise definitions of {D_p(N)} and {A_{p,u,\varepsilon}(N)} are given in the study guide (where they are called {\mathrm{Dec}_2(1/N,p)} and {A_p(u, B(0,N^2), u, g)} respectively, setting {\delta = 1/N} and {\nu = \delta^\varepsilon}) but will not be of importance to us for this discussion. Suffice to say that the following estimates are known:

  • (Crude upper bound for {D_p}) {D_p(N)} is of polynomial size: {D_p(N) \sim N^{O(1)}}.
  • (Bilinear reduction, using parabolic rescaling) For any {0 \leq u \leq 1}, one has

    \displaystyle  D_p(N) \lessapprox D_p(N^{1-\varepsilon}) + N^{O(\varepsilon)+O(u)} A_{p,u,\varepsilon}(N). \ \ \ \ \ (8)

  • (Crude upper bound for {A_{p,u,\varepsilon}(N)}) For any {0 \leq u \leq 1} one has

    \displaystyle  A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)+O(u)} D_p(N) \ \ \ \ \ (9)

  • (Application of multilinear Kakeya and {L^2} decoupling) If {\varepsilon, u} are sufficiently small (e.g. both less than {1/4}), then

    \displaystyle  A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{p,2u,\varepsilon}(N)^{1/2} D_p(N^{1-u})^{1/2}. \ \ \ \ \ (10)

In all of these bounds the implied constant exponents such as {O(\varepsilon)} or {O(u)} are independent of {\varepsilon} and {u}, although the implied constants in the {\lessapprox} notation can depend on both {\varepsilon} and {u}. Here we gloss over an annoying technicality: quantities such as {N^{1-\varepsilon}}, {N^{1-u}}, or {N^u} might not be integers (and might not divide evenly into {N}), as is needed for the application to decoupling theorems; this can be resolved by restricting the scales involved to powers of two and restricting the values of {\varepsilon, u} to certain rational values, which introduces some complications to the later arguments below, which we shall simply ignore as they do not significantly affect the numerology.

It turns out that these estimates imply that {D_p(N)} is of subpolynomial size. We give the argument as follows. As {D_p(N)} is known to be of polynomial size, there is some {\eta>0} for which one has the bound

\displaystyle  D_p(N) \lessapprox N^\eta \ \ \ \ \ (11)

for all {N}. We can pick {\eta} to be the minimal exponent for which this bound is attained: thus

\displaystyle  \eta = \limsup_{N \rightarrow \infty} \frac{\log D_p(N)}{\log N}. \ \ \ \ \ (12)

We will call this the upper exponent of {D_p(N)}. We need to show that {\eta \leq 0}. We assume for contradiction that {\eta > 0}. Let {\varepsilon>0} be a sufficiently small quantity depending on {\eta} to be chosen later. From (10) we then have

\displaystyle  A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{p,2u,\varepsilon}(N)^{1/2} N^{\eta (\frac{1}{2} - \frac{u}{2})}

for any sufficiently small {u}. A routine iteration then gives

\displaystyle  A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{p,2^k u,\varepsilon}(N)^{1/2^k} N^{\eta (1 - \frac{1}{2^k} - k\frac{u}{2})}

for any {k \geq 1} that is independent of {N}, if {u} is sufficiently small depending on {k}. A key point here is that the implied constant in the exponent {O(\varepsilon)} is uniform in {k} (the constant comes from summing a convergent geometric series). We now use the crude bound (9) followed by (11) and conclude that

\displaystyle  A_{p,u,\varepsilon}(N) \lessapprox N^{\eta (1 - k\frac{u}{2}) + O(\varepsilon) + O(u)}.

Applying (8) we then have

\displaystyle  D_p(N) \lessapprox N^{\eta(1-\varepsilon)} + N^{\eta (1 - k\frac{u}{2}) + O(\varepsilon) + O(u)}.

If we choose {k} sufficiently large depending on {\eta} (which was assumed to be positive), then the negative term {-\eta k \frac{u}{2}} will dominate the {O(u)} term. If we then pick {u} sufficiently small depending on {k}, then finally {\varepsilon} sufficiently small depending on all previous quantities, we will obtain {D_p(N) \lessapprox N^{\eta'}} for some {\eta'} strictly less than {\eta}, contradicting the definition of {\eta}. Thus {\eta} cannot be positive, and hence {D_p(N)} has a subpolynomial upper bound as required.
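The numerology of this final step can be illustrated with a toy computation: pretending that all the {O(\varepsilon)} and {O(u)} implied exponent constants equal a single assumed constant {C}, the following sketch (all numerical choices purely illustrative) picks {k}, {u}, {\varepsilon} in the order indicated above and confirms that the resulting exponent bound drops strictly below {\eta}:

```python
# pretend all O(eps) and O(u) implied exponent constants equal C
C, eta = 10.0, 1.0                     # assumed constant; assumed upper exponent > 0

k = int(4 * C / eta) + 1               # k large depending on eta: eta*k*u/2 beats C*u
u = 0.01                               # u small depending on k
eps = eta * k * u / (8 * C)            # eps small depending on everything else

bilinear = eta * (1 - k * u / 2) + C * eps + C * u   # exponent of the A term
new_exponent = max((1 - eps) * eta, bilinear)        # exponent bound from (8)
print(new_exponent, "<", eta, ":", new_exponent < eta)
```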

Exercise 8 Show that one still obtains a subpolynomial upper bound if the estimate (10) is replaced with

\displaystyle  A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{p,2u,\varepsilon}(N)^{1-\theta} D_p(N)^{\theta}

for some constant {0 \leq \theta < 1/2}, so long as we also improve (9) to

\displaystyle  A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} D_p(N^{1-u}).

(This variant of the argument lets one handle the non-endpoint cases {2 < p < 6} of the decoupling theorem for the paraboloid.)

To establish decoupling estimates for the moment curve, restricting to the endpoint case {p = d(d+1)} for sake of discussion, an even more sophisticated induction on scales argument was deployed by Bourgain, Demeter, and Guth. The proof is discussed in this previous blog post, but let us just describe an abstract version of the induction on scales argument. To bound the quantity {D_p(N) = D_{d(d+1)}(N)}, some auxiliary quantities {A_{t,q,s,\varepsilon}(N)} are introduced for various exponents {1 \leq t \leq \infty} and {0 \leq q,s \leq 1} and {\varepsilon>0}, with the following bounds:

  • (Crude upper bound for {D}) {D_p(N)} is of polynomial size: {D_p(N) \sim N^{O(1)}}.
  • (Multilinear reduction, using non-isotropic rescaling) For any {0 \leq q,s \leq 1} and {1 \leq t \leq \infty}, one has

    \displaystyle  D_p(N) \lessapprox D_p(N^{1-\varepsilon}) + N^{O(\varepsilon)+O(q)+O(s)} A_{t,q,s,\varepsilon}(N). \ \ \ \ \ (13)

  • (Crude upper bound for {A_{t,q,s,\varepsilon}(N)}) For any {0 \leq q,s \leq 1} and {1 \leq t \leq \infty} one has

    \displaystyle  A_{t,q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)+O(q)+O(s)} D_p(N) \ \ \ \ \ (14)

  • (Hölder) For {0 \leq q, s \leq 1} and {1 \leq t_0 \leq t_1 \leq \infty} one has

    \displaystyle  A_{t_0,q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{t_1,q,s,\varepsilon}(N) \ \ \ \ \ (15)

    and also

    \displaystyle  A_{t_\theta,q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{t_0,q,s,\varepsilon}(N)^{1-\theta} A_{t_1,q,s,\varepsilon}(N)^\theta \ \ \ \ \ (16)

    whenever {0 \leq \theta \leq 1}, where {\frac{1}{t_\theta} = \frac{1-\theta}{t_0} + \frac{\theta}{t_1}}.

  • (Rescaled decoupling hypothesis) For {0 \leq q,s \leq 1}, one has

    \displaystyle  A_{p,q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} D_p(N^{1-q}). \ \ \ \ \ (17)

  • (Lower dimensional decoupling) If {1 \leq k \leq d-1} and {q \leq s/k}, then

    \displaystyle  A_{k(k+1),q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{k(k+1),s/k,s,\varepsilon}(N). \ \ \ \ \ (18)

  • (Multilinear Kakeya) If {1 \leq k \leq d-1} and {0 \leq q \leq 1}, then

    \displaystyle  A_{kp/d,q,kq,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{kp/d,q,(k+1)q,\varepsilon}(N). \ \ \ \ \ (19)

It is now substantially less obvious that these estimates can be combined to demonstrate that {D(N)} is of subpolynomial size; nevertheless this can be done. A somewhat complicated arrangement of the argument (involving some rather unmotivated choices of expressions to induct over) appears in my previous blog post; I give an alternate proof later in this post.

These examples indicate a general strategy to establish that some quantity {D(N)} is of subpolynomial size, by

  • (i) Introducing some family of related auxiliary quantities, often parameterised by several further parameters;
  • (ii) establishing as many bounds between these quantities and the original quantity {D(N)} as possible; and then
  • (iii) appealing to some sort of “induction on scales” to conclude.

The first two steps (i), (ii) depend very much on the harmonic analysis nature of the quantities {D(N)} and the related auxiliary quantities, and the estimates in (ii) will typically be proven from various harmonic analysis inputs such as Hölder’s inequality, rescaling arguments, decoupling estimates, or Kakeya type estimates. The final step (iii) requires no knowledge of where these quantities come from in harmonic analysis, but the iterations involved can become extremely complicated.

In this post I would like to observe that one can clean up and made more systematic this final step (iii) by passing to upper exponents (12) to eliminate the role of the parameter {N} (and also “tropicalising” all the estimates), and then taking similar limit superiors to eliminate some other less important parameters, until one is left with a simple linear programming problem (which, among other things, could be amenable to computer-assisted proving techniques). This method is analogous to that of passing to a simpler asymptotic limit object in many other areas of mathematics (for instance using the Furstenberg correspondence principle to pass from a combinatorial problem to an ergodic theory problem, as discussed in this previous post). We use the limit superior exclusively in this post, but many of the arguments here would also apply with one of the other generalised limit functionals discussed in this previous post, such as ultrafilter limits.

For instance, if {\eta} is the upper exponent of a quantity {D(N)} of polynomial size obeying (4), then by comparing the upper exponents of both sides of (4) one arrives at the scalar inequality

\displaystyle  \eta \leq \frac{1}{2} \eta

from which it is immediate that {\eta \leq 0}, giving the required subpolynomial upper bound. Notice how the passage to upper exponents converts the {\lessapprox} estimate to a simpler inequality {\leq}.

Exercise 9 Repeat Exercise 7 using this method.

Similarly, given the quantities {D(N,M)} obeying the axioms (5), (6), (7), and assuming that {D(N)} is of polynomial size (which is easily verified for the application at hand), we see that for any real numbers {a, u \geq 0}, the quantity {D(N^a,N^u)} is also of polynomial size and hence has some upper exponent {\eta(a,u)}; meanwhile {D(N)} itself has some upper exponent {\eta}. By reparameterising we have the homogeneity

\displaystyle  \eta(\lambda a, \lambda u) = \lambda \eta(a,u)

for any {\lambda \geq 0}. Also, comparing the upper exponents of both sides of the axioms (5), (6), (7) we arrive at the inequalities

\displaystyle  \eta(1,u) = \eta + O(u)

\displaystyle  \eta(a_1+a_2,u) \leq \eta(a_1,u) + \eta(a_2,u)

\displaystyle  \eta(1,1) \leq 0.

For any natural number {k}, the third inequality combined with homogeneity gives {\eta(1/k,1/k) \leq 0}, which when combined with the second inequality gives {\eta(1,1/k) \leq k \eta(1/k,1/k) \leq 0}, which on combination with the first estimate gives {\eta \leq O(1/k)}. Sending {k} to infinity we obtain {\eta \leq 0} as required.
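
(To spell the chain out in one line: for each natural number {k} one has

\displaystyle  \eta \leq \eta(1,1/k) + O(1/k) \leq k \eta(1/k,1/k) + O(1/k) = \eta(1,1) + O(1/k) \leq O(1/k),

using in turn the first estimate, subadditivity, homogeneity, and the third inequality.)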

Now suppose that {D_p(N)}, {A_{p,u,\varepsilon}(N)} obey the axioms (8), (9), (10). For any fixed {u,\varepsilon}, the quantity {A_{p,u,\varepsilon}(N)} is of polynomial size (thanks to (9) and the polynomial size of {D_p}), and hence has some upper exponent {\eta(u,\varepsilon)}; similarly {D_p(N)} has some upper exponent {\eta}. (Actually, strictly speaking our axioms only give an upper bound on {A_{p,u,\varepsilon}} so we have to temporarily admit the possibility that {\eta(u,\varepsilon)=-\infty}, though this will soon be eliminated anyway.) Taking upper exponents of all the axioms we then conclude that

\displaystyle  \eta \leq \max( (1-\varepsilon) \eta, \eta(u,\varepsilon) + O(\varepsilon) + O(u) ) \ \ \ \ \ (20)

\displaystyle  \eta(u,\varepsilon) \leq \eta + O(\varepsilon) + O(u)

\displaystyle  \eta(u,\varepsilon) \leq \frac{1}{2} \eta(2u,\varepsilon) + \frac{1}{2} (1-u) \eta + O(\varepsilon)

for all {0 \leq u \leq 1} and {0 \leq \varepsilon \leq 1}.

Assume for contradiction that {\eta>0}; then {(1-\varepsilon) \eta < \eta}, and so the statement (20) simplifies to

\displaystyle  \eta \leq \eta(u,\varepsilon) + O(\varepsilon) + O(u).

At this point we can eliminate the role of {\varepsilon} and simplify the system by taking a second limit superior. If we write

\displaystyle  \eta(u) := \limsup_{\varepsilon \rightarrow 0} \eta(u,\varepsilon)

then on taking limit superiors of the previous inequalities we conclude that

\displaystyle  \eta(u) \leq \eta + O(u)

\displaystyle  \eta(u) \leq \frac{1}{2} \eta(2u) + \frac{1}{2} (1-u) \eta \ \ \ \ \ (21)

\displaystyle  \eta \leq \eta(u) + O(u)

for all {u}; in particular {\eta(u) = \eta + O(u)}. We take advantage of this by taking a further limit superior (or “upper derivative”) in the limit {u \rightarrow 0} to eliminate the role of {u} and simplify the system further. If we define

\displaystyle  \alpha := \limsup_{u \rightarrow 0^+} \frac{\eta(u)-\eta}{u},

so that {\alpha} is the best constant for which {\eta(u) \leq \eta + \alpha u + o(u)} as {u \rightarrow 0}, then {\alpha} is finite, and by inserting this “Taylor expansion” into the right-hand side of (21) we conclude that

\displaystyle  \alpha \leq \alpha - \frac{1}{2} \eta.
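
In more detail: along a sequence of {u} tending to zero that attains the limit superior defining {\alpha}, one has {\eta(u) = \eta + \alpha u + o(u)}, while {\eta(2u) \leq \eta + 2\alpha u + o(u)} by definition of {\alpha}. Inserting both bounds into (21) gives

\displaystyle  \eta + \alpha u + o(u) \leq \frac{1}{2} ( \eta + 2 \alpha u ) + \frac{1}{2} (1-u) \eta + o(u) = \eta + \alpha u - \frac{1}{2} \eta u + o(u),

and extracting the coefficient of {u} yields the claim.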

This leads to a contradiction when {\eta>0}, and hence {\eta \leq 0} as desired.

Exercise 10 Redo Exercise 8 using this method.

The same strategy now clarifies how to proceed with the more complicated system of quantities {A_{t,q,s,\varepsilon}(N)} obeying the axioms (13)–(19) with {D_p(N)} of polynomial size. Let {\eta} be the upper exponent of {D_p(N)}. From (14) we see that for fixed {t,q,s,\varepsilon}, each {A_{t,q,s,\varepsilon}(N)} is also of polynomial size (at least in upper bound) and so has some upper exponent {a(t,q,s,\varepsilon)} (which for now we can permit to be {-\infty}). Taking upper exponents of all the various axioms we can now eliminate {N} and arrive at the simpler axioms

\displaystyle  \eta \leq \max( (1-\varepsilon) \eta, a(t,q,s,\varepsilon) + O(\varepsilon) + O(q) + O(s) )

\displaystyle  a(t,q,s,\varepsilon) \leq \eta + O(\varepsilon) + O(q) + O(s)

\displaystyle  a(t_0,q,s,\varepsilon) \leq a(t_1,q,s,\varepsilon) + O(\varepsilon)

\displaystyle  a(t_\theta,q,s,\varepsilon) \leq (1-\theta) a(t_0,q,s,\varepsilon) + \theta a(t_1,q,s,\varepsilon) + O(\varepsilon)

\displaystyle  a(d(d+1),q,s,\varepsilon) \leq (1-q) \eta + O(\varepsilon)

for all {0 \leq q,s \leq 1}, {1 \leq t \leq \infty}, {1 \leq t_0 \leq t_1 \leq \infty} and {0 \leq \theta \leq 1}, with the lower dimensional decoupling inequality

\displaystyle  a(k(k+1),q,s,\varepsilon) \leq a(k(k+1),s/k,s,\varepsilon) + O(\varepsilon)

for {1 \leq k \leq d-1} and {q \leq s/k}, and the multilinear Kakeya inequality

\displaystyle  a(k(d+1),q,kq,\varepsilon) \leq a(k(d+1),q,(k+1)q,\varepsilon) + O(\varepsilon)

for {1 \leq k \leq d-1} and {0 \leq q \leq 1}.

As before, if we assume for the sake of contradiction that {\eta>0} then the first inequality simplifies to

\displaystyle  \eta \leq a(t,q,s,\varepsilon) + O(\varepsilon) + O(q) + O(s).

We can then again eliminate the role of {\varepsilon} by taking a second limit superior as {\varepsilon \rightarrow 0}, introducing

\displaystyle  a(t,q,s) := \limsup_{\varepsilon \rightarrow 0} a(t,q,s,\varepsilon)

and thus getting the simplified axiom system

\displaystyle  a(t,q,s) \leq \eta + O(q) + O(s) \ \ \ \ \ (22)

\displaystyle  a(t_0,q,s) \leq a(t_1,q,s)

\displaystyle  a(t_\theta,q,s) \leq (1-\theta) a(t_0,q,s) + \theta a(t_1,q,s)

\displaystyle  a(d(d+1),q,s) \leq (1-q) \eta

\displaystyle  \eta \leq a(t,q,s) + O(q) + O(s) \ \ \ \ \ (23)

and also

\displaystyle  a(k(k+1),q,s) \leq a(k(k+1),s/k,s)

for {1 \leq k \leq d-1} and {q \leq s/k}, and

\displaystyle  a(k(d+1),q,kq) \leq a(k(d+1),q,(k+1)q)

for {1 \leq k \leq d-1} and {0 \leq q \leq 1}.

In view of the latter two estimates it is natural to restrict attention to the quantities {a(t,q,kq)} for {1 \leq k \leq d}. By the axioms (22), (23), these quantities are of the form {\eta + O(q)}. We can then eliminate the role of {q} by taking another limit superior

\displaystyle  \alpha_k(t) := \limsup_{q \rightarrow 0} \frac{a(t,q,kq)-\eta}{q}.

The axioms now simplify to

\displaystyle  \alpha_k(t) = O(1)

\displaystyle  \alpha_k(t_0) \leq \alpha_k(t_1) \ \ \ \ \ (24)

\displaystyle  \alpha_k(t_\theta) \leq (1-\theta) \alpha_k(t_0) + \theta \alpha_k(t_1) \ \ \ \ \ (25)

\displaystyle  \alpha_k(d(d+1)) \leq -\eta \ \ \ \ \ (26)

and

\displaystyle  \alpha_j(k(k+1)) \leq \frac{j}{k} \alpha_k(k(k+1)) \ \ \ \ \ (27)

for {1 \leq k \leq d-1} and {k \leq j \leq d}, and

\displaystyle  \alpha_k(k(d+1)) \leq \alpha_{k+1}(k(d+1)) \ \ \ \ \ (28)

for {1 \leq k \leq d-1}.

It turns out that the inequality (27) is strongest when {j=k+1}, thus

\displaystyle  \alpha_{k+1}(k(k+1)) \leq \frac{k+1}{k} \alpha_k(k(k+1)) \ \ \ \ \ (29)

for {1 \leq k \leq d-1}.

From the last two inequalities (28), (29) we see that a special role is likely to be played by the exponents

\displaystyle  \beta_k := \alpha_k(k(k-1))

for {2 \leq k \leq d} and

\displaystyle \gamma_k := \alpha_k(k(d+1))

for {1 \leq k \leq d}. From the convexity (25) and a brief calculation we have

\displaystyle  \alpha_{k+1}(k(d+1)) \leq \frac{1}{d-k+1} \alpha_{k+1}(k(k+1))

\displaystyle + \frac{d-k}{d-k+1} \alpha_{k+1}((k+1)(d+1)),

for {1 \leq k \leq d-1}, hence from (28) we have

\displaystyle  \gamma_k \leq \frac{1}{d-k+1} \beta_{k+1} + \frac{d-k}{d-k+1} \gamma_{k+1}. \ \ \ \ \ (30)
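
(The “brief calculation” here is the following interpolation bookkeeping: taking {t_0 = k(k+1)}, {t_1 = (k+1)(d+1)} and {t_\theta = k(d+1)} in (25), the relation {\frac{1}{t_\theta} = \frac{1-\theta}{t_0} + \frac{\theta}{t_1}} becomes, after multiplying through by {k(k+1)(d+1)},

\displaystyle  k+1 = (1-\theta)(d+1) + \theta k,

so that {\theta = \frac{d-k}{d-k+1}} and {1-\theta = \frac{1}{d-k+1}}, which are precisely the weights appearing above.)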

Similarly, from (25) and a brief calculation we have

\displaystyle  \alpha_k(k(k+1)) \leq \frac{(d-k)(k-1)}{(k+1)(d-k+2)} \alpha_k( k(k-1))

\displaystyle  + \frac{2(d+1)}{(k+1)(d-k+2)} \alpha_k(k(d+1))

for {2 \leq k \leq d-1}; the same bound holds for {k=1} if we drop the term with the {(k-1)} factor, thanks to (24). Thus from (29) we have

\displaystyle  \beta_{k+1} \leq \frac{(d-k)(k-1)}{k(d-k+2)} \beta_k + \frac{2(d+1)}{k(d-k+2)} \gamma_k, \ \ \ \ \ (31)

for {1 \leq k \leq d-1}, again with the understanding that we omit the first term on the right-hand side when {k=1}. Finally, (26) gives

\displaystyle  \gamma_d \leq -\eta.
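
(Similarly, the “brief calculation” behind (31) takes {t_0 = k(k-1)}, {t_1 = k(d+1)} and {t_\theta = k(k+1)} in (25); multiplying the relation {\frac{1}{t_\theta} = \frac{1-\theta}{t_0} + \frac{\theta}{t_1}} through by {k} gives

\displaystyle  \frac{1}{k+1} = \frac{1-\theta}{k-1} + \frac{\theta}{d+1},

which is solved by {\theta = \frac{2(d+1)}{(k+1)(d-k+2)}} and {1-\theta = \frac{(d-k)(k-1)}{(k+1)(d-k+2)}}.)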

Let us write out the system of equations we have obtained in full:

\displaystyle  \beta_2 \leq 2 \gamma_1 \ \ \ \ \ (32)

\displaystyle  \gamma_1 \leq \frac{1}{d} \beta_2 + \frac{d-1}{d} \gamma_2 \ \ \ \ \ (33)

\displaystyle  \beta_3 \leq \frac{d-2}{2d} \beta_2 + \frac{2(d+1)}{2d} \gamma_2 \ \ \ \ \ (34)

\displaystyle  \gamma_2 \leq \frac{1}{d-1} \beta_3 + \frac{d-2}{d-1} \gamma_3 \ \ \ \ \ (35)

\displaystyle  \beta_4 \leq \frac{2(d-3)}{3(d-1)} \beta_3 + \frac{2(d+1)}{3(d-1)} \gamma_3

\displaystyle  \gamma_3 \leq \frac{1}{d-2} \beta_4 + \frac{d-3}{d-2} \gamma_4

\displaystyle  ...

\displaystyle  \beta_d \leq \frac{d-2}{3(d-1)} \beta_{d-1} + \frac{2(d+1)}{3(d-1)} \gamma_{d-1}

\displaystyle  \gamma_{d-1} \leq \frac{1}{2} \beta_d + \frac{1}{2} \gamma_d \ \ \ \ \ (36)

\displaystyle  \gamma_d \leq -\eta. \ \ \ \ \ (37)

We can then eliminate the variables one by one. Inserting (33) into (32) we obtain

\displaystyle  \beta_2 \leq \frac{2}{d} \beta_2 + \frac{2(d-1)}{d} \gamma_2

which simplifies to

\displaystyle  \beta_2 \leq \frac{2(d-1)}{d-2} \gamma_2.

Inserting this into (34) gives

\displaystyle  \beta_3 \leq 2 \gamma_2

which when combined with (35) gives

\displaystyle  \beta_3 \leq \frac{2}{d-1} \beta_3 + \frac{2(d-2)}{d-1} \gamma_3

which simplifies to

\displaystyle  \beta_3 \leq \frac{2(d-2)}{d-3} \gamma_3.

Iterating this procedure (the key point being the identity {2(k-1)(d-k+1) + 2(d+1) = 2k(d-k+2)}, which makes the coefficients collapse at each step) we get

\displaystyle  \beta_{k+1} \leq 2 \gamma_k

for all {1 \leq k \leq d-1} and

\displaystyle  \beta_k \leq \frac{2(d-k+1)}{d-k} \gamma_k

for all {2 \leq k \leq d-1}. In particular

\displaystyle  \beta_d \leq 2 \gamma_{d-1}

which on insertion into (36), (37) gives

\displaystyle  \beta_d \leq \beta_d - \eta

which is absurd if {\eta>0}. Thus {\eta \leq 0} and so {D_p(N)} must be of subpolynomial growth.
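
As the above elimination shows, once one reaches the finite stage (32)–(37) the problem is pure linear programming, and the contradiction can in principle be certified mechanically, in the spirit of the computer-assisted techniques mentioned earlier in this post. The following is a minimal sketch of such a check (my own illustration, assuming the numpy and scipy libraries): it encodes (30), (31) (which contain (32)–(36) as special cases) together with (37) under the counterfactual normalisation {\eta = 1}, and asks an LP solver to certify that the system is infeasible.

    # Minimal sketch (illustration only): certify that the linear system
    # (30), (31), (37) has no solution when eta > 0 (normalised to eta = 1),
    # so that the exponent eta must be non-positive.
    import numpy as np
    from scipy.optimize import linprog

    def exponent_system_infeasible(d, eta=1.0):
        nb = d - 1                    # variables beta_2, ..., beta_d
        n = nb + d                    # plus gamma_1, ..., gamma_d
        bi = lambda k: k - 2          # column of beta_k  (2 <= k <= d)
        gi = lambda k: nb + k - 1     # column of gamma_k (1 <= k <= d)
        rows, rhs = [], []
        for k in range(1, d):
            # (31): beta_{k+1} <= (d-k)(k-1)/(k(d-k+2)) beta_k
            #                       + 2(d+1)/(k(d-k+2)) gamma_k
            row = np.zeros(n)
            row[bi(k + 1)] = 1.0
            if k >= 2:                # the beta_k term is absent when k = 1
                row[bi(k)] = -(d - k) * (k - 1) / (k * (d - k + 2))
            row[gi(k)] = -2 * (d + 1) / (k * (d - k + 2))
            rows.append(row); rhs.append(0.0)
            # (30): gamma_k <= beta_{k+1}/(d-k+1) + (d-k)/(d-k+1) gamma_{k+1}
            row = np.zeros(n)
            row[gi(k)] = 1.0
            row[bi(k + 1)] = -1.0 / (d - k + 1)
            row[gi(k + 1)] = -(d - k) / (d - k + 1)
            rows.append(row); rhs.append(0.0)
        # (37): gamma_d <= -eta
        row = np.zeros(n)
        row[gi(d)] = 1.0
        rows.append(row); rhs.append(-eta)
        res = linprog(np.zeros(n), A_ub=np.array(rows), b_ub=np.array(rhs),
                      bounds=[(None, None)] * n, method="highs")
        return res.status == 2        # status 2 is scipy's code for "infeasible"

    for d in range(2, 12):
        print(d, exponent_system_infeasible(d))   # True for every d

(By Farkas’ lemma, infeasibility of this system is equivalent to the existence of a non-negative combination of the inequalities yielding {0 \leq -\eta}, which is exactly what the hand elimination above produces.)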

Remark 11 (This observation is essentially due to Heath-Brown.) If we let {x} denote the column vector with entries {\beta_2,\dots,\beta_d,\gamma_1,\dots,\gamma_{d-1}} (arranged in whatever order one pleases), then the above system of inequalities (32)–(36) (using (37) to handle the appearance of {\gamma_d} in (36)) reads

\displaystyle  x \leq Px + \eta v \ \ \ \ \ (38)

for some explicit square matrix {P} with non-negative coefficients, where the inequality denotes pointwise domination, and {v} is an explicit vector with non-positive coefficients that reflects the effect of (37). It is possible to show (using (24), (26)) that all the coefficients of {x} are negative (assuming the counterfactual situation {\eta>0} of course). Then we can iterate this to obtain

\displaystyle  x \leq P^k x + \eta \sum_{j=0}^{k-1} P^j v

for any natural number {k}. This would lead to an immediate contradiction if the Perron-Frobenius eigenvalue of {P} exceeds {1}, because {P^k x} would then grow exponentially; this is typically the situation for “non-endpoint” applications such as proving decoupling inequalities away from the endpoint. In the endpoint situation discussed above, the Perron-Frobenius eigenvalue is {1}, with {v} having a non-trivial projection onto the corresponding eigenspace, so the sum {\eta \sum_{j=0}^{k-1} P^j v} now grows at least linearly in magnitude (in the negative direction), which still gives the required contradiction for any {\eta>0}. So it is important to gather “enough” inequalities so that the relevant matrix {P} has a Perron-Frobenius eigenvalue greater than or equal to {1} (and in the latter case one also needs the vector {v}, which encodes the induction hypothesis, to have a non-trivial projection onto the eigenspace associated to the eigenvalue {1}). More specifically, if {\rho} is the spectral radius of {P} and {w^T} is a left Perron-Frobenius eigenvector, that is to say a non-negative vector, not identically zero, such that {w^T P = \rho w^T}, then by taking inner products of (38) with {w} we obtain

\displaystyle  w^T x \leq \rho w^T x + \eta w^T v.

If {\rho > 1} this leads to a contradiction, since rearranging gives {(1-\rho) w^T x \leq \eta w^T v}, where the left-hand side is positive (as {w^T x} is negative) and the right-hand side is non-positive. When {\rho = 1} one still gets a contradiction as long as {w^T v} is strictly negative.
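
As a quick numerical illustration of this (my own sketch, assuming the numpy library; the ordering of the entries of {x}, and hence of {P}, is of course a matter of convention), one can assemble {P} from (32)–(36) for small values of {d} and confirm that its Perron-Frobenius eigenvalue is indeed equal to {1} in this endpoint situation:

    # Sketch (illustration only): build the matrix P of the remark from
    # (32)-(36), with x ordered as (beta_2,...,beta_d, gamma_1,...,gamma_{d-1}),
    # and check that its spectral radius equals 1.
    import numpy as np

    def build_P(d):
        nb = d - 1                    # beta_2, ..., beta_d
        n = nb + (d - 1)              # plus gamma_1, ..., gamma_{d-1}
        bi = lambda k: k - 2          # index of beta_k  (2 <= k <= d)
        gi = lambda k: nb + k - 1     # index of gamma_k (1 <= k <= d-1)
        P = np.zeros((n, n))
        P[bi(2), gi(1)] = 2.0                                  # (32)
        for k in range(2, d):                                  # (31), 2 <= k <= d-1
            P[bi(k + 1), bi(k)] = (d - k) * (k - 1) / (k * (d - k + 2))
            P[bi(k + 1), gi(k)] = 2 * (d + 1) / (k * (d - k + 2))
        for k in range(1, d - 1):                              # (30), 1 <= k <= d-2
            P[gi(k), bi(k + 1)] = 1.0 / (d - k + 1)
            P[gi(k), gi(k + 1)] = (d - k) / (d - k + 1)
        P[gi(d - 1), bi(d)] = 0.5    # (36), with the gamma_d term moved into v
        return P

    for d in range(3, 9):
        rho = max(abs(np.linalg.eigvals(build_P(d))))
        print(d, rho)                 # equals 1 up to rounding error for each d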

Remark 12 (This calculation is essentially due to Guo and Zorin-Kranich.) Here is a concrete application of the Perron-Frobenius strategy outlined above to the system of inequalities (32)–(37). Consider the weighted sum

\displaystyle  W := \sum_{k=2}^d (k-1) \beta_k + \sum_{k=1}^{d-1} 2k \gamma_k;

I had secretly calculated the weights {k-1}, {2k} as coming from the left Perron-Frobenius eigenvector of the matrix {P} described in the previous remark, but for this calculation the precise provenance of the weights is not relevant. Applying the inequalities (31), (30) we see that {W} is bounded by

\displaystyle  \sum_{k=2}^d (k-1) (\frac{(d-k+1)(k-2)}{(k-1)(d-k+3)} \beta_{k-1} + \frac{2(d+1)}{(k-1)(d-k+3)} \gamma_{k-1})

\displaystyle  + \sum_{k=1}^{d-1} 2k(\frac{1}{d-k+1} \beta_{k+1} + \frac{d-k}{d-k+1} \gamma_{k+1})

(with the convention that the {\beta_1} term is absent); this simplifies after some calculation to the bound

\displaystyle  W \leq W + (d-1) \gamma_d

and this and (37) then lead to the required contradiction.
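
The bookkeeping behind “simplifies after some calculation” can also be checked symbolically (again my own sketch, assuming the sympy library): for the interior indices, the coefficient of {\beta_j} collected from the two sums is {\frac{(d-j)(j-1) + 2(j-1)}{d-j+2}} and that of {\gamma_j} is {\frac{2(d+1) + 2(j-1)(d-j+1)}{d-j+2}}, and both collapse back to the original weights {j-1} and {2j}, leaving only the boundary term {(d-1)\gamma_d}.

    # Sketch (illustration only): verify symbolically that the weighted sum W
    # reproduces itself coefficient by coefficient after applying (30), (31).
    from sympy import symbols, simplify

    d, j = symbols('d j')
    beta_coeff = ((d - j) * (j - 1) + 2 * (j - 1)) / (d - j + 2)
    gamma_coeff = (2 * (d + 1) + 2 * (j - 1) * (d - j + 1)) / (d - j + 2)
    print(simplify(beta_coeff - (j - 1)))    # prints 0
    print(simplify(gamma_coeff - 2 * j))     # prints 0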

Exercise 13

  • (i) Extend the above analysis to also cover the non-endpoint case {d^2 < p < d(d+1)}. (One will need to establish the claim {\alpha_k(t) \leq -\eta} for {t \leq p}.)
  • (ii) Modify the argument to deal with the remaining cases {2 < p \leq d^2} by dropping some of the steps.
