You are currently browsing the category archive for the ‘expository’ category.

At the most recent MSRI board of trustees meeting on Mar 7 (conducted online, naturally), Nicholas Jewell (a Professor of Biostatistics and Statistics at Berkeley, also affiliated with the Berkeley School of Public Health and the London School of Hygiene and Tropical Medicine), gave a presentation on the current coronavirus epidemic entitled “2019-2020 Novel Coronavirus outbreak: mathematics of epidemics, and what it can and cannot tell us”.  The presentation (updated with Mar 18 data), hosted by David Eisenbud (the director of MSRI), together with a question and answer session, is now on Youtube:

(I am on this board, but could not make it to this particular meeting; I caught up on the presentation later, and thought it would be of interest to several readers of this blog.)  While there is some mathematics in the presentation, it is relatively non-technical.

In the modern theory of additive combinatorics, a large role is played by the Gowers uniformity norms {\|f\|_{U^k(G)}}, where {k \geq 1}, {G = (G,+)} is a finite abelian group, and {f: G \rightarrow {\bf C}} is a function (one can also consider these norms in finite approximate groups such as {[N] = \{1,\dots,N\}} instead of finite groups, but we will focus on the group case here for simplicity). These norms can be defined by the formula

\displaystyle \|f\|_{U^k(G)} := (\mathop{\bf E}_{x,h_1,\dots,h_k \in G} \Delta_{h_1} \dots \Delta_{h_k} f(x))^{1/2^k}

where we use the averaging notation

\displaystyle \mathop{\bf E}_{x \in A} f(x) := \frac{1}{|A|} \sum_{x \in A} f(x)

for any non-empty finite set {A} (with {|A|} denoting the cardinality of {A}), and {\Delta_h} is the multiplicative discrete derivative operator

\displaystyle \Delta_h f(x) := f(x+h) \overline{f(x)}.
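
As a concrete illustration, here is a quick Python sketch (illustrative only; the brute-force implementation and the test case are arbitrary choices of mine) that evaluates {\|f\|_{U^k({\bf Z}/N{\bf Z})}} directly from this definition. A quadratic phase is {U^2}-uniform but has full {U^3} norm:

```python
import numpy as np
from itertools import product

def gowers_norm(f, k):
    """Brute-force ||f||_{U^k(Z/NZ)}: average Delta_{h_1}...Delta_{h_k} f(x)
    over all x, h_1, ..., h_k in Z/NZ and take the 2^k-th root."""
    N = len(f)
    total = 0.0 + 0.0j
    for x, *hs in product(range(N), repeat=k + 1):
        val = 1.0 + 0.0j
        # Delta_{h_1}...Delta_{h_k} f(x) expands as a product of 2^k shifted
        # copies of f, conjugated according to the parity of k - |eps|
        for eps in product((0, 1), repeat=k):
            y = (x + sum(h * e for h, e in zip(hs, eps))) % N
            term = f[y]
            if (k - sum(eps)) % 2 == 1:
                term = np.conj(term)
            val *= term
        total += val
    return (total.real / N ** (k + 1)) ** (1.0 / 2 ** k)

N = 17
f = np.exp(2j * np.pi * np.arange(N) ** 2 / N)  # quadratic phase, 1-bounded
print(gowers_norm(f, 2))  # ~0.49 = N^{-1/4}: the quadratic phase is U^2-uniform
print(gowers_norm(f, 3))  # 1.0: but it is maximally non-uniform in U^3
```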

One reason why these norms play an important role is that they control various multilinear averages. We give two representative examples here:

Proposition 1 Let {G = {\bf Z}/N{\bf Z}}.

  • (i) If {a_1,\dots,a_k} are distinct elements of {G} for some {k \geq 2}, and {f_1,\dots,f_k: G \rightarrow {\bf C}} are {1}-bounded functions (thus {|f_j(x)| \leq 1} for all {j=1,\dots,k} and {x \in G}), then

    \displaystyle |\mathop{\bf E}_{x, h \in G} f_1(x+a_1 h) \dots f_k(x+a_k h)| \leq \|f_i\|_{U^{k-1}(G)} \ \ \ \ \ (1)

     

    for any {i=1,\dots,k}.

  • (ii) If {f_1,f_2,f_3: G \rightarrow {\bf C}} are {1}-bounded, then one has

    \displaystyle |\mathop{\bf E}_{x, h \in G} f_1(x) f_2(x+h) f_3(x+h^2)| \ll \|f_3\|_{U^4(G)} + N^{-1/4}.

We establish these claims a little later in this post.

In some more recent literature (e.g., this paper of Conlon, Fox, and Zhao), the role of Gowers norms has been replaced by (generalisations of) the cut norm, a concept originating from graph theory. In this blog post, it will be convenient to define these cut norms in the language of probability theory (using boldface to denote random variables).

Definition 2 (Cut norm) Let {{\bf X}_1,\dots,{\bf X}_k, {\bf Y}_1,\dots,{\bf Y}_l} be independent random variables with {k,l \geq 0}; to avoid minor technicalities we assume that these random variables are discrete and take values in a finite set. Given a random variable {{\bf F} = F( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )} that is a function of these independent random variables, we define the cut norm

\displaystyle \| {\bf F} \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )} := \sup | \mathop{\bf E} {\bf F} {\bf B}_1 \dots {\bf B}_k |

where the supremum ranges over all choices {{\bf B}_1,\dots,{\bf B}_k} of random variables {{\bf B}_i = B_i( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )} that are {1}-bounded (thus {|{\bf B}_i| \leq 1} surely), and such that {{\bf B}_i} does not depend on {{\bf X}_i}.

If {l=0}, we abbreviate {\| {\bf F} \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )}} as {\| {\bf F} \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_k )}}.

Strictly speaking, the cut norm is only a cut semi-norm when {k=0,1}, but we will abuse notation by referring to it as a norm nevertheless.

Example 3 If {G = (V_1,V_2,E)} is a bipartite graph, and {\mathbf{v_1}}, {\mathbf{v_2}} are independent random variables chosen uniformly from {V_1,V_2} respectively, then

\displaystyle \| 1_E(\mathbf{v_1},\mathbf{v_2}) \|_{\mathrm{CUT}(\mathbf{v_1}, \mathbf{v_2})}

\displaystyle = \sup_{\|f\|_\infty, \|g\|_\infty \leq 1} |\mathop{\bf E}_{v_1 \in V_1, v_2 \in V_2} 1_E(v_1,v_2) f(v_1) g(v_2)|

where the supremum ranges over all {1}-bounded functions {f: V_1 \rightarrow [-1,1]}, {g: V_2 \rightarrow [-1,1]}. The right-hand side is essentially the cut norm of the graph {G}, as defined for instance by Frieze and Kannan.
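
For small graphs this quantity can be computed by brute force. Here is a quick Python sketch (illustrative only; the function name is mine), restricting to sign functions {f,g \in \{-1,+1\}}, which attain the supremum over {[-1,1]}-valued functions since the objective is bilinear in {f} and {g}:

```python
import numpy as np
from itertools import product

def cut_norm_bipartite(adj):
    """Brute-force the (real) cut norm of a small bipartite graph:
    sup over f: V1 -> [-1,1], g: V2 -> [-1,1] of |E_{v1,v2} 1_E(v1,v2) f(v1) g(v2)|.
    By bilinearity the supremum is attained at sign vectors f, g in {-1,+1}."""
    m, n = adj.shape
    best = 0.0
    for f in product((-1, 1), repeat=m):
        for g in product((-1, 1), repeat=n):
            best = max(best, abs(np.array(f) @ adj @ np.array(g)) / (m * n))
    return best

rng = np.random.default_rng(0)
adj = (rng.random((5, 5)) < 0.5).astype(float)  # adjacency of a random bipartite graph
print(cut_norm_bipartite(adj))
```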

The cut norm is basically an expectation when {k=0,1}:

Example 4 If {k=0}, we see from the definition that

\displaystyle \| {\bf F} \|_{\mathrm{CUT}( ; {\bf Y}_1,\dots,{\bf Y}_l )} =| \mathop{\bf E} {\bf F} |.

If {k=1}, one easily checks that

\displaystyle \| {\bf F} \|_{\mathrm{CUT}( {\bf X}; {\bf Y}_1,\dots,{\bf Y}_l )} = \mathop{\bf E} | \mathop{\bf E}_{\bf X} {\bf F} |,

where {\mathop{\bf E}_{\bf X} {\bf F} = \mathop{\bf E}( {\bf F} | {\bf Y}_1,\dots,{\bf Y}_l )} is the conditional expectation of {{\bf F}} with respect to the {\sigma}-algebra generated by all the variables other than {{\bf X}}, i.e., the {\sigma}-algebra generated by {{\bf Y}_1,\dots,{\bf Y}_l}. In particular, if {{\bf X}, {\bf Y}_1,\dots,{\bf Y}_l} are independent random variables drawn uniformly from {X,Y_1,\dots,Y_l} respectively, then

\displaystyle \| F( {\bf X}; {\bf Y}_1,\dots, {\bf Y}_l) \|_{\mathrm{CUT}( {\bf X}; {\bf Y}_1,\dots,{\bf Y}_l )}

\displaystyle = \mathop{\bf E}_{y_1 \in Y_1,\dots, y_l \in Y_l} |\mathop{\bf E}_{x \in X} F(x; y_1,\dots,y_l)|.

Here are some basic properties of the cut norm:

Lemma 5 (Basic properties of cut norm) Let {{\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l} be independent discrete random variables, and {{\bf F} = F({\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l)} a function of these variables.

  • (i) (Permutation invariance) The cut norm {\| {\bf F} \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )}} is invariant with respect to permutations of the {{\bf X}_1,\dots,{\bf X}_k}, or permutations of the {{\bf Y}_1,\dots,{\bf Y}_l}.
  • (ii) (Conditioning) One has

    \displaystyle \| {\bf F} \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )} = \mathop{\bf E} \| {\bf F} \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_k )}

    where on the right-hand side we view, for each realisation {y_1,\dots,y_l} of {{\bf Y}_1,\dots,{\bf Y}_l}, {{\bf F}} as a function {F( {\bf X}_1,\dots,{\bf X}_k; y_1,\dots,y_l)} of the random variables {{\bf X}_1,\dots, {\bf X}_k} alone, thus the right-hand side may be expanded as

    \displaystyle \sum_{y_1,\dots,y_l} \| F( {\bf X}_1,\dots,{\bf X}_k; y_1,\dots,y_l) \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_k )}

    \displaystyle \times \mathop{\bf P}( {\bf Y}_1=y_1,\dots,{\bf Y}_l=y_l).

  • (iii) (Monotonicity) If {k \geq 1}, we have

    \displaystyle \| {\bf F} \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )} \geq \| {\bf F} \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_{k-1}; {\bf X}_k, {\bf Y}_1,\dots,{\bf Y}_l )}.

  • (iv) (Multiplicative invariances) If {{\bf B} = B({\bf X}_1,\dots,{\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l)} is a {1}-bounded function that does not depend on one of the {{\bf X}_i}, then

    \displaystyle \| {\bf B} {\bf F} \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )} \leq \| {\bf F} \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )}.

    In particular, if we additionally assume {|{\bf B}|=1}, then

    \displaystyle \| {\bf B} {\bf F} \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )} = \| {\bf F} \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )}.

  • (v) (Cauchy-Schwarz) If {k \geq 1}, one has

    \displaystyle \| {\bf F} \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )} \leq \| \Box_{{\bf X}_1, {\bf X}'_1} {\bf F} \|_{\mathrm{CUT}( {\bf X}_2, \dots, {\bf X}_k; {\bf X}_1, {\bf X}'_1, {\bf Y}_1,\dots,{\bf Y}_l )}^{1/2}

    where {{\bf X}'_1} is a copy of {{\bf X}_1} that is independent of {{\bf X}_1,\dots,{\bf X}_k,{\bf Y}_1,\dots,{\bf Y}_l} and {\Box_{{\bf X}_1, {\bf X}'_1} {\bf F}} is the random variable

    \displaystyle \Box_{{\bf X}_1, {\bf X}'_1} {\bf F} := F( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )

    \displaystyle \times \overline{F}( {\bf X}'_1, {\bf X}_2, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l ).

  • (vi) (Averaging) If {k \geq 1} and {{\bf F} = \mathop{\bf E}_{\bf Z} {\bf F}_{\bf Z}}, where {{\bf Z}} is another random variable independent of {{\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l}, and {{\bf F}_{\bf Z} = F_{\bf Z}( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )} is a random variable depending on both {{\bf Z}} and {{\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l}, then

    \displaystyle \| {\bf F} \|_{\mathrm{CUT}( {\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )} \leq \| {\bf F}_{\bf Z} \|_{\mathrm{CUT}( ({\bf X}_1, {\bf Z}), {\bf X}_2, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l )}.

Proof: The claims (i), (ii) are clear from expanding out all the definitions. The claim (iii) also easily follows from the definitions (the left-hand side involves a supremum over a more general class of multipliers {{\bf B}_1,\dots,{\bf B}_{k}}, while the right-hand side omits the {{\bf B}_k} multiplier), as does (iv) (the multiplier {{\bf B}} can be absorbed into one of the multipliers in the definition of the cut norm). The claim (vi) follows by expanding out the definitions, and observing that all of the terms in the supremum appearing in the left-hand side also appear as terms in the supremum on the right-hand side. It remains to prove (v). By definition, the left-hand side is the supremum over all quantities of the form

\displaystyle |{\bf E} {\bf F} {\bf B}_1 \dots {\bf B}_k|

where the {{\bf B}_i} are {1}-bounded functions of {{\bf X}_1, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l} that do not depend on {{\bf X}_i}. We average out in the {{\bf X}_1} direction (that is, we condition out the variables {{\bf X}_2, \dots, {\bf X}_k; {\bf Y}_1,\dots,{\bf Y}_l}), and pull out the factor {{\bf B}_1} (which does not depend on {{\bf X}_1}), to write this as

\displaystyle |{\bf E} {\bf B}_1 {\bf E}_{{\bf X}_1}( {\bf F} {\bf B}_2 \dots {\bf B}_k )|,

which by Cauchy-Schwarz is bounded by

\displaystyle ({\bf E} |{\bf E}_{{\bf X}_1}( {\bf F} {\bf B}_2 \dots {\bf B}_k )|^2)^{1/2},

which can be expanded using the independent copy {{\bf X}'_1} as

\displaystyle |{\bf E} \Box_{{\bf X}_1,{\bf X}'_1} ({\bf F} {\bf B}_2 \dots {\bf B}_k) |^{1/2}.

Expanding

\displaystyle \Box_{{\bf X}_1,{\bf X}'_1} ({\bf F} {\bf B}_2 \dots {\bf B}_k) = (\Box_{{\bf X}_1,{\bf X}'_1} {\bf F}) (\Box_{{\bf X}_1,{\bf X}'_1} {\bf B}_2) \dots (\Box_{{\bf X}_1,{\bf X}'_1} {\bf B}_k)

and noting that each {\Box_{{\bf X}_1,{\bf X}'_1} {\bf B}_i} is {1}-bounded and independent of {{\bf X}_i} for {i=2,\dots,k}, we obtain the claim. \Box

Now we can relate the cut norm to Gowers uniformity norms:

Lemma 6 Let {G} be a finite abelian group, let {{\bf x}, {\bf h}_1,\dots,{\bf h}_k} be independent random variables uniformly drawn from {G} for some {k \geq 0}, and let {f: G \rightarrow {\bf C}}. Then

\displaystyle \| f({\bf x} + {\bf h}_1 + \dots + {\bf h}_k) \|_{\mathrm{CUT}( {\bf h}_1,\dots,{\bf h}_k, {\bf x} )} \leq \|f\|_{U^{k+1}(G)} \ \ \ \ \ (2)

and similarly (if {k \geq 1})

\displaystyle \| f({\bf x} + {\bf h}_1 + \dots + {\bf h}_k) \|_{\mathrm{CUT}( {\bf h}_1,\dots,{\bf h}_k; {\bf x} )} \leq \|f\|_{U^{k}(G)} \ \ \ \ \ (3)

If {f} is additionally assumed to be {1}-bounded, we have the converse inequalities

\displaystyle \|f\|_{U^{k+1}(G)}^{2^{k+1}} \leq \| f({\bf x} + {\bf h}_1 + \dots + {\bf h}_k) \|_{\mathrm{CUT}( {\bf h}_1,\dots,{\bf h}_k, {\bf x} )} \ \ \ \ \ (4)

and (if {k \geq 1})

\displaystyle \|f\|_{U^{k}(G)}^{2^{k}} \leq \| f({\bf x} + {\bf h}_1 + \dots + {\bf h}_k) \|_{\mathrm{CUT}( {\bf h}_1,\dots,{\bf h}_k; {\bf x} )}. \ \ \ \ \ (5)

 

Proof: Applying Lemma 5(v) {k} times, we can bound

\displaystyle \| f({\bf x} + {\bf h}_1 + \dots + {\bf h}_k) \|_{\mathrm{CUT}( {\bf h_1},\dots,{\bf h_k}, {\bf x} )}

by

\displaystyle \| \Box_{{\bf h}_k,{\bf h}'_k} \dots \Box_{{\bf h}_1,{\bf h}'_1} (f({\bf x} + {\bf h}_1 + \dots + {\bf h}_k)) \|_{\mathrm{CUT}( {\bf x}; {\bf h}_1, {\bf h}'_1, \dots, {\bf h}_k, {\bf h}'_k )}^{1/2^k} \ \ \ \ \ (6)

where {{\bf h}'_1,\dots,{\bf h}'_k} are independent copies of {{\bf h}_1,\dots,{\bf h}_k} that are also independent of {{\bf x}}. The expression inside the norm can also be written as

\displaystyle \Delta_{{\bf h}_k - {\bf h}'_k} \dots \Delta_{{\bf h}_1 - {\bf h}'_1} f({\bf x} + {\bf h}'_1 + \dots + {\bf h}'_k)

so by Example 4 one can write (6) as

\displaystyle |\mathop{\bf E}_{h_1,\dots,h_k,h'_1,\dots,h'_k \in G} |\mathop{\bf E}_{x \in G} \Delta_{h_k - h'_k} \dots \Delta_{h_1 - h'_1} f(x+h'_1+\dots+h'_k)||^{1/2^k}

which after some change of variables simplifies to

\displaystyle |\mathop{\bf E}_{h_1,\dots,h_k \in G} |\mathop{\bf E}_{x \in G} \Delta_{h_k} \dots \Delta_{h_1} f(x)||^{1/2^k}

which by Cauchy-Schwarz is bounded by

\displaystyle |\mathop{\bf E}_{h_1,\dots,h_k \in G} |\mathop{\bf E}_{x \in G} \Delta_{h_k} \dots \Delta_{h_1} f(x)|^2|^{1/2^{k+1}}

which one can rearrange as

\displaystyle |\mathop{\bf E}_{h_1,\dots,h_k,h_{k+1},x \in G} \Delta_{h_{k+1}} \Delta_{h_k} \dots \Delta_{h_1} f(x)|^{1/2^{k+1}}

giving (2). A similar argument bounds

\displaystyle \| f({\bf x} + {\bf h}_1 + \dots + {\bf h}_k) \|_{\mathrm{CUT}( {\bf h_1},\dots,{\bf h_k}; {\bf x} )}

by

\displaystyle |\mathop{\bf E}_{h_1,\dots,h_k \in G} \mathop{\bf E}_{x \in G} \Delta_{h_k} \dots \Delta_{h_1} f(x)|^{1/2^k}

which gives (3).

For (4), we can reverse the above steps and expand {\|f\|_{U^{k+1}(G)}^{2^{k+1}}} as

\displaystyle \mathop{\bf E}_{h_1,\dots,h_k \in G} |\mathop{\bf E}_{x \in G} \Delta_{h_k} \dots \Delta_{h_1} f(x)|^2

which we can write as

\displaystyle |\mathop{\bf E}_{h_1,\dots,h_k \in G} b(h_1,\dots,h_k) \mathop{\bf E}_{x \in G} \Delta_{h_k} \dots \Delta_{h_1} f(x)|

for some {1}-bounded function {b}. This can in turn be expanded as

\displaystyle |\mathop{\bf E}_{h_1,\dots,h_k,x \in G} f(x+h_1+\dots+h_k) b(h_1,\dots,h_k) \prod_{i=1}^k b_i(x,h_1,\dots,h_k)|

for some {1}-bounded functions {b_i} that do not depend on {h_i}. By Example 4, this can be written as

\displaystyle \| f({\bf x} + {\bf h}_1+\dots+{\bf h}_k) b({\bf h}_1,\dots,{\bf h}_k) \prod_{i=1}^k b_i({\bf x},{\bf h}_1,\dots,{\bf h}_k) \|_{\mathrm{CUT}(; {\bf h}_1,\dots,{\bf h}_k, {\bf x})}

which by several applications of Lemma 5(iii) and then Lemma 5(iv) can be bounded by

\displaystyle \| f({\bf x} + {\bf h_1}+\dots+{\bf h}_k) \|_{\mathrm{CUT}( {\bf h}_1,\dots,{\bf h}_k, {\bf x})},

giving (4). A similar argument gives (5). \Box

Now we can prove Proposition 1. We begin with part (i). By permutation we may assume {i=k}, and then by translation we may assume {a_k=0}. Replacing {x} by {x+h_1+\dots+h_{k-1}} and {h} by {h - a_1^{-1} h_1 - \dots - a_{k-1}^{-1} h_{k-1}}, we can write the left-hand side of (1) as

\displaystyle \mathop{\bf E}_{x,h,h_1,\dots,h_{k-1} \in G} f_k(x+h_1+\dots+h_{k-1}) \prod_{i=1}^{k-1} b_i(x,h,h_1,\dots,h_{k-1})

where

\displaystyle b_i(x,h,h_1,\dots,h_{k-1})

\displaystyle := f_i( x + h_1+\dots+h_{k-1}+ a_i(h - a_1^{-1} h_1 - \dots - a_{k-1}^{-1} h_{k-1}))

is a {1}-bounded function that does not depend on {h_i}. Taking {{\bf x}, {\bf h}, {\bf h}_1,\dots,{\bf h}_{k-1}} to be independent random variables drawn uniformly from {G}, the left-hand side of (1) can then be written as

\displaystyle \mathop{\bf E} f_k({\bf x}+{\bf h}_1+\dots+{\bf h}_{k-1}) \prod_{i=1}^{k-1} b_i({\bf x},{\bf h},{\bf h}_1,\dots,{\bf h}_{k-1})

which by Example 4 is bounded in magnitude by

\displaystyle \| f_k({\bf x}+{\bf h}_1+\dots+{\bf h}_{k-1}) \prod_{i=1}^{k-1} b_i({\bf x},{\bf h},{\bf h}_1,\dots,{\bf h}_{k-1}) \|_{\mathrm{CUT}(; {\bf h}_1,\dots,{\bf h}_{k-1}, {\bf x}, {\bf h})}.

After many applications of Lemma 5(iii), (iv), this is bounded by

\displaystyle \| f_k({\bf x}+{\bf h}_1+\dots+{\bf h}_{k-1}) \|_{\mathrm{CUT}({\bf h}_1,\dots,{\bf h}_{k-1}; {\bf x}, {\bf h})}.

By Lemma 5(ii) we may drop the {{\bf h}} variable, and then the claim follows from Lemma 6.

For part (ii), we replace {x} by {x+a-h^2} and {h} by {h-a+b} to write the left-hand side as

\displaystyle \mathop{\bf E}_{x, a,b,h \in G} f_1(x+a-h^2) f_2(x+h+b-h^2) f_3(x+a+(h-a+b)^2-h^2);

the point here is that the first factor does not involve {b}, the second factor does not involve {a}, and the third factor has no quadratic terms in {h}. Letting {{\bf x}, {\bf a}, {\bf b}, {\bf h}} be independent random variables drawn uniformly from {G}, we can use Example 4 to bound this in magnitude by

\displaystyle \| f_1({\bf x}+{\bf a}-{\bf h}^2) f_2({\bf x}+{\bf h}+{\bf b}-{\bf h}^2)

\displaystyle f_3( {\bf x}+{\bf a}+({\bf h}-{\bf a}+{\bf b})^2-{\bf h}^2 ) \|_{\mathrm{CUT}(; {\bf x}, {\bf a}, {\bf b}, {\bf h})}

which by Lemma 5(i),(iii),(iv) is bounded by

\displaystyle \| f_3( {\bf x}+{\bf a}+({\bf h}-{\bf a}+{\bf b})^2 - {\bf h}^2 ) \|_{\mathrm{CUT}({\bf a}, {\bf b}; {\bf x}, {\bf h})}

and then by Lemma 5(v) we may bound this by

\displaystyle \| \Box_{{\bf a}, {\bf a}'} \Box_{{\bf b}, {\bf b}'} f_3( {\bf x}+{\bf a}+({\bf h}-{\bf a}+{\bf b})^2 - {\bf h}^2 ) \|_{\mathrm{CUT}(;{\bf a}, {\bf a}', {\bf b}, {\bf b}', {\bf x}, {\bf h})}^{1/4}

which by Example 4 is

\displaystyle |\mathop{\bf E} \Box_{{\bf a}, {\bf a}'} \Box_{{\bf b}, {\bf b}'} f_3( {\bf x}+{\bf a}+({\bf h}-{\bf a}+{\bf b})^2 - {\bf h}^2 )|^{1/4}.

Now the expression inside the expectation is the product of four factors, each of which is {f_3} or {\overline{f}_3} applied to an affine form {{\bf x} + {\bf c} + {\bf d} {\bf h}}, where {{\bf c}} depends on {{\bf a}, {\bf a}', {\bf b}, {\bf b}'} and the coefficient {{\bf d}} is one of {2({\bf b}-{\bf a})}, {2({\bf b}'-{\bf a})}, {2({\bf b}-{\bf a}')}, {2({\bf b}'-{\bf a}')}. With probability {1-O(1/N)}, the four values of {{\bf d}} are distinct, and then by part (i) (together with the monotonicity {\|f\|_{U^3(G)} \leq \|f\|_{U^4(G)}} of the Gowers norms) we have

\displaystyle |\mathop{\bf E}(\Box_{{\bf a}, {\bf a}'} \Box_{{\bf b}, {\bf b}'} f_3( {\bf x}+{\bf a}+({\bf h}-{\bf a}+{\bf b})^2 - {\bf h}^2 )|{\bf a}, {\bf a}', {\bf b}, {\bf b}')| \leq \|f_3\|_{U^4({\bf Z}/N{\bf Z})}.

When they are not distinct, we can instead bound this quantity by {1}. Taking expectations in {{\bf a}, {\bf a}', {\bf b}, {\bf b}'}, we obtain the claim. \Box

The analogue of the inverse {U^2} theorem for cut norms is the following claim (which I learned from Ben Green):

Lemma 7 ({U^2}-type inverse theorem) Let {\mathbf{x}, \mathbf{h}} be independent random variables drawn uniformly from a finite abelian group {G}, and let {f: G \rightarrow {\bf C}} be {1}-bounded. Then we have

\displaystyle \| f(\mathbf{x} + \mathbf{h}) \|_{\mathrm{CUT}(\mathbf{x}, \mathbf{h})} = \sup_{\xi \in\hat G} \| f(\mathbf{x}) e(\xi \cdot \mathbf{x}) \|_{\mathrm{CUT}(\mathbf{x})}

where {\hat G} is the group of homomorphisms {\xi: x \mapsto \xi \cdot x} from {G} to {{\bf R}/{\bf Z}}, and {e(\theta) := e^{2\pi i \theta}}.

Proof: Suppose first that {\| f(\mathbf{x} + \mathbf{h}) \|_{\mathrm{CUT}(\mathbf{x}, \mathbf{h})} > \delta} for some {\delta}, then by definition

\displaystyle |\mathop{\bf E}_{x,h \in G} f(x+h) a(x) b(h)| > \delta

for some {1}-bounded {a,b: G \rightarrow {\bf C}}. By Fourier expansion, the left-hand side is also

\displaystyle \sum_{\xi \in \hat G} \hat f(-\xi) \hat a(\xi) \hat b(\xi)

where {\hat f(\xi) := \mathop{\bf E}_{x \in G} f(x) e(-\xi \cdot x)}. From Plancherel’s theorem we have

\displaystyle \sum_{\xi \in \hat G} |\hat a(\xi)|^2, \sum_{\xi \in \hat G} |\hat b(\xi)|^2 \leq 1

hence by Hölder’s inequality one has {|\hat f(-\xi)| > \delta} for some {\xi \in \hat G}, and hence

\displaystyle \sup_{\xi \in\hat G} \| f(\mathbf{x}) e(\xi \cdot \mathbf{x}) \|_{\mathrm{CUT}(\mathbf{x})} > \delta. \ \ \ \ \ (7)

Conversely, suppose (7) holds. Then there is {\xi \in \hat G} such that

\displaystyle \| f(\mathbf{x}) e(\xi \cdot \mathbf{x}) \|_{\mathrm{CUT}(\mathbf{x})} > \delta

which, since {\mathop{\bf E}_{x,h \in G} f(x+h) e(\xi \cdot (x+h)) = \mathop{\bf E}_{x \in G} f(x) e(\xi \cdot x)}, implies via Example 4 that

\displaystyle \| f(\mathbf{x}+\mathbf{h}) e(\xi \cdot (\mathbf{x}+\mathbf{h})) \|_{\mathrm{CUT}(;\mathbf{x}, \mathbf{h})} > \delta.

The term {e(\xi \cdot (\mathbf{x}+\mathbf{h}))} splits into the product of a factor {e(\xi \cdot \mathbf{x})} not depending on {\mathbf{h}}, and a factor {e(\xi \cdot \mathbf{h})} not depending on {\mathbf{x}}. Applying Lemma 5(iii), (iv) we conclude that

\displaystyle \| f(\mathbf{x}+\mathbf{h}) \|_{\mathrm{CUT}(\mathbf{x}, \mathbf{h})} > \delta.

The claim follows. \Box
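
In the model case {G = {\bf Z}/N{\bf Z}}, the right-hand side of Lemma 7 is simply the largest Fourier coefficient of {f}, which can be computed with a fast Fourier transform; a small illustrative sketch (the test function here is an arbitrary choice of mine):

```python
import numpy as np

N = 101
x = np.arange(N)
rng = np.random.default_rng(1)
# a 1-bounded test function: a strong frequency-7 component plus unimodular noise
f = 0.8 * np.exp(2j * np.pi * 7 * x / N) + 0.2 * np.exp(2j * np.pi * rng.random(N))
fhat = np.fft.fft(f) / N   # \hat f(xi) = E_x f(x) e(-xi x / N)
# by Example 4 and Lemma 7, this maximum equals || f(x+h) ||_{CUT(x,h)}
print(np.abs(fhat).max())  # ~0.8, witnessed at xi = 7
```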

The higher order inverse theorems are much less trivial (and the optimal quantitative bounds are not currently known). However, there is a useful degree lowering argument, due to Peluse and Prendiville, that can allow one to lower the order of a uniformity norm in some cases. We give a simple version of this argument here:

Lemma 8 (Degree lowering argument, special case) Let {G} be a finite abelian group, let {Y} be a non-empty finite set, and let {f: G \rightarrow {\bf C}} be a function of the form {f(x) := \mathop{\bf E}_{y \in Y} F_y(x)} for some {1}-bounded functions {F_y: G \rightarrow {\bf C}} indexed by {y \in Y}. Suppose that

\displaystyle \|f\|_{U^k(G)} \geq \delta

for some {k \geq 2} and {0 < \delta \leq 1}. Then one of the following claims holds (with implied constants allowed to depend on {k}):

  • (i) (Degree lowering) one has {\|f\|_{U^{k-1}(G)} \gg \delta^{O(1)}}.
  • (ii) (Non-zero frequency) There exist {h_1,\dots,h_{k-2} \in G} and non-zero {\xi \in \hat G} such that

    \displaystyle |\mathop{\bf E}_{x \in G, y \in Y} \Delta_{h_1} \dots \Delta_{h_{k-2}} F_y(x) e( \xi \cdot x )| \gg \delta^{O(1)}.

There are more sophisticated versions of this argument in which the frequency {\xi} is “minor arc” rather than “zero frequency”, and then the Gowers norms are localised to suitable large arithmetic progressions; this is implicit in the above-mentioned paper of Peluse and Prendiville.

Proof: One can write

\displaystyle \|f\|_{U^k(G)}^{2^k} = \mathop{\bf E}_{h_1,\dots,h_{k-2} \in G} \|\Delta_{h_1} \dots \Delta_{h_{k-2}} f \|_{U^2(G)}^4

and hence we conclude that

\displaystyle \|\Delta_{h_1} \dots \Delta_{h_{k-2}} f \|_{U^2(G)} \gg \delta^{O(1)}

for a set {\Sigma} of tuples {(h_1,\dots,h_{k-2}) \in G^{k-2}} of density {\gg \delta^{O(1)}}. Applying Lemma 6 and Lemma 7, we see that for each such tuple, there exists {\phi(h_1,\dots,h_{k-2}) \in \hat G} such that

\displaystyle \| \Delta_{h_1} \dots \Delta_{h_{k-2}} f({\bf x}) e( \phi(h_1,\dots,h_{k-2}) \cdot {\bf x} ) \|_{\mathrm{CUT}({\bf x})} \gg \delta^{O(1)}, \ \ \ \ \ (8)

where {{\bf x}} is drawn uniformly from {G}.

Let us adopt the convention that {e( \phi(h_1,\dots,h_{k-2}) \cdot {\bf x} ) } vanishes for {(h_1,\dots,h_{k-2})} not in {\Sigma}; then from Lemma 5(ii) we have

\displaystyle \| \Delta_{{\bf h}_1} \dots \Delta_{{\bf h}_{k-2}} f({\bf x}) e( \phi({\bf h}_1,\dots,{\bf h}_{k-2}) \cdot {\bf x} ) \|_{\mathrm{CUT}({\bf x}; {\bf h}_1,\dots, {\bf h}_{k-2})} \gg \delta^{O(1)},

where {{\bf h}_1,\dots,{\bf h}_{k-2}} are independent random variables drawn uniformly from {G} and also independent of {{\bf x}}. By repeated application of Lemma 5(iii) we then have

\displaystyle \| \Delta_{{\bf h}_1} \dots \Delta_{{\bf h}_{k-2}} f({\bf x}) e( \phi({\bf h}_1,\dots,{\bf h}_{k-2}) \cdot {\bf x} ) \|_{\mathrm{CUT}({\bf x},{\bf h}_1,\dots, {\bf h}_{k-2})} \gg \delta^{O(1)}.

Expanding out {\Delta_{{\bf h}_1} \dots \Delta_{{\bf h}_{k-2}} f({\bf x})} and using Lemma 5(iv) repeatedly we conclude that

\displaystyle \| f({\bf x} + {\bf h}_1 + \dots + {\bf h}_{k-2}) e( \phi({\bf h}_1,\dots,{\bf h}_{k-2}) \cdot {\bf x} ) \|_{\mathrm{CUT}({\bf x},{\bf h}_1,\dots, {\bf h}_{k-2})} \gg \delta^{O(1)}.

From the definition of {f} we then have

\displaystyle \| \mathop{\bf E}_{y \in Y} F_y({\bf x} + {\bf h}_1 + \dots + {\bf h}_{k-2}) e( \phi({\bf h}_1,\dots,{\bf h}_{k-2}) \cdot {\bf x} ) \|_{\mathrm{CUT}({\bf x},{\bf h}_1,\dots, {\bf h}_{k-2})}

\displaystyle \gg \delta^{O(1)}.

By Lemma 5(vi), we see that the left-hand side is at most

\displaystyle \| F_{\bf y}({\bf x} + {\bf h}_1 + \dots + {\bf h}_{k-2}) e( \phi({\bf h}_1,\dots,{\bf h}_{k-2}) \cdot {\bf x} ) \|_{\mathrm{CUT}(({\bf x}, {\bf y}),{\bf h}_1,\dots, {\bf h}_{k-2})},

where {{\bf y}} is drawn uniformly from {Y}, independently of {{\bf x}, {\bf h}_1,\dots,{\bf h}_{k-2}}. By repeated application of Lemma 5(i), (v), we conclude that

\displaystyle \| \Box_{{\bf h}_1, {\bf h}'_1} \dots \Box_{{\bf h}_{k-2}, {\bf h}'_{k-2}} (F_{\bf y}({\bf x} + {\bf h}_1 + \dots + {\bf h}_{k-2}) e( \phi({\bf h}_1,\dots,{\bf h}_{k-2}) \cdot {\bf x} )) \|_{\mathrm{CUT}(({\bf x},{\bf y}); {\bf h}_1,{\bf h}'_1,\dots, {\bf h}_{k-2}, {\bf h}'_{k-2})} \gg \delta^{O(1)},

where {{\bf h}'_1,\dots,{\bf h}'_{k-2}} are independent copies of {{\bf h}_1,\dots,{\bf h}_{k-2}} that are also independent of {{\bf x}}, {{\bf y}}. By Lemma 5(ii) and Example 4 we conclude that

\displaystyle |\mathop{\bf E}( \Box_{{\bf h}_1, {\bf h}'_1} \dots \Box_{{\bf h}_{k-2}, {\bf h}'_{k-2}} (F_{\bf y}({\bf x} + {\bf h}_1 + \dots + {\bf h}_{k-2}) e( \phi({\bf h}_1,\dots,{\bf h}_{k-2}) \cdot {\bf x} )) | {\bf h}_1,{\bf h}'_1,\dots, {\bf h}_{k-2}, {\bf h}'_{k-2}) )| \gg \delta^{O(1)} \ \ \ \ \ (9)

with probability {\gg \delta^{O(1)}}.

The left-hand side can be rewritten as

\displaystyle |\mathop{\bf E}_{x \in G, y \in Y} \Delta_{{\bf h}_1 - {\bf h}'_1} \dots \Delta_{{\bf h}_{k-2} - {\bf h}'_{k-2}} F_y( x + {\bf h}'_1 + \dots + {\bf h}'_{k-2})

\displaystyle e( \delta_{{\bf h}_1, {\bf h}'_1} \dots \delta_{{\bf h}_{k-2}, {\bf h}'_{k-2}} \phi({\bf h}_1,\dots,{\bf h}_{k-2}) \cdot x )|

where {\delta_{{\bf h}_1, {\bf h}'_1}} is the additive version of {\Box_{{\bf h}_1, {\bf h}'_1}}, thus

\displaystyle \delta_{{\bf h}_1, {\bf h}'_1} \phi({\bf h}_1,\dots,{\bf h}_{k-2}) := \phi({\bf h}_1,\dots,{\bf h}_{k-2}) - \phi({\bf h}'_1,\dots,{\bf h}_{k-2}).

Translating {x}, we can simplify this a little to

\displaystyle |\mathop{\bf E}_{x \in G, y \in Y} \Delta_{{\bf h}_1 - {\bf h}'_1} \dots \Delta_{{\bf h}_{k-2} - {\bf h}'_{k-2}} F_y( x ) e( \delta_{{\bf h}_1, {\bf h}'_1} \dots \delta_{{\bf h}_{k-2}, {\bf h}'_{k-2}} \phi({\bf h}_1,\dots,{\bf h}_{k-2}) \cdot x )|.

If the frequency {\delta_{{\bf h}_1, {\bf h}'_1} \dots \delta_{{\bf h}_{k-2}, {\bf h}'_{k-2}} \phi({\bf h}_1,\dots,{\bf h}_{k-2})} is ever non-vanishing on the event (9), then conclusion (ii) applies. Thus we may assume that

\displaystyle \delta_{{\bf h}_1, {\bf h}'_1} \dots \delta_{{\bf h}_{k-2}, {\bf h}'_{k-2}} \phi({\bf h}_1,\dots,{\bf h}_{k-2}) = 0

with probability {\gg \delta^{O(1)}}. In particular, by the pigeonhole principle, there exist {h'_1,\dots,h'_{k-2} \in G} such that

\displaystyle \delta_{{\bf h}_1, h'_1} \dots \delta_{{\bf h}_{k-2}, h'_{k-2}} \phi({\bf h}_1,\dots,{\bf h}_{k-2}) = 0

with probability {\gg \delta^{O(1)}}. Expanding this out, we obtain a representation of the form

\displaystyle \phi({\bf h}_1,\dots,{\bf h}_{k-2}) = \sum_{i=1}^{k-2} \phi_i({\bf h}_1,\dots,{\bf h}_{k-2})

holding with probability {\gg \delta^{O(1)}}, where the {\phi_i: G^{k-2} \rightarrow {\bf R}/{\bf Z}} are functions that do not depend on the {i^{th}} coordinate. From (8) we conclude that

\displaystyle \| \Delta_{h_1} \dots \Delta_{h_{k-2}} f({\bf x}) e( \sum_{i=1}^{k-2} \phi_i(h_1,\dots,h_{k-2}) \cdot {\bf x} ) \|_{\mathrm{CUT}({\bf x})} \gg \delta^{O(1)}

for a proportion {\gg \delta^{O(1)}} of the tuples {(h_1,\dots,h_{k-2}) \in G^{k-2}}. Thus by Lemma 5(ii)

\displaystyle \| \Delta_{{\bf h}_1} \dots \Delta_{{\bf h}_{k-2}} f({\bf x}) e( \sum_{i=1}^{k-2} \phi_i({\bf h}_1,\dots,{\bf h}_{k-2}) \cdot {\bf x} ) \|_{\mathrm{CUT}({\bf x}; {\bf h}_1,\dots,{\bf h}_{k-2})} \gg \delta^{O(1)}.

By repeated application of Lemma 5(iii) we then have

\displaystyle \| \Delta_{{\bf h}_1} \dots \Delta_{{\bf h}_{k-2}} f({\bf x}) e( \sum_{i=1}^{k-2} \phi_i({\bf h}_1,\dots,{\bf h}_{k-2}) \cdot {\bf x} ) \|_{\mathrm{CUT}({\bf x}, {\bf h}_1,\dots,{\bf h}_{k-2})} \gg \delta^{O(1)}

and then by repeated application of Lemma 5(iv)

\displaystyle \| f({\bf x} + {\bf h}_1 + \dots + {\bf h}_{k-2}) \|_{\mathrm{CUT}({\bf x}, {\bf h}_1,\dots,{\bf h}_{k-2})} \gg \delta^{O(1)}

and then the conclusion (i) follows from Lemma 6. \Box

As an application of degree lowering, we give an inverse theorem for the average in Proposition 1(ii), first established by Bourgain-Chang and later reproved by Peluse (by different methods from those given here):

Proposition 9 Let {G = {\bf Z}/N{\bf Z}} be a cyclic group of prime order. Suppose that one has {1}-bounded functions {f_1,f_2,f_3: G \rightarrow {\bf C}} such that

\displaystyle |\mathop{\bf E}_{x, h \in G} f_1(x) f_2(x+h) f_3(x+h^2)| \geq \delta \ \ \ \ \ (10)

for some {\delta > 0}. Then either {N \ll \delta^{-O(1)}}, or one has

\displaystyle |\mathop{\bf E}_{x \in G} f_1(x)|, |\mathop{\bf E}_{x \in G} f_2(x)| \gg \delta^{O(1)}.

We remark that a modification of the arguments below also gives {|\mathop{\bf E}_{x \in G} f_3(x)| \gg \delta^{O(1)}}.

Proof: The left-hand side of (10) can be written as

\displaystyle |\mathop{\bf E}_{x \in G} F(x) f_3(x)|

where {F} is the dual function

\displaystyle F(x) := \mathop{\bf E}_{h \in G} f_1(x-h^2) f_2(x-h^2+h).

By Cauchy-Schwarz one thus has

\displaystyle |\mathop{\bf E}_{x \in G} F(x) \overline{F}(x)| \geq \delta^2

and hence by Proposition 1(ii), we either have {N \ll \delta^{-O(1)}} (in which case we are done) or

\displaystyle \|F\|_{U^4(G)} \gg \delta^2.

Writing {F = \mathop{\bf E}_{h \in G} F_h} with {F_h(x) := f_1(x-h^2) f_2(x-h^2+h)}, we conclude from Lemma 8 that either {\|F\|_{U^3(G)} \gg \delta^{O(1)}}, or that

\displaystyle |\mathop{\bf E}_{x,h \in G} \Delta_{h_1} \Delta_{h_2} F_h(x) e(\xi x / N )| \gg \delta^{O(1)}

for some {h_1,h_2 \in G} and non-zero {\xi \in G}. The left-hand side can be rewritten as

\displaystyle |\mathop{\bf E}_{x,h \in G} g_1(x-h^2) g_2(x-h^2+h) e(\xi x/N)|

where {g_1 = \Delta_{h_1} \Delta_{h_2} f_1} and {g_2 = \Delta_{h_1} \Delta_{h_2} f_2}. We can rewrite this in turn as

\displaystyle |\mathop{\bf E}_{x,y \in G} g_1(x) g_2(y) e(\xi (x + (y-x)^2) / N)|

which is bounded by

\displaystyle \| e(\xi({\bf x} + ({\bf y}-{\bf x})^2)/N) \|_{\mathrm{CUT}({\bf x}, {\bf y})}

where {{\bf x}, {\bf y}} are independent random variables drawn uniformly from {G}. Applying Lemma 5(v), we conclude that

\displaystyle \| \Box_{{\bf y}, {\bf y}'} e(\xi({\bf x} + ({\bf y}-{\bf x})^2)/N) \|_{\mathrm{CUT}({\bf x}; {\bf y}, {\bf y}')} \gg \delta^{O(1)}.

However, a routine Gauss sum calculation reveals that the left-hand side is {O(N^{-c})} for some absolute constant {c>0} because {\xi} is non-zero, so that {N \ll \delta^{-O(1)}}. The only remaining case to consider is when

\displaystyle \|F\|_{U^3(G)} \gg \delta^{O(1)}.

Repeating the above arguments we then conclude that

\displaystyle \|F\|_{U^2(G)} \gg \delta^{O(1)},

and then

\displaystyle \|F\|_{U^1(G)} \gg \delta^{O(1)}.

The left-hand side can be computed to equal {|\mathop{\bf E}_{x \in G} f_1(x)| |\mathop{\bf E}_{x \in G} f_2(x)|}, and the claim follows. \Box
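
The “routine Gauss sum calculation” invoked in this proof can also be confirmed numerically: for {N} prime and {\xi} non-zero, only the diagonal {{\bf y} = {\bf y}'} survives, and the cut norm in question is exactly {1/N}. A quick illustrative sketch (the small prime and frequency are arbitrary choices):

```python
import numpy as np

N, xi = 31, 5   # a small prime modulus and a non-zero frequency
x = np.arange(N)
e = lambda t: np.exp(2j * np.pi * t / N)

# || Box_{y,y'} e(xi(x+(y-x)^2)/N) ||_CUT(x; y,y') = E_{y,y'} |E_x e(xi((y-x)^2-(y'-x)^2)/N)|
total = 0.0
for y in range(N):
    for yp in range(N):
        total += abs(e(xi * ((y - x) ** 2 - (yp - x) ** 2)).mean())
print(total / N ** 2, 1 / N)   # both equal 1/N: only the diagonal y = y' survives
```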

This argument was given for the cyclic group setting, but the argument can also be applied to the integers (see Peluse-Prendiville) and can also be used to establish an analogue over the reals (that was first obtained by Bourgain).

Define the Collatz map {\mathrm{Col}: {\bf N}+1 \rightarrow {\bf N}+1} on the natural numbers {{\bf N}+1 = \{1,2,\dots\}} by setting {\mathrm{Col}(N)} to equal {3N+1} when {N} is odd and {N/2} when {N} is even, and let {\mathrm{Col}^{\bf N}(N) := \{ N, \mathrm{Col}(N), \mathrm{Col}^2(N), \dots \}} denote the forward Collatz orbit of {N}. The notorious Collatz conjecture asserts that {1 \in \mathrm{Col}^{\bf N}(N)} for all {N \in {\bf N}+1}. Equivalently, if we define the backwards Collatz orbit {(\mathrm{Col}^{\bf N})^*(N) := \{ M \in {\bf N}+1: N \in \mathrm{Col}^{\bf N}(M) \}} to be all the natural numbers {M} that encounter {N} in their forward Collatz orbit, then the Collatz conjecture asserts that {(\mathrm{Col}^{\bf N})^*(1) = {\bf N}+1}. As a partial result towards this latter statement, Krasikov and Lagarias in 2003 established the bound

\displaystyle \# \{ N \leq x: N \in (\mathrm{Col}^{\bf N})^*(1) \} \gg x^\gamma \ \ \ \ \ (1)

 

for all {x \geq 1} and {\gamma = 0.84}. (This improved upon previous values of {\gamma = 0.81} obtained by Applegate and Lagarias in 1995, {\gamma = 0.65} by Applegate and Lagarias in 1995 by a different method, {\gamma=0.48} by Wirsching in 1993, {\gamma=0.43} by Krasikov in 1989, {\gamma=0.3} by Sander in 1990, and some {\gamma>0} by Crandall in 1978.) This is still the largest value of {\gamma} for which (1) has been established. Of course, the Collatz conjecture would imply that we can take {\gamma} equal to {1}, which is the assertion that a positive density set of natural numbers obeys the Collatz conjecture. This is not yet established, although the results in my previous paper do at least imply that a positive density set of natural numbers iterates to an (explicitly computable) bounded set, so in principle the {\gamma=1} case of (1) could now be verified by an (enormous) finite computation in which one verifies that every number in this explicit bounded set iterates to {1}. In this post I would like to record a possible alternate route to this problem that depends on the distribution of a certain family of random variables that appeared in my previous paper, that I called Syracuse random variables.

Definition 1 (Syracuse random variables) For any natural number {n}, a Syracuse random variable {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} on the cyclic group {{\bf Z}/3^n{\bf Z}} is defined as a random variable of the form

\displaystyle \mathbf{Syrac}({\bf Z}/3^n{\bf Z}) = \sum_{m=1}^n 3^{n-m} 2^{-{\mathbf a}_m-\dots-{\mathbf a}_n} \ \ \ \ \ (2)

 

where {\mathbf{a}_1,\dots,\mathbf{a_n}} are independent copies of a geometric random variable {\mathbf{Geom}(2)} on the natural numbers with mean {2}, thus

\displaystyle \mathop{\bf P}( \mathbf{a}_1=a_1,\dots,\mathbf{a}_n=a_n) = 2^{-a_1-\dots-a_n}

for {a_1,\dots,a_n \in {\bf N}+1}. In (2) the arithmetic is performed in the ring {{\bf Z}/3^n{\bf Z}}.

Thus for instance

\displaystyle \mathbf{Syrac}({\bf Z}/3{\bf Z}) = 2^{-\mathbf{a}_1} \hbox{ mod } 3

\displaystyle \mathbf{Syrac}({\bf Z}/3^2{\bf Z}) = 2^{-\mathbf{a}_1-\mathbf{a}_2} + 3 \times 2^{-\mathbf{a}_2} \hbox{ mod } 3^2

\displaystyle \mathbf{Syrac}({\bf Z}/3^3{\bf Z}) = 2^{-\mathbf{a}_1-\mathbf{a}_2-\mathbf{a}_3} + 3 \times 2^{-\mathbf{a}_2-\mathbf{a}_3} + 3^2 \times 2^{-\mathbf{a}_3} \hbox{ mod } 3^3

and so forth. After reversing the labeling of the {\mathbf{a}_1,\dots,\mathbf{a}_n}, one could also view {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} as the mod {3^n} reduction of a {3}-adic random variable

\displaystyle \mathbf{Syrac}({\bf Z}_3) = \sum_{m=1}^\infty 3^{m-1} 2^{-{\mathbf a}_1-\dots-{\mathbf a}_m}.

The probability density function {b \mapsto \mathbf{P}( \mathbf{Syrac}({\bf Z}/3^n{\bf Z}) = b )} of the Syracuse random variable can be explicitly computed by a recursive formula (see Lemma 1.12 of my previous paper). For instance, when {n=1}, {\mathbf{P}( \mathbf{Syrac}({\bf Z}/3{\bf Z}) = b )} is equal to {0,1/3,2/3} for {b=0,1,2 \hbox{ mod } 3} respectively, while when {n=2}, {\mathbf{P}( \mathbf{Syrac}({\bf Z}/3^2{\bf Z}) = b )} is equal to

\displaystyle 0, \frac{8}{63}, \frac{16}{63}, 0, \frac{11}{63}, \frac{4}{63}, 0, \frac{2}{63}, \frac{22}{63}

when {b=0,\dots,8 \hbox{ mod } 9} respectively.
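
These probabilities can also be computed exactly by a short dynamic program, using the fact that {2} is invertible modulo {3^n} with some multiplicative order {T} (in fact {T = 2 \times 3^{n-1}}), so that each {\mathbf{a}_i} only matters modulo {T}. Here is an illustrative Python sketch (a direct computation of my own, not the recursive formula from the paper):

```python
from fractions import Fraction

def syracuse_distribution(n):
    """Exact law of Syrac(Z/3^n Z) = sum_m 3^(n-m) 2^(-a_m-...-a_n) mod 3^n,
    with a_i i.i.d. Geom(2).  Each a_i only matters mod T, the multiplicative
    order of 2 mod 3^n, and P(a_i = r mod T) = 2^(-r)/(1 - 2^(-T))."""
    mod = 3 ** n
    T, p = 1, 2 % mod            # compute the order T of 2 mod 3^n
    while p != 1:
        p = (p * 2) % mod
        T += 1
    inv2 = pow(2, -1, mod)
    geo = {r: Fraction(2) ** (-r) / (1 - Fraction(2) ** (-T)) for r in range(1, T + 1)}
    # dp[(s, v)] = P(a_m + ... + a_n = s mod T, partial sum so far = v mod 3^n)
    dp = {(0, 0): Fraction(1)}
    for m in range(n, 0, -1):
        new = {}
        for (s, v), prob in dp.items():
            for r, pr in geo.items():
                s2 = (s + r) % T
                v2 = (v + 3 ** (n - m) * pow(inv2, s2, mod)) % mod
                new[(s2, v2)] = new.get((s2, v2), Fraction(0)) + prob * pr
        dp = new
    law = [Fraction(0)] * mod
    for (s, v), prob in dp.items():
        law[v] += prob
    return law

print(syracuse_distribution(2))  # [0, 8/63, 16/63, 0, 11/63, 4/63, 0, 2/63, 22/63]
```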

The relationship of these random variables to the Collatz problem can be explained as follows. Let {2{\bf N}+1 = \{1,3,5,\dots\}} denote the odd natural numbers, and define the Syracuse map {\mathrm{Syr}: 2{\bf N}+1 \rightarrow 2{\bf N}+1} by

\displaystyle \mathrm{Syr}(N) := \frac{3N+1}{2^{\nu_2(3N+1)}}

where the {2}-valuation {\nu_2(3N+1) \in {\bf N}} is the number of times {2} divides {3N+1}. We can define the forward orbit {\mathrm{Syr}^{\bf N}(N)} and backward orbit {(\mathrm{Syr}^{\bf N})^*(N)} of the Syracuse map as before. It is not difficult to then see that the Collatz conjecture is equivalent to the assertion {(\mathrm{Syr}^{\bf N})^*(1) = 2{\bf N}+1}, and that the assertion (1) for a given {\gamma} is equivalent to the assertion

\displaystyle \# \{ N \leq x: N \in (\mathrm{Syr}^{\bf N})^*(1) \} \gg x^\gamma \ \ \ \ \ (3)

 

for all {x \geq 1}, where {N} is now understood to range over odd natural numbers. A brief calculation then shows that for any odd natural number {N} and natural number {n}, one has

\displaystyle \mathrm{Syr}^n(N) = 3^n 2^{-a_1-\dots-a_n} N + \sum_{m=1}^n 3^{n-m} 2^{-a_m-\dots-a_n}

where the natural numbers {a_1,\dots,a_n} are defined by the formula

\displaystyle a_i := \nu_2( 3 \mathrm{Syr}^{i-1}(N) + 1 ),

so in particular

\displaystyle \mathrm{Syr}^n(N) = \sum_{m=1}^n 3^{n-m} 2^{-a_m-\dots-a_n} \hbox{ mod } 3^n.
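
This identity is easy to test numerically; a quick illustrative sketch (the starting point {N=27} and the number of steps {n=5} are arbitrary choices):

```python
def nu2(m):
    """2-adic valuation of m."""
    k = 0
    while m % 2 == 0:
        m //= 2
        k += 1
    return k

def syr(N):
    """Syracuse map: strip all factors of 2 from 3N+1."""
    m = 3 * N + 1
    return m >> nu2(m)

N0, n = 27, 5
a, M = [], N0
for _ in range(n):
    a.append(nu2(3 * M + 1))   # a_i = nu_2(3 Syr^{i-1}(N) + 1)
    M = syr(M)

mod = 3 ** n
inv2 = pow(2, -1, mod)
rhs = sum(3 ** (n - 1 - m) * pow(inv2, sum(a[m:]), mod) for m in range(n)) % mod
print(M % mod, rhs)  # Syr^n(N_0) and the formula agree mod 3^n
```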

Heuristically, one expects the {2}-valuation {a = \nu_2(3N+1)} of {3N+1} for a typical odd number {N} to be approximately distributed according to the geometric distribution {\mathbf{Geom}(2)}, so one therefore expects the residue class {\mathrm{Syr}^n(N) \hbox{ mod } 3^n} to be distributed approximately according to the random variable {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})}.
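
This heuristic is easy to probe empirically (reusing {nu2} from the previous sketch); for odd {N} drawn uniformly up to a large cutoff, the density of {\nu_2(3N+1) = a} is exactly {2^{-a}} for each fixed {a}:

```python
from collections import Counter

counts = Counter(nu2(3 * N + 1) for N in range(1, 400000, 2))  # odd N only
total = sum(counts.values())
for a in range(1, 8):
    print(a, round(counts[a] / total, 4), 2.0 ** -a)  # empirical law vs Geom(2)
```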

The Syracuse random variables {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} will always avoid multiples of three (this reflects the fact that {\mathrm{Syr}(N)} is never a multiple of three), but attains any non-multiple of three in {{\bf Z}/3^n{\bf Z}} with positive probability. For any natural number {n}, set

\displaystyle c_n := \inf_{b \in {\bf Z}/3^n{\bf Z}: 3 \nmid b} \mathbf{P}( \mathbf{Syrac}({\bf Z}/3^n{\bf Z}) = b ).

Equivalently, {c_n} is the greatest quantity for which we have the inequality

\displaystyle \sum_{(a_1,\dots,a_n) \in S_{n,N}} 2^{-a_1-\dots-a_n} \geq c_n \ \ \ \ \ (4)

 

for all integers {N} not divisible by three, where {S_{n,N} \subset ({\bf N}+1)^n} is the set of all tuples {(a_1,\dots,a_n)} for which

\displaystyle N = \sum_{m=1}^n 3^{m-1} 2^{-a_1-\dots-a_m} \hbox{ mod } 3^n.

Thus for instance {c_0=1}, {c_1 = 1/3}, and {c_2 = 2/63}. On the other hand, since all the probabilities {\mathbf{P}( \mathbf{Syrac}({\bf Z}/3^n{\bf Z}) = b)} sum to {1} as {b \in {\bf Z}/3^n{\bf Z}} ranges over the non-multiples of {3}, we have the trivial upper bound

\displaystyle c_n \leq \frac{3}{2} 3^{-n}.
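
Continuing the distribution sketch from earlier, one can compute {c_n} for small {n} and compare it with this trivial bound:

```python
from fractions import Fraction

# c_n = smallest probability that Syrac(Z/3^n Z) assigns to a non-multiple of 3,
# using syracuse_distribution from the sketch above
for n in (1, 2, 3):
    law = syracuse_distribution(n)
    c_n = min(law[b] for b in range(3 ** n) if b % 3 != 0)
    print(n, c_n, Fraction(3, 2) * Fraction(3) ** -n)
```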

There is also an easy submultiplicativity result:

Lemma 2 For any natural numbers {n_1,n_2}, we have

\displaystyle c_{n_1+n_2-1} \geq c_{n_1} c_{n_2}.

Proof: Let {N} be an integer not divisible by {3}; then by (4) we have

\displaystyle \sum_{(a_1,\dots,a_{n_1}) \in S_{n_1,N}} 2^{-a_1-\dots-a_{n_1}} \geq c_{n_1}.

If we let {S'_{n_1,N}} denote the set of tuples {(a_1,\dots,a_{n_1-1})} that can be formed from the tuples in {S_{n_1,N}} by deleting the final component {a_{n_1}} from each tuple, then we have

\displaystyle \sum_{(a_1,\dots,a_{n_1-1}) \in S'_{n_1,N}} 2^{-a_1-\dots-a_{n_1-1}} \geq c_{n_1}. \ \ \ \ \ (5)

 

Next, observe that if {(a_1,\dots,a_{n_1-1}) \in S'_{n_1,N}}, then

\displaystyle N = \sum_{m=1}^{n_1-1} 3^{m-1} 2^{-a_1-\dots-a_m} + 3^{n_1-1} 2^{-a_1-\dots-a_{n_1-1}} M \hbox{ mod } 3^{n_1+n_2-1}

with {M = M_{N,n_1,a_1,\dots,a_{n_1-1}}} an integer not divisible by three. By definition of {S_{n_2,M}} and a relabeling, we then have

\displaystyle M = \sum_{m=1}^{n_2} 3^{m-1} 2^{-a_{n_1}-\dots-a_{m+n_1-1}} \hbox{ mod } 3^{n_2}

for all {(a_{n_1},\dots,a_{n_1+n_2-1}) \in S_{n_2,M}}. For such tuples we then have

\displaystyle N = \sum_{m=1}^{n_1+n_2-1} 3^{m-1} 2^{-a_1-\dots-a_m} \hbox{ mod } 3^{n_1+n_2-1}

so that {(a_1,\dots,a_{n_1+n_2-1}) \in S_{n_1+n_2-1,N}}. Since

\displaystyle \sum_{(a_{n_1},\dots,a_{n_1+n_2-1}) \in S_{n_2,M}} 2^{-a_{n_1}-\dots-a_{n_1+n_2-1}} \geq c_{n_2}

for each {M}, the claim follows. \Box

From this lemma we see that {c_n = 3^{-\beta n + o(n)}} for some absolute constant {\beta \geq 1}. Heuristically, we expect the Syracuse random variables to be approximately equidistributed amongst the non-multiples of three in {{\bf Z}/3^n{\bf Z}} (in Proposition 1.4 of my previous paper I prove a fine scale mixing result that supports this heuristic). As a consequence it is natural to conjecture that {\beta=1}. I cannot prove this, but I can show that this conjecture would imply that we can take the exponent {\gamma} in (1), (3) arbitrarily close to one:

Proposition 3 Suppose that {\beta=1} (that is to say, {c_n = 3^{-n+o(n)}} as {n \rightarrow \infty}). Then

\displaystyle \# \{ N \leq x: N \in (\mathrm{Syr}^{\bf N})^*(1) \} \gg x^{1-o(1)}

as {x \rightarrow \infty}, or equivalently

\displaystyle \# \{ N \leq x: N \in (\mathrm{Col}^{\bf N})^*(1) \} \gg x^{1-o(1)}

as {x \rightarrow \infty}. In other words, (1), (3) hold for all {\gamma < 1}.

I prove this proposition below the fold. A variant of the argument shows that for any value of {\beta}, (1), (3) holds whenever {\gamma < f(\beta)}, where {f: [0,1] \rightarrow [0,1]} is an explicitly computable function with {f(\beta) \rightarrow 1} as {\beta \rightarrow 1}. In principle, one could then improve the Krasikov-Lagarias result {\gamma = 0.84} by getting a sufficiently good upper bound on {\beta}, which is in principle achievable numerically (note for instance that Lemma 2 implies the bound {c_n \leq 3^{-\beta(n-1)}} for any {n}, since {c_{kn-k+1} \geq c_n^k} for any {k}).


Just a brief post to record some notable papers in my fields of interest that appeared on the arXiv recently.

  • “A sharp square function estimate for the cone in {\bf R}^3”, by Larry Guth, Hong Wang, and Ruixiang Zhang.  This paper establishes an optimal (up to epsilon losses) square function estimate for the three-dimensional light cone that was essentially conjectured by Mockenhaupt, Seeger, and Sogge, which has a number of other consequences including Sogge’s local smoothing conjecture for the wave equation in two spatial dimensions, which in turn implies the (already known) Bochner-Riesz, restriction, and Kakeya conjectures in two dimensions.   Interestingly, modern techniques such as polynomial partitioning and decoupling estimates are not used in this argument; instead, the authors mostly rely on an induction on scales argument and Kakeya type estimates.  Many previous authors (including myself) were able to get weaker estimates of this type by an induction on scales method, but there were always significant inefficiencies in doing so; in particular knowing the sharp square function estimate at smaller scales did not imply the sharp square function estimate at the given larger scale.  The authors here get around this issue by finding an even stronger estimate that implies the square function estimate, but behaves significantly better with respect to induction on scales.
  • “On the Chowla and twin primes conjectures over {\mathbb F}_q[T]”, by Will Sawin and Mark Shusterman.  This paper resolves a number of well known open conjectures in analytic number theory, such as the Chowla conjecture and the twin prime conjecture (in the strong form conjectured by Hardy and Littlewood), in the case of function fields whose order q=p^j is fixed (in contrast to a number of existing results in the “large q” limit) but is a prime power with a large exponent j.  The techniques here are orthogonal to those used in recent progress towards the Chowla conjecture over the integers (e.g., in this previous paper of mine); the starting point is an algebraic observation that in certain function fields, the Mobius function behaves like a quadratic Dirichlet character along certain arithmetic progressions.  In principle, this reduces problems such as Chowla’s conjecture to problems about estimating sums of Dirichlet characters, for which more is known; but the task is still far from trivial.
  • “Bounds for sets with no polynomial progressions”, by Sarah Peluse.  This paper can be viewed as part of a larger project to obtain quantitative density Ramsey theorems of Szemeredi type.  For instance, Gowers famously established a relatively good quantitative bound for Szemeredi’s theorem that all dense subsets of integers contain arbitrarily long arithmetic progressions a, a+r, \dots, a+(k-1)r.  The corresponding question for polynomial progressions a+P_1(r), \dots, a+P_k(r) is considered more difficult for a number of reasons.  One of them is that dilation invariance is lost; a dilation of an arithmetic progression is again an arithmetic progression, but a dilation of a polynomial progression will in general not be a polynomial progression with the same polynomials P_1,\dots,P_k.  Another issue is that the ranges of the two parameters a,r are now at different scales.  Peluse gets around these difficulties in the case when all the polynomials P_1,\dots,P_k have distinct degrees, which is in some sense the opposite case to that considered by Gowers (in particular, thanks to a degree lowering argument that is available in this case, she avoids the need to obtain quantitative inverse theorems for high order Gowers norms; such theorems were recently obtained in this integer setting by Manners, but with bounds that are probably not strong enough for Peluse’s purposes).  To resolve the first difficulty one has to make all the estimates rather uniform in the coefficients of the polynomials P_j, so that one can still run a density increment argument efficiently.  To resolve the second difficulty one needs to find a quantitative concatenation theorem for Gowers uniformity norms.  Many of these ideas were developed in previous papers of Peluse and Peluse-Prendiville in simpler settings.
  • “On blow up for the energy super critical defocusing non linear Schrödinger equations”, by Frank Merle, Pierre Raphael, Igor Rodnianski, and Jeremie Szeftel.  This paper (when combined with two companion papers) resolves a long-standing problem as to whether finite time blowup occurs for the defocusing supercritical nonlinear Schrödinger equation (at least in certain dimensions and nonlinearities).  I had a previous paper establishing a result like this if one “cheated” by replacing the nonlinear Schrodinger equation by a system of such equations, but remarkably they are able to tackle the original equation itself without any such cheating.  Given the very analogous situation with Navier-Stokes, where again one can create finite time blowup by “cheating” and modifying the equation, it does raise hope that finite time blowup for the incompressible Navier-Stokes and Euler equations can be established…  In fact the connection may not just be at the level of analogy; a surprising key ingredient in the proofs here is the observation that a certain blowup ansatz for the nonlinear Schrodinger equation is governed by solutions to the (compressible) Euler equation, and finite time blowup examples for the latter can be used to construct finite time blowup examples for the former.

Earlier this month, Hao Huang (who, incidentally, was a graduate student here at UCLA) gave a remarkably short proof of a long-standing conjecture in theoretical computer science known as the sensitivity conjecture. See for instance this blog post of Gil Kalai for further discussion and links to many other online discussions of this result. One formulation of the theorem proved is as follows. Define the {n}-dimensional hypercube graph {Q_n} to be the graph with vertex set {({\bf Z}/2{\bf Z})^n}, and with every vertex {v \in ({\bf Z}/2{\bf Z})^n} joined to the {n} vertices {v + e_1,\dots,v+e_n}, where {e_1,\dots,e_n} is the standard basis of {({\bf Z}/2{\bf Z})^n}.

Theorem 1 (Lower bound on maximum degree of induced subgraphs of hypercube) Let {E} be a set of at least {2^{n-1}+1} vertices in {Q_n}. Then there is a vertex in {E} that is adjacent (in {Q_n}) to at least {\sqrt{n}} other vertices in {E}.

The bound {\sqrt{n}} (or more precisely, {\lceil \sqrt{n} \rceil}) is completely sharp, as shown by Chung, Furedi, Graham, and Seymour; we describe this example below the fold. When combined with earlier reductions of Gotsman-Linial and Nisan-Szegedy, this theorem implies the sensitivity conjecture; we give these reductions below the fold also.

Let {A = (a_{vw})_{v,w \in ({\bf Z}/2{\bf Z})^n}} be the adjacency matrix of {Q_n} (where we index the rows and columns directly by the vertices in {({\bf Z}/2{\bf Z})^n}, rather than selecting some enumeration {1,\dots,2^n}), thus {a_{vw}=1} when {w = v+e_i} for some {i=1,\dots,n}, and {a_{vw}=0} otherwise. The above theorem then asserts that if {E} is a set of at least {2^{n-1}+1} vertices, then the {E \times E} minor {(a_{vw})_{v,w \in E}} of {A} has a row (or column) that contains at least {\sqrt{n}} non-zero entries.
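
Theorem 1 can be checked by brute force for very small {n}; the following illustrative sketch verifies the case {n=3}, where the worst case over all {E} is exactly {2 = \lceil \sqrt{3} \rceil}:

```python
from itertools import combinations, product

n = 3
verts = list(product((0, 1), repeat=n))
# neighbours of v in Q_n: flip one coordinate
nbrs = {v: [tuple((v[j] + (j == i)) % 2 for j in range(n)) for i in range(n)]
        for v in verts}

worst = n
for E in combinations(verts, 2 ** (n - 1) + 1):   # all vertex sets of size 2^{n-1}+1
    S = set(E)
    worst = min(worst, max(sum(w in S for w in nbrs[v]) for v in E))
print(worst)  # 2: every such E has a vertex with at least sqrt(3) neighbours in E
```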

The key step to prove this theorem is the construction of a rather curious variant {\tilde A} of the adjacency matrix {A}:

Proposition 2 There exists a {({\bf Z}/2{\bf Z})^n \times ({\bf Z}/2{\bf Z})^n} matrix {\tilde A = (\tilde a_{vw})_{v,w \in ({\bf Z}/2{\bf Z})^n}} which is entrywise dominated by {A} in the sense that

\displaystyle  |\tilde a_{vw}| \leq a_{vw} \hbox{ for all } v,w \in ({\bf Z}/2{\bf Z})^n \ \ \ \ \ (1)

and such that {\tilde A} has {\sqrt{n}} as an eigenvalue with multiplicity {2^{n-1}}.

Assuming this proposition, the proof of Theorem 1 can now be quickly concluded. If we view {\tilde A} as a linear operator on the {2^n}-dimensional space {\ell^2(({\bf Z}/2{\bf Z})^n)} of functions of {({\bf Z}/2{\bf Z})^n}, then by hypothesis this space has a {2^{n-1}}-dimensional subspace {V} on which {\tilde A} acts by multiplication by {\sqrt{n}}. If {E} is a set of at least {2^{n-1}+1} vertices in {Q_n}, then the space {\ell^2(E)} of functions on {E} has codimension at most {2^{n-1}-1} in {\ell^2(({\bf Z}/2{\bf Z})^n)}, and hence intersects {V} non-trivially. Thus the {E \times E} minor {\tilde A_E} of {\tilde A} also has {\sqrt{n}} as an eigenvalue (this can also be derived from the Cauchy interlacing inequalities), and in particular this minor has operator norm at least {\sqrt{n}}. By Schur’s test, this implies that one of the rows or columns of this matrix has absolute values summing to at least {\sqrt{n}}, giving the claim.

Remark 3 The argument actually gives a strengthening of Theorem 1: there exists a vertex {v_0} of {E} with the property that for every natural number {k}, there are at least {n^{k/2}} paths of length {k} in the restriction {Q_n|_E} of {Q_n} to {E} that start from {v_0}. Indeed, if we let {(u_v)_{v \in E}} be an eigenfunction of {\tilde A_E} on {\ell^2(E)} with eigenvalue {\sqrt{n}}, and let {v_0} be a vertex in {E} that maximises the value of {|u_{v_0}|}, then for any {k} we have that the {v_0} component of {\tilde A_E^k (u_v)_{v \in E}} has magnitude {n^{k/2} |u_{v_0}|}; on the other hand, by the triangle inequality, this component is at most {|u_{v_0}|} times the number of length {k} paths in {Q_n|_E} starting from {v_0}, giving the claim.

This argument can be viewed as an instance of a more general “interlacing method” to try to control the behaviour of a graph {G} on all large subsets {E} by first generating a matrix {\tilde A} on {G} with very good spectral properties, which are then partially inherited by the {E \times E} minor of {\tilde A} by interlacing inequalities. In previous literature using this method (see e.g., this survey of Haemers, or this paper of Wilson), either the original adjacency matrix {A}, or some non-negatively weighted version of that matrix, was used as the controlling matrix {\tilde A}; the novelty here is the use of signed controlling matrices. It will be interesting to see what further variants and applications of this method emerge in the near future. (Thanks to Anurag Bishoi in the comments for these references.)

The “magic” step in the above argument is constructing {\tilde A}. In Huang’s paper, {\tilde A} is constructed recursively in the dimension {n} in a rather simple but mysterious fashion. Very recently, Roman Karasev gave an interpretation of this matrix in terms of the exterior algebra on {{\bf R}^n}. In this post I would like to give an alternate interpretation in terms of the operation of twisted convolution, which originated in the theory of the Heisenberg group in quantum mechanics.

Firstly note that the original adjacency matrix {A}, when viewed as a linear operator on {\ell^2(({\bf Z}/2{\bf Z})^n)}, is a convolution operator

\displaystyle  A f = f * \mu

where

\displaystyle \mu(x) := \sum_{i=1}^n 1_{x=e_i}

is the counting measure on the standard basis {e_1,\dots,e_n}, and {*} denotes the ordinary convolution operation

\displaystyle  f * g(x) := \sum_{y \in ({\bf Z}/2{\bf Z})^n} f(y) g(x-y) = \sum_{y_1+y_2 = x} f(y_1) g(y_2).

As is well known, this operation is commutative and associative. Thus for instance the square {A^2} of the adjacency operator {A} is also a convolution operator

\displaystyle  A^2 f = f * (\mu * \mu)

where the convolution kernel {\mu * \mu} is moderately complicated:

\displaystyle  \mu*\mu(x) = n \times 1_{x=0} + \sum_{1 \leq i < j \leq n} 2 \times 1_{x = e_i + e_j}.

The factor {2} in this expansion comes from combining the two terms {1_{x=e_i} * 1_{x=e_j}} and {1_{x=e_j} * 1_{x=e_i}}, which both evaluate to {1_{x=e_i+e_j}}.

More generally, given any bilinear form {B: ({\bf Z}/2{\bf Z})^n \times ({\bf Z}/2{\bf Z})^n \rightarrow {\bf Z}/2{\bf Z}}, one can define the twisted convolution

\displaystyle  f *_B g(x) := \sum_{y \in ({\bf Z}/2{\bf Z})^n} (-1)^{B(y,x-y)} f(y) g(x-y)

\displaystyle  = \sum_{y_1+y_2=x} (-1)^{B(y_1,y_2)} f(y_1) g(y_2)

of two functions {f,g \in \ell^2(({\bf Z}/2{\bf Z})^n)}. This operation is no longer commutative (unless {B} is symmetric). However, it remains associative; indeed, one can easily compute that

\displaystyle  (f *_B g) *_B h(x) = f *_B (g *_B h)(x)

\displaystyle = \sum_{y_1+y_2+y_3=x} (-1)^{B(y_1,y_2)+B(y_1,y_3)+B(y_2,y_3)} f(y_1) g(y_2) h(y_3).

In particular, if we define the twisted convolution operator

\displaystyle  A_B f(x) := f *_B \mu(x)

then the square {A_B^2} is also a twisted convolution operator

\displaystyle  A_B^2 f = f *_B (\mu *_B \mu)

and the twisted convolution kernel {\mu *_B \mu} can be computed as

\displaystyle  \mu *_B \mu(x) = (\sum_{i=1}^n (-1)^{B(e_i,e_i)}) 1_{x=0}

\displaystyle + \sum_{1 \leq i < j \leq n} ((-1)^{B(e_i,e_j)} + (-1)^{B(e_j,e_i)}) 1_{x=e_i+e_j}.

For general bilinear forms {B}, this twisted convolution is just as messy as {\mu * \mu} is. But if we take the specific bilinear form

\displaystyle  B(x,y) := \sum_{1 \leq i < j \leq n} x_i y_j \ \ \ \ \ (2)

then {B(e_i,e_i)=0} for {1 \leq i \leq n} and {B(e_i,e_j)=1, B(e_j,e_i)=0} for {1 \leq i < j \leq n}, and the above twisted convolution simplifies to

\displaystyle  \mu *_B \mu(x) = n 1_{x=0}

and now {A_B^2} is very simple:

\displaystyle  A_B^2 f = n f.

Thus the only eigenvalues of {A_B} are {+\sqrt{n}} and {-\sqrt{n}}. The matrix {A_B} is entrywise dominated by {A} in the sense of (1), and in particular has trace zero; thus the {+\sqrt{n}} and {-\sqrt{n}} eigenvalues must occur with equal multiplicity, so in particular the {+\sqrt{n}} eigenvalue occurs with multiplicity {2^{n-1}} since the matrix has dimensions {2^n \times 2^n}. This establishes Proposition 2.
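
Proposition 2, with this choice of {B}, can be confirmed numerically in a few lines; an illustrative sketch for {n=4}:

```python
import numpy as np
from itertools import product

n = 4
verts = list(product((0, 1), repeat=n))
idx = {v: i for i, v in enumerate(verts)}

def B(x, y):
    # B(x,y) = sum_{1 <= i < j <= n} x_i y_j mod 2, as in (2)
    return sum(x[i] * y[j] for i in range(n) for j in range(i + 1, n)) % 2

# (A_B f)(x) = sum_i (-1)^{B(y, e_i)} f(y) with y = x + e_i (characteristic two)
A_B = np.zeros((2 ** n, 2 ** n))
for x in verts:
    for i in range(n):
        e = tuple(int(j == i) for j in range(n))
        y = tuple((a + b) % 2 for a, b in zip(x, e))
        A_B[idx[x], idx[y]] = (-1) ** B(y, e)

print(np.allclose(A_B @ A_B, n * np.eye(2 ** n)))   # True: A_B^2 = n I
eigs = np.linalg.eigvalsh(A_B)
print(np.sum(eigs > 0), np.sum(eigs < 0))           # 8 and 8: multiplicity 2^{n-1} each
```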

Remark 4 Twisted convolution {*_B} is actually just a component of ordinary convolution, but not on the original group {({\bf Z}/2{\bf Z})^n}; instead it relates to convolution on a Heisenberg group extension of this group. More specifically, define the Heisenberg group {H} to be the set of pairs {(x, t) \in ({\bf Z}/2{\bf Z})^n \times ({\bf Z}/2{\bf Z})} with group law

\displaystyle  (x,t) \cdot (y,s) := (x+y, t+s+B(x,y))

and inverse operation

\displaystyle  (x,t)^{-1} = (-x, -t+B(x,x))

(one can dispense with the negative signs here if desired, since we are in characteristic two). Convolution on {H} is defined in the usual manner: one has

\displaystyle  F*G( (x,t) ) := \sum_{(y,s) \in H} F(y,s) G( (y,s)^{-1} (x,t) )

for any {F,G \in \ell^2(H)}. Now if {f \in \ell^2(({\bf Z}/2{\bf Z})^n)} is a function on the original group {({\bf Z}/2{\bf Z})^n}, we can define the lift {\tilde f \in \ell^2(H)} by the formula

\displaystyle  \tilde f(x,t) := (-1)^t f(x)

and then by chasing all the definitions one soon verifies that

\displaystyle  \tilde f * \tilde g = 2 \widetilde{f *_B g}

for any {f,g \in \ell^2(({\bf Z}/2{\bf Z})^n)}, thus relating twisted convolution {*_B} to Heisenberg group convolution {*}.
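
Chasing these definitions numerically is straightforward; the following illustrative sketch checks the identity {\tilde f * \tilde g = 2 \widetilde{f *_B g}} at a sample point of {H} for {n=3}:

```python
import numpy as np
from itertools import product

n = 3
V = list(product((0, 1), repeat=n))
add = lambda x, y: tuple((a + b) % 2 for a, b in zip(x, y))

def B(x, y):
    return sum(x[i] * y[j] for i in range(n) for j in range(i + 1, n)) % 2

rng = np.random.default_rng(0)
f = {v: rng.standard_normal() for v in V}
g = {v: rng.standard_normal() for v in V}

def twisted(f, g, x):
    """(f *_B g)(x) = sum_y (-1)^{B(y, x-y)} f(y) g(x-y), with x - y = x + y."""
    return sum((-1) ** B(y, add(x, y)) * f[y] * g[add(x, y)] for y in V)

def lifted_conv(x, t):
    """(f~ * g~)((x,t)) on H, where f~(y,s) = (-1)^s f(y) is the lift."""
    total = 0.0
    for y in V:
        for s in (0, 1):
            # (y,s)^{-1} = (y, s + B(y,y)) and (y,s')(x,t) = (y+x, s'+t+B(y,x)) in char 2
            u = (s + B(y, y) + t + B(y, x)) % 2
            total += (-1) ** s * f[y] * (-1) ** u * g[add(y, x)]
    return total

x0, t0 = V[5], 1
print(lifted_conv(x0, t0), 2 * (-1) ** t0 * twisted(f, g, x0))  # equal
```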

Remark 5 With the twisting by the specific bilinear form {B} given by (2), convolution by {1_{x=e_i}} and {1_{x=e_j}} now anticommute rather than commute. This makes the twisted convolution algebra {(\ell^2(({\bf Z}/2{\bf Z})^n), *_B)} isomorphic to a Clifford algebra {Cl({\bf R}^n,I_n)} (the real or complex algebra generated by formal generators {v_1,\dots,v_n} subject to the relations {(v_iv_j+v_jv_i)/2 = 1_{i=j}} for {i,j=1,\dots,n}) rather than the commutative algebra more familiar to abelian Fourier analysis. This connection to Clifford algebra (also observed independently by Tom Mrowka and by Daniel Matthews) may be linked to the exterior algebra interpretation of the argument in the recent preprint of Karasev mentioned above.

Remark 6 One could replace the form (2) in this argument by any other bilinear form {B'} that obeyed the relations {B'(e_i,e_i)=0} and {B'(e_i,e_j) + B'(e_j,e_i)=1} for {i \neq j}. However, this additional level of generality does not add much; any such {B'} will differ from {B} by an antisymmetric form {C} (so that {C(x,x) = 0} for all {x}, which in characteristic two implies that {C(x,y) = C(y,x)} for all {x,y}), and such forms can always be decomposed as {C(x,y) = C'(x,y) + C'(y,x)}, where {C'(x,y) := \sum_{i<j} C(e_i,e_j) x_i y_j}. As such, the matrices {A_B} and {A_{B'}} are conjugate, with the conjugation operator being the diagonal matrix with entries {(-1)^{C'(x,x)}} at each vertex {x}.

Remark 7 (Added later) This remark combines the two previous remarks. One can view any of the matrices {A_{B'}} in Remark 6 as components of a single canonical matrix {A_{Cl}} that is still of dimensions {({\bf Z}/2{\bf Z})^n \times ({\bf Z}/2{\bf Z})^n}, but takes values in the Clifford algebra {Cl({\bf R}^n,I_n)} from Remark 5; with this “universal algebra” perspective, one no longer needs to make any arbitrary choices of form {B}. More precisely, let {\ell^2( ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n))} denote the vector space of functions {f: ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n)} from the hypercube to the Clifford algebra; as a real vector space, this is a {2^{2n}} dimensional space, isomorphic to the direct sum of {2^n} copies of {\ell^2(({\bf Z}/2{\bf Z})^n)}, as the Clifford algebra is itself {2^n} dimensional. One can then define a canonical Clifford adjacency operator {A_{Cl}} on this space by

\displaystyle  A_{Cl} f(x) := \sum_{i=1}^n f(x+e_i) v_i

where {v_1,\dots,v_n} are the generators of {Cl({\bf R}^n,I_n)}. This operator can be identified either with a Clifford-valued {2^n \times 2^n} matrix or with a real-valued {2^{2n} \times 2^{2n}} matrix. In either case one still has the key algebraic relations {A_{Cl}^2 = n} and {\mathrm{tr} A_{Cl} = 0}, ensuring that when viewed as a real {2^{2n} \times 2^{2n}} matrix, half of the eigenvalues are equal to {+\sqrt{n}} and half equal to {-\sqrt{n}}. One can then use this matrix in place of any of the {A_{B'}} to establish Theorem 1 (noting that Schur’s test continues to work for Clifford-valued matrices because of the norm structure on {Cl({\bf R}^n,I_n)}).

To relate {A_{Cl}} to the real {2^n \times 2^n} matrices {A_{B'}}, first observe that each point {x} in the hypercube {({\bf Z}/2{\bf Z})^n} can be associated with a one-dimensional real subspace {\ell_x} (i.e., a line) in the Clifford algebra {Cl({\bf R}^n,I_n)} by the formula

\displaystyle  \ell_{e_{i_1} + \dots + e_{i_k}} := \mathrm{span}_{\bf R}( v_{i_1} \dots v_{i_k} )

for any {i_1,\dots,i_k \in \{1,\dots,n\}} (note that this definition is well-defined even if the {i_1,\dots,i_k} are out of order or contain repetitions). This can be viewed as a discrete line bundle over the hypercube. Since {\ell_{x+e_i} = \ell_x e_i} for any {i}, we see that the {2^n}-dimensional real linear subspace {V} of {\ell^2( ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n))} of sections of this bundle, that is to say the space of functions {f: ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n)} such that {f(x) \in \ell_x} for all {x \in ({\bf Z}/2{\bf Z})^n}, is an invariant subspace of {A_{Cl}}. (Indeed, using the left-action of the Clifford algebra on {\ell^2( ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n))}, which commutes with {A_{Cl}}, one can naturally identify {\ell^2( ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n))} with {Cl({\bf R}^n,I_n) \otimes V}, with the left action of {Cl({\bf R}^n,I_n)} acting purely on the first factor and {A_{Cl}} acting purely on the second factor.) Any trivialisation of this line bundle lets us interpret the restriction {A_{Cl}|_V} of {A_{Cl}} to {V} as a real {2^n \times 2^n} matrix. In particular, given one of the bilinear forms {B'} from Remark 6, we can identify {V} with {\ell^2(({\bf Z}/2{\bf Z})^n)} by identifying any real function {f \in \ell^2( ({\bf Z}/2{\bf Z})^n)} with the lift {\tilde f \in V} defined by

\displaystyle  \tilde f(e_{i_1} + \dots + e_{i_k}) := (-1)^{\sum_{1 \leq j < j' \leq k} B'(e_{i_j}, e_{i_{j'}})}

\displaystyle f(e_{i_1} + \dots + e_{i_k}) v_{i_1} \dots v_{i_k}

whenever {1 \leq i_1 < \dots < i_k \leq n}. A somewhat tedious computation using the properties of {B'} then eventually gives the intertwining identity

\displaystyle  A_{Cl} \tilde f = \widetilde{A_{B'} f}

and so {A_{B'}} is conjugate to {A_{Cl}|_V}.


Let {\Omega} be some domain (such as the real numbers). For any natural number {p}, let {L(\Omega^p)_{sym}} denote the space of symmetric real-valued functions {F^{(p)}: \Omega^p \rightarrow {\bf R}} on {p} variables {x_1,\dots,x_p \in \Omega}, thus

\displaystyle F^{(p)}(x_{\sigma(1)},\dots,x_{\sigma(p)}) = F^{(p)}(x_1,\dots,x_p)

for any permutation {\sigma: \{1,\dots,p\} \rightarrow \{1,\dots,p\}}. For instance, for any natural numbers {k,p}, the elementary symmetric polynomials

\displaystyle e_k^{(p)}(x_1,\dots,x_p) = \sum_{1 \leq i_1 < i_2 < \dots < i_k \leq p} x_{i_1} \dots x_{i_k}

will be an element of {L({\bf R}^p)_{sym}}. With the pointwise product operation, {L(\Omega^p)_{sym}} becomes a commutative real algebra. We include the case {p=0}, in which case {L(\Omega^0)_{sym}} consists solely of the real constants.

Given two natural numbers {k,p}, one can “lift” a symmetric function {F^{(k)} \in L(\Omega^k)_{sym}} of {k} variables to a symmetric function {[F^{(k)}]_{k \rightarrow p} \in L(\Omega^p)_{sym}} of {p} variables by the formula

\displaystyle [F^{(k)}]_{k \rightarrow p}(x_1,\dots,x_p) = \sum_{1 \leq i_1 < i_2 < \dots < i_k \leq p} F^{(k)}(x_{i_1}, \dots, x_{i_k})

\displaystyle = \frac{1}{k!} \sum_\pi F^{(k)}( x_{\pi(1)}, \dots, x_{\pi(k)} )

where {\pi} ranges over all injections from {\{1,\dots,k\}} to {\{1,\dots,p\}} (the latter formula making it clearer that {[F^{(k)}]_{k \rightarrow p}} is symmetric). Thus for instance

\displaystyle [F^{(1)}(x_1)]_{1 \rightarrow p} = \sum_{i=1}^p F^{(1)}(x_i)

\displaystyle [F^{(2)}(x_1,x_2)]_{2 \rightarrow p} = \sum_{1 \leq i < j \leq p} F^{(2)}(x_i,x_j)

and

\displaystyle e_k^{(p)}(x_1,\dots,x_p) = [x_1 \dots x_k]_{k \rightarrow p}.

Also we have

\displaystyle [1]_{k \rightarrow p} = \binom{p}{k} = \frac{p(p-1)\dots(p-k+1)}{k!}.

With these conventions, we see that {[F^{(k)}]_{k \rightarrow p}} vanishes for {p=0,\dots,k-1}, and is equal to {F^{(k)}} if {k=p}. We also have the transitivity

\displaystyle [F^{(k)}]_{k \rightarrow p} = \frac{1}{\binom{p-k}{p-l}} [[F^{(k)}]_{k \rightarrow l}]_{l \rightarrow p}

if {k \leq l \leq p}.
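
The lift is just a sum over {k}-element subsets, so it is straightforward to implement; the following short Python sketch (helper names mine) checks the transitivity identity above on random inputs:

```python
from itertools import combinations
from math import comb
import random

def lift(F, k, p):
    # lift a symmetric function F of k variables to p variables, as above
    def lifted(*xs):
        return sum(F(*(xs[i] for i in I)) for I in combinations(range(p), k))
    return lifted

k, l, p = 2, 3, 5
F = lambda x, y: x * y + x + y          # a symmetric function of two variables
xs = [random.random() for _ in range(p)]

direct = lift(F, k, p)(*xs)
iterated = lift(lift(F, k, l), l, p)(*xs)
# [[F]_{k->l}]_{l->p} = binom(p-k, p-l) [F]_{k->p}
assert abs(iterated - comb(p - k, p - l) * direct) < 1e-9
```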

The lifting map {[]_{k \rightarrow p}} is a linear map from {L(\Omega^k)_{sym}} to {L(\Omega^p)_{sym}}, but it is not a ring homomorphism. For instance, when {\Omega={\bf R}}, one has

\displaystyle [x_1]_{1 \rightarrow p} [x_1]_{1 \rightarrow p} = (\sum_{i=1}^p x_i)^2 \ \ \ \ \ (1)

 

\displaystyle = \sum_{i=1}^p x_i^2 + 2 \sum_{1 \leq i < j \leq p} x_i x_j

\displaystyle = [x_1^2]_{1 \rightarrow p} + 2 [x_1 x_2]_{1 \rightarrow p}

\displaystyle \neq [x_1^2]_{1 \rightarrow p}.

In general, one has the identity

\displaystyle [F^{(k)}(x_1,\dots,x_k)]_{k \rightarrow p} [G^{(l)}(x_1,\dots,x_l)]_{l \rightarrow p} = \sum_{k,l \leq m \leq k+l} \frac{1}{k! l!} \ \ \ \ \ (2)

 

\displaystyle [\sum_{\pi, \rho} F^{(k)}(x_{\pi(1)},\dots,x_{\pi(k)}) G^{(l)}(x_{\rho(1)},\dots,x_{\rho(l)})]_{m \rightarrow p}

for all natural numbers {k,l,p} and {F^{(k)} \in L(\Omega^k)_{sym}}, {G^{(l)} \in L(\Omega^l)_{sym}}, where {\pi, \rho} range over all injections {\pi: \{1,\dots,k\} \rightarrow \{1,\dots,m\}}, {\rho: \{1,\dots,l\} \rightarrow \{1,\dots,m\}} with {\pi(\{1,\dots,k\}) \cup \rho(\{1,\dots,l\}) = \{1,\dots,m\}}. Combinatorially, the identity (2) follows from the fact that given any injections {\tilde \pi: \{1,\dots,k\} \rightarrow \{1,\dots,p\}} and {\tilde \rho: \{1,\dots,l\} \rightarrow \{1,\dots,p\}} with total image {\tilde \pi(\{1,\dots,k\}) \cup \tilde \rho(\{1,\dots,l\})} of cardinality {m}, one has {k,l \leq m \leq k+l}, and furthermore there exist precisely {m!} triples {(\pi, \rho, \sigma)} of injections {\pi: \{1,\dots,k\} \rightarrow \{1,\dots,m\}}, {\rho: \{1,\dots,l\} \rightarrow \{1,\dots,m\}}, {\sigma: \{1,\dots,m\} \rightarrow \{1,\dots,p\}} such that {\tilde \pi = \sigma \circ \pi} and {\tilde \rho = \sigma \circ \rho}.

Example 1 When {\Omega = {\bf R}}, one has

\displaystyle [x_1 x_2]_{2 \rightarrow p} [x_1]_{1 \rightarrow p} = [\frac{1}{2! 1!}( 2 x_1^2 x_2 + 2 x_1 x_2^2 )]_{2 \rightarrow p} + [\frac{1}{2! 1!} 6 x_1 x_2 x_3]_{3 \rightarrow p}

\displaystyle = [x_1^2 x_2 + x_1 x_2^2]_{2 \rightarrow p} + [3x_1 x_2 x_3]_{3 \rightarrow p}

which is just a restatement of the identity

\displaystyle (\sum_{i < j} x_i x_j) (\sum_k x_k) = \sum_{i<j} (x_i^2 x_j + x_i x_j^2) + \sum_{i < j < k} 3 x_i x_j x_k.
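
Identities of this type are easy to confirm with a computer algebra system; for instance, the following sketch (assuming sympy is available) verifies the identity of Example 1 symbolically for {p=4}:

```python
from itertools import combinations
import sympy as sp

p = 4
x = sp.symbols(f'x1:{p + 1}')

e1 = sum(x)                                                   # [x_1]_{1->p}
e2 = sum(x[i] * x[j] for i, j in combinations(range(p), 2))   # [x_1 x_2]_{2->p}
rhs = (sum(x[i] ** 2 * x[j] + x[i] * x[j] ** 2
           for i, j in combinations(range(p), 2))
       + 3 * sum(x[i] * x[j] * x[k] for i, j, k in combinations(range(p), 3)))
assert sp.expand(e2 * e1 - rhs) == 0
```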

Note that the coefficients appearing in (2) do not depend on the final number of variables {p}. We may therefore abstract the role of {p} from the law (2) by introducing the real algebra {L(\Omega^*)_{sym}} of formal sums

\displaystyle F^{(*)} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}

where for each {k}, {F^{(k)}} is an element of {L(\Omega^k)_{sym}} (with only finitely many of the {F^{(k)}} being non-zero), and with the formal symbol {[]_{k \rightarrow *}} being formally linear, thus

\displaystyle [F^{(k)}]_{k \rightarrow *} + [G^{(k)}]_{k \rightarrow *} := [F^{(k)} + G^{(k)}]_{k \rightarrow *}

and

\displaystyle c [F^{(k)}]_{k \rightarrow *} := [cF^{(k)}]_{k \rightarrow *}

for {F^{(k)}, G^{(k)} \in L(\Omega^k)_{sym}} and scalars {c \in {\bf R}}, and with multiplication given by the analogue

\displaystyle [F^{(k)}(x_1,\dots,x_k)]_{k \rightarrow *} [G^{(l)}(x_1,\dots,x_l)]_{l \rightarrow *} = \sum_{k,l \leq m \leq k+l} \frac{1}{k! l!} \ \ \ \ \ (3)

 

\displaystyle [\sum_{\pi, \rho} F^{(k)}(x_{\pi(1)},\dots,x_{\pi(k)}) G^{(l)}(x_{\rho(1)},\dots,x_{\rho(l)})]_{m \rightarrow *}

of (2). Thus for instance, in this algebra {L(\Omega^*)_{sym}} we have

\displaystyle [x_1]_{1 \rightarrow *} [x_1]_{1 \rightarrow *} = [x_1^2]_{1 \rightarrow *} + 2 [x_1 x_2]_{2 \rightarrow *}

and

\displaystyle [x_1 x_2]_{2 \rightarrow *} [x_1]_{1 \rightarrow *} = [x_1^2 x_2 + x_1 x_2^2]_{2 \rightarrow *} + [3 x_1 x_2 x_3]_{3 \rightarrow *}.

Informally, {L(\Omega^*)_{sym}} is an abstraction (or “inverse limit”) of the concept of a symmetric function of an unspecified number of variables, which are formed by summing terms that each involve only a bounded number of these variables at a time. One can check (somewhat tediously) that {L(\Omega^*)_{sym}} is indeed a commutative real algebra, with a unit {[1]_{0 \rightarrow *}}. (I do not know if this algebra has previously been studied in the literature; it is somewhat analogous to the abstract algebra of finite linear combinations of Schur polynomials, with multiplication given by a Littlewood-Richardson rule.)

For natural numbers {p}, there is an obvious specialisation map {[]_{* \rightarrow p}} from {L(\Omega^*)_{sym}} to {L(\Omega^p)_{sym}}, defined by the formula

\displaystyle [\sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}]_{* \rightarrow p} := \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow p}.

Thus, for instance, {[]_{* \rightarrow p}} maps {[x_1]_{1 \rightarrow *}} to {[x_1]_{1 \rightarrow p}} and {[x_1 x_2]_{2 \rightarrow *}} to {[x_1 x_2]_{2 \rightarrow p}}. From (2) and (3) we see that this map {[]_{* \rightarrow p}: L(\Omega^*)_{sym} \rightarrow L(\Omega^p)_{sym}} is an algebra homomorphism, even though the maps {[]_{k \rightarrow *}: L(\Omega^k)_{sym} \rightarrow L(\Omega^*)_{sym}} and {[]_{k \rightarrow p}: L(\Omega^k)_{sym} \rightarrow L(\Omega^p)_{sym}} are not homomorphisms. By inspecting the {p^{th}} component of {L(\Omega^*)_{sym}} we see that the homomorphism {[]_{* \rightarrow p}} is in fact surjective.

Now suppose that we have a measure {\mu} on the space {\Omega}, which then induces a product measure {\mu^p} on every product space {\Omega^p}. To avoid degeneracies we will assume that the integral {\int_\Omega\ d\mu} is strictly positive. Assuming suitable measurability and integrability hypotheses, a function {F \in L(\Omega^p)_{sym}} can then be integrated against this product measure to produce a number

\displaystyle \int_{\Omega^p} F\ d\mu^p.

In the event that {F} arises as a lift {[F^{(k)}]_{k \rightarrow p}} of another function {F^{(k)} \in L(\Omega^k)_{sym}}, then from Fubini’s theorem we obtain the formula

\displaystyle \int_{\Omega^p} F\ d\mu^p = \binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ d\mu)^{p-k}.

Thus for instance, if {\Omega={\bf R}},

\displaystyle \int_{{\bf R}^p} [x_1]_{1 \rightarrow p}\ d\mu^p = p (\int_{\bf R} x\ d\mu(x)) (\int_{\bf R}\ d\mu)^{p-1} \ \ \ \ \ (4)

 

and

\displaystyle \int_{{\bf R}^p} [x_1 x_2]_{2 \rightarrow p}\ d\mu^p = \binom{p}{2} (\int_{{\bf R}^2} x_1 x_2\ d\mu(x_1) d\mu(x_2)) (\int_{\bf R}\ d\mu)^{p-2}. \ \ \ \ \ (5)

 

On summing, we see that if

\displaystyle F^{(*)} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}

is an element of the formal algebra {L(\Omega^*)_{sym}}, then

\displaystyle \int_{\Omega^p} [F^{(*)}]_{* \rightarrow p}\ d\mu^p = \sum_{k=0}^\infty \binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ d\mu)^{p-k}. \ \ \ \ \ (6)

 

Note that by hypothesis, only finitely many terms on the right-hand side are non-zero.
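
For natural number {p}, the formula (6) can be checked by brute force; here is a small exact sketch with a two-point space {\Omega} and a non-probability measure (the specific numbers being arbitrary choices of mine):

```python
from itertools import combinations, product
from math import comb, prod

omega = [0.0, 1.0]
mu = {0.0: 0.7, 1.0: 1.1}           # a finite positive measure on Omega
mass = sum(mu.values())             # the total mass of mu
F2 = lambda x, y: x * y + 1.0       # a symmetric function of k = 2 variables
k, p = 2, 5

# left-hand side of (6): integrate the lift [F2]_{k->p} against mu^p
lhs = sum(sum(F2(w[i], w[j]) for i, j in combinations(range(p), 2))
          * prod(mu[s] for s in w)
          for w in product(omega, repeat=p))

# right-hand side of (6)
intF = sum(F2(a, b) * mu[a] * mu[b] for a, b in product(omega, repeat=2))
assert abs(lhs - comb(p, k) * intF * mass ** (p - k)) < 1e-9
```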

Now for a key observation: whereas the left-hand side of (6) only makes sense when {p} is a natural number, the right-hand side is meaningful when {p} takes a fractional value (or even when it takes negative or complex values!), interpreting the binomial coefficient {\binom{p}{k}} as a polynomial {\frac{p(p-1) \dots (p-k+1)}{k!}} in {p}. As such, this suggests a way to introduce a “virtual” concept of a symmetric function on a fractional power space {\Omega^p} for such values of {p}, and even to integrate such functions against product measures {\mu^p}, even if the fractional power {\Omega^p} does not exist in the usual set-theoretic sense (and {\mu^p} similarly does not exist in the usual measure-theoretic sense). More precisely, for arbitrary real or complex {p}, we now define {L(\Omega^p)_{sym}} to be the space of abstract objects

\displaystyle F^{(p)} = [F^{(*)}]_{* \rightarrow p} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow p}

with {F^{(*)} \in L(\Omega^*)_{sym}}, and with {[]_{* \rightarrow p}} (and {[]_{k \rightarrow p}}) now interpreted as formal symbols, with the structure of a commutative real algebra inherited from {L(\Omega^*)_{sym}}, thus

\displaystyle [F^{(*)}]_{* \rightarrow p} + [G^{(*)}]_{* \rightarrow p} := [F^{(*)} + G^{(*)}]_{* \rightarrow p}

\displaystyle c [F^{(*)}]_{* \rightarrow p} := [c F^{(*)}]_{* \rightarrow p}

\displaystyle [F^{(*)}]_{* \rightarrow p} [G^{(*)}]_{* \rightarrow p} := [F^{(*)} G^{(*)}]_{* \rightarrow p}.

In particular, the multiplication law (2) continues to hold for such values of {p}, thanks to (3). Given any measure {\mu} on {\Omega}, we formally define a measure {\mu^p} on {\Omega^p} with regards to which we can integrate elements {F^{(p)}} of {L(\Omega^p)_{sym}} by the formula (6) (providing one has sufficient measurability and integrability to make sense of this formula), thus providing a sort of “fractional dimensional integral” for symmetric functions. Thus, for instance, with this formalism the identities (4), (5) now hold for fractional values of {p}, even though the formal space {{\bf R}^p} no longer makes sense as a set, and the formal measure {\mu^p} no longer makes sense as a measure. (The formalism here is somewhat reminiscent of the technique of dimensional regularisation employed in the physical literature in order to assign values to otherwise divergent integrals. See also this post for an unrelated abstraction of the integration concept involving integration over supercommutative variables (and in particular over fermionic variables).)

Example 2 Suppose {\mu} is a probability measure on {\Omega}, and {X: \Omega \rightarrow {\bf R}} is a random variable; on any power {\Omega^k}, we let {X_1,\dots,X_k: \Omega^k \rightarrow {\bf R}} be the usual independent copies of {X} on {\Omega^k}, thus {X_j(\omega_1,\dots,\omega_k) := X(\omega_j)} for {(\omega_1,\dots,\omega_k) \in \Omega^k}. Then for any real or complex {p}, the formal integral

\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p

can be evaluated by first using the identity

\displaystyle [X_1]_{1 \rightarrow p}^2 = [X_1^2]_{1 \rightarrow p} + 2[X_1 X_2]_{2 \rightarrow p}

(cf. (1)) and then using (6) and the probability measure hypothesis {\int_\Omega\ d\mu = 1} to conclude that

\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p = \binom{p}{1} \int_{\Omega} X^2\ d\mu + 2 \binom{p}{2} \int_{\Omega^2} X_1 X_2\ d\mu^2

\displaystyle = p (\int_\Omega X^2\ d\mu - (\int_\Omega X\ d\mu)^2) + p^2 (\int_\Omega X\ d\mu)^2

or in probabilistic notation

\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p = p \mathbf{Var}(X) + p^2 \mathbf{E}(X)^2. \ \ \ \ \ (7)

 

For {p} a natural number, this identity has the probabilistic interpretation

\displaystyle \mathbf{E}( X_1 + \dots + X_p)^2 = p \mathbf{Var}(X) + p^2 \mathbf{E}(X)^2 \ \ \ \ \ (8)

 

whenever {X_1,\dots,X_p} are jointly independent copies of {X}, which reflects the well known fact that the sum {X_1 + \dots + X_p} has expectation {p \mathbf{E} X} and variance {p \mathbf{Var}(X)}. One can thus view (7) as an abstract generalisation of (8) to the case when {p} is fractional, negative, or even complex, despite the fact that there is no sensible way in this case to talk about {p} independent copies {X_1,\dots,X_p} of {X} in the standard framework of probability theory.
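
For natural number {p}, the identity (8) can similarly be confirmed by exact enumeration over the product space; a small sketch, with the three-point distribution an arbitrary choice of mine:

```python
from itertools import product
from math import prod

vals = [(-1.0, 0.25), (0.5, 0.5), (2.0, 0.25)]   # (value, probability) pairs
EX = sum(v * q for v, q in vals)
VarX = sum(v * v * q for v, q in vals) - EX ** 2

for p in range(1, 6):
    # E (X_1 + ... + X_p)^2, computed exactly over the product space
    lhs = sum(sum(v for v, _ in w) ** 2 * prod(q for _, q in w)
              for w in product(vals, repeat=p))
    assert abs(lhs - (p * VarX + p ** 2 * EX ** 2)) < 1e-9
```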

In this particular case, the quantity (7) is non-negative for every nonnegative {p}, which looks plausible given the form of the left-hand side. Unfortunately, this sort of non-negativity does not always hold; for instance, if {X} has mean zero, one can check that

\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^4\ d\mu^p = p \mathbf{Var}(X^2) + p(3p-2) (\mathbf{E}(X^2))^2

and the right-hand side can become negative for {p < 2/3}. This is a shame, because otherwise one could hope to start endowing {L(\Omega^p)_{sym}} with some sort of commutative von Neumann algebra type structure (or the abstract probability structure discussed in this previous post) and then interpret it as a genuine measure space rather than as a virtual one. (This failure of positivity is related to the fact that the characteristic function of a random variable, when raised to the {p^{th}} power, need not be a characteristic function of any random variable once {p} is no longer a natural number: “fractional convolution” does not preserve positivity!) However, one vestige of positivity remains: if {F: \Omega \rightarrow {\bf R}} is non-negative, then so is

\displaystyle \int_{\Omega^p} [F]_{1 \rightarrow p}\ d\mu^p = p (\int_\Omega F\ d\mu) (\int_\Omega\ d\mu)^{p-1}.

One can wonder what the point is to all of this abstract formalism and how it relates to the rest of mathematics. For me, this formalism originated implicitly in an old paper I wrote with Jon Bennett and Tony Carbery on the multilinear restriction and Kakeya conjectures, though we did not have a good language for working with it at the time, instead working first with the case of natural number exponents {p} and appealing to a general extrapolation theorem to then obtain various identities in the fractional {p} case. The connection between these fractional dimensional integrals and more traditional integrals ultimately arises from the simple identity

\displaystyle (\int_\Omega\ d\mu)^p = \int_{\Omega^p}\ d\mu^p

(where the right-hand side should be viewed as the fractional dimensional integral of the unit {[1]_{0 \rightarrow p}} against {\mu^p}). As such, one can manipulate {p^{th}} powers of ordinary integrals using the machinery of fractional dimensional integrals. A key lemma in this regard is

Lemma 3 (Differentiation formula) Suppose that a positive measure {\mu = \mu(t)} on {\Omega} depends on some parameter {t} and varies by the formula

\displaystyle \frac{d}{dt} \mu(t) = a(t) \mu(t) \ \ \ \ \ (9)

 

for some function {a(t): \Omega \rightarrow {\bf R}}. Let {p} be any real or complex number. Then, assuming sufficient smoothness and integrability of all quantities involved, we have

\displaystyle \frac{d}{dt} \int_{\Omega^p} F^{(p)}\ d\mu(t)^p = \int_{\Omega^p} F^{(p)} [a(t)]_{1 \rightarrow p}\ d\mu(t)^p \ \ \ \ \ (10)

 

for all {F^{(p)} \in L(\Omega^p)_{sym}} that are independent of {t}. If we allow {F^{(p)}(t)} to now depend on {t} also, then we have the more general total derivative formula

\displaystyle \frac{d}{dt} \int_{\Omega^p} F^{(p)}(t)\ d\mu(t)^p \ \ \ \ \ (11)

 

\displaystyle = \int_{\Omega^p} \frac{d}{dt} F^{(p)}(t) + F^{(p)}(t) [a(t)]_{1 \rightarrow p}\ d\mu(t)^p,

again assuming sufficient amounts of smoothness and regularity.

Proof: We just prove (10), as (11) then follows by the same argument used to prove the usual product rule. By linearity it suffices to verify this identity in the case {F^{(p)} = [F^{(k)}]_{k \rightarrow p}} for some symmetric function {F^{(k)} \in L(\Omega^k)_{sym}} for a natural number {k}. By (6), the left-hand side of (10) is then

\displaystyle \frac{d}{dt} [\binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu(t)^k) (\int_\Omega\ d\mu(t))^{p-k}]. \ \ \ \ \ (12)

 

Differentiating under the integral sign using (9) we have

\displaystyle \frac{d}{dt} \int_\Omega\ d\mu(t) = \int_\Omega\ a(t)\ d\mu(t)

and similarly

\displaystyle \frac{d}{dt} \int_{\Omega^k} F^{(k)}\ d\mu(t)^k = \int_{\Omega^k} F^{(k)}(a_1+\dots+a_k)\ d\mu(t)^k

where {a_1,\dots,a_k} are the standard {k} copies of {a = a(t)} on {\Omega^k}:

\displaystyle a_j(\omega_1,\dots,\omega_k) := a(\omega_j).

By the product rule, we can thus expand (12) as

\displaystyle \binom{p}{k} (\int_{\Omega^k} F^{(k)}(a_1+\dots+a_k)\ d\mu^k ) (\int_\Omega\ d\mu)^{p-k}

\displaystyle + \binom{p}{k} (p-k) (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ a\ d\mu) (\int_\Omega\ d\mu)^{p-k-1}

where we have suppressed the dependence on {t} for brevity. Since {\binom{p}{k} (p-k) = \binom{p}{k+1} (k+1)}, we can write this expression using (6) as

\displaystyle \int_{\Omega^p} [F^{(k)} (a_1 + \dots + a_k)]_{k \rightarrow p} + [ F^{(k)} \ast a ]_{k+1 \rightarrow p}\ d\mu^p

where {F^{(k)} \ast a \in L(\Omega^{k+1})_{sym}} is the symmetric function

\displaystyle F^{(k)} \ast a(\omega_1,\dots,\omega_{k+1}) := \sum_{j=1}^{k+1} F^{(k)}(\omega_1,\dots,\omega_{j-1},\omega_{j+1},\dots,\omega_{k+1}) a(\omega_j).

But from (2) one has

\displaystyle [F^{(k)} (a_1 + \dots + a_k)]_{k \rightarrow p} + [ F^{(k)} \ast a ]_{k+1 \rightarrow p} = [F^{(k)}]_{k \rightarrow p} [a]_{1 \rightarrow p}

and the claim follows. \Box

Remark 4 It is also instructive to prove this lemma in the special case when {p} is a natural number, in which case the fractional dimensional integral {\int_{\Omega^p} F^{(p)}\ d\mu(t)^p} can be interpreted as a classical integral. In this case, the identity (10) is immediate from applying the product rule to (9) to conclude that

\displaystyle \frac{d}{dt} d\mu(t)^p = [a(t)]_{1 \rightarrow p} d\mu(t)^p.

One could in fact derive (10) for arbitrary real or complex {p} from the case when {p} is a natural number by an extrapolation argument; see the appendix of my paper with Bennett and Carbery for details.
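
In the same spirit, one can test (10) numerically for natural number {p} by evolving a measure on a two-point space by {\mu(t) := e^{ta} \mu(0)} (which obeys (9) exactly) and comparing a central finite difference in {t} with the right-hand side; a sketch, with all parameter values my own:

```python
from itertools import combinations, product
from math import exp, prod

omega = [0.0, 1.0]
mu0 = {0.0: 0.7, 1.0: 1.1}
a = {0.0: 0.3, 1.0: -0.5}           # mu(t) := e^{t a} mu0 then satisfies (9)
F2 = lambda x, y: x * y + 1.0       # take F^{(p)} = [F2]_{2->p}
p = 4

def integral(t, weight=lambda w: 1.0):
    # integrates [F2]_{2->p} times a pointwise weight against mu(t)^p
    return sum(sum(F2(w[i], w[j]) for i, j in combinations(range(p), 2))
               * weight(w) * prod(mu0[s] * exp(t * a[s]) for s in w)
               for w in product(omega, repeat=p))

t, h = 0.4, 1e-5
lhs = (integral(t + h) - integral(t - h)) / (2 * h)        # d/dt of LHS of (10)
rhs = integral(t, weight=lambda w: sum(a[s] for s in w))   # RHS: insert [a]_{1->p}
assert abs(lhs - rhs) < 1e-4
```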

Let us give a simple PDE application of this lemma as illustration:

Proposition 5 (Heat flow monotonicity) Let {u: [0,+\infty) \times {\bf R}^d \rightarrow {\bf R}} be a solution to the heat equation {u_t = \Delta u} with initial data {\mu_0} a rapidly decreasing finite non-negative Radon measure, or more explicitly

\displaystyle u(t,x) = \frac{1}{(4\pi t)^{d/2}} \int_{{\bf R}^d} e^{-|x-y|^2/4t}\ d\mu_0(y)

for all {t>0}. Then for any {p>0}, the quantity

\displaystyle Q_p(t) := t^{\frac{d}{2} (p-1)} \int_{{\bf R}^d} u(t,x)^p\ dx

is monotone non-decreasing in {t \in (0,+\infty)} for {1 < p < \infty}, constant for {p=1}, and monotone non-increasing for {0 < p < 1}.

Proof: By a limiting argument we may assume that {d\mu_0} is absolutely continuous, with Radon-Nikodym derivative a test function; this is more than enough regularity to justify the arguments below.

For any {(t,x) \in (0,+\infty) \times {\bf R}^d}, let {\mu(t,x)} denote the Radon measure

\displaystyle d\mu(t,x)(y) := \frac{1}{(4\pi)^{d/2}} e^{-|x-y|^2/4t}\ d\mu_0(y).

Then the quantity {Q_p(t)} can be written as a fractional dimensional integral

\displaystyle Q_p(t) = t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p}\ d\mu(t,x)^p\ dx.

Observe that

\displaystyle \frac{\partial}{\partial t} d\mu(t,x) = \frac{|x-y|^2}{4t^2} d\mu(t,x)

and thus by Lemma 3 and the product rule

\displaystyle \frac{d}{dt} Q_p(t) = -\frac{d}{2t} Q_p(t) + t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p} [\frac{|x-y|^2}{4t^2}]_{1 \rightarrow p} d\mu(t,x)^p\ dx \ \ \ \ \ (13)

 

where we use {y} for the variable of integration in the factor space {{\bf R}^d} of {({\bf R}^d)^p}.

To simplify this expression we will take advantage of integration by parts in the {x} variable. Specifically, in any direction {x_j}, we have

\displaystyle \frac{\partial}{\partial x_j} d\mu(t,x) = -\frac{x_j-y_j}{2t} d\mu(t,x)

and hence by Lemma 3

\displaystyle \frac{\partial}{\partial x_j} \int_{({\bf R}^d)^p}\ d\mu(t,x)^p = - \int_{({\bf R}^d)^p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p.

Multiplying by {x_j} and integrating by parts, we see that

\displaystyle d t^{d/2} Q_p(t) = \int_{{\bf R}^d} \int_{({\bf R}^d)^p} x_j [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx

where we use the Einstein summation convention in {j}. Similarly, if {F_j(y)} is any reasonable function depending only on {y}, we have

\displaystyle \frac{\partial}{\partial x_j} \int_{({\bf R}^d)^p}[F_j(y)]_{1 \rightarrow p}\ d\mu(t,x)^p

\displaystyle = - \int_{({\bf R}^d)^p} [F_j(y)]_{1 \rightarrow p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p

and hence on integration by parts

\displaystyle 0 = \int_{{\bf R}^d} \int_{({\bf R}^d)^p} [F_j(y)]_{1 \rightarrow p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx.

We conclude that

\displaystyle \frac{d}{2t} Q_p(t) = t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p} (x_j - [F_j(y)]_{1 \rightarrow p}) [\frac{x_j-y_j}{4t^2}]_{1 \rightarrow p} d\mu(t,x)^p\ dx

and thus by (13)

\displaystyle \frac{d}{dt} Q_p(t) = \frac{1}{4t^{\frac{d}{2}+2}} \int_{{\bf R}^d} \int_{({\bf R}^d)^p}

\displaystyle [(x_j-y_j)(x_j-y_j)]_{1 \rightarrow p} - (x_j - [F_j(y)]_{1 \rightarrow p}) [x_j - y_j]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx.

The choice of {F_j} that then achieves the most cancellation turns out to be {F_j(y) = \frac{1}{p} y_j} (this cancels the terms that are linear or quadratic in the {x_j}), so that {x_j - [F_j(y)]_{1 \rightarrow p} = \frac{1}{p} [x_j - y_j]_{1 \rightarrow p}}. Repeating the calculations establishing (7), one has

\displaystyle \int_{({\bf R}^d)^p} [(x_j-y_j)(x_j-y_j)]_{1 \rightarrow p}\ d\mu^p = p \mathop{\bf E} |x-Y|^2 (\int_{{\bf R}^d}\ d\mu)^{p}

and

\displaystyle \int_{({\bf R}^d)^p} [x_j-y_j]_{1 \rightarrow p} [x_j-y_j]_{1 \rightarrow p}\ d\mu^p

\displaystyle = (p \mathbf{Var}(x-Y) + p^2 |\mathop{\bf E} x-Y|^2) (\int_{{\bf R}^d}\ d\mu)^{p}

where {Y} is the random variable drawn from {{\bf R}^d} with the normalised probability measure {\mu / \int_{{\bf R}^d}\ d\mu}. Since {\mathop{\bf E} |x-Y|^2 = \mathbf{Var}(x-Y) + |\mathop{\bf E} x-Y|^2}, one thus has

\displaystyle \frac{d}{dt} Q_p(t) = (p-1) \frac{1}{4t^{\frac{d}{2}+2}} \int_{{\bf R}^d} \mathbf{Var}(x-Y) (\int_{{\bf R}^d}\ d\mu)^{p}\ dx. \ \ \ \ \ (14)

 

This expression is clearly non-negative for {p>1}, equal to zero for {p=1}, and non-positive for {0 < p < 1}, giving the claim. (One could simplify {\mathbf{Var}(x-Y)} here as {\mathbf{Var}(Y)} if desired, though it is not strictly necessary to do so for the proof.) \Box

Remark 6 As with Remark 4, one can also establish the identity (14) first for natural numbers {p} by direct computation avoiding the theory of fractional dimensional integrals, and then extrapolate to the case of more general values of {p}. This particular identity is also simple enough that it can be directly established by integration by parts without much difficulty, even for fractional values of {p}.
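
The monotonicity is also easy to observe in a crude numerical experiment: the following sketch (assuming NumPy; dimension {d=1}, a two-bump initial measure, and grid parameters all arbitrary choices of mine) computes {Q_p(t)} by quadrature and reports its trend for a few values of {p}:

```python
import numpy as np

x = np.linspace(-40, 40, 4001)
dx = x[1] - x[0]

def u(t):
    # heat evolution of mu_0 = delta_{-3} + 2 delta_{+3} in dimension d = 1
    k = lambda c: np.exp(-(x - c) ** 2 / (4 * t)) / np.sqrt(4 * np.pi * t)
    return k(-3.0) + 2.0 * k(3.0)

ts = np.linspace(0.5, 10.0, 40)
for p in (0.5, 1.0, 2.5):
    Q = [t ** (0.5 * (p - 1)) * np.sum(u(t) ** p) * dx for t in ts]
    d = np.diff(Q)
    print(p, "constant" if np.all(np.abs(d) < 1e-8) else
          "non-decreasing" if np.all(d > 0) else "non-increasing")
```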

A more complicated version of this argument establishes the non-endpoint multilinear Kakeya inequality (without any logarithmic loss in a scale parameter {R}); this was established in my previous paper with Jon Bennett and Tony Carbery, but using the “natural number {p} first” approach rather than using the current formalism of fractional dimensional integration. However, the arguments can be translated into this formalism without much difficulty; we do so below the fold. (To simplify the exposition slightly we will not address issues of establishing enough regularity and integrability to justify all the manipulations, though in practice this can be done by standard limiting arguments.)


The following situation is very common in modern harmonic analysis: one has a large scale parameter {N} (sometimes written as {N=1/\delta} in the literature for some small scale parameter {\delta}, or as {N=R} for some large radius {R}), which ranges over some unbounded subset of {[1,+\infty)} (e.g. all sufficiently large real numbers {N}, or all powers of two), and one has some positive quantity {D(N)} depending on {N} that is known to be of polynomial size in the sense that

\displaystyle  C^{-1} N^{-C} \leq D(N) \leq C N^C \ \ \ \ \ (1)

for all {N} in the range and some constant {C>0}, and one wishes to obtain a subpolynomial upper bound for {D(N)}, by which we mean an upper bound of the form

\displaystyle  D(N) \leq C_\varepsilon N^\varepsilon \ \ \ \ \ (2)

for all {\varepsilon>0} and all {N} in the range, where {C_\varepsilon>0} can depend on {\varepsilon} but is independent of {N}. In many applications, this bound is nearly tight in the sense that one can easily establish a matching lower bound

\displaystyle  D(N) \geq C_\varepsilon N^{-\varepsilon}

in which case the property of having a subpolynomial upper bound is equivalent to that of being subpolynomial size in the sense that

\displaystyle  C_\varepsilon N^{-\varepsilon} \leq D(N) \leq C_\varepsilon N^\varepsilon \ \ \ \ \ (3)

for all {\varepsilon>0} and all {N} in the range. It would naturally be of interest to tighten these bounds further, for instance to show that {D(N)} is polylogarithmic or even bounded in size, but a subpolynomial bound is already sufficient for many applications.

Let us give some illustrative examples of this type of problem:

Example 1 (Kakeya conjecture) Here {N} ranges over all of {[1,+\infty)}. Let {d \geq 2} be a fixed dimension. For each {N \geq 1}, we pick a maximal {1/N}-separated set of directions {\Omega_N \subset S^{d-1}}. We let {D(N)} be the smallest constant for which one has the Kakeya inequality

\displaystyle  \| \sum_{\omega \in \Omega_N} 1_{T_\omega} \|_{L^{\frac{d}{d-1}}({\bf R}^d)} \leq D(N),

where {T_\omega} is a {1/N \times 1}-tube oriented in the direction {\omega}. The Kakeya maximal function conjecture is then equivalent to the assertion that {D(N)} has a subpolynomial upper bound (or equivalently, is of subpolynomial size). Currently this is only known in dimension {d=2}.

Example 2 (Restriction conjecture for the sphere) Here {N} ranges over all of {[1,+\infty)}. Let {d \geq 2} be a fixed dimension. We let {D(N)} be the smallest constant for which one has the restriction inequality

\displaystyle  \| \widehat{fd\sigma} \|_{L^{\frac{2d}{d-1}}(B(0,N))} \leq D(N) \| f \|_{L^\infty(S^{d-1})}

for all bounded measurable functions {f} on the unit sphere {S^{d-1}} equipped with surface measure {d\sigma}, where {B(0,N)} is the ball of radius {N} centred at the origin. The restriction conjecture of Stein for the sphere is then equivalent to the assertion that {D(N)} has a subpolynomial upper bound (or equivalently, is of subpolynomial size). Currently this is only known in dimension {d=2}.

Example 3 (Multilinear Kakeya inequality) Again {N} ranges over all of {[1,+\infty)}. Let {d \geq 2} be a fixed dimension, and let {S_1,\dots,S_d} be compact subsets of the sphere {S^{d-1}} which are transverse in the sense that there is a uniform lower bound {|\omega_1 \wedge \dots \wedge \omega_d| \geq c > 0} for the wedge product of directions {\omega_i \in S_i} for {i=1,\dots,d} (equivalently, there is no hyperplane through the origin that intersects all of the {S_i}). For each {N \geq 1}, we let {D(N)} be the smallest constant for which one has the multilinear Kakeya inequality

\displaystyle  \| \mathrm{geom} \sum_{T \in {\mathcal T}_i} 1_{T} \|_{L^{\frac{d}{d-1}}(B(0,N))} \leq D(N) \mathrm{geom} \# {\mathcal T}_i,

where for each {i=1,\dots,d}, {{\mathcal T}_i} is a collection of infinite tubes in {{\bf R}^d} of radius {1} oriented in a direction in {S_i}, which are separated in the sense that for any two tubes {T,T'} in {{\mathcal T}_i}, either the directions of {T,T'} differ by an angle of at least {1/N}, or {T,T'} are disjoint; and {\mathrm{geom} = \mathrm{geom}_{1 \leq i \leq d}} is our notation for the geometric mean

\displaystyle  \mathrm{geom} a_i := (a_1 \dots a_d)^{1/d}.

The multilinear Kakeya inequality of Bennett, Carbery, and myself establishes that {D(N)} is of subpolynomial size; a later argument of Guth improves this further by showing that {D(N)} is bounded (and in fact comparable to {1}).

Example 4 (Multilinear restriction theorem) Once again {N} ranges over all of {[1,+\infty)}. Let {d \geq 2} be a fixed dimension, and let {S_1,\dots,S_d} be compact subsets of the sphere {S^{d-1}} which are transverse as in the previous example. For each {N \geq 1}, we let {D(N)} be the smallest constant for which one has the multilinear restriction inequality

\displaystyle  \| \mathrm{geom} \widehat{f_i d\sigma} \|_{L^{\frac{2d}{d-1}}(B(0,N))} \leq D(N) \mathrm{geom} \| f_i \|_{L^2(S^{d-1})}

for all bounded measurable functions {f_i} on {S_i} for {i=1,\dots,d}. Then the multilinear restriction theorem of Bennett, Carbery, and myself establishes that {D(N)} is of subpolynomial size; it is known to be bounded for {d=2} (as can be easily verified from Plancherel’s theorem), but it remains open whether it is bounded for any {d>2}.

Example 5 (Decoupling for the paraboloid) {N} now ranges over the square numbers. Let {d \geq 2}, and subdivide the unit cube {[0,1]^{d-1}} into {N^{(d-1)/2}} cubes {Q} of sidelength {1/N^{1/2}}. For any {g \in L^1([0,1]^{d-1})}, define the extension operators

\displaystyle  E_{[0,1]^{d-1}} g( x', x_d ) := \int_{[0,1]^{d-1}} e^{2\pi i (x' \cdot \xi + x_d |\xi|^2)} g(\xi)\ d\xi

and

\displaystyle  E_Q g( x', x_d ) := \int_{Q} e^{2\pi i (x' \cdot \xi + x_d |\xi|^2)} g(\xi)\ d\xi

for {x' \in {\bf R}^{d-1}} and {x_d \in {\bf R}}. We also introduce the weight function

\displaystyle  w_{B(0,N)}(x) := (1 + \frac{|x|}{N})^{-100d}.

For any {p}, let {D_p(N)} be the smallest constant for which one has the decoupling inequality

\displaystyle  \| E_{[0,1]^{d-1}} g \|_{L^p(w_{B(0,N)})} \leq D_p(N) (\sum_Q \| E_Q g \|_{L^p(w_{B(0,N)})}^2)^{1/2}.

The decoupling theorem of Bourgain and Demeter asserts that {D_p(N)} is of subpolynomial size for all {p} in the optimal range {2 \leq p \leq \frac{2(d+1)}{d-1}}.

Example 6 (Decoupling for the moment curve) {N} now ranges over the natural numbers. Let {d \geq 2}, and subdivide {[0,1]} into {N} intervals {J} of length {1/N}. For any {g \in L^1([0,1])}, define the extension operators

\displaystyle  E_{[0,1]} g(x_1,\dots,x_d) = \int_{[0,1]} e^{2\pi i ( x_1 \xi + x_2 \xi^2 + \dots + x_d \xi^d)} g(\xi)\ d\xi

and more generally

\displaystyle  E_J g(x_1,\dots,x_d) = \int_{J} e^{2\pi i ( x_1 \xi + x_2 \xi^2 + \dots + x_d \xi^d)} g(\xi)\ d\xi

for {(x_1,\dots,x_d) \in {\bf R}^d}. For any {p}, let {D_p(N)} be the smallest constant for which one has the decoupling inequality

\displaystyle  \| E_{[0,1]} g \|_{L^p(w_{B(0,N^d)})} \leq D_p(N) (\sum_J \| E_J g \|_{L^p(w_{B(0,N^d)})}^2)^{1/2}.

It was shown by Bourgain, Demeter, and Guth that {D_p(N)} is of subpolynomial size for all {p} in the optimal range {2 \leq p \leq d(d+1)}, which among other things implies the Vinogradov main conjecture (as discussed in this previous post).

It is convenient to use asymptotic notation to express these estimates. We write {X \lesssim Y}, {X = O(Y)}, or {Y \gtrsim X} to denote the inequality {|X| \leq CY} for some constant {C} independent of the scale parameter {N}, and write {X \sim Y} for {X \lesssim Y \lesssim X}. We write {X = o(Y)} to denote a bound of the form {|X| \leq c(N) Y} where {c(N) \rightarrow 0} as {N \rightarrow \infty} along the given range of {N}. We then write {X \lessapprox Y} for {X \lesssim N^{o(1)} Y}, and {X \approx Y} for {X \lessapprox Y \lessapprox X}. Then the statement that {D(N)} is of polynomial size can be written as

\displaystyle  D(N) \sim N^{O(1)},

while the statement that {D(N)} has a subpolynomial upper bound can be written as

\displaystyle  D(N) \lessapprox 1

and similarly the statement that {D(N)} is of subpolynomial size is simply

\displaystyle  D(N) \approx 1.

Many modern approaches to bounding quantities like {D(N)} in harmonic analysis rely on some sort of induction on scales approach in which {D(N)} is bounded using quantities such as {D(N^\theta)} for some exponents {0 < \theta < 1}. For instance, suppose one is somehow able to establish the inequality

\displaystyle  D(N) \lessapprox D(\sqrt{N}) \ \ \ \ \ (4)

for all {N \geq 1}, and suppose that {D} is also known to be of polynomial size. Then this implies that {D} has a subpolynomial upper bound. Indeed, one can iterate this inequality to show that

\displaystyle  D(N) \lessapprox D(N^{1/2^k})

for any fixed {k}; using the polynomial size hypothesis one thus has

\displaystyle  D(N) \lessapprox N^{C/2^k}

for some constant {C} independent of {k}. As {k} can be arbitrarily large, we conclude that {D(N) \lesssim N^\varepsilon} for any {\varepsilon>0}, and hence {D} is of subpolynomial size. (This sort of iteration is used for instance in my paper with Bennett and Carbery to derive the multilinear restriction theorem from the multilinear Kakeya theorem.)

Exercise 7 If {D} is of polynomial size, and obeys the inequality

\displaystyle  D(N) \lessapprox D(N^{1-\varepsilon}) + N^{O(\varepsilon)}

for any fixed {\varepsilon>0}, where the implied constant in the {O(\varepsilon)} notation is independent of {\varepsilon}, show that {D} has a subpolynomial upper bound. This type of inequality is used to equate various linear estimates in harmonic analysis with their multilinear counterparts; see for instance this paper of myself, Vargas, and Vega for an early example of this method.

In more recent years, more sophisticated induction on scales arguments have emerged in which one or more auxiliary quantities besides {D(N)} also come into play. Here is one example, this time being an abstraction of a short proof of the multilinear Kakeya inequality due to Guth. Let {D(N)} be the quantity in Example 3. We define {D(N,M)} similarly to {D(N)} for any {M \geq 1}, except that we now also require that the diameter of each set {S_i} is at most {1/M}. One can then observe the following estimates:

  • (Triangle inequality) For any {N,M \geq 1}, we have

    \displaystyle  D(N,M) = M^{O(1)} D(N). \ \ \ \ \ (5)

  • (Multiplicativity) For any {N_1,N_2 = N^{O(1)}}, one has

    \displaystyle  D(N_1 N_2, M) \lessapprox D(N_1, M) D(N_2, M). \ \ \ \ \ (6)

  • (Loomis-Whitney inequality) We have

    \displaystyle  D(N,N) \lessapprox 1. \ \ \ \ \ (7)

These inequalities now imply that {D} has a subpolynomial upper bound, as we now demonstrate. Let {k} be a large natural number (independent of {N}) to be chosen later. From many iterations of (6) we have

\displaystyle  D(N, N^{1/k}) \lessapprox D(N^{1/k},N^{1/k})^k

and hence by (7) (with {N} replaced by {N^{1/k}}) and (5)

\displaystyle  D(N) \lessapprox N^{O(1/k)}

where the implied constant in the {O(1/k)} exponent does not depend on {k}. As {k} can be arbitrarily large, the claim follows. We remark that a nearly identical scheme lets one deduce decoupling estimates for the three-dimensional cone from that of the two-dimensional paraboloid; see the final section of this paper of Bourgain and Demeter.

Now we give a slightly more sophisticated example, abstracted from the proof of {L^p} decoupling of the paraboloid by Bourgain and Demeter, as described in this study guide after specialising the dimension to {2} and the exponent {p} to the endpoint {p=6} (the argument is also more or less summarised in this previous post). (In the cited papers, the argument was phrased only for the non-endpoint case {p<6}, but it has been observed independently by many experts that the argument extends with only minor modifications to the endpoint {p=6}.) Here we have a quantity {D_p(N)} that we wish to show is of subpolynomial size. For any {0 < \varepsilon < 1} and {0 \leq u \leq 1}, one can define an auxiliary quantity {A_{p,u,\varepsilon}(N)}. The precise definitions of {D_p(N)} and {A_{p,u,\varepsilon}(N)} are given in the study guide (where they are called {\mathrm{Dec}_2(1/N,p)} and {A_p(u, B(0,N^2), u, g)} respectively, setting {\delta = 1/N} and {\nu = \delta^\varepsilon}) but will not be of importance to us for this discussion. Suffice to say that the following estimates are known:

  • (Crude upper bound for {D_p}) {D_p(N)} is of polynomial size: {D_p(N) \sim N^{O(1)}}.
  • (Bilinear reduction, using parabolic rescaling) For any {0 \leq u \leq 1}, one has

    \displaystyle  D_p(N) \lessapprox D_p(N^{1-\varepsilon}) + N^{O(\varepsilon)+O(u)} A_{p,u,\varepsilon}(N). \ \ \ \ \ (8)

  • (Crude upper bound for {A_{p,u,\varepsilon}(N)}) For any {0 \leq u \leq 1} one has

    \displaystyle  A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)+O(u)} D_p(N) \ \ \ \ \ (9)

  • (Application of multilinear Kakeya and {L^2} decoupling) If {\varepsilon, u} are sufficiently small (e.g. both less than {1/4}), then

    \displaystyle  A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{p,2u,\varepsilon}(N)^{1/2} D_p(N^{1-u})^{1/2}. \ \ \ \ \ (10)

In all of these bounds the implied constant exponents such as {O(\varepsilon)} or {O(u)} are independent of {\varepsilon} and {u}, although the implied constants in the {\lessapprox} notation can depend on both {\varepsilon} and {u}. Here we gloss over an annoying technicality in that quantities such as {N^{1-\varepsilon}}, {N^{1-u}}, or {N^u} might not be integers (and might not divide evenly into {N}), as is needed for the application to decoupling theorems; this can be resolved by restricting the scales involved to powers of two and restricting the values of {\varepsilon, u} to certain rational values, which introduces some complications to the later arguments below which we shall simply ignore as they do not significantly affect the numerology.

It turns out that these estimates imply that {D_p(N)} is of subpolynomial size. We give the argument as follows. As {D_p(N)} is known to be of polynomial size, we have some {\eta>0} for which we have the bound

\displaystyle  D_p(N) \lessapprox N^\eta \ \ \ \ \ (11)

for all {N}. We can pick {\eta} to be the minimal exponent for which this bound is attained: thus

\displaystyle  \eta = \limsup_{N \rightarrow \infty} \frac{\log D_p(N)}{\log N}. \ \ \ \ \ (12)

We will call this the upper exponent of {D_p(N)}. We need to show that {\eta \leq 0}. We assume for contradiction that {\eta > 0}. Let {\varepsilon>0} be a sufficiently small quantity depending on {\eta} to be chosen later. From (10) and (11) we then have

\displaystyle  A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{p,2u,\varepsilon}(N)^{1/2} N^{\eta (\frac{1}{2} - \frac{u}{2})}

for any sufficiently small {u}. A routine iteration then gives

\displaystyle  A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{p,2^k u,\varepsilon}(N)^{1/2^k} N^{\eta (1 - \frac{1}{2^k} - k\frac{u}{2})}

for any {k \geq 1} that is independent of {N}, if {u} is sufficiently small depending on {k}. A key point here is that the implied constant in the exponent {O(\varepsilon)} is uniform in {k} (the constant comes from summing a convergent geometric series). We now use the crude bound (9) followed by (11) and conclude that

\displaystyle  A_{p,u,\varepsilon}(N) \lessapprox N^{\eta (1 - k\frac{u}{2}) + O(\varepsilon) + O(u)}.

Applying (8) we then have

\displaystyle  D_p(N) \lessapprox N^{\eta(1-\varepsilon)} + N^{\eta (1 - k\frac{u}{2}) + O(\varepsilon) + O(u)}.

If we choose {k} sufficiently large depending on {\eta} (which was assumed to be positive), then the negative term {-\eta k \frac{u}{2}} will dominate the {O(u)} term. If we then pick {u} sufficiently small depending on {k}, then finally {\varepsilon} sufficiently small depending on all previous quantities, we will obtain {D_p(N) \lessapprox N^{\eta'}} for some {\eta'} strictly less than {\eta}, contradicting the definition of {\eta}. Thus {\eta} cannot be positive, and hence {D_p(N)} has a subpolynomial upper bound as required.

Exercise 8 Show that one still obtains a subpolynomial upper bound if the estimate (10) is replaced with

\displaystyle  A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{p,2u,\varepsilon}(N)^{1-\theta} D_p(N)^{\theta}

for some constant {0 \leq \theta < 1/2}, so long as we also improve (9) to

\displaystyle  A_{p,u,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} D_p(N^{1-u}).

(This variant of the argument lets one handle the non-endpoint cases {2 < p < 6} of the decoupling theorem for the paraboloid.)

To establish decoupling estimates for the moment curve, restricting to the endpoint case {p = d(d+1)} for sake of discussion, an even more sophisticated induction on scales argument was deployed by Bourgain, Demeter, and Guth. The proof is discussed in this previous blog post, but let us just describe an abstract version of the induction on scales argument. To bound the quantity {D_p(N) = D_{d(d+1)}(N)}, some auxiliary quantities {A_{t,q,s,\varepsilon}(N)} are introduced for various exponents {1 \leq t \leq \infty} and {0 \leq q,s \leq 1} and {\varepsilon>0}, with the following bounds:

  • (Crude upper bound for {D}) {D_p(N)} is of polynomial size: {D_p(N) \sim N^{O(1)}}.
  • (Multilinear reduction, using non-isotropic rescaling) For any {0 \leq q,s \leq 1} and {1 \leq t \leq \infty}, one has

    \displaystyle  D_p(N) \lessapprox D_p(N^{1-\varepsilon}) + N^{O(\varepsilon)+O(q)+O(s)} A_{t,q,s,\varepsilon}(N). \ \ \ \ \ (13)

  • (Crude upper bound for {A_{t,q,s,\varepsilon}(N)}) For any {0 \leq q,s \leq 1} and {1 \leq t \leq \infty} one has

    \displaystyle  A_{t,q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)+O(q)+O(s)} D_p(N) \ \ \ \ \ (14)

  • (Hölder) For {0 \leq q, s \leq 1} and {1 \leq t_0 \leq t_1 \leq \infty} one has

    \displaystyle  A_{t_0,q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{t_1,q,s,\varepsilon}(N) \ \ \ \ \ (15)

    and also

    \displaystyle  A_{t_\theta,q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{t_0,q,s,\varepsilon}(N)^{1-\theta} A_{t_1,q,s,\varepsilon}(N)^\theta \ \ \ \ \ (16)

    whenever {0 \leq \theta \leq 1}, where {\frac{1}{t_\theta} = \frac{1-\theta}{t_0} + \frac{\theta}{t_1}}.

  • (Rescaled decoupling hypothesis) For {0 \leq q,s \leq 1}, one has

    \displaystyle  A_{p,q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} D_p(N^{1-q}). \ \ \ \ \ (17)

  • (Lower dimensional decoupling) If {1 \leq k \leq d-1} and {q \leq s/k}, then

    \displaystyle  A_{k(k+1),q,s,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{k(k+1),s/k,s,\varepsilon}(N). \ \ \ \ \ (18)

  • (Multilinear Kakeya) If {1 \leq k \leq d-1} and {0 \leq q \leq 1}, then

    \displaystyle  A_{kp/d,q,kq,\varepsilon}(N) \lessapprox N^{O(\varepsilon)} A_{kp/d,q,(k+1)q,\varepsilon}(N). \ \ \ \ \ (19)

It is now substantially less obvious that these estimates can be combined to demonstrate that {D(N)} is of subpolynomial size; nevertheless this can be done. A somewhat complicated arrangement of the argument (involving some rather unmotivated choices of expressions to induct over) appears in my previous blog post; I give an alternate proof later in this post.

These examples indicate a general strategy to establish that some quantity {D(N)} is of subpolynomial size, by

  • (i) Introducing some family of related auxiliary quantities, often parameterised by several further parameters;
  • (ii) establishing as many bounds between these quantities and the original quantity {D(N)} as possible; and then
  • (iii) appealing to some sort of “induction on scales” to conclude.

The first two steps (i), (ii) depend very much on the harmonic analysis nature of the quantities {D(N)} and the related auxiliary quantities, and the estimates in (ii) will typically be proven from various harmonic analysis inputs such as Hölder’s inequality, rescaling arguments, decoupling estimates, or Kakeya type estimates. The final step (iii) requires no knowledge of where these quantities come from in harmonic analysis, but the iterations involved can become extremely complicated.

In this post I would like to observe that one can clean up and make more systematic this final step (iii) by passing to upper exponents (12) to eliminate the role of the parameter {N} (and also “tropicalising” all the estimates), and then taking similar limit superiors to eliminate some other less important parameters, until one is left with a simple linear programming problem (which, among other things, could be amenable to computer-assisted proving techniques). This method is analogous to that of passing to a simpler asymptotic limit object in many other areas of mathematics (for instance using the Furstenberg correspondence principle to pass from a combinatorial problem to an ergodic theory problem, as discussed in this previous post). We use the limit superior exclusively in this post, but many of the arguments here would also apply with one of the other generalised limit functionals discussed in this previous post, such as ultrafilter limits.

For instance, if {\eta} is the upper exponent of a quantity {D(N)} of polynomial size obeying (4), then by comparing the upper exponents of both sides of (4) one arrives at the scalar inequality

\displaystyle  \eta \leq \frac{1}{2} \eta

from which it is immediate that {\eta \leq 0}, giving the required subpolynomial upper bound. Notice how the passage to upper exponents converts the {\lessapprox} estimate to a simpler inequality {\leq}.

Exercise 9 Repeat Exercise 7 using this method.

Similarly, given the quantities {D(N,M)} obeying the axioms (5), (6), (7), and assuming that {D(N)} is of polynomial size (which is easily verified for the application at hand), we see that for any real numbers {a, u \geq 0}, the quantity {D(N^a,N^u)} is also of polynomial size and hence has some upper exponent {\eta(a,u)}; meanwhile {D(N)} itself has some upper exponent {\eta}. By reparameterising we have the homogeneity

\displaystyle  \eta(\lambda a, \lambda u) = \lambda \eta(a,u)

for any {\lambda \geq 0}. Also, comparing the upper exponents of both sides of the axioms (5), (6), (7) we arrive at the inequalities

\displaystyle  \eta(1,u) = \eta + O(u)

\displaystyle  \eta(a_1+a_2,u) \leq \eta(a_1,u) + \eta(a_2,u)

\displaystyle  \eta(1,1) \leq 0.

For any natural number {k}, the third inequality combined with homogeneity gives {\eta(1/k,1/k) \leq 0}, which when combined with the second inequality gives {\eta(1,1/k) \leq k \eta(1/k,1/k) \leq 0}, which on combination with the first estimate gives {\eta \leq O(1/k)}. Sending {k} to infinity we obtain {\eta \leq 0} as required.
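
As a toy illustration of the linear programming perspective mentioned earlier, one can hand these exponent inequalities (for a fixed {k}, with a concrete constant {C} standing in for the {O()} notation) to an LP solver and watch the optimal value of {\eta} decay like {O(1/k)}; a sketch assuming scipy, with {C = 10} an arbitrary choice of mine:

```python
from scipy.optimize import linprog

C = 10.0
for k in (1, 2, 5, 10, 100):
    # variables: [eta, eta(1, 1/k), eta(1/k, 1/k)]; maximise eta
    A_ub = [[1.0, -1.0, 0.0],    # eta <= eta(1, 1/k) + C/k          (first axiom)
            [0.0, 1.0, -k],      # eta(1, 1/k) <= k eta(1/k, 1/k)    (subadditivity)
            [0.0, 0.0, 1.0]]     # eta(1/k, 1/k) = eta(1,1)/k <= 0   (third axiom)
    b_ub = [C / k, 0.0, 0.0]
    res = linprog([-1.0, 0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 3)
    print(k, -res.fun)           # the optimal eta equals C/k, which tends to 0
```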

Now suppose that {D_p(N)}, {A_{p,u,\varepsilon}(N)} obey the axioms (8), (9), (10). For any fixed {u,\varepsilon}, the quantity {A_{p,u,\varepsilon}(N)} is of polynomial size (thanks to (9) and the polynomial size of {D_p}), and hence has some upper exponent {\eta(u,\varepsilon)}; similarly {D_p(N)} has some upper exponent {\eta}. (Actually, strictly speaking our axioms only give an upper bound on {A_{p,u,\varepsilon}} so we have to temporarily admit the possibility that {\eta(u,\varepsilon)=-\infty}, though this will soon be eliminated anyway.) Taking upper exponents of all the axioms we then conclude that

\displaystyle  \eta \leq \max( (1-\varepsilon) \eta, \eta(u,\varepsilon) + O(\varepsilon) + O(u) ) \ \ \ \ \ (20)

\displaystyle  \eta(u,\varepsilon) \leq \eta + O(\varepsilon) + O(u)

\displaystyle  \eta(u,\varepsilon) \leq \frac{1}{2} \eta(2u,\varepsilon) + \frac{1}{2} \eta (1-u) + O(\varepsilon)

for all {0 \leq u \leq 1} and {0 \leq \varepsilon \leq 1}.

Assume for contradiction that {\eta>0}, then {(1-\varepsilon) \eta < \eta}, and so the statement (20) simplifies to

\displaystyle  \eta \leq \eta(u,\varepsilon) + O(\varepsilon) + O(u).

At this point we can eliminate the role of {\varepsilon} and simplify the system by taking a second limit superior. If we write

\displaystyle  \eta(u) := \limsup_{\varepsilon \rightarrow 0} \eta(u,\varepsilon)

then on taking limit superiors of the previous inequalities we conclude that

\displaystyle  \eta(u) \leq \eta + O(u)

\displaystyle  \eta(u) \leq \frac{1}{2} \eta(2u) + \frac{1}{2} \eta (1-u) \ \ \ \ \ (21)

\displaystyle  \eta \leq \eta(u) + O(u)

for all {u}; in particular {\eta(u) = \eta + O(u)}. We take advantage of this by taking a further limit superior (or “upper derivative”) in the limit {u \rightarrow 0} to eliminate the role of {u} and simplify the system further. If we define

\displaystyle  \alpha := \limsup_{u \rightarrow 0^+} \frac{\eta(u)-\eta}{u},

so that {\alpha} is the best constant for which {\eta(u) \leq \eta + \alpha u + o(u)} as {u \rightarrow 0}, then {\alpha} is finite, and by inserting this “Taylor expansion” into the right-hand side of (21) we conclude that

\displaystyle  \alpha \leq \alpha - \frac{1}{2} \eta.

This leads to a contradiction when {\eta>0}, and hence {\eta \leq 0} as desired.

Exercise 10 Redo Exercise 8 using this method.

The same strategy now clarifies how to proceed with the more complicated system of quantities {A_{t,q,s,\varepsilon}(N)} obeying the axioms (13)-(19) with {D_p(N)} of polynomial size. Let {\eta} be the upper exponent of {D_p(N)}. From (14) we see that for fixed {t,q,s,\varepsilon}, each {A_{t,q,s,\varepsilon}(N)} is also of polynomial size (at least in upper bound) and so has some exponent {a(t,q,s,\varepsilon)} (which for now we can permit to be {-\infty}). Taking upper exponents of all the various axioms we can now eliminate {N} and arrive at the simpler axioms

\displaystyle  \eta \leq \max( (1-\varepsilon) \eta, a(t,q,s,\varepsilon) + O(\varepsilon) + O(q) + O(s) )

\displaystyle  a(t,q,s,\varepsilon) \leq \eta + O(\varepsilon) + O(q) + O(s)

\displaystyle  a(t_0,q,s,\varepsilon) \leq a(t_1,q,s,\varepsilon) + O(\varepsilon)

\displaystyle  a(t_\theta,q,s,\varepsilon) \leq (1-\theta) a(t_0,q,s,\varepsilon) + \theta a(t_1,q,s,\varepsilon) + O(\varepsilon)

\displaystyle  a(d(d+1),q,s,\varepsilon) \leq \eta(1-q) + O(\varepsilon)

for all {0 \leq q,s \leq 1}, {1 \leq t \leq \infty}, {1 \leq t_0 \leq t_1 \leq \infty} and {0 \leq \theta \leq 1}, with the lower dimensional decoupling inequality

\displaystyle  a(k(k+1),q,s,\varepsilon) \leq a(k(k+1),s/k,s,\varepsilon) + O(\varepsilon)

for {1 \leq k \leq d-1} and {q \leq s/k}, and the multilinear Kakeya inequality

\displaystyle  a(k(d+1),q,kq,\varepsilon) \leq a(k(d+1),q,(k+1)q,\varepsilon)

for {1 \leq k \leq d-1} and {0 \leq q \leq 1}.

As before, if we assume for sake of contradiction that {\eta>0} then the first inequality simplifies to

\displaystyle  \eta \leq a(t,q,s,\varepsilon) + O(\varepsilon) + O(q) + O(s).

We can then again eliminate the role of {\varepsilon} by taking a second limit superior as {\varepsilon \rightarrow 0}, introducing

\displaystyle  a(t,q,s) := \limsup_{\varepsilon \rightarrow 0} a(t,q,s,\varepsilon)

and thus getting the simplified axiom system

\displaystyle  a(t,q,s) \leq \eta + O(q) + O(s) \ \ \ \ \ (22)

\displaystyle  a(t_0,q,s) \leq a(t_1,q,s)

\displaystyle  a(t_\theta,q,s) \leq (1-\theta) a(t_0,q,s) + \theta a(t_1,q,s)

\displaystyle  a(d(d+1),q,s) \leq \eta(1-q)

\displaystyle  \eta \leq a(t,q,s) + O(q) + O(s) \ \ \ \ \ (23)

and also

\displaystyle  a(k(k+1),q,s) \leq a(k(k+1),s/k,s)

for {1 \leq k \leq d-1} and {q \leq s/k}, and

\displaystyle  a(k(d+1),q,kq) \leq a(k(d+1),q,(k+1)q)

for {1 \leq k \leq d-1} and {0 \leq q \leq 1}.

In view of the latter two estimates it is natural to restrict attention to the quantities {a(t,q,kq)} for {1 \leq k \leq d+1}. By the axioms (22), (23), these quantities are of the form {\eta + O(q)}. We can then eliminate the role of {q} by taking another limit superior

\displaystyle  \alpha_k(t) := \limsup_{q \rightarrow 0} \frac{a(t,q,kq)-\eta}{q}.

The axioms now simplify to

\displaystyle  \alpha_k(t) = O(1)

\displaystyle  \alpha_k(t_0) \leq \alpha_k(t_1) \ \ \ \ \ (24)

\displaystyle  \alpha_k(t_\theta) \leq (1-\theta) \alpha_k(t_0) + \theta \alpha_k(t_1) \ \ \ \ \ (25)

\displaystyle  \alpha_k(d(d+1)) \leq -\eta \ \ \ \ \ (26)

and

\displaystyle  \alpha_j(k(k+1)) \leq \frac{j}{k} \alpha_k(k(k+1)) \ \ \ \ \ (27)

for {1 \leq k \leq d-1} and {k \leq j \leq d}, and

\displaystyle  \alpha_k(k(d+1)) \leq \alpha_{k+1}(k(d+1)) \ \ \ \ \ (28)

for {1 \leq k \leq d-1}.

It turns out that the inequality (27) is strongest when {j=k+1}, thus

\displaystyle  \alpha_{k+1}(k(k+1)) \leq \frac{k+1}{k} \alpha_k(k(k+1)) \ \ \ \ \ (29)

for {1 \leq k \leq d-1}.

From the last two inequalities (28), (29) we see that a special role is likely to be played by the exponents

\displaystyle  \beta_k := \alpha_k(k(k-1))

for {2 \leq k \leq d} and

\displaystyle \gamma_k := \alpha_k(k(d+1))

for {1 \leq k \leq d}. From the convexity (25) and a brief calculation we have

\displaystyle  \alpha_{k+1}(k(d+1)) \leq \frac{1}{d-k+1} \alpha_{k+1}(k(k+1))

\displaystyle + \frac{d-k}{d-k+1} \alpha_{k+1}((k+1)(d+1)),

for {1 \leq k \leq d-1}, hence from (28) we have

\displaystyle  \gamma_k \leq \frac{1}{d-k+1} \beta_{k+1} + \frac{d-k}{d-k+1} \gamma_{k+1}. \ \ \ \ \ (30)

Similarly, from (25) and a brief calculation we have

\displaystyle  \alpha_k(k(k+1)) \leq \frac{(d-k)(k-1)}{(k+1)(d-k+2)} \alpha_k( k(k-1))

\displaystyle  + \frac{2(d+1)}{(k+1)(d-k+2)} \alpha_k(k(d+1))

for {2 \leq k \leq d-1}; the same bound holds for {k=1} if we drop the term with the {(k-1)} factor, thanks to (24). Thus from (29) we have

\displaystyle  \beta_{k+1} \leq \frac{(d-k)(k-1)}{k(d-k+2)} \beta_k + \frac{2(d+1)}{k(d-k+2)} \gamma_k, \ \ \ \ \ (31)

for {1 \leq k \leq d-1}, again with the understanding that we omit the first term on the right-hand side when {k=1}. Finally, (26) gives

\displaystyle  \gamma_d \leq -\eta.
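The two “brief calculations” above amount to elementary identities for the interpolation parameters; assuming (as in the convexity axiom used here) that {t_\theta} is defined by {1/t_\theta = (1-\theta)/t_0 + \theta/t_1}, they can be checked symbolically, e.g. with the following quick Python sketch (the symbol names are of course ad hoc):

```python
import sympy as sp

d, k = sp.symbols('d k', positive=True)

# (30) interpolates t_theta = k(d+1) between t_0 = k(k+1) and t_1 = (k+1)(d+1)
# with theta = (d-k)/(d-k+1), where 1/t_theta = (1-theta)/t_0 + theta/t_1:
theta = (d - k)/(d - k + 1)
print(sp.simplify(1/(k*(d + 1)) - (1 - theta)/(k*(k + 1)) - theta/((k + 1)*(d + 1))))  # 0

# (31) interpolates t_theta = k(k+1) between t_0 = k(k-1) and t_1 = k(d+1)
# with theta = 2(d+1)/((k+1)(d-k+2))  (valid for 2 <= k <= d-1):
theta2 = 2*(d + 1)/((k + 1)*(d - k + 2))
print(sp.simplify(1/(k*(k + 1)) - (1 - theta2)/(k*(k - 1)) - theta2/(k*(d + 1))))  # 0
```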

Let us write out the system of equations we have obtained in full:

\displaystyle  \beta_2 \leq 2 \gamma_1 \ \ \ \ \ (32)

\displaystyle  \gamma_1 \leq \frac{1}{d} \beta_2 + \frac{d-1}{d} \gamma_2 \ \ \ \ \ (33)

\displaystyle  \beta_3 \leq \frac{d-2}{2d} \beta_2 + \frac{2(d+1)}{2d} \gamma_2 \ \ \ \ \ (34)

\displaystyle  \gamma_2 \leq \frac{1}{d-1} \beta_3 + \frac{d-2}{d-1} \gamma_3 \ \ \ \ \ (35)

\displaystyle  \beta_4 \leq \frac{2(d-3)}{3(d-1)} \beta_3 + \frac{2(d+1)}{3(d-1)} \gamma_3

\displaystyle  \gamma_3 \leq \frac{1}{d-2} \beta_4 + \frac{d-3}{d-2} \gamma_4

\displaystyle  ...

\displaystyle  \beta_d \leq \frac{d-2}{3(d-1)} \beta_{d-1} + \frac{2(d+1)}{3(d-1)} \gamma_{d-1}

\displaystyle  \gamma_{d-1} \leq \frac{1}{2} \beta_d + \frac{1}{2} \gamma_d \ \ \ \ \ (36)

\displaystyle  \gamma_d \leq -\eta. \ \ \ \ \ (37)

We can then eliminate the variables one by one. Inserting (33) into (32) we obtain

\displaystyle  \beta_2 \leq \frac{2}{d} \beta_2 + \frac{2(d-1)}{d} \gamma_2

which simplifies to

\displaystyle  \beta_2 \leq \frac{2(d-1)}{d-2} \gamma_2.

Inserting this into (34) gives

\displaystyle  \beta_3 \leq 2 \gamma_2

which when combined with (35) gives

\displaystyle  \beta_3 \leq \frac{2}{d-1} \beta_3 + \frac{2(d-2)}{d-1} \gamma_3

which simplifies to

\displaystyle  \beta_3 \leq \frac{2(d-2)}{d-3} \gamma_3.

Iterating this we get

\displaystyle  \beta_{k+1} \leq 2 \gamma_k

for all {1 \leq k \leq d-1} and

\displaystyle  \beta_k \leq \frac{2(d-k+1)}{d-k} \gamma_k

for all {2 \leq k \leq d-1}. In particular

\displaystyle  \beta_d \leq 2 \gamma_{d-1}

which on insertion into (36), (37) gives

\displaystyle  \beta_d \leq \beta_d - \eta

which is absurd if {\eta>0}. Thus {\eta \leq 0} and so {D_p(N)} must be of subpolynomial growth.
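Incidentally, the induction step in the above iteration is a one-line algebraic simplification which can also be checked symbolically; here is a quick sketch verifying that the bound {\beta_k \leq \frac{2(d-k+1)}{d-k} \gamma_k}, when inserted into (31), returns {\beta_{k+1} \leq 2 \gamma_k}:

```python
import sympy as sp

d, k, gamma_k = sp.symbols('d k gamma_k', positive=True)

# assume the inductive bound beta_k <= 2(d-k+1)/(d-k) gamma_k and substitute it
# into (31): beta_{k+1} <= (d-k)(k-1)/(k(d-k+2)) beta_k + 2(d+1)/(k(d-k+2)) gamma_k
beta_k = 2*(d - k + 1)/(d - k)*gamma_k
beta_k1 = (d - k)*(k - 1)/(k*(d - k + 2))*beta_k + 2*(d + 1)/(k*(d - k + 2))*gamma_k

print(sp.simplify(beta_k1))  # prints 2*gamma_k, i.e. beta_{k+1} <= 2 gamma_k
```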

Remark 11 (This observation is essentially due to Heath-Brown.) If we let {x} denote the column vector with entries {\beta_2,\dots,\beta_d,\gamma_1,\dots,\gamma_{d-1}} (arranged in whatever order one pleases), then the above system of inequalities (32)–(36) (using (37) to handle the appearance of {\gamma_d} in (36)) reads

\displaystyle  x \leq Px + \eta v \ \ \ \ \ (38)

for some explicit square matrix {P} with non-negative coefficients, where the inequality denotes pointwise domination, and {v} is an explicit vector with non-positive coefficients that reflects the effect of (37). It is possible to show (using (24), (26)) that all the coefficients of {x} are negative (assuming the counterfactual situation {\eta>0} of course). Then we can iterate this to obtain

\displaystyle  x \leq P^k x + \eta \sum_{j=0}^{k-1} P^j v

for any natural number {k}. This would lead to an immediate contradiction if the Perron-Frobenius eigenvalue of {P} exceeds {1} because {P^k x} would now grow exponentially; this is typically the situation for “non-endpoint” applications such as proving decoupling inequalities away from the endpoint. In the endpoint situation discussed above, the Perron-Frobenius eigenvalue is {1}, with {v} having a non-trivial projection to this eigenspace, so the sum {\sum_{j=0}^{k-1} \eta P^j v} now grows at least linearly, which still gives the required contradiction for any {\eta>0}. So it is important to gather “enough” inequalities so that the relevant matrix {P} has a Perron-Frobenius eigenvalue greater than or equal to {1} (and in the latter case one also needs the vector {v} supplied by the induction hypothesis to have a non-trivial projection onto the eigenspace of the eigenvalue {1}). More specifically, if {\rho} is the spectral radius of {P} and {w^T} is a left Perron-Frobenius eigenvector, that is to say a non-negative vector, not identically zero, such that {w^T P = \rho w^T}, then by taking inner products of (38) with {w} we obtain

\displaystyle  w^T x \leq \rho w^T x + \eta w^T v.

If {\rho > 1} this leads to a contradiction since {w^T x} is negative and {w^T v} is non-positive. When {\rho = 1} one still gets a contradiction as long as {w^T v} is strictly negative.

Remark 12 (This calculation is essentially due to Guo and Zorin-Kranich.) Here is a concrete application of the Perron-Frobenius strategy outlined above to the system of inequalities (32)–(37). Consider the weighted sum

\displaystyle  W := \sum_{k=2}^d (k-1) \beta_k + \sum_{k=1}^{d-1} 2k \gamma_k;

I had secretly calculated the weights {k-1}, {2k} as coming from the left Perron-Frobenius eigenvector of the matrix {P} described in the previous remark, but for this calculation the precise provenance of the weights is not relevant. Applying the inequalities (31), (30) we see that {W} is bounded by

\displaystyle  \sum_{k=2}^d (k-1) (\frac{(d-k+1)(k-2)}{(k-1)(d-k+3)} \beta_{k-1} + \frac{2(d+1)}{(k-1)(d-k+3)} \gamma_{k-1})

\displaystyle  + \sum_{k=1}^{d-1} 2k(\frac{1}{d-k+1} \beta_{k+1} + \frac{d-k}{d-k+1} \gamma_{k+1})

(with the convention that the {\beta_1} term is absent); this simplifies after some calculation to the bound

\displaystyle  W \leq W + (d-1) \gamma_d

and this, combined with (37), then leads to the required contradiction when {\eta > 0}.
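One can verify the weighted-sum computation in this remark symbolically in any fixed dimension; the following quick sketch (for the arbitrarily chosen test dimension {d=6}) reproduces the leakage term {(d-1)\gamma_d}:

```python
import sympy as sp

dval = 6   # an arbitrarily chosen test dimension
beta = {k: sp.Symbol(f'beta{k}') for k in range(2, dval + 1)}
gamma = {k: sp.Symbol(f'gamma{k}') for k in range(1, dval + 1)}

# the weighted sum W, with weights (k-1) on beta_k and 2k on gamma_k
W = sum((k - 1)*beta[k] for k in range(2, dval + 1)) \
    + sum(2*k*gamma[k] for k in range(1, dval))

def rhs31(k):   # right-hand side of (31), bounding beta_{k+1}; no beta term when k = 1
    t = 2*(dval + 1)/sp.Integer(k*(dval - k + 2))*gamma[k]
    if k > 1:
        t += (dval - k)*(k - 1)/sp.Integer(k*(dval - k + 2))*beta[k]
    return t

def rhs30(k):   # right-hand side of (30), bounding gamma_k
    return beta[k + 1]/sp.Integer(dval - k + 1) \
        + sp.Rational(dval - k, dval - k + 1)*gamma[k + 1]

W_bound = sum((k - 1)*rhs31(k - 1) for k in range(2, dval + 1)) \
    + sum(2*k*rhs30(k) for k in range(1, dval))

print(sp.simplify(W_bound - W))   # prints 5*gamma6, i.e. (d-1)*gamma_d
```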

Exercise 13

  • (i) Extend the above analysis to also cover the non-endpoint case {d^2 < p < d(d+1)}. (One will need to establish the claim {\alpha_k(t) \leq -\eta} for {t \leq p}.)
  • (ii) Modify the argument to deal with the remaining cases {2 < p \leq d^2} by dropping some of the steps.

I was recently asked to contribute a short comment to Nature Reviews Physics, as part of a series of articles on fluid dynamics on the occasion of the 200th anniversary (this August) of the birthday of George Stokes.  My contribution is now online as “Searching for singularities in the Navier–Stokes equations“, where I discuss the global regularity problem for Navier-Stokes and my thoughts on how one could try to construct a solution that blows up in finite time via an approximately discretely self-similar “fluid computer”.  (The rest of the series does not currently seem to be available online, but I expect they will become so shortly.)


Given three points {A,B,C} in the plane, the distances {|AB|, |BC|, |AC|} between them have to be non-negative and obey the triangle inequalities

\displaystyle  |AB| \leq |BC| + |AC|, |BC| \leq |AC| + |AB|, |AC| \leq |AB| + |BC|

but are otherwise unconstrained. But if one has four points {A,B,C,D} in the plane, then there is an additional constraint connecting the six distances {|AB|, |AC|, |AD|, |BC|, |BD|, |CD|} between them, coming from the Cayley-Menger determinant:

Proposition 1 (Cayley-Menger determinant) If {A,B,C,D} are four points in the plane, then the Cayley-Menger determinant

\displaystyle  \mathrm{det} \begin{pmatrix} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & |AB|^2 & |AC|^2 & |AD|^2 \\ 1 & |AB|^2 & 0 & |BC|^2 & |BD|^2 \\ 1 & |AC|^2 & |BC|^2 & 0 & |CD|^2 \\ 1 & |AD|^2 & |BD|^2 & |CD|^2 & 0 \end{pmatrix} \ \ \ \ \ (1)

vanishes.

Proof: If we view {A,B,C,D} as vectors in {{\bf R}^2}, then we have the usual cosine rule {|AB|^2 = |A|^2 + |B|^2 - 2 A \cdot B}, and similarly for all the other distances. The {5 \times 5} matrix appearing in (1) can then be written as {M+M^T-2\tilde G}, where {M} is the matrix

\displaystyle  M := \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 1 & |A|^2 & |B|^2 & |C|^2 & |D|^2 \\ 1 & |A|^2 & |B|^2 & |C|^2 & |D|^2 \\ 1 & |A|^2 & |B|^2 & |C|^2 & |D|^2 \\ 1 & |A|^2 & |B|^2 & |C|^2 & |D|^2 \end{pmatrix}

and {\tilde G} is the (augmented) Gram matrix

\displaystyle  \tilde G := \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & A \cdot A & A \cdot B & A \cdot C & A \cdot D \\ 0 & B \cdot A & B \cdot B & B \cdot C & B \cdot D \\ 0 & C \cdot A & C \cdot B & C \cdot C & C \cdot D \\ 0 & D \cdot A & D \cdot B & D \cdot C & D \cdot D \end{pmatrix}.

The matrix {M} is a rank one matrix, and so {M^T} is also. The Gram matrix {\tilde G} factorises as {\tilde G = \tilde \Sigma \tilde \Sigma^T}, where {\tilde \Sigma} is the {5 \times 2} matrix with rows {0,A,B,C,D}, and thus has rank at most {2}. Therefore the matrix {M+M^T-2\tilde G} in (1) has rank at most {1+1+2=4}, and hence has determinant zero as claimed. \Box

For instance, if we know that {|AB|=|AC|=|DB|=|DC|=1} and {|BC|=\sqrt{2}}, then in order for {A,B,C,D} to be coplanar, the remaining distance {|AD|} has to obey the equation

\displaystyle  \mathrm{det} \begin{pmatrix} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & |AD|^2 \\ 1 & 1 & 0 & 2 & 1 \\ 1 & 1 & 2 & 0 & 1 \\ 1 & |AD|^2 & 1 & 1 & 0 \end{pmatrix} = 0.

After some calculation the left-hand side simplifies to {-4 |AD|^4 + 8 |AD|^2}, so the non-negative quantity {|AD|} is constrained to equal either {0} or {\sqrt{2}}. The former happens when {A,B,C} form a unit right-angled triangle with right angle at {A} and {D=A}; the latter happens when {A,B,D,C} form the vertices of a unit square traversed in that order. Any other value for {|AD|} is not compatible with the hypothesis that {A,B,C,D} lie in a plane; hence the Cayley-Menger determinant can be used as a test for planarity.
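This determinant expansion is easily confirmed by computer algebra; for instance, the following short Python sketch (writing {x} for {|AD|^2}) reproduces it:

```python
import sympy as sp

x = sp.Symbol('x')  # x stands for |AD|^2
M = sp.Matrix([
    [0, 1, 1, 1, 1],
    [1, 0, 1, 1, x],
    [1, 1, 0, 2, 1],
    [1, 1, 2, 0, 1],
    [1, x, 1, 1, 0]])
print(sp.expand(M.det()))  # -4*x**2 + 8*x, i.e. -4|AD|^4 + 8|AD|^2
```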

Now suppose that we have four points {A,B,C,D} on a sphere {S_R} of radius {R}, with six distances {|AB|_R, |AC|_R, |AD|_R, |BC|_R, |BD|_R, |CD|_R} now measured as lengths of arcs on the sphere. There is a spherical analogue of the Cayley-Menger determinant:

Proposition 2 (Spherical Cayley-Menger determinant) If {A,B,C,D} are four points on a sphere {S_R} of radius {R} in {{\bf R}^3}, then the spherical Cayley-Menger determinant

\displaystyle  \mathrm{det} \begin{pmatrix} 1 & \cos \frac{|AB|_R}{R} & \cos \frac{|AC|_R}{R} & \cos \frac{|AD|_R}{R} \\ \cos \frac{|AB|_R}{R} & 1 & \cos \frac{|BC|_R}{R} & \cos \frac{|BD|_R}{R} \\ \cos \frac{|AC|_R}{R} & \cos \frac{|BC|_R}{R} & 1 & \cos \frac{|CD|_R}{R} \\ \cos \frac{|AD|_R}{R} & \cos \frac{|BD|_R}{R} & \cos \frac{|CD|_R}{R} & 1 \end{pmatrix} \ \ \ \ \ (2)

vanishes.

Proof: We can assume that the sphere {S_R} is centred at the origin of {{\bf R}^3}, and view {A,B,C,D} as vectors in {{\bf R}^3} of magnitude {R}. The angle subtended by {AB} from the origin is {|AB|_R/R}, so by the cosine rule we have

\displaystyle  A \cdot B = R^{2} \cos \frac{|AB|_R}{R}.

Similarly for all the other inner products. Thus the matrix in (2) can be written as {R^{-2} G}, where {G} is the Gram matrix

\displaystyle  G := \begin{pmatrix} A \cdot A & A \cdot B & A \cdot C & A \cdot D \\ B \cdot A & B \cdot B & B \cdot C & B \cdot D \\ C \cdot A & C \cdot B & C \cdot C & C \cdot D \\ D \cdot A & D \cdot B & D \cdot C & D \cdot D \end{pmatrix}.

We can factor {G = \Sigma \Sigma^T} where {\Sigma} is the {4 \times 3} matrix with rows {A,B,C,D}. Thus {R^{-2} G} has rank at most {3} and thus the determinant vanishes as required. \Box

Just as the Cayley-Menger determinant can be used to test for coplanarity, the spherical Cayley-Menger determinant can be used to test for lying on a sphere of radius {R}. For instance, if we know that {A,B,C,D} lie on {S_R} and {|AB|_R, |AC|_R, |BC|_R, |BD|_R, |CD|_R} are all equal to {\pi R/2}, then the above proposition gives

\displaystyle  \mathrm{det} \begin{pmatrix} 1 & 0 & 0 & \cos \frac{|AD|_R}{R} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \cos \frac{|AD|_R}{R} & 0 & 0 & 1 \end{pmatrix} = 0.

The left-hand side evaluates to {1 - \cos^2 \frac{|AD|_R}{R}}; as {|AD|_R} lies between {0} and {\pi R}, the only choices for this distance are then {0} and {\pi R}. The former happens for instance when {A = (R,0,0)} is at the north pole, {B = (0,R,0)} and {C = (0,0,R)} are points on the equator with longitudes differing by 90 degrees, and {D=(R,0,0)} is also at the north pole; the latter occurs when {D=(-R,0,0)} is instead placed at the south pole.

The Cayley-Menger and spherical Cayley-Menger determinants look slightly different from each other, but one can transform the latter into something resembling the former by row and column operations. Indeed, the determinant (2) can be rewritten as

\displaystyle  \mathrm{det} \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 - \cos \frac{|AB|_R}{R} & 1 - \cos \frac{|AC|_R}{R} & 1 - \cos \frac{|AD|_R}{R} \\ 1 & 1-\cos \frac{|AB|_R}{R} & 0 & 1-\cos \frac{|BC|_R}{R} & 1- \cos \frac{|BD|_R}{R} \\ 1 & 1-\cos \frac{|AC|_R}{R} & 1-\cos \frac{|BC|_R}{R} & 0 & 1-\cos \frac{|CD|_R}{R} \\ 1 & 1-\cos \frac{|AD|_R}{R} & 1-\cos \frac{|BD|_R}{R} & 1- \cos \frac{|CD|_R}{R} & 0 \end{pmatrix}

and by further row and column operations, this determinant vanishes if and only if the determinant

\displaystyle  \mathrm{det} \begin{pmatrix} \frac{1}{2R^2} & 1 & 1 & 1 & 1 \\ 1 & 0 & f_R(|AB|_R) & f_R(|AC|_R) & f_R(|AD|_R) \\ 1 & f_R(|AB|_R) & 0 & f_R(|BC|_R) & f_R(|BD|_R) \\ 1 & f_R(|AC|_R) & f_R(|BC|_R) & 0 & f_R(|CD|_R) \\ 1 & f_R(|AD|_R) & f_R(|BD|_R) & f_R(|CD|_R) & 0 \end{pmatrix} \ \ \ \ \ (3)

vanishes, where {f_R(x) := 2R^2 (1-\cos \frac{x}{R})}. In the limit {R \rightarrow \infty} (so that the curvature of the sphere {S_R} tends to zero), {|AB|_R} tends to {|AB|}, and by Taylor expansion {f_R(|AB|_R)} tends to {|AB|^2}; similarly for the other distances. Now we see that the planar Cayley-Menger determinant emerges as the limit of (3) as {R \rightarrow \infty}, as would be expected from the intuition that a plane is essentially a sphere of infinite radius.

In principle, one can now estimate the radius {R} of the Earth (assuming that it is either a sphere {S_R} or a flat plane {S_\infty}) if one is given the six distances {|AB|_R, |AC|_R, |AD|_R, |BC|_R, |BD|_R, |CD|_R} between four points {A,B,C,D} on the Earth. Of course, if one wishes to do so, one should have {A,B,C,D} rather far apart from each other, since otherwise it would be difficult, for instance, to distinguish the round Earth from a flat one. As an experiment, and just for fun, I wanted to see how accurate this would be with some real world data. I decided to take {A}, {B}, {C}, {D} to be the cities of London, Los Angeles, Tokyo, and Dubai respectively. As an initial test, I used distances from this online flight calculator, measured in kilometers:

\displaystyle  |AB|_R = 8790 \mathrm{km}

\displaystyle  |AC|_R = 9597 \mathrm{km}

\displaystyle  |AD|_R = 5488\mathrm{km}

\displaystyle  |BC|_R = 8849\mathrm{km}

\displaystyle  |BD|_R = 13435\mathrm{km}

\displaystyle  |CD|_R = 7957\mathrm{km}.

Given that the true radius of the Earth is about {R_0 := 6371 \mathrm{km}}, I chose the change of variables {R = R_0/k} (so that {k=1} corresponds to the round Earth model with the commonly accepted value for the Earth’s radius, and {k=0} corresponds to the flat Earth), and obtained the following plot for (3):

In particular, the determinant does indeed come very close to vanishing when {k=1}, which is unsurprising since, as explained on the web site, the online flight calculator uses a model in which the Earth is an ellipsoid of radii close to {6371} km. There is another radius that would also be compatible with this data at {k\approx 1.3} (corresponding to an Earth of radius about {4900} km), but presumably one could rule this out as a spurious coincidence by experimenting with other quadruples of cities than the ones I selected. On the other hand, these distances are highly incompatible with the flat Earth model {k=0}; one could also see this with a piece of paper and a ruler by trying to lay down four points {A,B,C,D} on the paper with (an appropriately rescaled) version of the above distances (e.g., with {|AB| = 8.790 \mathrm{cm}}, {|AC| = 9.597 \mathrm{cm}}, etc.).
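For readers who would like to reproduce this experiment, here is a minimal numerical sketch of the above computation in Python (the distances are the ones quoted above; normalising the determinant by its flat-Earth value is purely a presentational choice of mine to keep the output readable):

```python
import numpy as np

R0 = 6371.0   # commonly accepted Earth radius, in km
# great-circle distances (km): London (A), Los Angeles (B), Tokyo (C), Dubai (D)
dAB, dAC, dAD, dBC, dBD, dCD = 8790.0, 9597.0, 5488.0, 8849.0, 13435.0, 7957.0

def det3(k):
    """Evaluate the determinant (3) with R = R0/k (k = 0 is the flat-Earth limit)."""
    if k == 0:
        f = lambda x: x**2               # limit of f_R(x) = 2R^2(1 - cos(x/R)) as R -> oo
        corner = 0.0                     # limit of 1/(2R^2)
    else:
        R = R0 / k
        f = lambda x: 2*R**2*(1 - np.cos(x/R))
        corner = 1/(2*R**2)
    M = np.array([
        [corner, 1,       1,       1,       1      ],
        [1,      0,       f(dAB),  f(dAC),  f(dAD) ],
        [1,      f(dAB),  0,       f(dBC),  f(dBD) ],
        [1,      f(dAC),  f(dBC),  0,       f(dCD) ],
        [1,      f(dAD),  f(dBD),  f(dCD),  0      ]])
    return np.linalg.det(M)

for k in (0.0, 0.5, 1.0, 1.3, 2.0):
    print(k, det3(k) / abs(det3(0.0)))   # nearly vanishes near k = 1 (and again near k = 1.3)
```

The same sketch adapts to the flight-time data below after replacing the six distances and {R_0} with their flight-time counterparts.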

If instead one goes to the flight time calculator and uses flight travel times instead of distances, one now gets the following data (measured in hours):

\displaystyle  |AB|_R = 10\mathrm{h}\ 3\mathrm{m}

\displaystyle  |AC|_R = 11\mathrm{h}\ 49\mathrm{m}

\displaystyle  |AD|_R = 6\mathrm{h}\ 56\mathrm{m}

\displaystyle  |BC|_R = 10\mathrm{h}\ 56\mathrm{m}

\displaystyle  |BD|_R = 16\mathrm{h}\ 23\mathrm{m}

\displaystyle  |CD|_R = 9\mathrm{h}\ 52\mathrm{m}.

Assuming that planes travel at about {800} kilometers per hour, the true radius of the Earth corresponds to about {R_1 := 8\mathrm{h}} of flight time. If one then uses the normalisation {R = R_1/k}, one obtains the following plot:

Not too surprisingly, this is basically a rescaled version of the previous plot, with vanishing near {k=1} and at {k=1.3}. (The website for the flight calculator does say it calculates short and long haul flight times slightly differently, which may be the cause of the slight discrepancies between this figure and the previous one.)

Of course, these two data sets are “cheating” since they come from a model which already presupposes what the radius of the Earth is. But one can input real world flight times between these four cities instead of the above idealised data. Here one runs into the issue that the flight time from {A} to {B} is not necessarily the same as that from {B} to {A} due to such factors as windspeed. For instance, I looked up the online flight time from Tokyo to Dubai to be 11 hours and 10 minutes, whereas the online flight time from Dubai to Tokyo was 9 hours and 50 minutes. The simplest thing to do here is take an arithmetic mean of the two times as a preliminary estimate for the flight time without windspeed factors, thus for instance the Tokyo-Dubai flight time would now be 10 hours and 30 minutes, and more generally

\displaystyle  |AB|_R = 10\mathrm{h}\ 47\mathrm{m}

\displaystyle  |AC|_R = 12\mathrm{h}\ 0\mathrm{m}

\displaystyle  |AD|_R = 7\mathrm{h}\ 17\mathrm{m}

\displaystyle  |BC|_R = 10\mathrm{h}\ 50\mathrm{m}

\displaystyle  |BD|_R = 15\mathrm{h}\ 55\mathrm{m}

\displaystyle  |CD|_R = 10\mathrm{h}\ 30\mathrm{m}.

This data is not too far off from the online calculator data, but it does distort the graph slightly (taking {R=R_1/k} as before):

Now one gets estimates for the radius of the Earth that are off by about a factor of {2} from the truth, although the round Earth model {k=1} is still twice as accurate as the flat Earth model {k=0}.

Given that windspeed should additively affect flight velocity rather than flight time, and the two are inversely proportional to each other, it is more natural to take a harmonic mean rather than an arithmetic mean. This gives the slightly different values

\displaystyle  |AB|_R = 10\mathrm{h}\ 51\mathrm{m}

\displaystyle  |AC|_R = 11\mathrm{h}\ 59\mathrm{m}

\displaystyle  |AD|_R = 7\mathrm{h}\ 16\mathrm{m}

\displaystyle  |BC|_R = 10\mathrm{h}\ 46\mathrm{m}

\displaystyle  |BD|_R = 15\mathrm{h}\ 54\mathrm{m}

\displaystyle  |CD|_R = 10\mathrm{h}\ 27\mathrm{m}

but one still gets essentially the same plot:

So the inaccuracies are presumably coming from some other source. (Note for instance that the true flight time from Tokyo to Dubai is about {6\%} greater than the calculator predicts, while the flight time from LA to Dubai is about {3\%} less; these sorts of errors seem to pile up in this calculation.) Nevertheless, it does seem that flight time data is (barely) enough to establish the roundness of the Earth and obtain a somewhat ballpark estimate for its radius. (I assume that the fit would be better if one could include some Southern Hemisphere cities such as Sydney or Santiago, but I was not able to find a good quadruple of widely spaced cities on both hemispheres for which there were direct flights between all six pairs.)

This is another sequel to a recent post in which I showed the Riemann zeta function {\zeta} can be locally approximated by a polynomial, in the sense that for randomly chosen {t \in [T,2T]} one has an approximation

\displaystyle  \zeta(\frac{1}{2} + it - \frac{2\pi i z}{\log T}) \approx P_t( e^{2\pi i z/N} ) \ \ \ \ \ (1)

where {N} grows slowly with {T}, and {P_t} is a polynomial of degree {N}. It turns out that in the function field setting there is an exact version of this approximation which captures many of the known features of the Riemann zeta function, namely Dirichlet {L}-functions for a random character of given modulus over a function field. This model was (essentially) studied in a fairly recent paper by Andrade, Miller, Pratt, and Trinh; I am not sure if there is any further literature on this model beyond this paper (though the number field analogue of low-lying zeroes of Dirichlet {L}-functions is certainly well studied). In this model it is possible to set {N} fixed and let {T} go to infinity, thus providing a simple finite-dimensional model problem for problems involving the statistics of zeroes of the zeta function.

In this post I would like to record this analogue precisely. We will need a finite field {{\mathbb F}} of some order {q} and a natural number {N}, and set

\displaystyle  T := q^{N+1}.

We will primarily think of {q} as being large and {N} as being either fixed or growing very slowly with {q}, though it is possible to also consider other asymptotic regimes (such as holding {q} fixed and letting {N} go to infinity). Let {{\mathbb F}[X]} be the ring of polynomials of one variable {X} with coefficients in {{\mathbb F}}, and let {{\mathbb F}[X]'} be the multiplicative semigroup of monic polynomials in {{\mathbb F}[X]}; one should view {{\mathbb F}[X]} and {{\mathbb F}[X]'} as the function field analogue of the integers and natural numbers respectively. We use the valuation {|n| := q^{\mathrm{deg}(n)}} for polynomials {n \in {\mathbb F}[X]} (with {|0|=0}); this is the analogue of the usual absolute value on the integers. We select an irreducible polynomial {Q \in {\mathbb F}[X]} of size {|Q|=T} (i.e., {Q} has degree {N+1}). The multiplicative group {({\mathbb F}[X]/Q{\mathbb F}[X])^\times} can be shown to be cyclic of order {|Q|-1=T-1}. A Dirichlet character of modulus {Q} is a completely multiplicative function {\chi: {\mathbb F}[X] \rightarrow {\bf C}} that is periodic of period {Q} and vanishes on those {n \in {\mathbb F}[X]} not coprime to {Q}. From Fourier analysis we see that there are exactly {\phi(Q) := |Q|-1} Dirichlet characters of modulus {Q}. A Dirichlet character is said to be odd if it is not identically one on the group {{\mathbb F}^\times} of non-zero constants; there are only {\frac{1}{q-1} \phi(Q)} non-odd characters (including the principal character), so in the limit {q \rightarrow \infty} most Dirichlet characters are odd. We will work primarily with odd characters in order to be able to ignore the effect of the place at infinity.

Let {\chi} be an odd Dirichlet character of modulus {Q}. The Dirichlet {L}-function {L(s, \chi)} is then defined (for {s \in {\bf C}} of sufficiently large real part, at least) as

\displaystyle  L(s,\chi) := \sum_{n \in {\mathbb F}[X]'} \frac{\chi(n)}{|n|^s}

\displaystyle  = \sum_{m=0}^\infty q^{-sm} \sum_{n \in {\mathbb F}[X]': |n| = q^m} \chi(n).

Note that for {m \geq N+1}, the set {n \in {\mathbb F}[X]': |n| = q^m} is invariant under shifts {h} whenever {|h| < T}; since this covers a full set of residue classes of {{\mathbb F}[X]/Q{\mathbb F}[X]}, and the odd character {\chi} has mean zero on this set of residue classes, we conclude that the sum {\sum_{n \in {\mathbb F}[X]': |n| = q^m} \chi(n)} vanishes for {m \geq N+1}. In particular, the {L}-function is entire, and for any real number {t} and complex number {z}, we can write the {L}-function as a polynomial

\displaystyle  L(\frac{1}{2} + it - \frac{2\pi i z}{\log T},\chi) = P(Z) = P_{t,\chi}(Z) := \sum_{m=0}^N c^1_m(t,\chi) Z^m

where {Z := e(z/N) = e^{2\pi i z/N}} and the coefficients {c^1_m = c^1_m(t,\chi)} are given by the formula

\displaystyle  c^1_m(t,\chi) := q^{-m/2-imt} \sum_{n \in {\mathbb F}[X]': |n| = q^m} \chi(n).

Note that {t} can easily be normalised to zero by the relation

\displaystyle  P_{t,\chi}(Z) = P_{0,\chi}( q^{-it} Z ). \ \ \ \ \ (2)

In particular, the dependence on {t} is periodic with period {\frac{2\pi}{\log q}} (so by abuse of notation one could also take {t} to be an element of {{\bf R}/\frac{2\pi}{\log q}{\bf Z}}).

Fourier inversion yields a functional equation for the polynomial {P}:

Proposition 1 (Functional equation) Let {\chi} be an odd Dirichlet character of modulus {Q}, and {t \in {\bf R}}. There exists a phase {e(\theta)} (depending on {t,\chi}) such that

\displaystyle  c^1_{N-m} = e(\theta) \overline{c^1_m}

for all {0 \leq m \leq N}, or equivalently that

\displaystyle  P(1/Z) = e(\theta) Z^{-N} \overline{P}(Z)

where {\overline{P}(Z) := \overline{P(\overline{Z})}}.

Proof: We can normalise {t=0}. Let {G} be the finite field {{\mathbb F}[X] / Q {\mathbb F}[X]}. We can write

\displaystyle  c^1_{N-m} = q^{-(N-m)/2} \sum_{n \in X^{N-m} + H_{N-m}} \chi(n)

where {H_j} denotes the subgroup of {G} consisting of (residue classes of) polynomials of degree less than {j}. Let {e_G: G \rightarrow S^1} be a non-trivial character of {G} that is trivial on the subgroup {H_N} (this is easily achieved by pulling back a non-trivial character from the quotient {G/H_N \equiv {\mathbb F}}). We can use the Fourier inversion formula to write

\displaystyle  c^1_{N-m} = q^{(m-N)/2} \sum_{\xi \in G} \hat \chi(\xi) \sum_{n \in X^{N-m} + H_{N-m}} e_G( n\xi )

where

\displaystyle  \hat \chi(\xi) := q^{-N-1} \sum_{n \in G} \chi(n) e_G(-n\xi).

From change of variables we see that {\hat \chi} is a scalar multiple of {\overline{\chi}}; from Plancherel we conclude that

\displaystyle  \hat \chi = e(\theta_0) q^{-(N+1)/2} \overline{\chi} \ \ \ \ \ (3)

for some phase {e(\theta_0)}. We conclude that

\displaystyle  c^1_{N-m} = e(\theta_0) q^{-(2N-m+1)/2} \sum_{\xi \in G} \overline{\chi}(\xi) e_G( X^{N-m} \xi) \sum_{n \in H_{N-m}} e_G( n\xi ). \ \ \ \ \ (4)

The inner sum {\sum_{n \in H_{N-m}} e_G( n\xi )} equals {q^{N-m}} if {\xi \in H_{m+1}}, and vanishes otherwise, thus

\displaystyle c^1_{N-m} = e(\theta_0) q^{-(m+1)/2} \sum_{\xi \in H_{m+1}} \overline{\chi}(\xi) e_G( X^{N-m} \xi).

For {\xi} in {H_m}, {e_G(X^{N-m} \xi)=1}, and the contribution of this portion of the sum vanishes as {\chi} is odd. Thus we may restrict {\xi} to {H_{m+1} \backslash H_m}, so that

\displaystyle c^1_{N-m} = e(\theta_0) q^{-(m+1)/2} \sum_{h \in {\mathbb F}^\times} e_G( X^{N} h) \sum_{\xi \in h X^m + H_{m}} \overline{\chi}(\xi).

By the multiplicativity of {\chi}, this factorises as

\displaystyle c^1_{N-m} = e(\theta_0) q^{-(m+1)/2} (\sum_{h \in {\mathbb F}^\times} \overline{\chi}(h) e_G( X^{N} h)) (\sum_{\xi \in X^m + H_{m}} \overline{\chi}(\xi)).

From the one-dimensional version of (3) (and the fact that {\chi} is odd) we have

\displaystyle  \sum_{h \in {\mathbb F}^\times} \overline{\chi}(h) e_G( X^{N} h) = e(\theta_1) q^{1/2}

for some phase {e(\theta_1)}. The claim follows. \Box

As one corollary of the functional equation, {c^1_N} is a phase rotation of {\overline{c^1_0} = 1} and thus is non-zero, so {P} has degree exactly {N}. The functional equation is then equivalent to the {N} zeroes of {P} being symmetric across the unit circle. In fact we have the stronger

Theorem 2 (Riemann hypothesis for Dirichlet {L}-functions over function fields) Let {\chi} be an odd Dirichlet character of modulus {Q}, and {t \in {\bf R}}. Then all the zeroes of {P} lie on the unit circle.

We derive this result from the Riemann hypothesis for curves over function fields below the fold.

In view of this theorem (and the fact that {c^1_0=1}), we may write

\displaystyle  P(Z) = \mathrm{det}(1 - ZU)

for some unitary {N \times N} matrix {U = U_{t,\chi}}. It is possible to interpret {U} as the action of the geometric Frobenius map on a certain cohomology group, but we will not do so here. The situation here is simpler than in the number field case because the factor {\exp(A)} arising from very small primes is now absent (in the function field setting there are no primes of size between {1} and {q}).
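As a sanity check, one can verify all of this numerically in a toy case. The following sketch uses the (hypothetical, and arbitrarily chosen) parameters {q=3}, {N=1}, {Q = X^2+1}, with an ad hoc encoding of {{\mathbb F}_3[X]/Q{\mathbb F}_3[X]} as pairs {(a,b) = a+bX}; it enumerates the four odd characters of this modulus and confirms that the zero of {P(Z) = 1 + c^1_1 Z} lies on the unit circle (equivalently, that {|c^1_1|=1}), as Theorem 2 predicts:

```python
import cmath

q = 3                          # toy field F_3; Q = X^2 + 1 is irreducible over F_3
def mul(u, v):                 # multiply a+bX by c+dX modulo X^2 + 1 over F_3
    (a, b), (c, d) = u, v
    return ((a*c - b*d) % q, (a*d + b*c) % q)

units = [(a, b) for a in range(q) for b in range(q) if (a, b) != (0, 0)]

def order(u):                  # multiplicative order in the unit group (cyclic of order 8)
    x, n = u, 1
    while x != (1, 0):
        x, n = mul(x, u), n + 1
    return n

g = next(u for u in units if order(u) == len(units))   # a generator of the unit group

dlog = {}                      # discrete logarithms to base g
x, k = (1, 0), 0
while x not in dlog:
    dlog[x], x, k = k, mul(x, g), k + 1

# chi_j(g^k) = e(jk/8); chi_j is odd iff it is nontrivial on the constants,
# and since 2 = g^4 (the unique element of order 2), this happens exactly for odd j
for j in (1, 3, 5, 7):
    chi = {u: cmath.exp(2*cmath.pi*1j*j*dlog[u]/8) for u in units}
    c1 = sum(chi[(a, 1)] for a in range(q)) / q**0.5   # c^1_1, summing over monic X + a
    print(j, abs(c1))          # prints 1.0 (up to rounding) for each odd character
```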

We now let {\chi} vary uniformly at random over all odd characters of modulus {Q}, and {t} uniformly over {{\bf R}/\frac{2\pi}{\log q}{\bf Z}}, independently of {\chi}; we also make the distribution of the random variable {U} conjugation invariant in {U(N)}. We use {{\mathbf E}_Q} to denote the expectation with respect to this randomness. One can then ask what the limiting distribution of {U} is in various regimes; we will focus in this post on the regime where {N} is fixed and {q} is being sent to infinity. In the spirit of the Sato-Tate conjecture, one should expect {U} to converge in distribution to the circular unitary ensemble (CUE), that is to say Haar probability measure on {U(N)}. This may well be provable from Deligne’s “Weil II” machinery (in the spirit of this monograph of Katz and Sarnak), though I do not know how feasible this is or whether it has already been done in the literature; here we shall avoid using this machinery and study what partial results towards this CUE hypothesis one can make without it.

If one lets {\lambda_1,\dots,\lambda_N} be the eigenvalues of {U} (ordered arbitrarily), then we now have

\displaystyle  \sum_{m=0}^N c^1_m Z^m = P(Z) = \prod_{j=1}^N (1 - \lambda_j Z)

and hence the {c^1_m} are essentially elementary symmetric polynomials of the eigenvalues:

\displaystyle  c^1_m = (-1)^m e_m( \lambda_1,\dots,\lambda_N). \ \ \ \ \ (5)

One can take log derivatives to conclude

\displaystyle  \frac{P'(Z)}{P(Z)} = -\sum_{j=1}^N \frac{\lambda_j}{1-\lambda_j Z}.

On the other hand, as in the number field case one has the Dirichlet series expansion

\displaystyle  Z \frac{P'(Z)}{P(Z)} = \sum_{n \in {\mathbb F}[X]'} \frac{\Lambda_q(n) \chi(n)}{|n|^s}

where {s = \frac{1}{2} + it - \frac{2\pi i z}{\log T}} has sufficiently large real part, {Z = e(z/N)}, and the von Mangoldt function {\Lambda_q(n)} is defined as {\log_q |p| = \mathrm{deg} p} when {n} is the power of an irreducible {p} and {0} otherwise. We conclude the “explicit formula”

\displaystyle  c^{\Lambda_q}_m = -\sum_{j=1}^N \lambda_j^m = -\mathrm{tr}(U^m) \ \ \ \ \ (6)

for {m \geq 1}, where

\displaystyle  c^{\Lambda_q}_m := q^{-m/2-imt} \sum_{n \in {\mathbb F}[X]': |n| = q^m} \Lambda_q(n) \chi(n).

Similarly on inverting {P(Z)} we have

\displaystyle  P(Z)^{-1} = \prod_{j=1}^N (1 - \lambda_j Z)^{-1}.

Since we also have

\displaystyle  P(Z)^{-1} = \sum_{n \in {\mathbb F}[X]'} \frac{\mu(n) \chi(n)}{|n|^s}

for {s} sufficiently large real part, where the Möbius function {\mu(n)} is equal to {(-1)^k} when {n} is the product of {k} distinct irreducibles, and {0} otherwise, we conclude that the Möbius coefficients

\displaystyle  c^\mu_m := q^{-m/2-imt} \sum_{n \in {\mathbb F}[X]': |n| = q^m} \mu(n) \chi(n)

are just the complete homogeneous symmetric polynomials of the eigenvalues:

\displaystyle  c^\mu_m = h_m( \lambda_1,\dots,\lambda_N). \ \ \ \ \ (7)

One can then derive various algebraic relationships between the coefficients {c^1_m, c^{\Lambda_q}_m, c^\mu_m} from various identities involving symmetric polynomials, but we will not do so here.

What do we know about the distribution of {U}? By construction, it is conjugation-invariant; from (2) it is also invariant with respect to the rotations {U \rightarrow e^{i\theta} U} for any phase {\theta \in{\bf R}}. We also have the function field analogue of the Rudnick-Sarnak asymptotics:

Proposition 3 (Rudnick-Sarnak asymptotics) Let {a_1,\dots,a_k,b_1,\dots,b_k} be nonnegative integers. If

\displaystyle  \sum_{j=1}^k j a_j \leq N, \ \ \ \ \ (8)

then the moment

\displaystyle  {\bf E}_{Q} \prod_{j=1}^k (\mathrm{tr} U^j)^{a_j} (\overline{\mathrm{tr} U^j})^{b_j} \ \ \ \ \ (9)

is equal to {o(1)} in the limit {q \rightarrow \infty} (holding {N,a_1,\dots,a_k,b_1,\dots,b_k} fixed) unless {a_j=b_j} for all {j}, in which case it is equal to

\displaystyle  \prod_{j=1}^k j^{a_j} a_j! + o(1). \ \ \ \ \ (10)

Comparing this with Proposition 1 from this previous post, we thus see that all the low moments of {U} are consistent with the CUE hypothesis (and also with the ACUE hypothesis, again by the previous post). The case {\sum_{j=1}^k a_j + \sum_{j=1}^k b_j \leq 2} of this proposition was essentially established by Andrade, Miller, Pratt, and Trinh.

Proof: We may assume the homogeneity relationship

\displaystyle  \sum_{j=1}^k j a_j = \sum_{j=1}^k j b_j \ \ \ \ \ (11)

since otherwise the claim follows from the invariance under phase rotation {U \mapsto e^{i\theta} U}. By (6), the expression (9) is equal to

\displaystyle  (-1)^{l+l'} q^{-D} {\bf E}_Q \sum_{n_1,\dots,n_l,n'_1,\dots,n'_{l'} \in {\mathbb F}[X]': |n_i| = q^{s_i}, |n'_i| = q^{s'_i}} (\prod_{i=1}^l \Lambda_q(n_i) \chi(n_i)) \prod_{i=1}^{l'} \Lambda_q(n'_i) \overline{\chi(n'_i)}

where

\displaystyle  D := \sum_{j=1}^k j a_j = \sum_{j=1}^k j b_j

\displaystyle  l := \sum_{j=1}^k a_j

\displaystyle  l' := \sum_{j=1}^k b_j

and {s_1 \leq \dots \leq s_l} consists of {a_j} copies of {j} for each {j=1,\dots,k}, and similarly {s'_1 \leq \dots \leq s'_{l'}} consists of {b_j} copies of {j} for each {j=1,\dots,k}.

The polynomials {n_1 \dots n_l} and {n'_1 \dots n'_{l'}} are monic of degree {D}, which by hypothesis is less than the degree of {Q}, and thus they can only be scalar multiples of each other in {{\mathbb F}[X] / Q {\mathbb F}[X]} if they are identical (in {{\mathbb F}[X]}). As such, we see that the average

\displaystyle  {\bf E}_Q \chi(n_1) \dots \chi(n_l) \overline{\chi(n'_1)} \dots \overline{\chi(n'_{l'})}

vanishes unless {n_1 \dots n_l = n'_1 \dots n'_{l'}}, in which case this average is equal to {1}. Thus the expression (9) simplifies to

\displaystyle  (-1)^{l+l'} q^{-D} \sum_{n_1,\dots,n_l,n'_1,\dots,n'_{l'}: |n_i| = q^{s_i}, |n'_i| = q^{s'_i}; n_1 \dots n_l = n'_1 \dots n'_{l'}} (\prod_{i=1}^l \Lambda_q(n_i)) \prod_{i=1}^{l'} \Lambda_q(n'_i).

There are at most {q^D} choices for the product {n_1 \dots n_l}, and each one contributes {O_D(1)} to the above sum. All but {o(q^D)} of these choices are square-free, so by accepting an error of {o(1)}, we may restrict attention to square-free {n_1 \dots n_l}. This forces {n_1,\dots,n_l,n'_1,\dots,n'_{l'}} to all be irreducible (as opposed to powers of irreducibles); as {{\mathbb F}[X]} is a unique factorisation domain, this forces {l=l'} (so that the sign {(-1)^{l+l'}} is {+1}) and {n_1,\dots,n_l} to be a permutation of {n'_1,\dots,n'_{l'}}. By the size restrictions, this then forces {a_j = b_j} for all {j} (if the above expression is to be anything other than {o(1)}), and each {n_1,\dots,n_l} is associated to {\prod_{j=1}^k a_j!} possible choices of {n'_1,\dots,n'_{l'}}. Writing {\Lambda_q(n'_i) = s'_i} (so that {\prod_{i=1}^{l'} \Lambda_q(n'_i) = \prod_{j=1}^k j^{a_j}}) and then reinstating the non-squarefree possibilities for {n_1 \dots n_l}, we can thus write the above expression as

\displaystyle  q^{-D} \prod_{j=1}^k j^{a_j} a_j! \sum_{n_1,\dots,n_l \in {\mathbb F}[X]': |n_i| = q^{s_i}} \prod_{i=1}^l \Lambda_q(n_i) + o(1).

Using the prime number theorem {\sum_{n \in {\mathbb F}[X]': |n| = q^s} \Lambda_q(n) = q^s}, we obtain the claim. \Box

Comparing this with Proposition 1 from this previous post, we thus see that all the low moments of {U} are consistent with the CUE and ACUE hypotheses:

Corollary 4 (CUE statistics at low frequencies) Let {\lambda_1,\dots,\lambda_N} be the eigenvalues of {U}, permuted uniformly at random. Let {R(\lambda)} be a linear combination of monomials {\lambda_1^{a_1} \dots \lambda_N^{a_N}} where {a_1,\dots,a_N} are integers with either {\sum_{j=1}^N a_j \neq 0} or {\sum_{j=1}^N |a_j| \leq 2N}. Then

\displaystyle  {\bf E}_Q R(\lambda) = {\bf E}_{CUE} R(\lambda) + o(1).

The analogue of the GUE hypothesis in this setting would be the CUE hypothesis, which asserts that the threshold {2N} here can be replaced by an arbitrarily large quantity. As far as I know this is not known even for {2N+2} (though, as mentioned previously, in principle one may be able to resolve such cases using Deligne’s proof of the Riemann hypothesis for function fields). Among other things, this would allow one to distinguish CUE from ACUE, since as discussed in the previous post, these two distributions agree when tested against monomials up to threshold {2N}, though not to {2N+2}.

Proof: By permutation symmetry we can take {R} to be symmetric, and by linearity we may then take {R} to be the symmetrisation of a single monomial {\lambda_1^{a_1} \dots \lambda_N^{a_N}}. If {\sum_{j=1}^N a_j \neq 0} then both expectations vanish due to the phase rotation symmetry, so we may assume that {\sum_{j=1}^N a_j = 0} and {\sum_{j=1}^N |a_j| \leq 2N}. We can write this symmetric polynomial as a constant multiple of {\mathrm{tr}(U^{a_1}) \dots \mathrm{tr}(U^{a_N})} plus other monomials with a smaller value of {\sum_{j=1}^N |a_j|}. Since {\mathrm{tr}(U^{-a}) = \overline{\mathrm{tr}(U^a)}}, the claim now follows by induction from Proposition 3 and Proposition 1 from the previous post. \Box

Thus, for instance, for {k=1,2}, the {2k^{th}} moment

\displaystyle {\bf E}_Q |\det(1-U)|^{2k} = {\bf E}_Q |P(1)|^{2k} = {\bf E}_Q |L(\frac{1}{2} + it, \chi)|^{2k}

is equal to

\displaystyle  {\bf E}_{CUE} |\det(1-U)|^{2k} + o(1)

because all the monomials in {\prod_{j=1}^N (1-\lambda_j)^k (1-\lambda_j^{-1})^k} are of the required form when {k \leq 2}. The latter expectation can be computed exactly (for any natural number {k}) using a formula

\displaystyle  {\bf E}_{CUE} |\det(1-U)|^{2k} = \prod_{j=1}^N \frac{\Gamma(j) \Gamma(j+2k)}{\Gamma(j+k)^2}

of Baker-Forrester and Keating-Snaith, thus for instance

\displaystyle  {\bf E}_{CUE} |\det(1-U)|^2 = N+1

\displaystyle  {\bf E}_{CUE} |\det(1-U)|^4 = \frac{(N+1)(N+2)^2(N+3)}{12}

and more generally

\displaystyle  {\bf E}_{CUE}|\det(1-U)|^{2k} = \frac{g_k+o(1)}{(k^2)!} N^{k^2}

when {N \rightarrow \infty}, where {g_k} are the integers

\displaystyle  g_1 = 1, g_2 = 2, g_3 = 42, g_4 = 24024, \dots

and more generally

\displaystyle  g_k := \frac{(k^2)!}{\prod_{i=1}^{2k-1} i^{k-|k-i|}}

(OEIS A039622). Thus we have

\displaystyle {\bf E}_Q |\det(1-U)|^{2k} = \frac{g_k+o(1)}{(k^2)!} N^{k^2}

for {k=1,2} if {q \rightarrow \infty} and {N} is sufficiently slowly growing depending on {q}. The CUE hypothesis would imply that this formula also holds for higher {k}. (The situation here is cleaner than in the number field case, in which the GUE hypothesis only suggests the correct lower bound for the moments rather than an asymptotic, due to the absence of the wildly fluctuating additional factor {\exp(A)} that is present in the Riemann zeta function model.)
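All of this numerology is easily checked by machine; the following quick sketch (standard library only) evaluates the Baker-Forrester/Keating-Snaith product in exact arithmetic and the closed formula for the {g_k}:

```python
from math import factorial, prod
from fractions import Fraction

def cue_moment(N, k):
    # E_CUE |det(1-U)|^{2k} = prod_{j=1}^N Gamma(j) Gamma(j+2k) / Gamma(j+k)^2
    return prod(Fraction(factorial(j - 1)*factorial(j + 2*k - 1),
                         factorial(j + k - 1)**2) for j in range(1, N + 1))

def g(k):
    # g_k = (k^2)! / prod_{i=1}^{2k-1} i^{k-|k-i|}
    return int(Fraction(factorial(k*k),
                        prod(i**(k - abs(k - i)) for i in range(1, 2*k))))

N = 10
print(cue_moment(N, 1), N + 1)                               # both equal 11
print(cue_moment(N, 2), Fraction((N+1)*(N+2)**2*(N+3), 12))  # both equal 1716
print([g(k) for k in (1, 2, 3, 4)])                          # [1, 2, 42, 24024]
```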

Now we can recover the analogue of Montgomery’s work on the pair correlation conjecture. Consider the statistic

\displaystyle  {\bf E}_Q \sum_{1 \leq i,j \leq N} R( \lambda_i / \lambda_j )

where

\displaystyle R(z) = \sum_m \hat R(m) z^m

is some finite linear combination of monomials {z^m} independent of {q}. We can expand the above sum as

\displaystyle  \sum_m \hat R(m) {\bf E}_Q \mathrm{tr}(U^m) \mathrm{tr}(U^{-m}).

Assuming the CUE hypothesis, then by Example 3 of the previous post, we would conclude that

\displaystyle  {\bf E}_Q \sum_{1 \leq i,j \leq N} R( \lambda_i / \lambda_j ) = N^2 \hat R(0) + \sum_m \min(|m|,N) \hat R(m) + o(1). \ \ \ \ \ (12)

This is the analogue of Montgomery’s pair correlation conjecture. Proposition 3 implies that this claim is true whenever {\hat R} is supported on {[-N,N]}. If instead we assume the ACUE hypothesis (or the weaker Alternative Hypothesis that the phase gaps are non-zero multiples of {1/2N}), one should instead have

\displaystyle  {\bf E}_Q \sum_{1 \leq i,j \leq N} R( \lambda_i / \lambda_j ) = \sum_{k \in {\bf Z}} \left( N^2 \hat R(2Nk) + \sum_{1 \leq |m| \leq N} |m| \hat R(m+2Nk) \right) + o(1)

for arbitrary {R}; this is the function field analogue of a recent result of Baluyot. In any event, since {\mathrm{tr}(U^m) \mathrm{tr}(U^{-m})} is non-negative, we unconditionally have the lower bound

\displaystyle  {\bf E}_Q \sum_{1 \leq i,j \leq N} R( \lambda_i / \lambda_j ) \geq N^2 \hat R(0) + \sum_{1 \leq |m| \leq N} |m| \hat R(m) + o(1). \ \ \ \ \ (13)

if {\hat R(m)} is non-negative for {|m| > N}.
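As an illustration, the CUE prediction {{\bf E}_{CUE} \mathrm{tr}(U^m) \mathrm{tr}(U^{-m}) = \min(|m|,N)} (for {m \neq 0}) that produces the coefficients in (12) can be tested by Monte Carlo, for instance with the following rough sketch using scipy's Haar-distributed unitary sampler (the sample size is arbitrary):

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(0)
N, trials = 5, 20000
for m in (1, 3, 5, 7):
    acc = 0.0
    for _ in range(trials):
        lam = np.linalg.eigvals(unitary_group.rvs(N, random_state=rng))
        acc += abs((lam**m).sum())**2             # tr(U^m) tr(U^{-m}) = |tr U^m|^2
    print(m, round(acc / trials, 2), min(m, N))   # sample mean vs min(|m|, N)
```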

By applying (12) for various choices of test functions {R} we can obtain various bounds on the behaviour of eigenvalues. For instance suppose we take the Fejér kernel

\displaystyle  R(z) = |1 + z + \dots + z^N|^2 = \sum_{m=-N}^N (N+1-|m|) z^m.

Then (12) applies unconditionally and we conclude that

\displaystyle  {\bf E}_Q \sum_{1 \leq i,j \leq N} R( \lambda_i / \lambda_j ) = N^2 (N+1) + \sum_{1 \leq |m| \leq N} (N+1-|m|) |m| + o(1).

The right-hand side evaluates to {\frac{2}{3} N(N+1)(2N+1)+o(1)}. On the other hand, {R(\lambda_i/\lambda_j)} is non-negative, and equal to {(N+1)^2} when {\lambda_i = \lambda_j}. Thus

\displaystyle  {\bf E}_Q \sum_{1 \leq i,j \leq N} 1_{\lambda_i = \lambda_j} \leq \frac{2}{3} \frac{N(2N+1)}{N+1} + o(1).

The sum {\sum_{1 \leq j \leq N} 1_{\lambda_i = \lambda_j}} is at least {1}, and is at least {2} if {\lambda_i} is not a simple eigenvalue. Thus

\displaystyle  {\bf E}_Q \sum_{1 \leq i \leq N} 1_{\lambda_i \hbox{ not simple}} \leq \frac{1}{3} \frac{N(N-1)}{N+1} + o(1),

and thus the expected number of simple eigenvalues is at least {\frac{2N}{3} \frac{N+2}{N+1} + o(1)}; in particular, at least two thirds of the eigenvalues are simple asymptotically on average. If we had (12) without any restriction on the support of {\hat R}, the same arguments allow one to show that the expected proportion of simple eigenvalues is {1-o(1)}.
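The two evaluations in the preceding paragraph are elementary but slightly fiddly; here is a short symbolic check:

```python
import sympy as sp

N, m = sp.symbols('N m', positive=True, integer=True)

# the Fejer kernel sum: N^2 (N+1) + sum_{1<=|m|<=N} (N+1-|m|)|m| = (2/3) N (N+1) (2N+1)
total = N**2*(N + 1) + 2*sp.summation((N + 1 - m)*m, (m, 1, N))
print(sp.simplify(total - sp.Rational(2, 3)*N*(N + 1)*(2*N + 1)))  # 0

# expected number of simple eigenvalues: 2N - (2/3) N (2N+1)/(N+1)
print(sp.factor(2*N - sp.Rational(2, 3)*N*(2*N + 1)/(N + 1)))      # 2*N*(N + 2)/(3*(N + 1))
```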

Suppose that the phase gaps in {U} are all greater than {c/N} almost surely, and let {\hat R} be non-negative with {R(e^{i\theta})} non-positive for {\theta} outside of the arc {[-c/N,c/N]}. Then from (13) one has

\displaystyle  R(1) N \geq N^2 \hat R(0) + \sum_{1 \leq |m| \leq N} |m| \hat R(m) + o(1),

so by taking contrapositives one can force the existence of a gap less than {c/N} asymptotically if one can find {R} with {\hat R} non-negative, {R(e^{i\theta})} non-positive for {\theta} outside of the arc {[-c/N,c/N]}, and for which one has the inequality

\displaystyle  R(1) N < N^2 \hat R(0) + \sum_{1 \leq |m| \leq N} |m| \hat R(m).

By a suitable choice of {R} (based on a minorant of Selberg) one can ensure this for {c \approx 0.6072} for {N} large; see Section 5 of these notes of Goldston. This is not the smallest value of {c} currently obtainable in the literature for the number field case (which is currently {0.50412}, due to Goldston and Turnage-Butterbaugh, by a somewhat different method), but is still significantly less than the trivial value of {1}. On the other hand, due to the compatibility of the ACUE distribution with Proposition 3, it is not possible to lower {c} below {0.5} purely through the use of Proposition 3.

In some cases it is possible to go beyond Proposition 3. Consider the mollified moment

\displaystyle  {\bf E}_Q |M(U) P(1)|^2

where

\displaystyle  M(U) = \sum_{m=0}^d a_m h_m(\lambda_1,\dots,\lambda_N)

for some coefficients {a_0,\dots,a_d}. We can compute this moment in the CUE case:

Proposition 5 We have

\displaystyle  {\bf E}_{CUE} |M(U) P(1)|^2 = |a_0|^2 + N \sum_{m=1}^{d+1} |a_m - a_{m-1}|^2,

with the convention that {a_{d+1} := 0}.

Proof: From (5) one has

\displaystyle  P(1) = \sum_{i=0}^N (-1)^i e_i(\lambda_1,\dots,\lambda_N)

hence

\displaystyle  M(U) P(1) = \sum_{i=0}^N \sum_{m=0}^d (-1)^i a_m e_i h_m

where we suppress the dependence on the eigenvalues {\lambda}. Now observe the Pieri formula

\displaystyle  e_i h_m = s_{m 1^i} + s_{(m+1) 1^{i-1}}

where {s_{m 1^i}} are the hook Schur polynomials

\displaystyle  s_{m 1^i} = \sum_{a_1 \leq \dots \leq a_m; a_1 < b_1 < \dots < b_i} \lambda_{a_1} \dots \lambda_{a_m} \lambda_{b_1} \dots \lambda_{b_i}

and we adopt the convention that {s_{m 1^i}} vanishes for {i = -1}, or when {m = 0} and {i > 0}. Then {s_{m1^i}} also vanishes for {i\geq N}. We conclude that

\displaystyle  M(U) P(1) = a_0 s_{0 1^0} + \sum_{0 \leq i \leq N-1} \sum_{m \geq 1} (-1)^i (a_m - a_{m-1}) s_{m 1^i},

where we adopt the further convention that {a_m := 0} for {m > d} (so that only the terms {1 \leq m \leq d+1} are non-zero).

As the Schur polynomials are orthonormal on the unitary group, the claim follows. \Box
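To guard against errors in the above bookkeeping (in particular the boundary term coming from the convention {a_{d+1}=0}), one can test the proposition by Monte Carlo; here is a rough sketch, with arbitrarily chosen test coefficients {a_m}:

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(0)
N, d, trials = 4, 2, 20000
a = [1.0, 0.6, 0.3]                      # test coefficients a_0, ..., a_d (arbitrary)

def h_values(lam, dmax):
    # h_0, ..., h_dmax of the eigenvalues via Newton's identity
    # m h_m = sum_{i=1}^m p_i h_{m-i}, with power sums p_i = sum(lam**i)
    h = [1.0 + 0.0j]
    for m in range(1, dmax + 1):
        h.append(sum((lam**i).sum()*h[m - i] for i in range(1, m + 1)) / m)
    return h

acc = 0.0
for _ in range(trials):
    U = unitary_group.rvs(N, random_state=rng)
    lam = np.linalg.eigvals(U)
    h = h_values(lam, d)
    M = sum(a[m]*h[m] for m in range(d + 1))
    acc += abs(M*np.linalg.det(np.eye(N) - U))**2

a_ext = a + [0.0]                        # the convention a_{d+1} = 0
pred = abs(a[0])**2 + N*sum(abs(a_ext[m] - a_ext[m - 1])**2 for m in range(1, d + 2))
print(round(acc / trials, 3), pred)      # the two numbers should roughly agree (pred = 2.36)
```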

The CUE hypothesis would then imply the corresponding mollified moment conjecture

\displaystyle  {\bf E}_{Q} |M(U) P(1)|^2 = |a_0|^2 + N \sum_{m=1}^{d+1} |a_m - a_{m-1}|^2 + o(1), \ \ \ \ \ (14)

again with the convention that {a_{d+1} := 0}.

(See this paper of Conrey, and this paper of Radziwill, for some discussion of the analogous conjecture for the zeta function, which is essentially due to Farmer.)

From Proposition 3 one sees that this conjecture holds in the range {d \leq \frac{1}{2} N}. It is likely that the function field analogue of the calculations of Conrey (based ultimately on deep exponential sum estimates of Deshouillers and Iwaniec) can extend this range to {d < \theta N} for any {\theta < \frac{4}{7}}, if {N} is sufficiently large depending on {\theta}; these bounds thus go beyond what is available from Proposition 3. On the other hand, as discussed in Remark 7 of the previous post, ACUE would also predict (14) for {d} as large as {N-2}, so the available mollified moment estimates are not strong enough to rule out ACUE. It would be interesting to see if there is some other estimate in the function field setting that can be used to exclude the ACUE hypothesis (possibly one that exploits the fact that GRH is available in the function field case?).

