
Earlier this month, Hao Huang (who, incidentally, was a graduate student here at UCLA) gave a remarkably short proof of a long-standing problem in theoretical computer science known as the sensitivity conjecture. See for instance this blog post of Gil Kalai for further discussion and links to many other online discussions of this result. One formulation of the theorem proved is as follows. Define the ${n}$-dimensional hypercube graph ${Q_n}$ to be the graph with vertex set ${({\bf Z}/2{\bf Z})^n}$, and with every vertex ${v \in ({\bf Z}/2{\bf Z})^n}$ joined to the ${n}$ vertices ${v + e_1,\dots,v+e_n}$, where ${e_1,\dots,e_n}$ is the standard basis of ${({\bf Z}/2{\bf Z})^n}$.

Theorem 1 (Lower bound on maximum degree of induced subgraphs of hypercube) Let ${E}$ be a set of at least ${2^{n-1}+1}$ vertices in ${Q_n}$. Then there is a vertex in ${E}$ that is adjacent (in ${Q_n}$) to at least ${\sqrt{n}}$ other vertices in ${E}$.

The bound ${\sqrt{n}}$ (or more precisely, ${\lceil \sqrt{n} \rceil}$) is completely sharp, as shown by Chung, Furedi, Graham, and Seymour; we describe this example below the fold. When combined with earlier reductions of Gotsman-Linial and Nisan-Szegedy, Theorem 1 implies the sensitivity conjecture; we give these reductions below the fold also.

Let ${A = (a_{vw})_{v,w \in ({\bf Z}/2{\bf Z})^n}}$ be the adjacency matrix of ${Q_n}$ (where we index the rows and columns directly by the vertices in ${({\bf Z}/2{\bf Z})^n}$, rather than selecting some enumeration ${1,\dots,2^n}$), thus ${a_{vw}=1}$ when ${w = v+e_i}$ for some ${i=1,\dots,n}$, and ${a_{vw}=0}$ otherwise. The above theorem then asserts that if ${E}$ is a set of at least ${2^{n-1}+1}$ vertices, then the ${E \times E}$ minor ${(a_{vw})_{v,w \in E}}$ of ${A}$ has a row (or column) that contains at least ${\sqrt{n}}$ non-zero entries.
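As a quick numerical sanity check (no part of the proof; a sketch in Python assuming only numpy), one can encode vertices of ${Q_n}$ as bitmasks, build the adjacency matrix ${A}$, and confirm the asserted degree bound on random subsets ${E}$ of size ${2^{n-1}+1}$ for a small ${n}$:

```python
import math
import numpy as np

def hypercube_adjacency(n):
    """Adjacency matrix of Q_n; vertex v is the bitmask of its coordinates,
    and flipping bit i sends v to its neighbour v + e_i."""
    N = 2 ** n
    A = np.zeros((N, N), dtype=int)
    for v in range(N):
        for i in range(n):
            A[v, v ^ (1 << i)] = 1
    return A

def max_degree_in_subset(A, E):
    """Largest number of neighbours within E of any vertex of E."""
    M = A[np.ix_(E, E)]  # the E x E minor of A
    return int(M.sum(axis=1).max())

n = 4
A = hypercube_adjacency(n)
rng = np.random.default_rng(0)
for _ in range(200):
    E = rng.choice(2 ** n, size=2 ** (n - 1) + 1, replace=False)
    assert max_degree_in_subset(A, E) >= math.ceil(math.sqrt(n))
```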

The key step to prove this theorem is the construction of a rather curious variant ${\tilde A}$ of the adjacency matrix ${A}$:

Proposition 2 There exists a ${({\bf Z}/2{\bf Z})^n \times ({\bf Z}/2{\bf Z})^n}$ matrix ${\tilde A = (\tilde a_{vw})_{v,w \in ({\bf Z}/2{\bf Z})^n}}$ which is entrywise dominated by ${A}$ in the sense that

$\displaystyle |\tilde a_{vw}| \leq a_{vw} \hbox{ for all } v,w \in ({\bf Z}/2{\bf Z})^n \ \ \ \ \ (1)$

and such that ${\tilde A}$ has ${\sqrt{n}}$ as an eigenvalue with multiplicity ${2^{n-1}}$.

Assuming this proposition, the proof of Theorem 1 can now be quickly concluded. If we view ${\tilde A}$ as a linear operator on the ${2^n}$-dimensional space ${\ell^2(({\bf Z}/2{\bf Z})^n)}$ of functions of ${({\bf Z}/2{\bf Z})^n}$, then by hypothesis this space has a ${2^{n-1}}$-dimensional subspace ${V}$ on which ${\tilde A}$ acts by multiplication by ${\sqrt{n}}$. If ${E}$ is a set of at least ${2^{n-1}+1}$ vertices in ${Q_n}$, then the space ${\ell^2(E)}$ of functions on ${E}$ has codimension at most ${2^{n-1}-1}$ in ${\ell^2(({\bf Z}/2{\bf Z})^n)}$, and hence intersects ${V}$ non-trivially. Thus the ${E \times E}$ minor ${\tilde A_E}$ of ${\tilde A}$ also has ${\sqrt{n}}$ as an eigenvalue (this can also be derived from the Cauchy interlacing inequalities), and in particular this minor has operator norm at least ${\sqrt{n}}$. By Schur’s test, this implies that one of the rows or columns of this matrix has absolute values summing to at least ${\sqrt{n}}$, giving the claim.

Remark 3 The argument actually gives a strengthening of Theorem 1: there exists a vertex ${v_0}$ of ${E}$ with the property that for every natural number ${k}$, there are at least ${n^{k/2}}$ paths of length ${k}$ in the restriction ${Q_n|_E}$ of ${Q_n}$ to ${E}$ that start from ${v_0}$. Indeed, if we let ${(u_v)_{v \in E}}$ be an eigenfunction of ${\tilde A_E}$ on ${\ell^2(E)}$ with eigenvalue ${\sqrt{n}}$, and let ${v_0}$ be a vertex in ${E}$ that maximises the value of ${|u_{v_0}|}$, then for any ${k}$ the ${v_0}$ component of ${\tilde A_E^k (u_v)_{v \in E}}$ has absolute value ${n^{k/2} |u_{v_0}|}$; on the other hand, by the triangle inequality, this component has absolute value at most ${|u_{v_0}|}$ times the number of length ${k}$ paths in ${Q_n|_E}$ starting from ${v_0}$, giving the claim.

This argument can be viewed as an instance of a more general “interlacing method” to try to control the behaviour of a graph ${G}$ on all large subsets ${E}$ by first generating a matrix ${\tilde A}$ on ${G}$ with very good spectral properties, which are then partially inherited by the ${E \times E}$ minor of ${\tilde A}$ by interlacing inequalities. In previous literature using this method (see e.g., this survey of Haemers, or this paper of Wilson), either the original adjacency matrix ${A}$, or some non-negatively weighted version of that matrix, was used as the controlling matrix ${\tilde A}$; the novelty here is the use of signed controlling matrices. It will be interesting to see what further variants and applications of this method emerge in the near future. (Thanks to Anurag Bishoi in the comments for these references.)

The “magic” step in the above argument is constructing ${\tilde A}$. In Huang’s paper, ${\tilde A}$ is constructed recursively in the dimension ${n}$ in a rather simple but mysterious fashion. Very recently, Roman Karasev gave an interpretation of this matrix in terms of the exterior algebra on ${{\bf R}^n}$. In this post I would like to give an alternate interpretation in terms of the operation of twisted convolution, which originated in the theory of the Heisenberg group in quantum mechanics.

Firstly note that the original adjacency matrix ${A}$, when viewed as a linear operator on ${\ell^2(({\bf Z}/2{\bf Z})^n)}$, is a convolution operator

$\displaystyle A f = f * \mu$

where

$\displaystyle \mu(x) := \sum_{i=1}^n 1_{x=e_i}$

is the counting measure on the standard basis ${e_1,\dots,e_n}$, and ${*}$ denotes the ordinary convolution operation

$\displaystyle f * g(x) := \sum_{y \in ({\bf Z}/2{\bf Z})^n} f(y) g(x-y) = \sum_{y_1+y_2 = x} f(y_1) g(y_2).$

As is well known, this operation is commutative and associative. Thus for instance the square ${A^2}$ of the adjacency operator ${A}$ is also a convolution operator

$\displaystyle A^2 f = f * (\mu * \mu)$

where the convolution kernel ${\mu * \mu}$ is moderately complicated:

$\displaystyle \mu*\mu(x) = n \times 1_{x=0} + \sum_{1 \leq i < j \leq n} 2 \times 1_{x = e_i + e_j}.$

The factor ${2}$ in this expansion comes from combining the two terms ${1_{x=e_i} * 1_{x=e_j}}$ and ${1_{x=e_j} * 1_{x=e_i}}$, which both evaluate to ${1_{x=e_i+e_j}}$.
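This kernel formula is easy to verify numerically (a sketch, assuming only numpy): encoding elements of ${({\bf Z}/2{\bf Z})^n}$ as bitmasks, so that the group operation ${x+y}$ (which equals ${x-y}$ in characteristic two) is XOR:

```python
import numpy as np

def convolve(f, g, n):
    """f*g(x) = sum_y f(y) g(x-y) on (Z/2Z)^n; here x - y = x XOR y."""
    h = np.zeros(2 ** n)
    for x in range(2 ** n):
        for y in range(2 ** n):
            h[x] += f[y] * g[x ^ y]
    return h

n = 5
mu = np.zeros(2 ** n)
for i in range(n):
    mu[1 << i] = 1  # counting measure on the standard basis e_1,...,e_n

mumu = convolve(mu, mu, n)
assert mumu[0] == n  # the n * 1_{x=0} term
for i in range(n):
    for j in range(i + 1, n):
        assert mumu[(1 << i) | (1 << j)] == 2  # the 2 * 1_{x=e_i+e_j} terms
```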

More generally, given any bilinear form ${B: ({\bf Z}/2{\bf Z})^n \times ({\bf Z}/2{\bf Z})^n \rightarrow {\bf Z}/2{\bf Z}}$, one can define the twisted convolution

$\displaystyle f *_B g(x) := \sum_{y \in ({\bf Z}/2{\bf Z})^n} (-1)^{B(y,x-y)} f(y) g(x-y)$

$\displaystyle = \sum_{y_1+y_2=x} (-1)^{B(y_1,y_2)} f(y_1) g(y_2)$

of two functions ${f,g \in \ell^2(({\bf Z}/2{\bf Z})^n)}$. This operation is no longer commutative (unless ${B}$ is symmetric). However, it remains associative; indeed, one can easily compute that

$\displaystyle (f *_B g) *_B h(x) = f *_B (g *_B h)(x)$

$\displaystyle = \sum_{y_1+y_2+y_3=x} (-1)^{B(y_1,y_2)+B(y_1,y_3)+B(y_2,y_3)} f(y_1) g(y_2) h(y_3).$
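One can also confirm the associativity claim numerically on random data (a sketch, with the bilinear form ${B}$ chosen at random; assuming only numpy):

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
Bmat = rng.integers(0, 2, size=(n, n))  # a random bilinear form B(x,y) = x^T Bmat y mod 2

def B(x, y):
    bits = lambda z: np.array([(z >> i) & 1 for i in range(n)])
    return int(bits(x) @ Bmat @ bits(y)) % 2

def twisted(f, g):
    """The twisted convolution f *_B g on (Z/2Z)^n (bitmask encoding)."""
    h = np.zeros(2 ** n)
    for x in range(2 ** n):
        for y in range(2 ** n):
            h[x] += (-1) ** B(y, x ^ y) * f[y] * g[x ^ y]
    return h

f, g, h = (rng.standard_normal(2 ** n) for _ in range(3))
assert np.allclose(twisted(twisted(f, g), h), twisted(f, twisted(g, h)))
```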

In particular, if we define the twisted convolution operator

$\displaystyle A_B f(x) := f *_B \mu(x)$

then the square ${A_B^2}$ is also a twisted convolution operator

$\displaystyle A_B^2 f = f *_B (\mu *_B \mu)$

and the twisted convolution kernel ${\mu *_B \mu}$ can be computed as

$\displaystyle \mu *_B \mu(x) = (\sum_{i=1}^n (-1)^{B(e_i,e_i)}) 1_{x=0}$

$\displaystyle + \sum_{1 \leq i < j \leq n} ((-1)^{B(e_i,e_j)} + (-1)^{B(e_j,e_i)}) 1_{x=e_i+e_j}.$

For general bilinear forms ${B}$, this twisted convolution is just as messy as ${\mu * \mu}$ is. But if we take the specific bilinear form

$\displaystyle B(x,y) := \sum_{1 \leq i < j \leq n} x_i y_j \ \ \ \ \ (2)$

then ${B(e_i,e_i)=0}$ for ${1 \leq i \leq n}$ and ${B(e_i,e_j)=1, B(e_j,e_i)=0}$ for ${1 \leq i < j \leq n}$, and the above twisted convolution simplifies to

$\displaystyle \mu *_B \mu(x) = n 1_{x=0}$

and now ${A_B^2}$ is very simple:

$\displaystyle A_B^2 f = n f.$

Thus the only eigenvalues of ${A_B}$ are ${+\sqrt{n}}$ and ${-\sqrt{n}}$. The matrix ${A_B}$ is entrywise dominated by ${A}$ in the sense of (1), and in particular has trace zero; thus the ${+\sqrt{n}}$ and ${-\sqrt{n}}$ eigenvalues must occur with equal multiplicity, so in particular the ${+\sqrt{n}}$ eigenvalue occurs with multiplicity ${2^{n-1}}$ since the matrix has dimensions ${2^n \times 2^n}$. This establishes Proposition 2.
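Unwinding the definition, the matrix entry of ${A_B}$ at ${(x, x+e_i)}$ is ${(-1)^{B(x+e_i,e_i)} = (-1)^{x_1+\dots+x_{i-1}}}$ for the form (2), which recovers Huang's recursive construction. The following sketch (Python with numpy; not part of the argument) checks the three key properties, ${A_B^2 = n}$, entrywise domination by ${A}$, and the eigenvalue multiplicities:

```python
import numpy as np

def huang_matrix(n):
    """Signed adjacency matrix A_B for B(x,y) = sum_{i<j} x_i y_j: the entry
    at (x, x+e_i) is (-1)^{B(x+e_i,e_i)} = (-1)^{x_1+...+x_{i-1}}."""
    N = 2 ** n
    M = np.zeros((N, N))
    for x in range(N):
        for i in range(n):
            sign = (-1) ** bin(x & ((1 << i) - 1)).count("1")  # parity of x_1..x_{i-1}
            M[x, x ^ (1 << i)] = sign
    return M

n = 4
M = huang_matrix(n)
assert np.allclose(M @ M, n * np.eye(2 ** n))   # A_B^2 = n
assert (np.abs(M).sum(axis=1) == n).all()       # |entries| match the adjacency pattern
evals = np.linalg.eigvalsh(M)                   # M happens to be symmetric here
assert np.sum(np.isclose(evals, np.sqrt(n))) == 2 ** (n - 1)
assert np.sum(np.isclose(evals, -np.sqrt(n))) == 2 ** (n - 1)
```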

Remark 4 Twisted convolution ${*_B}$ is actually just a component of ordinary convolution, but not on the original group ${({\bf Z}/2{\bf Z})^n}$; instead it relates to convolution on a Heisenberg group extension of this group. More specifically, define the Heisenberg group ${H}$ to be the set of pairs ${(x, t) \in ({\bf Z}/2{\bf Z})^n \times ({\bf Z}/2{\bf Z})}$ with group law

$\displaystyle (x,t) \cdot (y,s) := (x+y, t+s+B(x,y))$

and inverse operation

$\displaystyle (x,t)^{-1} = (-x, -t+B(x,x))$

(one can dispense with the negative signs here if desired, since we are in characteristic two). Convolution on ${H}$ is defined in the usual manner: one has

$\displaystyle F*G( (x,t) ) := \sum_{(y,s) \in H} F(y,s) G( (y,s)^{-1} (x,t) )$

for any ${F,G \in \ell^2(H)}$. Now if ${f \in \ell^2(({\bf Z}/2{\bf Z})^n)}$ is a function on the original group ${({\bf Z}/2{\bf Z})^n}$, we can define the lift ${\tilde f \in \ell^2(H)}$ by the formula

$\displaystyle \tilde f(x,t) := (-1)^t f(x)$

and then by chasing all the definitions one soon verifies that

$\displaystyle \tilde f * \tilde g = 2 \widetilde{f *_B g}$

for any ${f,g \in \ell^2(({\bf Z}/2{\bf Z})^n)}$, thus relating twisted convolution ${*_B}$ to Heisenberg group convolution ${*}$.
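Chasing the definitions is also easy to do by machine; here is a sketch (Python with numpy, using the form (2) and random ${f,g}$) that verifies the identity ${\tilde f * \tilde g = 2 \widetilde{f *_B g}}$:

```python
import numpy as np

n = 3

def B(x, y):
    # the form (2): B(x,y) = sum_{i<j} x_i y_j, on bitmask-encoded vectors
    return sum(((x >> i) & 1) * ((y >> j) & 1)
               for i in range(n) for j in range(i + 1, n)) % 2

def twisted(f, g):
    h = np.zeros(2 ** n)
    for x in range(2 ** n):
        for y in range(2 ** n):
            h[x] += (-1) ** B(y, x ^ y) * f[y] * g[x ^ y]
    return h

def heis_conv(F, G):
    """Convolution on H: F*G((x,t)) = sum F(y,s) G((y,s)^{-1}(x,t)), where
    (y,s)^{-1}(x,t) = (x+y, s+t+B(y,x+y)) in characteristic two."""
    H = np.zeros((2 ** n, 2))
    for x in range(2 ** n):
        for t in range(2):
            for y in range(2 ** n):
                for s in range(2):
                    H[x, t] += F[y, s] * G[x ^ y, (s + t + B(y, x ^ y)) % 2]
    return H

def lift(f):
    """The lift f~(x,t) = (-1)^t f(x)."""
    return np.array([[f[x] * (-1) ** t for t in range(2)] for x in range(2 ** n)])

rng = np.random.default_rng(2)
f, g = rng.standard_normal(2 ** n), rng.standard_normal(2 ** n)
assert np.allclose(heis_conv(lift(f), lift(g)), 2 * lift(twisted(f, g)))
```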

Remark 5 With the twisting by the specific bilinear form ${B}$ given by (2), convolution by ${1_{x=e_i}}$ and ${1_{x=e_j}}$ now anticommute rather than commute. This makes the twisted convolution algebra ${(\ell^2(({\bf Z}/2{\bf Z})^n), *_B)}$ isomorphic to a Clifford algebra ${Cl({\bf R}^n,I_n)}$ (the real or complex algebra generated by formal generators ${v_1,\dots,v_n}$ subject to the relations ${(v_iv_j+v_jv_i)/2 = 1_{i=j}}$ for ${i,j=1,\dots,n}$) rather than the commutative algebra more familiar to abelian Fourier analysis. This connection to Clifford algebra (also observed independently by Tom Mrowka and by Daniel Matthews) may be linked to the exterior algebra interpretation of the argument in the recent preprint of Karasev mentioned above.

Remark 6 One could replace the form (2) in this argument by any other bilinear form ${B'}$ that obeyed the relations ${B'(e_i,e_i)=0}$ and ${B'(e_i,e_j) + B'(e_j,e_i)=1}$ for ${i \neq j}$. However, this additional level of generality does not add much; any such ${B'}$ will differ from ${B}$ by an antisymmetric form ${C}$ (so that ${C(x,x) = 0}$ for all ${x}$, which in characteristic two implies that ${C(x,y) = C(y,x)}$ for all ${x,y}$), and such forms can always be decomposed as ${C(x,y) = C'(x,y) + C'(y,x)}$, where ${C'(x,y) := \sum_{1 \leq i < j \leq n} C(e_i,e_j) x_i y_j}$. As such, the matrices ${A_B}$ and ${A_{B'}}$ are conjugate, with the conjugation operator being the diagonal matrix with entries ${(-1)^{C'(x,x)}}$ at each vertex ${x}$.

Remark 7 (Added later) This remark combines the two previous remarks. One can view any of the matrices ${A_{B'}}$ in Remark 6 as components of a single canonical matrix ${A_{Cl}}$ that is still of dimensions ${({\bf Z}/2{\bf Z})^n \times ({\bf Z}/2{\bf Z})^n}$, but takes values in the Clifford algebra ${Cl({\bf R}^n,I_n)}$ from Remark 5; with this “universal algebra” perspective, one no longer needs to make any arbitrary choices of form ${B}$. More precisely, let ${\ell^2( ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n))}$ denote the vector space of functions ${f: ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n)}$ from the hypercube to the Clifford algebra; as a real vector space, this is a ${2^{2n}}$ dimensional space, isomorphic to the direct sum of ${2^n}$ copies of ${\ell^2(({\bf Z}/2{\bf Z})^n)}$, as the Clifford algebra is itself ${2^n}$ dimensional. One can then define a canonical Clifford adjacency operator ${A_{Cl}}$ on this space by

$\displaystyle A_{Cl} f(x) := \sum_{i=1}^n f(x+e_i) v_i$

where ${v_1,\dots,v_n}$ are the generators of ${Cl({\bf R}^n,I_n)}$. This operator can either be identified with a Clifford-valued ${2^n \times 2^n}$ matrix or as a real-valued ${2^{2n} \times 2^{2n}}$ matrix. In either case one still has the key algebraic relations ${A_{Cl}^2 = n}$ and ${\mathrm{tr} A_{Cl} = 0}$, ensuring that when viewed as a real ${2^{2n} \times 2^{2n}}$ matrix, half of the eigenvalues are equal to ${+\sqrt{n}}$ and half equal to ${-\sqrt{n}}$. One can then use this matrix in place of any of the ${A_{B'}}$ to establish Theorem 1 (noting that Schur’s test continues to work for Clifford-valued matrices because of the norm structure on ${Cl({\bf R}^n,I_n)}$).

To relate ${A_{Cl}}$ to the real ${2^n \times 2^n}$ matrices ${A_{B'}}$, first observe that each point ${x}$ in the hypercube ${({\bf Z}/2{\bf Z})^n}$ can be associated with a one-dimensional real subspace ${\ell_x}$ (i.e., a line) in the Clifford algebra ${Cl({\bf R}^n,I_n)}$ by the formula

$\displaystyle \ell_{e_{i_1} + \dots + e_{i_k}} := \mathrm{span}_{\bf R}( v_{i_1} \dots v_{i_k} )$

for any ${i_1,\dots,i_k \in \{1,\dots,n\}}$ (note that this definition is well-defined even if the ${i_1,\dots,i_k}$ are out of order or contain repetitions). This can be viewed as a discrete line bundle over the hypercube. Since ${\ell_{x+e_i} = \ell_x v_i}$ for any ${i}$, we see that the ${2^n}$-dimensional real linear subspace ${V}$ of ${\ell^2( ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n))}$ of sections of this bundle, that is to say the space of functions ${f: ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n)}$ such that ${f(x) \in \ell_x}$ for all ${x \in ({\bf Z}/2{\bf Z})^n}$, is an invariant subspace of ${A_{Cl}}$. (Indeed, using the left-action of the Clifford algebra on ${\ell^2( ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n))}$, which commutes with ${A_{Cl}}$, one can naturally identify ${\ell^2( ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n,I_n))}$ with ${Cl({\bf R}^n,I_n) \otimes V}$, with the left action of ${Cl({\bf R}^n,I_n)}$ acting purely on the first factor and ${A_{Cl}}$ acting purely on the second factor.) Any trivialisation of this line bundle lets us interpret the restriction ${A_{Cl}|_V}$ of ${A_{Cl}}$ to ${V}$ as a real ${2^n \times 2^n}$ matrix. In particular, given one of the bilinear forms ${B'}$ from Remark 6, we can identify ${V}$ with ${\ell^2(({\bf Z}/2{\bf Z})^n)}$ by identifying any real function ${f \in \ell^2( ({\bf Z}/2{\bf Z})^n)}$ with the lift ${\tilde f \in V}$ defined by

$\displaystyle \tilde f(e_{i_1} + \dots + e_{i_k}) := (-1)^{\sum_{1 \leq j < j' \leq k} B'(e_{i_j}, e_{i_{j'}})}$

$\displaystyle f(e_{i_1} + \dots + e_{i_k}) v_{i_1} \dots v_{i_k}$

whenever ${1 \leq i_1 < \dots < i_k \leq n}$. A somewhat tedious computation using the properties of ${B'}$ then eventually gives the intertwining identity

$\displaystyle A_{Cl} \tilde f = \widetilde{A_{B'} f}$

and so ${A_{B'}}$ is conjugate to ${A_{Cl}|_V}$.

Let ${\Omega}$ be some domain (such as the real numbers). For any natural number ${p}$, let ${L(\Omega^p)_{sym}}$ denote the space of symmetric real-valued functions ${F^{(p)}: \Omega^p \rightarrow {\bf R}}$ on ${p}$ variables ${x_1,\dots,x_p \in \Omega}$, thus

$\displaystyle F^{(p)}(x_{\sigma(1)},\dots,x_{\sigma(p)}) = F^{(p)}(x_1,\dots,x_p)$

for any permutation ${\sigma: \{1,\dots,p\} \rightarrow \{1,\dots,p\}}$. For instance, for any natural numbers ${k,p}$, the elementary symmetric polynomials

$\displaystyle e_k^{(p)}(x_1,\dots,x_p) = \sum_{1 \leq i_1 < i_2 < \dots < i_k \leq p} x_{i_1} \dots x_{i_k}$

will be an element of ${L({\bf R}^p)_{sym}}$. With the pointwise product operation, ${L(\Omega^p)_{sym}}$ becomes a commutative real algebra. We include the case ${p=0}$, in which case ${L(\Omega^0)_{sym}}$ consists solely of the real constants.

Given two natural numbers ${k,p}$, one can “lift” a symmetric function ${F^{(k)} \in L(\Omega^k)_{sym}}$ of ${k}$ variables to a symmetric function ${[F^{(k)}]_{k \rightarrow p} \in L(\Omega^p)_{sym}}$ of ${p}$ variables by the formula

$\displaystyle [F^{(k)}]_{k \rightarrow p}(x_1,\dots,x_p) = \sum_{1 \leq i_1 < i_2 < \dots < i_k \leq p} F^{(k)}(x_{i_1}, \dots, x_{i_k})$

$\displaystyle = \frac{1}{k!} \sum_\pi F^{(k)}( x_{\pi(1)}, \dots, x_{\pi(k)} )$

where ${\pi}$ ranges over all injections from ${\{1,\dots,k\}}$ to ${\{1,\dots,p\}}$ (the latter formula making it clearer that ${[F^{(k)}]_{k \rightarrow p}}$ is symmetric). Thus for instance

$\displaystyle [F^{(1)}(x_1)]_{1 \rightarrow p} = \sum_{i=1}^p F^{(1)}(x_i)$

$\displaystyle [F^{(2)}(x_1,x_2)]_{2 \rightarrow p} = \sum_{1 \leq i < j \leq p} F^{(2)}(x_i,x_j)$

and

$\displaystyle e_k^{(p)}(x_1,\dots,x_p) = [x_1 \dots x_k]_{k \rightarrow p}.$

Also we have

$\displaystyle [1]_{k \rightarrow p} = \binom{p}{k} = \frac{p(p-1)\dots(p-k+1)}{k!}.$

With these conventions, we see that ${[F^{(k)}]_{k \rightarrow p}}$ vanishes for ${p=0,\dots,k-1}$, and is equal to ${F^{(k)}}$ if ${k=p}$. We also have the transitivity

$\displaystyle [F^{(k)}]_{k \rightarrow p} = \frac{1}{\binom{p-k}{p-l}} [[F^{(k)}]_{k \rightarrow l}]_{l \rightarrow p}$

if ${k \leq l \leq p}$.
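The lifting map and the transitivity identity can be sketched directly in code (Python; a symmetric function ${F^{(k)}}$ is represented as an ordinary function of ${k}$ arguments):

```python
import itertools
import math

def lift(F, k, p):
    """Return the function [F]_{k->p} of p variables: sum of F over all
    k-element subsets of the p variables."""
    def lifted(*xs):
        assert len(xs) == p
        return sum(F(*(xs[i] for i in idx))
                   for idx in itertools.combinations(range(p), k))
    return lifted

# e_k^{(p)} = [x_1 ... x_k]_{k -> p}:
e2 = lift(lambda a, b: a * b, 2, 4)
assert e2(1.0, 2.0, 3.0, 4.0) == 1*2 + 1*3 + 1*4 + 2*3 + 2*4 + 3*4

# transitivity: [F]_{k->p} = [[F]_{k->l}]_{l->p} / binom(p-k, p-l)
k, l, p = 1, 2, 4
F = lambda a: a * a
xs = (1.0, 2.0, 3.0, 4.0)
direct = lift(F, k, p)(*xs)
via_l = lift(lift(F, k, l), l, p)(*xs)
assert abs(via_l / math.comb(p - k, p - l) - direct) < 1e-12
```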

The lifting map ${[]_{k \rightarrow p}}$ is a linear map from ${L(\Omega^k)_{sym}}$ to ${L(\Omega^p)_{sym}}$, but it is not a ring homomorphism. For instance, when ${\Omega={\bf R}}$, one has

$\displaystyle [x_1]_{1 \rightarrow p} [x_1]_{1 \rightarrow p} = (\sum_{i=1}^p x_i)^2 \ \ \ \ \ (1)$

$\displaystyle = \sum_{i=1}^p x_i^2 + 2 \sum_{1 \leq i < j \leq p} x_i x_j$

$\displaystyle = [x_1^2]_{1 \rightarrow p} + 2 [x_1 x_2]_{1 \rightarrow p}$

$\displaystyle \neq [x_1^2]_{1 \rightarrow p}.$

In general, one has the identity

$\displaystyle [F^{(k)}(x_1,\dots,x_k)]_{k \rightarrow p} [G^{(l)}(x_1,\dots,x_l)]_{l \rightarrow p} = \sum_{k,l \leq m \leq k+l} \frac{1}{k! l!} \ \ \ \ \ (2)$

$\displaystyle [\sum_{\pi, \rho} F^{(k)}(x_{\pi(1)},\dots,x_{\pi(k)}) G^{(l)}(x_{\rho(1)},\dots,x_{\rho(l)})]_{m \rightarrow p}$

for all natural numbers ${k,l,p}$ and ${F^{(k)} \in L(\Omega^k)_{sym}}$, ${G^{(l)} \in L(\Omega^l)_{sym}}$, where ${\pi, \rho}$ range over all injections ${\pi: \{1,\dots,k\} \rightarrow \{1,\dots,m\}}$, ${\rho: \{1,\dots,l\} \rightarrow \{1,\dots,m\}}$ with ${\pi(\{1,\dots,k\}) \cup \rho(\{1,\dots,l\}) = \{1,\dots,m\}}$. Combinatorially, the identity (2) follows from the fact that given any injections ${\tilde \pi: \{1,\dots,k\} \rightarrow \{1,\dots,p\}}$ and ${\tilde \rho: \{1,\dots,l\} \rightarrow \{1,\dots,p\}}$ with total image ${\tilde \pi(\{1,\dots,k\}) \cup \tilde \rho(\{1,\dots,l\})}$ of cardinality ${m}$, one has ${k,l \leq m \leq k+l}$, and furthermore there exist precisely ${m!}$ triples ${(\pi, \rho, \sigma)}$ of injections ${\pi: \{1,\dots,k\} \rightarrow \{1,\dots,m\}}$, ${\rho: \{1,\dots,l\} \rightarrow \{1,\dots,m\}}$, ${\sigma: \{1,\dots,m\} \rightarrow \{1,\dots,p\}}$ such that ${\tilde \pi = \sigma \circ \pi}$ and ${\tilde \rho = \sigma \circ \rho}$.

Example 1 When ${\Omega = {\bf R}}$, one has

$\displaystyle [x_1 x_2]_{2 \rightarrow p} [x_1]_{1 \rightarrow p} = [\frac{1}{2! 1!}( 2 x_1^2 x_2 + 2 x_1 x_2^2 )]_{2 \rightarrow p} + [\frac{1}{2! 1!} 6 x_1 x_2 x_3]_{3 \rightarrow p}$

$\displaystyle = [x_1^2 x_2 + x_1 x_2^2]_{2 \rightarrow p} + [3x_1 x_2 x_3]_{3 \rightarrow p}$

which is just a restatement of the identity

$\displaystyle (\sum_{i < j} x_i x_j) (\sum_k x_k) = \sum_{i < j} (x_i^2 x_j + x_i x_j^2) + 3 \sum_{i < j < k} x_i x_j x_k.$
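The identity in Example 1 can be confirmed numerically on random inputs (a sketch in Python, using the finite-${p}$ lifts):

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
xs = rng.standard_normal(5)
p = len(xs)

def lift_sum(F, k):
    """[F]_{k->p} evaluated at the fixed tuple xs."""
    return sum(F(*(xs[i] for i in idx))
               for idx in itertools.combinations(range(p), k))

lhs = lift_sum(lambda a, b: a * b, 2) * lift_sum(lambda a: a, 1)
rhs = (lift_sum(lambda a, b: a**2 * b + a * b**2, 2)
       + lift_sum(lambda a, b, c: 3 * a * b * c, 3))
assert np.isclose(lhs, rhs)
```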

Note that the coefficients appearing in (2) do not depend on the final number of variables ${p}$. We may therefore abstract the role of ${p}$ from the law (2) by introducing the real algebra ${L(\Omega^*)_{sym}}$ of formal sums

$\displaystyle F^{(*)} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}$

where for each ${k}$, ${F^{(k)}}$ is an element of ${L(\Omega^k)_{sym}}$ (with only finitely many of the ${F^{(k)}}$ being non-zero), and with the formal symbol ${[]_{k \rightarrow *}}$ being formally linear, thus

$\displaystyle [F^{(k)}]_{k \rightarrow *} + [G^{(k)}]_{k \rightarrow *} := [F^{(k)} + G^{(k)}]_{k \rightarrow *}$

and

$\displaystyle c [F^{(k)}]_{k \rightarrow *} := [cF^{(k)}]_{k \rightarrow *}$

for ${F^{(k)}, G^{(k)} \in L(\Omega^k)_{sym}}$ and scalars ${c \in {\bf R}}$, and with multiplication given by the analogue

$\displaystyle [F^{(k)}(x_1,\dots,x_k)]_{k \rightarrow *} [G^{(l)}(x_1,\dots,x_l)]_{l \rightarrow *} = \sum_{k,l \leq m \leq k+l} \frac{1}{k! l!} \ \ \ \ \ (3)$

$\displaystyle [\sum_{\pi, \rho} F^{(k)}(x_{\pi(1)},\dots,x_{\pi(k)}) G^{(l)}(x_{\rho(1)},\dots,x_{\rho(l)})]_{m \rightarrow *}$

of (2). Thus for instance, in this algebra ${L(\Omega^*)_{sym}}$ we have

$\displaystyle [x_1]_{1 \rightarrow *} [x_1]_{1 \rightarrow *} = [x_1^2]_{1 \rightarrow *} + 2 [x_1 x_2]_{2 \rightarrow *}$

and

$\displaystyle [x_1 x_2]_{2 \rightarrow *} [x_1]_{1 \rightarrow *} = [x_1^2 x_2 + x_1 x_2^2]_{2 \rightarrow *} + [3 x_1 x_2 x_3]_{3 \rightarrow *}.$

Informally, ${L(\Omega^*)_{sym}}$ is an abstraction (or “inverse limit”) of the concept of a symmetric function of an unspecified number of variables, which are formed by summing terms that each involve only a bounded number of these variables at a time. One can check (somewhat tediously) that ${L(\Omega^*)_{sym}}$ is indeed a commutative real algebra, with a unit ${[1]_{0 \rightarrow *}}$. (I do not know if this algebra has previously been studied in the literature; it is somewhat analogous to the abstract algebra of finite linear combinations of Schur polynomials, with multiplication given by a Littlewood-Richardson rule.)

For natural numbers ${p}$, there is an obvious specialisation map ${[]_{* \rightarrow p}}$ from ${L(\Omega^*)_{sym}}$ to ${L(\Omega^p)_{sym}}$, defined by the formula

$\displaystyle [\sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}]_{* \rightarrow p} := \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow p}.$

Thus, for instance, ${[]_{* \rightarrow p}}$ maps ${[x_1]_{1 \rightarrow *}}$ to ${[x_1]_{1 \rightarrow p}}$ and ${[x_1 x_2]_{2 \rightarrow *}}$ to ${[x_1 x_2]_{2 \rightarrow p}}$. From (2) and (3) we see that this map ${[]_{* \rightarrow p}: L(\Omega^*)_{sym} \rightarrow L(\Omega^p)_{sym}}$ is an algebra homomorphism, even though the maps ${[]_{k \rightarrow *}: L(\Omega^k)_{sym} \rightarrow L(\Omega^*)_{sym}}$ and ${[]_{k \rightarrow p}: L(\Omega^k)_{sym} \rightarrow L(\Omega^p)_{sym}}$ are not homomorphisms. By inspecting the ${p^{th}}$ component of ${L(\Omega^*)_{sym}}$ we see that the homomorphism ${[]_{* \rightarrow p}}$ is in fact surjective.

Now suppose that we have a measure ${\mu}$ on the space ${\Omega}$, which then induces a product measure ${\mu^p}$ on every product space ${\Omega^p}$. To avoid degeneracies we will assume that the integral ${\int_\Omega \mu}$ is strictly positive. Assuming suitable measurability and integrability hypotheses, a function ${F \in L(\Omega^p)_{sym}}$ can then be integrated against this product measure to produce a number

$\displaystyle \int_{\Omega^p} F\ d\mu^p.$

In the event that ${F}$ arises as a lift ${[F^{(k)}]_{k \rightarrow p}}$ of another function ${F^{(k)} \in L(\Omega^k)_{sym}}$, then from Fubini’s theorem we obtain the formula

$\displaystyle \int_{\Omega^p} F\ d\mu^p = \binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ d\mu)^{p-k}.$

Thus for instance, if ${\Omega={\bf R}}$,

$\displaystyle \int_{{\bf R}^p} [x_1]_{1 \rightarrow p}\ d\mu^p = p (\int_{\bf R} x\ d\mu(x)) (\int_{\bf R}\ d\mu)^{p-1} \ \ \ \ \ (4)$

and

$\displaystyle \int_{{\bf R}^p} [x_1 x_2]_{2 \rightarrow p}\ d\mu^p = \binom{p}{2} (\int_{{\bf R}^2} x_1 x_2\ d\mu(x_1) d\mu(x_2)) (\int_{\bf R}\ d\mu)^{p-2}. \ \ \ \ \ (5)$

On summing, we see that if

$\displaystyle F^{(*)} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}$

is an element of the formal algebra ${L(\Omega^*)_{sym}}$, then

$\displaystyle \int_{\Omega^p} [F^{(*)}]_{* \rightarrow p}\ d\mu^p = \sum_{k=0}^\infty \binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ d\mu)^{p-k}. \ \ \ \ \ (6)$

Note that by hypothesis, only finitely many terms on the right-hand side are non-zero.

Now for a key observation: whereas the left-hand side of (6) only makes sense when ${p}$ is a natural number, the right-hand side is meaningful when ${p}$ takes a fractional value (or even when it takes negative or complex values!), interpreting the binomial coefficient ${\binom{p}{k}}$ as a polynomial ${\frac{p(p-1) \dots (p-k+1)}{k!}}$ in ${p}$. As such, this suggests a way to introduce a “virtual” concept of a symmetric function on a fractional power space ${\Omega^p}$ for such values of ${p}$, and even to integrate such functions against product measures ${\mu^p}$, even if the fractional power ${\Omega^p}$ does not exist in the usual set-theoretic sense (and ${\mu^p}$ similarly does not exist in the usual measure-theoretic sense). More precisely, for arbitrary real or complex ${p}$, we now define ${L(\Omega^p)_{sym}}$ to be the space of abstract objects

$\displaystyle F^{(p)} = [F^{(*)}]_{* \rightarrow p} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow p}$

with ${F^{(*)} \in L(\Omega^*)_{sym}}$, and with ${[]_{* \rightarrow p}}$ (and ${[]_{k \rightarrow p}}$) now interpreted as formal symbols, with the structure of a commutative real algebra inherited from ${L(\Omega^*)_{sym}}$, thus

$\displaystyle [F^{(*)}]_{* \rightarrow p} + [G^{(*)}]_{* \rightarrow p} := [F^{(*)} + G^{(*)}]_{* \rightarrow p}$

$\displaystyle c [F^{(*)}]_{* \rightarrow p} := [c F^{(*)}]_{* \rightarrow p}$

$\displaystyle [F^{(*)}]_{* \rightarrow p} [G^{(*)}]_{* \rightarrow p} := [F^{(*)} G^{(*)}]_{* \rightarrow p}.$

In particular, the multiplication law (2) continues to hold for such values of ${p}$, thanks to (3). Given any measure ${\mu}$ on ${\Omega}$, we formally define a measure ${\mu^p}$ on ${\Omega^p}$ with respect to which we can integrate elements ${F^{(p)}}$ of ${L(\Omega^p)_{sym}}$ by the formula (6) (provided one has sufficient measurability and integrability to make sense of this formula), thus providing a sort of “fractional dimensional integral” for symmetric functions. Thus, for instance, with this formalism the identities (4), (5) now hold for fractional values of ${p}$, even though the formal space ${{\bf R}^p}$ no longer makes sense as a set, and the formal measure ${\mu^p}$ no longer makes sense as a measure. (The formalism here is somewhat reminiscent of the technique of dimensional regularisation employed in the physical literature in order to assign values to otherwise divergent integrals. See also this post for an unrelated abstraction of the integration concept involving integration over supercommutative variables (and in particular over fermionic variables).)
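For a finite ${\Omega}$, the defining formula (6) is directly computable; here is a sketch (Python, with an element of ${L(\Omega^*)_{sym}}$ stored as a dictionary ${\{k: F^{(k)}\}}$, and ${\binom{p}{k}}$ interpreted as a polynomial so that ${p}$ may be fractional):

```python
import itertools
import math

def integral_k(F_k, k, omega, mu):
    """Classical integral of F_k over Omega^k against the product measure."""
    return sum(F_k(*pt) * math.prod(mu[w] for w in pt)
               for pt in itertools.product(omega, repeat=k))

def binom(p, k):
    """binom(p,k) = p(p-1)...(p-k+1)/k!, valid for fractional p."""
    out = 1.0
    for j in range(k):
        out *= (p - j) / (j + 1)
    return out

def frac_integral(Fstar, p, omega, mu):
    """The fractional-dimensional integral (6) of sum_k [F_k]_{k->p}."""
    total = sum(mu.values())
    return sum(binom(p, k) * integral_k(F_k, k, omega, mu) * total ** (p - k)
               for k, F_k in Fstar.items())

# example: Omega = {0,1} with a probability measure mu, F = [x_1]_{1->p};
# by (4) the answer is p times the mean, even for fractional p
omega, mu = [0.0, 1.0], {0.0: 0.5, 1.0: 0.5}
assert abs(frac_integral({1: (lambda x: x)}, 2.5, omega, mu) - 2.5 * 0.5) < 1e-12
```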

Example 2 Suppose ${\mu}$ is a probability measure on ${\Omega}$, and ${X: \Omega \rightarrow {\bf R}}$ is a random variable; on any power ${\Omega^k}$, we let ${X_1,\dots,X_k: \Omega^k \rightarrow {\bf R}}$ be the usual independent copies of ${X}$ on ${\Omega^k}$, thus ${X_j(\omega_1,\dots,\omega_k) := X(\omega_j)}$ for ${(\omega_1,\dots,\omega_k) \in \Omega^k}$. Then for any real or complex ${p}$, the formal integral

$\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p$

can be evaluated by first using the identity

$\displaystyle [X_1]_{1 \rightarrow p}^2 = [X_1^2]_{1 \rightarrow p} + 2[X_1 X_2]_{2 \rightarrow p}$

(cf. (1)) and then using (6) and the probability measure hypothesis ${\int_\Omega\ d\mu = 1}$ to conclude that

$\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p = \binom{p}{1} \int_{\Omega} X^2\ d\mu + 2 \binom{p}{2} \int_{\Omega^2} X_1 X_2\ d\mu^2$

$\displaystyle = p (\int_\Omega X^2\ d\mu - (\int_\Omega X\ d\mu)^2) + p^2 (\int_\Omega X\ d\mu)^2$

or in probabilistic notation

$\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p = p \mathbf{Var}(X) + p^2 \mathbf{E}(X)^2. \ \ \ \ \ (7)$

For ${p}$ a natural number, this identity has the probabilistic interpretation

$\displaystyle \mathbf{E}( X_1 + \dots + X_p)^2 = p \mathbf{Var}(X) + p^2 \mathbf{E}(X)^2 \ \ \ \ \ (8)$

whenever ${X_1,\dots,X_p}$ are jointly independent copies of ${X}$, which reflects the well known fact that the sum ${X_1 + \dots + X_p}$ has expectation ${p \mathbf{E} X}$ and variance ${p \mathbf{Var}(X)}$. One can thus view (7) as an abstract generalisation of (8) to the case when ${p}$ is fractional, negative, or even complex, despite the fact that there is no sensible way in this case to talk about ${p}$ independent copies ${X_1,\dots,X_p}$ of ${X}$ in the standard framework of probability theory.

In this particular case, the quantity (7) is non-negative for every nonnegative ${p}$, which looks plausible given the form of the left-hand side. Unfortunately, this sort of non-negativity does not always hold; for instance, if ${X}$ has mean zero, one can check that

$\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^4\ d\mu^p = p \mathbf{Var}(X^2) + p(3p-2) (\mathbf{E}(X^2))^2$

and the right-hand side can become negative for ${p < 2/3}$. This is a shame, because otherwise one could hope to start endowing ${L(\Omega^p)_{sym}}$ with some sort of commutative von Neumann algebra type structure (or the abstract probability structure discussed in this previous post) and then interpret it as a genuine measure space rather than as a virtual one. (This failure of positivity is related to the fact that the characteristic function of a random variable, when raised to the ${p^{th}}$ power, need not be a characteristic function of any random variable once ${p}$ is no longer a natural number: “fractional convolution” does not preserve positivity!) However, one vestige of positivity remains: if ${F: \Omega \rightarrow {\bf R}}$ is non-negative, then so is

$\displaystyle \int_{\Omega^p} [F]_{1 \rightarrow p}\ d\mu^p = p (\int_\Omega F\ d\mu) (\int_\Omega\ d\mu)^{p-1}.$
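The failure of positivity above is easy to exhibit concretely (a sketch in Python, taking ${X}$ uniform on ${\{-1,+1\}}$, so that ${X}$ has mean zero, ${X^2 = 1}$, and ${\mathbf{Var}(X^2) = 0}$):

```python
import itertools

def fourth_moment(p):
    """p*Var(X^2) + p(3p-2)(E X^2)^2 for X uniform on {-1,+1} (mean zero)."""
    var_x2, ex2 = 0.0, 1.0
    return p * var_x2 + p * (3 * p - 2) * ex2 ** 2

# for natural p this matches the classical E (X_1+...+X_p)^4; here p = 3:
exact = sum(sum(s) ** 4 for s in itertools.product([-1, 1], repeat=3)) / 8
assert exact == fourth_moment(3.0)
# ...but for p < 2/3 the formal integral of a fourth power goes negative:
assert fourth_moment(0.5) < 0
```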

One can wonder what the point is to all of this abstract formalism and how it relates to the rest of mathematics. For me, this formalism originated implicitly in an old paper I wrote with Jon Bennett and Tony Carbery on the multilinear restriction and Kakeya conjectures, though we did not have a good language for working with it at the time, instead working first with the case of natural number exponents ${p}$ and appealing to a general extrapolation theorem to then obtain various identities in the fractional ${p}$ case. The connection between these fractional dimensional integrals and more traditional integrals ultimately arises from the simple identity

$\displaystyle (\int_\Omega\ d\mu)^p = \int_{\Omega^p}\ d\mu^p$

(where the right-hand side should be viewed as the fractional dimensional integral of the unit ${[1]_{0 \rightarrow p}}$ against ${\mu^p}$). As such, one can manipulate ${p^{th}}$ powers of ordinary integrals using the machinery of fractional dimensional integrals. A key lemma in this regard is

Lemma 3 (Differentiation formula) Suppose that a positive measure ${\mu = \mu(t)}$ on ${\Omega}$ depends on some parameter ${t}$ and varies by the formula

$\displaystyle \frac{d}{dt} \mu(t) = a(t) \mu(t) \ \ \ \ \ (9)$

for some function ${a(t): \Omega \rightarrow {\bf R}}$. Let ${p}$ be any real or complex number. Then, assuming sufficient smoothness and integrability of all quantities involved, we have

$\displaystyle \frac{d}{dt} \int_{\Omega^p} F^{(p)}\ d\mu(t)^p = \int_{\Omega^p} F^{(p)} [a(t)]_{1 \rightarrow p}\ d\mu(t)^p \ \ \ \ \ (10)$

for all ${F^{(p)} \in L(\Omega^p)_{sym}}$ that are independent of ${t}$. If we allow ${F^{(p)}(t)}$ to now depend on ${t}$ also, then we have the more general total derivative formula

$\displaystyle \frac{d}{dt} \int_{\Omega^p} F^{(p)}(t)\ d\mu(t)^p \ \ \ \ \ (11)$

$\displaystyle = \int_{\Omega^p} \frac{d}{dt} F^{(p)}(t) + F^{(p)}(t) [a(t)]_{1 \rightarrow p}\ d\mu(t)^p,$

again assuming sufficient amounts of smoothness and regularity.

Proof: We just prove (10), as (11) then follows by the same argument used to prove the usual product rule. By linearity it suffices to verify this identity in the case ${F^{(p)} = [F^{(k)}]_{k \rightarrow p}}$ for some symmetric function ${F^{(k)} \in L(\Omega^k)_{sym}}$ for a natural number ${k}$. By (6), the left-hand side of (10) is then

$\displaystyle \frac{d}{dt} [\binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu(t)^k) (\int_\Omega\ d\mu(t))^{p-k}]. \ \ \ \ \ (12)$

Differentiating under the integral sign using (9) we have

$\displaystyle \frac{d}{dt} \int_\Omega\ d\mu(t) = \int_\Omega\ a(t)\ d\mu(t)$

and similarly

$\displaystyle \frac{d}{dt} \int_{\Omega^k} F^{(k)}\ d\mu(t)^k = \int_{\Omega^k} F^{(k)}(a_1+\dots+a_k)\ d\mu(t)^k$

where ${a_1,\dots,a_k}$ are the standard ${k}$ copies of ${a = a(t)}$ on ${\Omega^k}$:

$\displaystyle a_j(\omega_1,\dots,\omega_k) := a(\omega_j).$

By the product rule, we can thus expand (12) as

$\displaystyle \binom{p}{k} (\int_{\Omega^k} F^{(k)}(a_1+\dots+a_k)\ d\mu^k ) (\int_\Omega\ d\mu)^{p-k}$

$\displaystyle + \binom{p}{k} (p-k) (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ a\ d\mu) (\int_\Omega\ d\mu)^{p-k-1}$

where we have suppressed the dependence on ${t}$ for brevity. Since ${\binom{p}{k} (p-k) = \binom{p}{k+1} (k+1)}$, we can write this expression using (6) as

$\displaystyle \int_{\Omega^p} [F^{(k)} (a_1 + \dots + a_k)]_{k \rightarrow p} + [ F^{(k)} \ast a ]_{k+1 \rightarrow p}\ d\mu^p$

where ${F^{(k)} \ast a \in L(\Omega^{k+1})_{sym}}$ is the symmetric function

$\displaystyle F^{(k)} \ast a(\omega_1,\dots,\omega_{k+1}) := \sum_{j=1}^{k+1} F^{(k)}(\omega_1,\dots,\omega_{j-1},\omega_{j+1} \dots \omega_{k+1}) a(\omega_j).$

But from (2) one has

$\displaystyle [F^{(k)} (a_1 + \dots + a_k)]_{k \rightarrow p} + [ F^{(k)} \ast a ]_{k+1 \rightarrow p} = [F^{(k)}]_{k \rightarrow p} [a]_{1 \rightarrow p}$

and the claim follows. $\Box$

Remark 4 It is also instructive to prove this lemma in the special case when ${p}$ is a natural number, in which case the fractional dimensional integral ${\int_{\Omega^p} F^{(p)}\ d\mu(t)^p}$ can be interpreted as a classical integral. In this case, the identity (10) is immediate from applying the product rule to (9) to conclude that

$\displaystyle \frac{d}{dt} d\mu(t)^p = [a(t)]_{1 \rightarrow p} d\mu(t)^p.$

One could in fact derive (10) for arbitrary real or complex ${p}$ from the case when ${p}$ is a natural number by an extrapolation argument; see the appendix of my paper with Bennett and Carbery for details.
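For instance, in the first non-trivial case ${p=2}$, the identity (10) simply asserts (after expanding ${[a(t)]_{1 \rightarrow 2}(\omega_1,\omega_2) = a(t)(\omega_1) + a(t)(\omega_2)}$) that

$\displaystyle \frac{d}{dt} \int_{\Omega^2} F^{(2)}(\omega_1,\omega_2)\ d\mu(t)(\omega_1) d\mu(t)(\omega_2) = \int_{\Omega^2} F^{(2)}(\omega_1,\omega_2) (a(t)(\omega_1) + a(t)(\omega_2))\ d\mu(t)(\omega_1) d\mu(t)(\omega_2),$

which follows from (9) by differentiating under the integral sign in each of the two copies of ${\mu(t)}$ separately.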

Let us give a simple PDE application of this lemma as an illustration:

Proposition 5 (Heat flow monotonicity) Let ${u: [0,+\infty) \times {\bf R}^d \rightarrow {\bf R}}$ be a solution to the heat equation ${u_t = \Delta u}$ with initial data ${\mu_0}$ a rapidly decreasing finite non-negative Radon measure, or more explicitly

$\displaystyle u(t,x) = \frac{1}{(4\pi t)^{d/2}} \int_{{\bf R}^d} e^{-|x-y|^2/4t}\ d\mu_0(y)$

for all ${t>0}$. Then for any ${p>0}$, the quantity

$\displaystyle Q_p(t) := t^{\frac{d}{2} (p-1)} \int_{{\bf R}^d} u(t,x)^p\ dx$

is monotone non-decreasing in ${t \in (0,+\infty)}$ for ${1 < p < \infty}$, constant for ${p=1}$, and monotone non-increasing for ${0 < p < 1}$.
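As a quick numerical sanity check of this monotonicity (not part of the proof), one can take ${d=1}$ and initial data consisting of two unit point masses at ${0}$ and ${1}$, so that ${u}$ is an explicit sum of two Gaussians, and evaluate ${Q_p(t)}$ by quadrature. The grid, cutoffs, and sample times in the sketch below are ad hoc choices of mine:

```python
import numpy as np

# Numerical check of heat-flow monotonicity in d = 1 (illustration only).
# Initial data: two unit point masses at 0 and 1, so u is a sum of Gaussians.
x = np.linspace(-50.0, 50.0, 20001)
dx = x[1] - x[0]

def u(t):
    return (4 * np.pi * t) ** -0.5 * (
        np.exp(-x**2 / (4 * t)) + np.exp(-((x - 1) ** 2) / (4 * t)))

def Q(p, t, d=1):
    # Q_p(t) = t^{d(p-1)/2} * integral of u(t,x)^p dx (Riemann sum)
    return t ** (d / 2 * (p - 1)) * float(np.sum(u(t) ** p) * dx)

ts = [0.5, 1.0, 2.0, 4.0]
q2 = [Q(2.0, t) for t in ts]   # p > 1: should be non-decreasing in t
qh = [Q(0.5, t) for t in ts]   # 0 < p < 1: should be non-increasing in t
q1 = [Q(1.0, t) for t in ts]   # p = 1: conserved total mass, = 2
print(q2, qh, q1)
```

For this two-bump data the variance term in the proof is strictly positive, so one in fact sees strict monotonicity at these sample times.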

Proof: By a limiting argument we may assume that ${d\mu_0}$ is absolutely continuous, with Radon-Nikodym derivative a test function; this is more than enough regularity to justify the arguments below.

For any ${(t,x) \in (0,+\infty) \times {\bf R}^d}$, let ${\mu(t,x)}$ denote the Radon measure

$\displaystyle d\mu(t,x)(y) := \frac{1}{(4\pi)^{d/2}} e^{-|x-y|^2/4t}\ d\mu_0(y).$

Then the quantity ${Q_p(t)}$ can be written as a fractional dimensional integral

$\displaystyle Q_p(t) = t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p}\ d\mu(t,x)^p\ dx.$

Observe that

$\displaystyle \frac{\partial}{\partial t} d\mu(t,x) = \frac{|x-y|^2}{4t^2} d\mu(t,x)$

and thus by Lemma 3 and the product rule

$\displaystyle \frac{d}{dt} Q_p(t) = -\frac{d}{2t} Q_p(t) + t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p} [\frac{|x-y|^2}{4t^2}]_{1 \rightarrow p} d\mu(t,x)^p\ dx \ \ \ \ \ (13)$

where we use ${y}$ for the variable of integration in the factor space ${{\bf R}^d}$ of ${({\bf R}^d)^p}$.

To simplify this expression we will take advantage of integration by parts in the ${x}$ variable. Specifically, in any direction ${x_j}$, we have

$\displaystyle \frac{\partial}{\partial x_j} d\mu(t,x) = -\frac{x_j-y_j}{2t} d\mu(t,x)$

and hence by Lemma 3

$\displaystyle \frac{\partial}{\partial x_j} \int_{({\bf R}^d)^p}\ d\mu(t,x)^p\ dx = - \int_{({\bf R}^d)^p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx.$

Multiplying by ${x_j}$ and integrating by parts, we see that

$\displaystyle d\, t^{d/2} Q_p(t) = \int_{{\bf R}^d} \int_{({\bf R}^d)^p} x_j [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx$

where we use the Einstein summation convention in ${j}$. Similarly, if ${F_j(y)}$ is any reasonable function depending only on ${y}$, we have

$\displaystyle \frac{\partial}{\partial x_j} \int_{({\bf R}^d)^p}[F_j(y)]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx$

$\displaystyle = - \int_{({\bf R}^d)^p} [F_j(y)]_{1 \rightarrow p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx$

and hence on integration by parts

$\displaystyle 0 = \int_{{\bf R}^d} \int_{({\bf R}^d)^p} [F_j(y)]_{1 \rightarrow p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx.$

We conclude that

$\displaystyle \frac{d}{2t} Q_p(t) = t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p} (x_j - [F_j(y)]_{1 \rightarrow p}) [\frac{x_j-y_j}{4t^2}]_{1 \rightarrow p} d\mu(t,x)^p\ dx$

and thus by (13)

$\displaystyle \frac{d}{dt} Q_p(t) = \frac{1}{4t^{\frac{d}{2}+2}} \int_{{\bf R}^d} \int_{({\bf R}^d)^p}$

$\displaystyle [(x_j-y_j)(x_j-y_j)]_{1 \rightarrow p} - (x_j - [F_j(y)]_{1 \rightarrow p}) [x_j - y_j]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx.$

The choice of ${F_j}$ that then achieves the most cancellation turns out to be ${F_j(y) = \frac{1}{p} y_j}$ (this cancels the terms that are linear or quadratic in the ${x_j}$), so that ${x_j - [F_j(y)]_{1 \rightarrow p} = \frac{1}{p} [x_j - y_j]_{1 \rightarrow p}}$. Repeating the calculations establishing (7), one has

$\displaystyle \int_{({\bf R}^d)^p} [(x_j-y_j)(x_j-y_j)]_{1 \rightarrow p}\ d\mu^p = p \mathop{\bf E} |x-Y|^2 (\int_{{\bf R}^d}\ d\mu)^{p}$

and

$\displaystyle \int_{({\bf R}^d)^p} [x_j-y_j]_{1 \rightarrow p} [x_j-y_j]_{1 \rightarrow p}\ d\mu^p$

$\displaystyle = (p \mathbf{Var}(x-Y) + p^2 |\mathop{\bf E} x-Y|^2) (\int_{{\bf R}^d}\ d\mu)^{p}$

where ${Y}$ is the random variable drawn from ${{\bf R}^d}$ with the normalised probability measure ${\mu / \int_{{\bf R}^d}\ d\mu}$. Since ${\mathop{\bf E} |x-Y|^2 = \mathbf{Var}(x-Y) + |\mathop{\bf E} x-Y|^2}$, one thus has

$\displaystyle \frac{d}{dt} Q_p(t) = (p-1) \frac{1}{4t^{\frac{d}{2}+2}} \int_{{\bf R}^d} \mathbf{Var}(x-Y) (\int_{{\bf R}^d}\ d\mu)^{p}\ dx. \ \ \ \ \ (14)$

This expression is clearly non-negative for ${p>1}$, zero for ${p=1}$, and non-positive for ${0 < p < 1}$, giving the claim. (One could simplify ${\mathbf{Var}(x-Y)}$ here as ${\mathbf{Var}(Y)}$ if desired, though it is not strictly necessary to do so for the proof.) $\Box$

Remark 6 As with Remark 4, one can also establish the identity (14) first for natural numbers ${p}$ by direct computation avoiding the theory of fractional dimensional integrals, and then extrapolate to the case of more general values of ${p}$. This particular identity is also simple enough that it can be directly established by integration by parts without much difficulty, even for fractional values of ${p}$.

A more complicated version of this argument establishes the non-endpoint multilinear Kakeya inequality (without any logarithmic loss in a scale parameter ${R}$); this was established in my previous paper with Jon Bennett and Tony Carbery, but using the “natural number ${p}$ first” approach rather than using the current formalism of fractional dimensional integration. However, the arguments can be translated into this formalism without much difficulty; we do so below the fold. (To simplify the exposition slightly we will not address issues of establishing enough regularity and integrability to justify all the manipulations, though in practice this can be done by standard limiting arguments.)

Joni Teräväinen and I have just uploaded to the arXiv our paper “Value patterns of multiplicative functions and related sequences“, submitted to Forum of Mathematics, Sigma. This paper explores how to use recent technology on correlations of multiplicative (or nearly multiplicative) functions, such as the “entropy decrement method”, in conjunction with techniques from additive combinatorics, to establish new results on the sign patterns of functions such as the Liouville function ${\lambda}$. For instance, with regard to length 5 sign patterns

$\displaystyle (\lambda(n+1),\dots,\lambda(n+5)) \in \{-1,+1\}^5$

of the Liouville function, we can now show that at least ${24}$ of the ${32}$ possible sign patterns in ${\{-1,+1\}^5}$ occur with positive upper density. (Conjecturally, all of them do so, and this is known for all shorter sign patterns, but unfortunately ${24}$ seems to be the limitation of our methods.)

The Liouville function can be written as ${\lambda(n) = e^{2\pi i \Omega(n)/2}}$, where ${\Omega(n)}$ is the number of prime factors of ${n}$ (counting multiplicity). One can also consider the variant ${\lambda_3(n) = e^{2\pi i \Omega(n)/3}}$, which is a completely multiplicative function taking values in the cube roots of unity ${\{1, \omega, \omega^2\}}$. Here we are able to show that all ${27}$ sign patterns in ${\{1,\omega,\omega^2\}^3}$ occur with positive lower density as sign patterns ${(\lambda_3(n+1), \lambda_3(n+2), \lambda_3(n+3))}$ of this function. The analogous result for ${\lambda}$ was already known (see this paper of Matomäki, Radziwiłł, and myself), and in that case it is even known that all sign patterns occur with equal logarithmic density ${1/8}$ (from this paper of myself and Teräväinen), but these techniques barely fail to handle the ${\lambda_3}$ case by themselves (largely because the “parity” arguments used in the case of the Liouville function no longer control three-point correlations in the ${\lambda_3}$ case) and an additional additive combinatorial tool is needed. After applying existing technology (such as entropy decrement methods), the problem roughly speaking reduces to locating patterns ${a \in A_1, a+r \in A_2, a+2r \in A_3}$ for a certain partition ${G = A_1 \cup A_2 \cup A_3}$ of a compact abelian group ${G}$ (think for instance of the unit circle ${G={\bf R}/{\bf Z}}$, although the general case is a bit more complicated, in particular if ${G}$ is disconnected then there is a certain “coprimality” constraint on ${r}$, also we can allow the ${A_1,A_2,A_3}$ to be replaced by any ${A_{c_1}, A_{c_2}, A_{c_3}}$ with ${c_1+c_2+c_3}$ divisible by ${3}$), with each of the ${A_i}$ having measure ${1/3}$.
An inequality of Kneser just barely fails to guarantee the existence of such patterns, but by using an inverse theorem for Kneser’s inequality in this previous paper of mine we are able to identify precisely the obstruction for this method to work, and rule it out by an ad hoc method.
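As an empirical illustration (of occurrence only, not of the density claims proved in the paper), one can sieve out ${\Omega(n)}$ and check that all ${27}$ length-3 patterns of ${\lambda_3}$, and indeed all ${32}$ length-5 sign patterns of ${\lambda}$, already occur below ${10^5}$; the sieve and the cutoff are my own choices:

```python
# Sieve Omega(n) (number of prime factors with multiplicity) up to N, then
# record which consecutive value patterns of lambda_3 and lambda occur.
N = 100000
Om = [0] * (N + 1)
for p in range(2, N + 1):
    if Om[p] == 0:                      # p is prime
        for m in range(p, N + 1, p):
            q = m
            while q % p == 0:           # add the multiplicity of p in m
                Om[m] += 1
                q //= p

# lambda_3 patterns, encoded by Omega mod 3; lambda patterns by (-1)^Omega
pat3 = {tuple(Om[n + i] % 3 for i in (1, 2, 3)) for n in range(N - 3)}
pat5 = {tuple((-1) ** Om[n + i] for i in (1, 2, 3, 4, 5)) for n in range(N - 5)}
print(len(pat3), len(pat5))
```

All ${32}$ length-5 patterns of ${\lambda}$ show up in this range, consistent with the conjecture that all of them occur with positive density even though only ${24}$ are treated by the methods of the paper.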

The same techniques turn out to also make progress on some conjectures of Erdös-Pomerance and Hildebrand regarding patterns of the largest prime factor ${P^+(n)}$ of a natural number ${n}$. For instance, we improve results of Erdös-Pomerance and of Balog demonstrating that the inequalities

$\displaystyle P^+(n+1) < P^+(n+2) < P^+(n+3)$

and

$\displaystyle P^+(n+1) > P^+(n+2) > P^+(n+3)$

each hold for infinitely many ${n}$, by demonstrating the stronger claims that the inequalities

$\displaystyle P^+(n+1) < P^+(n+2) < P^+(n+3) > P^+(n+4)$

and

$\displaystyle P^+(n+1) > P^+(n+2) > P^+(n+3) < P^+(n+4)$

each hold for a set of ${n}$ of positive lower density. As a variant, we also show that we can find a positive density set of ${n}$ for which

$\displaystyle P^+(n+1), P^+(n+2), P^+(n+3) > n^\gamma$

for any fixed ${\gamma < e^{-1/3} = 0.7165\dots}$ (this improves on a previous result of Hildebrand with ${e^{-1/3}}$ replaced by ${e^{-1/2} = 0.6065\dots}$). A number of other results of this type are also obtained in this paper.
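The four-term inequalities are easy to explore numerically. The following sketch (with an arbitrary cutoff ${N = 10^5}$, my own choice) sieves the largest prime factor ${P^+(n)}$ and counts how often each pattern occurs; if the relative orderings behaved like those of independent random quantities, each pattern would hold for about ${1/8}$ of all ${n}$:

```python
# Sieve P[m] = largest prime factor of m, by letting each prime p overwrite
# P[m] for its multiples; since p increases, the last write is the largest.
N = 100000
P = [0] * (N + 1)
for p in range(2, N + 1):
    if P[p] == 0:                      # p is prime
        for m in range(p, N + 1, p):
            P[m] = p

# count the two four-term patterns from the text (chained comparisons)
up   = sum(P[n+1] < P[n+2] < P[n+3] > P[n+4] for n in range(1, N - 4))
down = sum(P[n+1] > P[n+2] > P[n+3] < P[n+4] for n in range(1, N - 4))
print(up / N, down / N)
```

Of course such counts say nothing rigorous about lower density; they merely illustrate that both patterns are plentiful in an initial segment.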

In order to obtain these sorts of results, one needs to extend the entropy decrement technology from the setting of multiplicative functions to that of what we call “weakly stable sets” – sets ${A}$ which have some multiplicative structure, in the sense that (roughly speaking) there is a set ${B}$ such that for all small primes ${p}$, the statements ${n \in A}$ and ${pn \in B}$ are roughly equivalent to each other. For instance, if ${A}$ is a level set ${A = \{ n: \Omega(n) = 0 \hbox{ mod } 3 \}}$, one would take ${B = \{ n: \Omega(n) = 1 \hbox{ mod } 3 \}}$; if instead ${A}$ is a set of the form ${\{ n: P^+(n) \geq n^\gamma\}}$, then one can take ${B=A}$. When one has such a situation, then very roughly speaking, the entropy decrement argument allows one to compare a one-parameter correlation such as

$\displaystyle {\bf E}_n 1_A(n+1) 1_A(n+2) 1_A(n+3)$

with a two-parameter correlation such as

$\displaystyle {\bf E}_n {\bf E}_p 1_B(n+p) 1_B(n+2p) 1_B(n+3p)$

(where we will be deliberately vague as to how we are averaging over ${n}$ and ${p}$), and then the use of the “linear equations in primes” technology of Ben Green, Tamar Ziegler, and myself allows one to replace this average in turn by something like

$\displaystyle {\bf E}_n {\bf E}_r 1_B(n+r) 1_B(n+2r) 1_B(n+3r)$

where ${r}$ is constrained to be not divisible by small primes but is otherwise quite arbitrary. This latter average can then be attacked by tools from additive combinatorics, such as translation to a continuous group model (using for instance the Furstenberg correspondence principle) followed by tools such as Kneser’s inequality (or inverse theorems to that inequality).
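For the level-set example of a weakly stable set mentioned above, the “rough equivalence” is in fact exact, since ${\Omega(pn) = \Omega(n) + 1}$ for every prime ${p}$; a minimal check (with arbitrary cutoffs of my choosing):

```python
# Omega(n): number of prime factors counted with multiplicity (trial division).
def Omega(n):
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            count += 1
            n //= d
        d += 1
    return count + (1 if n > 1 else 0)

# A = {n : Omega(n) = 0 mod 3}, B = {n : Omega(n) = 1 mod 3}:
# n lies in A exactly when p*n lies in B, for every prime p.
for n in range(1, 2001):
    for p in (2, 3, 5, 7, 11):
        assert (Omega(n) % 3 == 0) == (Omega(p * n) % 3 == 1)
```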

(This post is mostly intended for my own reference, as I found myself repeatedly looking up several conversions between polynomial bases on various occasions.)

Let ${\mathrm{Poly}_{\leq n}}$ denote the vector space of polynomials ${P:{\bf R} \rightarrow {\bf R}}$ of one variable ${x}$ with real coefficients of degree at most ${n}$. This is a vector space of dimension ${n+1}$, and the sequence of these spaces forms a filtration:

$\displaystyle \mathrm{Poly}_{\leq 0} \subset \mathrm{Poly}_{\leq 1} \subset \mathrm{Poly}_{\leq 2} \subset \dots$

A standard basis for these vector spaces is given by the monomials ${x^0, x^1, x^2, \dots}$: every polynomial ${P(x)}$ in ${\mathrm{Poly}_{\leq n}}$ can be expressed uniquely as a linear combination of the first ${n+1}$ monomials ${x^0, x^1, \dots, x^n}$. More generally, if one has any sequence ${Q_0(x), Q_1(x), Q_2(x), \dots}$ of polynomials, with each ${Q_n}$ of degree exactly ${n}$, then an easy induction shows that ${Q_0(x),\dots,Q_n(x)}$ forms a basis for ${\mathrm{Poly}_{\leq n}}$.

In particular, if we have two such sequences ${Q_0(x), Q_1(x), Q_2(x),\dots}$ and ${R_0(x), R_1(x), R_2(x), \dots}$ of polynomials, with each ${Q_n}$ of degree ${n}$ and each ${R_k}$ of degree ${k}$, then ${Q_n}$ must be expressible uniquely as a linear combination of the polynomials ${R_0,R_1,\dots,R_n}$, thus we have an identity of the form

$\displaystyle Q_n(x) = \sum_{k=0}^n c_{QR}(n,k) R_k(x)$

for some change of basis coefficients ${c_{QR}(n,k) \in {\bf R}}$. These coefficients describe how to convert a polynomial expressed in the ${Q_n}$ basis into a polynomial expressed in the ${R_k}$ basis.

Many standard combinatorial quantities ${c(n,k)}$ involving two natural numbers ${0 \leq k \leq n}$ can be interpreted as such change of basis coefficients. The most familiar example is given by the binomial coefficients ${\binom{n}{k}}$, which measure the conversion from the shifted monomial basis ${(x+1)^n}$ to the monomial basis ${x^k}$, thanks to (a special case of) the binomial formula:

$\displaystyle (x+1)^n = \sum_{k=0}^n \binom{n}{k} x^k,$

thus for instance

$\displaystyle (x+1)^3 = \binom{3}{0} x^0 + \binom{3}{1} x^1 + \binom{3}{2} x^2 + \binom{3}{3} x^3$

$\displaystyle = 1 + 3x + 3x^2 + x^3.$

More generally, for any shift ${h}$, the conversion from ${(x+h)^n}$ to ${x^k}$ is measured by the coefficients ${h^{n-k} \binom{n}{k}}$, thanks to the general case of the binomial formula.

But there are other bases of interest too. For instance if one uses the falling factorial basis

$\displaystyle (x)_n := x (x-1) \dots (x-n+1)$

then the conversion from falling factorials to monomials is given by the Stirling numbers of the first kind ${s(n,k)}$:

$\displaystyle (x)_n = \sum_{k=0}^n s(n,k) x^k,$

thus for instance

$\displaystyle (x)_3 = s(3,0) x^0 + s(3,1) x^1 + s(3,2) x^2 + s(3,3) x^3$

$\displaystyle = 0 + 2 x - 3x^2 + x^3$

and the conversion back is given by the Stirling numbers of the second kind ${S(n,k)}$:

$\displaystyle x^n = \sum_{k=0}^n S(n,k) (x)_k$

thus for instance

$\displaystyle x^3 = S(3,0) (x)_0 + S(3,1) (x)_1 + S(3,2) (x)_2 + S(3,3) (x)_3$

$\displaystyle = 0 + x + 3 x(x-1) + x(x-1)(x-2).$

If one uses the binomial functions ${\binom{x}{n} = \frac{1}{n!} (x)_n}$ as a basis instead of the falling factorials, one of course can rewrite these conversions as

$\displaystyle \binom{x}{n} = \sum_{k=0}^n \frac{1}{n!} s(n,k) x^k$

and

$\displaystyle x^n = \sum_{k=0}^n k! S(n,k) \binom{x}{k}$

thus for instance

$\displaystyle \binom{x}{3} = 0 + \frac{1}{3} x - \frac{1}{2} x^2 + \frac{1}{6} x^3$

and

$\displaystyle x^3 = 0 + \binom{x}{1} + 6 \binom{x}{2} + 6 \binom{x}{3}.$
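The worked examples above are easy to verify mechanically by multiplying out the falling factorials; here is a small sketch (representing polynomials as coefficient lists, lowest degree first):

```python
# Polynomials as coefficient lists, index = degree.
def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def falling(n):
    # coefficients of (x)_n = x (x-1) ... (x-n+1)
    c = [1]
    for i in range(n):
        c = polymul(c, [-i, 1])
    return c

# (x)_3 = 2x - 3x^2 + x^3, recovering s(3,k) = 0, 2, -3, 1
assert falling(3) == [0, 2, -3, 1]

# x^3 = S(3,1) (x)_1 + S(3,2) (x)_2 + S(3,3) (x)_3, with S(3,k) = 0, 1, 3, 1
S3 = [0, 1, 3, 1]
acc = [0, 0, 0, 0]
for k in range(4):
    fk = falling(k) + [0] * (3 - k)    # pad to degree 3
    acc = [a + S3[k] * c for a, c in zip(acc, fk)]
assert acc == [0, 0, 0, 1]             # i.e. the monomial x^3
```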

As a slight variant, if one instead uses rising factorials

$\displaystyle (x)^n := x (x+1) \dots (x+n-1)$

then the conversion to monomials yields the unsigned Stirling numbers ${|s(n,k)|}$ of the first kind:

$\displaystyle (x)^n = \sum_{k=0}^n |s(n,k)| x^k$

thus for instance

$\displaystyle (x)^3 = 0 + 2x + 3x^2 + x^3.$

One final basis comes from the polylogarithm functions

$\displaystyle \mathrm{Li}_{-n}(x) := \sum_{j=1}^\infty j^n x^j.$

For instance one has

$\displaystyle \mathrm{Li}_1(x) = -\log(1-x)$

$\displaystyle \mathrm{Li}_0(x) = \frac{x}{1-x}$

$\displaystyle \mathrm{Li}_{-1}(x) = \frac{x}{(1-x)^2}$

$\displaystyle \mathrm{Li}_{-2}(x) = \frac{x}{(1-x)^3} (1+x)$

$\displaystyle \mathrm{Li}_{-3}(x) = \frac{x}{(1-x)^4} (1+4x+x^2)$

$\displaystyle \mathrm{Li}_{-4}(x) = \frac{x}{(1-x)^5} (1+11x+11x^2+x^3)$

and more generally one has

$\displaystyle \mathrm{Li}_{-n-1}(x) = \frac{x}{(1-x)^{n+2}} E_n(x)$

for all natural numbers ${n}$ and some polynomial ${E_n}$ of degree ${n}$ (the Eulerian polynomials), which when converted to the monomial basis yields the (shifted) Eulerian numbers

$\displaystyle E_n(x) = \sum_{k=0}^n A(n+1,k) x^k.$

For instance

$\displaystyle E_3(x) = A(4,0) x^0 + A(4,1) x^1 + A(4,2) x^2 + A(4,3) x^3$

$\displaystyle = 1 + 11x + 11x^2 + x^3.$

These particular coefficients also have useful combinatorial interpretations. For instance:

• The binomial coefficient ${\binom{n}{k}}$ is of course the number of ${k}$-element subsets of ${\{1,\dots,n\}}$.
• The unsigned Stirling numbers ${|s(n,k)|}$ of the first kind are the number of permutations of ${\{1,\dots,n\}}$ with exactly ${k}$ cycles. The signed Stirling numbers ${s(n,k)}$ are then given by the formula ${s(n,k) = (-1)^{n-k} |s(n,k)|}$.
• The Stirling numbers ${S(n,k)}$ of the second kind are the number of ways to partition ${\{1,\dots,n\}}$ into ${k}$ non-empty subsets.
• The Eulerian numbers ${A(n,k)}$ are the number of permutations of ${\{1,\dots,n\}}$ with exactly ${k}$ ascents.
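These interpretations can be confirmed by brute force for small ${n}$; the sketch below counts permutations of a ${5}$-element set by number of cycles and by number of ascents, and counts set partitions via surjections (all naming conventions here are my own):

```python
from itertools import permutations, product
from math import factorial

n = 5

def cycles(perm):
    # number of cycles of perm, viewed as the map i -> perm[i]
    seen, c = set(), 0
    for i in range(n):
        if i not in seen:
            c += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return c

def ascents(perm):
    return sum(perm[i] < perm[i + 1] for i in range(n - 1))

cyc, asc = [0] * (n + 1), [0] * n
for p in permutations(range(n)):
    cyc[cycles(p)] += 1
    asc[ascents(p)] += 1

assert cyc[1:] == [24, 50, 35, 10, 1]   # |s(5,k)| for k = 1..5
assert asc == [1, 26, 66, 26, 1]        # Eulerian numbers A(5,k), k = 0..4

def S(m, k):
    # Stirling numbers of the second kind: surjections [m] -> [k], over k!
    onto = sum(1 for f in product(range(k), repeat=m) if len(set(f)) == k)
    return onto // factorial(k)

assert [S(5, k) for k in range(1, 6)] == [1, 15, 25, 10, 1]
```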

These coefficients behave similarly to each other in several ways. For instance, the binomial coefficients ${\binom{n}{k}}$ obey the well known Pascal identity

$\displaystyle \binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1}$

(with the convention that ${\binom{n}{k}}$ vanishes outside of the range ${0 \leq k \leq n}$). In a similar spirit, the unsigned Stirling numbers ${|s(n,k)|}$ of the first kind obey the identity

$\displaystyle |s(n+1,k)| = n |s(n,k)| + |s(n,k-1)|$

and the signed counterparts ${s(n,k)}$ obey the identity

$\displaystyle s(n+1,k) = -n s(n,k) + s(n,k-1).$

The Stirling numbers of the second kind ${S(n,k)}$ obey the identity

$\displaystyle S(n+1,k) = k S(n,k) + S(n,k-1)$

and the Eulerian numbers ${A(n,k)}$ obey the identity

$\displaystyle A(n+1,k) = (k+1) A(n,k) + (n-k+1) A(n,k-1).$
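All four recurrences share the shape ${T(n+1,k) = \alpha(n,k)\, T(n,k) + \beta(n,k)\, T(n,k-1)}$, so a single table-builder covers them; the sketch below builds each table from ${T(0,0)=1}$ (the usual convention) and checks the rows against known values:

```python
def table(rows, rec):
    # T[n][k] for 0 <= k <= n+..., from T[0] = [1] and
    # T(n+1, k) = rec(n, k, T(n, k), T(n, k-1)), out-of-range entries = 0.
    T = [[1]]
    for n in range(rows):
        prev = T[n]
        row = []
        for k in range(n + 2):
            a = prev[k] if k <= n else 0
            b = prev[k - 1] if k >= 1 else 0
            row.append(rec(n, k, a, b))
        T.append(row)
    return T

binom = table(4, lambda n, k, a, b: a + b)                        # Pascal
s1    = table(4, lambda n, k, a, b: n * a + b)                    # |s(n,k)|
s2    = table(4, lambda n, k, a, b: k * a + b)                    # S(n,k)
eul   = table(4, lambda n, k, a, b: (k + 1) * a + (n - k + 1) * b)  # A(n,k)

assert binom[4] == [1, 4, 6, 4, 1]
assert s1[4] == [0, 6, 11, 6, 1]
assert s2[4] == [0, 1, 7, 6, 1]
assert eul[4][:4] == [1, 11, 11, 1]     # matches E_3(x) = 1 + 11x + 11x^2 + x^3
```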

Let ${G = (G,+)}$, ${H = (H,+)}$ be additive groups (i.e., groups with an abelian group law, written additively). A map ${f: G \rightarrow H}$ is a homomorphism if one has

$\displaystyle f(x+y) - f(x) - f(y) = 0$

for all ${x,y \in G}$. A map ${f: G \rightarrow H}$ is an affine homomorphism if one has

$\displaystyle f(x_1) - f(x_2) + f(x_3) - f(x_4) = 0 \ \ \ \ \ (1)$

for all additive quadruples ${(x_1,x_2,x_3,x_4)}$ in ${G}$, by which we mean that ${x_1,x_2,x_3,x_4 \in G}$ and ${x_1-x_2+x_3-x_4=0}$. The two notions are closely related; it is easy to verify that ${f}$ is an affine homomorphism if and only if ${f}$ is the sum of a homomorphism and a constant.

Now suppose that ${H}$ also has a translation-invariant metric ${d}$. A map ${f: G \rightarrow H}$ is said to be a quasimorphism if one has

$\displaystyle f(x+y) - f(x) - f(y) = O(1) \ \ \ \ \ (2)$

for all ${x,y \in G}$, where ${O(1)}$ denotes a quantity at a bounded distance from the origin. Similarly, ${f: G \rightarrow H}$ is an affine quasimorphism if

$\displaystyle f(x_1) - f(x_2) + f(x_3) - f(x_4) = O(1) \ \ \ \ \ (3)$

for all additive quadruples ${(x_1,x_2,x_3,x_4)}$ in ${G}$. Again, one can check that ${f}$ is an affine quasimorphism if and only if it is the sum of a quasimorphism and a constant (with the implied constant of the quasimorphism controlled by the implied constant of the affine quasimorphism). (Since every constant is itself a quasimorphism, it is in fact the case that affine quasimorphisms are quasimorphisms, but now the implied constant in the latter is not controlled by the implied constant of the former.)

“Trivial” examples of quasimorphisms include the sum of a homomorphism and a bounded function. Are there others? In some cases, the answer is no. For instance, suppose we have a quasimorphism ${f: {\bf Z} \rightarrow {\bf R}}$. Iterating (2), we see that ${f(kx) = kf(x) + O(k)}$ for any integer ${x}$ and natural number ${k}$, which we can rewrite as ${f(kx)/kx = f(x)/x + O(1/|x|)}$ for non-zero ${x}$. Also, ${f}$ is Lipschitz. Sending ${k \rightarrow \infty}$, we can verify that ${f(x)/x}$ is a Cauchy sequence as ${x \rightarrow \infty}$ and thus tends to some limit ${\alpha}$; we have ${\alpha = f(x)/x + O(1/x)}$ for ${x \geq 1}$, hence ${f(x) = \alpha x + O(1)}$ for positive ${x}$, and then one can use (2) one last time to obtain ${f(x) = \alpha x + O(1)}$ for all ${x}$. Thus ${f}$ is the sum of the homomorphism ${x \mapsto \alpha x}$ and a bounded sequence.
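As a concrete (and still “trivial”) instance of this on ${{\bf Z}}$, consider ${f(n) = \lfloor \alpha n \rfloor}$ for an irrational ${\alpha}$ (an example of my choosing): its defect ${f(x+y)-f(x)-f(y)}$ always lies in ${\{0,1\}}$, and it differs from the homomorphism ${n \mapsto \alpha n}$ by a bounded function, matching the conclusion just derived. A quick randomised check:

```python
import math, random

random.seed(0)
alpha = math.sqrt(2)
f = lambda n: math.floor(alpha * n)

for _ in range(10000):
    x = random.randint(-10**6, 10**6)
    y = random.randint(-10**6, 10**6)
    # floor(a+b) - floor(a) - floor(b) lies in {0, 1} for any reals a, b
    assert f(x + y) - f(x) - f(y) in (0, 1)
    # f is the homomorphism n -> alpha*n plus a function bounded by 1
    assert abs(f(x) - alpha * x) <= 1
```

(The range of ${x,y}$ is kept small enough that floating-point error cannot flip a floor, since ${\sqrt{2} x}$ stays well separated from the integers at this scale.)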

In general, one can phrase this problem in the language of group cohomology (discussed in this previous post). Call a map ${f: G \rightarrow H}$ a ${0}$-cocycle. A ${1}$-cocycle is a map ${\rho: G \times G \rightarrow H}$ obeying the identity

$\displaystyle \rho(x,y+z) + \rho(y,z) = \rho(x,y) + \rho(x+y,z)$

for all ${x,y,z \in G}$. Given a ${0}$-cocycle ${f: G \rightarrow H}$, one can form its derivative ${\partial f: G \times G \rightarrow H}$ by the formula

$\displaystyle \partial f(x,y) := f(x+y)-f(x)-f(y).$

Such functions are called ${1}$-coboundaries. It is easy to see that the abelian group of ${1}$-coboundaries is a subgroup of the abelian group of ${1}$-cocycles. The quotient of these two groups is the first group cohomology of ${G}$ with coefficients in ${H}$, and is denoted ${H^1(G; H)}$.

If a ${0}$-cocycle is bounded then its derivative is a bounded ${1}$-coboundary. The quotient of the group of bounded ${1}$-cocycles by the derivatives of bounded ${0}$-cocycles is called the bounded first group cohomology of ${G}$ with coefficients in ${H}$, and is denoted ${H^1_b(G; H)}$. There is an obvious homomorphism ${\phi}$ from ${H^1_b(G; H)}$ to ${H^1(G; H)}$, formed by taking a coset of the space of derivatives of bounded ${0}$-cocycles, and enlarging it to a coset of the space of ${1}$-coboundaries. By chasing all the definitions, we see that every quasimorphism from ${G}$ to ${H}$ is the sum of a homomorphism and a bounded function if and only if this homomorphism ${\phi}$ is injective; in fact the quotient of the space of quasimorphisms by the sum of homomorphisms and bounded functions is isomorphic to the kernel of ${\phi}$.

In additive combinatorics, one is often working with functions which only have additive structure a fraction of the time, thus for instance (1) or (3) might only hold “${1\%}$ of the time”. This makes it somewhat difficult to directly interpret the situation in terms of group cohomology. However, thanks to tools such as the Balog-Szemerédi-Gowers lemma, one can upgrade this sort of ${1\%}$-structure to ${100\%}$-structure – at the cost of restricting the domain to a smaller set. Here I record one such instance of this phenomenon, thus giving a tentative link between additive combinatorics and group cohomology. (I thank Yuval Wigderson for suggesting the problem of locating such a link.)

Theorem 1 Let ${G = (G,+)}$, ${H = (H,+)}$ be additive groups with ${|G|=N}$, let ${S}$ be a subset of ${H}$, let ${E \subset G}$, and let ${f: E \rightarrow H}$ be a function such that

$\displaystyle f(x_1) - f(x_2) + f(x_3) - f(x_4) \in S$

for ${\geq K^{-1} N^3}$ additive quadruples ${(x_1,x_2,x_3,x_4)}$ in ${E}$. Then there exists a subset ${A}$ of ${G}$ containing ${0}$ with ${|A| \gg K^{-O(1)} N}$, a subset ${X}$ of ${H}$ with ${|X| \ll K^{O(1)}}$, and a function ${g: 4A-4A \rightarrow H}$ such that

$\displaystyle g(x+y) - g(x)-g(y) \in X + 496S - 496S \ \ \ \ \ (4)$

for all ${x, y \in 2A-2A}$ (thus, the derivative ${\partial g}$ takes values in ${X + 496 S - 496 S}$ on ${2A - 2A}$), and such that for each ${h \in A}$, one has

$\displaystyle f(x+h) - f(x) - g(h) \in 8S - 8S \ \ \ \ \ (5)$

for ${\gg K^{-O(1)} N}$ values of ${x \in E}$.

Presumably the constants ${8}$ and ${496}$ can be improved further, but we have not attempted to optimise these constants. We chose ${2A-2A}$ as the domain on which one has a bounded derivative, as one can use the Bogolyubov lemma (see e.g. Proposition 4.39 of my book with Van Vu) to find a large Bohr set inside ${2A-2A}$. In applications, the set ${S}$ need not have bounded size, or even bounded doubling; for instance, in the inverse ${U^4}$ theory over a small finite field ${F}$, one would be interested in the situation where ${H}$ is the group of ${n \times n}$ matrices with coefficients in ${F}$ (for some large ${n}$), and ${S}$ is the subset consisting of those matrices of rank bounded by some bound ${C = O(1)}$.

Proof: By hypothesis, there are ${\geq K^{-1} N^3}$ triples ${(h,x,y) \in G^3}$ such that ${x,x+h,y,y+h \in E}$ and

$\displaystyle f(x+h) - f(x) \in f(y+h)-f(y) + S. \ \ \ \ \ (6)$

Thus, there is a set ${B \subset G}$ with ${|B| \gg K^{-1} N}$ such that for all ${h \in B}$, one has (6) for ${\gg K^{-1} N^2}$ pairs ${(x,y) \in G^2}$ with ${x,x+h,y,y+h \in E}$; in particular, there exists ${y = y(h) \in E \cap (E-h)}$ such that (6) holds for ${\gg K^{-1} N}$ values of ${x \in E \cap (E-h)}$. Setting ${g_0(h) := f(y(h)+h) - f(y(h))}$, we conclude that for each ${h \in B}$, one has

$\displaystyle f(x+h) - f(x) \in g_0(h) + S \ \ \ \ \ (7)$

for ${\gg K^{-1} N}$ values of ${x \in E \cap (E-h)}$.

Consider the bipartite graph whose vertex sets are two copies of ${E}$, and ${x}$ and ${x+h}$ connected by a (directed) edge if ${h \in B}$ and (7) holds. Then this graph has ${\gg K^{-2} N^2}$ edges. Applying (a slight modification of) the Balog-Szemerédi-Gowers theorem (for instance by modifying the proof of Corollary 5.19 of my book with Van Vu), we can then find a subset ${C}$ of ${E}$ with ${|C| \gg K^{-O(1)} N}$ with the property that for any ${x_1,x_3 \in C}$, there exist ${\gg K^{-O(1)} N^3}$ triples ${(x_2,y_1,y_2) \in E^3}$ such that the edges ${(x_1,y_1), (x_2,y_1), (x_2,y_2), (x_3,y_2)}$ all lie in this bipartite graph. This implies that, for all ${x_1,x_3 \in C}$, there exist ${\gg K^{-O(1)} N^7}$ septuples ${(x_2,y_1,y_2,z_{11},z_{21},z_{22},z_{32}) \in G^7}$ obeying the constraints

$\displaystyle f(y_j) - f(x_i), f(y_j+z_{ij}) - f(x_i+z_{ij}) \in g_0(y_j-x_i) + S$

and ${y_j, x_i, y_j+z_{ij}, x_i+z_{ij} \in E}$ for ${ij = 11, 21, 22, 32}$. These constraints imply in particular that

$\displaystyle f(x_3) - f(x_1) \in f(x_3+z_{32}) - f(y_2+z_{32}) + f(y_2+z_{22}) - f(x_2+z_{22}) + f(x_2+z_{21}) - f(y_1+z_{21}) + f(y_1+z_{11}) - f(x_1+z_{11}) + 4S - 4S.$

Also observe that

$\displaystyle x_3 - x_1 = (x_3+z_{32}) - (y_2+z_{32}) + (y_2+z_{22}) - (x_2+z_{22}) + (x_2+z_{21}) - (y_1+z_{21}) + (y_1+z_{11}) - (x_1+z_{11}).$

Thus, if ${h \in G}$ and ${x_3,x_1 \in C}$ are such that ${x_3-x_1 = h}$, we see that

$\displaystyle f(w_1) - f(w_2) + f(w_3) - f(w_4) + f(w_5) - f(w_6) + f(w_7) - f(w_8) \in f(x_3) - f(x_1) + 4S - 4S$

for ${\gg K^{-O(1)} N^7}$ octuples ${(w_1,w_2,w_3,w_4,w_5,w_6,w_7,w_8) \in E^8}$ in the hyperplane

$\displaystyle h = w_1 - w_2 + w_3 - w_4 + w_5 - w_6 + w_7 - w_8.$

By the pigeonhole principle, this implies that for any fixed ${h \in G}$, there can be at most ${O(K^{O(1)})}$ sets of the form ${f(x_3)-f(x_1) + 4S-4S}$ with ${x_3-x_1=h}$, ${x_1,x_3 \in C}$ that are pairwise disjoint. Using a greedy algorithm, we conclude that there is a set ${W_h}$ of cardinality ${O(K^{O(1)})}$, such that each set ${f(x_3) - f(x_1) + 4S-4S}$ with ${x_3-x_1=h}$, ${x_1,x_3 \in C}$ intersects ${w+4S-4S}$ for some ${w \in W_h}$, or in other words that

$\displaystyle f(x_3) - f(x_1) \in W_{x_3-x_1} + 8S-8S \ \ \ \ \ (8)$

whenever ${x_1,x_3 \in C}$. In particular,

$\displaystyle \sum_{h \in G} \sum_{w \in W_h} | \{ (x_1,x_3) \in C^2: x_3-x_1 = h; f(x_3) - f(x_1) \in w + 8S-8S \}| \geq |C|^2 \gg K^{-O(1)} N^2.$

This implies that there exists a subset ${A}$ of ${G}$ with ${|A| \gg K^{-O(1)} N}$, and an element ${g_1(h) \in W_h}$ for each ${h \in A}$, such that

$\displaystyle | \{ (x_1,x_3) \in C^2: x_3-x_1 = h; f(x_3) - f(x_1) \in g_1(h) + 8S-8S \}| \gg K^{-O(1)} N \ \ \ \ \ (9)$

for all ${h \in A}$. Note we may assume without loss of generality that ${0 \in A}$ and ${g_1(0)=0}$.

Suppose that ${h_1,\dots,h_{16} \in A}$ are such that

$\displaystyle \sum_{i=1}^{16} (-1)^{i-1} h_i = 0. \ \ \ \ \ (10)$

By construction of ${A}$, and permuting labels, we can find ${\gg K^{-O(1)} N^{16}}$ 32-tuples ${(x_1,\dots,x_{16},y_1,\dots,y_{16}) \in C^{32}}$ such that

$\displaystyle y_i - x_i = (-1)^{i-1} h_i$

and

$\displaystyle f(y_i) - f(x_i) \in (-1)^{i-1} g_1(h_i) + 8S - 8S$

for ${i=1,\dots,16}$. We sum this to obtain

$\displaystyle f(y_1) + \sum_{i=1}^{15} (f(y_{i+1})-f(x_i)) - f(x_{16}) \in \sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) + 128 S - 128 S$

and hence by (8)

$\displaystyle f(y_1) - f(x_{16}) + \sum_{i=1}^{15} W_{k_i} \in \sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) + 248 S - 248 S$

where ${k_i := y_{i+1}-x_i}$. Since

$\displaystyle y_1 - x_{16} + \sum_{i=1}^{15} k_i = 0$

we see that there are only ${N^{16}}$ possible values of ${(y_1,x_{16},k_1,\dots,k_{15})}$. By the pigeonhole principle, we conclude that at most ${O(K^{O(1)})}$ of the sets ${\sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) + 248 S - 248 S}$ can be pairwise disjoint. Arguing as before, we conclude that there exists a set ${X}$ of cardinality ${O(K^{O(1)})}$ such that

$\displaystyle \sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) \in X + 496 S - 496 S \ \ \ \ \ (11)$

whenever (10) holds.

For any ${h \in 4A-4A}$, write ${h}$ arbitrarily as ${h = \sum_{i=1}^8 (-1)^{i-1} h_i}$ for some ${h_1,\dots,h_8 \in A}$ (with ${h_5=\dots=h_8=0}$ if ${h \in 2A-2A}$, and ${h_2 = \dots = h_8 = 0}$ if ${h \in A}$) and then set

$\displaystyle g(h) := \sum_{i=1}^8 (-1)^{i-1} g_1(h_i).$

Then from (11) we have (4). For ${h \in A}$ we have ${g(h) = g_1(h)}$, and (5) then follows from (9). $\Box$

I have just uploaded to the arXiv the paper “An inverse theorem for an inequality of Kneser“, submitted to a special issue of the Proceedings of the Steklov Institute of Mathematics in honour of Sergei Konyagin. It concerns an inequality of Kneser discussed previously in this blog, namely that

$\displaystyle \mu(A+B) \geq \min(\mu(A)+\mu(B), 1) \ \ \ \ \ (1)$

whenever ${A,B}$ are compact non-empty subsets of a compact connected additive group ${G}$ with probability Haar measure ${\mu}$.  (A later result of Kemperman extended this inequality to the nonabelian case.) This inequality is non-trivial in the regime

$\displaystyle \mu(A), \mu(B), 1- \mu(A)-\mu(B) > 0. \ \ \ \ \ (2)$

The connectedness of ${G}$ is essential, otherwise one could form counterexamples involving proper subgroups of ${G}$ of positive measure. In the blog post, I indicated how this inequality (together with a more “robust” strengthening of it) could be deduced from submodularity inequalities such as

$\displaystyle \mu( (A_1 \cup A_2) + B) + \mu( (A_1 \cap A_2) + B)$

$\displaystyle \leq \mu(A_1+B) + \mu(A_2+B) \ \ \ \ \ (3)$

which in turn easily follows from the identity ${(A_1 \cup A_2) + B = (A_1+B) \cup (A_2+B)}$ and the inclusion ${(A_1 \cap A_2) + B \subset (A_1 +B) \cap (A_2+B)}$, combined with the inclusion-exclusion formula.
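Since the submodularity inequality (3) only uses these set-theoretic identities, it holds with the normalised counting measure in any finite abelian group (it is only the Kneser inequality (1) itself that requires connectedness). As a quick illustration, here is an exhaustive check in ${{\bf Z}/6{\bf Z}}$; this is only a sketch, and the helper names are mine:

```python
from itertools import combinations

def sumset(A, B, n):
    """The sumset A+B inside the cyclic group Z/nZ."""
    return {(a + b) % n for a in A for b in B}

def submodular(A1, A2, B, n):
    """mu((A1 u A2)+B) + mu((A1 n A2)+B) <= mu(A1+B) + mu(A2+B),
    where mu is the normalised counting measure on Z/nZ."""
    mu = lambda S: len(S) / n
    return (mu(sumset(A1 | A2, B, n)) + mu(sumset(A1 & A2, B, n))
            <= mu(sumset(A1, B, n)) + mu(sumset(A2, B, n)) + 1e-12)

# exhaustively verify the inequality for all small subsets of Z/6Z
n = 6
small = [set(c) for r in range(1, 4) for c in combinations(range(n), r)]
assert all(submodular(A1, A2, B, n)
           for A1 in small for A2 in small for B in small)
print("submodularity verified for", len(small) ** 3, "triples")
```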

In the non-trivial regime (2), equality can be attained in (1), for instance by taking ${G}$ to be the unit circle ${G = {\bf R}/{\bf Z}}$ and ${A,B}$ to be arcs in that circle (obeying (2)). A bit more generally, if ${G}$ is an arbitrary connected compact abelian group and ${\xi: G \rightarrow {\bf R}/{\bf Z}}$ is a non-trivial character (i.e., a continuous homomorphism), then ${\xi}$ must be surjective (as ${{\bf R}/{\bf Z}}$ has no non-trivial connected subgroups), and one can take ${A = \xi^{-1}(I)}$ and ${B = \xi^{-1}(J)}$ for some arcs ${I,J}$ in that circle (again choosing the measures of these arcs to obey (2)). The main result of this paper is an inverse theorem that asserts that this is the only way in which equality can occur in (1) (assuming (2)); furthermore, if (1) is close to being satisfied with equality and (2) holds, then ${A,B}$ must be close (in measure) to an example of the above form ${A = \xi^{-1}(I), B = \xi^{-1}(J)}$. Actually, for technical reasons (and for the applications we have in mind), it is important to establish an inverse theorem not just for (1), but for the more robust version mentioned earlier (in which the sumset ${A+B}$ is replaced by the partial sumset ${A +_\varepsilon B}$ consisting of “popular” sums).

Roughly speaking, the idea is as follows. Let us informally call ${(A,B)}$ a critical pair if (2) holds and the inequality (1) (or more precisely, a robust version of this inequality) is almost obeyed with equality. The notion of a critical pair obeys some useful closure properties. Firstly, it is symmetric in ${A,B}$, and invariant with respect to translation of either ${A}$ or ${B}$. Furthermore, from the submodularity inequality (3), one can show that if ${(A_1,B)}$ and ${(A_2,B)}$ are critical pairs (with ${\mu(A_1 \cap A_2)}$ and ${1 - \mu(A_1 \cup A_2) - \mu(B)}$ positive), then ${(A_1 \cap A_2,B)}$ and ${(A_1 \cup A_2, B)}$ are also critical pairs. (Note that this is consistent with the claim that critical pairs only occur when ${A,B}$ come from arcs of a circle.) Similarly, from associativity ${(A+B)+C = A+(B+C)}$, one can show that if ${(A,B)}$ and ${(A+B,C)}$ are critical pairs, then so are ${(B,C)}$ and ${(A,B+C)}$.

One can combine these closure properties to obtain further ones. For instance, suppose ${A,B}$ is such that ${\mu(A+B) < 2\mu(A)}$ (so that any two of the translates ${A+b}$, ${b \in B}$ intersect in positive measure), and ${C}$ is such that ${(A,C)}$ is a critical pair and ${1 - \mu(A+B) - \mu(C) > 0}$. Then (cheating a little bit), one can show that ${(A+B,C)}$ is also a critical pair, basically because ${A+B}$ is the union of the ${A+b}$, ${b \in B}$, the ${(A+b,C)}$ are all critical pairs, and the ${A+b}$ all intersect each other. This argument doesn’t quite work as stated because one has to apply the closure property under union an uncountable number of times, but it turns out that if one works with the robust version of sumsets and uses a random sampling argument to approximate ${A+B}$ by the union of finitely many of the ${A+b}$, then the argument can be made to work.

Using all of these closure properties, it turns out that one can start with an arbitrary critical pair ${(A,B)}$ and end up with a small set ${C}$ such that ${(A,C)}$ and ${(kC,C)}$ are also critical pairs for all ${1 \leq k \leq 10^4}$ (say), where ${kC}$ is the ${k}$-fold sumset of ${C}$. (Intuitively, if ${A,B}$ are thought of as secretly coming from the pullback of arcs ${I,J}$ by some character ${\xi}$, then ${C}$ should be the pullback of a much shorter arc by the same character.) In particular, ${C}$ exhibits linear growth, in that ${\mu(kC) = k\mu(C)}$ for all ${1 \leq k \leq 10^4}$. One can now use standard technology from inverse sumset theory to show first that ${C}$ has a very large Fourier coefficient (and thus is biased with respect to some character ${\xi}$), and secondly that ${C}$ is in fact almost of the form ${C = \xi^{-1}(K)}$ for some arc ${K}$, from which it is not difficult to conclude similar statements for ${A}$ and ${B}$ and thus finish the proof of the inverse theorem.

In order to make the above argument rigorous, one has to be more precise about what the modifier “almost” means in the definition of a critical pair. I chose to do this in the language of “cheap” nonstandard analysis (aka asymptotic analysis), as discussed in this previous blog post; one could also have used the full-strength version of nonstandard analysis, but this does not seem to convey any substantial advantages. (One can also work in a more traditional “non-asymptotic” framework, but this requires one to keep much more careful account of various small error terms and leads to a messier argument.)

[Update, Nov 15: Corrected the attribution of the inequality (1) to Kneser instead of Kemperman.  Thanks to John Griesmer for pointing out the error.]

Let ${\lambda: {\bf N} \rightarrow \{-1,1\}}$ be the Liouville function, thus ${\lambda(n)}$ is defined to equal ${+1}$ when ${n}$ is the product of an even number of primes, and ${-1}$ when ${n}$ is the product of an odd number of primes. The Chowla conjecture asserts that ${\lambda}$ has the statistics of a random sign pattern, in the sense that

$\displaystyle \lim_{N \rightarrow \infty} \mathbb{E}_{n \leq N} \lambda(n+h_1) \dots \lambda(n+h_k) = 0 \ \ \ \ \ (1)$

for all ${k \geq 1}$ and all distinct natural numbers ${h_1,\dots,h_k}$, where we use the averaging notation

$\displaystyle \mathbb{E}_{n \leq N} f(n) := \frac{1}{N} \sum_{n \leq N} f(n).$

For ${k=1}$, this conjecture is equivalent to the prime number theorem (as discussed in this previous blog post), but the conjecture remains open for any ${k \geq 2}$.
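For numerical experimentation, ${\lambda}$ can be computed for all ${n \leq N}$ by a smallest-prime-factor sieve, using the relation ${\lambda(n) = -\lambda(n/p)}$ where ${p}$ is the smallest prime factor of ${n}$. The following sketch (helper names are mine) lets one watch the averages in (1) decay:

```python
import math

def liouville_sieve(N):
    """lambda(n) = (-1)^Omega(n) for 1 <= n <= N, via a smallest-prime-factor sieve."""
    spf = list(range(N + 1))              # spf[n] = smallest prime factor of n
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:                   # p is prime
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0, 1] + [0] * (N - 1)
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]        # removing one prime factor flips the sign
    return lam

def chowla_average(lam, N, shifts):
    """E_{n <= N} lambda(n+h_1) ... lambda(n+h_k)."""
    return sum(math.prod(lam[n + h] for h in shifts)
               for n in range(1, N + 1)) / N

N = 10 ** 5
lam = liouville_sieve(N + 10)
print(chowla_average(lam, N, (0,)))      # k = 1: small, equivalent to PNT
print(chowla_average(lam, N, (0, 1)))    # k = 2: conjecturally also tends to zero
```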

In recent years, it has been realised that one can make more progress on this conjecture if one works instead with the logarithmically averaged version

$\displaystyle \lim_{N \rightarrow \infty} \mathbb{E}_{n \leq N}^{\log} \lambda(n+h_1) \dots \lambda(n+h_k) = 0 \ \ \ \ \ (2)$

of the conjecture, where we use the logarithmic averaging notation

$\displaystyle \mathbb{E}_{n \leq N}^{\log} f(n) := \frac{\sum_{n \leq N} \frac{f(n)}{n}}{\sum_{n \leq N} \frac{1}{n}}.$

Using the summation by parts (or telescoping series) identity

$\displaystyle \sum_{n \leq N} \frac{f(n)}{n} = \sum_{M < N} \frac{1}{M(M+1)} (\sum_{n \leq M} f(n)) + \frac{1}{N} \sum_{n \leq N} f(n) \ \ \ \ \ (3)$

it is not difficult to show that the Chowla conjecture (1) for a given ${k,h_1,\dots,h_k}$ implies the logarithmically averaged conjecture (2). However, the converse implication is not at all clear. For instance, for ${k=1}$, we have already mentioned that the Chowla conjecture

$\displaystyle \lim_{N \rightarrow \infty} \mathbb{E}_{n \leq N} \lambda(n) = 0$

is equivalent to the prime number theorem; but the logarithmically averaged analogue

$\displaystyle \lim_{N \rightarrow \infty} \mathbb{E}^{\log}_{n \leq N} \lambda(n) = 0$

is significantly easier to show (a proof with the Liouville function ${\lambda}$ replaced by the closely related Möbius function ${\mu}$ is given in this previous blog post). And indeed, significantly more is now known for the logarithmically averaged Chowla conjecture; in this paper of mine I had proven (2) for ${k=2}$, and in this recent paper with Joni Teravainen, we proved the conjecture for all odd ${k}$ (with a different proof also given here).
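Since (3) is an exact identity for any function ${f}$, it can be checked mechanically; a small sketch (names mine):

```python
import math

def lhs(f, N):
    """Left side of (3): sum_{n <= N} f(n)/n."""
    return sum(f(n) / n for n in range(1, N + 1))

def rhs(f, N):
    """Right side of (3), in terms of the partial sums S[M] = sum_{n <= M} f(n)."""
    S = [0.0]
    for n in range(1, N + 1):
        S.append(S[-1] + f(n))
    return sum(S[M] / (M * (M + 1)) for M in range(1, N)) + S[N] / N

N = 1000
for f in (math.sin, lambda n: (-1) ** n, lambda n: 1.0):
    assert abs(lhs(f, N) - rhs(f, N)) < 1e-9
print("identity (3) verified for", N, "terms")
```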

In view of this emerging consensus that the logarithmically averaged Chowla conjecture was easier than the ordinary Chowla conjecture, it was thus somewhat of a surprise for me to read a recent paper of Gomilko, Kwietniak, and Lemanczyk who (among other things) established the following statement:

Theorem 1 Assume that the logarithmically averaged Chowla conjecture (2) is true for all ${k}$. Then there exists a sequence ${N_i}$ going to infinity such that the Chowla conjecture (1) is true for all ${k}$ along that sequence, that is to say

$\displaystyle \lim_{i \rightarrow \infty} \mathbb{E}_{n \leq N_i} \lambda(n+h_1) \dots \lambda(n+h_k) = 0$

for all ${k}$ and all distinct ${h_1,\dots,h_k}$.

This implication does not use any special properties of the Liouville function (other than that it is bounded), and in fact proceeds by ergodic theoretic methods, focusing in particular on the ergodic decomposition of invariant measures of a shift into ergodic measures. Ergodic methods have proven remarkably fruitful in understanding these sorts of number theoretic and combinatorial problems, as could already be seen by the ergodic theoretic proof of Szemerédi’s theorem by Furstenberg, and more recently by the work of Frantzikinakis and Host on Sarnak’s conjecture. (My first paper with Teravainen also uses ergodic theory tools.) Indeed, many other results in the subject were first discovered using ergodic theory methods.

On the other hand, many results in this subject that were first proven ergodic theoretically have since been reproven by more combinatorial means; my second paper with Teravainen is an instance of this. As it turns out, one can also prove Theorem 1 by a standard combinatorial (or probabilistic) technique known as the second moment method. In fact, one can prove slightly more:

Theorem 2 Let ${k}$ be a natural number. Assume that the logarithmically averaged Chowla conjecture (2) is true for ${2k}$. Then there exists a set ${{\mathcal N}}$ of natural numbers of logarithmic density ${1}$ (that is, ${\lim_{N \rightarrow \infty} \mathbb{E}_{n \leq N}^{\log} 1_{n \in {\mathcal N}} = 1}$) such that

$\displaystyle \lim_{N \rightarrow \infty: N \in {\mathcal N}} \mathbb{E}_{n \leq N} \lambda(n+h_1) \dots \lambda(n+h_k) = 0$

for any distinct ${h_1,\dots,h_k}$.

It is not difficult to deduce Theorem 1 from Theorem 2 using a diagonalisation argument. Unfortunately, the known cases of the logarithmically averaged Chowla conjecture (${k=2}$ and odd ${k}$) are currently insufficient to use Theorem 2 for any purpose other than to reprove what is already known to be true from the prime number theorem. (Indeed, the even cases of Chowla, in either logarithmically averaged or non-logarithmically averaged forms, seem to be far more powerful than the odd cases; see Remark 1.7 of this paper of myself and Teravainen for a related observation in this direction.)

We now sketch the proof of Theorem 2. For any distinct ${h_1,\dots,h_k}$, we take a large number ${H}$ and consider the limiting second moment

$\displaystyle \limsup_{N \rightarrow \infty} \mathop{\bf E}_{n \leq N}^{\log} |\mathop{\bf E}_{m \leq H} \lambda(n+m+h_1) \dots \lambda(n+m+h_k)|^2.$

We can expand this as

$\displaystyle \limsup_{N \rightarrow \infty} \mathop{\bf E}_{m,m' \leq H} \mathop{\bf E}_{n \leq N}^{\log} \lambda(n+m+h_1) \dots \lambda(n+m+h_k)$

$\displaystyle \lambda(n+m'+h_1) \dots \lambda(n+m'+h_k).$

If all the ${m+h_1,\dots,m+h_k,m'+h_1,\dots,m'+h_k}$ are distinct, the hypothesis (2) tells us that the inner average goes to zero as ${N \rightarrow \infty}$. The remaining averages are ${O(1)}$, and there are ${O( k^2 H )}$ pairs ${(m,m')}$ (out of ${H^2}$ in total) for which such a coincidence can occur. We conclude that

$\displaystyle \limsup_{N \rightarrow \infty} \mathop{\bf E}_{n \leq N}^{\log} |\mathop{\bf E}_{m \leq H} \lambda(n+m+h_1) \dots \lambda(n+m+h_k)|^2 \ll k^2 / H.$

By Markov’s inequality (and (3)), we conclude that for any fixed ${h_1,\dots,h_k, H}$, there exists a set ${{\mathcal N}_{h_1,\dots,h_k,H}}$ of upper logarithmic density at least ${1-k/H^{1/2}}$, thus

$\displaystyle \limsup_{N \rightarrow \infty} \mathbb{E}_{n \leq N}^{\log} 1_{n \in {\mathcal N}_{h_1,\dots,h_k,H}} \geq 1 - k/H^{1/2}$

such that

$\displaystyle \mathop{\bf E}_{n \leq N} |\mathop{\bf E}_{m \leq H} \lambda(n+m+h_1) \dots \lambda(n+m+h_k)|^2 \ll k / H^{1/2}.$

By deleting at most finitely many elements, we may assume that ${{\mathcal N}_{h_1,\dots,h_k,H}}$ consists only of elements of size at least ${H^2}$ (say).

For any ${H_0}$, if we let ${{\mathcal N}_{h_1,\dots,h_k, \geq H_0}}$ be the union of ${{\mathcal N}_{h_1,\dots,h_k, H}}$ for ${H \geq H_0}$, then ${{\mathcal N}_{h_1,\dots,h_k, \geq H_0}}$ has logarithmic density ${1}$. By a diagonalisation argument (using the fact that the set of tuples ${(h_1,\dots,h_k)}$ is countable), we can then find a set ${{\mathcal N}}$ of natural numbers of logarithmic density ${1}$, such that for every ${h_1,\dots,h_k,H_0}$, every sufficiently large element of ${{\mathcal N}}$ lies in ${{\mathcal N}_{h_1,\dots,h_k,\geq H_0}}$. Thus for every sufficiently large ${N}$ in ${{\mathcal N}}$, one has

$\displaystyle \mathop{\bf E}_{n \leq N} |\mathop{\bf E}_{m \leq H} \lambda(n+m+h_1) \dots \lambda(n+m+h_k)|^2 \ll k / H^{1/2}$

for some ${H \geq H_0}$ with ${N \geq H^2}$. By Cauchy-Schwarz, this implies that

$\displaystyle \mathop{\bf E}_{n \leq N} \mathop{\bf E}_{m \leq H} \lambda(n+m+h_1) \dots \lambda(n+m+h_k) \ll k^{1/2} / H^{1/4};$

interchanging the sums and using ${N \geq H^2}$ and ${H \geq H_0}$, this implies that

$\displaystyle \mathop{\bf E}_{n \leq N} \lambda(n+h_1) \dots \lambda(n+h_k) \ll k^{1/2} / H^{1/4} \leq k^{1/2} / H_0^{1/4}.$

We conclude on taking ${H_0}$ to infinity that

$\displaystyle \lim_{N \rightarrow \infty; N \in {\mathcal N}} \mathop{\bf E}_{n \leq N} \lambda(n+h_1) \dots \lambda(n+h_k) = 0$

as required.
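The heart of the argument is the second moment computation: averaging ${|\mathop{\bf E}_{m \leq H} \cdot|^2}$ gains a factor of ${1/H}$ whenever the correlations vanish. One can see this concretely by replacing ${\lambda}$ with genuinely i.i.d. random signs (here with ${k=1}$), for which the second moment is exactly ${1/H}$ in expectation; a sketch (names mine):

```python
import random

def window_second_moment(eps, N, H):
    """E_{n <= N} | E_{m <= H} eps(n+m) |^2, computed with prefix sums."""
    P = [0]
    for e in eps:
        P.append(P[-1] + e)
    return sum(((P[n + H] - P[n]) / H) ** 2 for n in range(N)) / N

random.seed(0)
N, H = 100_000, 100
eps = [random.choice((-1, 1)) for _ in range(N + H)]
m2 = window_second_moment(eps, N, H)
print(m2, 1 / H)     # the two numbers should be close
```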

Szemerédi’s theorem asserts that all subsets of the natural numbers of positive density contain arbitrarily long arithmetic progressions.  Roth’s theorem is the special case when one considers arithmetic progressions of length three.  Both theorems have many important proofs using tools from additive combinatorics, (higher order) Fourier analysis, (hyper) graph regularity theory, and ergodic theory.  However, the original proof by Endre Szemerédi, while extremely intricate, was purely combinatorial (and in particular “elementary”) and almost entirely self-contained, except for an invocation of the van der Waerden theorem.  It is also notable for introducing a prototype of what is now known as the Szemerédi regularity lemma.

Back in 2005, I rewrote Szemerédi’s original proof in order to understand it better; however, my rewrite ended up being about the same length as the original argument and was probably only usable to myself.  In 2012, after Szemerédi was awarded the Abel prize, I revisited this argument with the intention of writing up a more readable version of the proof, but ended up just presenting some ingredients of the argument in a blog post, rather than trying to rewrite the whole thing.  In that post, I suspected that the cleanest way to write up the argument would be through the language of nonstandard analysis (perhaps in an iterated hyperextension that could handle various hierarchies of infinitesimals), but was unable to actually achieve any substantial simplifications by passing to the nonstandard world.

A few weeks ago, I participated in a week-long workshop at the American Institute of Mathematics on “Nonstandard methods in combinatorial number theory”, and spent some time in a working group with Shabnam Akhtari, Irfan Alam, Renling Jin, Steven Leth, Karl Mahlburg, Paul Potgieter, and Henry Towsner to try to obtain a manageable nonstandard version of Szemerédi’s original proof.  We didn’t end up being able to do so – in fact there are now signs that perhaps nonstandard analysis is not the optimal framework in which to place this argument – but we did at least clarify the existing standard argument, to the point that I was able to go back to my original rewrite of the proof and present it in a more civilised form, which I am now uploading here as an unpublished preprint.   There are now a number of simplifications to the proof.  Firstly, one no longer needs the full strength of the regularity lemma; only the simpler “weak” regularity lemma of Frieze and Kannan is required.  Secondly, the proof has been “factored” into a number of stand-alone propositions of independent interest, in particular involving just (families of) one-dimensional arithmetic progressions rather than the complicated-looking multidimensional arithmetic progressions that occur so frequently in the original argument of Szemerédi.  Finally, the delicate manipulations of densities and epsilons via double counting arguments in Szemerédi’s original paper have been abstracted into a certain key property of families of arithmetic progressions that I call the “double counting property”.

The factoring mentioned above is particularly simple in the case of proving Roth’s theorem, which is now presented separately in the above writeup.  Roth’s theorem seeks to locate a length three progression ${(P(1),P(2),P(3)) = (a, a+r, a+2r)}$ in which all three elements lie in a single set.  This will be deduced from an easier variant of the theorem in which one locates (a family of) length three progressions in which just the first two elements ${P(1), P(2)}$ of the progression lie in a good set (and some other properties of the family are also required).  This is in turn derived from an even easier variant in which now just the first element of the progression is required to be in the good set.

More specifically, Roth’s theorem is now deduced from

Theorem 1.5.  Let ${L}$ be a natural number, and let ${S}$ be a set of integers of upper density at least ${1-1/10L}$.  Then, whenever ${S}$ is partitioned into finitely many colour classes, there exists a colour class ${A}$ and a family ${(P_l(1),P_l(2),P_l(3))_{l=1}^L}$ of 3-term arithmetic progressions with the following properties:

1. For each ${l}$, ${P_l(1)}$ and ${P_l(2)}$ lie in ${A}$.
2. For each ${l}$, ${P_l(3)}$ lies in ${S}$.
3. The ${P_l(3)}$ for ${l=1,\dots,L}$ are in arithmetic progression.

The situation in this theorem is depicted by the following diagram, in which elements of $A$ are in blue and elements of $S$ are in grey:

Theorem 1.5 is deduced in turn from the following easier variant:

Theorem 1.6.  Let ${L}$ be a natural number, and let ${S}$ be a set of integers of upper density at least ${1-1/10L}$.  Then, whenever ${S}$ is partitioned into finitely many colour classes, there exists a colour class ${A}$ and a family ${(P_l(1),P_l(2),P_l(3))_{l=1}^L}$ of 3-term arithmetic progressions with the following properties:

1. For each ${l}$, ${P_l(1)}$ lies in ${A}$.
2. For each ${l}$, ${P_l(2)}$ and ${P_l(3)}$ lie in ${S}$.
3. The ${P_l(2)}$ for ${l=1,\dots,L}$ are in arithmetic progression.

The situation here is described by the figure below.

Theorem 1.6 is easy to prove.  To derive Theorem 1.5 from Theorem 1.6, or to derive Roth’s theorem from Theorem 1.5, one uses double counting arguments, van der Waerden’s theorem, and the weak regularity lemma, largely as described in this previous blog post; see the writeup for the full details.  (I would be interested in seeing a shorter proof of Theorem 1.5, though, that did not go through these arguments and did not use the more powerful theorems of Roth or Szemerédi.)
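Although the theorems above concern infinite sets of positive density, the finitary content of Roth’s theorem is easy to experiment with: for small ${N}$ one can locate the largest subset of ${\{1,\dots,N\}}$ containing no 3-term progression by brute force. A sketch (names mine; the search is exponential in ${N}$, so this is for tiny examples only):

```python
from itertools import combinations

def has_3ap(s):
    """Does the set s contain a nontrivial 3-term AP a, a+r, a+2r?"""
    # for a < b, set r = b - a; the third term a + 2r equals 2b - a
    return any(2 * b - a in s for a in s for b in s if b > a)

def largest_3ap_free(N):
    """A largest 3AP-free subset of {1,...,N}, found by exhaustive search."""
    for size in range(N, 0, -1):
        for cand in combinations(range(1, N + 1), size):
            if not has_3ap(set(cand)):
                return set(cand)
    return set()

print(largest_3ap_free(5))   # a 4-element example such as {1, 2, 4, 5}
```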

Fix a non-negative integer ${k}$. Define an (weak) integer partition of length ${k}$ to be a tuple ${\lambda = (\lambda_1,\dots,\lambda_k)}$ of non-increasing non-negative integers ${\lambda_1 \geq \dots \geq \lambda_k \geq 0}$. (Here our partitions are “weak” in the sense that we allow some parts of the partition to be zero. Henceforth we will omit the modifier “weak”, as we will not need to consider the more usual notion of “strong” partitions.) To each such partition ${\lambda}$, one can associate a Young diagram consisting of ${k}$ left-justified rows of boxes, with the ${i^{th}}$ row containing ${\lambda_i}$ boxes. A semi-standard Young tableau (or Young tableau for short) ${T}$ of shape ${\lambda}$ is a filling of these boxes by integers in ${\{1,\dots,k\}}$ that is weakly increasing along rows (moving rightwards) and strictly increasing along columns (moving downwards). The collection of such tableaux will be denoted ${{\mathcal T}_\lambda}$. The weight ${|T|}$ of a tableau ${T}$ is the tuple ${(n_1,\dots,n_k)}$, where ${n_i}$ is the number of occurrences of the integer ${i}$ in the tableau. For instance, if ${k=3}$ and ${\lambda = (6,4,2)}$, an example of a Young tableau of shape ${\lambda}$ would be

$\displaystyle \begin{tabular}{|c|c|c|c|c|c|} \hline 1 & 1 & 1 & 2 & 3 & 3 \\ \cline{1-6} 2 & 2 & 2 &3\\ \cline{1-4} 3 & 3\\ \cline{1-2} \end{tabular}$

The weight here would be ${|T| = (3,4,5)}$.

To each partition ${\lambda}$ one can associate the Schur polynomial ${s_\lambda(u_1,\dots,u_k)}$ on ${k}$ variables ${u = (u_1,\dots,u_k)}$, which we will define as

$\displaystyle s_\lambda(u) := \sum_{T \in {\mathcal T}_\lambda} u^{|T|}$

using the multinomial convention

$\displaystyle (u_1,\dots,u_k)^{(n_1,\dots,n_k)} := u_1^{n_1} \dots u_k^{n_k}.$

Thus for instance the Young tableau ${T}$ given above would contribute a term ${u_1^3 u_2^4 u_3^5}$ to the Schur polynomial ${s_{(6,4,2)}(u_1,u_2,u_3)}$. In the case of partitions of the form ${(n,0,\dots,0)}$, the Schur polynomial ${s_{(n,0,\dots,0)}}$ is just the complete homogeneous symmetric polynomial ${h_n}$ of degree ${n}$ on ${k}$ variables:

$\displaystyle s_{(n,0,\dots,0)}(u_1,\dots,u_k) := \sum_{n_1,\dots,n_k \geq 0: n_1+\dots+n_k = n} u_1^{n_1} \dots u_k^{n_k},$

thus for instance

$\displaystyle s_{(3,0)}(u_1,u_2) = u_1^3 + u_1^2 u_2 + u_1 u_2^2 + u_2^3.$
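The tableau-sum definition is directly computable for small shapes. Here is a brute-force sketch (helper names are mine) that enumerates the tableaux and collects the monomials ${u^{|T|}}$:

```python
from collections import Counter

def ssyt(shape, k):
    """Enumerate semi-standard Young tableaux of the given shape with entries
    in {1,...,k}: weakly increasing along rows, strictly increasing down columns."""
    cells = [(i, j) for i, row in enumerate(shape) for j in range(row)]
    def extend(partial):
        if len(partial) == len(cells):
            yield dict(partial)
            return
        i, j = cells[len(partial)]
        filled = dict(partial)
        lo = 1
        if j > 0:
            lo = max(lo, filled[(i, j - 1)])      # weakly increasing rightwards
        if i > 0:
            lo = max(lo, filled[(i - 1, j)] + 1)  # strictly increasing downwards
        for v in range(lo, k + 1):
            yield from extend(partial + [((i, j), v)])
    yield from extend([])

def schur_monomials(shape, k):
    """s_lambda as a Counter mapping each weight (n_1,...,n_k) to its coefficient."""
    poly = Counter()
    for T in ssyt(shape, k):
        vals = list(T.values())
        poly[tuple(vals.count(i) for i in range(1, k + 1))] += 1
    return poly

# s_{(3,0)}(u_1,u_2) = u_1^3 + u_1^2 u_2 + u_1 u_2^2 + u_2^3, as in the text
print(sorted(schur_monomials((3, 0), 2)))   # [(0, 3), (1, 2), (2, 1), (3, 0)]
```

As a further check, the shape ${(6,4,2)}$ of the example above should admit ${27}$ tableaux with entries in ${\{1,2,3\}}$ (this is the dimension of the corresponding ${GL_3}$ representation, by Weyl's dimension formula), and the enumeration confirms this.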

Schur polynomials are ubiquitous in the algebraic combinatorics of “type ${A}$ objects” such as the symmetric group ${S_k}$, the general linear group ${GL_k}$, or the unitary group ${U_k}$. For instance, one can view ${s_\lambda}$ as the character of an irreducible polynomial representation of ${GL_k({\bf C})}$ associated with the partition ${\lambda}$. However, we will not focus on these interpretations of Schur polynomials in this post.

This definition of Schur polynomials allows for a way to describe the polynomials recursively. If ${k > 1}$ and ${T}$ is a Young tableau of shape ${\lambda = (\lambda_1,\dots,\lambda_k)}$, taking values in ${\{1,\dots,k\}}$, one can form a sub-tableau ${T'}$ of some shape ${\lambda' = (\lambda'_1,\dots,\lambda'_{k-1})}$ by removing all the appearances of ${k}$ (which, among other things, necessarily deletes the ${k^{th}}$ row). For instance, with ${T}$ as in the previous example, the sub-tableau ${T'}$ would be

$\displaystyle \begin{tabular}{|c|c|c|c|} \hline 1 & 1 & 1 & 2 \\ \cline{1-4} 2 & 2 & 2 \\ \cline{1-3} \end{tabular}$

and the reduced partition ${\lambda'}$ in this case is ${(4,3)}$. As Young tableaux are required to be strictly increasing down columns, we can see that the reduced partition ${\lambda'}$ must intersperse the original partition ${\lambda}$ in the sense that

$\displaystyle \lambda_{i+1} \leq \lambda'_i \leq \lambda_i \ \ \ \ \ (1)$

for all ${1 \leq i \leq k-1}$; we denote this interspersion relation as ${\lambda' \prec \lambda}$ (though we caution that this is not intended to be a partial ordering). In the converse direction, if ${\lambda' \prec \lambda}$ and ${T'}$ is a Young tableau with shape ${\lambda'}$ with entries in ${\{1,\dots,k-1\}}$, one can form a Young tableau ${T}$ with shape ${\lambda}$ and entries in ${\{1,\dots,k\}}$ by appending to ${T'}$ an entry of ${k}$ in all the boxes that appear in the ${\lambda}$ shape but not the ${\lambda'}$ shape. This one-to-one correspondence leads to the recursion

$\displaystyle s_\lambda(u) = \sum_{\lambda' \prec \lambda} s_{\lambda'}(u') u_k^{|\lambda| - |\lambda'|} \ \ \ \ \ (2)$

where ${u = (u_1,\dots,u_k)}$, ${u' = (u_1,\dots,u_{k-1})}$, and the size ${|\lambda|}$ of a partition ${\lambda = (\lambda_1,\dots,\lambda_k)}$ is defined as ${|\lambda| := \lambda_1 + \dots + \lambda_k}$.

One can use this recursion (2) to prove some further standard identities for Schur polynomials, such as the determinant identity

$\displaystyle s_\lambda(u) V(u) = \det( u_i^{\lambda_j+k-j} )_{1 \leq i,j \leq k} \ \ \ \ \ (3)$

for ${u=(u_1,\dots,u_k)}$, where ${V(u)}$ denotes the Vandermonde determinant

$\displaystyle V(u) := \prod_{1 \leq i < j \leq k} (u_i - u_j), \ \ \ \ \ (4)$

or the Jacobi-Trudi identity

$\displaystyle s_\lambda(u) = \det( h_{\lambda_j - j + i}(u) )_{1 \leq i,j \leq k}, \ \ \ \ \ (5)$

with the convention that ${h_d(u) = 0}$ if ${d}$ is negative. Thus for instance

$\displaystyle s_{(1,1,0,\dots,0)}(u) = h_1^2(u) - h_0(u) h_2(u) = \sum_{1 \leq i < j \leq k} u_i u_j.$

We review the (standard) derivation of these identities via (2) below the fold. Among other things, these identities show that the Schur polynomials are symmetric, which is not immediately obvious from their definition.
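Both (3) and (5) are easy to sanity-check numerically by evaluating at rational points with exact arithmetic; a sketch (names mine):

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Determinant via the Leibniz formula (fine for tiny matrices)."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):                 # sign from counting inversions
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        term = Fraction(sign)
        for i, p in enumerate(perm):
            term *= M[i][p]
        total += term
    return total

def h(d, u):
    """The complete homogeneous symmetric polynomial h_d evaluated at u."""
    if d < 0:
        return Fraction(0)
    def rec(i, rem):                       # sum over n_i + ... + n_k = rem
        if i == len(u) - 1:
            return u[i] ** rem
        return sum(u[i] ** e * rec(i + 1, rem - e) for e in range(rem + 1))
    return rec(0, d)

lam = (3, 1, 0)
u = tuple(Fraction(x) for x in (2, 3, 5))
k = len(u)

V = Fraction(1)                            # Vandermonde determinant (4)
for i in range(k):
    for j in range(i + 1, k):
        V *= u[i] - u[j]

# determinant identity (3): s_lambda(u) = det(u_i^{lambda_j + k - j}) / V(u)
bialternant = det([[u[i] ** (lam[j] + k - j - 1) for j in range(k)]
                   for i in range(k)]) / V

# Jacobi-Trudi identity (5): s_lambda(u) = det(h_{lambda_j - j + i}(u))
jacobi_trudi = det([[h(lam[j] - j + i, u) for j in range(k)]
                    for i in range(k)])

assert bialternant == jacobi_trudi
print(bialternant)
```

(The exponents use 0-indexed rows and columns, which is why they read ${\lambda_j + k - j - 1}$ and ${\lambda_j - j + i}$ in the code.)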

One can also iterate (2) to write

$\displaystyle s_\lambda(u) = \sum_{() = \lambda^0 \prec \lambda^1 \prec \dots \prec \lambda^k = \lambda} \prod_{j=1}^k u_j^{|\lambda^j| - |\lambda^{j-1}|} \ \ \ \ \ (6)$

where the sum is over all tuples ${\lambda^1,\dots,\lambda^k}$, where each ${\lambda^j}$ is a partition of length ${j}$ that intersperses the next partition ${\lambda^{j+1}}$, with ${\lambda^k}$ set equal to ${\lambda}$. We will call such a tuple an integral Gelfand-Tsetlin pattern based at ${\lambda}$.
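Formula (6) gives another computational route to ${s_\lambda}$: enumerate the integral Gelfand-Tsetlin patterns directly. A sketch (names mine); note that at ${u = (1,\dots,1)}$ the sum simply counts the patterns:

```python
from itertools import product

def intersperse(mu):
    """All partitions mu' of length len(mu)-1 with mu' -< mu, i.e. obeying (1)."""
    if len(mu) == 1:
        yield ()
        return
    yield from product(*[range(mu[i + 1], mu[i] + 1)
                         for i in range(len(mu) - 1)])

def gt_patterns(lam):
    """Integral Gelfand-Tsetlin patterns based at lam, as chains
    () = lam^0 -< lam^1 -< ... -< lam^k = lam."""
    if len(lam) == 0:
        yield [()]
        return
    for shorter in intersperse(lam):
        for chain in gt_patterns(tuple(shorter)):
            yield chain + [tuple(lam)]

def schur_via_gt(lam, u):
    """Evaluate s_lambda(u) using formula (6)."""
    total = 0
    for chain in gt_patterns(lam):
        term = 1
        for j in range(1, len(lam) + 1):
            term *= u[j - 1] ** (sum(chain[j]) - sum(chain[j - 1]))
        total += term
    return total

# the number of patterns based at (2,1,0) is s_{(2,1,0)}(1,1,1) = 8
print(schur_via_gt((2, 1, 0), (1, 1, 1)))
```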

One can generalise (6) by introducing the skew Schur functions

$\displaystyle s_{\lambda/\mu}(u) := \sum_{\mu = \lambda^i \prec \dots \prec \lambda^k = \lambda} \prod_{j=i+1}^k u_j^{|\lambda^j| - |\lambda^{j-1}|} \ \ \ \ \ (7)$

for ${u = (u_{i+1},\dots,u_k)}$, whenever ${\lambda}$ is a partition of length ${k}$ and ${\mu}$ a partition of length ${i}$ for some ${0 \leq i \leq k}$, thus the Schur polynomial ${s_\lambda}$ is also the skew Schur polynomial ${s_{\lambda /()}}$ with ${i=0}$. (One could relabel the variables here to be something like ${(u_1,\dots,u_{k-i})}$ instead, but this labeling seems slightly more natural, particularly in view of identities such as (8) below.)

By construction, we have the decomposition

$\displaystyle s_{\lambda/\nu}(u_{i+1},\dots,u_k) = \sum_\mu s_{\mu/\nu}(u_{i+1},\dots,u_j) s_{\lambda/\mu}(u_{j+1},\dots,u_k) \ \ \ \ \ (8)$

whenever ${0 \leq i \leq j \leq k}$, and ${\nu, \mu, \lambda}$ are partitions of lengths ${i,j,k}$ respectively. This gives another recursive way to understand Schur polynomials and skew Schur polynomials. For instance, one can use it to establish the generalised Jacobi-Trudi identity

$\displaystyle s_{\lambda/\mu}(u) = \det( h_{\lambda_j - j - \mu_i + i}(u) )_{1 \leq i,j \leq k}, \ \ \ \ \ (9)$

with the convention that ${\mu_i = 0}$ for ${i}$ larger than the length of ${\mu}$; we do this below the fold.

The Schur polynomials (and skew Schur polynomials) are “discretised” (or “quantised”) in the sense that their parameters ${\lambda, \mu}$ are required to be integer-valued, and their definition similarly involves summation over a discrete set. It turns out that there are “continuous” (or “classical”) analogues of these functions, in which the parameters ${\lambda,\mu}$ now take real values rather than integers, and are defined via integration rather than summation. One can view these continuous analogues as a “semiclassical limit” of their discrete counterparts, in a manner that can be made precise using the machinery of geometric quantisation, but we will not do so here.

The continuous analogues can be defined as follows. Define a real partition of length ${k}$ to be a tuple ${\lambda = (\lambda_1,\dots,\lambda_k)}$ where ${\lambda_1 \geq \dots \geq \lambda_k \geq 0}$ are now real numbers. We can define the relation ${\lambda' \prec \lambda}$ of interspersion between a length ${k-1}$ real partition ${\lambda' = (\lambda'_1,\dots,\lambda'_{k-1})}$ and a length ${k}$ real partition ${\lambda = (\lambda_1,\dots,\lambda_{k})}$ precisely as before, by requiring that the inequalities (1) hold for all ${1 \leq i \leq k-1}$. We can then define the continuous Schur functions ${S_\lambda(x)}$ for ${x = (x_1,\dots,x_k) \in {\bf R}^k}$ recursively by defining

$\displaystyle S_{()}() = 1$

and

$\displaystyle S_\lambda(x) = \int_{\lambda' \prec \lambda} S_{\lambda'}(x') \exp( (|\lambda| - |\lambda'|) x_k )\ d\lambda' \ \ \ \ \ (10)$

for ${k \geq 1}$ and ${\lambda}$ of length ${k}$, where ${x' := (x_1,\dots,x_{k-1})}$ and the integral is with respect to ${k-1}$-dimensional Lebesgue measure, and ${|\lambda| = \lambda_1 + \dots + \lambda_k}$ as before. Thus for instance

$\displaystyle S_{(\lambda_1)}(x_1) = \exp( \lambda_1 x_1 )$

and

$\displaystyle S_{(\lambda_1,\lambda_2)}(x_1,x_2) = \int_{\lambda_2}^{\lambda_1} \exp( \lambda'_1 x_1 + (\lambda_1+\lambda_2-\lambda'_1) x_2 )\ d\lambda'_1.$
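Doing the last integral exactly gives the closed form ${S_{(\lambda_1,\lambda_2)}(x_1,x_2) = (e^{\lambda_1 x_1 + \lambda_2 x_2} - e^{\lambda_2 x_1 + \lambda_1 x_2})/(x_1-x_2)}$, a continuous analogue of the determinant identity (3). A quick numerical cross-check of the integral against this closed form, as a sketch (names mine):

```python
import math

def S2_numeric(l1, l2, x1, x2, steps=20000):
    """Midpoint-rule evaluation of the k = 2 integral above."""
    h = (l1 - l2) / steps
    total = 0.0
    for i in range(steps):
        t = l2 + (i + 0.5) * h            # integration variable lambda'_1
        total += math.exp(t * x1 + (l1 + l2 - t) * x2)
    return total * h

def S2_closed(l1, l2, x1, x2):
    """The same integral, done exactly."""
    return (math.exp(l1 * x1 + l2 * x2)
            - math.exp(l2 * x1 + l1 * x2)) / (x1 - x2)

l1, l2, x1, x2 = 2.0, 0.5, 0.7, -0.3
print(S2_numeric(l1, l2, x1, x2), S2_closed(l1, l2, x1, x2))
```

Note that the closed form is visibly symmetric in ${x_1, x_2}$ (swapping them flips the sign of both numerator and denominator), mirroring the symmetry of the discrete Schur polynomials.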

More generally, we can define the continuous skew Schur functions ${S_{\lambda/\mu}(x)}$ for ${\lambda}$ of length ${k}$, ${\mu}$ of length ${j \leq k}$, and ${x = (x_{j+1},\dots,x_k) \in {\bf R}^{k-j}}$ recursively by defining

$\displaystyle S_{\mu/\mu}() = 1$

and

$\displaystyle S_{\lambda/\mu}(x) = \int_{\lambda' \prec \lambda} S_{\lambda'/\mu}(x') \exp( (|\lambda| - |\lambda'|) x_k )\ d\lambda'$

for ${k > j}$. Thus for instance

$\displaystyle S_{(\lambda_1,\lambda_2,\lambda_3)/(\mu_1,\mu_2)}(x_3) = 1_{\lambda_3 \leq \mu_2 \leq \lambda_2 \leq \mu_1 \leq \lambda_1} \exp( x_3 (\lambda_1+\lambda_2+\lambda_3 - \mu_1 - \mu_2 ))$

and

$\displaystyle S_{(\lambda_1,\lambda_2,\lambda_3)/(\mu_1)}(x_2, x_3) = \int_{\lambda_3 \leq \lambda'_2 \leq \lambda_2, \mu_1} \int_{\mu_1, \lambda_2 \leq \lambda'_1 \leq \lambda_1}$

$\displaystyle \exp( x_2 (\lambda'_1+\lambda'_2 - \mu_1) + x_3 (\lambda_1+\lambda_2+\lambda_3 - \lambda'_1 - \lambda'_2))\ d\lambda'_1 d\lambda'_2.$

By expanding out the recursion, one obtains the analogue

$\displaystyle S_\lambda(x) = \int_{() = \lambda^0 \prec \lambda^1 \prec \dots \prec \lambda^k = \lambda} \exp( \sum_{j=1}^k x_j (|\lambda^j| - |\lambda^{j-1}|))\ d\lambda^1 \dots d\lambda^{k-1},$

of (6), and more generally one has

$\displaystyle S_{\lambda/\mu}(x) = \int_{\mu = \lambda^i \prec \dots \prec \lambda^k = \lambda} \exp( \sum_{j=i+1}^k x_j (|\lambda^j| - |\lambda^{j-1}|))\ d\lambda^{i+1} \dots d\lambda^{k-1}.$

We will call the tuples ${(\lambda^1,\dots,\lambda^k)}$ in the first integral real Gelfand-Tsetlin patterns based at ${\lambda}$. The analogue of (8) is then

$\displaystyle S_{\lambda/\nu}(x_{i+1},\dots,x_k) = \int S_{\mu/\nu}(x_{i+1},\dots,x_j) S_{\lambda/\mu}(x_{j+1},\dots,x_k)\ d\mu$

where the integral is over all real partitions ${\mu}$ of length ${j}$, with Lebesgue measure.

By approximating various integrals by their Riemann sums, one can relate the continuous Schur functions to their discrete counterparts by the limiting formula

$\displaystyle N^{-k(k-1)/2} s_{\lfloor N \lambda \rfloor}( \exp[ x/N ] ) \rightarrow S_\lambda(x) \ \ \ \ \ (11)$

as ${N \rightarrow \infty}$ for any length ${k}$ real partition ${\lambda = (\lambda_1,\dots,\lambda_k)}$ and any ${x = (x_1,\dots,x_k) \in {\bf R}^k}$, where

$\displaystyle \lfloor N \lambda \rfloor := ( \lfloor N \lambda_1 \rfloor, \dots, \lfloor N \lambda_k \rfloor )$

and

$\displaystyle \exp[x/N] := (\exp(x_1/N), \dots, \exp(x_k/N)).$

More generally, one has

$\displaystyle N^{j(j-1)/2-k(k-1)/2} s_{\lfloor N \lambda \rfloor / \lfloor N \mu \rfloor}( \exp[ x/N ] ) \rightarrow S_{\lambda/\mu}(x)$

as ${N \rightarrow \infty}$ for any length ${k}$ real partition ${\lambda}$, any length ${j}$ real partition ${\mu}$ with ${0 \leq j \leq k}$, and any ${x = (x_{j+1},\dots,x_k) \in {\bf R}^{k-j}}$.
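The limiting formula (11) is also easy to watch numerically in the case ${k=2}$ (a sketch of ours, not from the text), using the classical bialternant formula ${s_{(a,b)}(u,v) = (u^{a+1}v^b - u^b v^{a+1})/(u-v)}$ for the discrete Schur polynomial and the corresponding closed form of the continuous Schur function:

```python
# Numerical illustration of the limit (11) for k = 2: the normalised
# discrete Schur polynomials converge to the continuous Schur function.
import math

def schur2(a, b, u, v):
    # bialternant formula for s_{(a,b)}(u,v) in two variables
    return (u ** (a + 1) * v ** b - u ** b * v ** (a + 1)) / (u - v)

def S2(l1, l2, x1, x2):
    # closed form of the continuous Schur function for k = 2
    return (math.exp(x1 * l1 + x2 * l2)
            - math.exp(x1 * l2 + x2 * l1)) / (x1 - x2)

l1, l2, x1, x2 = 1.5, 0.25, 0.7, -0.4
for N in [10, 100, 1000]:
    # here k(k-1)/2 = 1, so the normalising prefactor is N^{-1}
    approx = schur2(math.floor(N * l1), math.floor(N * l2),
                    math.exp(x1 / N), math.exp(x2 / N)) / N
    print(N, approx, S2(l1, l2, x1, x2))
```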

As a consequence of these limiting formulae, one expects all of the discrete identities above to have continuous counterparts. This is indeed the case; below the fold we shall prove the discrete and continuous identities in parallel. These are not new results by any means, but I was not able to locate a good place in the literature where they are explicitly written down, so I thought I would try to do so here (primarily for my own internal reference, but perhaps the calculations will be worthwhile to some others also).

Ben Green and I have (finally!) uploaded to the arXiv our paper “New bounds for Szemerédi’s theorem, III: A polylogarithmic bound for ${r_4(N)}$“, submitted to Mathematika. This is the sequel to two previous papers (and an erratum to the former paper), concerning quantitative versions of Szemerédi’s theorem in the case of length four progressions. This sequel has been delayed for over a decade for a number of reasons, but we have finally managed to write the arguments up to our satisfaction and submit it (to a special issue of Mathematika honouring the work of Klaus Roth).

For any natural number ${N}$, define ${r_4(N)}$ to be the largest cardinality of a subset ${A}$ of ${[N] = \{1,\dots,N\}}$ which does not contain any non-trivial arithmetic progressions ${a, a+r, a+2r, a+3r}$ of length four (where “non-trivial” means that ${r}$ is non-zero). Trivially we have ${r_4(N) \leq N}$. In 1969, Szemerédi showed that ${r_4(N) = o(N)}$. However, the decay rate that could be theoretically extracted from this argument (and from several subsequent proofs of this bound, including one by Roth) was quite poor. The first significant quantitative bound on this quantity was by Gowers, who showed that ${r_4(N) \ll N (\log \log N)^{-c}}$ for some absolute constant ${c>0}$. In the second paper in the above-mentioned series, we managed to improve this bound to ${r_4(N) \ll N \exp( - c \sqrt{\log \log N})}$. In this paper, we improve the bound further to ${r_4(N) \ll N (\log N)^{-c}}$, which seems to be the limit of the methods. (We remark that if we could take ${c}$ to be larger than one, this would imply the length four case of a well known conjecture of Erdös that any set of natural numbers whose sum of reciprocals diverges contains arbitrarily long arithmetic progressions. Thanks to the work of Sanders and of Bloom, the corresponding case of the conjecture for length three progressions is nearly settled, as it is known that for the analogous bound on ${r_3(N)}$ one can take any ${c}$ less than one.)

Most of the previous work on bounding ${r_4(N)}$ relied in some form or another on the density increment argument introduced by Roth back in 1953; roughly speaking, the idea is that if a dense subset ${A}$ of ${[N]}$ fails to contain arithmetic progressions of length four, then one seeks to locate a long subprogression of ${[N]}$ on which ${A}$ has increased density. This was the basic method underlying, for instance, our previous bound ${r_4(N) \ll N \exp( - c \sqrt{\log \log N})}$, as well as a finite field analogue of the bound ${r_4(N) \ll N (\log N)^{-c}}$; however, we encountered significant technical difficulties for several years in extending this argument to obtain the result of the current paper. Our method is instead based on “energy increment arguments”, and more specifically on establishing a quantitative version of a Khintchine-type recurrence theorem, similar to the qualitative recurrence theorems established (in the ergodic theory context) by Bergelson-Host-Kra, and (in the current combinatorial context) by Ben Green and myself.

One way to phrase the latter recurrence theorem is as follows. Suppose that ${A \subset [N]}$ has density ${\delta}$. Then one would expect a “randomly” selected arithmetic progression ${{\bf a}, {\bf a}+{\bf r}, {\bf a}+2{\bf r}, {\bf a}+3{\bf r}}$ in ${[N]}$ (using the convention that random variables will be in boldface) to be contained in ${A}$ with probability about ${\delta^4}$. This is not true in general; however, it was shown by Ben and myself that for any ${\eta>0}$, there was a set of shifts ${r \in [-N,N]}$ of cardinality ${\gg_{\delta,\eta} N}$, such that for any such ${r}$ one had

$\displaystyle {\bf P}( {\bf a}, {\bf a}+r, {\bf a}+2r, {\bf a}+3r \in A ) \geq \delta^4 - \eta$

if ${{\bf a}}$ was chosen uniformly at random from ${[N]}$. This easily implies that ${r_4(N) = o(N)}$, but does not give a particularly good bound on the decay rate, because the implied constant in the cardinality lower bound ${\gg_{\delta,\eta} N}$ is quite poor (in fact of tower-exponential type, due to the use of regularity lemmas!), and so one has to take ${N}$ to be extremely large compared to ${\delta,\eta}$ to avoid the possibility that the set of shifts in the above theorem consists only of the trivial shift ${r=0}$.
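For comparison, the heuristic benchmark ${\delta^4}$ is easy to observe experimentally for a genuinely random set (a quick illustration of ours with arbitrary parameters, not part of the paper's argument):

```python
# For a random set A of density delta in [N], a uniformly random
# four-term progression a, a+r, a+2r, a+3r (with r >= 1) lies in A
# with probability close to delta^4.
import random

random.seed(0)
N, delta = 2000, 0.5
A = set(n for n in range(1, N + 1) if random.random() < delta)

hits = trials = 0
for a in range(1, N + 1):
    for r in range(1, (N - a) // 3 + 1):   # ensure a + 3r <= N
        trials += 1
        if a in A and a + r in A and a + 2 * r in A and a + 3 * r in A:
            hits += 1
print(hits / trials, delta ** 4)
```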

We do not know how to improve the lower bound on the set of shifts to the point where it can give bounds that are competitive with those in this paper. However, we can obtain better quantitative results if we permit ourselves to couple together the two parameters ${{\bf a}}$ and ${{\bf r}}$ of the length four progression. Namely, with ${A}$, ${\delta}$, ${\eta}$ as above, we are able to show that there exist random variables ${{\bf a}, {\bf r}}$, not necessarily independent, such that

$\displaystyle {\bf P}( {\bf a}, {\bf a}+{\bf r}, {\bf a}+2{\bf r}, {\bf a}+3{\bf r} \in A ) \geq \delta^4 - \eta \ \ \ \ \ (1)$

and such that we have the non-degeneracy bound

$\displaystyle {\bf P}( {\bf r} = 0 ) \ll \exp( \eta^{-O(1)} ) / N.$

This then easily implies the main theorem: indeed, if ${A}$ contained no non-trivial progressions of length four, then the event in (1) could only occur when ${{\bf r} = 0}$, so that ${\delta^4 - \eta \leq {\bf P}( {\bf r} = 0 )}$; choosing ${\eta}$ comparable to ${\delta^4}$ and inserting the non-degeneracy bound, one concludes that ${\log N \ll \delta^{-O(1)}}$, which rearranges to give ${\delta \ll (\log N)^{-c}}$.

The energy increment method is then deployed to locate a good pair ${({\bf a}, {\bf r})}$ of random variables that will obey the above bounds. One can get some intuition on how to proceed here by considering some model cases. Firstly one can consider a “globally quadratically structured” case in which the indicator function ${1_A}$ “behaves like” a globally quadratic function such as ${F( \alpha n^2 )}$, for some irrational ${\alpha}$ and some smooth periodic function ${F: {\bf R}/{\bf Z} \rightarrow {\bf R}}$ of mean ${\delta}$. If one then takes ${{\bf a}, {\bf r}}$ to be uniformly distributed in ${[N]}$ and ${[-\varepsilon N, \varepsilon N]}$ respectively for some small ${\varepsilon>0}$, with no coupling between the two variables, then the left-hand side of (1) is approximately of the form

$\displaystyle \int_{(x,y,z,w) \in ({\bf R}/{\bf Z})^4: x-3y+3z-w = 0} F(x) F(y) F(z) F(w) \ \ \ \ \ (2)$

where the integral is with respect to the probability Haar measure, and the constraint ${x-3y+3z-w=0}$ ultimately arises from the algebraic constraint

$\displaystyle \alpha {\bf a}^2 - 3 \alpha ({\bf a}+{\bf r})^2 + 3 \alpha ({\bf a}+2{\bf r})^2 - \alpha ({\bf a}+3{\bf r})^2 = 0.$

However, an application of the Cauchy-Schwarz inequality and Fubini’s theorem shows that the integral in (2) is at least ${(\int_{{\bf R}/{\bf Z}} F)^4}$, which (morally at least) gives (1) in this case.
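Both the algebraic constraint and the lower bound for (2) can be probed numerically. The following sketch (ours; the test function ${F}$ is an arbitrary trigonometric polynomial of mean ${1/2}$, chosen for illustration) checks the vanishing of the alternating-sign third difference of ${\alpha n^2}$, and compares a Riemann sum for (2) against ${(\int_{{\bf R}/{\bf Z}} F)^4}$:

```python
# (i) The algebraic constraint: a^2 - 3(a+r)^2 + 3(a+2r)^2 - (a+3r)^2 = 0
# identically, which is what forces x - 3y + 3z - w = 0 in (2).
import math

for a in range(-5, 6):
    for r in range(-5, 6):
        assert a**2 - 3*(a+r)**2 + 3*(a+2*r)**2 - (a+3*r)**2 == 0

# (ii) The integral (2): parametrise the constraint surface by (x, y, z)
# with w = x - 3y + 3z mod 1, and compare a Riemann sum against (mean F)^4.
def F(t):  # smooth periodic test function of mean 1/2
    return 0.5 + 0.3 * math.cos(2 * math.pi * t) + 0.2 * math.cos(6 * math.pi * t)

M = 32
total = 0.0
for i in range(M):
    for j in range(M):
        for k in range(M):
            x, y, z = i / M, j / M, k / M
            w = (x - 3 * y + 3 * z) % 1.0
            total += F(x) * F(y) * F(z) * F(w)
integral = total / M ** 3
mean = sum(F(i / M) for i in range(M)) / M
print(integral, mean ** 4)  # the integral dominates (1/2)^4
```

(By expanding ${F}$ into Fourier series, the integral equals ${\sum_d |\hat F(d)|^2 |\hat F(3d)|^2 \geq |\hat F(0)|^4}$, which is the Cauchy-Schwarz lower bound in this case.)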

Due to the nature of the energy increment argument, it also becomes necessary to consider “locally quadratically structured” cases, in which ${[N]}$ is partitioned into some number of structured pieces ${B_c}$ (think of these as arithmetic progressions, or as “Bohr sets”), and on each piece ${B_c}$, ${1_A}$ behaves like a locally quadratic function such as ${F_c( \alpha_c n^2 )}$, where ${\alpha_c}$ now varies with ${c}$, and the mean of ${F_c}$ will be approximately ${\delta}$ on the average after averaging in ${c}$ (weighted by the size of the pieces ${B_c}$). Now one should select ${{\bf a}}$ and ${{\bf r}}$ in the following coupled manner: first one chooses ${{\bf a}}$ uniformly from ${[N]}$, then one defines ${{\bf c}}$ to be the label ${c}$ such that ${{\bf a} \in B_c}$, and then selects ${{\bf r}}$ uniformly from a set ${B_{c,\varepsilon}}$ which is related to ${B_c}$ in much the same way that ${[-\varepsilon N, \varepsilon N]}$ is related to ${[N]}$. If one does this correctly, the analogue of (2) becomes

$\displaystyle {\bf E} \int_{(x,y,z,w) \in ({\bf R}/{\bf Z})^4: x-3y+3z-w = 0} F_{\mathbf c}(x) F_{\mathbf c}(y) F_{\mathbf c}(z) F_{\mathbf c}(w),$

and one can again use Cauchy-Schwarz and Fubini’s theorem to conclude.

The general case proceeds, very roughly, by an iterative argument. At each stage of the iteration, one has some sort of quadratic model of ${1_A}$ which involves a decomposition of ${[N]}$ into structured pieces ${B_c}$, and a quadratic approximation to ${1_A}$ on each piece. If this approximation is accurate enough (or more precisely, if a certain (averaged) local Gowers uniformity norm ${U^3}$ of the error is small enough) to model the count in (1) (for random variables ${{\bf a}, {\bf r}}$ determined by the above partition of ${[N]}$ into pieces ${B_c}$), and if the frequencies (such as ${\alpha_c}$) involved in the quadratic approximation are “high rank” or “linearly independent over the rationals” in a suitably quantitative sense, then some version of the above arguments can be made to work. If there are some unwanted linear dependencies in the frequencies, we can do some linear algebra to eliminate one of the frequencies (using some geometry of numbers to keep the quantitative bounds under control) and continue the iteration. If instead the approximation is too inaccurate, then the error will be large in a certain averaged local Gowers uniformity norm ${U^3}$. A significant fraction of the paper is then devoted to establishing a quantitative inverse theorem for that norm that concludes (with good bounds) that the error must then locally correlate with locally quadratic phases, which can be used to refine the quadratic approximation to ${1_A}$ in a manner that significantly increases its “energy” (basically an ${L^2}$ norm). Such energy increments cannot continue indefinitely, and when they terminate we obtain the desired claim.

There are existing inverse theorems for ${U^3}$ type norms in the literature, going back to the pioneering work of Gowers mentioned previously, and relying on arithmetic combinatorics tools such as Freiman’s theorem and the Balog-Szemerédi-Gowers lemma, which are good for analysing the “${1\%}$-structured homomorphisms” that arise in Gowers’ argument. However, when we applied these methods to the local Gowers norms we obtained inferior quantitative results that were not strong enough for our application. Instead, we use arguments from a different paper of Gowers in which he tackled Szemerédi’s theorem for arbitrary length progressions. This method produces “${99\%}$-structured homomorphisms” associated to any function with large Gowers uniformity norm; however the catch is that such homomorphisms are initially supported only on a sparse unstructured set, rather than a structured set such as a Bohr set. To proceed further, one first has to locate inside the sparse unstructured set a sparse pseudorandom subset of a Bohr set, and then use “error-correction” type methods (such as “majority-vote” based algorithms) to locally upgrade this ${99\%}$-structured homomorphism on pseudorandom subsets of Bohr sets to a ${100\%}$-structured homomorphism on the entirety of a Bohr set. It is then possible to use some “approximate cohomology” tools to “integrate” these homomorphisms (and discern a key “local symmetry” property of these homomorphisms) to locate the desired local quadratic structure (in much the same fashion that a ${1}$-form on ${{\bf R}^n}$ that varies linearly with the coordinates can be integrated to be the derivative of a quadratic function if we know that the ${1}$-form is closed). These portions of the paper are unfortunately rather technical, but broadly follow the methods already used in previous literature.
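The “majority-vote” upgrading step can be illustrated in a much simpler toy setting (this sketch of ours is in the spirit of classical self-correction of homomorphisms, e.g. Blum-Luby-Rubinfeld, and is not the paper's actual construction): a function on ${{\bf Z}/N{\bf Z}}$ that agrees with a homomorphism ${x \mapsto ax}$ on ${99\%}$ of inputs can be upgraded to the exact homomorphism everywhere by voting over the identity ${f(x) = f(x+h) - f(h)}$:

```python
# Toy majority-vote self-correction: recover an exact homomorphism
# from a function that is correct on ~99% of Z/NZ.
import random
from collections import Counter

random.seed(1)
N, a = 997, 123                 # hypothetical modulus and hidden slope
f = [(a * x) % N for x in range(N)]
for x in random.sample(range(N), 9):   # corrupt roughly 1% of the values
    f[x] = random.randrange(N)

def corrected(x, votes=31):
    # each random shift h casts a ballot f(x+h) - f(h) for the true value;
    # a ballot is wrong only if h or x+h hits a corrupted point
    ballots = Counter()
    for _ in range(votes):
        h = random.randrange(N)
        ballots[(f[(x + h) % N] - f[h]) % N] += 1
    return ballots.most_common(1)[0][0]

assert all(corrected(x) == (a * x) % N for x in range(N))
print("recovered the exact homomorphism")
```

Of course, in the paper the analogous step is far more delicate, since the ${99\%}$-structured homomorphism is initially only available on a sparse pseudorandom subset of a Bohr set rather than on the whole group.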