[This blog post was written jointly by Terry Tao and Will Sawin.]

In the previous blog post, one of us (Terry) implicitly introduced a notion of rank for tensors which is a little different from the usual notion of tensor rank, and which (following BCCGNSU) we will call “slice rank”. This notion of rank could then be used to encode the Croot-Lev-Pach-Ellenberg-Gijswijt argument that uses the polynomial method to control capsets.

Since then, several papers have applied the slice rank method to further problems – to control tri-colored sum-free sets in abelian groups (BCCGNSU, KSS) and from there the triangle removal lemma in vector spaces over finite fields (FL), to control sunflowers (NS), and to bound progression-free sets in {p}-groups (P).

In this post we investigate the notion of slice rank more systematically. In particular, we show how to give lower bounds for the slice rank. In many cases, we can show that the upper bounds on slice rank given in the aforementioned papers are sharp to within a subexponential factor. This still leaves open the possibility of getting a better bound for the original combinatorial problem using the slice rank of some other tensor, but for very long arithmetic progressions (at least eight terms), we show that the slice rank method cannot improve over the trivial bound using any tensor.

It will be convenient to work in a “basis independent” formalism, namely working in the category of abstract finite-dimensional vector spaces over a fixed field {{\bf F}}. (In the applications to the capset problem one takes {{\bf F}={\bf F}_3} to be the finite field of three elements, but most of the discussion here applies to arbitrary fields.) Given {k} such vector spaces {V_1,\dots,V_k}, we can form the tensor product {\bigotimes_{i=1}^k V_i}, generated by the tensor products {v_1 \otimes \dots \otimes v_k} with {v_i \in V_i} for {i=1,\dots,k}, subject to the constraint that the tensor product operation {(v_1,\dots,v_k) \mapsto v_1 \otimes \dots \otimes v_k} is multilinear. For each {1 \leq j \leq k}, we have the smaller tensor products {\bigotimes_{1 \leq i \leq k: i \neq j} V_i}, as well as the {j^{th}} tensor product

\displaystyle \otimes_j: V_j \times \bigotimes_{1 \leq i \leq k: i \neq j} V_i \rightarrow \bigotimes_{i=1}^k V_i

defined in the obvious fashion. Elements of {\bigotimes_{i=1}^k V_i} of the form {v_j \otimes_j v_{\hat j}} for some {v_j \in V_j} and {v_{\hat j} \in \bigotimes_{1 \leq i \leq k: i \neq j} V_i} will be called rank one functions, and the slice rank (or rank for short) {\hbox{rank}(v)} of an element {v} of {\bigotimes_{i=1}^k V_i} is defined to be the least non-negative integer {r} such that {v} is a linear combination of {r} rank one functions. If {V_1,\dots,V_k} are finite-dimensional, then the rank is always well defined as a non-negative integer (in fact it cannot exceed {\min( \hbox{dim}(V_1), \dots, \hbox{dim}(V_k))}). It is also clearly subadditive:

\displaystyle \hbox{rank}(v+w) \leq \hbox{rank}(v) + \hbox{rank}(w). \ \ \ \ \ (1)

For {k=1}, {\hbox{rank}(v)} is {0} when {v} is zero, and {1} otherwise. For {k=2}, {\hbox{rank}(v)} is the usual rank of the {2}-tensor {v \in V_1 \otimes V_2} (which can for instance be identified with a linear map from {V_1} to the dual space {V_2^*}). The usual notion of tensor rank for higher order tensors uses complete tensor products {v_1 \otimes \dots \otimes v_k}, {v_i \in V_i} as the rank one objects, rather than {v_j \otimes_j v_{\hat j}}, giving a rank that is greater than or equal to the slice rank studied here.
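For the {k=2} case this can be checked by direct computation: the slice rank of a {2}-tensor is the rank of its coefficient matrix in any pair of bases. The following is a small Python sketch (the helper name is ours, and we work over the rationals rather than a general field {{\bf F}}), computing that rank by exact Gaussian elimination:

```python
from fractions import Fraction

def matrix_rank(rows):
    """Rank of a matrix over Q, by exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank, ncols = 0, len(m[0]) if m else 0
    for col in range(ncols):
        # find a pivot in this column at or below the current rank row
        pivot = next((r for r in range(rank, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = 1 / m[rank][col]  # Fraction arithmetic keeps this exact
        m[rank] = [inv * x for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [x - factor * y for x, y in zip(m[r], m[rank])]
        rank += 1
    return rank

# e1⊗e1 + e2⊗e2 (the 2x2 identity as a 2-tensor) has slice rank 2,
# while a tensor of the form u⊗w has slice rank 1.
print(matrix_rank([[1, 0], [0, 1]]))  # → 2
print(matrix_rank([[1, 2], [2, 4]]))  # → 1
```

Over {{\bf F}_3} one would instead reduce all arithmetic modulo {3}, but the elimination logic is identical.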

From basic linear algebra we have the following equivalences:

Lemma 1 Let {V_1,\dots,V_k} be finite-dimensional vector spaces over a field {{\bf F}}, let {v} be an element of {V_1 \otimes \dots \otimes V_k}, and let {r} be a non-negative integer. Then the following are equivalent:

  • (i) One has {\hbox{rank}(v) \leq r}.
  • (ii) One has a representation of the form

    \displaystyle v = \sum_{j=1}^k \sum_{s \in S_j} v_{j,s} \otimes_j v_{\hat j,s}

    where {S_1,\dots,S_k} are finite sets of total cardinality {|S_1|+\dots+|S_k|} at most {r}, and for each {1 \leq j \leq k} and {s \in S_j}, {v_{j,s} \in V_j} and {v_{\hat j,s} \in \bigotimes_{1 \leq i \leq k: i \neq j} V_i}.

  • (iii) One has

    \displaystyle v \in \sum_{j=1}^k U_j \otimes_j \bigotimes_{1 \leq i \leq k: i \neq j} V_i

    where for each {j=1,\dots,k}, {U_j} is a subspace of {V_j} of total dimension {\hbox{dim}(U_1)+\dots+\hbox{dim}(U_k)} at most {r}, and we view {U_j \otimes_j \bigotimes_{1 \leq i \leq k: i \neq j} V_i} as a subspace of {\bigotimes_{i=1}^k V_i} in the obvious fashion.

  • (iv) (Dual formulation) There exist subspaces {W_j} of the dual space {V_j^*} for {j=1,\dots,k}, of total dimension at least {\hbox{dim}(V_1)+\dots+\hbox{dim}(V_k) - r}, such that {v} is orthogonal to {\bigotimes_{j=1}^k W_j}, in the sense that one has the vanishing

    \displaystyle \langle \bigotimes_{j=1}^k w_j, v \rangle = 0

    for all {w_j \in W_j}, where {\langle, \rangle: \bigotimes_{j=1}^k V_j^* \times \bigotimes_{j=1}^k V_j \rightarrow {\bf F}} is the obvious pairing.

Proof: The equivalence of (i) and (ii) is clear from definition. To get from (ii) to (iii) one simply takes {U_j} to be the span of the {v_{j,s}}, and conversely to get from (iii) to (ii) one takes the {v_{j,s}} to be a basis of the {U_j} and computes {v_{\hat j,s}} by using a basis for the tensor product {\bigotimes_{j=1}^k U_j \otimes_j \bigotimes_{1 \leq i \leq k: i \neq j} V_i} consisting entirely of functions of the form {v_{j,s} \otimes_j e} for various {e}. To pass from (iii) to (iv) one takes {W_j} to be the annihilator {\{ w_j \in V_j^*: \langle w_j, v_j \rangle = 0 \forall v_j \in U_j \}} of {U_j}, and conversely to pass from (iv) to (iii) one takes {U_j} to be the annihilator {\{ v_j \in V_j: \langle w_j, v_j \rangle = 0 \forall w_j \in W_j \}} of {W_j}. \Box

One corollary of the formulation (iv) is that the set of tensors of slice rank at most {r} is Zariski closed (if the field {{\bf F}} is algebraically closed), and so the slice rank itself is a lower semi-continuous function. This is in contrast to the usual tensor rank, which is not necessarily semi-continuous.

Corollary 2 Let {V_1,\dots, V_k} be finite-dimensional vector spaces over an algebraically closed field {{\bf F}}. Let {r} be a nonnegative integer. The set of elements of {V_1 \otimes \dots \otimes V_k} of slice rank at most {r} is closed in the Zariski topology.

Proof: In view of the equivalence of (i) and (iv) in Lemma 1, this set is the union over tuples of integers {d_1,\dots,d_k} with {d_1 + \dots + d_k \geq \hbox{dim}(V_1)+\dots+\hbox{dim}(V_k) - r} of the projection from {\hbox{Gr}(d_1, V_1) \times \dots \times \hbox{Gr}(d_k, V_k) \times ( V_1 \otimes \dots \otimes V_k)} of the set of tuples {(W_1,\dots,W_k, v)} with {v} orthogonal to {W_1 \otimes \dots \otimes W_k}, where {\hbox{Gr}(d,V)} is the Grassmannian parameterizing {d}-dimensional subspaces of {V}.

One can check directly that the set of tuples {(W_1,\dots,W_k, v)} with {v} orthogonal to {W_1 \otimes \dots \otimes W_k} is Zariski closed in {\hbox{Gr}(d_1, V_1) \times \dots \times \hbox{Gr}(d_k, V_k) \times (V_1 \otimes \dots \otimes V_k)}, using a set of equations of the form {\langle \bigotimes_{j=1}^k w_j, v \rangle = 0} locally on {\hbox{Gr}(d_1, V_1) \times \dots \times \hbox{Gr}(d_k, V_k)}. Hence, because the Grassmannian is a complete variety, the projection of this set to {V_1 \otimes \dots \otimes V_k} is also Zariski closed. So the finite union over tuples {d_1,\dots,d_k} of these projections is also Zariski closed.

\Box

We also have good behaviour with respect to linear transformations:

Lemma 3 Let {V_1,\dots,V_k, W_1,\dots,W_k} be finite-dimensional vector spaces over a field {{\bf F}}, let {v} be an element of {V_1 \otimes \dots \otimes V_k}, and for each {1 \leq j \leq k}, let {\phi_j: V_j \rightarrow W_j} be a linear transformation, with {\bigotimes_{j=1}^k \phi_j: \bigotimes_{j=1}^k V_j \rightarrow \bigotimes_{j=1}^k W_j} the tensor product of these maps. Then

\displaystyle \hbox{rank}( (\bigotimes_{j=1}^k \phi_j)(v) ) \leq \hbox{rank}(v). \ \ \ \ \ (2)

Furthermore, if the {\phi_j} are all injective, then one has equality in (2).

Thus, for instance, the rank of a tensor {v \in \bigotimes_{j=1}^k V_j} is intrinsic in the sense that it is unaffected by any enlargements of the spaces {V_1,\dots,V_k}.

Proof: The bound (2) is clear from the formulation (ii) of rank in Lemma 1. For equality, apply (2) to the injective {\phi_j}, as well as to some arbitrarily chosen left inverses {\phi_j^{-1}: W_j \rightarrow V_j} of the {\phi_j}. \Box

Computing the rank of a tensor is difficult in general; however, the problem becomes a combinatorial one if one has a suitably sparse representation of that tensor in some basis, where we will measure sparsity by the property of being an antichain.

Proposition 4 Let {V_1,\dots,V_k} be finite-dimensional vector spaces over a field {{\bf F}}. For each {1 \leq j \leq k}, let {(v_{j,s})_{s \in S_j}} be a linearly independent set in {V_j} indexed by some finite set {S_j}. Let {\Gamma} be a subset of {S_1 \times \dots \times S_k}.

Let {v \in \bigotimes_{j=1}^k V_j} be a tensor of the form

\displaystyle v = \sum_{(s_1,\dots,s_k) \in \Gamma} c_{s_1,\dots,s_k} v_{1,s_1} \otimes \dots \otimes v_{k,s_k} \ \ \ \ \ (3)

where for each {(s_1,\dots,s_k)}, {c_{s_1,\dots,s_k}} is a coefficient in {{\bf F}}. Then one has

\displaystyle \hbox{rank}(v) \leq \min_{\Gamma = \Gamma_1 \cup \dots \cup \Gamma_k} |\pi_1(\Gamma_1)| + \dots + |\pi_k(\Gamma_k)| \ \ \ \ \ (4)

where the minimum ranges over all coverings of {\Gamma} by sets {\Gamma_1,\dots,\Gamma_k}, and {\pi_j: S_1 \times \dots \times S_k \rightarrow S_j} for {j=1,\dots,k} are the projection maps.

Now suppose that the coefficients {c_{s_1,\dots,s_k}} are all non-zero, that each of the {S_j} are equipped with a total ordering {\leq_j}, and {\Gamma'} is the set of maximal elements of {\Gamma}, thus there do not exist distinct {(s_1,\dots,s_k) \in \Gamma'}, {(t_1,\dots,t_k) \in \Gamma} such that {s_j \leq t_j} for all {j=1,\dots,k}. Then one has

\displaystyle \hbox{rank}(v) \geq \min_{\Gamma' = \Gamma_1 \cup \dots \cup \Gamma_k} |\pi_1(\Gamma_1)| + \dots + |\pi_k(\Gamma_k)|. \ \ \ \ \ (5)

In particular, if {\Gamma} is an antichain (i.e. every element is maximal), then equality holds in (4).
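Both sides of (4) and (5) can be computed by brute force for small examples. The Python sketch below (the helper names are ours) enumerates all assignments of each tuple of {\Gamma} to one of the {k} parts — this suffices, since the minimum over coverings is always attained by a partition — and also computes the set {\Gamma'} of maximal elements:

```python
from itertools import product

def maximal_elements(gamma):
    """Maximal tuples of gamma in the product (coordinatewise) order."""
    return {s for s in gamma
            if not any(s != t and all(a <= b for a, b in zip(s, t))
                       for t in gamma)}

def cover_min(gamma, k):
    """min over coverings gamma = G_1 ∪ ... ∪ G_k of sum_j |pi_j(G_j)|,
    by brute force over assignments of each tuple to one part."""
    gamma = list(gamma)
    best = None
    for assignment in product(range(k), repeat=len(gamma)):
        total = 0
        for j in range(k):
            part = [s for s, a in zip(gamma, assignment) if a == j]
            total += len({s[j] for s in part})  # |pi_j(G_j)|
        best = total if best is None else min(best, total)
    return best

# The diagonal in {0,1,2}^3 is an antichain under a suitable ordering,
# and the minimum in (4) is d = 3, matching the diagonal rank example.
diag = [(t, t, t) for t in range(3)]
print(cover_min(diag, 3))  # → 3
print(maximal_elements({(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)}))
```

Note that for {\Gamma = \{(1,0,0),(0,1,0),(0,0,1)\}} the minimum is {2} (place all three tuples in {\Gamma_1}), so at {n=1} the bound (4) can be far from the asymptotic exponential rate computed later.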

Proof: By Lemma 3 (or by enlarging the bases {v_{j,s_j}}), we may assume without loss of generality that each of the {V_j} is spanned by the {v_{j,s_j}}. By relabeling, we can also assume that each {S_j} is of the form

\displaystyle S_j = \{1,\dots,|S_j|\}

with the usual ordering, and by Lemma 3 we may take each {V_j} to be {{\bf F}^{|S_j|}}, with {v_{j,s_j} = e_{s_j}} the standard basis.

Let {r} denote the rank of {v}. To show (4), it suffices to show the inequality

\displaystyle r \leq |\pi_1(\Gamma_1)| + \dots + |\pi_k(\Gamma_k)| \ \ \ \ \ (6)

for any covering of {\Gamma} by {\Gamma_1,\dots,\Gamma_k}. By removing repeated elements we may assume that the {\Gamma_i} are disjoint. For each {1 \leq j \leq k}, the tensor

\displaystyle \sum_{(s_1,\dots,s_k) \in \Gamma_j} c_{s_1,\dots,s_k} e_{s_1} \otimes \dots \otimes e_{s_k}

can (after collecting terms) be written as

\displaystyle \sum_{s_j \in \pi_j(\Gamma_j)} e_{s_j} \otimes_j v_{\hat j,s_j}

for some {v_{\hat j, s_j} \in \bigotimes_{1 \leq i \leq k: i \neq j} {\bf F}^{|S_i|}}. Summing and using (1), we conclude the inequality (6).

Now assume that the {c_{s_1,\dots,s_k}} are all non-zero and that {\Gamma'} is the set of maximal elements of {\Gamma}. To conclude the proposition, it suffices to show that the reverse inequality

\displaystyle r \geq |\pi_1(\Gamma_1)| + \dots + |\pi_k(\Gamma_k)| \ \ \ \ \ (7)

 holds for some {\Gamma_1,\dots,\Gamma_k} covering {\Gamma'}. By Lemma 1(iv), there exist subspaces {W_j} of {({\bf F}^{|S_j|})^*} whose dimension {d_j := \hbox{dim}(W_j)} sums to

\displaystyle \sum_{j=1}^k d_j = \sum_{j=1}^k |S_j| - r \ \ \ \ \ (8)

such that {v} is orthogonal to {\bigotimes_{j=1}^k W_j}.

Let {1 \leq j \leq k}. Using Gaussian elimination, one can find a basis {w_{j,1},\dots,w_{j,d_j}} of {W_j} whose representation in the standard dual basis {e^*_{1},\dots,e^*_{|S_j|}} of {({\bf F}^{|S_j|})^*} is in row-echelon form. That is to say, there exist natural numbers

\displaystyle 1 \leq s_{j,1} < \dots < s_{j,d_j} \leq |S_j|

such that for all {1 \leq t \leq d_j}, {w_{j,t}} is a linear combination of the dual vectors {e^*_{s_{j,t}},\dots,e^*_{|S_j|}}, with the {e^*_{s_{j,t}}} coefficient equal to one.

We now claim that {\prod_{j=1}^k \{ s_{j,t}: 1 \leq t \leq d_j \}} is disjoint from {\Gamma'}. Suppose for contradiction that this were not the case, thus there exists {1 \leq t_j \leq d_j} for each {1 \leq j \leq k} such that

\displaystyle (s_{1,t_1}, \dots, s_{k,t_k}) \in \Gamma'.

As {\Gamma'} is the set of maximal elements of {\Gamma}, this implies that

\displaystyle (s'_1,\dots,s'_k) \not \in \Gamma

for any tuple {(s'_1,\dots,s'_k) \in \prod_{j=1}^k \{ s_{j,t_j}, \dots, |S_j|\}} other than {(s_{1,t_1}, \dots, s_{k,t_k})}. On the other hand, we know that {w_{j,t_j}} is a linear combination of {e^*_{s_{j,t_j}},\dots,e^*_{|S_j|}}, with the {e^*_{s_{j,t_j}}} coefficient one. We conclude that the tensor product {\bigotimes_{j=1}^k w_{j,t_j}} is equal to

\displaystyle \bigotimes_{j=1}^k e^*_{s_{j,t_j}}

plus a linear combination of other tensor products {\bigotimes_{j=1}^k e^*_{s'_j}} with {(s'_1,\dots,s'_k)} not in {\Gamma}. Taking inner products with (3), we conclude that {\langle v, \bigotimes_{j=1}^k w_{j,t_j}\rangle = c_{s_{1,t_1},\dots,s_{k,t_k}} \neq 0}, contradicting the fact that {v} is orthogonal to {\bigotimes_{j=1}^k W_j}. Thus {\prod_{j=1}^k \{ s_{j,t}: 1 \leq t \leq d_j \}} is indeed disjoint from {\Gamma'}.

For each {1 \leq j \leq k}, let {\Gamma_j} denote the set of tuples {(s_1,\dots,s_k)} in {\Gamma'} with {s_j} not of the form {s_{j,t}} for any {1 \leq t \leq d_j}. From the previous discussion we see that the {\Gamma_j} cover {\Gamma'}, and we clearly have {|\pi_j(\Gamma_j)| \leq |S_j| - d_j}, and hence from (8) we have (7) as claimed. \Box

As an instance of this proposition, we recover the computation of diagonal rank from the previous blog post:

Example 5 Let {V_1,\dots,V_k} be finite-dimensional vector spaces over a field {{\bf F}} for some {k \geq 2}. Let {d} be a natural number, and for {1 \leq j \leq k}, let {e_{j,1},\dots,e_{j,d}} be a linearly independent set in {V_j}. Let {c_1,\dots,c_d} be non-zero coefficients in {{\bf F}}. Then

\displaystyle \sum_{t=1}^d c_t e_{1,t} \otimes \dots \otimes e_{k,t}

has rank {d}. Indeed, one applies the proposition with {S_1,\dots,S_k} all equal to {\{1,\dots,d\}}, with {\Gamma} the diagonal in {S_1 \times \dots \times S_k}; this is an antichain if we give one of the {S_i} the standard ordering, and another of the {S_i} the opposite ordering (and ordering the remaining {S_i} arbitrarily). In this case, the {\pi_j} are all bijective, and so it is clear that the minimum in (4) is simply {d}.

The combinatorial minimisation problem in the above proposition can be solved asymptotically when working with tensor powers, using the notion of the Shannon entropy {h(X)} of a discrete random variable {X}.

Proposition 6 Let {V_1,\dots,V_k} be finite-dimensional vector spaces over a field {{\bf F}}. For each {1 \leq j \leq k}, let {(v_{j,s})_{s \in S_j}} be a linearly independent set in {V_j} indexed by some finite set {S_j}. Let {\Gamma} be a non-empty subset of {S_1 \times \dots \times S_k}.

Let {v \in \bigotimes_{j=1}^k V_j} be a tensor of the form (3) for some coefficients {c_{s_1,\dots,s_k}}. For each natural number {n}, let {v^{\otimes n}} be the tensor power of {n} copies of {v}, viewed as an element of {\bigotimes_{j=1}^k V_j^{\otimes n}}. Then

\displaystyle \hbox{rank}(v^{\otimes n}) \leq \exp( (H + o(1)) n ) \ \ \ \ \ (9)

as {n \rightarrow \infty}, where {H} is the quantity

\displaystyle H = \hbox{sup}_{(X_1,\dots,X_k)} \hbox{min}( h(X_1), \dots, h(X_k) ) \ \ \ \ \ (10)

and {(X_1,\dots,X_k)} range over the random variables taking values in {\Gamma}.

Now suppose that the coefficients {c_{s_1,\dots,s_k}} are all non-zero and that each of the {S_j} are equipped with a total ordering {\leq_j}. Let {\Gamma'} be the set of maximal elements of {\Gamma} in the product ordering, and let {H' = \hbox{sup}_{(X_1,\dots,X_k)} \hbox{min}( h(X_1), \dots, h(X_k) ) } where {(X_1,\dots,X_k)} range over random variables taking values in {\Gamma'}. Then

\displaystyle \hbox{rank}(v^{\otimes n}) \geq \exp( (H' + o(1)) n ) \ \ \ \ \ (11)

as {n \rightarrow \infty}. In particular, if the maximizer in (10) is supported on the maximal elements of {\Gamma} (which always holds if {\Gamma} is an antichain in the product ordering), then equality holds in (9).

Proof: It will suffice to show that

\displaystyle \min_{\Gamma^n = \Gamma_{n,1} \cup \dots \cup \Gamma_{n,k}} |\pi_{n,1}(\Gamma_{n,1})| + \dots + |\pi_{n,k}(\Gamma_{n,k})| = \exp( (H + o(1)) n ) \ \ \ \ \ (12)

as {n \rightarrow \infty}, where {\pi_{n,j}: \prod_{i=1}^k S_i^n \rightarrow S_j^n} is the projection map; the same computation then applies to {\Gamma'} and {H'}. Applying Proposition 4, using the lexicographical ordering on {S_j^n} and noting that, if {\Gamma'} is the set of maximal elements of {\Gamma}, then {(\Gamma')^n} is the set of maximal elements of {\Gamma^n}, we obtain both (9) and (11).

We first prove the lower bound. By compactness (and the continuity properties of entropy), we can find a random variable {(X_1,\dots,X_k)} taking values in {\Gamma} such that

\displaystyle H = \hbox{min}( h(X_1), \dots, h(X_k) ). \ \ \ \ \ (13)

Let {\varepsilon = o(1)} be a small positive quantity that goes to zero sufficiently slowly with {n}. Let {\Sigma = \Sigma_{X_1,\dots,X_k} \subset \Gamma^n} denote the set of all tuples {(a_1, \dots, a_n)} in {\Gamma^n} that are within {\varepsilon} of being distributed according to the law of {(X_1,\dots,X_k)}, in the sense that for all {a \in \Gamma}, one has

\displaystyle |\frac{|\{ 1 \leq l \leq n: a_l = a \}|}{n} - {\bf P}( (X_1,\dots,X_k) = a )| \leq \varepsilon.

By the asymptotic equipartition property, the cardinality of {\Sigma} can be computed to be

\displaystyle |\Sigma| = \exp( (h( X_1,\dots,X_k)+o(1)) n ) \ \ \ \ \ (14)

if {\varepsilon} goes to zero slowly enough. Similarly one has

\displaystyle |\pi_{n,j}(\Sigma)| = \exp( (h( X_j)+o(1)) n ),

and for each {s_{n,j} \in \pi_{n,j}(\Sigma)}, one has

\displaystyle |\{ \sigma \in \Sigma: \pi_{n,j}(\sigma) = s_{n,j} \}| \leq \exp( (h( X_1,\dots,X_k)-h(X_j)+o(1)) n ). \ \ \ \ \ (15)

Now let {\Gamma^n = \Gamma_{n,1} \cup \dots \cup \Gamma_{n,k}} be an arbitrary covering of {\Gamma^n}. By the pigeonhole principle, there exists {1 \leq j \leq k} such that

\displaystyle |\Gamma_{n,j} \cap \Sigma| \geq \frac{1}{k} |\Sigma|

and hence by (14), (15)

\displaystyle |\pi_{n,j}( \Gamma_{n,j} \cap \Sigma)| \geq \frac{1}{k} \exp( (h( X_j)+o(1)) n )

which by (13) implies that

\displaystyle |\pi_{n,1}(\Gamma_{n,1})| + \dots + |\pi_{n,k}(\Gamma_{n,k})| \geq \exp( (H + o(1)) n )

(noting that the {\frac{1}{k}} factor can be absorbed into the {o(1)} error). This gives the lower bound in (12).

Now we prove the upper bound. We can cover {\Gamma^n} by {O(\exp(o(n)))} sets of the form {\Sigma_{X_1,\dots,X_k}} for various choices of random variables {(X_1,\dots,X_k)} taking values in {\Gamma}. For each such random variable {(X_1,\dots,X_k)}, we can find {1 \leq j \leq k} such that {h(X_j) \leq H}; we then place all of {\Sigma_{X_1,\dots,X_k}} in {\Gamma_{n,j}}. It is then clear that the {\Gamma_{n,j}} cover {\Gamma^n} and that

\displaystyle |\pi_{n,j}(\Gamma_{n,j})| \leq \exp( (H+o(1)) n )

for all {j=1,\dots,k}, giving the required upper bound. \Box
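The cardinality estimate (14) can be illustrated concretely. The Python sketch below (helper names ours; no claim of efficiency) counts exactly the strings of length {n} whose empirical frequencies lie within {\varepsilon} of a given law, by summing multinomial coefficients over the admissible type classes; since the convergence in (14) is slow, {\frac{1}{n} \log |\Sigma|} only roughly approaches the entropy at small {n}:

```python
from math import log, comb

def entropy(p):
    """Shannon entropy h(X) = sum p log(1/p), natural logarithm."""
    return sum(q * log(1 / q) for q in p if q > 0)

def typical_count(p, n, eps):
    """Exact |Sigma|: number of strings in {0,...,len(p)-1}^n whose
    empirical frequency of every symbol is within eps of p."""
    total = 0

    def rec(i, left, counts):
        nonlocal total
        if i == len(p) - 1:
            counts = counts + [left]
            if all(abs(c / n - q) <= eps for c, q in zip(counts, p)):
                mult, rem = 1, n  # multinomial coefficient n!/(n_0!...n_{m-1}!)
                for c in counts:
                    mult *= comb(rem, c)
                    rem -= c
                total += mult
            return
        for c in range(left + 1):
            rec(i + 1, left - c, counts + [c])

    rec(0, n, [])
    return total

p, n, eps = (0.5, 0.25, 0.25), 16, 0.05
count = typical_count(p, n, eps)
print(count)                       # → 900900 (only the type (8,4,4) qualifies)
print(log(count) / n, entropy(p))  # 0.857... vs h = 1.039...
```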

It is of interest to compute the quantity {H} in (10). We have the following criterion for when a maximiser occurs:

Proposition 7 Let {S_1,\dots,S_k} be finite sets, and {\Gamma \subset S_1 \times \dots \times S_k} be non-empty. Let {H} be the quantity in (10). Let {(X_1,\dots,X_k)} be a random variable taking values in {\Gamma}, and let {\Gamma^* \subset \Gamma} denote the essential range of {(X_1,\dots,X_k)}, that is to say the set of tuples {(t_1,\dots,t_k)\in \Gamma} such that {{\bf P}( X_1=t_1, \dots, X_k = t_k)} is non-zero. Then the following are equivalent:

  • (i) {(X_1,\dots,X_k)} attains the maximum in (10).
  • (ii) There exist weights {w_1,\dots,w_k \geq 0} and a finite quantity {D \geq 0}, such that {w_j=0} whenever {h(X_j) > \min(h(X_1),\dots,h(X_k))}, and such that

    \displaystyle \sum_{j=1}^k w_j \log \frac{1}{{\bf P}(X_j = t_j)} \leq D \ \ \ \ \ (16)

    for all {(t_1,\dots,t_k) \in \Gamma}, with equality if {(t_1,\dots,t_k) \in \Gamma^*}. (In particular, {w_j} must vanish if there exists a {t_j \in \pi_j(\Gamma)} with {{\bf P}(X_j=t_j)=0}.)

Furthermore, when (i) and (ii) hold, one has

\displaystyle D = H \sum_{j=1}^k w_j. \ \ \ \ \ (17)

Proof: We first show that (i) implies (ii). The function {p \mapsto p \log \frac{1}{p}} is concave on {[0,1]}. As a consequence, if we define {C} to be the set of tuples {(h_1,\dots,h_k) \in [0,+\infty)^k} such that there exists a random variable {(Y_1,\dots,Y_k)} taking values in {\Gamma} with {h(Y_j) \geq h_j}, then {C} is convex. On the other hand, by (10), {C} is disjoint from the orthant {(H,+\infty)^k}. Thus, by the hyperplane separation theorem, we conclude that there exists a half-space

\displaystyle \{ (h_1,\dots,h_k) \in {\bf R}^k: w_1 h_1 + \dots + w_k h_k \geq c \},

where {w_1,\dots,w_k} are reals that are not all zero, and {c} is another real, which contains {(h(X_1),\dots,h(X_k))} on its boundary and {(H,+\infty)^k} in its interior, such that {C} avoids the interior of the half-space. Since {(h(X_1),\dots,h(X_k))} is also on the boundary of {(H,+\infty)^k}, we see that the {w_j} are non-negative, and that {w_j = 0} whenever {h(X_j) \neq H}.

By construction, the quantity

\displaystyle w_1 h(Y_1) + \dots + w_k h(Y_k)

is maximised when {(Y_1,\dots,Y_k) = (X_1,\dots,X_k)}. At this point we could use the method of Lagrange multipliers to obtain the required constraints, but because we have some boundary conditions on the {(Y_1,\dots,Y_k)} (namely, that the probability that they attain a given element of {\Gamma} has to be non-negative) we will work things out by hand. Let {t = (t_1,\dots,t_k)} be an element of {\Gamma}, and {s = (s_1,\dots,s_k)} an element of {\Gamma^*}. For {\varepsilon>0} small enough, we can form a random variable {(Y_1,\dots,Y_k)} taking values in {\Gamma}, whose probability distribution is the same as that for {(X_1,\dots,X_k)} except that the probability of attaining {(t_1,\dots,t_k)} is increased by {\varepsilon}, and the probability of attaining {(s_1,\dots,s_k)} is decreased by {\varepsilon}. If there is any {j} for which {{\bf P}(X_j = t_j)=0} and {w_j \neq 0}, then one can check that

\displaystyle w_1 h(Y_1) + \dots + w_k h(Y_k) - (w_1 h(X_1) + \dots + w_k h(X_k)) \gg \varepsilon \log \frac{1}{\varepsilon}

for sufficiently small {\varepsilon}, contradicting the maximality of {(X_1,\dots,X_k)}; thus we have {{\bf P}(X_j = t_j) > 0} whenever {w_j \neq 0}. Taylor expansion then gives

\displaystyle w_1 h(Y_1) + \dots + w_k h(Y_k) - (w_1 h(X_1) + \dots + w_k h(X_k)) = (A_t - A_s) \varepsilon + O(\varepsilon^2)

for small {\varepsilon}, where

\displaystyle A_t := \sum_{j=1}^k w_j \log \frac{1}{{\bf P}(X_j = t_j)}

and similarly for {A_s}. We conclude that {A_t \leq A_s} for all {s \in \Gamma^*} and {t \in \Gamma}, thus there exists a quantity {D} such that {A_s = D} for all {s \in \Gamma^*}, and {A_t \leq D} for all {t \in \Gamma}. By construction {D} must be nonnegative. Sampling {(t_1,\dots,t_k)} using the distribution of {(X_1,\dots,X_k)}, one has

\displaystyle \sum_{j=1}^k w_j \log \frac{1}{{\bf P}(X_j = t_j)} = D

almost surely; taking expectations we conclude that

\displaystyle \sum_{j=1}^k w_j \sum_{t_j \in S_j} {\bf P}( X_j = t_j) \log \frac{1}{{\bf P}(X_j = t_j)} = D.

The inner sum is {h(X_j)}, which equals {H} when {w_j} is non-zero, giving (17).

Now we show conversely that (ii) implies (i). As noted previously, the function {p \mapsto p \log \frac{1}{p}} is concave on {[0,1]}, with derivative {\log \frac{1}{p} - 1}. This gives the inequality

\displaystyle q \log \frac{1}{q} \leq p \log \frac{1}{p} + (q-p) ( \log \frac{1}{p} - 1 ) \ \ \ \ \ (18)

for any {0 \leq p,q \leq 1} (note the right-hand side may be infinite when {p=0} and {q>0}). Let {(Y_1,\dots,Y_k)} be any random variable taking values in {\Gamma}; then applying the above inequality with {p = {\bf P}(X_j = t_j)} and {q = {\bf P}( Y_j = t_j )}, multiplying by {w_j}, and summing over {j=1,\dots,k} and {t_j \in S_j} gives

\displaystyle \sum_{j=1}^k w_j h(Y_j) \leq \sum_{j=1}^k w_j h(X_j)

\displaystyle + \sum_{j=1}^k \sum_{t_j \in S_j} w_j ({\bf P}(Y_j = t_j) - {\bf P}(X_j = t_j)) ( \log \frac{1}{{\bf P}(X_j=t_j)} - 1 ).

By construction, one has

\displaystyle \sum_{j=1}^k w_j h(X_j) = \min(h(X_1),\dots,h(X_k)) \sum_{j=1}^k w_j

and

\displaystyle \sum_{j=1}^k w_j h(Y_j) \geq \min(h(Y_1),\dots,h(Y_k)) \sum_{j=1}^k w_j

so to prove that {\min(h(Y_1),\dots,h(Y_k)) \leq \min(h(X_1),\dots,h(X_k))} (which would give (i)), it suffices to show that

\displaystyle \sum_{j=1}^k \sum_{t_j \in S_j} w_j ({\bf P}(Y_j = t_j) - {\bf P}(X_j = t_j)) ( \log \frac{1}{{\bf P}(X_j=t_j)} - 1 ) \leq 0,

or equivalently that the quantity

\displaystyle \sum_{j=1}^k \sum_{t_j \in S_j} w_j {\bf P}(Y_j = t_j) ( \log \frac{1}{{\bf P}(X_j=t_j)} - 1 )

is maximised when {(Y_1,\dots,Y_k) = (X_1,\dots,X_k)}. Since

\displaystyle \sum_{j=1}^k \sum_{t_j \in S_j} w_j {\bf P}(Y_j = t_j) = \sum_{j=1}^k w_j

it suffices to show this claim for the quantity

\displaystyle \sum_{j=1}^k \sum_{t_j \in S_j} w_j {\bf P}(Y_j = t_j) \log \frac{1}{{\bf P}(X_j=t_j)}.

One can view this quantity as

\displaystyle {\bf E}_{(Y_1,\dots,Y_k)} \sum_{j=1}^k w_j \log \frac{1}{{\bf P}_{X_j}(X_j=Y_j)}.

By (ii), this quantity is bounded by {D}, with equality if {(Y_1,\dots,Y_k)} is equal to {(X_1,\dots,X_k)} (and is in particular ranging in {\Gamma^*}), giving the claim. \Box

The second half of the proof of Proposition 7 only uses the marginal distributions {{\bf P}(X_j=t_j)} and equation (16), not the actual distribution of {(X_1,\dots,X_k)}, so it can also be used to prove an upper bound on {H} when the exact maximizing distribution is not known, given suitable probability distributions in each variable. The logarithm of the probability distribution here plays the role that the weight functions do in BCCGNSU.
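As a sketch of how this can be used in practice (the helper below is ours, not from BCCGNSU): given trial marginals {p_j} on each {S_j} and weights {w_j}, take {D} to be the maximum of {\sum_j w_j \log \frac{1}{p_j(t_j)}} over {\Gamma}; the argument then yields {H \leq D / \sum_j w_j}, whether or not the trial marginals are optimal:

```python
from math import log

def h_upper_bound(gamma, marginals, weights):
    """Upper bound on H of (10): if sum_j w_j log(1/p_j(t_j)) <= D on all
    of Gamma, then H <= D / sum_j w_j (second half of Proposition 7)."""
    D = max(sum(w * log(1 / p[t]) for w, p, t in zip(weights, marginals, s))
            for s in gamma)
    return D / sum(weights)

# a small example: trial marginals P(0)=2/3, P(1)=1/3 in each coordinate,
# weights all equal to 1
gamma = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)]
p = {0: 2 / 3, 1: 1 / 3}
bound = h_upper_bound(gamma, [p, p, p], [1, 1, 1])
print(bound, log(3) - (2 / 3) * log(2))  # bound equals log(3/2^(2/3))
```

As it happens, these trial marginals give exactly the value {\log(3/2^{2/3})} that reappears in Proposition 10 below.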

Remark 8 Suppose one is in the situation of (i) and (ii) above; assume the nondegeneracy condition that {H} is positive (or equivalently that {D} is positive). We can assign a “degree” {d_j(t_j)} to each element {t_j \in S_j} by the formula

\displaystyle d_j(t_j) := w_j \log \frac{1}{{\bf P}(X_j = t_j)}, \ \ \ \ \ (19)

then every tuple {(t_1,\dots,t_k)} in {\Gamma} has total degree at most {D}, and those tuples in {\Gamma^*} have degree exactly {D}. In particular, every tuple in {\Gamma^n} has degree at most {nD}, and hence by (17), each such tuple has a {j}-component of degree less than or equal to {nHw_j} for some {j} with {w_j>0}. On the other hand, we can compute from (19) and the fact that {h(X_j) = H} for {w_j > 0} that {Hw_j = {\bf E} d_j(X_j)}. Thus, by asymptotic equipartition, and assuming {w_j \neq 0}, the number of “monomials” in {S_j^n} of total degree at most {nHw_j} is at most {\exp( (h(X_j)+o(1)) n )}; one can in fact use (19) and (18) to show that this is an equality. This gives a direct way to cover {\Gamma^n} by sets {\Gamma_{n,1},\dots,\Gamma_{n,k}} with {|\pi_j(\Gamma_{n,j})| \leq \exp( (H+o(1)) n)}, which is in the spirit of the Croot-Lev-Pach-Ellenberg-Gijswijt arguments from the previous post.

We can now show that the rank computation for the capset problem is sharp:

Proposition 9 Let {V_1^{\otimes n} = V_2^{\otimes n} = V_3^{\otimes n}} denote the space of functions from {{\bf F}_3^n} to {{\bf F}_3}. Then the function {(x,y,z) \mapsto \delta_{0^n}(x+y+z)} from {{\bf F}_3^n \times {\bf F}_3^n \times {\bf F}_3^n} to {{\bf F}_3}, viewed as an element of {V_1^{\otimes n} \otimes V_2^{\otimes n} \otimes V_3^{\otimes n}}, has rank {\exp( (H^*+o(1)) n )} as {n \rightarrow \infty}, where {H^* \approx 1.013455} is given by the formula

\displaystyle H^* = \alpha \log \frac{1}{\alpha} + \beta \log \frac{1}{\beta} + \gamma \log \frac{1}{\gamma} \ \ \ \ \ (20)

with

\displaystyle \alpha = \frac{32}{3(15 + \sqrt{33})} \approx 0.51419

\displaystyle \beta = \frac{4(\sqrt{33}-1)}{3(15+\sqrt{33})} \approx 0.30495

\displaystyle \gamma = \frac{(\sqrt{33}-1)^2}{6(15+\sqrt{33})} \approx 0.18086.

Proof: In {{\bf F}_3 \times {\bf F}_3 \times {\bf F}_3}, we have

\displaystyle \delta_0(x+y+z) = 1 - (x+y+z)^2

\displaystyle = (1-x^2) - y^2 - z^2 + xy + yz + zx.

Thus, if we let {V_1=V_2=V_3} be the space of functions from {{\bf F}_3} to {{\bf F}_3} (with domain variable denoted {x,y,z} respectively), and define the basis functions

\displaystyle v_{1,0} := 1; v_{1,1} := x; v_{1,2} := x^2

\displaystyle v_{2,0} := 1; v_{2,1} := y; v_{2,2} := y^2

\displaystyle v_{3,0} := 1; v_{3,1} := z; v_{3,2} := z^2

of {V_1,V_2,V_3} indexed by {S_1=S_2=S_3 := \{ 0,1,2\}} (with the usual ordering), respectively, and set {\Gamma \subset S_1 \times S_2 \times S_3} to be the set

\displaystyle \{ (2,0,0), (0,2,0), (0,0,2), (1,1,0), (0,1,1), (1,0,1),(0,0,0) \}

then {\delta_0(x+y+z)} is a linear combination of the {v_{1,t_1} \otimes v_{2,t_2} \otimes v_{3,t_3}} with {(t_1,t_2,t_3) \in \Gamma}, and all coefficients non-zero. The set of maximal elements is then {\Gamma'= \{ (2,0,0), (0,2,0), (0,0,2), (1,1,0), (0,1,1), (1,0,1) \}}. We will show that the quantity {H} of (10) agrees with the quantity {H^*} of (20), and that the optimizing distribution is supported on {\Gamma'}, so that by Proposition 6 the rank of {\delta_{0^n}(x+y+z)} is {\exp( (H+o(1)) n)}.

To compute the quantity in (10), we use the criterion in Proposition 7. We take {(X_1,X_2,X_3)} to be the random variable taking values in {\Gamma} that attains each of the values {(2,0,0), (0,2,0), (0,0,2)} with probability {\gamma \approx 0.18086}, and each of {(1,1,0), (0,1,1), (1,0,1)} with probability {\alpha - 2\gamma = \beta/2 \approx 0.15247}; then each of the {X_j} attains the values {0,1,2} with probabilities {\alpha,\beta,\gamma} respectively, so in particular {h(X_1)=h(X_2)=h(X_3)} is equal to the quantity {H^*} in (20). If we now set {w_1 = w_2 = w_3 := 1} and

\displaystyle D := 2\log \frac{1}{\alpha} + \log \frac{1}{\gamma} = \log \frac{1}{\alpha} + 2 \log \frac{1}{\beta} = 3H^* \approx 3.04036

we can verify the condition (16), with equality for all {(t_1,t_2,t_3) \in \Gamma'} and strict inequality for the remaining element {(0,0,0)} of {\Gamma}, which from (17) gives {H=H'=H^*} as desired. \Box
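The numerology above is easy to sanity-check. The following Python snippet (a verification sketch, not part of the proof) confirms that {\alpha+\beta+\gamma=1} and {\alpha - 2\gamma = \beta/2}, evaluates {H^*}, and checks that the two expressions for {D} in the last display agree with {3H^*}:

```python
from math import sqrt, log

s = sqrt(33)
alpha = 32 / (3 * (15 + s))
beta = 4 * (s - 1) / (3 * (15 + s))
gamma = (s - 1) ** 2 / (6 * (15 + s))

# the marginals form a probability distribution, and alpha - 2*gamma = beta/2,
# as needed for the joint distribution on Gamma' to have these marginals
assert abs(alpha + beta + gamma - 1) < 1e-12
assert abs(alpha - 2 * gamma - beta / 2) < 1e-12

H = sum(q * log(1 / q) for q in (alpha, beta, gamma))
print(H)  # ≈ 1.01345

# equality in (16) with w_1 = w_2 = w_3 = 1: a tuple like (2,0,0) has degree
# log(1/gamma) + 2 log(1/alpha), a tuple like (1,1,0) has degree
# log(1/alpha) + 2 log(1/beta), and both equal D = 3 H^*
D1 = log(1 / gamma) + 2 * log(1 / alpha)
D2 = log(1 / alpha) + 2 * log(1 / beta)
print(D1, D2, 3 * H)  # all three agree
```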

This statement already follows from the result of Kleinberg-Sawin-Speyer, which gives a “tri-colored sum-free set” in {{\bf F}_3^n} of size {\exp((H^*+o(1))n)}, as the slice rank of this tensor is an upper bound for the size of a tri-colored sum-free set. If one were to go over the proofs more carefully to evaluate the subexponential factors, this argument would give a stronger lower bound than KSS, as it does not deal with the substantial loss that comes from Behrend’s construction. However, because it actually constructs a set, the KSS result rules out more possible approaches to give an exponential improvement of the upper bound for capsets. The lower bound on slice rank shows that the bound cannot be improved using only the slice rank of this particular tensor, whereas KSS shows that the bound cannot be improved using any method that does not take advantage of the “single-colored” nature of the problem.

We can also show that the slice rank upper bound in a result of Naslund-Sawin is similarly sharp:

Proposition 10 Let {V_1^{\otimes n} = V_2^{\otimes n} = V_3^{\otimes n}} denote the space of functions from {\{0,1\}^n} to {\mathbb C}. Then the function {(x,y,z) \mapsto \prod_{i=1}^n (x_i+y_i+z_i-1)} from {\{0,1\}^n \times \{0,1\}^n \times \{0,1\}^n \rightarrow \mathbb C}, viewed as an element of {V_1^{\otimes n} \otimes V_2^{\otimes n} \otimes V_3^{\otimes n}}, has slice rank {(3/2^{2/3})^n e^{o(n)}}.

Proof: Let {v_{1,0}=1} and {v_{1,1}=x} be a basis for the space {V_1} of functions on {\{0,1\}}, itself indexed by {S_1=\{0,1\}}. Choose similar bases for {V_2} and {V_3}, with {v_{2,0}=1, v_{2,1}=y} and {v_{3,0}=1,v_{3,1}=z-1}.

Set {\Gamma = \{(1,0,0),(0,1,0),(0,0,1)\}}. Then {x+y+z-1} is a linear combination of the {v_{1,t_1} \otimes v_{2,t_2} \otimes v_{3,t_3}} with {(t_1,t_2,t_3) \in \Gamma}, with all coefficients non-zero. Order {S_1,S_2,S_3} the usual way so that {\Gamma} is an antichain. We will show that the quantity {H} of (10) is {\log(3/2^{2/3})}, so that applying the last statement of Proposition 6, we conclude that the slice rank of {\prod_{i=1}^n (x_i+y_i+z_i-1)} is {\exp( (\log(3/2^{2/3})+o(1)) n)= (3/2^{2/3})^n e^{o(n)}}.
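As a quick sanity check (ours, not part of the original proof), one can verify this decomposition numerically: the three rank-one functions indexed by {\Gamma}, namely {x \cdot 1 \cdot 1}, {1 \cdot y \cdot 1}, and {1 \cdot 1 \cdot (z-1)}, sum to {x+y+z-1} at every point of {\{0,1\}^3}.

```python
from itertools import product

# Basis functions on {0,1}: v_{1,0} = v_{2,0} = v_{3,0} = 1,
# v_{1,1} = x, v_{2,1} = y, v_{3,1} = z - 1.
# The tuples of Gamma = {(1,0,0),(0,1,0),(0,0,1)} give the rank-one
# functions x*1*1, 1*y*1, 1*1*(z-1); their sum should equal
# x + y + z - 1 at every point of {0,1}^3.
for x, y, z in product((0, 1), repeat=3):
    decomposition = x * 1 * 1 + 1 * y * 1 + 1 * 1 * (z - 1)
    assert decomposition == x + y + z - 1
print("decomposition verified")
```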

Let {(X_1,X_2,X_3)} be the random variable taking values in {\Gamma} that attains each of the values {(1,0,0),(0,1,0),(0,0,1)} with probability {1/3}. Then each of the {X_i} attains the value {1} with probability {1/3} and {0} with probability {2/3}, so

\displaystyle h(X_1)=h(X_2)=h(X_3) = (1/3) \log (3) + (2/3) \log(3/2) = \log 3 - (2/3) \log 2= \log (3/2^{2/3})

Setting {w_1=w_2=w_3=1} and {D=3 \log(3/2^{2/3})=3 \log 3 - 2 \log 2}, we can verify the condition (16) with equality for all {(t_1,t_2,t_3) \in \Gamma'}, which from (17) gives {H=\log (3/2^{2/3})} as desired. \Box
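The entropy computation here is simple enough to verify mechanically. The following snippet (our addition) confirms that the three ways of writing the exponent agree, and prints the base {3/2^{2/3}} of the resulting slice rank bound:

```python
import math

# Marginal entropy of X_i: value 1 with probability 1/3, value 0 with 2/3.
h = (1 / 3) * math.log(3) + (2 / 3) * math.log(3 / 2)

# The three expressions for the exponent in the proof agree:
assert abs(h - (math.log(3) - (2 / 3) * math.log(2))) < 1e-12
assert abs(h - math.log(3 / 2 ** (2 / 3))) < 1e-12

# The base of the slice rank bound (3/2^(2/3))^n:
print(round(math.exp(h), 4))  # ~1.8899
```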

We used a slightly different method in each of the last two results. In the first one, we use the most natural bases for all three vector spaces, and distinguish {\Gamma} from its set of maximal elements {\Gamma'}. In the second one we modify one basis element slightly, with {v_{3,1}=z-1} instead of the more obvious choice {z}, which allows us to work with {\Gamma = \{(1,0,0),(0,1,0),(0,0,1)\}} instead of {\Gamma=\{(1,0,0),(0,1,0),(0,0,1),(0,0,0)\}}. Because {\Gamma} is an antichain, we do not need to distinguish {\Gamma} and {\Gamma'}. Both methods in fact work with either problem, and they are both about equally difficult, but we include both as either might turn out to be substantially more convenient in future work.

Proposition 11 Let {k \geq 8} be a natural number and let {G} be a finite abelian group. Let {{\bf F}} be any field. Let {V_1 = \dots = V_k} denote the space of functions from {G} to {{\bf F}}.

Let {F} be any {{\bf F}}-valued function on {G^k} that is nonzero only when the {k} entries of its argument form a {k}-term arithmetic progression in {G}, and is nonzero on every {k}-term constant progression.

Then the slice rank of {F} is {|G|}.

Proof: We apply Proposition 4, using the standard bases of {V_1,\dots,V_k}. Let {\Gamma} be the support of {F}. Suppose that we have {k} orderings on {G} such that the constant progressions are maximal elements of {\Gamma} and thus all constant progressions lie in {\Gamma'}. Then for any partition {\Gamma_1,\dots, \Gamma_k} of {\Gamma'}, {\Gamma_j} can contain at most {|\pi_j(\Gamma_j)|} constant progressions, and as all {|G|} constant progressions must lie in one of the {\Gamma_j}, we must have {\sum_{j=1}^k |\pi_j(\Gamma_j)| \geq |G|}. By Proposition 4, this implies that the slice rank of {F} is at least {|G|}. Since {F} is a {|G| \times \dots \times |G|} tensor, the slice rank is at most {|G|}, hence exactly {|G|}.

So it is sufficient to find {k} orderings on {G} such that the constant progressions are maximal elements of {\Gamma}. We make several simplifying reductions. We may as well assume that {\Gamma} consists of all the {k}-term arithmetic progressions, because if the constant progressions are maximal among the set of all progressions then they are maximal among its subset {\Gamma}. So we are looking for orderings in which the constant progressions are maximal among all {k}-term arithmetic progressions. We may as well assume that {G} is cyclic, because if for each cyclic group we have orderings where constant progressions are maximal, then on an arbitrary finite abelian group (a product of cyclic groups) the lexicographic products of these orderings are orderings for which the constant progressions are maximal. We may assume {k=8}, as if we have an {8}-tuple of orderings where constant progressions are maximal, we may add arbitrary orderings and the constant progressions will remain maximal.

So it is sufficient to find {8} orderings on the cyclic group {\mathbb Z/n} such that the constant progressions are maximal elements of the set of {8}-term progressions in {\mathbb Z/n} in the {8}-fold product ordering. To do that, let the first, second, third, and fifth orderings be the usual order on {\{0,\dots,n-1\}} and let the fourth, sixth, seventh, and eighth orderings be the reverse of the usual order on {\{0,\dots,n-1\}}.

Then let {(c,c,c,c,c,c,c,c)} be a constant progression and for contradiction assume that {(a,a+b,a+2b,a+3b,a+4b,a+5b,a+6b,a+7b)} is a progression greater than {(c,c,c,c,c,c,c,c)} in this ordering. We may assume that {c \in [0, (n-1)/2]}, because otherwise we may reverse the order of the progression, which has the effect of reversing all eight orderings, and then apply the transformation {x \rightarrow n-1-x}, which again reverses the eight orderings, bringing us back to the original problem but with {c \in [0,(n-1)/2]}.

Take a representative of the residue class {b} in the interval {[-n/2,n/2]}. We will abuse notation and call this {b}. By the choice of orderings, the assumption that the progression is greater than {(c,\dots,c)} means that {a}, {a+b}, {a+2b}, and {a+4b} are congruent mod {n} to elements of {[c,n-1]}, while {a+3b}, {a+5b}, {a+6b}, and {a+7b} are congruent mod {n} to elements of {[0,c]}. Take a representative of the residue class {a+6b} in the interval {[0,c]}. Then {a+5b} is in the interval {[mn,mn+c]} for some {m}. The distance between any distinct pair of intervals of this type is greater than {n/2} (since {c \leq (n-1)/2}), but the distance between {a+6b} and {a+5b} is at most {n/2}, so {a+5b} is in the interval {[0,c]}. By the same reasoning, {a+7b} is in the interval {[0,c]}. Since {a+5b} and {a+7b} lie in {[0,c]} and differ by {2b}, we have {|b| \leq c/2< n/4}. But then the distance between {a+5b} and {a+3b} is at most {n/2}, so by the same reasoning {a+3b} is in the interval {[0,c]}. Because {a+4b} is between {a+3b} and {a+5b}, it also lies in the interval {[0,c]}. Because {a+4b} is in the interval {[0,c]}, and by assumption it is congruent mod {n} to a number in the set {\{0,\dots,n-1\}} greater than or equal to {c}, it must be exactly {c}. Then, remembering that {a+3b = c-b} and {a+5b = c+b} lie in {[0,c]}, we have {c-b \leq c} and {c+b \leq c}, so {b=0}, hence {a=c}, thus {(a,\dots,a+7b)=(c,\dots,c)}, which contradicts the assumption that {(a,\dots,a+7b)>(c,\dots,c)}. \Box
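The choice of orderings can also be confirmed by brute force for small cyclic groups. The following script (our addition, not part of the proof) checks that, with the usual order in the 1st, 2nd, 3rd, and 5th coordinates and the reversed order in the 4th, 6th, 7th, and 8th, the only 8-term progression mod {n} dominating a constant progression {(c,\dots,c)} is that constant progression itself.

```python
from itertools import product

# Coordinates (0-indexed) carrying the usual order on {0,...,n-1}, where
# "greater" means >= c, and those carrying the reversed order, where
# "greater" means <= c.
USUAL = (0, 1, 2, 4)
REVERSED = (3, 5, 6, 7)

def dominates(progression, c):
    """Is the 8-tuple >= (c,...,c) in the 8-fold product ordering?"""
    return all(progression[i] >= c for i in USUAL) and \
           all(progression[i] <= c for i in REVERSED)

for n in range(1, 25):
    for a, b, c in product(range(n), repeat=3):
        progression = tuple((a + i * b) % n for i in range(8))
        if dominates(progression, c):
            # Only the constant progression (c,...,c) itself may dominate.
            assert b == 0 and a == c, (n, a, b, c)
print("verified for n < 25")
```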

In fact, given a {k}-term progression mod {n} and a constant, we can form a {k}-term binary sequence with a {1} for each step of the progression that is greater than the constant and a {0} for each step that is less. Because a rotation map, viewed as a dynamical system, has zero topological entropy, the number of {k}-term binary sequences that appear grows subexponentially in {k}. Hence there must be, for large enough {k}, at least one sequence that does not appear. In this proof we exploit a sequence that does not appear for {k=8}.
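To illustrate this coding, one can enumerate (our addition) the strict comparison patterns that actually occur among 8-term progressions modulo small {n}, and confirm that the sequence corresponding to the orderings used above, {(1,1,1,0,1,0,0,0)}, never occurs.

```python
from itertools import product

# For k = 8, record which strict comparison patterns (1 = step greater
# than the constant, 0 = step less) occur among 8-term arithmetic
# progressions mod n, over small n. The pattern (1,1,1,0,1,0,0,0)
# matches the choice of orderings in the proof above and should be absent.
patterns = set()
for n in range(1, 21):
    for a, b, c in product(range(n), repeat=3):
        steps = [(a + i * b) % n for i in range(8)]
        if any(s == c for s in steps):
            continue  # pattern undefined if some step equals the constant
        patterns.add(tuple(1 if s > c else 0 for s in steps))

assert (1, 1, 1, 0, 1, 0, 0, 0) not in patterns
print(len(patterns), "patterns occur; (1,1,1,0,1,0,0,0) is not among them")
```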