
[This blog post was written jointly by Terry Tao and Will Sawin.]

In the previous blog post, one of us (Terry) implicitly introduced a notion of rank for tensors which is a little different from the usual notion of tensor rank, and which (following BCCGNSU) we will call “slice rank”. This notion of rank could then be used to encode the Croot-Lev-Pach-Ellenberg-Gijswijt argument that uses the polynomial method to control capsets.

Since then, several papers have applied the slice rank method to further problems: to control tri-colored sum-free sets in abelian groups (BCCGNSU, KSS) and from there to establish the triangle removal lemma in vector spaces over finite fields (FL), to control sunflowers (NS), and to bound progression-free sets in ${p}$-groups (P).

In this post we investigate the notion of slice rank more systematically. In particular, we show how to give lower bounds for the slice rank. In many cases, we can show that the upper bounds on slice rank given in the aforementioned papers are sharp to within a subexponential factor. This still leaves open the possibility of getting a better bound for the original combinatorial problem using the slice rank of some other tensor, but for very long arithmetic progressions (at least eight terms), we show that the slice rank method cannot improve over the trivial bound using any tensor.

It will be convenient to work in a “basis independent” formalism, namely working in the category of abstract finite-dimensional vector spaces over a fixed field ${{\bf F}}$. (In the applications to the capset problem one takes ${{\bf F}={\bf F}_3}$ to be the finite field of three elements, but most of the discussion here applies to arbitrary fields.) Given ${k}$ such vector spaces ${V_1,\dots,V_k}$, we can form the tensor product ${\bigotimes_{i=1}^k V_i}$, generated by the tensor products ${v_1 \otimes \dots \otimes v_k}$ with ${v_i \in V_i}$ for ${i=1,\dots,k}$, subject to the constraint that the tensor product operation ${(v_1,\dots,v_k) \mapsto v_1 \otimes \dots \otimes v_k}$ is multilinear. For each ${1 \leq j \leq k}$, we have the smaller tensor products ${\bigotimes_{1 \leq i \leq k: i \neq j} V_i}$, as well as the ${j^{th}}$ tensor product

$\displaystyle \otimes_j: V_j \times \bigotimes_{1 \leq i \leq k: i \neq j} V_i \rightarrow \bigotimes_{i=1}^k V_i$

defined in the obvious fashion. Elements of ${\bigotimes_{i=1}^k V_i}$ of the form ${v_j \otimes_j v_{\hat j}}$ for some ${v_j \in V_j}$ and ${v_{\hat j} \in \bigotimes_{1 \leq i \leq k: i \neq j} V_i}$ will be called rank one functions, and the slice rank (or rank for short) ${\hbox{rank}(v)}$ of an element ${v}$ of ${\bigotimes_{i=1}^k V_i}$ is defined to be the least nonnegative integer ${r}$ such that ${v}$ is a linear combination of ${r}$ rank one functions. If ${V_1,\dots,V_k}$ are finite-dimensional, then the rank is always well defined as a non-negative integer (in fact it cannot exceed ${\min( \hbox{dim}(V_1), \dots, \hbox{dim}(V_k))}$). It is also clearly subadditive:

$\displaystyle \hbox{rank}(v+w) \leq \hbox{rank}(v) + \hbox{rank}(w). \ \ \ \ \ (1)$

For ${k=1}$, ${\hbox{rank}(v)}$ is ${0}$ when ${v}$ is zero, and ${1}$ otherwise. For ${k=2}$, ${\hbox{rank}(v)}$ is the usual rank of the ${2}$-tensor ${v \in V_1 \otimes V_2}$ (which can for instance be identified with a linear map from ${V_1}$ to the dual space ${V_2^*}$). The usual notion of tensor rank for higher order tensors uses complete tensor products ${v_1 \otimes \dots \otimes v_k}$, ${v_i \in V_i}$ as the rank one objects, rather than ${v_j \otimes_j v_{\hat j}}$, giving a rank that is greater than or equal to the slice rank studied here.
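As a quick concrete illustration of the ${k=2}$ case (where, as noted above, slice rank is just matrix rank), here is a small self-contained Python check; the `matrix_rank` helper is ours, written out with exact rational arithmetic purely for self-containedness:

```python
from fractions import Fraction

def matrix_rank(rows):
    """Rank of a matrix via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank, col = 0, 0
    n_rows, n_cols = len(m), len(m[0])
    while rank < n_rows and col < n_cols:
        pivot = next((r for r in range(rank, n_rows) if m[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(n_rows):
            if r != rank and m[r][col] != 0:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank, col = rank + 1, col + 1
    return rank

# A "diagonal" 2-tensor sum_{t<3} e_t (x) e_t inside F^5 (x) F^5, written as
# a matrix: a sum of three rank one functions, and indeed of rank exactly 3.
v = [[1 if (i == j and i < 3) else 0 for j in range(5)] for i in range(5)]
print(matrix_rank(v))  # 3
```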

From basic linear algebra we have the following equivalences:

Lemma 1 Let ${V_1,\dots,V_k}$ be finite-dimensional vector spaces over a field ${{\bf F}}$, let ${v}$ be an element of ${V_1 \otimes \dots \otimes V_k}$, and let ${r}$ be a non-negative integer. Then the following are equivalent:

• (i) One has ${\hbox{rank}(v) \leq r}$.
• (ii) One has a representation of the form

$\displaystyle v = \sum_{j=1}^k \sum_{s \in S_j} v_{j,s} \otimes_j v_{\hat j,s}$

where ${S_1,\dots,S_k}$ are finite sets of total cardinality ${|S_1|+\dots+|S_k|}$ at most ${r}$, and for each ${1 \leq j \leq k}$ and ${s \in S_j}$, ${v_{j,s} \in V_j}$ and ${v_{\hat j,s} \in \bigotimes_{1 \leq i \leq k: i \neq j} V_i}$.

• (iii) One has

$\displaystyle v \in \sum_{j=1}^k U_j \otimes_j \bigotimes_{1 \leq i \leq k: i \neq j} V_i$

where for each ${j=1,\dots,k}$, ${U_j}$ is a subspace of ${V_j}$ of total dimension ${\hbox{dim}(U_1)+\dots+\hbox{dim}(U_k)}$ at most ${r}$, and we view ${U_j \otimes_j \bigotimes_{1 \leq i \leq k: i \neq j} V_i}$ as a subspace of ${\bigotimes_{i=1}^k V_i}$ in the obvious fashion.

• (iv) (Dual formulation) There exist subspaces ${W_j}$ of the dual space ${V_j^*}$ for ${j=1,\dots,k}$, of total dimension at least ${\hbox{dim}(V_1)+\dots+\hbox{dim}(V_k) - r}$, such that ${v}$ is orthogonal to ${\bigotimes_{j=1}^k W_j}$, in the sense that one has the vanishing

$\displaystyle \langle \bigotimes_{j=1}^k w_j, v \rangle = 0$

for all ${w_j \in W_j}$, where ${\langle, \rangle: \bigotimes_{j=1}^k V_j^* \times \bigotimes_{j=1}^k V_j \rightarrow {\bf F}}$ is the obvious pairing.

Proof: The equivalence of (i) and (ii) is clear from the definition. To get from (ii) to (iii) one simply takes ${U_j}$ to be the span of the ${v_{j,s}}$, and conversely to get from (iii) to (ii) one takes the ${v_{j,s}}$ to be a basis of the ${U_j}$ and computes ${v_{\hat j,s}}$ by using a basis for ${\sum_{j=1}^k U_j \otimes_j \bigotimes_{1 \leq i \leq k: i \neq j} V_i}$ consisting entirely of functions of the form ${v_{j,s} \otimes_j e}$ for various ${e}$. To pass from (iii) to (iv) one takes ${W_j}$ to be the annihilator ${\{ w_j \in V_j^*: \langle w_j, v_j \rangle = 0 \ \forall v_j \in U_j \}}$ of ${U_j}$, and conversely to pass from (iv) to (iii) one takes ${U_j}$ to be the annihilator ${\{ v_j \in V_j: \langle w_j, v_j \rangle = 0 \ \forall w_j \in W_j \}}$ of ${W_j}$. $\Box$

One corollary of the formulation (iv) is that the set of tensors of slice rank at most ${r}$ is Zariski closed (if the field ${{\bf F}}$ is algebraically closed), and so the slice rank itself is a lower semi-continuous function. This is in contrast to the usual tensor rank, which is not necessarily semicontinuous.

Corollary 2 Let ${V_1,\dots, V_k}$ be finite-dimensional vector spaces over an algebraically closed field ${{\bf F}}$. Let ${r}$ be a nonnegative integer. The set of elements of ${V_1 \otimes \dots \otimes V_k}$ of slice rank at most ${r}$ is closed in the Zariski topology.

Proof: In view of the equivalence of (i) and (iv) in Lemma 1, this set is the union over tuples of integers ${d_1,\dots,d_k}$ with ${d_1 + \dots + d_k \geq \hbox{dim}(V_1)+\dots+\hbox{dim}(V_k) - r}$ of the projection from ${\hbox{Gr}(d_1, V_1) \times \dots \times \hbox{Gr}(d_k, V_k) \times ( V_1 \otimes \dots \otimes V_k)}$ of the set of tuples ${(W_1,\dots,W_k, v)}$ with ${v}$ orthogonal to ${W_1 \otimes \dots \otimes W_k}$, where ${\hbox{Gr}(d,V)}$ is the Grassmannian parameterizing ${d}$-dimensional subspaces of ${V}$.

One can check directly that the set of tuples ${(W_1,\dots,W_k, v)}$ with ${v}$ orthogonal to ${W_1 \otimes \dots \otimes W_k}$ is Zariski closed in ${\hbox{Gr}(d_1, V_1) \times \dots \times \hbox{Gr}(d_k, V_k) \times (V_1 \otimes \dots \otimes V_k)}$, using a set of equations of the form ${\langle \bigotimes_{j=1}^k w_j, v \rangle = 0}$ locally on ${\hbox{Gr}(d_1, V_1) \times \dots \times \hbox{Gr}(d_k, V_k)}$. Hence, because the Grassmannian is a complete variety, the projection of this set to ${V_1 \otimes \dots \otimes V_k}$ is also Zariski closed. So the finite union over tuples ${d_1,\dots,d_k}$ of these projections is also Zariski closed.

$\Box$

We also have good behaviour with respect to linear transformations:

Lemma 3 Let ${V_1,\dots,V_k, W_1,\dots,W_k}$ be finite-dimensional vector spaces over a field ${{\bf F}}$, let ${v}$ be an element of ${V_1 \otimes \dots \otimes V_k}$, and for each ${1 \leq j \leq k}$, let ${\phi_j: V_j \rightarrow W_j}$ be a linear transformation, with ${\bigotimes_{j=1}^k \phi_j: \bigotimes_{j=1}^k V_j \rightarrow \bigotimes_{j=1}^k W_j}$ the tensor product of these maps. Then

$\displaystyle \hbox{rank}( (\bigotimes_{j=1}^k \phi_j)(v) ) \leq \hbox{rank}(v). \ \ \ \ \ (2)$

Furthermore, if the ${\phi_j}$ are all injective, then one has equality in (2).

Thus, for instance, the rank of a tensor ${v \in \bigotimes_{j=1}^k V_j}$ is intrinsic in the sense that it is unaffected by any enlargements of the spaces ${V_1,\dots,V_k}$.

Proof: The bound (2) is clear from the formulation (ii) of rank in Lemma 1. For equality, apply (2) to the injective ${\phi_j}$, as well as to some arbitrarily chosen left inverses ${\phi_j^{-1}: W_j \rightarrow V_j}$ of the ${\phi_j}$. $\Box$

Computing the rank of a tensor is difficult in general; however, the problem becomes a combinatorial one if one has a suitably sparse representation of that tensor in some basis, where we will measure sparsity by the property of being an antichain.

Proposition 4 Let ${V_1,\dots,V_k}$ be finite-dimensional vector spaces over a field ${{\bf F}}$. For each ${1 \leq j \leq k}$, let ${(v_{j,s})_{s \in S_j}}$ be a linearly independent set in ${V_j}$ indexed by some finite set ${S_j}$. Let ${\Gamma}$ be a subset of ${S_1 \times \dots \times S_k}$.

Let ${v \in \bigotimes_{j=1}^k V_j}$ be a tensor of the form

$\displaystyle v = \sum_{(s_1,\dots,s_k) \in \Gamma} c_{s_1,\dots,s_k} v_{1,s_1} \otimes \dots \otimes v_{k,s_k} \ \ \ \ \ (3)$

where for each ${(s_1,\dots,s_k)}$, ${c_{s_1,\dots,s_k}}$ is a coefficient in ${{\bf F}}$. Then one has

$\displaystyle \hbox{rank}(v) \leq \min_{\Gamma = \Gamma_1 \cup \dots \cup \Gamma_k} |\pi_1(\Gamma_1)| + \dots + |\pi_k(\Gamma_k)| \ \ \ \ \ (4)$

where the minimum ranges over all coverings of ${\Gamma}$ by sets ${\Gamma_1,\dots,\Gamma_k}$, and ${\pi_j: S_1 \times \dots \times S_k \rightarrow S_j}$ for ${j=1,\dots,k}$ are the projection maps.

Now suppose that the coefficients ${c_{s_1,\dots,s_k}}$ are all non-zero, that each of the ${S_j}$ is equipped with a total ordering ${\leq_j}$, and that ${\Gamma'}$ is the set of maximal elements of ${\Gamma}$: thus there do not exist distinct ${(s_1,\dots,s_k) \in \Gamma'}$, ${(t_1,\dots,t_k) \in \Gamma}$ such that ${s_j \leq_j t_j}$ for all ${j=1,\dots,k}$. Then one has

$\displaystyle \hbox{rank}(v) \geq \min_{\Gamma' = \Gamma_1 \cup \dots \cup \Gamma_k} |\pi_1(\Gamma_1)| + \dots + |\pi_k(\Gamma_k)|. \ \ \ \ \ (5)$

In particular, if ${\Gamma}$ is an antichain (i.e. every element is maximal), then equality holds in (4).

Proof: By Lemma 3 (or by enlarging the bases ${v_{j,s_j}}$), we may assume without loss of generality that each of the ${V_j}$ is spanned by the ${v_{j,s_j}}$. By relabeling, we can also assume that each ${S_j}$ is of the form

$\displaystyle S_j = \{1,\dots,|S_j|\}$

with the usual ordering, and by Lemma 3 we may take each ${V_j}$ to be ${{\bf F}^{|S_j|}}$, with ${v_{j,s_j} = e_{s_j}}$ the standard basis.

Let ${r}$ denote the rank of ${v}$. To show (4), it suffices to show the inequality

$\displaystyle r \leq |\pi_1(\Gamma_1)| + \dots + |\pi_k(\Gamma_k)| \ \ \ \ \ (6)$

for any covering of ${\Gamma}$ by ${\Gamma_1,\dots,\Gamma_k}$. By removing repeated elements we may assume that the ${\Gamma_i}$ are disjoint. For each ${1 \leq j \leq k}$, the tensor

$\displaystyle \sum_{(s_1,\dots,s_k) \in \Gamma_j} c_{s_1,\dots,s_k} e_{s_1} \otimes \dots \otimes e_{s_k}$

can (after collecting terms) be written as

$\displaystyle \sum_{s_j \in \pi_j(\Gamma_j)} e_{s_j} \otimes_j v_{\hat j,s_j}$

for some ${v_{\hat j, s_j} \in \bigotimes_{1 \leq i \leq k: i \neq j} {\bf F}^{|S_i|}}$. Summing and using (1), we conclude the inequality (6).

Now assume that the ${c_{s_1,\dots,s_k}}$ are all non-zero and that ${\Gamma'}$ is the set of maximal elements of ${\Gamma}$. To conclude the proposition, it suffices to show that the reverse inequality

$\displaystyle r \geq |\pi_1(\Gamma_1)| + \dots + |\pi_k(\Gamma_k)| \ \ \ \ \ (7)$

holds for some ${\Gamma_1,\dots,\Gamma_k}$ covering ${\Gamma'}$. By Lemma 1(iv), there exist subspaces ${W_j}$ of ${({\bf F}^{|S_j|})^*}$ whose dimensions ${d_j := \hbox{dim}(W_j)}$ sum to

$\displaystyle \sum_{j=1}^k d_j = \sum_{j=1}^k |S_j| - r \ \ \ \ \ (8)$

such that ${v}$ is orthogonal to ${\bigotimes_{j=1}^k W_j}$.

Let ${1 \leq j \leq k}$. Using Gaussian elimination, one can find a basis ${w_{j,1},\dots,w_{j,d_j}}$ of ${W_j}$ whose representation in the standard dual basis ${e^*_{1},\dots,e^*_{|S_j|}}$ of ${({\bf F}^{|S_j|})^*}$ is in row-echelon form. That is to say, there exist natural numbers

$\displaystyle 1 \leq s_{j,1} < \dots < s_{j,d_j} \leq |S_j|$

such that for all ${1 \leq t \leq d_j}$, ${w_{j,t}}$ is a linear combination of the dual vectors ${e^*_{s_{j,t}},\dots,e^*_{|S_j|}}$, with the ${e^*_{s_{j,t}}}$ coefficient equal to one.
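This reduction is ordinary Gaussian elimination over ${{\bf F}_p}$; a minimal self-contained sketch (pure Python, ${p}$ prime, with columns indexed from ${0}$ rather than ${1}$, and a helper name of our own choosing):

```python
def row_echelon_pivots(rows, p):
    """Reduce a list of vectors over F_p (p prime) to row-echelon form.
    Returns (echelon_rows, pivot_columns): each echelon row has leading
    entry 1 in its pivot column and is supported on that column and later ones."""
    m = [[x % p for x in row] for row in rows]
    pivots, rank = [], 0
    n_cols = len(m[0])
    for col in range(n_cols):
        pivot = next((r for r in range(rank, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], p - 2, p)   # inverse mod the prime p
        m[rank] = [(a * inv) % p for a in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col] != 0:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        pivots.append(col)
        rank += 1
    return m[:rank], pivots

# Two independent functionals on F_3^4, written in the dual basis.
basis, piv = row_echelon_pivots([[2, 1, 0, 1], [0, 1, 2, 0]], p=3)
print(piv)  # [0, 1]
```

The returned pivot positions play the role of the ${s_{j,1} < \dots < s_{j,d_j}}$ above.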

We now claim that ${\prod_{j=1}^k \{ s_{j,t}: 1 \leq t \leq d_j \}}$ is disjoint from ${\Gamma'}$. Suppose for contradiction that this were not the case, thus there exists ${1 \leq t_j \leq d_j}$ for each ${1 \leq j \leq k}$ such that

$\displaystyle (s_{1,t_1}, \dots, s_{k,t_k}) \in \Gamma'.$

As ${\Gamma'}$ is the set of maximal elements of ${\Gamma}$, this implies that

$\displaystyle (s'_1,\dots,s'_k) \not \in \Gamma$

for any tuple ${(s'_1,\dots,s'_k) \in \prod_{j=1}^k \{ s_{j,t_j}, \dots, |S_j|\}}$ other than ${(s_{1,t_1}, \dots, s_{k,t_k})}$. On the other hand, we know that ${w_{j,t_j}}$ is a linear combination of ${e^*_{s_{j,t_j}},\dots,e^*_{|S_j|}}$, with the ${e^*_{s_{j,t_j}}}$ coefficient one. We conclude that the tensor product ${\bigotimes_{j=1}^k w_{j,t_j}}$ is equal to

$\displaystyle \bigotimes_{j=1}^k e^*_{s_{j,t_j}}$

plus a linear combination of other tensor products ${\bigotimes_{j=1}^k e^*_{s'_j}}$ with ${(s'_1,\dots,s'_k)}$ not in ${\Gamma}$. Taking inner products with (3), we conclude that ${\langle v, \bigotimes_{j=1}^k w_{j,t_j}\rangle = c_{s_{1,t_1},\dots,s_{k,t_k}} \neq 0}$, contradicting the fact that ${v}$ is orthogonal to ${\bigotimes_{j=1}^k W_j}$. Thus we have ${\prod_{j=1}^k \{ s_{j,t}: 1 \leq t \leq d_j \}}$ disjoint from ${\Gamma'}$.

For each ${1 \leq j \leq k}$, let ${\Gamma_j}$ denote the set of tuples ${(s_1,\dots,s_k)}$ in ${\Gamma'}$ with ${s_j}$ not of the form ${s_{j,t}}$ for any ${1 \leq t \leq d_j}$. From the previous discussion we see that the ${\Gamma_j}$ cover ${\Gamma'}$, and we clearly have ${|\pi_j(\Gamma_j)| \leq |S_j| - d_j}$, and hence from (8) we have (7) as claimed. $\Box$

As an instance of this proposition, we recover the computation of diagonal rank from the previous blog post:

Example 5 Let ${V_1,\dots,V_k}$ be finite-dimensional vector spaces over a field ${{\bf F}}$ for some ${k \geq 2}$. Let ${d}$ be a natural number, and for ${1 \leq j \leq k}$, let ${e_{j,1},\dots,e_{j,d}}$ be a linearly independent set in ${V_j}$. Let ${c_1,\dots,c_d}$ be non-zero coefficients in ${{\bf F}}$. Then

$\displaystyle \sum_{t=1}^d c_t e_{1,t} \otimes \dots \otimes e_{k,t}$

has rank ${d}$. Indeed, one applies the proposition with ${S_1,\dots,S_k}$ all equal to ${\{1,\dots,d\}}$, with ${\Gamma}$ the diagonal in ${S_1 \times \dots \times S_k}$; this is an antichain if we give one of the ${S_i}$ the standard ordering, and another of the ${S_i}$ the opposite ordering (and ordering the remaining ${S_i}$ arbitrarily). In this case, the ${\pi_j}$ are all bijective, and so it is clear that the minimum in (4) is simply ${d}$.
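The minimisation in (4) is a finite combinatorial problem, and for small ${\Gamma}$ it can be brute-forced directly. The sketch below (plain Python; the helper name is ours) searches over assignments of each tuple of ${\Gamma}$ to one of the ${k}$ parts, which suffices because, as in the proof above, discarding repeated elements never increases the right-hand side of (4); it confirms the value ${d}$ for the diagonal of Example 5:

```python
from itertools import product

def min_cover_value(gamma, k):
    """Brute-force the minimum of |pi_1(G_1)| + ... + |pi_k(G_k)| over all
    assignments of each tuple of gamma to one of the parts G_1,...,G_k.
    (Coverings by overlapping sets never beat disjoint assignments.)"""
    gamma = list(gamma)
    best = None
    for assignment in product(range(k), repeat=len(gamma)):
        projections = [set() for _ in range(k)]
        for tup, j in zip(gamma, assignment):
            projections[j].add(tup[j])  # only the j-th projection matters
        total = sum(len(p) for p in projections)
        best = total if best is None else min(best, total)
    return best

# The diagonal in S_1 x S_2 x S_3 with |S_j| = d = 4, as in Example 5:
# every assignment contributes d distinct coordinates, so the minimum is d.
diagonal = [(t, t, t) for t in range(4)]
print(min_cover_value(diagonal, 3))  # 4
```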

The combinatorial minimisation problem in the above proposition can be solved asymptotically when working with tensor powers, using the notion of the Shannon entropy ${h(X)}$ of a discrete random variable ${X}$.

Proposition 6 Let ${V_1,\dots,V_k}$ be finite-dimensional vector spaces over a field ${{\bf F}}$. For each ${1 \leq j \leq k}$, let ${(v_{j,s})_{s \in S_j}}$ be a linearly independent set in ${V_j}$ indexed by some finite set ${S_j}$. Let ${\Gamma}$ be a non-empty subset of ${S_1 \times \dots \times S_k}$.

Let ${v \in \bigotimes_{j=1}^k V_j}$ be a tensor of the form (3) for some coefficients ${c_{s_1,\dots,s_k}}$. For each natural number ${n}$, let ${v^{\otimes n}}$ be the tensor power of ${n}$ copies of ${v}$, viewed as an element of ${\bigotimes_{j=1}^k V_j^{\otimes n}}$. Then

$\displaystyle \hbox{rank}(v^{\otimes n}) \leq \exp( (H + o(1)) n ) \ \ \ \ \ (9)$

as ${n \rightarrow \infty}$, where ${H}$ is the quantity

$\displaystyle H = \hbox{sup}_{(X_1,\dots,X_k)} \hbox{min}( h(X_1), \dots, h(X_k) ) \ \ \ \ \ (10)$

and ${(X_1,\dots,X_k)}$ range over the random variables taking values in ${\Gamma}$.

Now suppose that the coefficients ${c_{s_1,\dots,s_k}}$ are all non-zero and that each of the ${S_j}$ is equipped with a total ordering ${\leq_j}$. Let ${\Gamma'}$ be the set of maximal elements of ${\Gamma}$ in the product ordering, and let ${H' = \hbox{sup}_{(X_1,\dots,X_k)} \hbox{min}( h(X_1), \dots, h(X_k) ) }$ where ${(X_1,\dots,X_k)}$ range over random variables taking values in ${\Gamma'}$. Then

$\displaystyle \hbox{rank}(v^{\otimes n}) \geq \exp( (H' + o(1)) n ) \ \ \ \ \ (11)$

as ${n \rightarrow \infty}$. In particular, if the maximizer in (10) is supported on the maximal elements of ${\Gamma}$ (which always holds if ${\Gamma}$ is an antichain in the product ordering), then equality holds in (9).

Proof:

It will suffice to show that

$\displaystyle \min_{\Gamma^n = \Gamma_{n,1} \cup \dots \cup \Gamma_{n,k}} |\pi_{n,1}(\Gamma_{n,1})| + \dots + |\pi_{n,k}(\Gamma_{n,k})| = \exp( (H + o(1)) n ) \ \ \ \ \ (12)$

as ${n \rightarrow \infty}$, where ${\pi_{n,j}: \prod_{i=1}^k S_i^n \rightarrow S_j^n}$ is the projection map; the same argument will then apply to ${\Gamma'}$ and ${H'}$. Applying Proposition 4, using the lexicographical ordering on ${S_j^n}$ and noting that, if ${\Gamma'}$ is the set of maximal elements of ${\Gamma}$, then ${(\Gamma')^n}$ is the set of maximal elements of ${\Gamma^n}$, we obtain both (9) and (11).

We first prove the lower bound. By compactness (and the continuity properties of entropy), we can find a random variable ${(X_1,\dots,X_k)}$ taking values in ${\Gamma}$ such that

$\displaystyle H = \hbox{min}( h(X_1), \dots, h(X_k) ). \ \ \ \ \ (13)$

Let ${\varepsilon = o(1)}$ be a small positive quantity that goes to zero sufficiently slowly with ${n}$. Let ${\Sigma = \Sigma_{X_1,\dots,X_k} \subset \Gamma^n}$ denote the set of all tuples ${(a_1, \dots, a_n)}$ in ${\Gamma^n}$ that are within ${\varepsilon}$ of being distributed according to the law of ${(X_1,\dots,X_k)}$, in the sense that for all ${a \in \Gamma}$, one has

$\displaystyle |\frac{|\{ 1 \leq l \leq n: a_l = a \}|}{n} - {\bf P}( (X_1,\dots,X_k) = a )| \leq \varepsilon.$

By the asymptotic equipartition property, the cardinality of ${\Sigma}$ can be computed to be

$\displaystyle |\Sigma| = \exp( (h( X_1,\dots,X_k)+o(1)) n ) \ \ \ \ \ (14)$

if ${\varepsilon}$ goes to zero slowly enough. Similarly one has

$\displaystyle |\pi_{n,j}(\Sigma)| = \exp( (h( X_j)+o(1)) n ),$

and for each ${s_{n,j} \in \pi_{n,j}(\Sigma)}$, one has

$\displaystyle |\{ \sigma \in \Sigma: \pi_{n,j}(\sigma) = s_{n,j} \}| \leq \exp( (h( X_1,\dots,X_k)-h(X_j)+o(1)) n ). \ \ \ \ \ (15)$

Now let ${\Gamma^n = \Gamma_{n,1} \cup \dots \cup \Gamma_{n,k}}$ be an arbitrary covering of ${\Gamma^n}$. By the pigeonhole principle, there exists ${1 \leq j \leq k}$ such that

$\displaystyle |\Gamma_{n,j} \cap \Sigma| \geq \frac{1}{k} |\Sigma|$

and hence by (14), (15)

$\displaystyle |\pi_{n,j}( \Gamma_{n,j} \cap \Sigma)| \geq \frac{1}{k} \exp( (h( X_j)+o(1)) n )$

which by (13) implies that

$\displaystyle |\pi_{n,1}(\Gamma_{n,1})| + \dots + |\pi_{n,k}(\Gamma_{n,k})| \geq \exp( (H + o(1)) n )$

(noting that the ${\frac{1}{k}}$ factor can be absorbed into the ${o(1)}$ error). This gives the lower bound in (12).

Now we prove the upper bound. We can cover ${\Gamma^n}$ by ${O(\exp(o(n)))}$ sets of the form ${\Sigma_{X_1,\dots,X_k}}$ for various choices of random variables ${(X_1,\dots,X_k)}$ taking values in ${\Gamma}$. For each such random variable ${(X_1,\dots,X_k)}$, we can find ${1 \leq j \leq k}$ such that ${h(X_j) \leq H}$; we then place all of ${\Sigma_{X_1,\dots,X_k}}$ in ${\Gamma_{n,j}}$. It is then clear that the ${\Gamma_{n,j}}$ cover ${\Gamma^n}$ and that

$\displaystyle |\pi_{n,j}(\Gamma_{n,j})| \leq \exp( (H+o(1)) n )$

for all ${j=1,\dots,k}$, giving the required upper bound. $\Box$

It is of interest to compute the quantity ${H}$ in (10). We have the following criterion for when a maximiser occurs:

Proposition 7 Let ${S_1,\dots,S_k}$ be finite sets, and ${\Gamma \subset S_1 \times \dots \times S_k}$ be non-empty. Let ${H}$ be the quantity in (10). Let ${(X_1,\dots,X_k)}$ be a random variable taking values in ${\Gamma}$, and let ${\Gamma^* \subset \Gamma}$ denote the essential range of ${(X_1,\dots,X_k)}$, that is to say the set of tuples ${(t_1,\dots,t_k)\in \Gamma}$ such that ${{\bf P}( X_1=t_1, \dots, X_k = t_k)}$ is non-zero. Then the following are equivalent:

• (i) ${(X_1,\dots,X_k)}$ attains the maximum in (10).
• (ii) There exist weights ${w_1,\dots,w_k \geq 0}$ and a finite quantity ${D \geq 0}$, such that ${w_j=0}$ whenever ${h(X_j) > \min(h(X_1),\dots,h(X_k))}$, and such that

$\displaystyle \sum_{j=1}^k w_j \log \frac{1}{{\bf P}(X_j = t_j)} \leq D \ \ \ \ \ (16)$

for all ${(t_1,\dots,t_k) \in \Gamma}$, with equality if ${(t_1,\dots,t_k) \in \Gamma^*}$. (In particular, ${w_j}$ must vanish if there exists a ${t_j \in \pi_j(\Gamma)}$ with ${{\bf P}(X_j=t_j)=0}$.)

Furthermore, when (i) and (ii) hold, one has

$\displaystyle D = H \sum_{j=1}^k w_j. \ \ \ \ \ (17)$

Proof: We first show that (i) implies (ii). The function ${p \mapsto p \log \frac{1}{p}}$ is concave on ${[0,1]}$. As a consequence, if we define ${C}$ to be the set of tuples ${(h_1,\dots,h_k) \in [0,+\infty)^k}$ such that there exists a random variable ${(Y_1,\dots,Y_k)}$ taking values in ${\Gamma}$ with ${h(Y_j) \geq h_j}$, then ${C}$ is convex. On the other hand, by (10), ${C}$ is disjoint from the orthant ${(H,+\infty)^k}$. Thus, by the hyperplane separation theorem, we conclude that there exists a half-space

$\displaystyle \{ (h_1,\dots,h_k) \in {\bf R}^k: w_1 h_1 + \dots + w_k h_k \geq c \},$

where ${w_1,\dots,w_k}$ are reals that are not all zero, and ${c}$ is another real, which contains ${(h(X_1),\dots,h(X_k))}$ on its boundary and ${(H,+\infty)^k}$ in its interior, such that ${C}$ avoids the interior of the half-space. Since ${(h(X_1),\dots,h(X_k))}$ is also on the boundary of ${(H,+\infty)^k}$, we see that the ${w_j}$ are non-negative, and that ${w_j = 0}$ whenever ${h(X_j) \neq H}$.

By construction, the quantity

$\displaystyle w_1 h(Y_1) + \dots + w_k h(Y_k)$

is maximised when ${(Y_1,\dots,Y_k) = (X_1,\dots,X_k)}$. At this point we could use the method of Lagrange multipliers to obtain the required constraints, but because we have some boundary conditions on the ${(Y_1,\dots,Y_k)}$ (namely, that the probability that they attain a given element of ${\Gamma}$ has to be non-negative) we will work things out by hand. Let ${t = (t_1,\dots,t_k)}$ be an element of ${\Gamma}$, and ${s = (s_1,\dots,s_k)}$ an element of ${\Gamma^*}$. For ${\varepsilon>0}$ small enough, we can form a random variable ${(Y_1,\dots,Y_k)}$ taking values in ${\Gamma}$, whose probability distribution is the same as that for ${(X_1,\dots,X_k)}$ except that the probability of attaining ${(t_1,\dots,t_k)}$ is increased by ${\varepsilon}$, and the probability of attaining ${(s_1,\dots,s_k)}$ is decreased by ${\varepsilon}$. If there is any ${j}$ for which ${{\bf P}(X_j = t_j)=0}$ and ${w_j \neq 0}$, then one can check that

$\displaystyle w_1 h(Y_1) + \dots + w_k h(Y_k) - (w_1 h(X_1) + \dots + w_k h(X_k)) \gg \varepsilon \log \frac{1}{\varepsilon}$

for sufficiently small ${\varepsilon}$, contradicting the maximality of ${(X_1,\dots,X_k)}$; thus we have ${{\bf P}(X_j = t_j) > 0}$ whenever ${w_j \neq 0}$. Taylor expansion then gives

$\displaystyle w_1 h(Y_1) + \dots + w_k h(Y_k) - (w_1 h(X_1) + \dots + w_k h(X_k)) = (A_t - A_s) \varepsilon + O(\varepsilon^2)$

for small ${\varepsilon}$, where

$\displaystyle A_t := \sum_{j=1}^k w_j \log \frac{1}{{\bf P}(X_j = t_j)}$

and similarly for ${A_s}$. We conclude that ${A_t \leq A_s}$ for all ${s \in \Gamma^*}$ and ${t \in \Gamma}$, thus there exists a quantity ${D}$ such that ${A_s = D}$ for all ${s \in \Gamma^*}$, and ${A_t \leq D}$ for all ${t \in \Gamma}$. By construction ${D}$ must be nonnegative. Sampling ${(t_1,\dots,t_k)}$ using the distribution of ${(X_1,\dots,X_k)}$, one has

$\displaystyle \sum_{j=1}^k w_j \log \frac{1}{{\bf P}(X_j = t_j)} = D$

almost surely; taking expectations we conclude that

$\displaystyle \sum_{j=1}^k w_j \sum_{t_j \in S_j} {\bf P}( X_j = t_j) \log \frac{1}{{\bf P}(X_j = t_j)} = D.$

The inner sum is ${h(X_j)}$, which equals ${H}$ when ${w_j}$ is non-zero, giving (17).

Now we show conversely that (ii) implies (i). As noted previously, the function ${p \mapsto p \log \frac{1}{p}}$ is concave on ${[0,1]}$, with derivative ${\log \frac{1}{p} - 1}$. This gives the inequality

$\displaystyle q \log \frac{1}{q} \leq p \log \frac{1}{p} + (q-p) ( \log \frac{1}{p} - 1 ) \ \ \ \ \ (18)$

for any ${0 \leq p,q \leq 1}$ (note the right-hand side may be infinite when ${p=0}$ and ${q>0}$). Let ${(Y_1,\dots,Y_k)}$ be any random variable taking values in ${\Gamma}$, then on applying the above inequality with ${p = {\bf P}(X_j = t_j)}$ and ${q = {\bf P}( Y_j = t_j )}$, multiplying by ${w_j}$, and summing over ${j=1,\dots,k}$ and ${t_j \in S_j}$ gives

$\displaystyle \sum_{j=1}^k w_j h(Y_j) \leq \sum_{j=1}^k w_j h(X_j)$

$\displaystyle + \sum_{j=1}^k \sum_{t_j \in S_j} w_j ({\bf P}(Y_j = t_j) - {\bf P}(X_j = t_j)) ( \log \frac{1}{{\bf P}(X_j=t_j)} - 1 ).$

By construction, one has

$\displaystyle \sum_{j=1}^k w_j h(X_j) = \min(h(X_1),\dots,h(X_k)) \sum_{j=1}^k w_j$

and

$\displaystyle \sum_{j=1}^k w_j h(Y_j) \geq \min(h(Y_1),\dots,h(Y_k)) \sum_{j=1}^k w_j$

so to prove that ${\min(h(Y_1),\dots,h(Y_k)) \leq \min(h(X_1),\dots,h(X_k))}$ (which would give (i)), it suffices to show that

$\displaystyle \sum_{j=1}^k \sum_{t_j \in S_j} w_j ({\bf P}(Y_j = t_j) - {\bf P}(X_j = t_j)) ( \log \frac{1}{{\bf P}(X_j=t_j)} - 1 ) \leq 0,$

or equivalently that the quantity

$\displaystyle \sum_{j=1}^k \sum_{t_j \in S_j} w_j {\bf P}(Y_j = t_j) ( \log \frac{1}{{\bf P}(X_j=t_j)} - 1 )$

is maximised when ${(Y_1,\dots,Y_k) = (X_1,\dots,X_k)}$. Since

$\displaystyle \sum_{j=1}^k \sum_{t_j \in S_j} w_j {\bf P}(Y_j = t_j) = \sum_{j=1}^k w_j$

it suffices to show this claim for the quantity

$\displaystyle \sum_{j=1}^k \sum_{t_j \in S_j} w_j {\bf P}(Y_j = t_j) \log \frac{1}{{\bf P}(X_j=t_j)}.$

One can view this quantity as

$\displaystyle {\bf E}_{(Y_1,\dots,Y_k)} \sum_{j=1}^k w_j \log \frac{1}{{\bf P}_{X_j}(X_j=Y_j)}.$

By (ii), this quantity is bounded by ${D}$, with equality if ${(Y_1,\dots,Y_k)}$ is equal to ${(X_1,\dots,X_k)}$ (and is in particular ranging in ${\Gamma^*}$), giving the claim. $\Box$

The second half of the proof of Proposition 7 only uses the marginal distributions ${{\bf P}(X_j=t_j)}$ and equation (16), not the actual distribution of ${(X_1,\dots,X_k)}$, so it can also be used to prove an upper bound on ${H}$ when the exact maximizing distribution is not known, given suitable probability distributions in each variable. The logarithm of the probability distribution here plays the role that the weight functions do in BCCGNSU.
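Such a certificate can be checked mechanically. The sketch below (plain Python, natural logarithms; the function name and interface are ours, purely for illustration) verifies condition (16) for candidate marginals and weights, here on the antichain ${\{(1,0,0),(0,1,0),(0,0,1)\}}$ that appears in Proposition 10 below, with the uniform distribution:

```python
from math import log, isclose

def check_certificate(gamma, support, marginals, weights, D, tol=1e-9):
    """Check condition (16): sum_j w_j log(1/P(X_j = t_j)) <= D for every
    tuple in gamma, with equality on `support` (the essential range).
    Here marginals[j][t] is the candidate probability P(X_j = t)."""
    for tup in gamma:
        a = sum(w * log(1.0 / marginals[j][t])
                for j, (w, t) in enumerate(zip(weights, tup)) if w != 0)
        if a > D + tol:
            return False
        if tup in support and not isclose(a, D, abs_tol=tol):
            return False
    return True

# Uniform distribution on Gamma = {(1,0,0),(0,1,0),(0,0,1)}: each marginal
# is (2/3, 1/3), with weights w = (1,1,1) and D = 3 log(3/2^(2/3)).
gamma = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
marg = [{0: 2/3, 1: 1/3}] * 3
D = 3 * (log(3) - (2/3) * log(2))
print(check_certificate(gamma, set(gamma), marg, (1, 1, 1), D))  # True
```

By the discussion above, a passing certificate of this shape certifies an upper bound on ${H}$ even when one has only guessed the marginals.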

Remark 8 Suppose one is in the situation of (i) and (ii) above; assume the nondegeneracy condition that ${H}$ is positive (or equivalently that ${D}$ is positive). We can assign a “degree” ${d_j(t_j)}$ to each element ${t_j \in S_j}$ by the formula

$\displaystyle d_j(t_j) := w_j \log \frac{1}{{\bf P}(X_j = t_j)}, \ \ \ \ \ (19)$

Then every tuple ${(t_1,\dots,t_k)}$ in ${\Gamma}$ has total degree at most ${D}$, and those tuples in ${\Gamma^*}$ have degree exactly ${D}$. In particular, every tuple in ${\Gamma^n}$ has degree at most ${nD}$, and hence by (17), each such tuple has a ${j}$-component of degree at most ${nHw_j}$ for some ${j}$ with ${w_j>0}$. On the other hand, we can compute from (19) and the fact that ${h(X_j) = H}$ for ${w_j > 0}$ that ${Hw_j = {\bf E} d_j(X_j)}$. Thus, by asymptotic equipartition, and assuming ${w_j \neq 0}$, the number of “monomials” in ${S_j^n}$ of total degree at most ${nHw_j}$ is at most ${\exp( (h(X_j)+o(1)) n )}$; one can in fact use (19) and (18) to show that this is an equality. This gives a direct way to cover ${\Gamma^n}$ by sets ${\Gamma_{n,1},\dots,\Gamma_{n,k}}$ with ${|\pi_{n,j}(\Gamma_{n,j})| \leq \exp( (H+o(1)) n)}$, which is in the spirit of the Croot-Lev-Pach-Ellenberg-Gijswijt arguments from the previous post.
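This monomial count can be illustrated numerically for the tensor of Proposition 10 below (taking ${w_1=w_2=w_3=1}$, so ${d_j(0) = \log \frac{3}{2}}$ and ${d_j(1) = \log 3}$): a monomial with ${m}$ ones has degree ${n \log \frac{3}{2} + m \log 2}$, so "total degree at most ${nH}$" becomes "at most ${n/3}$ ones". A short plain-Python sketch checking that the count of such monomials has the predicted exponential rate:

```python
from math import comb, log

# Degrees from (19) for the tensor in Proposition 10, with w_j = 1:
# d_j(0) = log(3/2), d_j(1) = log 3, so a monomial with m ones has degree
# n*log(3/2) + m*log(2), and "degree <= n*H" is exactly "m <= n/3".
n = 600
H = log(3) - (2/3) * log(2)

count = sum(comb(n, m) for m in range(n // 3 + 1))
print(log(count) / n, H)  # the rate log(count)/n approaches H for large n
```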

We can now show that the rank computation for the capset problem is sharp:

Proposition 9 Let ${V_1^{\otimes n} = V_2^{\otimes n} = V_3^{\otimes n}}$ denote the space of functions from ${{\bf F}_3^n}$ to ${{\bf F}_3}$. Then the function ${(x,y,z) \mapsto \delta_{0^n}(x+y+z)}$ from ${{\bf F}_3^n \times {\bf F}_3^n \times {\bf F}_3^n}$ to ${{\bf F}_3}$, viewed as an element of ${V_1^{\otimes n} \otimes V_2^{\otimes n} \otimes V_3^{\otimes n}}$, has rank ${\exp( (H^*+o(1)) n )}$ as ${n \rightarrow \infty}$, where ${H^* \approx 1.013455}$ is given by the formula

$\displaystyle H^* = \alpha \log \frac{1}{\alpha} + \beta \log \frac{1}{\beta} + \gamma \log \frac{1}{\gamma} \ \ \ \ \ (20)$

with

$\displaystyle \alpha = \frac{32}{3(15 + \sqrt{33})} \approx 0.51419$

$\displaystyle \beta = \frac{4(\sqrt{33}-1)}{3(15+\sqrt{33})} \approx 0.30495$

$\displaystyle \gamma = \frac{(\sqrt{33}-1)^2}{6(15+\sqrt{33})} \approx 0.18086.$

Proof: In ${{\bf F}_3 \times {\bf F}_3 \times {\bf F}_3}$, we have

$\displaystyle \delta_0(x+y+z) = 1 - (x+y+z)^2$

$\displaystyle = (1-x^2) - y^2 - z^2 + xy + yz + zx.$

Thus, if we let ${V_1=V_2=V_3}$ be the space of functions from ${{\bf F}_3}$ to ${{\bf F}_3}$ (with domain variable denoted ${x,y,z}$ respectively), and define the basis functions

$\displaystyle v_{1,0} := 1; v_{1,1} := x; v_{1,2} := x^2$

$\displaystyle v_{2,0} := 1; v_{2,1} := y; v_{2,2} := y^2$

$\displaystyle v_{3,0} := 1; v_{3,1} := z; v_{3,2} := z^2$

of ${V_1,V_2,V_3}$ indexed by ${S_1=S_2=S_3 := \{ 0,1,2\}}$ (with the usual ordering), respectively, and set ${\Gamma \subset S_1 \times S_2 \times S_3}$ to be the set

$\displaystyle \{ (2,0,0), (0,2,0), (0,0,2), (1,1,0), (0,1,1), (1,0,1),(0,0,0) \}$

then ${\delta_0(x+y+z)}$ is a linear combination of the ${v_{1,t_1} \otimes v_{2,t_2} \otimes v_{3,t_3}}$ with ${(t_1,t_2,t_3) \in \Gamma}$, and all coefficients non-zero. Then we have ${\Gamma'= \{ (2,0,0), (0,2,0), (0,0,2), (1,1,0), (0,1,1), (1,0,1) \}}$. We will show that the quantity ${H}$ of (10) agrees with the quantity ${H^*}$ of (20), and that the optimizing distribution is supported on ${\Gamma'}$, so that by Proposition 6 the rank of ${\delta_{0^n}(x+y+z)}$ is ${\exp( (H+o(1)) n)}$.

To compute the quantity ${H}$ in (10), we use the criterion in Proposition 7. We take ${(X_1,X_2,X_3)}$ to be the random variable taking values in ${\Gamma}$ that attains each of the values ${(2,0,0), (0,2,0), (0,0,2)}$ with a probability of ${\gamma \approx 0.18086}$, and each of ${(1,1,0), (0,1,1), (1,0,1)}$ with a probability of ${\alpha - 2\gamma = \beta/2 \approx 0.15247}$; then each of the ${X_j}$ attains the values ${0,1,2}$ with probabilities ${\alpha,\beta,\gamma}$ respectively, so in particular ${h(X_1)=h(X_2)=h(X_3)}$ is equal to the quantity ${H^*}$ in (20). If we now set ${w_1 = w_2 = w_3 := 1}$ and

$\displaystyle D := 2\log \frac{1}{\alpha} + \log \frac{1}{\gamma} = \log \frac{1}{\alpha} + 2 \log \frac{1}{\beta} = 3H^* \approx 3.04036$

we can verify the condition (16) with equality for all ${(t_1,t_2,t_3) \in \Gamma'}$, and with strict inequality for the remaining element ${(0,0,0)}$ of ${\Gamma}$; from (17) this gives ${H=H'=H^*}$ as desired. $\Box$
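The closed-form constants above can be sanity-checked numerically; the following self-contained sketch (plain Python, natural logarithms) verifies that ${(\alpha,\beta,\gamma)}$ is a probability vector, the identity ${\alpha - 2\gamma = \beta/2}$ used for the distribution, the agreement of the two expressions for ${D}$ (equivalent to ${\alpha\gamma = \beta^2}$), and the identity ${D = 3H^*}$:

```python
from math import log, sqrt, isclose

# Numerical check of the constants in Proposition 9 (natural logarithms).
s = sqrt(33)
alpha = 32 / (3 * (15 + s))
beta = 4 * (s - 1) / (3 * (15 + s))
gamma = (s - 1) ** 2 / (6 * (15 + s))

H_star = sum(p * log(1 / p) for p in (alpha, beta, gamma))

assert isclose(alpha + beta + gamma, 1)              # a probability vector
assert isclose(alpha - 2 * gamma, beta / 2)          # probability of (1,1,0) etc.
assert isclose(2 * log(1 / alpha) + log(1 / gamma),  # the two formulas for D
               log(1 / alpha) + 2 * log(1 / beta))   # agree: alpha*gamma = beta^2
assert isclose(2 * log(1 / alpha) + log(1 / gamma), 3 * H_star)  # D = 3 H^*
print(f"{H_star:.6f}")  # ≈ 1.013455
```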

This statement already follows from the result of Kleinberg-Sawin-Speyer, which gives a “tri-colored sum-free set” in ${\mathbb F_3^n}$ of size ${\exp((H^*+o(1))n)}$, as the slice rank of this tensor is an upper bound for the size of a tri-colored sum-free set. If one were to go over the proofs more carefully to evaluate the subexponential factors, this argument would give a stronger lower bound than KSS, as it does not incur the substantial loss that comes from Behrend’s construction. However, because it actually constructs a set, the KSS result rules out more possible approaches to give an exponential improvement of the upper bound for capsets. The lower bound on slice rank shows that the bound cannot be improved using only the slice rank of this particular tensor, whereas KSS shows that the bound cannot be improved using any method that does not take advantage of the “single-colored” nature of the problem.

We can also show that the slice rank upper bound in a result of Naslund-Sawin is similarly sharp:

Proposition 10 Let ${V_1^{\otimes n} = V_2^{\otimes n} = V_3^{\otimes n}}$ denote the space of functions from ${\{0,1\}^n}$ to ${\mathbb C}$. Then the function ${(x,y,z) \mapsto \prod_{i=1}^n (x_i+y_i+z_i-1)}$ from ${\{0,1\}^n \times \{0,1\}^n \times \{0,1\}^n}$ to ${\mathbb C}$, viewed as an element of ${V_1^{\otimes n} \otimes V_2^{\otimes n} \otimes V_3^{\otimes n}}$, has slice rank ${(3/2^{2/3})^n e^{o(n)}}$.

Proof: Let ${v_{1,0}=1}$ and ${v_{1,1}=x}$ be a basis for the space ${V_1}$ of functions on ${\{0,1\}}$, itself indexed by ${S_1=\{0,1\}}$. Choose similar bases for ${V_2}$ and ${V_3}$, with ${v_{2,0}=1, v_{2,1}=y}$ and ${v_{3,0}=1,v_{3,1}=z-1}$.

Set ${\Gamma = \{(1,0,0),(0,1,0),(0,0,1)\}}$. Then ${x+y+z-1}$ is a linear combination of the ${v_{1,t_1} \otimes v_{2,t_2} \otimes v_{3,t_3}}$ with ${(t_1,t_2,t_3) \in \Gamma}$, and all coefficients non-zero. Order ${S_1,S_2,S_3}$ the usual way so that ${\Gamma}$ is an antichain. We will show that the quantity ${H}$ of (10) is ${\log(3/2^{2/3})}$, so that applying the last statement of Proposition 6, we conclude that the rank of our function is ${\exp( (\log(3/2^{2/3})+o(1)) n)= (3/2^{2/3})^n e^{o(n)}}$.

Let ${(X_1,X_2,X_3)}$ be the random variable taking values in ${\Gamma}$ that attains each of the values ${(1,0,0),(0,1,0),(0,0,1)}$ with a probability of ${1/3}$. Then each of the ${X_i}$ attains the value ${1}$ with probability ${1/3}$ and ${0}$ with probability ${2/3}$, so

$\displaystyle h(X_1)=h(X_2)=h(X_3) = (1/3) \log (3) + (2/3) \log(3/2) = \log 3 - (2/3) \log 2= \log (3/2^{2/3})$

Setting ${w_1=w_2=w_3=1}$ and ${D=3 \log(3/2^{2/3})=3 \log 3 - 2 \log 2}$, we can verify the condition (16) with equality for all ${(t_1,t_2,t_3) \in \Gamma}$ (which here equals ${\Gamma'}$, as ${\Gamma}$ is an antichain), which from (17) gives ${H=\log (3/2^{2/3})}$ as desired. $\Box$
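As a quick numerical sanity check of the entropy computation in this proof, one can verify in Python that the two expressions for ${h(X_1)}$ agree, and that the resulting base of the exponent is strictly less than ${2}$ (so the bound is exponentially better than the trivial one):

```python
import math

# Entropy of the distribution (1/3, 2/3) on the values {1, 0}, as in the
# proof above: each X_i attains 1 with probability 1/3 and 0 otherwise.
h = (1/3) * math.log(3) + (2/3) * math.log(3/2)

# This equals log(3/2^(2/3)), giving the slice rank bound (3/2^(2/3))^n e^(o(n)).
base = 3 / 2 ** (2/3)
assert math.isclose(h, math.log(base))
assert math.isclose(h, math.log(3) - (2/3) * math.log(2))

# The base is about 1.88988, strictly below 2, the dimension of the ambient
# space {0,1} for each coordinate, so the bound is exponentially nontrivial.
assert base < 2
```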

We used a slightly different method in each of the last two results. In the first one, we use the most natural bases for all three vector spaces, and distinguish ${\Gamma}$ from its set of maximal elements ${\Gamma'}$. In the second one we modify one basis element slightly, with ${v_{3,1}=z-1}$ instead of the more obvious choice ${z}$, which allows us to work with ${\Gamma = \{(1,0,0),(0,1,0),(0,0,1)\}}$ instead of ${\Gamma=\{(1,0,0),(0,1,0),(0,0,1),(0,0,0)\}}$. Because ${\Gamma}$ is an antichain, we do not need to distinguish ${\Gamma}$ and ${\Gamma'}$. Both methods in fact work with either problem, and they are both about equally difficult, but we include both as either might turn out to be substantially more convenient in future work.

Proposition 11 Let ${k \geq 8}$ be a natural number and let ${G}$ be a finite abelian group. Let ${{\bf F}}$ be any field. Let ${V_1 = \dots = V_k}$ denote the space of functions from ${G}$ to ${{\bf F}}$.

Let ${F}$ be any ${{\bf F}}$-valued function on ${G^k}$ that is nonzero only when its ${k}$ arguments form a ${k}$-term arithmetic progression in ${G}$, and is nonzero on every ${k}$-term constant progression.

Then the slice rank of ${F}$ is ${|G|}$.

Proof: We apply Proposition 4, using the standard bases of ${V_1,\dots,V_k}$. Let ${\Gamma}$ be the support of ${F}$. Suppose that we have ${k}$ orderings on ${G}$ such that the constant progressions are maximal elements of ${\Gamma}$ and thus all constant progressions lie in ${\Gamma'}$. Then for any partition ${\Gamma_1,\dots, \Gamma_k}$ of ${\Gamma'}$, ${\Gamma_j}$ can contain at most ${|\pi_j(\Gamma_j)|}$ constant progressions, and as all ${|G|}$ constant progressions must lie in one of the ${\Gamma_j}$, we must have ${\sum_{j=1}^k |\pi_j(\Gamma_j)| \geq |G|}$. By Proposition 4, this implies that the slice rank of ${F}$ is at least ${|G|}$. Since ${F}$ is a ${|G| \times \dots \times |G|}$ tensor, the slice rank is at most ${|G|}$, hence exactly ${|G|}$.

So it is sufficient to find ${k}$ orderings on ${G}$ such that the constant progressions are maximal elements of ${\Gamma}$. We make several simplifying reductions: We may as well assume that ${\Gamma}$ consists of all the ${k}$-term arithmetic progressions, because if the constant progressions are maximal among the set of all progressions then they are maximal among its subset ${\Gamma}$. So we are looking for orderings in which the constant progressions are maximal among all ${k}$-term arithmetic progressions. We may as well assume that ${G}$ is cyclic, because if for each cyclic group we have an ordering where constant progressions are maximal, on an arbitrary finite abelian group the lexicographic product of these orderings is an ordering for which the constant progressions are maximal. We may assume ${k=8}$, as if we have an ${8}$-tuple of orderings where constant progressions are maximal, we may add arbitrary orderings and the constant progressions will remain maximal.

So it is sufficient to find ${8}$ orderings on the cyclic group ${\mathbb Z/n}$ such that the constant progressions are maximal elements of the set of ${8}$-term progressions in ${\mathbb Z/n}$ in the ${8}$-fold product ordering. To do that, let the first, second, third, and fifth orderings be the usual order on ${\{0,\dots,n-1\}}$ and let the fourth, sixth, seventh, and eighth orderings be the reverse of the usual order on ${\{0,\dots,n-1\}}$.

Then let ${(c,c,c,c,c,c,c,c)}$ be a constant progression and for contradiction assume that ${(a,a+b,a+2b,a+3b,a+4b,a+5b,a+6b,a+7b)}$ is a progression greater than ${(c,c,c,c,c,c,c,c)}$ in this ordering. We may assume that ${c \in [0, (n-1)/2]}$, because otherwise we may reverse the order of the progression, which has the effect of reversing all eight orderings, and then apply the transformation ${x \rightarrow n-1-x}$, which again reverses the eight orderings, bringing us back to the original problem but with ${c \in [0,(n-1)/2]}$.

Take a representative of the residue class ${b}$ in the interval ${[-n/2,n/2]}$. We will abuse notation and call this ${b}$. Since the fourth, sixth, seventh, and eighth orderings are reversed, the assumption that the progression dominates ${(c,\dots,c)}$ means that ${a+3b, a+5b,}$ ${a+6b}$, and ${a+7b}$ are all contained in the interval ${[0,c]}$ modulo ${n}$. Take a representative of the residue class ${a+5b}$ in the interval ${[0,c]}$. Then ${a+6b}$ is in the interval ${[mn,mn+c]}$ for some ${m}$. The distance between any distinct pair of intervals of this type is greater than ${n/2}$, but the distance between ${a+5b}$ and ${a+6b}$ is at most ${n/2}$, so ${a+6b}$ is in the interval ${[0,c]}$. By the same reasoning, ${a+7b}$ is in the interval ${[0,c]}$. Therefore ${|b| \leq c/2< n/4}$. But then the distance between ${a+3b}$ and ${a+5b}$ is at most ${n/2}$, so by the same reasoning ${a+3b}$ is in the interval ${[0,c]}$. Because ${a+4b}$ is between ${a+3b}$ and ${a+5b}$, it also lies in the interval ${[0,c]}$. Because ${a+4b}$ is in the interval ${[0,c]}$, and by assumption (the fifth ordering being the usual one) it is congruent mod ${n}$ to a number in the set ${\{0,\dots,n-1\}}$ greater than or equal to ${c}$, it must be exactly ${c}$. Then, remembering that ${a+3b}$ and ${a+5b}$ lie in ${[0,c]}$, we have ${c-b \leq c}$ and ${c+b \leq c}$, so ${b=0}$, hence ${a=c}$, thus ${(a,\dots,a+7b)=(c,\dots,c)}$, which contradicts the assumption that ${(a,\dots,a+7b)>(c,\dots,c)}$. $\Box$
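The maximality property of these eight orderings can also be confirmed by exhaustive search for small ${n}$; here is a minimal Python sketch (positions are 0-indexed, so the reversed orderings sit at positions ${3,5,6,7}$):

```python
# Positions (0-indexed) 0, 1, 2, 4 carry the usual order on {0,...,n-1};
# positions 3, 5, 6, 7 carry the reversed order, as in the proof above.
REVERSED = {3, 5, 6, 7}

def dominates(prog, c):
    """True if prog >= (c,...,c) in the 8-fold product ordering, strictly somewhere."""
    geq = all((v <= c) if i in REVERSED else (v >= c)
              for i, v in enumerate(prog))
    return geq and any(v != c for v in prog)

def constants_maximal(n):
    """Check no 8-term AP in Z/n strictly dominates a constant progression."""
    for c in range(n):
        for a in range(n):
            for b in range(n):
                prog = [(a + i * b) % n for i in range(8)]
                if dominates(prog, c):
                    return False
    return True

assert all(constants_maximal(n) for n in range(1, 14))
```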

In fact, given a ${k}$-term progression mod ${n}$ and a constant, we can form a ${k}$-term binary sequence with a ${1}$ for each step of the progression that is greater than the constant and a ${0}$ for each step that is less. Because a rotation map, viewed as a dynamical system, has zero topological entropy, the number of ${k}$-term binary sequences that appear grows subexponentially in ${k}$. Hence there must be, for large enough ${k}$, at least one sequence that does not appear. In this proof we exploit a sequence that does not appear for ${k=8}$.

A capset in the vector space ${{\bf F}_3^n}$ over the finite field ${{\bf F}_3}$ of three elements is a subset ${A}$ of ${{\bf F}_3^n}$ that does not contain any lines ${\{ x,x+r,x+2r\}}$, where ${x,r \in {\bf F}_3^n}$ and ${r \neq 0}$. A basic problem in additive combinatorics (discussed in one of the very first posts on this blog) is to obtain good upper and lower bounds for the maximal size of a capset in ${{\bf F}_3^n}$.

Trivially, one has ${|A| \leq 3^n}$. Using Fourier methods (and the density increment argument of Roth), the bound of ${|A| \leq O( 3^n / n )}$ was obtained by Meshulam, and improved only as late as 2012 to ${O( 3^n /n^{1+c})}$ for some absolute constant ${c>0}$ by Bateman and Katz. But in a very recent breakthrough, Ellenberg (and independently Gijswijt) obtained the exponentially superior bound ${|A| \leq O( 2.756^n )}$, using a version of the polynomial method recently introduced by Croot, Lev, and Pach. (In the converse direction, a construction of Edel gives capsets as large as ${(2.2174)^n}$.) Given the success of the polynomial method in superficially similar problems such as the finite field Kakeya problem (discussed in this previous post), it was natural to wonder whether this method could be applicable to the cap set problem (see for instance this MathOverflow comment of mine on this from 2010), but it took a surprisingly long time before Croot, Lev, and Pach were able to identify the precise variant of the polynomial method that would actually work here.

The proof of the capset bound is very short (Ellenberg’s and Gijswijt’s preprints are both 3 pages long, and Croot-Lev-Pach is 6 pages), but I thought I would present a slight reformulation of the argument which treats the three points on a line in ${{\bf F}_3^n}$ symmetrically (as opposed to treating the third point differently from the first two, as is done in the Ellenberg and Gijswijt papers; Croot-Lev-Pach also treat the middle point of a three-term arithmetic progression differently from the two endpoints, although this is a very natural thing to do in their context of ${({\bf Z}/4{\bf Z})^n}$). The basic starting point is this: if ${A}$ is a capset, then one has the identity

$\displaystyle \delta_{0^n}( x+y+z ) = \sum_{a \in A} \delta_a(x) \delta_a(y) \delta_a(z) \ \ \ \ \ (1)$

for all ${(x,y,z) \in A^3}$, where ${\delta_a(x) := 1_{a=x}}$ is the Kronecker delta function, which we view as taking values in ${{\bf F}_3}$. Indeed, (1) reflects the fact that the equation ${x+y+z=0}$ has solutions precisely when ${x,y,z}$ are either all equal, or form a line, and the latter is ruled out precisely when ${A}$ is a capset.
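As a sanity check, identity (1) can be verified by brute force in a toy case; the following Python sketch uses the small example ${A = \{(0,0),(0,1),(1,0),(1,1)\}}$ in ${{\bf F}_3^2}$ (that this particular ${A}$ is indeed a capset is itself checked by the code):

```python
from itertools import product

n = 2
A = [(0, 0), (0, 1), (1, 0), (1, 1)]  # a capset in F_3^2 (verified below)

def add(u, v):
    return tuple((a + b) % 3 for a, b in zip(u, v))

def line(x, r):  # the line {x, x+r, x+2r} (three distinct points when r != 0)
    return {x, add(x, r), add(add(x, r), r)}

# A contains no line with r nonzero, i.e. A is a capset.
is_capset = all(not line(x, r) <= set(A)
                for x in product(range(3), repeat=n)
                for r in product(range(3), repeat=n) if r != (0,) * n)

# Identity (1): delta_{0^n}(x+y+z) = sum_a delta_a(x) delta_a(y) delta_a(z)
# in F_3, for all (x,y,z) in A^3.
identity_ok = all(
    (1 if add(add(x, y), z) == (0,) * n else 0) ==
    sum(1 for a in A if x == y == z == a) % 3
    for x, y, z in product(A, repeat=3))

assert is_capset and identity_ok
```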

To exploit (1), we will show that the left-hand side of (1) is “low rank” in some sense, while the right-hand side is “high rank”. Recall that a function ${F: A \times A \rightarrow {\bf F}}$ taking values in a field ${{\bf F}}$ is of rank one if it is non-zero and of the form ${(x,y) \mapsto f(x) g(y)}$ for some ${f,g: A \rightarrow {\bf F}}$, and that the rank of a general function ${F: A \times A \rightarrow {\bf F}}$ is the least number of rank one functions needed to express ${F}$ as a linear combination. More generally, if ${k \geq 2}$, we define the rank of a function ${F: A^k \rightarrow {\bf F}}$ to be the least number of “rank one” functions of the form

$\displaystyle (x_1,\dots,x_k) \mapsto f(x_i) g(x_1,\dots,x_{i-1},x_{i+1},\dots,x_k)$

for some ${i=1,\dots,k}$ and some functions ${f: A \rightarrow {\bf F}}$, ${g: A^{k-1} \rightarrow {\bf F}}$, that are needed to generate ${F}$ as a linear combination. For instance, when ${k=3}$, the rank one functions take the form ${(x,y,z) \mapsto f(x) g(y,z)}$, ${(x,y,z) \mapsto f(y) g(x,z)}$, ${(x,y,z) \mapsto f(z) g(x,y)}$, and linear combinations of ${r}$ such rank one functions will give a function of rank at most ${r}$.

It is a standard fact in linear algebra that the rank of a diagonal matrix is equal to the number of non-zero entries. This phenomenon extends to higher dimensions:

Lemma 1 (Rank of diagonal hypermatrices) Let ${k \geq 2}$, let ${A}$ be a finite set, let ${{\bf F}}$ be a field, and for each ${a \in A}$, let ${c_a \in {\bf F}}$ be a coefficient. Then the rank of the function

$\displaystyle (x_1,\dots,x_k) \mapsto \sum_{a \in A} c_a \delta_a(x_1) \dots \delta_a(x_k) \ \ \ \ \ (2)$

is equal to the number of non-zero coefficients ${c_a}$.

Proof: We induct on ${k}$. As mentioned above, the case ${k=2}$ follows from standard linear algebra, so suppose now that ${k>2}$ and the claim has already been proven for ${k-1}$.

It is clear that the function (2) has rank at most equal to the number of non-zero ${c_a}$ (since the summands on the right-hand side are rank one functions), so it suffices to establish the lower bound. By deleting from ${A}$ those elements ${a \in A}$ with ${c_a=0}$ (which cannot increase the rank), we may assume without loss of generality that all the ${c_a}$ are non-zero. Now suppose for contradiction that (2) has rank at most ${|A|-1}$, then we obtain a representation

$\displaystyle \sum_{a \in A} c_a \delta_a(x_1) \dots \delta_a(x_k)$

$\displaystyle = \sum_{i=1}^k \sum_{\alpha \in I_i} f_{i,\alpha}(x_i) g_{i,\alpha}( x_1,\dots,x_{i-1},x_{i+1},\dots,x_k) \ \ \ \ \ (3)$

for some sets ${I_1,\dots,I_k}$ of cardinalities adding up to at most ${|A|-1}$, and some functions ${f_{i,\alpha}: A \rightarrow {\bf F}}$ and ${g_{i,\alpha}: A^{k-1} \rightarrow {\bf F}}$.

Consider the space of functions ${h: A \rightarrow {\bf F}}$ that are orthogonal to all the ${f_{k,\alpha}}$, ${\alpha \in I_k}$ in the sense that

$\displaystyle \sum_{x \in A} f_{k,\alpha}(x) h(x) = 0$

for all ${\alpha \in I_k}$. This space is a vector space whose dimension ${d}$ is at least ${|A| - |I_k|}$. A basis of this space generates a ${d \times |A|}$ coordinate matrix of full rank, which implies that there is at least one non-singular ${d \times d}$ minor. This implies that there exists a function ${h: A \rightarrow {\bf F}}$ in this space which is nowhere vanishing on some subset ${A'}$ of ${A}$ of cardinality at least ${|A|-|I_k|}$.

If we multiply (3) by ${h(x_k)}$ and sum in ${x_k}$, we conclude that

$\displaystyle \sum_{a \in A} c_a h(a) \delta_a(x_1) \dots \delta_a(x_{k-1})$

$\displaystyle = \sum_{i=1}^{k-1} \sum_{\alpha \in I_i} f_{i,\alpha}(x_i)\tilde g_{i,\alpha}( x_1,\dots,x_{i-1},x_{i+1},\dots,x_{k-1})$

where

$\displaystyle \tilde g_{i,\alpha}(x_1,\dots,x_{i-1},x_{i+1},\dots,x_{k-1})$

$\displaystyle := \sum_{x_k \in A} g_{i,\alpha}(x_1,\dots,x_{i-1},x_{i+1},\dots,x_k) h(x_k).$

The right-hand side has rank at most ${|A|-1-|I_k|}$, since the summands are rank one functions. On the other hand, from the induction hypothesis the left-hand side has rank at least ${|A|-|I_k|}$, giving the required contradiction. $\Box$

On the other hand, we have the following (symmetrised version of a) beautifully simple observation of Croot, Lev, and Pach:

Lemma 2 On ${({\bf F}_3^n)^3}$, the rank of the function ${(x,y,z) \mapsto \delta_{0^n}(x+y+z)}$ is at most ${3N}$, where

$\displaystyle N := \sum_{a,b,c \geq 0: a+b+c=n, b+2c \leq 2n/3} \frac{n!}{a!b!c!}.$

Proof: Using the identity ${\delta_0(x) = 1 - x^2}$ for ${x \in {\bf F}_3}$, we have

$\displaystyle \delta_{0^n}(x+y+z) = \prod_{i=1}^n (1 - (x_i+y_i+z_i)^2).$

The right-hand side is clearly a polynomial of degree ${2n}$ in ${x,y,z}$, which is then a linear combination of monomials

$\displaystyle x_1^{i_1} \dots x_n^{i_n} y_1^{j_1} \dots y_n^{j_n} z_1^{k_1} \dots z_n^{k_n}$

with ${i_1,\dots,i_n,j_1,\dots,j_n,k_1,\dots,k_n \in \{0,1,2\}}$ with

$\displaystyle i_1 + \dots + i_n + j_1 + \dots + j_n + k_1 + \dots + k_n \leq 2n.$

In particular, from the pigeonhole principle, at least one of ${i_1 + \dots + i_n, j_1 + \dots + j_n, k_1 + \dots + k_n}$ is at most ${2n/3}$.

Consider the contribution of the monomials for which ${i_1 + \dots + i_n \leq 2n/3}$. We can regroup this contribution as

$\displaystyle \sum_\alpha f_\alpha(x) g_\alpha(y,z)$

where ${\alpha}$ ranges over those ${(i_1,\dots,i_n) \in \{0,1,2\}^n}$ with ${i_1 + \dots + i_n \leq 2n/3}$, ${f_\alpha}$ is the monomial

$\displaystyle f_\alpha(x_1,\dots,x_n) := x_1^{i_1} \dots x_n^{i_n}$

and ${g_\alpha: {\bf F}_3^n \times {\bf F}_3^n \rightarrow {\bf F}_3}$ is some explicitly computable function whose exact form will not be of relevance to our argument. The number of such ${\alpha}$ is equal to ${N}$, so this contribution has rank at most ${N}$. The remaining contributions arising from the cases ${j_1 + \dots + j_n \leq 2n/3}$ and ${k_1 + \dots + k_n \leq 2n/3}$ similarly have rank at most ${N}$ (grouping the monomials so that each monomial is only counted once), so the claim follows.

Upon restricting from ${({\bf F}_3^n)^3}$ to ${A^3}$, the rank of ${(x,y,z) \mapsto \delta_{0^n}(x+y+z)}$ is still at most ${3N}$. The two lemmas then combine to give the Ellenberg-Gijswijt bound

$\displaystyle |A| \leq 3N.$
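Note that ${N}$ has a direct combinatorial meaning: it counts the tuples ${(i_1,\dots,i_n) \in \{0,1,2\}^n}$ with ${i_1 + \dots + i_n \leq 2n/3}$. A short Python sketch confirming this, and checking that the bound ${3N}$ already improves on the trivial bound ${3^n}$ for modest ${n}$:

```python
from itertools import product
from math import comb

def N(n):
    # N = sum of n!/(a!b!c!) over a+b+c = n with b+2c <= 2n/3.
    return sum(comb(n, a) * comb(n - a, b)
               for a in range(n + 1)
               for b in range(n - a + 1)
               if 3 * (b + 2 * (n - a - b)) <= 2 * n)

def count_direct(n):
    # tuples in {0,1,2}^n with coordinate sum at most 2n/3
    return sum(1 for t in product(range(3), repeat=n) if 3 * sum(t) <= 2 * n)

n = 9
assert N(n) == count_direct(n)
assert 3 * N(n) < 3 ** n  # the bound |A| <= 3N is already nontrivial here
```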

All that remains is to compute the asymptotic behaviour of ${N}$. This can be done using the general tool of Cramér’s theorem, but can also be derived from Stirling’s formula (discussed in this previous post). Indeed, if ${a = (\alpha+o(1)) n}$, ${b = (\beta+o(1)) n}$, ${c = (\gamma+o(1)) n}$ for some ${\alpha,\beta,\gamma \geq 0}$ summing to ${1}$, Stirling’s formula gives

$\displaystyle \frac{n!}{a!b!c!} = \exp( n (h(\alpha,\beta,\gamma) + o(1)) )$

where ${h}$ is the entropy function

$\displaystyle h(\alpha,\beta,\gamma) = \alpha \log \frac{1}{\alpha} + \beta \log \frac{1}{\beta} + \gamma \log \frac{1}{\gamma}.$

We then have

$\displaystyle N = \exp( n (X + o(1)) )$

where ${X}$ is the maximum entropy ${h(\alpha,\beta,\gamma)}$ subject to the constraints

$\displaystyle \alpha,\beta,\gamma \geq 0; \alpha+\beta+\gamma=1; \beta+2\gamma \leq 2/3.$

A routine Lagrange multiplier computation shows that the maximum occurs when

$\displaystyle \alpha = \frac{32}{3(15 + \sqrt{33})}$

$\displaystyle \beta = \frac{4(\sqrt{33}-1)}{3(15+\sqrt{33})}$

$\displaystyle \gamma = \frac{(\sqrt{33}-1)^2}{6(15+\sqrt{33})}$

and ${h(\alpha,\beta,\gamma)}$ is approximately ${1.013455}$, giving rise to the claimed bound of ${O( 2.756^n )}$.
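These closed forms and the resulting asymptotic can be checked numerically; the following Python sketch verifies the constraints, evaluates ${h}$, and compares ${\frac{1}{n}\log N}$ (with ${N}$ computed exactly using big integers) against ${1.013455}$ for a moderately large ${n}$:

```python
import math
from math import comb

s = math.sqrt(33)
alpha = 32 / (3 * (15 + s))
beta = 4 * (s - 1) / (3 * (15 + s))
gamma = (s - 1) ** 2 / (6 * (15 + s))

# The optimizer satisfies the constraints, with the inequality tight.
assert math.isclose(alpha + beta + gamma, 1.0)
assert math.isclose(beta + 2 * gamma, 2 / 3)

h = sum(p * math.log(1 / p) for p in (alpha, beta, gamma))
assert abs(h - 1.013455) < 1e-4
assert math.exp(h) < 2.756  # so 3N = O(2.756^n)

# Exact computation of N for a moderate n; (log N)/n should be near h,
# up to the subexponential (polynomial in n) correction.
n = 500
N = sum(comb(n, a) * comb(n - a, b)
        for a in range(n + 1)
        for b in range(n - a + 1)
        if 3 * (b + 2 * (n - a - b)) <= 2 * n)
assert abs(math.log(N) / n - h) < 0.03
```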

Remark 3 As noted in the Ellenberg and Gijswijt papers, the above argument extends readily to fields other than ${{\bf F}_3}$ to control the maximal size of a subset of ${{\bf F}^n}$ that has no non-trivial solutions to the equation ${ax+by+cz=0}$, where ${a,b,c \in {\bf F}}$ are non-zero constants that sum to zero. Of course one replaces the function ${(x,y,z) \mapsto \delta_{0^n}(x+y+z)}$ in Lemma 2 by ${(x,y,z) \mapsto \delta_{0^n}(ax+by+cz)}$ in this case.

Remark 4 This symmetrised formulation suggests one possible way to improve slightly on the numerical quantity ${2.756}$: find a more efficient way to decompose ${\delta_{0^n}(x+y+z)}$ into rank one functions. However, I was not able to do so (though such improvements are reminiscent of the Strassen type algorithms for fast matrix multiplication).

Remark 5 It is tempting to see if this method can get non-trivial upper bounds for sets ${A}$ with no length ${4}$ progressions, in (say) ${{\bf F}_5^n}$. One can run the above arguments, replacing the function

$\displaystyle (x,y,z) \mapsto \delta_{0^n}(x+y+z)$

with

$\displaystyle (x,y,z,w) \mapsto \delta_{0^n}(x-2y+z) \delta_{0^n}(y-2z+w);$

this leads to the bound ${|A| \leq 4N}$ where

$\displaystyle N := \sum_{a,b,c,d,e \geq 0: a+b+c+d+e=n, b+2c+3d+4e \leq 2n} \frac{n!}{a!b!c!d!e!}.$

Unfortunately, ${N}$ is asymptotic to ${\frac{1}{2} 5^n}$ and so this bound is in fact slightly worse than the trivial bound ${|A| \leq 5^n}$! However, there is a slim chance that there is a more efficient way to decompose ${\delta_{0^n}(x-2y+z) \delta_{0^n}(y-2z+w)}$ into rank one functions that would give a non-trivial bound on ${|A|}$. I experimented with a few possible such decompositions but unfortunately without success.
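The asymptotic ${N \sim \frac{1}{2} 5^n}$ reflects the fact that the weight ${b+2c+3d+4e}$ of a uniformly random exponent tuple in ${\{0,\dots,4\}^n}$ is distributed symmetrically about ${2n}$, so slightly more than half of all ${5^n}$ monomials are counted. A Python sketch:

```python
from math import comb

def N5(n):
    # N = sum of n!/(a!b!c!d!e!) over a+b+c+d+e = n with b+2c+3d+4e <= 2n,
    # i.e. the number of exponent tuples in {0,...,4}^n of total weight <= 2n.
    total = 0
    for b in range(n + 1):
        for c in range(n - b + 1):
            for d in range(n - b - c + 1):
                for e in range(n - b - c - d + 1):
                    if b + 2 * c + 3 * d + 4 * e <= 2 * n:
                        a = n - b - c - d - e
                        total += (comb(n, a) * comb(n - a, b)
                                  * comb(n - a - b, c) * comb(n - a - b - c, d))
    return total

n = 30
N = N5(n)
assert 2 * N > 5 ** n    # more than half of all 5^n monomials are counted,
assert N / 5 ** n < 0.6  # but only just: N is asymptotic to 5^n / 2
assert 4 * N > 5 ** n    # so the bound |A| <= 4N is worse than the trivial 5^n
```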

Remark 6 Return now to the capset problem. Since Lemma 1 is valid for any field ${{\bf F}}$, one could perhaps hope to get better bounds by viewing the Kronecker delta function ${\delta}$ as taking values in another field than ${{\bf F}_3}$, such as the complex numbers ${{\bf C}}$. However, as soon as one works in a field of characteristic other than ${3}$, one can adjoin a cube root ${\omega}$ of unity, and one now has the Fourier decomposition

$\displaystyle \delta_{0^n}(x+y+z) = \frac{1}{3^n} \sum_{\xi \in {\bf F}_3^n} \omega^{\xi \cdot x} \omega^{\xi \cdot y} \omega^{\xi \cdot z}.$

Moving to the Fourier basis, we conclude from Lemma 1 that the function ${(x,y,z) \mapsto \delta_{0^n}(x+y+z)}$ on ${({\bf F}_3^n)^3}$ now has rank exactly ${3^n}$, and so one cannot improve upon the trivial bound of ${|A| \leq 3^n}$ by this method using fields of characteristic other than three as the range field. So it seems one has to stick with ${{\bf F}_3}$ (or the algebraic closure thereof).
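The Fourier decomposition is easy to check numerically over ${{\bf C}}$, say for ${n=1}$; a minimal Python sketch:

```python
import cmath

omega = cmath.exp(2j * cmath.pi / 3)  # a primitive cube root of unity

def delta0_fourier(x, y, z):
    # (1/3) * sum over xi in F_3 of omega^(xi*x) omega^(xi*y) omega^(xi*z)
    return sum(omega ** (xi * (x + y + z)) for xi in range(3)) / 3

# The Fourier expression recovers the Kronecker delta of x+y+z in F_3.
for x in range(3):
    for y in range(3):
        for z in range(3):
            expected = 1 if (x + y + z) % 3 == 0 else 0
            assert abs(delta0_fourier(x, y, z) - expected) < 1e-9
```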

Thanks to Jordan Ellenberg and Ben Green for helpful discussions.

Let ${{\bf F}_q}$ be a finite field of order ${q = p^n}$, and let ${C}$ be an absolutely irreducible smooth projective curve defined over ${{\bf F}_q}$ (and hence over the algebraic closure ${k := \overline{{\bf F}_q}}$ of that field). For instance, ${C}$ could be the projective elliptic curve

$\displaystyle C = \{ [x,y,z]: y^2 z = x^3 + ax z^2 + b z^3 \}$

in the projective plane ${{\bf P}^2 = \{ [x,y,z]: (x,y,z) \neq (0,0,0) \}}$, where ${a,b \in {\bf F}_q}$ are coefficients whose discriminant ${-16(4a^3+27b^2)}$ is non-vanishing, which is the projective version of the affine elliptic curve

$\displaystyle \{ (x,y): y^2 = x^3 + ax + b \}.$

To each such curve ${C}$ one can associate a genus ${g}$, which we will define later; for instance, elliptic curves have genus ${1}$. We can also count the cardinality ${|C({\bf F}_q)|}$ of the set ${C({\bf F}_q)}$ of ${{\bf F}_q}$-points of ${C}$. The Hasse-Weil bound relates the two:

Theorem 1 (Hasse-Weil bound) ${||C({\bf F}_q)| - q - 1| \leq 2g\sqrt{q}}$.
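For instance, for genus ${1}$ the bound reads ${||C({\bf F}_q)| - q - 1| \leq 2\sqrt{q}}$, which can be confirmed by direct point counts over a small prime field; a Python sketch (enumerating all smooth Weierstrass curves mod ${17}$):

```python
def point_count(a, b, p):
    """Number of F_p-points of y^2 = x^3 + a x + b, plus one point at infinity."""
    sq = {}
    for y in range(p):
        sq[y * y % p] = sq.get(y * y % p, 0) + 1
    affine = sum(sq.get((x ** 3 + a * x + b) % p, 0) for x in range(p))
    return affine + 1

p = 17
for a in range(p):
    for b in range(p):
        if (4 * a ** 3 + 27 * b ** 2) % p:  # nonzero discriminant: smooth curve
            N = point_count(a, b, p)
            assert (N - p - 1) ** 2 <= 4 * p  # Hasse-Weil bound with g = 1
```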

The usual proofs of this bound proceed by first establishing a trace formula of the form

$\displaystyle |C({\bf F}_{p^n})| = p^n - \sum_{i=1}^{2g} \alpha_i^n + 1 \ \ \ \ \ (1)$

for some complex numbers ${\alpha_1,\dots,\alpha_{2g}}$ independent of ${n}$; this is in fact a special case of the Lefschetz-Grothendieck trace formula, and can be interpreted as an assertion that the zeta function associated to the curve ${C}$ is rational. The task is then to establish a bound ${|\alpha_i| \leq \sqrt{p}}$ for all ${i=1,\dots,2g}$; this (or more precisely, the slightly stronger assertion ${|\alpha_i| = \sqrt{p}}$) is the Riemann hypothesis for such curves. This can be done either by passing to the Jacobian variety of ${C}$ and using a certain duality available on the cohomology of such varieties, known as the Rosati involution; alternatively, one can pass to the product surface ${C \times C}$ and apply the Riemann-Roch theorem for that surface.

In 1969, Stepanov introduced an elementary method (a version of what is now known as the polynomial method) to count (or at least to upper bound) the quantity ${|C({\bf F}_q)|}$. The method was initially restricted to hyperelliptic curves, but was soon extended to general curves. In particular, Bombieri used this method to give a short proof of the following weaker version of the Hasse-Weil bound:

Theorem 2 (Weak Hasse-Weil bound) If ${q}$ is a perfect square, and ${q \geq (g+1)^4}$, then ${|C({\bf F}_q)| \leq q + (2g+1) \sqrt{q} + 1}$.

In fact, the bound on ${|C({\bf F}_q)|}$ can be sharpened a little bit further, as we will soon see.

Theorem 2 is only an upper bound on ${|C({\bf F}_q)|}$, but there is a Galois-theoretic trick to convert (a slight generalisation of) this upper bound to a matching lower bound, and if one then uses the trace formula (1) (and the “tensor power trick” of sending ${n}$ to infinity to control the weights ${\alpha_i}$) one can then recover the full Hasse-Weil bound. We discuss these steps below the fold.

I’ve discussed Bombieri’s proof of Theorem 2 in this previous post (in the special case of hyperelliptic curves), but now wish to present the full proof, with some minor simplifications from Bombieri’s original presentation; it is mostly elementary, with the deepest fact from algebraic geometry needed being Riemann’s inequality (a weak form of the Riemann-Roch theorem).

The first step is to reinterpret ${|C({\bf F}_q)|}$ as the number of points of intersection between two curves ${C_1,C_2}$ in the surface ${C \times C}$. Indeed, if we define the Frobenius endomorphism ${\hbox{Frob}_q}$ on any projective space by

$\displaystyle \hbox{Frob}_q( [x_0,\dots,x_n] ) := [x_0^q, \dots, x_n^q]$

then this map preserves the curve ${C}$, and the fixed points of this map are precisely the ${{\bf F}_q}$ points of ${C}$:

$\displaystyle C({\bf F}_q) = \{ z \in C: \hbox{Frob}_q(z) = z \}.$

Thus one can interpret ${|C({\bf F}_q)|}$ as the number of points of intersection between the diagonal curve

$\displaystyle \{ (z,z): z \in C \}$

and the Frobenius graph

$\displaystyle \{ (z, \hbox{Frob}_q(z)): z \in C \}$

which are copies of ${C}$ inside ${C \times C}$. But we can use the additional hypothesis that ${q}$ is a perfect square to write this more symmetrically, by taking advantage of the fact that the Frobenius map has a square root

$\displaystyle \hbox{Frob}_q = \hbox{Frob}_{\sqrt{q}}^2$

with ${\hbox{Frob}_{\sqrt{q}}}$ also preserving ${C}$. One can then also interpret ${|C({\bf F}_q)|}$ as the number of points of intersection between the curve

$\displaystyle C_1 := \{ (z, \hbox{Frob}_{\sqrt{q}}(z)): z \in C \} \ \ \ \ \ (2)$

and its transpose

$\displaystyle C_2 := \{ (\hbox{Frob}_{\sqrt{q}}(w), w): w \in C \}.$

Let ${k(C \times C)}$ be the field of rational functions on ${C \times C}$ (with coefficients in ${k}$), and define ${k(C_1)}$, ${k(C_2)}$, and ${k(C_1 \cap C_2)}$ analogously (although ${C_1 \cap C_2}$ is likely to be disconnected, so ${k(C_1 \cap C_2)}$ will just be a ring rather than a field). We then (morally) have the commuting square

$\displaystyle \begin{array}{ccccc} && k(C \times C) && \\ & \swarrow & & \searrow & \\ k(C_1) & & & & k(C_2) \\ & \searrow & & \swarrow & \\ && k(C_1 \cap C_2) && \end{array},$

if we ignore the issue that a rational function on, say, ${C \times C}$, might blow up on all of ${C_1}$ and thus not have a well-defined restriction to ${C_1}$. We use ${\pi_1: k(C \times C) \rightarrow k(C_1)}$ and ${\pi_2: k(C \times C) \rightarrow k(C_2)}$ to denote the restriction maps. Furthermore, we have obvious isomorphisms ${\iota_1: k(C_1) \rightarrow k(C)}$, ${\iota_2: k(C_2) \rightarrow k(C)}$ coming from composing with the graphing maps ${z \mapsto (z, \hbox{Frob}_{\sqrt{q}}(z))}$ and ${w \mapsto (\hbox{Frob}_{\sqrt{q}}(w), w)}$.

The idea now is to find a rational function ${f \in k(C \times C)}$ on the surface ${C \times C}$ of controlled degree which vanishes when restricted to ${C_2}$, but is non-vanishing (and not blowing up) when restricted to ${C_1}$. On ${C_1}$, we thus get a non-zero rational function ${f \downharpoonright_{C_1}}$ of controlled degree which vanishes on ${C_1 \cap C_2}$ – which then lets us bound the cardinality of ${C_1 \cap C_2}$ in terms of the degree of ${f \downharpoonright_{C_1}}$. (In Bombieri’s original argument, one required vanishing to high order on the ${C_1}$ side, but in our presentation, we have factored out a ${\hbox{Frob}_{\sqrt{q}}}$ term which removes this high order vanishing condition.)

To find this ${f}$, we will use linear algebra. Namely, we will locate a finite-dimensional subspace ${V}$ of ${k(C \times C)}$ (consisting of certain “controlled degree” rational functions) which projects injectively to ${k(C_1)}$, but whose projection to ${k(C_2)}$ has strictly smaller dimension than ${V}$ itself. The rank-nullity theorem then forces the existence of a non-zero element ${f}$ of ${V}$ whose projection to ${k(C_2)}$ vanishes, but whose projection to ${k(C_1)}$ is non-zero.

Now we build ${V}$. Pick an ${{\bf F}_q}$-point ${P_\infty}$ of ${C}$, which we will think of as being a point at infinity. (For the purposes of proving Theorem 2, we may clearly assume that ${C({\bf F}_q)}$ is non-empty.) Thus ${P_\infty}$ is fixed by ${\hbox{Frob}_q}$. To simplify the exposition, we will also assume that ${P_\infty}$ is fixed by the square root ${\hbox{Frob}_{\sqrt{q}}}$ of ${\hbox{Frob}_q}$; in the opposite case, when ${\hbox{Frob}_{\sqrt{q}}}$ has order two acting on ${P_\infty}$, the argument is essentially the same, but all references to ${P_\infty}$ in the second factor of ${C \times C}$ need to be replaced by ${\hbox{Frob}_{\sqrt{q}} P_\infty}$ (we leave the details to the interested reader).

For any natural number ${n}$, define ${R_n}$ to be the set of rational functions ${f \in k(C)}$ which are allowed to have a pole of order up to ${n}$ at ${P_\infty}$, but have no other poles on ${C}$; note that as we are assuming ${C}$ to be smooth, it is unambiguous what a pole is (and what order it will have). (In the fancier language of divisors and Cech cohomology, we have ${R_n = H^0( C, {\mathcal O}_C(n P_\infty) )}$.) The space ${R_n}$ is clearly a vector space over ${k}$; one can view it intuitively as the space of “polynomials” on ${C}$ of “degree” at most ${n}$. When ${n=0}$, ${R_0}$ consists just of the constant functions. Indeed, if ${f \in R_0}$, then the image ${f(C)}$ of ${f}$ avoids ${\infty}$ and so lies in the affine line ${k = {\mathbf P}^1 \backslash \{\infty\}}$; but as ${C}$ is projective, the image ${f(C)}$ needs to be compact (hence closed) in ${{\mathbf P}^1}$, and must therefore be a point, giving the claim.

For higher ${n \geq 1}$, we have the easy relations

$\displaystyle \hbox{dim}(R_{n-1}) \leq \hbox{dim}(R_n) \leq \hbox{dim}(R_{n-1})+1. \ \ \ \ \ (3)$

The former inequality just comes from the trivial inclusion ${R_{n-1} \subset R_n}$. For the latter, observe that if two functions ${f, g}$ lie in ${R_n}$, so that they each have a pole of order at most ${n}$ at ${P_\infty}$, then some linear combination of these functions must have a pole of order at most ${n-1}$ at ${P_\infty}$; thus ${R_{n-1}}$ has codimension at most one in ${R_n}$, giving the claim.

From (3) and induction we see that each of the ${R_n}$ are finite dimensional, with the trivial upper bound

$\displaystyle \hbox{dim}(R_n) \leq n+1. \ \ \ \ \ (4)$

Riemann’s inequality complements this with the lower bound

$\displaystyle \hbox{dim}(R_n) \geq n+1-g, \ \ \ \ \ (5)$

thus one has ${\hbox{dim}(R_n) = \hbox{dim}(R_{n-1})+1}$ for all but at most ${g}$ exceptions (in fact, exactly ${g}$ exceptions as it turns out). This is a consequence of the Riemann-Roch theorem; it can be proven from abstract nonsense (the snake lemma) if one defines the genus ${g}$ in a non-standard fashion (as the dimension of the first Cech cohomology ${H^1(C)}$ of the structure sheaf ${{\mathcal O}_C}$ of ${C}$), but to obtain this inequality with a standard definition of ${g}$ (e.g. as the dimension of the zeroth Cech cohomology ${H^0(C, \Omega_C^1)}$ of the line bundle of differentials) requires the more non-trivial tool of Serre duality.

At any rate, now that we have these vector spaces ${R_n}$, we will define ${V \subset k(C \times C)}$ to be a tensor product space

$\displaystyle V = R_\ell \otimes R_m$

for some natural numbers ${\ell, m \geq 0}$ which we will optimise later. That is to say, ${V}$ is spanned by functions of the form ${(z,w) \mapsto f(z) g(w)}$ with ${f \in R_\ell}$ and ${g \in R_m}$. This is clearly a linear subspace of ${k(C \times C)}$ of dimension ${\hbox{dim}(R_\ell) \hbox{dim}(R_m)}$, and hence by Riemann’s inequality we have

$\displaystyle \hbox{dim}(V) \geq (\ell+1-g) (m+1-g) \ \ \ \ \ (6)$

if

$\displaystyle \ell,m \geq g-1. \ \ \ \ \ (7)$

Observe that ${\iota_1 \circ \pi_1}$ maps a tensor product ${(z,w) \mapsto f(z) g(w)}$ to a function ${z \mapsto f(z) g(\hbox{Frob}_{\sqrt{q}} z)}$. If ${f \in R_\ell}$ and ${g \in R_m}$, then we see that the function ${z \mapsto f(z) g(\hbox{Frob}_{\sqrt{q}} z)}$ has a pole of order at most ${\ell+m\sqrt{q}}$ at ${P_\infty}$. We conclude that

$\displaystyle \iota_1 \circ \pi_1( V ) \subset R_{\ell + m\sqrt{q}} \ \ \ \ \ (8)$

and in particular by (4)

$\displaystyle \hbox{dim}(\pi_1(V)) \leq \ell + m \sqrt{q} + 1 \ \ \ \ \ (9)$

and similarly

$\displaystyle \hbox{dim}(\pi_2(V)) \leq \ell \sqrt{q} + m + 1. \ \ \ \ \ (10)$

We will choose ${m}$ to be a bit bigger than ${\ell}$, to make the ${\pi_2}$ image of ${V}$ smaller than that of ${\pi_1}$. From (6), (10) we see that if we have the inequality

$\displaystyle (\ell+1-g) (m+1-g) > \ell \sqrt{q}+m + 1 \ \ \ \ \ (11)$

(together with (7)) then ${\pi_2}$ cannot be injective.

On the other hand, we have the following basic fact:

Lemma 3 (Injectivity) If

$\displaystyle \ell < \sqrt{q}, \ \ \ \ \ (12)$

then ${\pi_1: V \rightarrow \pi_1(V)}$ is injective.

Proof: From (3), we can find a linear basis ${f_1,\dots,f_a}$ of ${R_\ell}$ such that each of the ${f_i}$ has a distinct order ${d_i}$ of pole at ${P_\infty}$ (somewhere between ${0}$ and ${\ell}$ inclusive). Similarly, we may find a linear basis ${g_1,\dots,g_b}$ of ${R_m}$ such that each of the ${g_j}$ has a distinct order ${e_j}$ of pole at ${P_\infty}$ (somewhere between ${0}$ and ${m}$ inclusive). The functions ${z \mapsto f_i(z) g_j(\hbox{Frob}_{\sqrt{q}} z)}$ then span ${\iota_1(\pi_1(V))}$, and the order of pole at ${P_\infty}$ is ${d_i + \sqrt{q} e_j}$. But since ${\ell < \sqrt{q}}$, these orders are all distinct, and so these functions must be linearly independent. The claim follows. $\Box$

This gives us the following bound:

Proposition 4 Let ${\ell,m}$ be natural numbers such that (7), (11), (12) hold. Then ${|C({\bf F}_q)| \leq \ell + m \sqrt{q}}$.

Proof: As ${\pi_2}$ is not injective, we can find ${f \in V}$ with ${\pi_2(f)}$ vanishing. By the above lemma, the function ${\iota_1(\pi_1(f))}$ is then non-zero, but it must also vanish on ${\iota_1(C_1 \cap C_2)}$, which has cardinality ${|C({\bf F}_q)|}$. On the other hand, by (8), ${\iota_1(\pi_1(f))}$ has a pole of order at most ${\ell+m\sqrt{q}}$ at ${P_\infty}$ and no other poles. Since the number of zeroes of a non-zero rational function on a projective curve (counted with multiplicity) must equal the number of poles, the claim follows. $\Box$

If ${q \geq (g+1)^4}$, we may make the explicit choice

$\displaystyle m := \sqrt{q}+2g; \quad \ell := \lfloor \frac{g}{g+1} \sqrt{q} \rfloor + g + 1$

and a brief calculation then gives Theorem 2. In some cases one can optimise things a bit further. For instance, in the genus zero case ${g=0}$ (e.g. if ${C}$ is just the projective line ${{\mathbf P}^1}$) one may take ${\ell=1, m = \sqrt{q}}$ and conclude the absolutely sharp bound ${|C({\bf F}_q)| \leq q+1}$ in this case; in the case of the projective line ${{\mathbf P}^1}$, the function ${f}$ is in fact the very concrete function ${f(z,w) := z - w^{\sqrt{q}}}$.
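As a quick numerical sanity check (not part of the argument), one can verify conditions (7), (11), (12) and the resulting bound of Proposition 4 by brute force for small parameters. The Python sketch below implements the explicit choices of ${\ell, m}$ above, assuming ${q}$ is a perfect square; note that for ${g=0}$ these choices reduce to the ${\ell=1}$, ${m=\sqrt{q}}$ of the genus zero case.

```python
import math

def stepanov_bound(g, q):
    """Return the bound l + m*sqrt(q) of Proposition 4 for the explicit
    choices m = sqrt(q) + 2g, l = floor(g/(g+1)*sqrt(q)) + g + 1, after
    checking conditions (7), (11), (12); return None if a condition fails."""
    s = math.isqrt(q)
    assert s * s == q, "q must be a perfect square"
    m = s + 2 * g
    l = (g * s) // (g + 1) + g + 1
    if not (l >= g - 1 and m >= g - 1):                   # condition (7)
        return None
    if not ((l + 1 - g) * (m + 1 - g) > l * s + m + 1):   # condition (11)
        return None
    if not (l < s):                                       # condition (12)
        return None
    return l + m * s

# genus zero: the choices reduce to l = 1, m = sqrt(q), giving the
# sharp bound q + 1
assert stepanov_bound(0, 25) == 26
# genus one, q = 36: l = 5, m = 8, giving the bound 5 + 8*6 = 53
assert stepanov_bound(1, 36) == 53
```

(The exact range of ${q}$ for which all three conditions hold depends on the boundary cases of the floor function, so the function simply reports failure rather than asserting success.)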

Remark 1 When ${q = p^{2n+1}}$ is not a perfect square, one can try to run the above argument using the factorisation ${\hbox{Frob}_q = \hbox{Frob}_{p^n} \hbox{Frob}_{p^{n+1}}}$ instead of ${\hbox{Frob}_q = \hbox{Frob}_{\sqrt{q}} \hbox{Frob}_{\sqrt{q}}}$. This gives a weaker version of the above bound, of the shape ${|C({\bf F}_q)| \leq q + O( \sqrt{p} \sqrt{q} )}$. In the hyperelliptic case at least, one can erase this loss by working with a variant of the argument in which one requires ${f}$ to vanish to high order at ${C_1}$, rather than just to first order; see this survey article of mine for details.

Let ${F}$ be a finite field, with algebraic closure ${\overline{F}}$, and let ${V}$ be an (affine) algebraic variety defined over ${\overline{F}}$, by which I mean a set of the form

$\displaystyle V = \{ x \in \overline{F}^d: P_1(x) = \ldots = P_m(x) = 0 \}$

for some ambient dimension ${d \geq 0}$, and some finite number of polynomials ${P_1,\ldots,P_m: \overline{F}^d \rightarrow \overline{F}}$. In order to reduce the number of subscripts later on, let us say that ${V}$ has complexity at most ${M}$ if ${d}$, ${m}$, and the degrees of the ${P_1,\ldots,P_m}$ are all less than or equal to ${M}$. Note that we do not require at this stage that ${V}$ be irreducible (i.e. not the union of two strictly smaller varieties), or defined over ${F}$, though we will often specialise to these cases later in this post. (Also, everything said here can also be applied with almost no changes to projective varieties, but we will stick with affine varieties for sake of concreteness.)

One can consider two crude measures of how “big” the variety ${V}$ is. The first measure, which is algebraic geometric in nature, is the dimension ${\hbox{dim}(V)}$ of the variety ${V}$, which is an integer between ${0}$ and ${d}$ (or, depending on convention, ${-\infty}$, ${-1}$, or undefined, if ${V}$ is empty) that can be defined in a large number of ways (e.g. it is the largest ${r}$ for which the generic linear projection from ${V}$ to ${\overline{F}^r}$ is dominant, or the smallest ${r}$ for which the intersection with a generic codimension ${r}$ subspace is non-empty). The second measure, which is number-theoretic in nature, is the number ${|V(F)| = |V \cap F^d|}$ of ${F}$-points of ${V}$, i.e. points ${x = (x_1,\ldots,x_d)}$ in ${V}$ all of whose coefficients lie in the finite field, or equivalently the number of solutions to the system of equations ${P_i(x_1,\ldots,x_d) = 0}$ for ${i=1,\ldots,m}$ with variables ${x_1,\ldots,x_d}$ in ${F}$.

These two measures are linked together in a number of ways. For instance, we have the basic Schwarz-Zippel type bound (which, in this qualitative form, goes back at least to Lemma 1 of the work of Lang and Weil in 1954).

Lemma 1 (Schwarz-Zippel type bound) Let ${V}$ be a variety of complexity at most ${M}$. Then we have ${|V(F)| \ll_M |F|^{\hbox{dim}(V)}}$.

Proof: (Sketch) For the purposes of exposition, we will not carefully track the dependencies of implied constants on the complexity ${M}$, instead simply assuming that all of these quantities remain controlled throughout the argument. (If one wished, one could obtain ineffective bounds on these quantities by an ultralimit argument, as discussed in this previous post, or equivalently by moving everything over to a nonstandard analysis framework; one could also obtain such uniformity using the machinery of schemes.)

We argue by induction on the ambient dimension ${d}$ of the variety ${V}$. The ${d=0}$ case is trivial, so suppose ${d \geq 1}$ and that the claim has already been proven for ${d-1}$. By breaking up ${V}$ into irreducible components we may assume that ${V}$ is irreducible (this requires some control on the number and complexity of these components, but this is available, as discussed in this previous post). For each ${x_1,\ldots,x_{d-1} \in \overline{F}}$, the fibre ${\{ x_d \in \overline{F}: (x_1,\ldots,x_{d-1},x_d) \in V \}}$ is either one-dimensional (and thus all of ${\overline{F}}$) or zero-dimensional. In the latter case, one has ${O_M(1)}$ points in the fibre from the fundamental theorem of algebra (indeed one has a bound of ${M}$ in this case, as the defining polynomials of the fibre have degree at most ${M}$), and ${(x_1,\ldots,x_{d-1})}$ lives in the projection of ${V}$ to ${\overline{F}^{d-1}}$, which is a variety of dimension at most ${\hbox{dim}(V)}$ and controlled complexity, so the contribution of this case is acceptable from the induction hypothesis. In the former case, the fibre contributes ${|F|}$ ${F}$-points, but ${(x_1,\ldots,x_{d-1})}$ lies in a variety in ${\overline{F}^{d-1}}$ of dimension at most ${\hbox{dim}(V)-1}$ (since otherwise ${V}$ would contain a subvariety of dimension at least ${\hbox{dim}(V)+1}$, which is absurd) and controlled complexity, and so the contribution of this case is also acceptable from the induction hypothesis. $\Box$

One can improve the bound on the implied constant to be linear in the degree of ${V}$ (see e.g. Claim 7.2 of this paper of Dvir, Kollar, and Lovett, or Lemma A.3 of this paper of Ellenberg, Oberlin, and myself), but we will not be concerned with these improvements here.
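To see Lemma 1 in action on concrete examples, here is a purely illustrative brute-force check in Python that some one-dimensional plane varieties over small prime fields have ${O(|F|)}$ points, with the constant growing with the degree:

```python
def count_points(p, f):
    """Brute-force count of |V(F_p)| for a plane curve V = {f = 0}."""
    return sum(1 for x in range(p) for y in range(p) if f(x, y) % p == 0)

for p in [5, 7, 11, 13]:
    # the hyperbola xy = 1: a one-dimensional variety with exactly p - 1
    # points, consistent with the O_M(|F|^{dim V}) bound of Lemma 1
    assert count_points(p, lambda x, y: x * y - 1) == p - 1
    # a union of three horizontal lines: exactly 3p points, showing that
    # the implied constant must grow with the degree
    assert count_points(p, lambda x, y: y * (y - 1) * (y - 2)) == 3 * p
```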

Without further hypotheses on ${V}$, the above upper bound is sharp (except for improvements in the implied constants). For instance, the variety

$\displaystyle V := \{ (x_1,\ldots,x_d) \in \overline{F}^d: \prod_{j=1}^D (x_d - a_j) = 0\},$

where ${a_1,\ldots,a_D \in F}$ are distinct, is the union of ${D}$ distinct hyperplanes of dimension ${d-1}$, with ${|V(F)| = D |F|^{d-1}}$ and complexity ${\max(D,d)}$; similar examples can easily be concocted for other choices of ${\hbox{dim}(V)}$. In the other direction, there is also no non-trivial lower bound for ${|V(F)|}$ without further hypotheses on ${V}$. For a trivial example, if ${a}$ is an element of ${\overline{F}}$ that does not lie in ${F}$, then the hyperplane

$\displaystyle V := \{ (x_1,\ldots,x_d) \in \overline{F}^d: x_d - a = 0 \}$

clearly has no ${F}$-points whatsoever, despite being a ${d-1}$-dimensional variety in ${\overline{F}^d}$ of complexity ${d}$. For a slightly less non-trivial example, if ${a}$ is an element of ${F}$ that is not a quadratic residue, then the variety

$\displaystyle V := \{ (x_1,\ldots,x_d) \in \overline{F}^d: x_d^2 - a = 0 \},$

which is the union of two hyperplanes, still has no ${F}$-points, even though this time the variety is defined over ${F}$ instead of ${\overline{F}}$ (by which we mean that the defining polynomial(s) have all of their coefficients in ${F}$). There is however the important Lang-Weil bound that allows for a much better estimate as long as ${V}$ is both defined over ${F}$ and irreducible:

Theorem 2 (Lang-Weil bound) Let ${V}$ be a variety of complexity at most ${M}$. Assume that ${V}$ is defined over ${F}$, and that ${V}$ is irreducible as a variety over ${\overline{F}}$ (i.e. ${V}$ is geometrically irreducible or absolutely irreducible). Then

$\displaystyle |V(F)| = (1 + O_M(|F|^{-1/2})) |F|^{\hbox{dim}(V)}.$

Again, more explicit bounds on the implied constant here are known, but will not be the focus of this post. As the previous examples show, the hypotheses of definability over ${F}$ and geometric irreducibility are both necessary.

The Lang-Weil bound is already non-trivial in the model case ${d=2, \hbox{dim}(V)=1}$ of plane curves:

Theorem 3 (Hasse-Weil bound) Let ${P: \overline{F}^2 \rightarrow \overline{F}}$ be a geometrically irreducible polynomial (i.e. irreducible over ${\overline{F}}$) of degree ${D}$ with coefficients in ${F}$. Then

$\displaystyle |\{ (x,y) \in F^2: P(x,y) = 0 \}| = |F| + O_D( |F|^{1/2} ).$

Thus, for instance, if ${a,b \in F}$, then the elliptic curve ${\{ (x,y) \in F^2: y^2 = x^3 + ax + b \}}$ has ${|F| + O(|F|^{1/2})}$ ${F}$-points, a result first established by Hasse. The Hasse-Weil bound is already quite non-trivial, being the analogue of the Riemann hypothesis for plane curves. For hyper-elliptic curves, an elementary proof (due to Stepanov) is discussed in this previous post. For general plane curves, the first proof was by Weil (leading to his famous Weil conjectures); there is also a nice version of Stepanov’s argument due to Bombieri covering this case which is a little less elementary (relying crucially on the Riemann-Roch theorem for the upper bound, and a lifting trick to then get the lower bound), which I briefly summarise later in this post. The full Lang-Weil bound is deduced from the Hasse-Weil bound by an induction argument using generic hyperplane slicing, as I will also summarise later in this post.
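Hasse's bound for elliptic curves is easy to test numerically; the following Python sketch (illustrative only) brute-forces the affine point count of ${y^2 = x^3 + ax + b}$ over small prime fields and checks ${|n - p| \leq 2\sqrt{p}}$ for every nonsingular curve:

```python
import math

def affine_points(p, a, b):
    """Affine F_p-points on y^2 = x^3 + a*x + b, by brute force."""
    return sum(1 for x in range(p) for y in range(p)
               if (y * y - x ** 3 - a * x - b) % p == 0)

for p in [7, 11, 13]:
    for a in range(p):
        for b in range(p):
            if (4 * a ** 3 + 27 * b ** 2) % p == 0:
                continue  # singular curve: Hasse's bound does not apply
            n = affine_points(p, a, b)
            # Hasse: the affine count differs from p by at most 2*sqrt(p)
            # (the projective count adds the single point at infinity)
            assert abs(n - p) <= 2 * math.sqrt(p)
```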

The hypotheses of definability over ${F}$ and geometric irreducibility in the Lang-Weil bound can be removed after inserting a geometric factor:

Corollary 4 (Lang-Weil bound, alternate form) Let ${V}$ be a variety of complexity at most ${M}$. Then one has

$\displaystyle |V(F)| = (c(V) + O_M(|F|^{-1/2})) |F|^{\hbox{dim}(V)}$

where ${c(V)}$ is the number of top-dimensional components of ${V}$ (i.e. geometrically irreducible components of ${V}$ of dimension ${\hbox{dim}(V)}$) that are definable over ${F}$, or equivalently are invariant with respect to the Frobenius endomorphism ${x \mapsto x^{|F|}}$ that defines ${F}$.

Proof: By breaking up a general variety ${V}$ into components (and using Lemma 1 to dispose of any lower-dimensional components), it suffices to establish this claim when ${V}$ is itself geometrically irreducible. If ${V}$ is definable over ${F}$, the claim follows from Theorem 2. If ${V}$ is not definable over ${F}$, then it is not fixed by the Frobenius endomorphism ${Frob}$ (since otherwise one could produce a set of defining polynomials that were fixed by Frobenius and thus defined over ${F}$ by using some canonical basis (such as a reduced Grobner basis) for the associated ideal), and so ${V \cap Frob(V)}$ has strictly smaller dimension than ${V}$. But ${V \cap Frob(V)}$ captures all the ${F}$-points of ${V}$, so in this case the claim follows from Lemma 1. $\Box$

Note that if ${V}$ is reducible but is itself defined over ${F}$, then the Frobenius endomorphism preserves ${V}$ itself, but may permute the components of ${V}$ around. In this case, ${c(V)}$ is the number of fixed points of this permutation action of Frobenius on the components. In particular, ${c(V)}$ is always a natural number between ${0}$ and ${O_M(1)}$; thus we see that regardless of the geometry of ${V}$, the normalised count ${|V(F)|/|F|^{\hbox{dim}(V)}}$ is asymptotically restricted to a bounded range of natural numbers (in the regime where the complexity stays bounded and ${|F|}$ goes to infinity).

Example 1 Consider the variety

$\displaystyle V := \{ (x,y) \in \overline{F}^2: x^2 - ay^2 = 0 \}$

for some non-zero parameter ${a \in F}$. Geometrically (by which we basically mean “when viewed over the algebraically closed field ${\overline{F}}$“), this is the union of two lines, with slopes corresponding to the two square roots of ${a}$. If ${a}$ is a quadratic residue, then both of these lines are defined over ${F}$, and are fixed by Frobenius, and ${c(V) = 2}$ in this case. If ${a}$ is not a quadratic residue, then the lines are not defined over ${F}$, and the Frobenius automorphism permutes the two lines while preserving ${V}$ as a whole, giving ${c(V)=0}$ in this case.
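Example 1 can be checked directly: a short Python brute force (illustrative) confirms the point counts ${2p-1}$ when ${a}$ is a quadratic residue (two ${F}$-lines through the origin, ${c(V)=2}$) and ${1}$ when it is not (just the origin, ${c(V)=0}$).

```python
def count_V(p, a):
    """|V(F_p)| for V = {x^2 - a y^2 = 0} in the plane."""
    return sum(1 for x in range(p) for y in range(p)
               if (x * x - a * y * y) % p == 0)

p = 11
residues = {x * x % p for x in range(1, p)}   # nonzero quadratic residues
for a in range(1, p):
    if a in residues:
        # two lines x = +/- sqrt(a) * y meeting at the origin: 2p - 1 points
        assert count_V(p, a) == 2 * p - 1
    else:
        # the two lines are swapped by Frobenius: only the origin survives
        assert count_V(p, a) == 1
```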

Corollary 4 effectively computes (at least to leading order) the number-theoretic size ${|V(F)|}$ of a variety in terms of geometric information about ${V}$, namely its dimension ${\hbox{dim}(V)}$ and the number ${c(V)}$ of top-dimensional components fixed by Frobenius. It turns out that with a little bit more effort, one can extend this connection to cover not just a single variety ${V}$, but a family of varieties indexed by points in some base space ${W}$. More precisely, suppose we now have two affine varieties ${V,W}$ of bounded complexity, together with a regular map ${\phi: V \rightarrow W}$ of bounded complexity (the definition of complexity of a regular map is a bit technical, see e.g. this paper, but one can think for instance of a polynomial or rational map of bounded degree as a good example). It will be convenient to assume that the base space ${W}$ is irreducible. If the map ${\phi}$ is a dominant map (i.e. the image ${\phi(V)}$ is Zariski dense in ${W}$), then standard algebraic geometry results tell us that the fibres ${\phi^{-1}(\{w\})}$ are an unramified family of ${\hbox{dim}(V)-\hbox{dim}(W)}$-dimensional varieties outside of an exceptional subset ${W'}$ of ${W}$ of dimension strictly smaller than ${\hbox{dim}(W)}$ (and with ${\phi^{-1}(W')}$ having dimension strictly smaller than ${\hbox{dim}(V)}$); see e.g. Section I.6.3 of Shafarevich.

Now suppose that ${V}$, ${W}$, and ${\phi}$ are defined over ${F}$. Then, by Lang-Weil, ${W(F)}$ has ${(1 + O(|F|^{-1/2})) |F|^{\hbox{dim}(W)}}$ ${F}$-points, and by Schwarz-Zippel, for all but ${O( |F|^{\hbox{dim}(W)-1})}$ of these ${F}$-points ${w}$ (the ones that lie in the subvariety ${W'}$), the fibre ${\phi^{-1}(\{w\})}$ is an algebraic variety defined over ${F}$ of dimension ${\hbox{dim}(V)-\hbox{dim}(W)}$. By using ultraproduct arguments (see e.g. Lemma 3.7 of this paper of mine with Emmanuel Breuillard and Ben Green), this variety can be shown to have bounded complexity, and thus by Corollary 4, has ${(c(\phi^{-1}(\{w\})) + O(|F|^{-1/2})) |F|^{\hbox{dim}(V)-\hbox{dim}(W)}}$ ${F}$-points. One can then ask how the quantity ${c(\phi^{-1}(\{w\}))}$ is distributed. A simple but illustrative example occurs when ${V=W=F}$ and ${\phi: F \rightarrow F}$ is the polynomial ${\phi(x) := x^2}$. Then ${c(\phi^{-1}(\{w\}))}$ equals ${2}$ when ${w}$ is a non-zero quadratic residue and ${0}$ when ${w}$ is a non-zero quadratic non-residue (and ${1}$ when ${w}$ is zero, but this is a negligible fraction of all ${w}$). In particular, in the asymptotic limit ${|F| \rightarrow \infty}$, ${c(\phi^{-1}(\{w\}))}$ is equal to ${2}$ half of the time and ${0}$ half of the time.
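For the model case ${\phi(x) = x^2}$, the fibre statistics can be verified directly in Python (a check, not a proof):

```python
from collections import Counter

p = 13
# for each w in F_p, count the fibre {x : x^2 = w}, and tally fibre sizes
fibre_sizes = Counter(sum(1 for x in range(p) if (x * x - w) % p == 0)
                      for w in range(p))
# (p-1)/2 values of w have two square roots, (p-1)/2 have none, and w = 0
# is the single exceptional fibre with one point
assert fibre_sizes == Counter({2: (p - 1) // 2, 0: (p - 1) // 2, 1: 1})
```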

Now we describe the asymptotic distribution of the ${c(\phi^{-1}(\{w\}))}$. We need some additional notation. Let ${w_0}$ be an ${F}$-point in ${W \backslash W'}$, and let ${\pi_0( \phi^{-1}(\{w_0\}) )}$ be the connected components of the fibre ${\phi^{-1}(\{w_0\})}$. As ${\phi^{-1}(\{w_0\})}$ is defined over ${F}$, this set of components is permuted by the Frobenius endomorphism ${Frob}$. But there is also an action by monodromy of the fundamental group ${\pi_1(W \backslash W')}$ (this requires a certain amount of étale machinery to properly set up, as we are working over a positive characteristic field rather than over the complex numbers, but I am going to ignore this rather important detail here, as I still don’t fully understand it). This fundamental group may be infinite, but (by the étale construction) is always profinite, and in particular has a Haar probability measure, in which every finite index subgroup (and their cosets) are measurable. Thus we may meaningfully talk about elements drawn uniformly at random from this group, so long as we work only with the profinite ${\sigma}$-algebra on ${\pi_1(W \backslash W')}$ that is generated by the cosets of the finite index subgroups of this group (which will be the only relevant sets we need to measure when considering the action of this group on finite sets, such as the components of a generic fibre).

Theorem 5 (Lang-Weil with parameters) Let ${V, W}$ be varieties of complexity at most ${M}$ with ${W}$ irreducible, and let ${\phi: V \rightarrow W}$ be a dominant map of complexity at most ${M}$. Let ${w_0}$ be an ${F}$-point of ${W \backslash W'}$. Then, for any natural number ${a}$, one has ${c(\phi^{-1}(\{w\})) = a}$ for ${(\mathop{\bf P}( X = a ) + O_M(|F|^{-1/2})) |F|^{\hbox{dim}(W)}}$ values of ${w \in W(F)}$, where ${X}$ is the random variable that counts the number of components of a generic fibre ${\phi^{-1}(\{w_0\})}$ that are invariant under ${g \circ Frob}$, where ${g}$ is an element chosen uniformly at random from the étale fundamental group ${\pi_1(W \backslash W')}$. In particular, in the asymptotic limit ${|F| \rightarrow \infty}$, and with ${w}$ chosen uniformly at random from ${W(F)}$, ${c(\phi^{-1}(\{w\}))}$ (or, equivalently, ${|\phi^{-1}(\{w\})(F)| / |F|^{\hbox{dim}(V)-\hbox{dim}(W)}}$) and ${X}$ have the same asymptotic distribution.

This theorem generalises Corollary 4 (which is the case when ${W}$ is just a point, so that ${\phi^{-1}(\{w\})}$ is just ${V}$ and ${g}$ is trivial). Informally, the effect of a non-trivial parameter space ${W}$ on the Lang-Weil bound is to push around the Frobenius map by monodromy for the purposes of counting invariant components, and a randomly chosen set of parameters corresponds to a randomly chosen loop on which to perform monodromy.

Example 2 Let ${V=W=F}$ and ${\phi(x) = x^m}$ for some fixed ${m \geq 1}$; to avoid some technical issues let us suppose that ${m}$ is coprime to ${|F|}$. Then ${W'}$ can be taken to be ${\{0\}}$, and for a base point ${w_0 \in W \backslash W'}$ we can take ${w_0=1}$. The fibre ${\phi^{-1}(\{1\})}$ – the ${m^{th}}$ roots of unity – can be identified with the cyclic group ${{\bf Z}/m{\bf Z}}$ by using a primitive root of unity. The étale fundamental group ${\pi_1(W \backslash W') = \pi_1(\overline{F} \backslash \{0\})}$ is (I think) isomorphic to the profinite completion ${\hat {\bf Z}}$ of the integers ${{\bf Z}}$ (excluding the part of that completion coming from the characteristic of ${F}$). Not coincidentally, the integers ${{\bf Z}}$ are the fundamental group of the complex analogue ${{\bf C} \backslash \{0\}}$ of ${W \backslash W'}$. (Brian Conrad points out to me though that for more complicated varieties, such as covers of ${\overline{F} \backslash \{0\}}$ by a power of the characteristic, the étale fundamental group is more complicated than just a profinite completion of the ordinary fundamental group, due to the presence of Artin-Schreier covers that are only ramified at infinity.) The action of this fundamental group on the fibre ${{\bf Z}/m{\bf Z}}$ is given by translation. Meanwhile, the Frobenius map ${Frob}$ on ${{\bf Z}/m{\bf Z}}$ is given by multiplication by ${|F|}$. A random element ${g \circ Frob}$ then becomes a random affine map ${x \mapsto |F|x+b}$ on ${{\bf Z}/m{\bf Z}}$, where ${b}$ is chosen uniformly at random from ${{\bf Z}/m{\bf Z}}$. The number of fixed points of this map is equal to the greatest common divisor ${(|F|-1,m)}$ of ${|F|-1}$ and ${m}$ when ${b}$ is divisible by ${(|F|-1,m)}$, and equal to ${0}$ otherwise.
This matches up with the elementary number fact that a randomly chosen non-zero element of ${F}$ will be an ${m^{th}}$ power with probability ${1/(|F|-1,m)}$, and when this occurs, the number of ${m^{th}}$ roots in ${F}$ will be ${(|F|-1,m)}$.
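The last two sentences of Example 2 can be checked numerically. The Python sketch below (illustrative, with ${p}$ playing the role of ${|F|}$ and ${m}$ coprime to ${p}$) counts fixed points of the affine maps ${x \mapsto px + b}$ on ${{\bf Z}/m{\bf Z}}$ and compares with direct counting of ${m^{th}}$ roots in ${{\bf F}_p}$:

```python
from math import gcd

def fixed_points(q, b, m):
    """Number of fixed points of the affine map x -> q*x + b on Z/mZ."""
    return sum(1 for x in range(m) if (q * x + b - x) % m == 0)

p, m = 13, 6                 # p plays the role of |F|; gcd(m, p) = 1
g = gcd(p - 1, m)            # here gcd(12, 6) = 6
for b in range(m):
    # gcd(p-1, m) fixed points when gcd(p-1, m) divides b, else none
    assert fixed_points(p, b, m) == (g if b % g == 0 else 0)

# the matching elementary fact: an m-th power in F_p^* has exactly g
# m-th roots, and exactly (p - 1)/g nonzero elements are m-th powers
roots = [sum(1 for x in range(1, p) if pow(x, m, p) == w) for w in range(1, p)]
assert set(roots) == {0, g} and roots.count(g) == (p - 1) // g
```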

Example 3 (Thanks to Jordan Ellenberg for this example.) Consider a random elliptic curve ${E = \{ y^2 = x^3 + ax + b \}}$, where ${a,b}$ are chosen uniformly at random, and let ${m \geq 1}$. Let ${E[m]}$ be the ${m}$-torsion points of ${E}$ (i.e. those elements ${g \in E}$ with ${mg = 0}$ using the elliptic curve addition law); as a group, this is isomorphic to ${{\bf Z}/m{\bf Z} \times {\bf Z}/m{\bf Z}}$ (assuming that ${F}$ has sufficiently large characteristic, for simplicity), and consider the number of ${F}$-points of ${E[m]}$, which is a random variable taking values in the natural numbers between ${0}$ and ${m^2}$. In this case, the base variety ${W}$ is the modular curve ${X(1)}$, and the covering variety ${V}$ is the modular curve ${X_1(m)}$. The generic fibre here can be identified with ${{\bf Z}/m{\bf Z} \times {\bf Z}/m{\bf Z}}$, the monodromy action projects down to the action of ${SL_2({\bf Z}/m{\bf Z})}$, and the action of Frobenius on this fibre can be shown to be given by a ${2 \times 2}$ matrix with determinant ${|F|}$ (with the exact choice of matrix depending on the choice of fibre and of the identification), so the distribution of the number of ${F}$-points of ${E[m]}$ is asymptotic to the distribution of the number of fixed points ${X}$ of a random linear map of determinant ${|F|}$ on ${{\bf Z}/m{\bf Z} \times {\bf Z}/m{\bf Z}}$.

Theorem 5 seems to be well known “folklore” among arithmetic geometers, though I do not know of an explicit reference for it. I enjoyed deriving it for myself (though my derivation is somewhat incomplete due to my lack of understanding of étale cohomology) from the ordinary Lang-Weil theorem and the moment method. I’m recording this derivation later in this post, mostly for my own benefit (as I am still in the process of learning this material), though perhaps some other readers may also be interested in it.

Caveat: not all details are fully fleshed out in this writeup, particularly those involving the finer points of algebraic geometry and étale cohomology, as my understanding of these topics is not as complete as I would like it to be.

Many thanks to Brian Conrad and Jordan Ellenberg for helpful discussions on these topics.

The ham sandwich theorem asserts that, given ${d}$ bounded open sets ${U_1,\ldots,U_d}$ in ${{\bf R}^d}$, there exists a hyperplane ${\{ x \in {\bf R}^d: x \cdot v = c \}}$ that bisects each of these sets ${U_i}$, in the sense that each of the two half-spaces ${\{ x \in {\bf R}^d: x \cdot v < c \}, \{ x \in {\bf R}^d: x \cdot v > c \}}$ on either side of the hyperplane captures exactly half of the volume of ${U_i}$. The shortest proof of this result proceeds by invoking the Borsuk-Ulam theorem.

A useful generalisation of the ham sandwich theorem is the polynomial ham sandwich theorem, which asserts that given ${m}$ bounded open sets ${U_1,\ldots,U_m}$ in ${{\bf R}^d}$, there exists a hypersurface ${\{ x \in {\bf R}^d: Q(x)=0\}}$ of degree ${O_d( m^{1/d} )}$ (thus ${Q: {\bf R}^d \rightarrow {\bf R}}$ is a polynomial of degree ${O_d(m^{1/d})}$) such that the two semi-algebraic sets ${\{ Q > 0 \}}$ and ${\{ Q < 0\}}$ capture half the volume of each of the ${U_i}$. (More precisely, the degree will be at most ${D}$, where ${D}$ is the first positive integer for which ${\binom{D+d}{d}}$ exceeds ${m}$.) This theorem can be deduced from the Borsuk-Ulam theorem in the same manner that the ordinary ham sandwich theorem is (and can also be deduced directly from the ordinary ham sandwich theorem via the Veronese embedding).

The polynomial ham sandwich theorem is a theorem about continuous bodies (bounded open sets), but a simple limiting argument leads one to the following discrete analogue: given ${m}$ finite sets ${S_1,\ldots,S_m}$ in ${{\bf R}^d}$, there exists a hypersurface ${\{ x \in {\bf R}^d: Q(x)=0\}}$ of degree ${O_d( m^{1/d} )}$, such that each of the two semi-algebraic sets ${\{ Q > 0 \}}$ and ${\{ Q < 0\}}$ contains at most half of the points of ${S_i}$ (note that some of the points of ${S_i}$ can certainly lie on the boundary ${\{Q=0\}}$). This can be iterated to give a useful cell decomposition:

Proposition 1 (Cell decomposition) Let ${P}$ be a finite set of points in ${{\bf R}^d}$, and let ${D}$ be a positive integer. Then there exists a polynomial ${Q}$ of degree at most ${D}$, and a decomposition

$\displaystyle {\bf R}^d = \{ Q = 0\} \cup C_1 \cup \ldots \cup C_m$

into the hypersurface ${\{Q=0\}}$ and a collection ${C_1,\ldots,C_m}$ of cells bounded by ${\{Q=0\}}$, such that ${m = O_d(D^d)}$, and such that each cell ${C_i}$ contains at most ${O_d( |P|/D^d )}$ points of ${P}$.

A proof is sketched in this previous blog post. The cells in the argument are not necessarily connected (being instead formed by intersecting together a number of semi-algebraic sets such as ${\{ Q > 0\}}$ and ${\{Q<0\}}$), but it is a classical result (established independently by Oleinik-Petrovskii, Milnor, and Thom) that any degree ${D}$ hypersurface ${\{Q=0\}}$ divides ${{\bf R}^d}$ into ${O_d(D^d)}$ connected components, so one can easily assume that the cells are connected if desired. (Incidentally, one does not need the full machinery of the results in the above cited papers – which control not just the number of components, but all the Betti numbers of the complement of ${\{Q=0\}}$ – to get the bound on connected components; one can instead observe that every bounded connected component has a critical point where ${\nabla Q = 0}$, and one can control the number of these points by Bezout’s theorem, after perturbing ${Q}$ slightly to enforce genericity, and then count the unbounded components by an induction on dimension.)

Remark 1 By setting ${D}$ as large as ${O_d(|P|^{1/d})}$, we obtain as a limiting case of the cell decomposition the fact that any finite set ${P}$ of points in ${{\bf R}^d}$ can be captured by a hypersurface of degree ${O_d(|P|^{1/d})}$. This fact is in fact true over arbitrary fields (not just over ${{\bf R}}$), and can be proven by a simple linear algebra argument (see e.g. this previous blog post). However, the cell decomposition is more flexible than this algebraic fact due to the ability to arbitrarily select the degree parameter ${D}$.
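The linear algebra fact in this remark is easy to implement: once there are more monomials of degree at most ${D}$ than points, the evaluation matrix has a nontrivial kernel, and any kernel vector gives a polynomial vanishing on the whole set. A minimal numerical sketch (using floating-point SVD rather than exact arithmetic, purely for illustration):

```python
import itertools
import numpy as np

def vanishing_poly(points, d, D):
    """Nonzero polynomial of degree <= D in d variables vanishing on all
    the given points: the C(D+d, d) monomials of degree <= D outnumber the
    points, so the evaluation matrix has a nontrivial kernel."""
    monos = [e for e in itertools.product(range(D + 1), repeat=d)
             if sum(e) <= D]
    assert len(monos) > len(points), "need C(D+d, d) > |P|"
    A = np.array([[np.prod([pt[i] ** e[i] for i in range(d)]) for e in monos]
                  for pt in points], dtype=float)
    # the smallest right singular vector spans the (numerical) kernel
    return monos, np.linalg.svd(A)[2][-1]

pts = [(1.0, 2.0), (3.0, -1.0), (0.5, 0.5), (-2.0, 4.0), (1.5, 1.5)]
monos, c = vanishing_poly(pts, d=2, D=2)   # C(4, 2) = 6 monomials > 5 points
for x, y in pts:
    assert abs(sum(ci * x ** e[0] * y ** e[1]
                   for ci, e in zip(c, monos))) < 1e-8
```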

The cell decomposition can be viewed as a structural theorem for arbitrary large configurations of points in space, much as the Szemerédi regularity lemma can be viewed as a structural theorem for arbitrary large dense graphs. Indeed, just as many problems in the theory of large dense graphs can be profitably attacked by first applying the regularity lemma and then inspecting the outcome, it now seems that many problems in combinatorial incidence geometry can be attacked by applying the cell decomposition (or a similar such decomposition), with a parameter ${D}$ to be optimised later, to a relevant set of points, and seeing how the cells interact with each other and with the other objects in the configuration (lines, planes, circles, etc.). This strategy was spectacularly illustrated recently with Guth and Katz‘s use of the cell decomposition to resolve the Erdös distinct distance problem (up to logarithmic factors), as discussed in this blog post.

In this post, I wanted to record a simpler (but still illustrative) version of this method (that I learned from Nets Katz), namely to provide yet another proof of the Szemerédi-Trotter theorem in incidence geometry:

Theorem 2 (Szemerédi-Trotter theorem) Given a finite set of points ${P}$ and a finite set of lines ${L}$ in ${{\bf R}^2}$, the set of incidences ${I(P,L):= \{ (p,\ell) \in P \times L: p \in \ell \}}$ has cardinality

$\displaystyle |I(P,L)| \ll |P|^{2/3} |L|^{2/3} + |P| + |L|.$

This theorem has many short existing proofs, including one via crossing number inequalities (as discussed in this previous post) or via a slightly different type of cell decomposition (as discussed here). The proof given below is not that different, in particular, from the latter proof, but I believe it still serves as a good introduction to the polynomial method in combinatorial incidence geometry.
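The ${|P|^{2/3} |L|^{2/3}}$ term in the Szemerédi-Trotter theorem is necessary, as the standard grid example shows. Here is a brute-force Python verification (illustrative only) that a ${k \times 2k^2}$ grid of points and the ${k^3}$ lines ${y = ax+b}$ with ${1 \leq a \leq k}$, ${1 \leq b \leq k^2}$ generate ${k^4 \sim |P|^{2/3}|L|^{2/3}}$ incidences:

```python
k = 4
P = {(x, y) for x in range(1, k + 1) for y in range(1, 2 * k * k + 1)}
L = [(a, b) for a in range(1, k + 1) for b in range(1, k * k + 1)]

# each line y = a*x + b meets the grid in exactly k points (for each
# x in 1..k we have 2 <= a*x + b <= 2*k^2), so |I(P,L)| = k * |L| = k^4
I = sum(1 for (a, b) in L for x in range(1, k + 1) if (x, a * x + b) in P)
assert I == k ** 4

# k^4 matches the main term |P|^{2/3} |L|^{2/3} = (2 k^3)^{2/3} (k^3)^{2/3}
assert I <= 4 * (len(P) ** (2 / 3) * len(L) ** (2 / 3) + len(P) + len(L))
```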

Combinatorial incidence geometry is the study of the possible combinatorial configurations between geometric objects such as lines and circles. One of the basic open problems in the subject has been the Erdős distance problem, posed in 1946:

Problem 1 (Erdős distance problem) Let ${N}$ be a large natural number. What is the least number ${\# \{ |x_i-x_j|: 1 \leq i < j \leq N \}}$ of distances that are determined by ${N}$ points ${x_1,\ldots,x_N}$ in the plane?

Erdős called this least number ${g(N)}$. For instance, one can check that ${g(3)=1}$ and ${g(4)=2}$, although the precise computation of ${g}$ rapidly becomes more difficult after this. By considering ${N}$ points in arithmetic progression, we see that ${g(N) \leq N-1}$. By considering the slightly more sophisticated example of a ${\sqrt{N} \times \sqrt{N}}$ lattice grid (assuming that ${N}$ is a square number for simplicity), and using some analytic number theory, one can obtain the slightly better asymptotic bound ${g(N) = O( N / \sqrt{\log N} )}$.
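The two upper-bound constructions above can be compared by brute force (using squared distances, which are exact integers for integer points, so no floating-point tolerance is needed); a quick illustrative Python sketch:

```python
from itertools import combinations

def distinct_distances(pts):
    """Number of distinct distances determined by a planar point set,
    computed exactly via squared distances (integers for integer points)."""
    return len({(p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                for p, q in combinations(pts, 2)})

# N = 16 collinear points in arithmetic progression: N - 1 = 15 distances
assert distinct_distances([(i, 0) for i in range(16)]) == 15

# the 4 x 4 grid does better: only 9 distinct distances among N = 16 points
assert distinct_distances([(i, j) for i in range(4) for j in range(4)]) == 9
```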

On the other hand, lower bounds are more difficult to obtain. As observed by Erdős, an easy argument, ultimately based on the incidence geometry fact that any two circles intersect in at most two points, gives the lower bound ${g(N) \gg N^{1/2}}$. The exponent ${1/2}$ has been slowly increased over the years by a series of increasingly intricate arguments, combining such incidence geometry facts with deeper results in combinatorial incidence geometry (most notably the Szemerédi-Trotter theorem) and with some tools from additive combinatorics; however, these methods seemed to fall quite short of the optimal exponent of ${1}$. (Indeed, until last week, the best lower bound known was approximately ${N^{0.8641}}$, due to Katz and Tardos.)

Very recently, though, Guth and Katz have obtained a near-optimal result:

Theorem 2 One has ${g(N) \gg N / \log N}$.

The proof neatly combines together several powerful and modern tools in a new way: a recent geometric reformulation of the problem due to Elekes and Sharir; the polynomial method as used recently by Dvir, Guth, and Guth-Katz on related incidence geometry problems (and discussed previously on this blog); and the somewhat older method of cell decomposition (also discussed on this blog). A key new insight is that the polynomial method (and more specifically, the polynomial Ham Sandwich theorem, also discussed previously on this blog) can be used to efficiently create cells.

In this post, I thought I would sketch some of the key ideas used in the proof, though I will not give the full argument here (the paper itself is largely self-contained, well motivated, and of only moderate length). In particular I will not go through all the various cases of configuration types that one has to deal with in the full argument, but only some illustrative special cases.

To simplify the exposition, I will repeatedly rely on “pigeonholing cheats”. A typical such cheat: if I have ${n}$ objects (e.g. ${n}$ points or ${n}$ lines), each of which could be of one of two types, I will assume that either all ${n}$ of the objects are of the first type, or all ${n}$ of the objects are of the second type. (In truth, I can only assume that at least ${n/2}$ of the objects are of the first type, or at least ${n/2}$ of the objects are of the second type; but in practice, having ${n/2}$ instead of ${n}$ only ends up costing an unimportant multiplicative constant in the type of estimates used here.) A related such cheat: if one has ${n}$ objects ${A_1,\ldots,A_n}$ (again, think of ${n}$ points or ${n}$ circles), and to each object ${A_i}$ one can associate some natural number ${k_i}$ (e.g. some sort of “multiplicity” for ${A_i}$) that is of “polynomial size” (of size ${O(N^{O(1)})}$), then I will assume in fact that all the ${k_i}$ are in a fixed dyadic range ${[k,2k]}$ for some ${k}$. (In practice, the dyadic pigeonhole principle can only achieve this after throwing away all but about ${n/\log N}$ of the original ${n}$ objects; it is this type of logarithmic loss that eventually leads to the logarithmic factor in the main theorem.) Using the notation ${X \sim Y}$ to denote the assertion that ${C^{-1} Y \leq X \leq CY}$ for an absolute constant ${C}$, we thus have ${k_i \sim k}$ for all ${i}$, thus ${k_i}$ is morally constant.
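The dyadic pigeonholing cheat can be sketched in a few lines of code (our own illustration; the names are not from the paper): bucket each multiplicity ${k_i}$ into the dyadic range containing it, and keep only the largest bucket.

```python
import math
import random

def dyadic_bucket(multiplicities):
    """Group object indices by the dyadic range [2^j, 2^(j+1)) containing
    their multiplicity, and return (2^j, indices) for the largest group.

    With all k_i of polynomial size there are only O(log N) ranges, so the
    winning group keeps at least a 1/log fraction of the objects -- the
    source of the logarithmic losses mentioned above."""
    buckets = {}
    for i, k in enumerate(multiplicities):
        j = k.bit_length() - 1          # k lies in [2^j, 2^(j+1))
        buckets.setdefault(j, []).append(i)
    j, kept = max(buckets.items(), key=lambda item: len(item[1]))
    return 2 ** j, kept

random.seed(0)
ks = [random.randint(1, 10**6) for _ in range(10000)]
k, kept = dyadic_bucket(ks)
assert all(k <= ks[i] < 2 * k for i in kept)            # k_i ~ k on the kept set
assert len(kept) >= len(ks) / (math.log2(max(ks)) + 1)  # only a logarithmic loss
```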

I will also use asymptotic notation rather loosely, to avoid cluttering the exposition with a certain amount of routine but tedious bookkeeping of constants. In particular, I will use the informal notation ${X \lll Y}$ or ${Y \ggg X}$ to denote the statement that ${X}$ is “much less than” ${Y}$ or ${Y}$ is “much larger than” ${X}$, by some large constant factor.

Below the fold is a version of my talk “Recent progress on the Kakeya conjecture” that I gave at the Fefferman conference.

Jordan Ellenberg, Richard Oberlin, and I have just uploaded to the arXiv the paper “The Kakeya set and maximal conjectures for algebraic varieties over finite fields“, submitted to Mathematika.  This paper builds upon some work of Dvir and later authors on the Kakeya problem in finite fields, which I have discussed in this earlier blog post.  Dvir established the following:

Kakeya set conjecture for finite fields. Let F be a finite field, and let E be a subset of $F^n$ that contains a line in every direction.  Then E has cardinality at least $c_n |F|^n$ for some $c_n > 0$.

The initial argument of Dvir gave $c_n = 1/n!$.  This was improved to $c_n = c^n$ for some explicit $0 < c < 1$ by Saraf and Sudan, and recently to $c_n =1/2^n$ by Dvir, Kopparty, Saraf, and Sudan, which is within a factor of 2 of the optimal result.

In our work we investigate a somewhat different set of improvements to Dvir’s result.  The first concerns the Kakeya maximal function $f^*: {\Bbb P}^{n-1}(F) \to {\Bbb R}$ of a function $f: F^n \to {\Bbb R}$, defined for all directions $\xi \in {\Bbb P}^{n-1}(F)$ in the projective hyperplane at infinity by the formula

$f^*(\xi) = \sup_{\ell // \xi} \sum_{x \in \ell} |f(x)|$

where the supremum ranges over all lines $\ell$ in $F^n$ oriented in the direction $\xi$.  Our first result is the endpoint $L^p$ estimate for this operator, namely

Kakeya maximal function conjecture in finite fields. We have $\| f^* \|_{\ell^n({\Bbb P}^{n-1}(F))} \leq C_n |F|^{(n-1)/n} \|f\|_{\ell^n(F^n)}$ for some constant $C_n > 0$.

This result implies Dvir’s result, since if f is the indicator function of the set E in Dvir’s result, then $f^*(\xi) = |F|$ for every $\xi \in {\Bbb P}^{n-1}(F)$.  However, it also gives information on more general sets E which do not necessarily contain a line in every direction, but instead contain a certain fraction of a line in a subset of directions.  The exponents here are best possible in the sense that all other $\ell^p \to \ell^q$ mapping properties of the operator can be deduced (with bounds that are optimal up to constants) by interpolating the above estimate with more trivial estimates.  This result is the finite field analogue of a long-standing (and still open) conjecture for the Kakeya maximal function in Euclidean spaces; we rely on the polynomial method of Dvir, which thus far has not extended to the Euclidean setting (but note the very interesting variant of this method by Guth that has established the endpoint multilinear Kakeya maximal function estimate in this setting, see this blog post for further discussion).
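To make the definitions concrete (a small computational check of ours, not from the paper), one can compute the finite-field Kakeya maximal function directly in $F_p^2$ and verify that the indicator function of a Kakeya set — here, trivially, the whole plane — has $f^*(\xi) = |F|$ in every direction:

```python
from itertools import product

p = 5  # work over F_5^2; any small prime works

def lines_in_direction(v):
    """All p distinct affine lines in F_p^2 with direction vector v."""
    seen = set()
    for x0 in product(range(p), repeat=2):
        line = frozenset(((x0[0] + t * v[0]) % p, (x0[1] + t * v[1]) % p)
                         for t in range(p))
        seen.add(line)
    return seen

# the p + 1 projective directions of the plane
directions = [(1, s) for s in range(p)] + [(0, 1)]

def maximal_fn(f):
    """Kakeya maximal function: for each direction, the largest line sum of |f|."""
    return [max(sum(abs(f[x]) for x in line) for line in lines_in_direction(v))
            for v in directions]

# f = indicator of all of F_p^2 (a Kakeya set): every line sum equals p = |F|,
# matching the identity f^*(xi) = |F| used above to recover Dvir's result.
f = {x: 1 for x in product(range(p), repeat=2)}
assert maximal_fn(f) == [p] * (p + 1)
```

For contrast, the indicator of a single horizontal line has maximal function equal to $p$ in its own direction and $1$ in every other, since two non-parallel affine lines meet in exactly one point.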

It turns out that a direct application of the polynomial method is not sufficient to recover the full strength of the maximal function estimate; but by combining the polynomial method with the Nikishin-Maurey-Pisier-Stein “method of random rotations” (as interpreted nowadays by Stein and later by Bourgain, and originally inspired by the factorisation theorems of Nikishin, Maurey, and Pisier), one can already recover a “restricted weak type” version of the above estimate.  If one then enhances the polynomial method with the “method of multiplicities” (as introduced by Saraf and Sudan) we can then recover the full “strong type” estimate; a few more details below the fold.

It turns out that one can generalise the above results to more general affine or projective algebraic varieties over finite fields.  In particular, we showed

Kakeya maximal function conjecture in algebraic varieties. Suppose that $W \subset {\Bbb P}^N$ is an (n-1)-dimensional algebraic variety.  Let $d \geq 1$ be an integer. Then we have

$\| \sup_{\gamma \ni x; \gamma \not \subset W} \sum_{y \in \gamma} f(y) \|_{\ell^n_x(W(F))} \leq C_{n,d,N,W} |F|^{(n-1)/n} \|f\|_{\ell^n({\Bbb P}^N(F))}$

for some constant $C_{n,d,N,W} > 0$, where the supremum is over all irreducible algebraic curves $\gamma$ of degree at most d that pass through x but do not lie in W, and W(F) denotes the F-points of W.

The ordinary Kakeya maximal function conjecture corresponds to the case when N=n, W is the hyperplane at infinity, and the degree d is equal to 1.  One corollary of this estimate is a Dvir-type result: a subset of ${\Bbb P}^N(F)$ which contains, for each x in W, an irreducible algebraic curve of degree d passing through x but not lying in W, has cardinality $\gg |F|^n$ if $|W| \gg |F|^{n-1}$.  (In particular this implies a lower bound for Nikodym sets worked out by Li.)  The dependence of the implied constant on W is only via the degree of W.

The techniques used in the flat case can easily handle curves $\gamma$ of higher degree (provided that we allow the implied constants to depend on d), but the method of random rotations does not seem to work directly on the algebraic variety W as there are usually no symmetries of this variety to exploit.  Fortunately, we can get around this by using a “random projection trick” to “flatten” W into a hyperplane (after first expressing W as the zero locus of some polynomials, and then composing with the graphing map for such polynomials), reducing the non-flat case to the flat case.

Below the fold, I wish to sketch two of the key ingredients in our arguments, the random rotations method and the random projections trick.  (We of course also use some algebraic geometry, but mostly low-tech stuff, on the level of Bezout’s theorem, though we do need one non-trivial result of Kleiman (from SGA6), that asserts that bounded degree varieties can be cut out by a bounded number of polynomials of bounded degree.)

[Update, March 14: See also Jordan’s own blog post on our paper.]

One of my favourite family of conjectures (and one that has preoccupied a significant fraction of my own research) is the family of Kakeya conjectures in geometric measure theory and harmonic analysis.  There are many (not quite equivalent) conjectures in this family.  The cleanest one to state is the set conjecture:

Kakeya set conjecture: Let $n \geq 1$, and let $E \subset {\Bbb R}^n$ contain a unit line segment in every direction (such sets are known as Kakeya sets or Besicovitch sets).  Then E has Hausdorff dimension and Minkowski dimension equal to n.

One reason why I find these conjectures fascinating is the sheer variety of mathematical fields that arise both in the partial results towards this conjecture, and in the applications of those results to other problems.  See for instance this survey of Wolff, my Notices article and this article of Łaba on the connections between this problem and other problems in Fourier analysis, PDE, and additive combinatorics; there have even been some connections to number theory and to cryptography.  At the other end of the pipeline, the mathematical tools that have gone into the proofs of various partial results have included:

[This list is not exhaustive.]

Very recently, I was pleasantly surprised to see yet another mathematical tool used to obtain new progress on the Kakeya conjecture, namely (a generalisation of) the famous Ham Sandwich theorem from algebraic topology.  This was recently used by Guth to establish a certain endpoint multilinear Kakeya estimate left open by the work of Bennett, Carbery, and myself.  With regards to the Kakeya set conjecture, Guth’s arguments assert, roughly speaking, that the only Kakeya sets that can fail to have full dimension are those which obey a certain “planiness” property, which informally means that the line segments that pass through a typical point in the set must be essentially coplanar. (This property first surfaced in my paper with Katz and Łaba.)  Guth’s arguments can be viewed as a partial analogue of Dvir’s arguments in the finite field setting (which I discussed in this blog post) to the Euclidean setting; in particular, both arguments rely crucially on the ability to create a polynomial of controlled degree that vanishes at or near a large number of points.  Unfortunately, while these arguments fully settle the Kakeya conjecture in the finite field setting, it appears that some new ideas are still needed to finish off the problem in the Euclidean setting.  Nevertheless this is an interesting new development in the long history of this conjecture, in particular demonstrating that the polynomial method can be successfully applied to continuous Euclidean problems (i.e. it is not confined to the finite field setting).

In this post I would like to sketch some of the key ideas in Guth’s paper, in particular the role of the Ham Sandwich theorem (or more precisely, a polynomial generalisation of this theorem first observed by Gromov).

One of my favourite unsolved problems in mathematics is the Kakeya conjecture in geometric measure theory. This conjecture is descended from the

Kakeya needle problem. (1917) What is the least area in the plane required to continuously rotate a needle of unit length and zero thickness around completely (i.e. by $360^\circ$)?

For instance, one can rotate a unit needle inside a disk of unit diameter, which has area $\pi/4$. By using a deltoid one requires only $\pi/8$ area.

In 1928, Besicovitch showed that in fact one could rotate a unit needle using an arbitrarily small amount of positive area. This unintuitive fact was a corollary of two observations. The first, which is easy, is that one can translate a needle using arbitrarily small area, by sliding the needle along the direction it points in for a long distance (which costs zero area), turning it slightly (costing a small amount of area), sliding back, and then undoing the turn. The second fact, which is less obvious, can be phrased as follows. Define a Kakeya set in ${\Bbb R}^2$ to be any set which contains a unit line segment in each direction. (See this Java applet of mine, or the wikipedia page, for some pictures of such sets.)

Theorem. (Besicovitch, 1919) There exist Kakeya sets in ${\Bbb R}^2$ of arbitrarily small area (or more precisely, Lebesgue measure).

In fact, one can construct such sets with zero Lebesgue measure. On the other hand, it was shown by Davies that even though these sets had zero area, they were still necessarily two-dimensional (in the sense of either Hausdorff dimension or Minkowski dimension). This led to an analogous conjecture in higher dimensions:

Kakeya conjecture. A Besicovitch set in ${\Bbb R}^n$ (i.e. a subset of ${\Bbb R}^n$ that contains a unit line segment in every direction) has Minkowski and Hausdorff dimension equal to n.

This conjecture remains open in dimensions three and higher (and gets more difficult as the dimension increases), although many partial results are known. For instance, when n=3, it is known that Besicovitch sets have Hausdorff dimension at least 5/2 and (upper) Minkowski dimension at least $5/2 + 10^{-10}$. See my Notices article for a general survey of this problem (and its connections with Fourier analysis, additive combinatorics, and PDE), my paper with Katz for a more technical survey, and Wolff’s survey for a systematic treatment of the field (up to about 1998 or so).

In 1999, Wolff proposed a simpler finite field analogue of the Kakeya conjecture as a model problem that avoided all the technical issues involving Minkowski and Hausdorff dimension. If $F^n$ is a vector space over a finite field F, define a Kakeya set to be a subset of $F^n$ which contains a line in every direction.

Finite field Kakeya conjecture. Let $E \subset F^n$ be a Kakeya set. Then E has cardinality at least $c_n |F|^n$, where $c_n > 0$ depends only on n.
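In the smallest nontrivial case one can simply find the optimal constant by exhaustive search (a toy computation of ours, not from the post; the helper names are hypothetical):

```python
from itertools import combinations, product

q = 3  # the smallest interesting plane F_3^2: 9 points, 4 directions
points = list(product(range(q), repeat=2))

def line(x0, v):
    """The affine line through x0 with direction vector v, as a point set."""
    return frozenset(((x0[0] + t * v[0]) % q, (x0[1] + t * v[1]) % q)
                     for t in range(q))

directions = [(1, s) for s in range(q)] + [(0, 1)]  # the q+1 projective directions
lines = {v: {line(x0, v) for x0 in points} for v in directions}

def is_kakeya(E):
    """Does E contain a full line in every direction?"""
    S = set(E)
    return all(any(l <= S for l in lines[v]) for v in directions)

# exhaustive search over subsets of the 9 points, smallest first
min_size = next(r for r in range(1, q * q + 1)
                if any(is_kakeya(E) for E in combinations(points, r)))
print("minimum Kakeya set size in F_3^2:", min_size)
```

The answer the search finds sits strictly above the lower bound $\binom{q+n-1}{n} = \binom{4}{2} = 6$ that Dvir's argument (discussed below) gives for $q = 3$, $n = 2$.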

This conjecture has had a significant influence in the subject, in particular inspiring work on the sum-product phenomenon in finite fields, which has since proven to have many applications in number theory and computer science. Modulo minor technicalities, the progress on the finite field Kakeya conjecture was, until very recently, essentially the same as that of the original “Euclidean” Kakeya conjecture.

Last week, the finite field Kakeya conjecture was proven using a beautifully simple argument by Zeev Dvir, using the polynomial method in algebraic extremal combinatorics. The proof is so short that I can present it in full here.
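The key existence step in Dvir's argument — any set of fewer than $\binom{q+n-1}{n}$ points in $F_q^n$ (this binomial coefficient being the dimension of the space of polynomials of degree at most $q-1$ in $n$ variables) admits a nonzero such polynomial vanishing on it — is pure linear algebra, and can be sketched in code (a minimal implementation of ours, with hypothetical helper names, not Dvir's own):

```python
from itertools import product

def monomials(n, d):
    """Exponent vectors of all monomials in n variables of total degree <= d."""
    return [e for e in product(range(d + 1), repeat=n) if sum(e) <= d]

def eval_mon(x, e, p):
    """Evaluate the monomial with exponent vector e at the point x, mod p."""
    out = 1
    for xi, ei in zip(x, e):
        out = out * pow(xi, ei, p) % p
    return out

def null_vector(mat, ncols, p):
    """A nonzero vector v over F_p with mat v = 0, or None (p must be prime)."""
    mat = [row[:] for row in mat]
    pivot_cols = []
    row = 0
    for col in range(ncols):
        pr = next((r for r in range(row, len(mat)) if mat[r][col] % p), None)
        if pr is None:
            continue
        mat[row], mat[pr] = mat[pr], mat[row]
        inv = pow(mat[row][col], p - 2, p)   # inverse via Fermat's little theorem
        mat[row] = [x * inv % p for x in mat[row]]
        for r in range(len(mat)):
            if r != row and mat[r][col] % p:
                f = mat[r][col]
                mat[r] = [(a - f * b) % p for a, b in zip(mat[r], mat[row])]
        pivot_cols.append(col)
        row += 1
    free = [c for c in range(ncols) if c not in pivot_cols]
    if not free:
        return None
    v = [0] * ncols
    v[free[0]] = 1                           # one free coefficient set to 1 ...
    for r, c in enumerate(pivot_cols):
        v[c] = -mat[r][free[0]] % p          # ... determines the pivot ones
    return v

def vanishing_poly(points, d, p):
    """A nonzero polynomial of degree <= d over F_p vanishing on all the given
    points (as a dict exponent-vector -> coefficient), or None.  Existence is
    guaranteed when len(points) < C(d + n, n): fewer linear constraints than
    coefficients -- the counting step at the heart of the polynomial method."""
    n = len(points[0])
    mons = monomials(n, d)
    rows = [[eval_mon(x, e, p) for e in mons] for x in points]
    v = null_vector(rows, len(mons), p)
    return None if v is None else dict(zip(mons, v))

# five points in F_3^2, fewer than the C(4, 2) = 6 monomials of degree <= 2:
p, d = 3, 2
E = [(0, 0), (1, 0), (2, 0), (0, 1), (0, 2)]
poly = vanishing_poly(E, d, p)
assert poly is not None
assert all(sum(c * eval_mon(x, e, p) for e, c in poly.items()) % p == 0 for x in E)
```

Dvir's contradiction then comes from restricting such a polynomial to the lines of a Kakeya set: a nonzero univariate polynomial of degree at most $q-1$ cannot vanish at all $q$ points of a line.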