*[Corrected, thanks – T.]*

Nice proof!

*[Corrected, thanks – WS.]*

Oh, that makes sense. My mistake!

It is correct. There are actually fewer rank-one functions for the usual tensor rank than for the slice rank, not more as you say. Rank-one functions for the usual tensor rank must split into independent functions of each factor, whereas for slice rank they need only split into a function of one factor times a function of all the other factors.

That should be a “less than or equal,” since there are more rank-one functions for the usual tensor rank than for the slice rank.

*[Corrected, thanks – T.]*

You can interpret the combinatorial problem appearing in Proposition 4 as a particular kind of information problem: Alice knows a k-tuple of solutions to a problem and must send a message to Bob in the minimum amount of space, and Bob must then output one of the k possible solutions (Alice and Bob both know the set of possible problems and agree on a communication protocol beforehand). Here S_1,…,S_k are sets of possible solutions and Gamma is the set of possible problems. I don’t know much about this kind of information problem, other than the asymptotic formula in Proposition 6.
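One concrete reading of this protocol (an assumption on my part, not something spelled out above): each message must commit Bob to a pair (i, s) with s in S_i, and the protocol is valid only if every k-tuple Alice maps to that message has s in coordinate i. Under that reading, the minimum number of distinct messages is the minimum number of “slices” of Gamma (sets of tuples with one coordinate held fixed) needed to cover Gamma, which one can brute-force for tiny instances; `min_messages` is a hypothetical helper name:

```python
from itertools import combinations

def min_messages(gamma):
    """Brute-force the minimum number of distinct messages Alice needs.

    Assumed reading of the protocol: a message commits Bob to an answer
    (i, s), and is valid only if every tuple mapped to it has s in
    coordinate i.  The answer is then the minimum number of slices
    {t : t[i] = s} whose union covers Gamma.
    """
    gamma = {tuple(t) for t in gamma}      # Gamma as a set of k-tuples
    k = len(next(iter(gamma)))
    # Every usable message corresponds to a slice of Gamma with one
    # coordinate held fixed at some value s.
    slices = [frozenset(t for t in gamma if t[i] == s)
              for i in range(k)
              for s in {t[i] for t in gamma}]
    # Smallest r such that some r slices cover all of Gamma.
    for r in range(1, len(gamma) + 1):
        for combo in combinations(slices, r):
            if frozenset().union(*combo) >= gamma:
                return r
```

For instance, Gamma = {(0,0), (0,1), (1,0)} needs two messages (no single coordinate value is shared by all three tuples), so Alice needs one bit; this is only a toy check of the interpretation, not the asymptotic formula of Proposition 6.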

Surely something more can be said about this combinatorial problem. But, as Terry says, it’s hard to connect it to slice rank, outside this very special antichain case.

I don’t think there is a direct information-theoretic interpretation of slice rank in any fixed dimension, but when considering the slice rank of a tensor power in the limit, it seems that the asymptotic (normalised) slice rank has an information-theoretic interpretation. Right now we can only formalise this in the case where the tensor has a representation in terms of suitable basis vectors in which the nonzero coefficients are restricted to an antichain (see Proposition 6), but this connection may well hold in greater generality (maybe involving some “noncommutative” version of entropy?).
