This problem lies in the highly interconnected interface between algebraic combinatorics (esp. the combinatorics of Young tableaux and related objects, including honeycombs and puzzles), algebraic geometry (particularly classical and quantum intersection theory and geometric invariant theory), linear algebra (additive and multiplicative, real and tropical), and the representation theory (classical, quantum, crystal, etc.) of classical groups. (Another open problem in this subject is to find a succinct and descriptive name for the field.) I myself haven’t actively worked in this area for several years, but I still find it a fascinating and beautiful subject. (With respect to the dichotomy between structure and randomness, this subject lies deep within the “structure” end of the spectrum.)

As mentioned above, the problems in this area can be approached from a variety of quite diverse perspectives, but here I will focus on the linear algebra perspective, which is perhaps the most accessible. About nine years ago, Allen Knutson and I introduced a combinatorial gadget, called a honeycomb, which among other things controlled the relationship between the eigenvalues of two arbitrary Hermitian matrices A, B, and the eigenvalues of their sum A+B; this was not the first such gadget to achieve this purpose, but it was a particularly convenient one for studying this problem; in particular, it was used to resolve two conjectures in the subject, the saturation conjecture and the Horn conjecture. (These conjectures have since been proven by a variety of other methods.) There is a natural multiplicative version of these problems, which now relates the eigenvalues of two arbitrary unitary matrices U, V and the eigenvalues of their product UV; this led to the “quantum saturation” and “quantum Horn” conjectures, which were proven a couple of years ago. However, the quantum analogue of a “honeycomb” remains a mystery; this is the main topic of the current post.

Let us first briefly review the additive situation. Consider three n\times n Hermitian matrices A,B,C such that A+B=C. Being Hermitian, the matrices A,B,C are all diagonalisable with real eigenvalues. Accordingly, let us arrange the eigenvalues of A (with multiplicity) in decreasing order as

\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n

the eigenvalues of B as

\mu_1 \geq \mu_2 \geq \ldots \geq \mu_n

and the eigenvalues of C as

\nu_1 \geq \nu_2 \geq \ldots \geq \nu_n.

Thus for instance \mu_2 is the second largest eigenvalue of B, etc.

An old question (essentially due to Sylvester, though this particular formulation is due to Weyl) was to determine the complete set of relationships between the \lambda_i, the \mu_j, and the \nu_k. There are a number of reasonably obvious equalities and inequalities that one can obtain here. For instance, from the obvious identity tr(A)+tr(B)=tr(C) we conclude the trace identity

\lambda_1 + \ldots + \lambda_n + \mu_1 + \ldots + \mu_n = \nu_1 + \ldots + \nu_n,
while from the minimax characterisation of the largest eigenvalue,

\lambda_1 = \sup_{\hbox{dim}(V)=1} \hbox{tr}(A|_V), etc.

one easily obtains the triangle inequality

\nu_1 \leq \lambda_1 + \mu_1.
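Both of these necessary conditions are easy to test numerically. Here is a small sketch in pure Python (the helper name `eig2` and the random-matrix setup are my own) which draws random 2 x 2 Hermitian matrices A, B, forms C = A+B, and checks the trace identity and the triangle inequality, together with the Weyl inequalities \nu_{i+j-1} \leq \lambda_i + \mu_j discussed later in this post:

```python
import math
import random

def eig2(a, b, c):
    # Eigenvalues (in decreasing order) of the 2x2 Hermitian matrix
    # [[a, b], [conj(b), c]], with a, c real and b complex:
    # (a+c)/2 +- sqrt(((a-c)/2)^2 + |b|^2).
    mean = (a + c) / 2
    r = math.hypot((a - c) / 2, abs(b))
    return mean + r, mean - r

random.seed(0)
for _ in range(1000):
    # Random Hermitian A and B; C = A + B is formed entrywise.
    aA, cA = random.gauss(0, 1), random.gauss(0, 1)
    bA = complex(random.gauss(0, 1), random.gauss(0, 1))
    aB, cB = random.gauss(0, 1), random.gauss(0, 1)
    bB = complex(random.gauss(0, 1), random.gauss(0, 1))
    l1, l2 = eig2(aA, bA, cA)
    m1, m2 = eig2(aB, bB, cB)
    n1, n2 = eig2(aA + aB, bA + bB, cA + cB)
    eps = 1e-9
    assert abs((l1 + l2) + (m1 + m2) - (n1 + n2)) < eps  # trace identity
    assert n1 <= l1 + m1 + eps                           # triangle inequality
    assert n2 <= min(l1 + m2, l2 + m1) + eps             # Weyl inequalities
```

Of course, such experiments only verify necessity of these inequalities; the content of the Horn conjecture is that a suitable finite list of them is also sufficient.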

And so on and so forth. It turns out that the set of all possible \lambda_i, \mu_j, \nu_k forms a convex cone, determined by a finite number of linear inequalities; this can be derived from symplectic geometry considerations (the Atiyah/Guillemin-Sternberg convexity theorem, or more precisely a refinement due to Kirwan). A complete (in fact, overcomplete) list of such inequalities, generated by a beautifully recursive formula, was conjectured by Alfred Horn (no relation to Roger Horn, who also works in linear algebra). The Horn conjecture was finally settled in two papers: one by Klyachko, which used geometric invariant theory to reduce the problem to a simpler problem known as the saturation conjecture, and one by Allen Knutson and myself, which established the saturation conjecture by a combinatorial argument using honeycombs.

Here is what a honeycomb looks like:


Note that the lengths of the edges in the honeycomb are variable, but there are only three possible orientations, 120 degree angles apart. This is a honeycomb of order 4, with four half-infinite edges going in a NW direction, four in a NE direction, and four in the S direction; the coordinates of these edges are the boundary data of this honeycomb. [For more precise definitions, see our survey paper or our original article.] One can also play with honeycombs using our honeycomb applet.

Allen and I observed that honeycombs “solve” Weyl’s problem in the following sense: a collection \lambda_i, \mu_j, \nu_k of putative eigenvalues can arise as the eigenvalues of a triplet A+B=C of matrices if and only if they arise as the boundary values of at least one honeycomb. In fact, there is a more quantitative relationship: the “volume” of the set of all A+B=C with the desired eigenvalues is equal (up to some explicit normalising factors) to the “volume” of all the honeycombs with the specified boundary data.

There is also a discrete or quantized version of the above connection. One can define an integral honeycomb to be one where all lengths and coordinates are integers (in particular, the boundary data will also be integers). It turns out that an integral honeycomb with boundary values \lambda_i, \mu_j, \nu_k exists if and only if the irreducible representation of SU(n) with weight vector (\nu_1,\ldots,\nu_n) appears at least once in the tensor product of the irreducible representations of weight vectors (\lambda_1,\ldots,\lambda_n) and (\mu_1,\ldots,\mu_n), and furthermore the number of integral honeycombs counts the multiplicity of this representation. This multiplicity is in fact a Littlewood-Richardson coefficient, and appears in a number of other contexts, such as Schubert calculus (intersection numbers for Schubert classes); see this survey of Fulton for details. The precise relationship between the discrete and continuous formulations of this problem is in fact the key to the Horn conjecture (and the closely related saturation conjecture), but I will not discuss this in detail here (it is covered in the above references). Honeycombs can also be linked to several other combinatorial gadgets, such as puzzles, Berenstein-Zelevinsky patterns, and Young tableaux: see this paper for several of these connections (and this paper for some further recent developments).
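For concreteness, one can compute these multiplicities by brute force in the equivalent "hive" formulation of integral honeycombs: a hive is a triangular array of integers whose boundary entries are the partial sums of \lambda, \mu, \nu and whose entries obey the rhombus inequalities, and the number of hives with given boundary is the Littlewood-Richardson coefficient. The sketch below (the function name and the particular boundary conventions are my own, and the search is only feasible for very small n and small weights) counts such hives:

```python
from itertools import product

def lr_coefficient(lam, mu, nu):
    # Count integer hives of size n = len(lam): triangular arrays h[(i, j)]
    # (rows i = 0..n, positions j = 0..i) with boundary given by partial sums
    # of lam, mu, nu, subject to the rhombus inequalities.  This count equals
    # the Littlewood-Richardson coefficient c^nu_{lam, mu}.
    n = len(lam)
    if sum(lam) + sum(mu) != sum(nu):
        return 0  # necessary condition (the trace identity)

    h = {}
    for i in range(n + 1):
        h[(i, 0)] = sum(lam[:i])            # left edge: partial sums of lam
        h[(i, i)] = sum(nu[:i])             # right edge: partial sums of nu
        h[(n, i)] = sum(lam) + sum(mu[:i])  # bottom edge: continues with mu
    interior = [(i, j) for i in range(2, n) for j in range(1, i)]
    b = list(h.values())
    # Generous bounds: hive entries are at least the boundary minimum (discrete
    # concavity) and at most max + (max - min) of the boundary values.
    lo, hi = min(b), 2 * max(b) - min(b)

    def is_hive(h):
        # Rhombus inequalities: for each pair of adjacent unit triangles, the
        # entries at the two obtuse corners dominate those at the acute corners.
        for i in range(n):
            for j in range(i + 1):  # up-triangle (i,j), (i+1,j), (i+1,j+1)
                if i + 2 <= n and h[(i+1, j)] + h[(i+1, j+1)] < h[(i, j)] + h[(i+2, j+1)]:
                    return False
                if j + 1 <= i and h[(i, j)] + h[(i+1, j+1)] < h[(i, j+1)] + h[(i+1, j)]:
                    return False
                if j >= 1 and h[(i, j)] + h[(i+1, j)] < h[(i, j-1)] + h[(i+1, j+1)]:
                    return False
        return True

    count = 0
    for values in product(range(lo, hi + 1), repeat=len(interior)):
        trial = dict(h)
        trial.update(zip(interior, values))
        if is_hive(trial):
            count += 1
    return count
```

For instance, lr_coefficient((2,1,0), (2,1,0), (3,2,1)) returns 2, matching the coefficient of s_{321} in the Schur expansion of s_{21}^2.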

But let us leave this additive setting for now and turn to the analogous multiplicative problem. Here, instead of three Hermitian matrices A+B=C, we now have three unitary matrices UV=W. It is convenient to normalise these matrices to have determinant 1, in which case we can uniquely express the eigenvalues of U as

e(\lambda_1), \ldots, e(\lambda_n), where \lambda_1 \geq \ldots \geq \lambda_n \geq \lambda_1-1 and \lambda_1+\ldots+\lambda_n=0

where e(x) := e^{2\pi i x}, and similarly express the eigenvalues of V as

e(\mu_1), \ldots, e(\mu_n), where \mu_1 \geq \ldots \geq \mu_n \geq \mu_1-1 and \mu_1+\ldots+\mu_n=0

and the eigenvalues of W as

e(\nu_1), \ldots, e(\nu_n), where \nu_1 \geq \ldots \geq \nu_n \geq \nu_1-1 and \nu_1+\ldots+\nu_n=0.

We can now ask the multiplicative version of Weyl’s problem, namely to characterise all the relationships that exist between the \lambda_i, the \mu_j, and the \nu_k. For instance, it is possible to view \lambda_1 as the “maximum anti-clockwise angle” that U can rotate a vector, which can eventually lead to the inequality

\nu_1 \leq \lambda_1 + \mu_1.

One can continue creating inequalities of this type, and these inequalities strongly resemble those in the additive problem. This is not so surprising, since the additive problem emerges as a limiting case of the multiplicative one (if U = \exp( \epsilon A ), V = \exp( \epsilon B), W = \exp(\epsilon C) and UV=W, then A+B = C + O(\epsilon) when \epsilon is small, by the Baker-Campbell-Hausdorff formula). What is more surprising is that when the \lambda_i, \mu_j, \nu_k are sufficiently small, the inequalities which describe the multiplicative problem are exactly those that describe the additive problem! In fact, it is known that the space of all possible \lambda_i, \mu_j, \nu_k for the multiplicative problem is a convex polytope contained within the convex cone for the additive problem, and a quantum version of the Horn conjecture (i.e. an explicit recursive description of the faces of this polytope) was proven by Belkale (building upon earlier work by Agnihotri-Woodward and Belkale). For instance, while for the additive problem there is the constraint

\nu_{i+j-1} \leq \lambda_i + \mu_j

whenever i+j -1 \leq n (the Weyl inequalities), in the multiplicative problem one also has the additional constraint

\lambda_i + \mu_j \leq \nu_{i+j}+1.

As with the additive problem, the complete set of all inequalities of this form turns out to be rather messy to describe and I will not do so here.
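In the simplest case n=2, however, the constraints reduce to the three triangle inequalities together with the single quantum inequality \lambda_1 + \mu_1 + \nu_1 \leq 1, and one can test them numerically. Here is a sketch in pure Python (the parametrisation of SU(2) and the helper names are my own):

```python
import math
import random

def random_su2():
    # A random element of SU(2): normalise a random complex pair (a, b)
    # and form [[a, b], [-conj(b), conj(a)]], which has determinant 1.
    a = complex(random.gauss(0, 1), random.gauss(0, 1))
    b = complex(random.gauss(0, 1), random.gauss(0, 1))
    r = math.sqrt(abs(a) ** 2 + abs(b) ** 2)
    a, b = a / r, b / r
    return [[a, b], [-b.conjugate(), a.conjugate()]]

def matmul2(U, V):
    # Product of two 2x2 complex matrices.
    return [[sum(U[i][k] * V[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def angle(U):
    # For U in SU(2) the eigenvalues are e(t), e(-t) with 2 cos(2 pi t) = tr U;
    # this returns the normalised angle t = lambda_1 in [0, 1/2].
    t = (U[0][0] + U[1][1]).real / 2
    return math.acos(max(-1.0, min(1.0, t))) / (2 * math.pi)

random.seed(1)
for _ in range(1000):
    U, V = random_su2(), random_su2()
    l1, m1, n1 = angle(U), angle(V), angle(matmul2(U, V))
    eps = 1e-9
    assert n1 <= l1 + m1 + eps                      # triangle inequality
    assert l1 <= m1 + n1 + eps and m1 <= n1 + l1 + eps
    assert l1 + m1 + n1 <= 1 + eps                  # the extra quantum constraint
```

The last assertion is the n=2 instance of \lambda_i + \mu_j \leq \nu_{i+j}+1 (with i=j=1 and \nu_2 = -\nu_1), and has no counterpart in the additive problem.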

Just as the additive Weyl problem turned out to be linked to Schubert calculus (the intersection numbers of Schubert classes), the multiplicative problem turned out to be linked to quantum Schubert calculus (the Gromov-Witten numbers of the same classes), and making this link precise turned out to be the key to the proof of the quantum Horn conjecture.

This solves the “qualitative” version of the multiplicative Weyl problem, namely whether there exists any triple UV=W with the specified eigenvalues. However, one can still ask “quantitative” versions, namely to compute the volume of the space of all such triples. There is also the discretised quantitative version, which concerns either the Gromov-Witten numbers for Schubert classes, or else the multiplicities of fusion products in the Verlinde algebra of SU(n); these are rather technical and we refer to Agnihotri-Woodward for details. There should exist some concept of “quantum honeycomb” which computes all of these numbers, in much the same way that the usual honeycombs compute the volume of the space of solutions to A+B=C with specified eigenvalues, intersection numbers for Schubert classes, or multiplicities of tensor products of SU(n) irreducible representations. Vaguely speaking, it seems that one wants to construct an analogue of the planar honeycomb which lives instead on something like a two-dimensional torus, but it is not entirely clear (even when n=2) what the precise definition of this object should be.

It may seem like one needs to learn a fearsome amount of machinery to attack this problem, but actually I think one can at least guess what the quantum honeycomb should be just by experimentation with small cases n=1,2,3 and by using various sanity checks (this is how Allen and I discovered the additive honeycombs). For instance, the equation UV=W has the cyclic symmetry (U,V,W) \mapsto (V,W^{-1},U^{-1}) (since V W^{-1} = V V^{-1} U^{-1} = U^{-1}), and so the quantum honeycomb should enjoy a similar symmetry. There is also the translation symmetry (U,V,W) \mapsto (e(\alpha)U, e(\beta)V, e(\gamma)W) whenever \alpha+\beta+\gamma=0, so quantum honeycombs should be translation invariant. When the honeycomb is small (all vertices close to the origin) there should be a bijective correspondence between the quantum honeycomb and the regular honeycomb. The constraints between all the boundary values are already known due to the resolution of the quantum Horn conjecture. There are some other extreme cases which are also understood quite well, for instance when one of the matrices is very close to the identity but the other two are not.

My guess is that once a reasonable candidate for a quantum honeycomb is found which passes all the obvious sanity checks, actually verifying that it computes everything that it should will be a relatively routine matter (we have many different combinatorial ways of establishing things like this). This will give a combinatorial tool for computing a number of interesting quantities, and will probably also shed some light on why these honeycombs appear in the subject in the first place. (It seems to be somehow related to the Dynkin diagram A_{n-1} for the underlying group SU(n); it has proven a little tricky to find analogues of these objects for the other Dynkin diagrams.) Certainly they seem to be computing something rather non-trivial; for instance the Littlewood-Richardson numbers that are computed by additive honeycombs have even been proposed to play a role in lower bounds in complexity theory, and specifically the P \neq NP problem!

[Thanks to Allen Knutson for suggestions and encouragement.]