This problem lies in the highly interconnected interface between algebraic combinatorics (esp. the combinatorics of Young tableaux and related objects, including honeycombs and puzzles), algebraic geometry (particularly classical and quantum intersection theory and geometric invariant theory), linear algebra (additive and multiplicative, real and tropical), and the representation theory (classical, quantum, crystal, etc.) of classical groups. (Another open problem in this subject is to find a succinct and descriptive name for the field.) I myself haven’t actively worked in this area for several years, but I still find it a fascinating and beautiful subject. (With respect to the dichotomy between structure and randomness, this subject lies deep within the “structure” end of the spectrum.)

As mentioned above, the problems in this area can be approached from a variety of quite diverse perspectives, but here I will focus on the linear algebra perspective, which is perhaps the most accessible. About nine years ago, Allen Knutson and I introduced a combinatorial gadget, called a *honeycomb*, which among other things controlled the relationship between the eigenvalues of two arbitrary Hermitian matrices A, B, and the eigenvalues of their sum A+B; this was not the first such gadget that achieved this purpose, but it was a particularly convenient one for studying this problem; in particular, it was used to resolve two conjectures in the subject, the *saturation conjecture* and the *Horn conjecture*. (These conjectures have since been proven by a variety of other methods.) There is a natural multiplicative version of these problems, which now relates the eigenvalues of two arbitrary *unitary* matrices U, V and the eigenvalues of their product UV; this led to the "quantum saturation" and "quantum Horn" conjectures, which were proven a couple of years ago. However, the quantum analogue of a "honeycomb" remains a mystery; this is the main topic of the current post.

Let us first briefly review the additive situation. Consider three Hermitian matrices A,B,C such that A+B=C. Being Hermitian, the matrices A,B,C are all diagonalisable with real eigenvalues. Accordingly, let us arrange the eigenvalues of A (with multiplicity) in decreasing order as

$\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n,$

the eigenvalues of B as

$\mu_1 \geq \mu_2 \geq \ldots \geq \mu_n,$

and the eigenvalues of C as

$\nu_1 \geq \nu_2 \geq \ldots \geq \nu_n.$

Thus for instance $\mu_2$ is the second largest eigenvalue of B, etc.

An old question (essentially due to Sylvester, though this particular formulation is due to Weyl) was to determine the complete set of relationships between the $\lambda_i$, the $\mu_j$, and the $\nu_k$. There are a number of reasonably obvious equalities and inequalities that one can obtain here. For instance, from the obvious identity tr(A)+tr(B)=tr(C) we conclude the *trace identity*

$\lambda_1 + \ldots + \lambda_n + \mu_1 + \ldots + \mu_n = \nu_1 + \ldots + \nu_n,$

while from the minimax characterisation of the largest eigenvalue,

$\lambda_1 = \sup_{|v|=1} \langle Av, v \rangle$, etc.

one easily obtains the triangle inequality

$\nu_1 \leq \lambda_1 + \mu_1.$

And so on and so forth. It turns out that the set of all possible $(\lambda, \mu, \nu)$ forms a convex cone, determined by a finite number of linear inequalities; this can be derived from symplectic geometry considerations (the Atiyah/Guillemin-Sternberg convexity theorem, or more precisely a refinement due to Kirwan). A complete (in fact, overcomplete) list of such inequalities, generated by a beautifully recursive formula, was conjectured by Alfred Horn (no relation to Roger Horn, who also works in linear algebra). The Horn conjecture was finally settled in two papers: one by Klyachko, which used geometric invariant theory to reduce the problem to a simpler problem known as the *saturation conjecture*, and one by Allen Knutson and myself, which established the saturation conjecture by a combinatorial argument using honeycombs.
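The trace identity and the triangle inequality above are easy to verify numerically. Here is a minimal sketch in Python (NumPy), using a random Hermitian pair; the helper name `random_hermitian` is my own:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_hermitian(n):
    """A random n x n Hermitian matrix with complex Gaussian entries."""
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

A = random_hermitian(n)
B = random_hermitian(n)
C = A + B

# Eigenvalues arranged in decreasing order, as in the text.
lam = np.sort(np.linalg.eigvalsh(A))[::-1]
mu = np.sort(np.linalg.eigvalsh(B))[::-1]
nu = np.sort(np.linalg.eigvalsh(C))[::-1]

# Trace identity: sum of the lambda_i plus sum of the mu_j equals sum of the nu_k.
assert np.isclose(lam.sum() + mu.sum(), nu.sum())

# Triangle inequality: nu_1 <= lambda_1 + mu_1 (with a small numerical tolerance).
assert nu[0] <= lam[0] + mu[0] + 1e-12
```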

Here is what a honeycomb looks like:

Note that the lengths of the edges in the honeycomb are variable, but there are only three possible orientations, 120 degree angles apart. This is a honeycomb of order 4, with four half-infinite edges going in a NW direction, four in a NE direction, and four in the S direction; the coordinates of these edges are the *boundary data* of this honeycomb. [For more precise definitions, see our survey paper or our original article.] One can also play with honeycombs using our honeycomb applet.

Allen and I observed that honeycombs "solve" Weyl's problem in the following sense: a collection of putative eigenvalues $\lambda_1 \geq \ldots \geq \lambda_n$, $\mu_1 \geq \ldots \geq \mu_n$, $\nu_1 \geq \ldots \geq \nu_n$ can arise as the eigenvalues of a triplet A+B=C of matrices if and only if they arise as the boundary values of at least one honeycomb. In fact, there is a more quantitative relationship: the "volume" of the set of all A+B=C with the desired eigenvalues is equal (up to some explicit normalising factors) to the "volume" of all the honeycombs with the specified boundary data.

There is also a discrete or quantized version of the above connection. One can define an *integral* honeycomb to be one where all lengths and coordinates are integers (in particular, the boundary data will also be integers). It turns out that an integral honeycomb with boundary values $(\lambda, \mu, \nu)$ exists if and only if the irreducible representation of SU(n) with highest weight $\nu$ appears at least once in the tensor product of the irreducible representations with highest weights $\lambda$ and $\mu$, and furthermore the number of such integral honeycombs counts the multiplicity of this representation. This multiplicity is in fact a *Littlewood-Richardson coefficient*, and appears in a number of other contexts, such as Schubert calculus (intersection numbers for Schubert classes); see this survey of Fulton for details. The precise relationship between the discrete and continuous formulations of this problem is in fact the key to the Horn conjecture (and the closely related saturation conjecture), but I will not discuss this in detail here (it is covered in the above references). Honeycombs can also be linked to several other combinatorial gadgets, such as puzzles, Berenstein-Zelevinsky patterns, and Young tableaux: see this paper for several of these connections (and this paper for some further recent developments).
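Since the multiplicity here is a Littlewood-Richardson coefficient, one can experiment with small cases directly. The following is my own naive backtracking implementation of the classical Littlewood-Richardson rule (count semistandard skew tableaux of shape ν/λ with content μ whose reverse reading word is a ballot/lattice word), not code from the honeycomb papers; it is only suitable for small partitions:

```python
def lr_coefficient(lam, mu, nu):
    """c^nu_{lam,mu}: number of LR skew tableaux of shape nu/lam and content mu."""
    lam, mu, nu = list(lam), list(mu), list(nu)
    if sum(lam) + sum(mu) != sum(nu):
        return 0
    lam = lam + [0] * max(0, len(nu) - len(lam))
    if len(lam) > len(nu) or any(l > m for l, m in zip(lam, nu)):
        return 0

    # Cells of the skew shape in reading order: top to bottom, right to left.
    cells = [(r, c) for r in range(len(nu))
             for c in range(nu[r] - 1, lam[r] - 1, -1)]
    T = [[0] * m for m in nu]          # 0 means "not yet filled"
    counts = [0] * (len(mu) + 1)       # counts[v] = how many v's placed so far

    def backtrack(i):
        if i == len(cells):
            return 1
        r, c = cells[i]
        total = 0
        for v in range(1, len(mu) + 1):
            if counts[v] >= mu[v - 1]:
                continue               # content bound exceeded
            if v > 1 and counts[v] >= counts[v - 1]:
                continue               # ballot (lattice word) condition
            if c + 1 < nu[r] and v > T[r][c + 1]:
                continue               # rows weakly increase left to right
            if r > 0 and lam[r - 1] <= c < nu[r - 1] and v <= T[r - 1][c]:
                continue               # columns strictly increase downward
            T[r][c] = v
            counts[v] += 1
            total += backtrack(i + 1)
            counts[v] -= 1
            T[r][c] = 0
        return total

    return backtrack(0)
```

For instance, `lr_coefficient((2, 1), (2, 1), (3, 2, 1))` returns 2, matching the two Littlewood-Richardson tableaux (equivalently, the two integral honeycombs) with this boundary data.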

But let us leave this additive setting for now and turn to the analogous multiplicative problem. Here, instead of three Hermitian matrices A+B=C, we now have three *unitary* matrices UV=W. It is convenient to normalise these matrices to have determinant 1, in which case we can uniquely express the eigenvalues of U as

$e^{2\pi i \lambda_1}, \ldots, e^{2\pi i \lambda_n}$

where $\lambda_1 + \ldots + \lambda_n = 0$ and

$\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n \geq \lambda_1 - 1,$

and similarly express the eigenvalues of V as

$e^{2\pi i \mu_1}, \ldots, e^{2\pi i \mu_n}$

where $\mu_1 + \ldots + \mu_n = 0$ and $\mu_1 \geq \ldots \geq \mu_n \geq \mu_1 - 1$,

and the eigenvalues of W as

$e^{2\pi i \nu_1}, \ldots, e^{2\pi i \nu_n}$

where $\nu_1 + \ldots + \nu_n = 0$ and $\nu_1 \geq \ldots \geq \nu_n \geq \nu_1 - 1$.

We can now ask the multiplicative version of Weyl's problem, namely to characterise all the relationships that exist between the $\lambda_i$, the $\mu_j$, and the $\nu_k$. For instance, it is possible to view $2\pi\lambda_1$ as the "maximum anti-clockwise angle" that U can rotate a vector, which can eventually lead to the inequality

$\nu_1 \leq \lambda_1 + \mu_1.$

One can continue creating inequalities of this type, and there will be a strong resemblance between those inequalities and those in the additive problem. This is not so surprising, since the additive problem emerges as a limiting case of the multiplicative one (if $U = \exp(2\pi i \epsilon A)$, $V = \exp(2\pi i \epsilon B)$, and UV=W, then $W = \exp(2\pi i \epsilon (A+B) + O(\epsilon^2))$ when $\epsilon$ is small, by the Baker-Campbell-Hausdorff formula). What is more surprising is that when the $\lambda_i, \mu_j, \nu_k$ are sufficiently small, the inequalities which describe the multiplicative problem are *exactly* those that describe the additive problem! In fact, it is known that the space of all possible $(\lambda,\mu,\nu)$ for the multiplicative problem is a convex polytope contained within the convex cone for the additive problem, and in fact a quantum version of the Horn conjecture (i.e. an explicit recursive description of the faces of this polytope) was proven by Belkale (building upon earlier work by Agnihotri-Woodward and Belkale). For instance, while for the additive problem there is the constraint

$\nu_{i+j-1} \leq \lambda_i + \mu_j$

whenever $i+j-1 \leq n$ (the Weyl inequalities), in the multiplicative problem one also has additional "wrap-around" constraints; in the simplest case n=2, for instance, one has

$\lambda_1 + \mu_1 + \nu_1 \leq 1.$

As with the additive problem, the complete set of all inequalities of this form turns out to be rather messy to describe and I will not do so here.
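The n=2 case is small enough to probe by hand: for SU(2), the triple $(\lambda_1, \mu_1, \nu_1) \in [0,1/2]^3$ is realizable precisely when $|\lambda_1 - \mu_1| \leq \nu_1 \leq \min(\lambda_1+\mu_1,\, 1-\lambda_1-\mu_1)$. Here is a quick numerical check of these constraints on random SU(2) pairs (my own sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)

def random_su2():
    """A roughly Haar-random SU(2) matrix: QR of a complex Gaussian, det normalised."""
    Z = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    Q, _ = np.linalg.qr(Z)
    return Q / np.linalg.det(Q) ** 0.5

def top_angle(U):
    """lambda_1 in [0, 1/2]: the eigenvalues of U in SU(2) are exp(+-2 pi i lambda_1)."""
    return np.abs(np.angle(np.linalg.eigvals(U))).max() / (2 * np.pi)

for _ in range(500):
    U, V = random_su2(), random_su2()
    l1, m1, n1 = top_angle(U), top_angle(V), top_angle(U @ V)
    assert n1 <= l1 + m1 + 1e-9          # additive-type triangle inequality
    assert n1 >= abs(l1 - m1) - 1e-9     # reverse triangle inequality
    assert l1 + m1 + n1 <= 1 + 1e-9      # the extra multiplicative ("wrap-around") constraint
```

The last assertion is the constraint with no additive counterpart: in the additive cone $\nu_1$ can be as large as $\lambda_1 + \mu_1$, but on the unitary group the angles live on a circle and the sum of the three top angles cannot exceed one full turn.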

Just as the additive Weyl problem turned out to be linked to Schubert calculus (the intersection numbers of Schubert classes), the multiplicative problem turned out to be linked to quantum Schubert calculus (the Gromov-Witten numbers of the same classes), and making this link precise turned out to be the key to the proof of the quantum Horn conjecture.

This solves the "qualitative" version of the multiplicative Weyl problem, namely whether there exists any triple UV=W with the specified eigenvalues. However, one can still ask "quantitative" versions, namely to compute the volume of the space of all such triples. There is also the discretised quantitative version, which concerns either the Gromov-Witten numbers for Schubert classes, or else the multiplicities of fusion products in the Verlinde algebra of SU(n); these are rather technical and we refer to Agnihotri-Woodward for details. There should exist some concept of "quantum honeycomb" which computes all of these numbers, in much the same way that the usual honeycombs compute the volume of the space of solutions to A+B=C with specified eigenvalues, intersection numbers for Schubert classes, or multiplicities of tensor products of SU(n) irreducible representations. Vaguely speaking it seems that one wants to construct an analogue of the planar honeycomb which lives instead on something like a two-dimensional torus, but it is not entirely clear (even when n=2) what the precise definition of this object should be.

It may seem like one needs to learn a fearsome amount of machinery to attack this problem, but actually I think one can at least *guess* what the quantum honeycomb should be just by experimentation with small cases n=1,2,3 and by using various sanity checks (this is how Allen and I discovered the additive honeycombs). For instance, the equation UV=W can be rewritten as $VW^{-1} = U^{-1}$, giving a cyclic symmetry $(\lambda, \mu, \nu) \mapsto (\mu, -\nu^*, -\lambda^*)$ (where $\lambda^* := (\lambda_n, \ldots, \lambda_1)$ denotes the reversal of $\lambda$), and so the quantum honeycomb should enjoy a similar symmetry. There is also the translation symmetry $(U, V, W) \mapsto (e^{2\pi i j/n} U, e^{2\pi i k/n} V, e^{2\pi i (j+k)/n} W)$ whenever $j, k$ are integers, so quantum honeycombs should be translation invariant. When the honeycomb is small (all vertices close to the origin) there should be a bijective correspondence between the quantum honeycomb and the regular honeycomb. The constraints between all the boundary values are already known due to the resolution of the quantum Horn conjecture. There are some other extreme cases which are also understood quite well, for instance when one of the matrices is very close to the identity but the other two are not.
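These sanity checks are easy to probe numerically. The sketch below (my own, in Python/NumPy) verifies on random SU(3) examples that the normalised angle data transforms as expected under scalar rotation (translation symmetry) and under inversion, which is the ingredient needed for the cyclic rewriting of UV=W:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

def normalize(th):
    """Reduce angle representatives mod 1 to the decreasing list summing to 0."""
    th = np.sort(np.mod(th, 1.0))[::-1]
    th[:int(round(th.sum()))] -= 1.0   # shift the largest few down by one full turn
    return np.sort(th)[::-1]

def angles(U):
    """Normalised eigenvalue angles of a determinant-one unitary U."""
    return normalize(np.angle(np.linalg.eigvals(U)) / (2 * np.pi))

def random_su(n):
    """A roughly Haar-random SU(n) matrix."""
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, _ = np.linalg.qr(Z)
    return Q / np.linalg.det(Q) ** (1 / n)

omega = np.exp(2j * np.pi / n)
for _ in range(50):
    U, V = random_su(n), random_su(n)
    W = U @ V
    lam, mu, nu = angles(U), angles(V), angles(W)

    # translation symmetry: multiplying U by a scalar n-th root of unity
    # translates its angle data by 1/n (after renormalising)
    assert np.allclose(angles(omega * U), normalize(lam + 1.0 / n))

    # inversion reverses and negates the angle data, as needed for the
    # cyclic rewriting UV = W  <=>  V W^{-1} = U^{-1}
    assert np.allclose(angles(np.linalg.inv(W)), -nu[::-1])
    assert np.allclose(angles(np.linalg.inv(U)), -lam[::-1])
```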

My guess is that once a reasonable candidate for a quantum honeycomb is found which passes all the obvious sanity checks, actually verifying that it computes everything that it should will be a relatively routine matter (we have many different combinatorial ways of establishing things like this). This will give a combinatorial tool for computing a number of interesting quantities, and will probably shed some light also as to why these honeycombs appear in the subject in the first place. (It seems to be somehow related to the Dynkin diagram for the underlying group SU(n); it has proven a little tricky to try to find analogues of these objects for the other Dynkin diagrams.) Certainly they seem to be computing something rather non-trivial; for instance the Littlewood-Richardson numbers that are computed by additive honeycombs have even been proposed to play a role in lower bounds in complexity theory, and specifically the $P \neq NP$ problem!

[Thanks to Allen Knutson for suggestions and encouragement.]

## 27 comments


19 April, 2007 at 2:17 pm

Kea: "Another open problem in this subject is to find a succinct and descriptive name for the field." How about M theory?

22 April, 2007 at 7:17 pm

Doug: I do not have access to this paper, but look at a 1986 abstract for mathematical modeling of the honeycomb with Fourier Transformation:

Journal of Mathematical Biology

‘Mathematical model of honeycomb construction’ MR Bell et al

http://www.springerlink.com/content/n3rrj73658488631/

Although this is certainly at a scale different from the quantum, comparison of the Bell paper with your effort may be warranted?

23 April, 2007 at 5:29 pm

Stephen: I wonder if ideas related to these Honeycombs could help analyze adiabatic quantum algorithms. Eddie Farhi and collaborators proposed an adiabatic quantum algorithm for solving NP-hard problems. The algorithm provably works in the limit where the runtime is allowed to go to infinity, but it is not known what the minimal runtime is. This hinges on the gap between the smallest and second smallest eigenvalue of the Hamiltonian, which is the sum of two 2^n by 2^n Hermitian matrices, each of which has a well understood spectrum. (The runtime is upper bounded by something like O(1/gap^3).) Figuring out how this eigenvalue gap scales for large n is a longstanding* open problem. Although some feel it is unlikely that the runtime could be polynomial in the worst case (this would imply that NP is contained in BQP), it seems quite plausible that runtime is polynomial for random instances, and this would still be a very exciting result. Have you ever thought about this?

*by quantum computation standards!

24 April, 2007 at 12:37 am

Gil Kalai: Are honeycombs related to amoebas (or non-archimedean amoebas) considered by Kontsevich, Mikhalkin, Shustin and various others? (This is a "tropical" area, I suppose.) At least the pictures look rather similar.

24 April, 2007 at 7:28 am

Terence Tao: Dear Stephen, I know some people are looking at the limiting distribution of "random" honeycombs in the large n limit, which should have connections both with random tilings and to free convolution, but I don't believe there are published results on this problem yet (though it is an interesting direction to look into). There is also the question of what "random" means; typically for this type of analysis one assumes that the probability distributions of the input matrices are U(n)-invariant, which may be too strong for the quantum computing application.

Dear Gil, there is indeed a tropical amoeba interpretation of the additive honeycomb, due to David Speyer. Roughly speaking, one replaces the real line R by the ring of Puiseux series (to get the tropicalisation), and starting from a triple of matrices A+B+C=0 the honeycomb can be recovered by looking at the curve det(xA+yB+zC)=0.

6 May, 2007 at 5:26 pm

Chris Woodward: Hi Terry,

I once had this crazy dream, that the quantum version of a honeycomb should be a honeycomb on a thrice-punctured sphere with hyperbolic metric. That is, one has these geodesics coming in from infinity, as in the pictures in Thurston's book, and one can talk about honeycombs where each segment is a segment of a geodesic of this type. Given data at infinity (namely, distinct finite subsets of each circle at infinity) one can try to count honeycombs.

On the good side, there is an obvious Euclidean limit which gives the usual notion of honeycomb. There would also be a strategy for proving associativity (degenerate a quadruply punctured sphere in two ways). On the bad side, it wasn't clear to me whether one has the expected cyclic symmetry. Or even, how does one choose the base points on the circles at infinity?

I would love it if someone would explain why my crazy dream couldn't work (or could it?)

Chris

6 May, 2007 at 8:58 pm

Terence Tao: Hi Chris! Thanks for posting.

It’s certainly a very visually appealing idea, and fits of course with the role of the thrice-punctured sphere in the product of unitary matrices problem. One could start with investigating the n=1 case: the space of all “hyperbolic 1-honeycombs” cuts out a two-dimensional subset of the three circles at infinity, which one can view as a group-type operation (mapping the product of two circles to a third); it should then be easy to check whether this operation is isomorphic to the expected one (basically the addition operation on R/Z) by checking that it behaves in a commutative and associative manner.

The other difficulty I see is that there is no obvious discretisation which would count fusion product multiplicities.

It may be that we have to somehow abandon the idea of a honeycomb as a 1-D subset of a 2-D space. Take for instance the equivalent “hive” model of the additive honeycomb, which is a piecewise linear concave function with prescribed boundary values. The piecewise linearity is what is causing the honeycomb (which is sort of a dual object to the hive in some ways) to behave one-dimensionally, but one could imagine some sort of “hyperbolic hive” which had the concavity but not the piecewise linearity, and so the corresponding honeycomb becomes “smeared out” somehow. I don’t exactly see what such a hyperbolic hive could be, though; I vaguely want to take some sort of section of a bundle of thrice-punctured spheres over a triangle, or something strange like that.

But this is just wild speculation, not supported by much hard data :)

7 May, 2007 at 7:33 am

Terence Tao: p.s. Vaughan Jones pointed me towards the thesis of his student Scott Morrison, in which he describes the representation theory of the quantum group in terms of spherical categories. One gets a number of topologically planar diagrams involved, related to Kuperberg's webs and Jones' planar algebras; they do of course look vaguely honeycomb-ish, but even in the classical additive setting we were never able to figure out what was the connection between these topologically planar objects and the geometrically planar honeycombs. The idea (implicit in your post) of using the uniformisation theorem to connect topology to geometry is very intriguing, though.

Vaughan ventured the opinion that planarity (or more specifically, a non-crossing condition) should play a much more prominent role in the quantum (multiplicative) setting than in the classical (additive setting), though I don’t know exactly what to make of this “hint”. Certainly the “overlay” operation for additive honeycombs looks like it should extend to multiplicative honeycombs, for instance.

8 May, 2007 at 1:29 am

Sacha: Terry, in using the term "quantum honeycomb", are you saying that "it" (whatever it is) would be related to quantum algebraic structures (eg quantum algebras)? I'm unfamiliar with the algebraic structures you mention in your post.

8 May, 2007 at 8:59 am

Terence Tao: Dear Sacha,

Quantum (or “multiplicative”) honeycombs should relate to two “quantum” objects: firstly, quantum Schubert calculus, i.e. the structure of the quantum cohomology of the Grassmannian, which basically is counting how many holomorphic sections one can find in bundles of Grassmannians over punctured spheres (these counts are known as Gromov-Witten invariants). The other is the fusion product multiplicities of the Verlinde algebra of SU(n). I believe the latter is related to quantum groups, though I do not know the precise relationship. In the classical (or “additive”) honeycomb setting, the analogous objects are classical Schubert calculus (the intersection numbers of Schubert varieties), and tensor product multiplicities of irreducible representations of SU(n) or U(n).

Incidentally, just to make matters a bit more confusing, there is an unrelated “classical/quantum” distinction in the subject, between the “continuous” setting (which relates to sums of Hermitian matrices or products of unitary matrices) and the “discrete” setting (which is where the Schubert calculus and representation theory is). The continuous/discrete distinction is analogous to the distinction between classical (Hamiltonian) mechanics and quantum (Schrodinger/Heisenberg) mechanics. (I’m actually not entirely clear why quantum cohomology and quantum groups also have the adjective “quantum” attached, but that’s the standard terminology.)

9 May, 2007 at 4:56 am

Sacha: Thanks Terry – I'll have to read up about these topics. I know a few things about quantum groups (I did my PhD on one family of quantum supergroups) – I think that, essentially, the use of "quantum" as an adjective is by analogy with quantum mechanics in relation to classical mechanics.

In one version, quantum groups can be seen as one-parameter deformations of the universal enveloping algebras of Lie algebras or Lie superalgebras. The name seems a bit funny to me, but it seems to have stuck now. I also call them quantum algebras, as they're not groups (unless you are actually talking about the groups).

16 May, 2007 at 6:19 am

Frederick: Professor Tao,

I want to ask a question. Given a sequence B(n), for integer n from -infinity to +infinity, with B(n)=B(-n), define a matrix C(p,q)=B(q-p)·q² (q squared). Under what conditions on B(n) are the eigenvalues of the matrix C the eigenvalues of a random matrix? The question comes from many fields of physics, such as the Helmholtz equation in a two-dimensional billiard.

This is a very difficult question for me (although it seems easy). I thought hard about it, but could not find any answer, or even any hints on how to attack it. I could not even find any references to look at. I don't even know whether C is some kind of special matrix on which mathematicians have already done a lot of work.

Thanks!

Frederick

17 May, 2007 at 9:41 am

Terence Tao: Dear Frederick,

The matrix is not self-adjoint or normal, and so it may be that eigenvalues are not the most relevant feature of this matrix (there is no spectral theorem, for instance). After a Fourier transform, the matrix is equivalent to the operator $f \mapsto -\frac{1}{4\pi^2} b(x) f''(x)$ acting on periodic functions, where b is the Fourier series with coefficients B. If instead you consider the self-adjoint version (this corresponds to replacing q*q by q*p in your matrix), then you can use tools from microlocal analysis, such as Weyl's law, to obtain some control on the eigenvalues, though getting fine information such as eigenvalue spacing distributions is still well beyond current technology.
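For concreteness, the non-normality is easy to see numerically on a finite truncation; in the sketch below the choice $B(m) = 1/(1+m^2)$ is just a hypothetical example, not taken from the original question:

```python
import numpy as np

# a symmetric sequence B(m) = B(-m), truncated to |p|, |q| <= N
N = 6
def B(m):
    return 1.0 / (1 + m * m)   # hypothetical choice of B

q = np.arange(-N, N + 1)
C = np.array([[B(qq - pp) * qq * qq for qq in q] for pp in q])

# C is real, so its adjoint is C.T; C is neither self-adjoint nor normal
assert not np.allclose(C, C.T)
assert not np.allclose(C @ C.T, C.T @ C)

# the "self-adjoint version" with q*q replaced by q*p is indeed symmetric
D = np.array([[B(qq - pp) * qq * pp for qq in q] for pp in q])
assert np.allclose(D, D.T)
```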

19 May, 2007 at 2:02 am

Frederick: Dear Professor Tao,

If b(x) is periodic, then lots of problems are solved by Bloch's theorem. All the eigenfunctions are extended (no Anderson localization occurs). Actually I was thinking of the eigenvalues and wavefunctions of a billiard (scars). After making a conformal transformation, the Laplacian acquires a factor/coefficient, which I thought would be periodic. Actually it's not periodic. So things are more complex than I first thought.

Lots of physical problems are solved either numerically, for instance by the finite difference method, or by the spectral method. But the spectral method always brings big infinite matrices. It is a pity there is no very effective way to handle these kinds of infinite matrices.

Thanks very much!

Frederick

20 December, 2007 at 6:48 pm

trisetyarso: Dear Prof. Tao,

It is very helpful for me to have found this explanation.

I just start my research in this area.

I want to see how honeycombs act in measurement-based quantum computation (adders, error correction, etc.).

Best regards,

Agung Trisetyarso

http://www.agungtrisetyarso.com

20 March, 2012 at 1:44 pm

.: Isn't Littlewood-Richardson number computation in P, while Kronecker number computation is NP-hard (or harder)? How can an amateur try to learn about these two numbers?

20 March, 2012 at 1:45 pm

.: I meant an amateur with a CS theory background trying to find approximations for these numbers!

9 December, 2019 at 4:55 pm

Polytope: Terry, could you provide an example showing an explicit construction of the extremal rays of the additive and multiplicative polytopes mentioned above (say, for n=2)? Or a reference for the same?

9 December, 2019 at 7:23 pm

Terence Tao: For $n=2$ (and I think also for $n=3$, though not for all $n$) all the extremal rays come from the cases when the matrices in question are simultaneously diagonalisable, and this should be computable by direct calculation (especially if one first normalises the matrices to have trace zero in the additive case, or determinant 1 in the multiplicative case).

29 February, 2020 at 8:03 am

Intuition for torus: Terry, what's the intuition behind why the quantum honeycomb is expected to live on a 2D torus?

20 November, 2020 at 8:34 am

Q: "Littlewood-Richardson numbers that are computed by additive honeycombs have even been proposed to play a role in lower bounds in complexity theory, and specifically the $P \neq NP$ problem". What is the exact role?

21 November, 2020 at 1:27 pm

Terence Tao: This is discussed in https://arxiv.org/abs/cs/0501076

7 February, 2021 at 2:36 am

Arulsaravana Jeyaraj: Professor Terence Tao:

GCT and algebraic complexity work at the non-Boolean level.

Consider the problems:

1. Given an n x n 0/1 matrix M over integers what is the ith bit of determinant?

2. Given an n x n 0/1 matrix M over integers what is the ith bit of permanent?

I understand permanent over integers and determinant over integers have different invariants (permanent is invariant under permutations only) and I think that is what GCT is trying to capitalize on.

If the intention were to separate their complexities at the boolean level would the invariants be much different?

In other words if GCT is studying complexity in Boolean world would the objects of interest be any different? How would ‘Boolean GCT’ explain the following phenomenon for Least Significant Bit and Most Significant Bit? So perhaps there are things which should be considered in the program while dealing with Boolean complexity?

If i is the Least Significant Bit then by mechanics of being in F_2 and by Cook-Levin the boolean complexity is the same (the boolean circuit can be found in P). If i is the Most Significant Bit then by JSV (https://dl.acm.org/doi/10.1145/1008731.1008738) algorithm and randomized algorithms being in P/poly there is a small boolean circuit which can be found in P by Cook-Levin under proper derandomization arguments.

Thank you.

7 February, 2021 at 3:12 am

Arulsaravana Jeyaraj: Since the MSB position can be any value between 1 and \lceil\log_2 n!\rceil, it has to be explained why one can extract the ith bit position only when it is the MSB.

29 May, 2021 at 3:20 pm

Arulsaravana Jeyaraj: The JSV algorithm http://www.cc.gatech.edu/~vigoda/Permanent.pdf alone fails to guarantee a BPTIME algorithm for the MSB of the permanent. However, if the following is correct, it might provide a BPTIME algorithm for the MSB of the 0/1 permanent under the randomized self-reducibility property of PP https://lance.fortnow.com/papers/files/rsr.pdf.

Fix reals $a,b\in(1,2)$ satisfying $1<b<a<ab<2$.

Is the fraction of $0/1$ matrices of dimensions $n\times n$ whose permanent lies in $[b2^m,a2^m]$ for some $m\in\{0,1,2,\dots,\lfloor\log_2n!\rfloor\}$ at least $\frac1{p(n)}$ for a fixed polynomial $p(n)$ whose coefficients and degree are independent of $n$?

It can validate statements similar to "the fraction of permanents in (2^m, 2^(m+1)) having values in (b 2^m, a2^m) where b=1.01 and a=1.9 is Omega(2^m/poly(m))".

In a sense is permanent of 0/1 matrices distributed approximately uniformly?

30 May, 2021 at 1:11 am

Arulsaravana Jeyaraj: The consequence of a BPTIME algorithm for the MSB would be that any explanation separating the Permanent from FP/poly would need to explain why the Permanent does not have PTIME subtractive closure and nothing more, while morally the LSB and MSB are essentially equivalent in complexity (a pretty tricky and possibly impossible situation to overcome *within* the current consensus of things).

1 June, 2021 at 11:55 am

Arulsaravana Jeyaraj: I think these simple tricks will not work.