Avi Wigderson's final talk in his Distinguished Lecture Series on "Computational complexity" was entitled "Arithmetic computation": the complexity theory of arithmetic circuits rather than boolean circuits.
Arithmetic circuits manipulate inputs and outputs in a given field, such as the complex numbers (or more generally, a field of characteristic 0, to avoid having to deal with arithmetic "shortcuts" coming from identities such as $x^p = x$ in the finite field $F_p$). There are many models for such circuits, but one simple one is to create circuits by composing addition gates (which take two inputs $x$ and $y$ and return $x+y$ as an output), multiplication gates (which take two inputs $x, y$ and return $xy$ as an output), and scalar multiplication gates (which take one input $x$ and return $cx$ as output, where $c$ is a fixed constant for each such gate). Despite the name, circuits are not allowed to contain loops; gates can only use the outputs provided by earlier gates, although one is certainly allowed to reuse a given output for multiple later gates.
For simplicity let us view the scalar multiplication gates as being "free" and only count the addition and multiplication gates when counting complexity. Thus, for instance, to perform an inner product computation

$\displaystyle (x_1,\ldots,x_n,y_1,\ldots,y_n) \mapsto x_1 y_1 + \ldots + x_n y_n$
requires $2n-1$ gates ($n$ to multiply the pairs $x_i$ and $y_i$ together, and then $n-1$ to add together the resulting products). It is not difficult to show that this $2n-1$ is sharp. More generally, given any polynomial operation $p$ with $n$ inputs $x_1,\ldots,x_n$ and one output, we define the circuit complexity $S(p)$ of $p$ to be the least number of gates needed to build an arithmetic circuit that can generate $p$ from $x_1,\ldots,x_n$; it is clear that $p$ has to be a polynomial in order to be generated by such circuits. We can also define the complexity $S(p_1,\ldots,p_k)$ of multiple polynomials $p_1,\ldots,p_k$ of some variables $x_1,\ldots,x_n$ by considering the least number of gates required to build a circuit that can generate all the outputs $p_1,\ldots,p_k$ from the given inputs. A good example here is matrix multiplication of two $n \times n$ matrices $A, B$, in which there are $2n^2$ inputs (the coefficients of $A$ and $B$) and $n^2$ outputs (the coefficients of $AB$). The obvious matrix multiplication algorithm requires a circuit of complexity $O(n^3)$, thus $S(AB) = O(n^3)$. But this is not best possible; the fast matrix multiplication methods of Strassen and later authors use clever recursive techniques to reduce the exponent, with the current record being $S(AB) = O(n^{2.376})$. Conversely, it is clear that $S(AB)$ should be at least of the order of $n^2$, since each of the $2n^2$ inputs must be connected to at least one gate. But the true order of $S(AB)$ is not known.
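To make the gate accounting concrete, here is a minimal Python sketch of this circuit model (the representation and names are mine, not from the lecture): wires are numbered, gates reference earlier wires, scalar multiplications are ignored, and the inner product circuit uses exactly $2n-1$ counted gates.

```python
# A toy arithmetic-circuit model: only '+' and '*' gates are counted.
class Circuit:
    def __init__(self, num_inputs):
        self.num_inputs = num_inputs
        self.gates = []  # list of (op, i, j) referencing earlier wire indices

    def _gate(self, op, i, j):
        self.gates.append((op, i, j))
        return self.num_inputs + len(self.gates) - 1  # index of the new wire

    def add(self, i, j):
        return self._gate('+', i, j)

    def mul(self, i, j):
        return self._gate('*', i, j)

    def size(self):
        return len(self.gates)  # the circuit complexity in this model

    def eval(self, inputs, out):
        wires = list(inputs)
        for op, i, j in self.gates:
            wires.append(wires[i] + wires[j] if op == '+' else wires[i] * wires[j])
        return wires[out]

def inner_product_circuit(n):
    """x_1 y_1 + ... + x_n y_n with n multiplications and n-1 additions.
    Wires 0..n-1 hold the x_i, wires n..2n-1 hold the y_i."""
    c = Circuit(2 * n)
    products = [c.mul(i, n + i) for i in range(n)]
    acc = products[0]
    for p in products[1:]:
        acc = c.add(acc, p)
    return c, acc

c, out = inner_product_circuit(4)
assert c.size() == 2 * 4 - 1  # the sharp 2n-1 gate count
assert c.eval([1, 2, 3, 4, 5, 6, 7, 8], out) == 1*5 + 2*6 + 3*7 + 4*8
```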
More generally, the problem of computing $S(p)$ or $S(p_1,\ldots,p_k)$ for various interesting polynomials is an important one, to which there are still only very partial answers. Even the complexity of a single polynomial $p(x)$ of a single variable $x$ is not easy. If $p$ has degree $d$, then clearly $S(p) = O(d)$ (e.g. by Horner's rule), and counting arguments can show that this is sharp for "generic" $p$, but for many $p$ one can do a lot better; for instance it is not hard to see that $S(x^d) = O(\log d)$ by repeatedly squaring $x$ and then combining some of the iterated squares $x, x^2, x^4, x^8, \ldots$ to form $x^d$. On the other hand, it is clear that we have a lower bound $S(x^d) \gg \log d$, since a circuit of $m$ gates can only generate a polynomial of degree at most $2^m$. (Here we see the usefulness of the degree concept in arithmetic circuit complexity; there does not seem to be an analogously useful concept in boolean circuit complexity, thus making the former subject easier than the latter.)
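The repeated-squaring upper bound can be made concrete with a small gate count (a sketch; the bookkeeping conventions are mine):

```python
def pow_gate_count(d):
    """Number of multiplication gates used by square-and-multiply to
    compute x^d: at most one squaring plus one extra multiplication per
    bit of d, hence O(log d) gates in total."""
    gates = 0
    have_acc = False  # whether the accumulator holds a partial result yet
    for bit in reversed(range(d.bit_length())):
        if have_acc:
            gates += 1           # square the accumulator
        if (d >> bit) & 1:
            if have_acc:
                gates += 1       # multiply the accumulator by x
            else:
                have_acc = True  # the accumulator starts out as x itself
    return gates

assert pow_gate_count(1) == 0   # x itself needs no gates
assert pow_gate_count(8) == 3   # ((x^2)^2)^2
assert pow_gate_count(1000) <= 2 * (1000).bit_length()  # O(log d)
```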
Avi noted, though, that it was still an open question to find a "natural" polynomial $p$ of degree $d$ for which $S(p)$ grew faster than any power of $\log d$. (One could take a polynomial $p$ whose coefficients are all algebraically independent, but this is "cheating".) In particular, it is not known if

$\displaystyle S( (x+1)(x+2) \cdots (x+d) ) \geq \log^C d \ \ \ \ \ (1)$
for every C. But… if this claim failed, then one can show that factoring n-bit numbers can be done in polynomial time! (The reason is that if (1) failed, then one can compute d! mod N quickly for n-bit integers d and N by using a polynomial size circuit modulo N, which basically allows one to determine whether all prime factors of N are less than any given threshold d, from which one can quickly isolate each individual prime factor.) But this already shows how difficult we expect these problems to be in general.
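Here is the number theory of that parenthetical spelled out in Python, with `math.factorial` standing in for the hypothesised small circuit for $d! \bmod N$ (so this code is of course slow; it only illustrates the reduction):

```python
from math import gcd, factorial

def factorial_mod(d, N):
    # Stand-in for the hypothesised polynomial-size circuit for d! mod N.
    return factorial(d) % N

def smallest_prime_factor(N):
    """Binary search: gcd(d! mod N, N) > 1 exactly when N has a prime
    factor <= d, so O(log N) oracle calls isolate the smallest one."""
    lo, hi = 1, N  # invariant: no prime factor <= lo, some prime factor <= hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if gcd(factorial_mod(mid, N), N) > 1:
            hi = mid
        else:
            lo = mid
    return hi

assert smallest_prime_factor(91) == 7     # 91 = 7 * 13
assert smallest_prime_factor(101) == 101  # prime, so no factor below N itself
```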
Here are some other interesting examples of polynomials whose complexity we would like to understand:
- The determinant polynomial $\hbox{det}$ on $n^2$ variables. It can be shown that $S(\hbox{det}) = O(n^3)$ by Gaussian elimination (admittedly, this procedure uses divisions, but there is a way to rework it so that divisions are avoided – due to Strassen, apparently, though I don't have the reference), and by using fast matrix multiplication methods one can shave this down to $O(n^{2.376})$ as before. Conversely, the trivial lower bound on complexity here is $\gg n^2$.
- The permanent $\hbox{per}$ on $n^2$ variables; Avi referred to the permanent as the determinant's "bad cousin". Here, the best known upper bound for $S(\hbox{per})$ is something like $O(2^n n)$. The trivial lower bound is $\gg n^2$. Closing the exponential gap here is a major challenge, which Avi returned to later in the lecture.
- The symmetric polynomial $S^n_d$ of degree $d$ on $n$ variables (the sum of all products of $d$ distinct variables drawn from $x_1,\ldots,x_n$). By recursively describing the symmetric polynomials of degree up to $d$ in terms of the same polynomials on one fewer variable, one can obtain an upper bound $S(S^n_d) = O(nd)$. The trivial lower bound is $\gg n$.
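For what it is worth, the roughly $2^n n$ upper bound for the permanent mentioned above comes from inclusion-exclusion (Ryser's formula); here is a naive transcription in Python (this version costs $O(2^n n^2)$ operations; getting down to $O(2^n n)$ requires visiting the subsets in Gray-code order):

```python
from itertools import combinations

def permanent(A):
    """Ryser: per(A) = sum over nonempty column subsets S of
    (-1)^(n - |S|) * prod_i sum_{j in S} a_ij."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in A:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - k) * prod
    return total

assert permanent([[1, 2], [3, 4]]) == 1*4 + 2*3
assert permanent([[1] * 4 for _ in range(4)]) == 24  # all-ones matrix: n!
```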
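The recursion behind the $O(nd)$ upper bound for the symmetric polynomials is $e_k(x_1,\ldots,x_i) = e_k(x_1,\ldots,x_{i-1}) + x_i \, e_{k-1}(x_1,\ldots,x_{i-1})$: one addition and one multiplication per table entry. A Python sketch:

```python
def elementary_symmetric(xs, d):
    """Evaluate [e_0, e_1, ..., e_d] at the points xs using O(n*d)
    additions and multiplications, via the one-more-variable recursion."""
    e = [1] + [0] * d
    for x in xs:
        for k in range(d, 0, -1):  # descend so e[k-1] is still the old row
            e[k] = e[k] + x * e[k - 1]
    return e

assert elementary_symmetric([1, 2, 3], 3) == [1, 6, 11, 6]
# e_1 = 1+2+3, e_2 = 1*2 + 1*3 + 2*3, e_3 = 1*2*3
```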
It is not known whether any of the above upper bounds are sharp. But Baur and Strassen have shown some logarithmic improvements on some of the lower bounds. For instance:

$\displaystyle S(\hbox{det}) \gg n^2 \log n \ \ \ \ \ (2)$

$\displaystyle S(x_1^d + \ldots + x_n^d) \gg n \log d \ \ \ \ \ (3)$

$\displaystyle S(x_1^d, \ldots, x_n^d) \gg n \log d \ \ \ \ \ (4)$
(Note that the bounds in (3) and (4) are tight.)
Let's first look at (4). This looks easy: after all, each individual polynomial $x_i^d$ requires $\gg \log d$ gates to compute, and the polynomials $x_1^d, \ldots, x_n^d$ are clearly so "independent" that there should be no advantage in sharing computation, leading to a lower bound of $\gg n \log d$. But this naive argument is not valid. To see this, consider the problem of computing a product $Ax$ of a constant $n \times n$ matrix $A$ (with all entries algebraically independent) and a vector
$x$ of $n$ unknown entries. Because this computation involves $n^2$ independent constants, it must require $\gg n^2$ gates. Now if $x^{(1)}, \ldots, x^{(n)}$ are $n$ vectors of $n$ distinct unknowns each (thus involving $n^2$ variables in all), the same reasoning as before would suggest that computing $Ax^{(1)}, \ldots, Ax^{(n)}$ simultaneously should require $\gg n \cdot n^2 = n^3$ gates. But the fast matrix multiplication algorithms imply that we can in fact get by with just $O(n^{2.376})$ gates instead! So one's naive intuition on this subject can sometimes be wrong; there can be clever ways to save computation even when no obvious opportunities for such savings exist.
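The seed of those fast algorithms is Strassen's identity for a $2 \times 2$ (block) product, which trades eight multiplications for seven at the cost of extra additions; a direct transcription:

```python
# Strassen's 2x2 identity: seven multiplications instead of eight; applied
# recursively to blocks, this gives the first sub-cubic matrix multiplication.
def strassen_2x2(a11, a12, a21, a22, b11, b12, b21, b22):
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return (m1 + m4 - m5 + m7,  # c11
            m3 + m5,            # c12
            m2 + m4,            # c21
            m1 - m2 + m3 + m6)  # c22

# Spot check against the defining equations c_ij = sum_k a_ik b_kj.
assert strassen_2x2(1, 2, 3, 4, 5, 6, 7, 8) == \
    (1*5 + 2*7, 1*6 + 2*8, 3*5 + 4*7, 3*6 + 4*8)
```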
Due to the central role of polynomials in arithmetic complexity, it is not surprising that algebraic geometry plays an important role in the subject. Consider for instance the basic concept of the degree of a variety (or more precisely, an algebraic set) $V$, which can be defined (at least in the case where the underlying field is algebraically closed) as the maximum cardinality of a finite intersection of that variety with an affine subspace of the complementary dimension. Thus for instance a line has degree 1, a conic section has degree 2, an elliptic curve has degree 3, and so forth. A basic result from algebraic geometry (which generalises Bezout's theorem) is that the degree of the intersection of two algebraic sets is bounded by the product of the individual degrees. Also, the affine projection of an algebraic set of degree at most $d$ is also an algebraic set of degree at most $d$. From these facts we see that the set of all possible input and output states of a circuit of complexity $m$ forms an algebraic set of degree at most $2^m$, since each addition gate determines a variety of degree 1 and each multiplication gate determines a variety of degree 2. On the other hand, one can easily check from the definition that the variety $\{ (x_1,\ldots,x_n,x_1^d,\ldots,x_n^d) : x_1,\ldots,x_n \in F \}$ has degree $d^n$, giving a proof of (4) (note that the complexity of computing $(x_1^d,\ldots,x_n^d)$ and of generating the full state $(x_1,\ldots,x_n,x_1^d,\ldots,x_n^d)$ clearly differ by at most $O(n)$).
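Unwinding the definitions, the degree argument is a one-line computation (my paraphrase of the count sketched above):

```latex
2^m \;\ge\; \deg \{ (x_1,\ldots,x_n,x_1^d,\ldots,x_n^d) : x_i \in F \} \;=\; d^n
\quad\Longrightarrow\quad
m \;\ge\; n \log_2 d.
```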
The same argument does not directly work for (3), but there is a trick based on the upper bound

$\displaystyle S\left(p, \frac{\partial p}{\partial x_1}, \ldots, \frac{\partial p}{\partial x_n}\right) = O( S(p) ) \ \ \ \ \ (*)$
which allows one to deduce (3) from (4). (This is an example of how an upper bound on complexity can be used to deduce lower bounds from other lower bounds, which is a common technique in this business.) To prove (*), one takes a circuit that computes $p$ and observes that, by recursively working backwards from the late outputs of this circuit to the early outputs, one can compute all the partial derivatives $\partial p / \partial y$ of the final output $p$ with respect to the value of any intermediate output $y$ (where we temporarily allow ourselves the ability to vary these intermediate variables at will), while only increasing the number of gates by a multiplicative constant. (In the case at hand, the partial derivatives of $x_1^d + \ldots + x_n^d$ are $d x_1^{d-1}, \ldots, d x_n^{d-1}$, so (*) does indeed reduce (3) to (4).)
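This backward sweep is exactly what is nowadays called reverse-mode differentiation; here is a toy Python version over a wire-list circuit representation of my own devising, showing the constant-overhead computation of every partial derivative:

```python
def grad(inputs, gates):
    """gates: list of ('+'|'*', i, j) over wire indices; the last wire is
    the output p. Returns (p, [dp/dx_i for each input]), with O(1) extra
    work per gate -- the Baur-Strassen overhead."""
    # Forward sweep: evaluate every wire.
    w = list(inputs)
    for op, i, j in gates:
        w.append(w[i] + w[j] if op == '+' else w[i] * w[j])
    # Backward sweep: adjoint of each wire with respect to the output.
    adj = [0] * len(w)
    adj[-1] = 1
    for g, (op, i, j) in enumerate(reversed(gates)):
        k = len(w) - 1 - g  # wire produced by this gate
        if op == '+':
            adj[i] += adj[k]
            adj[j] += adj[k]
        else:
            adj[i] += adj[k] * w[j]
            adj[j] += adj[k] * w[i]
    return w[-1], adj[:len(inputs)]

# p(x, y) = (x + y) * x, so dp/dx = 2x + y and dp/dy = x.
val, (dx, dy) = grad([3, 5], [('+', 0, 1), ('*', 2, 0)])
assert (val, dx, dy) == (24, 11, 3)
```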
A similar argument gives (2), based on first computing the complexity of the matrix inversion operation $A \mapsto A^{-1}$ (here, of course, one has to allow division gates), which is closely connected to the derivatives of the determinant (via the cofactor expansion). Avi did not discuss the proof of (1), but I presume that it also uses similar techniques.
Making further progress on the complexity of arbitrary arithmetic circuits beyond these logarithmic improvements over the trivial bounds seems to be very challenging, so Avi then turned to the circuit complexity of depth-bounded circuits, for which more progress has been made. Here, we allow the addition and multiplication gates to take in an arbitrary number of inputs (and in the case of addition gates, we allow arbitrary scalar multiplications, thus we can take an arbitrary linear combination of inputs in a single gate), but we organise the gates in levels, with the inputs at depth $j$ being used to produce outputs at depth $j+1$, and limit the total depth to be a small number such as 2 or 3. For instance, with depth 1, one can only create linear combinations or products of inputs. With depth 2, one can create any polynomial output, but it is not hard to see that the complexity is comparable to the number of non-zero terms in that polynomial (unless it is a product of linear polynomials, in which case it is comparable to the number of terms in the product). Depth 3 is the first really interesting case – unless the polynomial $p$ factorises into sparse factors, the most efficient way to use depth 3 circuits is as a linear combination of products of linear polynomials (though occasionally one might want to swap the order of addition and multiplication). Let us use $S_3(p)$ to denote the number of gates needed to compute $p$ with a depth 3 circuit.
One expects to have quite large lower bounds on $S_3(p)$ for various interesting choices of $p$. For instance, the depth 3 complexity of the determinant is expected to be exponentially large, though this remains open. But in some cases there are clever tricks to compute polynomials quicker than one might first expect. For instance, Ben-Or proved the bound $S_3(S^n_d) = O(n^2)$, which is surprising given that $S^n_d$ has exponentially many terms (if $d$ is, say, equal to $n/2$) and also has no obvious factorisation. Actually, this is quite easy to show: with $O(n^2)$ gates in a depth 2 circuit one can already compute the quantities $P(t) := (t + x_1) \cdots (t + x_n)$ for $n+1$ distinct values of $t$, and then one can recover the symmetric polynomials $S^n_1, \ldots, S^n_n$ as the coefficients of $P$ via polynomial interpolation (a scalar linear combination of the computed values, and thus a single further layer of addition gates). This turns out to be tight in the case when $d$ is comparable to $n$, as shown by Shpilka and Wigderson.
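Here is Ben-Or's trick in miniature (exact rational arithmetic; the sample points $t = 0, 1, \ldots, n$ are an arbitrary choice of distinct scalars):

```python
from fractions import Fraction

def elem_symmetric_by_interpolation(xs, d):
    """Compute the degree-d elementary symmetric polynomial of xs the
    Ben-Or way: sample P(t) = prod_i (t + x_i) at n+1 points (each sample
    is a single product of linear forms), then interpolate and read off
    the coefficient of t^(n-d)."""
    n = len(xs)
    ts = [Fraction(t) for t in range(n + 1)]
    ys = []
    for t in ts:
        prod = Fraction(1)
        for x in xs:
            prod *= t + x
        ys.append(prod)
    # Lagrange interpolation, accumulated as coefficient lists (low to high).
    coeffs = [Fraction(0)] * (n + 1)
    for i in range(n + 1):
        basis = [Fraction(1)]  # running product of (t - t_j)
        denom = Fraction(1)
        for j in range(n + 1):
            if j == i:
                continue
            new = [Fraction(0)] * (len(basis) + 1)
            for k, c in enumerate(basis):  # multiply basis by (t - t_j)
                new[k] -= ts[j] * c
                new[k + 1] += c
            basis = new
            denom *= ts[i] - ts[j]
        for k in range(n + 1):
            coeffs[k] += ys[i] * basis[k] / denom
    return coeffs[n - d]

assert elem_symmetric_by_interpolation([1, 2, 3], 1) == 6   # x1 + x2 + x3
assert elem_symmetric_by_interpolation([1, 2, 3], 2) == 11  # sum of pairs
assert elem_symmetric_by_interpolation([1, 2, 3], 3) == 6   # x1 x2 x3
```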
Finally, some exponential-type lower bounds on complexity are known if one restricts one's circuits to not only have bounded depth, but also to be homogeneous – which means that all intermediate outputs need to be homogeneous polynomials of the inputs. In particular, if one is computing a degree $d$ output, then all multiplication gates can have degree at most $d$, thus ruling out the polynomial interpolation trick mentioned above (which requires degree $n$ gates). Indeed, with this additional restriction on depth 3 circuits one can show that $S^n_d$ now requires an exponential number of gates in $n$, if $d$ is equal to (say) $n/4$. The idea here is to work not with the polynomial $p$ per se, but rather with the linear space of polynomials generated by its translations $p(x+h)$ or (equivalently, by Taylor expansion) by its various derivatives. The dimension $\dim(p)$ of this space can easily be verified to obey the inequalities $\dim(p+q) \leq \dim(p) + \dim(q)$ and $\dim(pq) \leq \dim(p) \dim(q)$. In particular, if $p$ is generated from a depth 3 circuit as the sum of $m$ products of at most $d$ linear polynomials, one has $\dim(p) \leq m 2^d$ (a linear polynomial having dimension at most 2). On the other hand, by inspecting how linearly independent the derivatives of $S^n_d$ are, we have the bound $\dim(S^n_d) \geq \binom{n}{d/2}$. Combining the two we obtain the exponential lower bound.
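With dimension bounds of this shape (my reconstruction of the numerology: a product of $d$ linear forms has derivative-span dimension at most $2^d$, while the derivatives of $S^n_d$ span at least $\binom{n}{d/2}$ dimensions), the two estimates combine as:

```latex
m \cdot 2^d \;\ge\; \dim(S^n_d) \;\ge\; \binom{n}{d/2}
\quad\Longrightarrow\quad
m \;\ge\; 2^{-d} \binom{n}{d/2},
```

and for $d = n/4$ the right-hand side is $\geq 2^{cn}$ for some absolute constant $c > 0$, since $\binom{n}{n/8} = 2^{(H(1/8)+o(1))n}$ with $H(1/8) \approx 0.544 > 1/4$.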
Finally, Avi turned to the permanent. Valiant observed that the permanent of an $n \times n$ matrix $A$ can be re-expressed as the determinant of an $m \times m$ matrix $B$ that depends linearly on $A$. Unfortunately, in his construction, $m$ is exponentially large in $n$, and so the circuit complexity upper bound on the permanent obtained this way is similarly exponential. Using some algebraic geometry and some Hessian-type objects, Mignon and Ressayre observed that the minimal value of $m$ one could take here had to be at least $n^2/2$. In contrast, Valiant proved that if one could show that $m$ must grow faster than $n^C$ for every fixed $C$, then one necessarily has (an arithmetic version of) $P \neq NP$! Thus we see that these apparently innocuous arithmetic complexity questions are intimately related to the rest of computational complexity theory.
[Update, Jan 16: slight change to the degree argument for lower bounding $S(x_1^d,\ldots,x_n^d)$.]