This is my second Milliman lecture, in which I talk about recent applications of ideas from additive combinatorics (and in particular, from the inverse Littlewood-Offord problem) to the theory of discrete random matrices.
In many areas of physics, chemistry, and computer science, one often has to study large matrices A, which for sake of discussion we shall take to be square, thus A = (a_{ij})_{1 \leq i,j \leq n} is an n \times n matrix for some large integer n, and the a_{ij} are real or complex numbers. In some cases A will be very structured (e.g. self-adjoint, upper triangular, sparse, circulant, low rank, etc.) but in other cases one expects A to be so complicated that there is no usable structure to exploit in the matrix. In such cases, one often makes the non-rigorous (but surprisingly accurate, in practice) assumption that A behaves like a random matrix, whose entries are drawn independently and identically from a single probability distribution (which could be continuous, discrete, or a combination of both). Each choice of distribution determines a different random matrix ensemble. Two particularly fundamental examples of continuous ensembles are:

  1. The real Gaussian random matrix ensemble, in which each a_{ij} is distributed independently according to the standard normal distribution N(0,1) \equiv \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\ dx. This ensemble has the remarkable feature of being invariant under the orthogonal group O(n); if R is a rotation or reflection matrix in O(n), and A is distributed as a real Gaussian random matrix, then RA and AR are also distributed as a real Gaussian random matrix. (This is ultimately because the product of n Gaussian measures \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\ dx on {\Bbb R} is a Gaussian measure \frac{1}{(2\pi)^{n/2}} e^{-|x|^2/2}\ dx on {\Bbb R}^n, which is manifestly rotation-invariant.)
  2. The complex Gaussian random matrix ensemble, in which each a_{ij} is distributed independently according to the complex normal distribution \frac{1}{\pi} e^{-|z|^2}\ dz. This ensemble has the feature of being invariant under the unitary group U(n).

Two particularly fundamental examples of discrete ensembles are

  1. The Bernoulli ensemble, in which each a_{ij} is distributed independently and uniformly in the set \{-1,+1\}, thus A is a random matrix of signs.
  2. The lazy (or sparse) Bernoulli ensemble, in which each a_{ij} is independently equal to -1 or +1 with probability p/2, and equal to 0 with probability 1-p, for some fixed 0 \leq p \leq 1, thus A is a sparse matrix of signs of expected density p.

The Bernoulli and sparse Bernoulli ensembles arise naturally in computer science and numerical analysis, as they form a simple model for simulating the effect of numerical roundoff error (or other types of digital error) on a large matrix. Continuous ensembles such as the Gaussian ensembles, in contrast, are natural models for matrices in the analog world, and in particular in physics and chemistry. [For reasons that are still somewhat mysterious, these ensembles, or more precisely their self-adjoint counterparts, also seem to be good models for various important statistics in number theory, such as the statistics of zeroes of the Riemann zeta function, but this is not the topic of my discussion here.] There are of course many other possible ensembles of interest that one could consider, but I will stick to the Gaussian and Bernoulli ensembles here for simplicity.

If A is drawn randomly from one of the above matrix ensembles, then we have a very explicit understanding of how each of the coefficients of the matrix A behaves. But in practice, we want to study more “global” properties of the matrix A which involve rather complicated interactions of all the coefficients together. For instance, we could be interested in the following (closely related) questions:

  • Dynamics. Given a typical vector x \in {\Bbb R}^n, what happens to the iterates Ax, A^2 x, \ldots, A^m x in the limit m \to \infty?
  • Expansion and contraction. Given a non-zero vector x, how does the norm \|Ax\| of Ax compare with the norm \|x\| of x? What is the largest ratio \|Ax\|/\|x\|? The smallest ratio? The average ratio?
  • Invertibility. Is the equation A x = b solvable for every vector b? Do small fluctuations in b always cause small fluctuations in x, or can they cause large fluctuations? (In other words, is the invertibility problem stable?)

As any student of linear algebra knows, these questions can be answered satisfactorily if one knows the eigenvalues \lambda_1,\ldots,\lambda_n \in {\Bbb C} (counting multiplicity) and singular values \sigma_1 \geq \ldots \geq \sigma_n \geq 0 of the matrix A. (As the matrix A is not self-adjoint, the eigenvalues can be complex-valued even if the coefficients of A are real-valued; however, the singular values are always non-negative reals, because AA^* is self-adjoint and positive semi-definite.) For instance:

  • The largest eigenvalue magnitude \sup_{1 \leq k \leq n} |\lambda_k| determines the maximal rate of exponential growth or decay of A^m x (ignoring polynomial growth corrections coming from repeated eigenvalues), while the smallest eigenvalue magnitude \inf_{1 \leq k \leq n} |\lambda_k| determines the minimal rate of exponential growth.
  • The ratio \|Ax\|/\|x\| has a maximal value of \sigma_1, a minimal value of \sigma_n, and a root mean square value of (\frac{1}{n} \sum_{k=1}^n \sigma_k^2)^{1/2} if the orientation of x is selected uniformly at random.
  • The matrix A is invertible if and only if all eigenvalues are non-zero, or equivalently if \sigma_n is positive.
  • The stability of the invertibility problem is controlled by the condition number \sigma_1/\sigma_n.

So, one of the fundamental problems in the theory of random matrices is to understand how the eigenvalues and singular values of a random matrix A are distributed. (More generally, it is of interest to study the eigenvalues and singular values of A+B, where A is drawn from a standard random matrix ensemble, and B is a fixed deterministic matrix, but for simplicity we will not discuss this case here.) But how does one get a handle on these numbers?

The direct approach of working with the characteristic equation \det(A - \lambda I) = 0 (or \det(AA^* - \sigma^2 I) = 0) looks very unpromising; one is asked to find the roots of a polynomial of large degree, most of whose coefficients depend in a hopelessly complicated way on the coefficients of A.

In the special cases of the real and complex Gaussian ensembles, there is a massive amount of algebraic structure coming from the action of O(n) and U(n) that allows one to explicitly compute various multidimensional integrals, and this approach actually works! One gets a very explicit and useful formula for the joint eigenvalue distribution (first worked out by Ginibre, I believe) this way. But for more general ensembles, such as the Bernoulli ensemble, such algebraic structure is not present, and so it is unlikely that any useful explicit formula for the joint eigenvalue distribution exists. However, one can still obtain a lot of useful information if, instead of trying to locate each eigenvalue or singular value directly, one instead tries to compute various special averages (e.g. moments) of these eigenvalues or singular values. For instance, from undergraduate linear algebra we have the fundamental formulae

\hbox{tr}(A) = \sum_{k=1}^n \lambda_k

and

\hbox{det}(A) = \prod_{k=1}^n \lambda_k

connecting the trace and determinant of a matrix A to its eigenvalues, and more generally

\hbox{tr}(A^m) = \sum_{k=1}^n \lambda_k^m (1)

\hbox{det}(A - zI) = \prod_{k=1}^n (\lambda_k-z)

and similarly for the singular values, we have

\hbox{tr}((AA^*)^m) = \sum_{k=1}^n \sigma_k^{2m}

\hbox{det}(AA^* - zI) = \prod_{k=1}^n (\sigma_k^2-z).

So, if one can easily compute traces and determinants of the matrix A (and various other matrices related to A), then one can in principle get quite a bit of control on the eigenvalues and singular values. It is also worth noting that the eigenvalues and singular values are related to each other in several ways; for instance, we have the identity

\prod_{k=1}^n |\lambda_k| = \prod_{k=1}^n \sigma_k (2)

(which comes from comparing the determinants of A and AA^*), the inequality

\sigma_n \leq \sup_{1 \leq k \leq n} |\lambda_k| \leq \sigma_1 (3)

(which comes from looking at the ratio \|Ax\|/\|x\| when x is an eigenvector), and the inequality

\sum_{1 \leq k \leq n} |\lambda_k|^2 \leq \sum_{1 \leq k \leq n} \sigma_k^2 (4)

(which is easiest to see by using QR (or KAN) decomposition of the eigenvector matrix to rotate the matrix A to be upper triangular, and then computing the trace of AA^*).
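
As a quick sanity check of these relations (a minimal numerical sketch; the matrix size, random seed, and use of numpy below are arbitrary choices for illustration, not anything from the discussion above), one can verify (2), (3) and (4) directly for a single random sign matrix:

```python
# Sketch: check the eigenvalue/singular value relations (2), (3), (4) numerically.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.choice([-1.0, 1.0], size=(n, n))       # a Bernoulli (random sign) matrix

lam = np.linalg.eigvals(A)                     # eigenvalues (complex in general)
sigma = np.linalg.svd(A, compute_uv=False)     # singular values (non-negative, decreasing)

# (2), in logarithmic form: sum of log|lambda_k| equals sum of log(sigma_k).
print(np.sum(np.log(np.abs(lam))), np.sum(np.log(sigma)))

# (3): the largest eigenvalue magnitude lies between sigma_n and sigma_1.
print(sigma[-1] <= np.abs(lam).max() <= sigma[0])

# (4): sum of |lambda_k|^2 is at most sum of sigma_k^2 (= tr(A A^*)).
print(np.sum(np.abs(lam) ** 2) <= np.sum(sigma ** 2) + 1e-6)
```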

Let’s give some simple examples of this approach. If we take A to be drawn from the Gaussian or Bernoulli ensemble, then the trace of A has expectation zero, and so we know that the sum \sum_{k=1}^n \lambda_k has expectation zero also. (Actually, this is easy to see for symmetry reasons: A has the same distribution as -A, and so the distribution of eigenvalues also has a symmetry around the origin.) The trace of AA^*, by contrast, is the sum of the squares of all the matrix coefficients, and will be close to n^2 (for the Bernoulli ensemble, it is exactly n^2); thus we see that \sum_{k=1}^n \sigma_k^2 \sim n^2, and so by (4) we have \sum_{k=1}^n |\lambda_k|^2 = O(n^2). So we see that the eigenvalues and singular values should be about O(\sqrt{n}) on the average. By working a little harder (e.g. by playing with very high moments of AA^*) one can show that the largest singular value is also going to be O(\sqrt{n}) with high probability, which then implies by (3) that all eigenvalues and singular values will be O(\sqrt{n}). Unfortunately, this approach does not seem to yield much information on the least singular value, which plays a major role in the invertibility and stability of A.
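
Here is a quick numerical illustration of this scaling (again just a sketch, with matrix size and seed chosen arbitrarily): for a sign matrix, the sum of squares of the singular values equals n^2 exactly, and the largest singular value is indeed of size O(\sqrt{n}) (empirically it hovers around 2\sqrt{n}).

```python
# Sketch: the singular values of a large random sign matrix live at the sqrt(n) scale.
import numpy as np

rng = np.random.default_rng(1)
n = 500
A = rng.choice([-1.0, 1.0], size=(n, n))

sigma = np.linalg.svd(A, compute_uv=False)
print(np.sum(sigma ** 2), n ** 2)                  # tr(A A^*) = sum of squared entries = n^2 (up to rounding)
print(sigma[0] / np.sqrt(n))                       # largest singular value on the sqrt(n) scale; roughly 2
print(np.sqrt(np.mean(sigma ** 2)) / np.sqrt(n))   # root mean square singular value / sqrt(n); equals 1 up to rounding
```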

It is now natural to normalise the eigenvalues and singular values of A by \frac{1}{\sqrt{n}}, and consider the distribution of the set of normalised eigenvalues \{ \frac{1}{\sqrt{n}} \lambda_k: 1 \leq k \leq n \}. If one plots these normalised eigenvalues numerically in the complex plane for moderately large n (e.g. n=100), one sees a remarkable distribution; the eigenvalues appear to be uniformly distributed in the unit disk D := \{ z \in {\Bbb C}: |z| \leq 1 \}. (For small n, there is a little bit of clustering on the real line, just because polynomials with real coefficients tend to have a couple of real zeroes, but this clustering goes away in the limit as n goes to infinity.) This phenomenon is known as the circular law; more precisely, if we let n tend to infinity, then for every sufficiently nice set R in the plane (e.g. one could take R to be a rectangle), one has

\lim_{n \to \infty} \frac{1}{n} \# \{ 1 \leq k \leq n: \frac{1}{\sqrt{n}} \lambda_k \in R \} = \frac{1}{\pi} |R \cap D|.

(Technically, this formulation is known as the strong circular law; there is also a weak circular law, which asserts that one has convergence in probability rather than almost sure convergence. But for this talk I will ignore these distinctions.)
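
One can watch the circular law emerge numerically with just a few lines of code (a rough illustrative sketch; the choice of n, the test rectangle, and the random seed below are arbitrary): the fraction of normalised eigenvalues falling in a given rectangle R inside the unit disk is already quite close to \frac{1}{\pi}|R \cap D| for n = 1000.

```python
# Sketch: empirical check of the circular law for one large Bernoulli matrix.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
A = rng.choice([-1.0, 1.0], size=(n, n))
z = np.linalg.eigvals(A) / np.sqrt(n)          # normalised eigenvalues

# Test rectangle R = [0, 1/2] x [0, 1/2], which lies entirely inside the unit disk D.
in_R = (z.real >= 0) & (z.real <= 0.5) & (z.imag >= 0) & (z.imag <= 0.5)
print(np.mean(in_R))                           # empirical fraction of eigenvalues in R
print(0.25 / np.pi)                            # circular law prediction: area(R)/pi, about 0.0796
```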

The circular law was first proven in the case of the complex Gaussian ensemble by Mehta in 1967, using an explicit formula for the joint distribution of the eigenvalues. But for more general ensembles, in which explicit formulae were not available, progress was more difficult. The method of moments (in which one uses (1) to compute the sums of powers of the eigenvalues) is not very useful because of the cancellations caused by the complex nature of the eigenvalues; indeed, one can show that \hbox{tr}((\frac{1}{\sqrt{n}} A)^m) is roughly zero for every fixed m, which is consistent with the circular law but also does not preclude, for instance, all the eigenvalues clustering at the origin. [For random self-adjoint matrices, the moment method works quite well, leading for instance to Wigner’s semi-circular law.]

The first breakthrough was by Girko in 1984, who observed that the eigenvalue distribution could be recovered from the quantities

\frac{1}{n} \log |\det(\frac{1}{\sqrt{n}} A - zI)| = \frac{1}{n} \sum_{k=1}^n \log |\frac{1}{\sqrt{n}} \lambda_k - z| (5)

for complex z (this expression is essentially the logarithmic potential of the normalised eigenvalue distribution of A). To compute this quantity, Girko then used the formula (2) to relate the determinant of \frac{1}{\sqrt{n}} A - zI to the singular values of this matrix. The singular value distribution could then be computed by the moment method (note that singular values, unlike eigenvalues, are real and non-negative, and so we do not have cancellation problems). Putting this all together and doing a large number of algebraic computations, one eventually obtains (formally, at least) a proof of the circular law.
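
In other words, the left-hand side of (5), which involves the hard-to-control complex eigenvalues, can equally well be computed from the much better behaved singular values of \frac{1}{\sqrt{n}} A - zI, thanks to (2). Here is a minimal numerical sketch of this identity (the matrix size, seed, and test point z below are arbitrary choices for illustration):

```python
# Sketch: (1/n) log|det(A/sqrt(n) - zI)| computed two ways, via eigenvalues and via
# singular values of the shifted matrix; the two agree, which is the starting point
# of Girko's argument.
import numpy as np

rng = np.random.default_rng(3)
n = 200
A = rng.choice([-1.0, 1.0], size=(n, n))
z = 0.3 + 0.4j

B = A / np.sqrt(n) - z * np.eye(n)
lam = np.linalg.eigvals(B)                     # these are the numbers lambda_k/sqrt(n) - z
sigma = np.linalg.svd(B, compute_uv=False)     # singular values of the same shifted matrix

print(np.mean(np.log(np.abs(lam))))            # (1/n) sum_k log|lambda_k/sqrt(n) - z|
print(np.mean(np.log(sigma)))                  # (1/n) sum_k log sigma_k(B); equal by (2)
```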

There was however a technical difficulty with the above analysis, which was that the formula (2) becomes very unstable when the least singular value is close to zero (basically because of a division-by-zero problem). This is not merely a technical issue but is fundamental to the general problem of controlling eigenvalues of non-self-adjoint matrices \frac{1}{\sqrt{n}} A: these eigenvalues can become very unstable near a region of pseudospectrum, which can be defined as the set of complex numbers z for which the least singular value of \frac{1}{\sqrt{n}} A-zI is small. The classic demonstration of this comes from the perturbed shift matrices

A_\varepsilon := \begin{pmatrix} 0 & 1 & 0 & \ldots & 0\\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & & & \ddots & \\ 0 & 0 & 0 & \ldots & 1 \\ \varepsilon & 0 & 0 & \ldots & 0 \end{pmatrix}.

For sake of discussion let us take n to be even. When \varepsilon = 0, this matrix is singular, with least singular value \sigma_n = 0 and with all n generalised eigenvalues equal to 0. But when \varepsilon becomes positive, the least singular value only creeps up to \varepsilon, while the n eigenvalues move rapidly away from the origin, becoming \varepsilon^{1/n} e^{2\pi i j/n} for j=1,\ldots,n. This is ultimately because the zero set of the characteristic polynomial z^n - \varepsilon is very sensitive to the value of \varepsilon when that parameter is close to zero.
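
This instability is easy to see numerically (an illustrative sketch; the dimension and the size of \varepsilon below are arbitrary): already for n = 20 and \varepsilon = 10^{-10}, the eigenvalues jump out to magnitude \varepsilon^{1/n} \approx 0.32, while the least singular value stays at \varepsilon.

```python
# Sketch: eigenvalue instability (pseudospectrum) of the perturbed shift matrix A_eps.
import numpy as np

n = 20
eps = 1e-10
A_eps = np.diag(np.ones(n - 1), k=1)   # ones on the superdiagonal
A_eps[-1, 0] = eps                     # tiny entry in the bottom-left corner

print(np.abs(np.linalg.eigvals(A_eps)).max())      # about eps**(1/n), i.e. roughly 0.32
print(eps ** (1.0 / n))                            # the predicted magnitude
print(np.linalg.svd(A_eps, compute_uv=False)[-1])  # least singular value: just eps
```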

So, in order to make the circular law argument complete, one needs to get good lower bounds on the least singular value of the random matrix A (as well as variants of this matrix, such as \frac{1}{\sqrt{n}} A - zI). In the case of continuous (non-Gaussian) ensembles, this was first done by Bai in 1997. To illustrate the basic idea, let us look at a toy problem, to show that the least singular value of a real Gaussian matrix A is usually non-zero (i.e. A is invertible with high probability). For this, we use some linear algebra. Let X_1,\ldots,X_n denote the rows of A, which we can view as vectors in {\Bbb R}^n. Then the least singular value of A vanishes precisely when X_1,\ldots,X_n lie on a hyperplane. This implies that one of the vectors here is a linear combination of the other n-1; by symmetry, we conclude that the probability that the least singular value vanishes is bounded by n times the probability that X_n (say) is a linear combination of X_1,\ldots,X_{n-1}. But X_1,\ldots,X_{n-1} span at most a hyperplane; and X_n has a continuous distribution and so has only a zero probability of lying in that hyperplane. Thus the least singular value vanishes with probability zero. It turns out that this argument is robust enough to also show that the least singular value not only avoids zero, but in fact keeps a certain distance away from it; for instance, it is not hard to show with this method that the least singular value is at least 1/n^2 (say) with probability 1-o(1), basically because each row vector is not only not a linear combination of the other rows, but in fact keeps a certain distance away from the space spanned by the other rows, with high probability.
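
Here is a crude Monte Carlo illustration of this lower bound (a sketch only; the dimension, number of trials, and threshold are arbitrary choices): for real Gaussian matrices of moderate size, the least singular value essentially never dips below 1/n^2.

```python
# Sketch: the least singular value of a real Gaussian matrix is rarely below 1/n^2.
import numpy as np

rng = np.random.default_rng(4)
n, trials = 100, 200
hits = 0
for _ in range(trials):
    A = rng.standard_normal((n, n))
    sigma_min = np.linalg.svd(A, compute_uv=False)[-1]   # least singular value
    hits += sigma_min >= 1.0 / n**2
print(hits / trials)                                     # empirically equal, or very close, to 1
```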

For discrete random matrices, one runs into a new difficulty: a row vector such as X_n is no longer continuously distributed, and so can in fact concentrate on a hyperplane with positive probability. For instance, in the Bernoulli case, a row vector X is just a random corner of the discrete cube \{-1,+1\}^n. There are certain hyperplanes which X can visit quite frequently; for instance, X will have a probability 1/2 of lying in the hyperplane \{ (x_1,\ldots,x_n): x_1 = x_2 \}. In particular, there is a probability 1/2^n that all rows X_1,\ldots,X_n lie in this hyperplane, which would cause the Bernoulli matrix A to be non-invertible. (In terms of A itself, what is going on is that there is a 1/2^n chance that the first column and second column coincide, which will of course destroy invertibility.) In particular, A now has a non-zero chance of being singular. [It is in fact conjectured that the singularity probability is close to this value, or more precisely equal to (1/2+o(1))^n. The best known upper bound currently for this probability is (1/\sqrt{2}+o(1))^n, due to Bourgain, building upon earlier work of Kahn-Komlos-Szemeredi, and Vu and myself.]
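
One can get a feel for this singularity probability with a small simulation (an illustrative sketch; the dimensions and number of trials below are arbitrary, and are far too small to probe the conjectured asymptotics): the empirical frequency of singular sign matrices is visibly non-zero for small n, and decays as n grows.

```python
# Sketch: empirical frequency of singular random sign matrices for a few small n.
import numpy as np

rng = np.random.default_rng(5)
trials = 10000
for n in (4, 8, 12, 16):
    singular = sum(
        np.linalg.matrix_rank(rng.choice([-1.0, 1.0], size=(n, n))) < n
        for _ in range(trials)
    )
    print(n, singular / trials)   # non-zero, and decreasing in n
```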

One can hope to sum the singularity probability over all hyperplanes, but there are of course infinitely many hyperplanes in {\Bbb R}^n. Fortunately, only a few of these will have a particularly high chance of capturing X. The problem now hinges on getting a sufficiently strong control on the number of hyperplanes which are “rich” in the sense that they have a high probability of capturing X (or equivalently, that they have a large intersection with the discrete cube \{-1,+1\}^n).

This is where (finally!) the additive combinatorics comes in. Let v = (v_1,\ldots,v_n) \in {\Bbb R}^n be a normal vector to a given hyperplane V. (For instance, if V = \{ (x_1,\ldots,x_n): x_1 = x_2 \}, one could take v = (1,-1,0,\ldots,0).) Then V is rich if and only if the random variable

\epsilon_1 v_1 + \ldots + \epsilon_n v_n (6)

vanishes a large proportion of the time, where \epsilon_1,\ldots,\epsilon_n \in \{-1,+1\} are independent signs. (One can view (6) as the result of an n-step random walk, in which the j^{th} step of the walk has magnitude |v_j|.) To put it another way, if (v_1,\ldots,v_n) is associated to a rich hyperplane, then there are many additive relations amongst the coefficients v_1,\ldots,v_n. What kinds of sets of numbers have such a strong amount of structure? (This problem is known as the inverse Littlewood-Offord problem; the forward Littlewood-Offord problem concerns how often (6) vanishes for a given set of numbers v_1,\ldots,v_n.)

Well, one way that many of the sums (6) could vanish is if many of the v_i are themselves zero; for instance, we have already seen that if (v_1,\ldots,v_n) = (1,-1,0,\ldots,0), then half of the sums (6) vanish. But this is a rather degenerate case, and it is intuitively obvious that the more non-zero terms one has in the random walk, the less likely it is that the sum is going to vanish. Indeed, there is an observation of Erdős (a quick application of Sperner’s theorem) that if k of the coefficients v_1,\ldots,v_n are non-zero, then (6) can only vanish at most O(1/\sqrt{k}) of the time. This bound is sharp; if, for instance, v_1=\ldots=v_k=1 and v_{k+1}=\ldots=v_n=0, then the theory of random walks tells us that (6) is distributed in a roughly Gaussian and discrete fashion around the origin with standard deviation \sqrt{k}, and so (6) should vanish about 1/\sqrt{k} of the time (and one can check this easily enough using Stirling’s formula). [In fact, this case is essentially the exact optimum, as follows from Erdős’ argument.]
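
For this extreme case one does not even need the random walk heuristic: the vanishing probability can be written down exactly, and a few lines of code (a sketch, with the values of k chosen arbitrarily) confirm that it matches the 1/\sqrt{k} prediction from Stirling's formula.

```python
# Sketch: forward Littlewood-Offord in the extreme case v_1 = ... = v_k = 1.
# The sum (6) vanishes exactly when half of the k signs are +1, which happens
# with probability binom(k, k/2)/2^k ~ sqrt(2/(pi k)) for even k.
import math

for k in (10, 100, 1000):
    p_exact = math.comb(k, k // 2) / 2**k          # exact vanishing probability
    p_stirling = math.sqrt(2 / (math.pi * k))      # Stirling / local limit approximation
    print(k, p_exact, p_stirling)
```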

So, now suppose that all the v_i are non-zero. Then Erdős’ result tells us that (6) vanishes at most O(1/\sqrt{n}) of the time. But in most cases one can do a lot better; if, for instance, the v_i are linearly independent over the rationals, then (6) in fact never vanishes at all. It turns out that by intensively using the tools from additive combinatorics (including Fourier analysis and the geometry of numbers) one can obtain a satisfactory classification of the vectors v = (v_1,\ldots,v_n) for which (6) has a high chance of vanishing; the precise description is technical, but basically in order for (6) to equal zero often, most of the coordinates of v must lie inside a generalised arithmetic progression of reasonably small size and dimension. Using such facts, it is possible to get good bounds on the singularity probability and on the least singular value of random discrete matrices such as Bernoulli matrices, leading in particular to the circular law for such discrete matrices (various formulations of this law have been recently obtained by Götze-Tikhomirov, Pan-Zhou, and Van and myself). [To get the best bounds on the singularity probability, one uses a slightly different argument, using Fourier analysis and additive combinatorics to compare the vanishing probability of a random walk with that of a lazy random walk, thus creating a relationship between the singularity of Bernoulli matrices and sparse Bernoulli matrices; see for instance my paper with Van on this topic.]
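
To make the dichotomy concrete, here is a small experiment (a sketch; the particular vectors, dimension, and trial count are arbitrary illustrations, not the actual objects appearing in the inverse theorems): when the coefficients v_i all lie in the tiny progression \{1,2,3\}, the random sum (6) vanishes a sizeable fraction of the time, whereas for generic real coefficients it essentially never vanishes.

```python
# Sketch: structured coefficients (inside a small arithmetic progression) make the
# signed sum (6) vanish often; generic coefficients make it vanish essentially never.
import numpy as np

rng = np.random.default_rng(6)
n, trials = 30, 100000

v_structured = np.tile([1.0, 2.0, 3.0], n // 3)   # coefficients inside the progression {1,2,3}
v_generic = rng.standard_normal(n)                # "generic" real coefficients

for v in (v_structured, v_generic):
    signs = rng.choice([-1.0, 1.0], size=(trials, n))
    sums = signs @ v
    print(np.mean(np.abs(sums) < 1e-9))           # fraction of sign patterns with vanishing sum
```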

This theory for understanding the singularity behaviour of discrete random matrices promises to have applications to some other areas of mathematics as well. For instance, the subject of smoothed analysis, which studies how the introduction of random noise into various numerical algorithms (such as the simplex method) improves the stability of those algorithms, can use this theory to extend its theoretical results from continuous noise models to discrete noise models (such as those created by roundoff error).

[Update, Dec 6: Terminology and typos corrected.]