Van Vu and I have just uploaded to the arXiv our preprint “On the permanent of random Bernoulli matrices”, submitted to Adv. Math. This paper establishes analogues of some recent results on the determinant of random n \times n Bernoulli matrices (matrices in which all entries are either +1 or -1, with equal probability of each), in which the determinant is replaced by the permanent.

More precisely, let M be a random n \times n Bernoulli matrix, with n large. Since every row of this matrix has magnitude n^{1/2}, it is easy to see (by interpreting the determinant as the signed volume of a parallelepiped) that |\det(M)| is at most n^{n/2}, with equality holding exactly when M is a Hadamard matrix. In fact, it is known that the determinant \det(M) has magnitude n^{(1/2 - o(1)) n} with probability 1-o(1); for a more precise result, see my earlier paper with Van. (It is in fact believed that \log |\det(M)| obeys a central limit theorem; see this paper of Girko for details.) These results are based upon the elementary “base times height” formula for the volume of a parallelepiped; the main difficulty is to understand the distance from one row of M to the subspace spanned by several other rows of M.
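
To spell out the “base times height” formula in question: if X_1, \ldots, X_n denote the rows of M (notation chosen here just for exposition, rather than taken from the paper), then

\displaystyle |\det(M)| = \prod_{i=1}^{n} \hbox{dist}\bigl( X_i, \hbox{span}(X_1, \ldots, X_{i-1}) \bigr),

so that controlling |\det(M)| reduces to controlling each of these n distances, which is the distance problem just mentioned.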

The permanent \hbox{per}(M) looks formally very similar to the determinant, but does not have a geometric interpretation as a signed volume of a parallelepiped, and so can only be analysed combinatorially; the main difficulty is to understand the cancellation that can arise from the various signs in the matrix. It can be somewhat larger than the determinant; for instance, the maximum value of \hbox{per}(M) for a Bernoulli matrix M is n! = n^{(1 - o(1)) n}, attained when M consists entirely of +1’s. Nevertheless, it is not hard to see that \hbox{per}(M) has the same mean and standard deviation as \det(M), namely 0 and \sqrt{n!} respectively, which shows (by Chebyshev’s inequality) that |\hbox{per}(M)| is at most n^{(1/2+o(1))n} with probability 1-o(1). Our main result is that |\hbox{per}(M)| is also at least n^{(1/2-o(1))n} with probability 1-o(1), thus obtaining the analogue of the previously mentioned result for the determinant (though our o(1) bounds are significantly weaker).
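
For the record, the permanent is the “signless determinant”

\displaystyle \hbox{per}(M) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i \sigma(i)},

where a_{ij} denote the entries of M (notation chosen here for exposition). The moment computation behind the Chebyshev bound is short: {\bf E} \hbox{per}(M) = 0, since each entry has mean zero, while {\bf E} \hbox{per}(M)^2 = \sum_{\sigma, \tau \in S_n} {\bf E} \prod_{i=1}^{n} a_{i\sigma(i)} a_{i\tau(i)} = n!, because every term with \sigma \neq \tau contains at least one entry of M to the first power and thus has vanishing expectation; the identical computation applies to \det(M).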

In particular, this shows that the probability that the permanent vanishes is o(1) (in fact, we get a bound of O(n^{-c}) for some absolute constant c > 0). This result appears to be new (although there is a cute observation of Alon (see e.g. this paper of Wanless for a proof) that if n=2^m-1 is one less than a power of 2, then every Bernoulli matrix has non-zero permanent). In contrast, the probability that the determinant vanishes is conjectured to equal (1/2 + o(1))^n (which is easily seen to be a lower bound), but the best known upper bound for this probability is (1/\sqrt{2} + o(1))^n, due to Bourgain, Vu, and Wood.
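
As a quick sanity check of Alon’s observation in the smallest non-trivial case n = 3 = 2^2 - 1, here is a short brute-force verification (written for this post, not taken from the paper) that all 2^9 = 512 Bernoulli matrices of order 3 have non-zero permanent:

```python
from itertools import permutations, product
from math import prod

def permanent(M):
    # Like the determinant, but with all signs +1:
    # per(M) = sum over permutations sigma of prod_i M[i][sigma(i)].
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

n = 3  # n = 2^2 - 1, the first non-trivial case of Alon's observation
for entries in product((-1, 1), repeat=n * n):
    M = [entries[i * n:(i + 1) * n] for i in range(n)]
    assert permanent(M) != 0  # would fail if some sign matrix had zero permanent
print("per(M) is non-zero for all 512 Bernoulli matrices of order 3")
```

(Of course, such brute force says nothing about general n; the point of the probabilistic argument is to handle all large n.)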

Roughly speaking, the strategy is to view the permanent of an n \times n Bernoulli matrix recursively as a random signed combination of the permanents of its (n-1) \times (n-1) minors formed from its first n-1 rows, which one can view as the “parents” of the original matrix. The idea is then to take a sort of “genetic” or “evolutionary” viewpoint, and ask how likely the trait of having a large permanent is to be passed from parents to children. The key point is that this trait is “dominant” rather than “recessive”: as long as just one of the parents has a large permanent, it is likely that the child will have a large permanent as well (and if many of the parents have large permanents, the child is likely to have an even larger permanent). Our main tool for establishing this is a classical inequality of Littlewood-Offord type, due to Erdős.
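
In formulas (with notation chosen here for exposition): expanding the permanent along the bottom row gives

\displaystyle \hbox{per}(M) = \sum_{j=1}^{n} a_{nj}\, \hbox{per}(M_{nj}),

where a_{n1}, \ldots, a_{nn} are the entries of the last row and M_{nj} is the (n-1) \times (n-1) minor formed by deleting the n^{th} row and j^{th} column. Since the a_{nj} are random signs independent of these minors, anti-concentration results apply; Erdős’ inequality asserts (in one standard formulation) that if at least m of the real coefficients v_1, \ldots, v_n have magnitude at least v, and \epsilon_1, \ldots, \epsilon_n are independent random signs, then

\displaystyle {\bf P}\Bigl( \bigl| \sum_{j=1}^{n} \epsilon_j v_j \bigr| < v \Bigr) \leq \binom{m}{\lfloor m/2 \rfloor} 2^{-m} = O( m^{-1/2} ).

Already the case m=1 of this bound (which is just 1/2) gives the “dominant trait” phenomenon: one parent with permanent of magnitude at least v forces the child to have permanent of magnitude at least v with probability at least 1/2.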

However, there is still a non-negligible chance that there is enough cancellation that a child, even one with many parents with large permanent, may end up having an unexpectedly small or vanishing permanent. To avoid these failure probabilities adding up and overwhelming the analysis, we have to exploit some independence. For instance, if 1 \leq k < n, a k \times k minor with large permanent will have n-k children, namely the (k+1) \times (k+1) minors formed by adjoining the (k+1)^{st} row and one of the n-k remaining columns. Each child has at least a 1/2 chance of having a permanent as large as that of its parent (if a certain entry of the (k+1)^{st} row has a favourable sign). These events are sufficiently independent (in a certain martingale sense) that the probability that none of these children have large permanent is exponentially small, at most (1/2)^{n-k}. (The main technical tool we use to prove bounds such as this is Azuma’s inequality.)
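
For the record, Azuma’s inequality (stated here in a standard form, rather than in the exact form used in the paper) asserts that if X_0, X_1, \ldots, X_N is a martingale with bounded differences |X_{i+1} - X_i| \leq c_i, then

\displaystyle {\bf P}( |X_N - X_0| \geq \lambda ) \leq 2 \exp\Bigl( - \frac{\lambda^2}{2 \sum_i c_i^2} \Bigr)

for every \lambda > 0. It is this exponential decay that lets one convert the “each child succeeds with probability at least 1/2” heuristic into exponentially small failure probabilities such as (1/2)^{n-k}.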

Just by using the above observations (and crude probabilistic tools such as the union bound and the first moment method), it is already fairly straightforward to show that there exist many k \times k minors with large permanent (of size about n^{k/2 - o(n)}) as long as k \leq (1-\varepsilon) n for some \varepsilon > 0. The trickiest part is to manage the last few generations of the evolution, in which we need to obtain a large number of (n-1) \times (n-1) minors with large permanent (this will imply that the full n \times n matrix has large permanent with high probability, thanks to Erdős’ Littlewood-Offord inequality). The main difficulty is that there is a loss of independence which prevents one from obtaining the exponentially small failure probabilities one enjoyed in previous generations. The main idea to overcome this is to locate several “disjoint lineages” of k \times k minors with large permanent, each of which has a fairly independent chance of eventually generating an (n-1) \times (n-1) minor with large permanent. Again, Azuma’s inequality and Erdős’ Littlewood-Offord inequality are our main tools here.

Because we were trying to optimise the strength of our result (both in getting the lower bound for the magnitude of the permanent, and getting a reasonable bound on the failure probability), the actual arguments get slightly technical at times. I have uploaded here a very early draft of our paper, in which we prove the weaker result that \hbox{per}(M) \neq 0 with probability 1-o(1); the argument here is only a few pages long but already captures many of the key ideas.  (This draft was mostly written by myself; Van then improved the bounds and results significantly, leading to the version that is on the arXiv.)