Van Vu and I have just uploaded to the arXiv our joint paper “The Littlewood-Offord problem in high dimensions and a conjecture of Frankl and Füredi“. In this short paper we give a different proof of a high-dimensional Littlewood-Offord result of Frankl and Füredi, and in the process also affirmatively answer one of their open problems.

Let $v_1,\dots,v_n$ be $n$ vectors in ${\bf R}^d$, which we normalise to all have length at least $1$. For any given radius $\Delta > 0$, we consider the *small ball probability*

$\displaystyle p(v_1,\dots,v_n;\Delta) := \sup_B {\bf P}( \epsilon_1 v_1 + \dots + \epsilon_n v_n \in B ),$

where $\epsilon_1,\dots,\epsilon_n$ are iid Bernoulli signs (i.e. they take values $+1$ or $-1$ independently with a probability of $1/2$ each), and $B$ ranges over all (closed) balls of radius $\Delta$. The *Littlewood-Offord problem* is to compute the quantity

$\displaystyle p_d(n,\Delta) := \sup_{v_1,\dots,v_n} p(v_1,\dots,v_n;\Delta),$

where $v_1,\dots,v_n$ range over all vectors in ${\bf R}^d$ of length at least one. Informally, this number measures the extent to which a random walk of length $n$ (with all steps of size at least one) can concentrate into a ball of radius $\Delta$.
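To make the definition concrete, here is a small Monte Carlo sketch (an illustration added here, not from the paper; the vectors, ball centre, and trial count are arbitrary choices) that estimates the probability of the signed sum landing in one fixed ball; the true small ball probability then takes a supremum over all such balls:

```python
import random

def small_ball_prob_mc(vs, center, delta, trials=200_000, seed=0):
    """Monte Carlo estimate of P(eps_1 v_1 + ... + eps_n v_n in B) for one
    fixed closed ball B of radius delta centred at `center`."""
    rng = random.Random(seed)
    d = len(center)
    hits = 0
    for _ in range(trials):
        s = [0.0] * d
        for v in vs:
            eps = rng.choice((-1, 1))   # iid Bernoulli sign
            for i in range(d):
                s[i] += eps * v[i]
        if sum((s[i] - center[i]) ** 2 for i in range(d)) <= delta ** 2:
            hits += 1
    return hits / trials

# Two unit steps along e_1: the signed sum is (0, 0) with probability 1/2.
estimate = small_ball_prob_mc([(1.0, 0.0), (1.0, 0.0)], (0.0, 0.0), 0.5)
```

For the two-step example the estimate should be close to $1/2$, the exact probability that the two signs cancel.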

The one-dimensional case of this problem was answered by Erdős. First, one observes that one can normalise all the $v_i$ to be at least $+1$ (as opposed to being at most $-1$), since replacing $v_i$ by $-v_i$ does not affect the distribution of the random sum. In the model case when $\Delta < 1$, he made the following simple observation: if a random sum $\epsilon_1 v_1 + \dots + \epsilon_n v_n$ fell into a ball of radius $\Delta$ (which in the one-dimensional case, is an interval of length less than $2$), and one then changed one or more of the signs $\epsilon_i$ from $-1$ to $+1$, then the new sum must necessarily lie outside of the ball, as each such flip increases the sum by $2v_i \geq 2$. In other words, for any ball $B$ of radius $\Delta < 1$, the set of signs $(\epsilon_1,\dots,\epsilon_n) \in \{-1,+1\}^n$ for which $\epsilon_1 v_1 + \dots + \epsilon_n v_n \in B$ forms an antichain. Applying Sperner’s theorem, the maximal size of this antichain is $\binom{n}{\lfloor n/2 \rfloor}$, and this soon leads to the exact value

$\displaystyle p_1(n,\Delta) = \binom{n}{\lfloor n/2 \rfloor} / 2^n$

when $0 \leq \Delta < 1$ (the bound is attained in the extreme case $v_1 = \dots = v_n = 1$).
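Erdős’s antichain observation can be checked by brute force on a small example (an illustration added here; the step sizes, interval centre, and radius are arbitrary choices satisfying the hypotheses):

```python
from itertools import product

# Steps of size at least 1, and an interval of radius delta < 1: the sign
# patterns whose sum lands in the interval must form an antichain in the
# coordinatewise order on {-1,+1}^n, since flipping any -1 to +1 moves the
# sum to the right by 2*v_i >= 2, i.e. out of the interval.
vs = [1.0, 1.0, 1.25, 1.5, 2.0, 3.0]
delta, center = 0.9, 0.75

hits = [signs for signs in product((-1, 1), repeat=len(vs))
        if abs(sum(e * v for e, v in zip(signs, vs)) - center) <= delta]

# No two distinct hitting sign patterns are comparable coordinatewise.
for a in hits:
    for b in hits:
        assert a == b or not all(x <= y for x, y in zip(a, b))
```

By Sperner’s theorem the number of hitting patterns is then at most $\binom{6}{3} = 20$ for this example.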

A similar argument works for higher values of $\Delta$, using Dilworth’s theorem instead of Sperner’s theorem, and gives the exact value

$\displaystyle p_1(n,\Delta) = \left( \binom{n}{m_1} + \dots + \binom{n}{m_k} \right) / 2^n$

whenever $n \geq k$ and $k-1 \leq \Delta < k$ for some natural number $k$, where $\binom{n}{m_1},\dots,\binom{n}{m_k}$ are the $k$ largest binomial coefficients of $n$.
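As a quick numerical check (added here for illustration; the choice $n=8$ is arbitrary), one can compare this formula against a brute-force computation for the extremal configuration $v_1 = \dots = v_n = 1$, which attains the supremum:

```python
from itertools import product
from math import comb

def p1_formula(n, delta):
    """Sum of the k largest binomial coefficients of n over 2^n,
    where k is the natural number with k - 1 <= delta < k."""
    k = int(delta) + 1
    coeffs = sorted((comb(n, j) for j in range(n + 1)), reverse=True)
    return sum(coeffs[:k]) / 2 ** n

def p1_bruteforce(n, delta):
    """Best interval of length 2*delta for the walk with all steps equal to 1;
    an optimal interval can be taken to start at an achievable sum."""
    sums = [sum(signs) for signs in product((-1, 1), repeat=n)]
    return max(sum(1 for s in sums if c <= s <= c + 2 * delta)
               for c in set(sums)) / 2 ** n

for delta in (0.5, 1.5, 2.5):
    assert p1_formula(8, delta) == p1_bruteforce(8, delta)
```

For instance, for $n=8$ and $1 \leq \Delta < 2$ both computations give $(70+56)/256$.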

Now consider the higher-dimensional problem. One has the obvious bound

$\displaystyle p_d(n,\Delta) \geq p_1(n,\Delta)$

(simply by embedding a one-dimensional configuration into ${\bf R}^d$), but it is not obvious whether this inequality is ever strict. In other words, is there some way to exploit the additional freedom given by higher dimensions to make random walks concentrate more than in the one-dimensional case?

For some values of $\Delta$, it turns out that the answer is yes, as was first observed by Kleitman (and discussed further by Frankl and Füredi). Suppose for instance that

$\displaystyle \sqrt{(k-1)^2+1} \leq \Delta < k$

for some natural number $k$. Then one can consider the example in which $v_1 = \dots = v_{n-1}$ are all equal to one unit vector $e_1$, and $v_n$ is another unit vector $e_2$ orthogonal to $e_1$. The small ball probability in this case can be computed to equal $\left(\binom{n-1}{m'_1} + \dots + \binom{n-1}{m'_k}\right)/2^{n-1}$, where $\binom{n-1}{m'_1},\dots,\binom{n-1}{m'_k}$ are the $k$ largest binomial coefficients of $n-1$, rather than $\left(\binom{n}{m_1} + \dots + \binom{n}{m_k}\right)/2^n$; the former is slightly larger.
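The gap can be seen numerically (an illustrative computation added here; the choices $n = 10$, $k = 2$, $\Delta = \sqrt{2}$ are arbitrary within the stated range):

```python
from math import comb, sqrt

n, k = 10, 2
delta = sqrt((k - 1) ** 2 + 1)   # = sqrt(2), the threshold radius

# One-dimensional value: the k largest binomial coefficients of n, over 2^n.
one_dim = sorted((comb(n, j) for j in range(n + 1)), reverse=True)
p_1d = sum(one_dim[:k]) / 2 ** n

# Two-dimensional example: v_1 = ... = v_{n-1} = e_1 and v_n = e_2.  A ball
# of radius sqrt(2) centred at the origin contains the points (s, +-1) with
# |s| <= 1, i.e. it captures the k = 2 most likely values of the partial sum
# eps_1 + ... + eps_{n-1}, for either sign of eps_n.
two_dim = sorted((comb(n - 1, j) for j in range(n)), reverse=True)
p_2d = sum(two_dim[:k]) / 2 ** (n - 1)

assert p_2d > p_1d
```

Here the two-dimensional example gives $252/512 \approx 0.492$, beating the one-dimensional value $462/1024 \approx 0.451$.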

In the positive direction, Frankl and Füredi established the asymptotic

$\displaystyle p_d(n,\Delta) = (1+o(1))\, p_1(n,\Delta) \ \ \ \ \ (1)$

as $n \to \infty$ (holding $d$ and $\Delta$ fixed). Furthermore, if $\Delta$ was close to an integer, and more precisely if

$\displaystyle k-1 \leq \Delta \leq (k-1) + \varepsilon_k$

for a certain small explicit constant $\varepsilon_k > 0$ (so that the above counterexample can be avoided), they showed that $p_d(n,\Delta) = p_1(n,\Delta)$ for $n$ sufficiently large (depending on $d$ and $\Delta$).

The restriction to this smaller range of $\Delta$ was an artefact of their method, and they conjectured in fact that one should have $p_d(n,\Delta) = p_1(n,\Delta)$ for $n$ sufficiently large whenever

$\displaystyle k-1 \leq \Delta < \sqrt{(k-1)^2+1}, \ \ \ \ \ (2)$

thus matching the counterexample exactly. This conjecture was verified for $k=1$ by Kleitman and for $k=2,3$ by Frankl and Füredi.

In this paper we verify the conjecture of Frankl and Füredi (and give a new proof of their asymptotic (1)). Our main tool is the following high-dimensional Littlewood-Offord inequality:

**Theorem 1** Suppose that $v_1,\dots,v_n$ is a collection of vectors in ${\bf R}^d$ of length at least one which is genuinely $d$-dimensional, in the sense that for any hyperplane $H$ going through the origin, one has $\mathrm{dist}(v_i, H) \geq 1$ for at least $m$ values of $i$. Then one has

$\displaystyle p(v_1,\dots,v_n;\Delta) \leq C_{d,\Delta}\, m^{-d/2}$

for some quantity $C_{d,\Delta}$ depending only on $d$ and $\Delta$.
Theorem 1 can be viewed as a high-dimensional variant of Erdős’s inequality (but without the sharp upper bound). It is proven by the Fourier-analytic method of Halász. (This theorem was announced in my book with Van Vu several years ago, but we did not get around to publishing it until now.)
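The $m^{-d/2}$ scaling can be illustrated by a toy computation (added here for illustration, not taken from the paper): in $d=2$, take $m$ copies of $\sqrt{2}e_1$ and $m$ copies of $\sqrt{2}e_2$, so that any line through the origin lies at distance at least $1$ from at least $m$ of the vectors, and the probability of returning exactly to the origin factors into two one-dimensional central binomial probabilities of size about $m^{-1/2}$ each:

```python
from math import comb, pi

# m copies of sqrt(2)*e_1 and m copies of sqrt(2)*e_2: the walk returns to
# the origin iff each coordinate's sum of m signs vanishes, so the
# probability factors into two central binomial probabilities.
for m in (10, 20, 40):
    p_axis = comb(m, m // 2) / 2 ** m      # ~ sqrt(2 / (pi * m)) by Stirling
    p_origin = p_axis ** 2                 # ~ 2 / (pi * m) = O(m^{-d/2}), d = 2
    assert 0.5 * 2 / (pi * m) < p_origin < 2 / (pi * m)
```

This is consistent with (though of course far from a proof of) the $m^{-d/2}$ decay in Theorem 1.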

Using Theorem 1, one can verify the conjecture of Frankl and Füredi fairly quickly (the deduction takes a little over a page). The main point is that if there is excessive concentration, then Theorem 1 quickly places almost all of the vectors $v_i$ very close to a line. If *all* the vectors are close to a line, then we can project onto this line and rescale, which causes $\Delta$ to worsen a little bit in this reduction to the one-dimensional case, but it turns out that the bounds (2) allow us to tolerate this degradation of $\Delta$ once $k \geq 4$ (so it is fortunate that the cases $k \leq 3$ were already done for us!). If instead we have a vector far from the line (as is the case in the key counterexample), then we manually eliminate that vector using the parallelogram law, which effectively drops $\Delta$ to $\sqrt{\Delta^2 - 1} < k-1$ (half of the time, at least) if $\Delta$ was initially less than $\sqrt{(k-1)^2+1}$, which gives enough of a saving to conclude the argument.
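The numerical content of that last step is a simple piece of algebra, sanity-checked below (an illustrative check added here; the range of $k$ is arbitrary): eliminating a vector at distance at least one from the line costs one unit in the square of the radius, and the reduced radius $\sqrt{\Delta^2-1}$ falls below $k-1$ precisely on the conjectured range $\Delta < \sqrt{(k-1)^2+1}$:

```python
from math import sqrt

# The effective radius drops from delta to sqrt(delta**2 - 1), which lies
# below k - 1 exactly when delta < sqrt((k - 1)**2 + 1).
for k in range(2, 10):
    threshold = sqrt((k - 1) ** 2 + 1)
    for delta in (k - 1 + 1e-9, threshold - 1e-9):   # probe both ends of the range
        assert delta >= 1 and sqrt(delta ** 2 - 1) < k - 1
```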

One moral that one can draw from this argument is that one can use a quasi-sharp estimate (such as Theorem 1), which ostensibly loses constant factors, to then deduce a sharp estimate (such as the Frankl-Füredi conjecture) that loses no constant factors, as long as one is in an asymptotic regime (in this case, $n$ large depending on $d$ and $\Delta$). The key is to exploit the fine structure in the main term (in this case, the piecewise constant nature of $p_1(n,\Delta)$ as $\Delta$ passes over integers) to extract gains that can absorb the losses coming from the quasi-sharp estimate.

## 8 comments


2 March, 2010 at 4:55 am

konradswanepoel: Did you mean ?

[Corrected, thanks.]

2 March, 2010 at 10:17 pm

SteveM: I’m interested in random walks and diffusions but have never encountered this interesting problem before. But if the random terms in the partial sum are iid Bernoulli {-1,+1} with probabilities p(1)=q(-1)=1/2, then won’t the strong law of large numbers guarantee that the small ball probability converges almost surely to unity as n tends to infinity, at least for some dimensions? Also, how is the problem changed if the iid Bernoulli random variables in the problem have probability p(1)>q(-1)? Generated by a biased coin, for example. I’m thinking of discrete random walks that are either transient or recurrent, with Brownian motion being the limit of a random walk where the step sizes get smaller and smaller. In three dimensions or higher, Brownian motion is transient and should leave a ball, however large, around any point, never to return; whereas in one and two dimensions, Brownian motion is recurrent and should return to a neighbourhood, however small, of any point infinitely often. I’m just curious as to whether these points are relevant.

3 March, 2010 at 9:10 am

Terence Tao: The random walks here are not normalised by n (or by sqrt(n)), so the law of large numbers (or the central limit theorem) is not directly relevant, although the central limit theorem is certainly consistent with the concentration probability being of order 1/sqrt(n).

The Littlewood-Offord inequality (Theorem 1) is valid for more general random walks (we remark on this in the paper), but for the problem of computing quantities such as p_d(n,Delta) precisely, one needs to have the Bernoulli distribution on the nose (otherwise the combinatorial tools such as Sperner’s theorem and Dilworth’s theorem will not apply).

3 March, 2010 at 11:08 am

sunkhiroes: I checked the arXiv, and liked the Bernoulli use and the shrinking k=dl (masterful).

16 November, 2010 at 10:04 pm

The Determinant of Random Bernoulli Matrices « Tsourolampis Blog[…] [7] Littlewood-Offord https://terrytao.wordpress.com/2010/03/01/the-littlewood-offord-problem-in-high-dimensions-and-a-conj… […]

7 July, 2015 at 9:11 pm

Rakshith: Is the Littlewood-Offord theorem valid if the random variables are not iid?

8 July, 2015 at 8:14 am

Terence Tao: Well, certainly *some* discorrelation between the signs is required; if all the signs are constrained to be equal, for instance, then there is an enormous amount of concentration regardless of the presence or absence of arithmetic structure. But the iid hypothesis is probably way too strong, and one should have analogous results in the presence of weak correlation, although most of the methods used so far are not able to handle such correlation. (There is beginning to be a little work in this direction for the Littlewood-Offord problems arising from random regular graphs.)

On the other hand, the current techniques to prove Littlewood-Offord theorems should extend without much difficulty to the case of independent but not identically distributed variables (and there are probably results of this form in the literature already); identical distribution is largely a technical convenience rather than an essential hypothesis (as long as all the distributions of individual variables obey their hypotheses with a suitable level of uniformity).

8 July, 2015 at 2:52 pm

Rakshith: Thanks!