
Van Vu and I have just uploaded to the arXiv our joint paper “The Littlewood-Offord problem in high dimensions and a conjecture of Frankl and Füredi“. In this short paper we give a different proof of a high-dimensional Littlewood-Offord result of Frankl and Füredi, and in the process also affirmatively answer one of their open problems.

Let {v_1,\ldots,v_n} be {n} vectors in {{\mathbb R}^d}, which we normalise to all have length at least {1}. For any given radius {\Delta > 0}, we consider the small ball probability

\displaystyle  p(v_1,\ldots,v_n,\Delta) := \sup_B {\bf P}( \eta_1 v_1 + \ldots + \eta_n v_n \in B )

where {\eta_1,\ldots,\eta_n} are iid Bernoulli signs (i.e. they independently take each of the values {+1} and {-1} with probability {1/2}), and {B} ranges over all (closed) balls of radius {\Delta}. The Littlewood-Offord problem is to compute the quantity

\displaystyle  p_d(n,\Delta) := \sup_{v_1,\ldots,v_n} p(v_1,\ldots,v_n,\Delta)

where {v_1,\ldots,v_n} range over all vectors in {{\mathbb R}^d} of length at least one. Informally, this number measures the extent to which a random walk of length {n} (with all steps of size at least one) can concentrate into a ball of radius {\Delta}.
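To make the definition concrete, here is a minimal Monte Carlo sketch (assuming numpy; the function name ball_probability and all parameter choices are illustrative, not from the paper). It estimates the probability that the signed sum lands in one fixed ball; the small ball probability is then the supremum of this quantity over all choices of ball {B}.

```python
import numpy as np

def ball_probability(vectors, center, radius, trials=200_000, seed=0):
    """Monte Carlo estimate of P(eta_1 v_1 + ... + eta_n v_n lies in the closed
    ball with the given center and radius), for iid Bernoulli signs eta_i.
    Note this estimates the probability for ONE fixed ball; the small ball
    probability p(v_1,...,v_n,Delta) is the supremum over all such balls."""
    rng = np.random.default_rng(seed)
    V = np.asarray(vectors, dtype=float)          # shape (n, d)
    signs = rng.choice([-1.0, 1.0], size=(trials, V.shape[0]))
    sums = signs @ V                              # each row is one random sum
    dists = np.linalg.norm(sums - np.asarray(center, dtype=float), axis=1)
    return float(np.mean(dists <= radius))

# Example: 20 unit steps along e_1 in the plane; for a ball of radius 1/2
# centred at the origin the signed sum must vanish, so the answer is close
# to binom(20, 10) / 2^20, roughly 0.176.
print(ball_probability([[1.0, 0.0]] * 20, center=[0.0, 0.0], radius=0.5))
```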

The one-dimensional case of this problem was answered by Erdös. First, one observes that one can normalise all the {v_i} to be at least {+1} rather than at most {-1} (replacing {v_i} with {-v_i} and {\eta_i} with {-\eta_i} where necessary, which does not affect the distribution of the sum). In the model case when {\Delta < 1}, he made the following simple observation: if a random sum {\eta_1 v_1 + \ldots + \eta_n v_n} falls into a ball of radius {\Delta} (which, in the one-dimensional case, is an interval of length less than {2}), and one then changes one or more of the signs {\eta_i} from {-1} to {+1}, then the new sum must lie outside of the ball, since each such change increases the sum by at least {2}. In other words, for any ball {B} of radius {\Delta}, the set of signs {(\eta_1,\ldots,\eta_n) \in \{-1,+1\}^n} for which {\eta_1 v_1 + \ldots + \eta_n v_n \in B} forms an antichain. By Sperner’s theorem, the maximal size of such an antichain is {\binom{n}{\lfloor n/2\rfloor}}, and this soon leads to the exact value

\displaystyle  p_1(n,\Delta) = \binom{n}{\lfloor n/2\rfloor}/2^n = \frac{\sqrt{\frac{2}{\pi}}+o(1)}{\sqrt{n}}

when {0 \leq \Delta < 1} (the bound is attained in the extreme case {v_1=\ldots=v_n=1}).
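(For the reader's convenience, and not as part of the paper, here is the standard verification of the stated asymptotic via Stirling's formula {n! = (1+o(1)) \sqrt{2\pi n} (n/e)^n}, in the model case of even {n}:)

\displaystyle  \binom{n}{n/2}/2^n = \frac{n!}{((n/2)!)^2 2^n} = (1+o(1)) \frac{\sqrt{2\pi n} (n/e)^n}{\pi n (n/(2e))^n 2^n} = \frac{\sqrt{\frac{2}{\pi}}+o(1)}{\sqrt{n}}.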

A similar argument works for higher values of {\Delta}, using Dilworth’s theorem instead of Sperner’s theorem, and gives the exact value

\displaystyle  p_1(n,\Delta) = \sum_{j=1}^s \binom{n}{m_j}/2^n = \frac{s\sqrt{\frac{2}{\pi}}+o(1)}{\sqrt{n}}

whenever {n \geq s} and {s-1 \leq \Delta < s} for some natural number {s}, where {\binom{n}{m_1},\ldots,\binom{n}{m_s}} are the {s} largest of the binomial coefficients {\binom{n}{0}, \ldots, \binom{n}{n}}.
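For concreteness, the exact formula is easy to evaluate and compare with the asymptotic; the short sketch below (using only the Python standard library, with a helper name p1 of my own choosing) does this for a sample choice of {n} and {\Delta}.

```python
from math import comb, floor, pi, sqrt

def p1(n, delta):
    """Exact one-dimensional value p_1(n, Delta): with s the natural number
    satisfying s - 1 <= Delta < s, this is the sum of the s largest binomial
    coefficients binom(n, k), 0 <= k <= n, divided by 2^n (requires n >= s)."""
    s = floor(delta) + 1
    coeffs = sorted((comb(n, k) for k in range(n + 1)), reverse=True)
    return sum(coeffs[:s]) / 2 ** n

n, delta = 1000, 2.5                  # here s = 3
print(p1(n, delta))                   # exact value
print(3 * sqrt(2 / pi) / sqrt(n))     # leading term s * sqrt(2/pi) / sqrt(n)
```

Already at {n = 1000} the two printed numbers agree to within a fraction of a percent.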

Now consider the higher-dimensional problem. One has the obvious bound

\displaystyle  p_d(n,\Delta) \geq p_1(n,\Delta),

but it is not obvious whether this inequality is strict. In other words, is there some way to exploit the additional freedom given by higher dimensions to make random walks concentrate more than in the one-dimensional case?

For some values of {\Delta}, it turns out that the answer is yes, as was first observed by Kleitman (and discussed further by Frankl and Füredi). Suppose for instance that

\displaystyle  \sqrt{(s-1)^2+1} \leq \Delta < s

for some {s \geq 2}. Then one can consider the example in which {v_1=\ldots=v_{n-1}=e_1} is one unit vector, and {v_n=e_2} is another unit vector orthogonal to {e_1}. Since a ball of radius {\Delta} centred on the {e_1}-axis can contain both values {\pm 1} of the {e_2}-coordinate while still allowing the {e_1}-coordinate to range over an interval of half-width {\sqrt{\Delta^2-1} \geq s-1}, the small ball probability in this case can be computed to equal {p_1(n-1,s-1)}, which is slightly larger than the one-dimensional value {p_1(n,s-1) = p_1(n,\Delta)}.
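One can check this numerically; the sketch below (self-contained, with the same illustrative helper p1 as above) compares the two quantities for a sample choice of {n} and {s}.

```python
from math import comb, floor

def p1(n, delta):
    """Exact p_1(n, Delta): the sum of the s largest binomial coefficients
    binom(n, k) divided by 2^n, where s is determined by s - 1 <= Delta < s."""
    s = floor(delta) + 1
    coeffs = sorted((comb(n, k) for k in range(n + 1)), reverse=True)
    return sum(coeffs[:s]) / 2 ** n

# The example: v_1 = ... = v_{n-1} = e_1 and v_n = e_2, with a ball of radius
# Delta in [sqrt((s-1)^2 + 1), s) centred on the e_1-axis.  Both values +/-1
# of the e_2-coordinate fit inside the ball, so only the e_1-coordinate is
# constrained, to within s - 1 of the centre of the ball.
n, s = 100, 2
print(p1(n - 1, s - 1))   # concentration achieved by the two-dimensional example
print(p1(n, s - 1))       # the one-dimensional value p_1(n, Delta), slightly smaller
```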

In the positive direction, Frankl and Füredi established the asymptotic

\displaystyle  p_d(n,\Delta) = (1 + o(1)) p_1(n,\Delta) \ \ \ \ \ (1)

as {n \rightarrow \infty} (holding {d} and {\Delta} fixed). Furthermore, if {\Delta} is close to an integer, more precisely if

\displaystyle  s-1 \leq \Delta < s-1 + \frac{1}{10s^2}

(so that the above counterexample can be avoided), they showed that {p_d(n,\Delta) = p_1(n,\Delta)} for sufficiently large {n} (depending on {s,\Delta}).

The factor {\frac{1}{10s^2}} was an artefact of their method, and they conjectured in fact that one should have {p_d(n,\Delta) = p_1(n,\Delta)} for sufficiently large {n} whenever

\displaystyle  s-1 \leq \Delta < \sqrt{(s-1)^2+1} \ \ \ \ \ (2)

thus matching the counterexample exactly. This conjecture was verified for {s = 1} by Kleitman and for {s=2,3} by Frankl and Füredi.

In this paper we verify the conjecture of Frankl and Füredi (and give a new proof of their asymptotic (1)). Our main tool is the following high-dimensional Littlewood-Offord inequality:

Theorem 1 Suppose that the vectors {v_1,\ldots,v_n \in {\mathbb R}^d} are genuinely {d}-dimensional in the sense that for any hyperplane {H} passing through the origin, one has {\hbox{dist}(v_i,H) \geq 1} for at least {k} values of {i}. Then one has

\displaystyle  p(v_1,\ldots,v_n,\Delta) \ll_{d,\Delta} k^{-d/2}.

Theorem 1 can be viewed as a high-dimensional variant of Erdös’s inequality (but without the sharp upper bound). It is proven by the Fourier-analytic method of Halász. (This theorem was announced in my book with Van Vu several years ago, but we did not get around to publishing it until now.)
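As a quick numerical illustration of the {k^{-d/2}} scaling (not a proof, and not taken from the paper), one can split {k} unit steps evenly among the coordinate directions {e_1,\ldots,e_d}, which gives a genuinely {d}-dimensional configuration, and estimate the probability of landing in a fixed ball about the origin; the helper name and parameters below are illustrative choices of my own, assuming numpy.

```python
import numpy as np

def origin_ball_prob(k, d, radius, trials=400_000, seed=0):
    """Monte Carlo estimate of the probability that the random signed sum of
    k unit steps, split evenly among the basis directions e_1, ..., e_d,
    lands in the ball of the given radius centred at the origin."""
    rng = np.random.default_rng(seed)
    per_dir = k // d
    # Each coordinate of the sum is a one-dimensional walk of per_dir signs,
    # i.e. it is distributed as 2 * Binomial(per_dir, 1/2) - per_dir.
    sums = 2.0 * rng.binomial(per_dir, 0.5, size=(trials, d)) - per_dir
    return float(np.mean(np.linalg.norm(sums, axis=1) <= radius))

for k in (16, 64, 256):
    p = origin_ball_prob(k, d=2, radius=1.0)
    print(k, p, p * k)   # p * k should stay roughly constant if p ~ k^{-d/2}, d = 2
```

The third printed column stabilising near a constant is consistent with the {k^{-1}} decay that Theorem 1 predicts for {d=2}.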

Using Theorem 1, one can verify the conjecture of Frankl and Füredi fairly quickly (the deduction takes a little over a page). The main point is that if there is excessive concentration, then Theorem 1 quickly forces almost all of the vectors {v_1,\ldots,v_n} to lie very close to a line. If all the vectors are close to a line, then we can project onto this line and rescale; this causes {\Delta} to worsen a little bit in the reduction to the one-dimensional case, but it turns out that the bound (2) allows us to tolerate this degradation of {\Delta} once {s>3} (so it is fortunate that the cases {s \leq 3} were already done for us!). If instead some vector lies far from the line (as is the case in the key counterexample), then we manually eliminate that vector using the parallelogram law, which effectively drops {\Delta} below {s-1} (half of the time, at least) whenever {\Delta} was initially less than {\sqrt{(s-1)^2+1}}; this gives enough of a saving to conclude the argument.

One moral that one can draw from this argument is that one can use a quasi-sharp estimate (such as Theorem 1), which ostensibly loses constant factors, to deduce a sharp estimate (such as the Frankl-Füredi conjecture) that loses no constant factors, as long as one is in an asymptotic regime (in this case, {s \geq 3} and {n} large depending on {d,\Delta}). The key is to exploit the fine structure in the main term (in this case, the piecewise constant nature of {p_1(n,\Delta)} as {\Delta} passes over integers) to extract gains that can absorb the losses coming from the quasi-sharp estimate.
