[This post is authored by Gil Kalai, who has kindly “guest blogged” this week’s “open problem of the week”. – T.]
This is a problem in discrete and convex geometry. It seeks to quantify the intuitively obvious fact that large convex bodies are so “fat” that they cannot avoid “detection” by a small number of observation points. More precisely, we fix a dimension d and make the following definition (introduced by Haussler and Welzl):
- Definition: Let $X \subset {\bf R}^d$ be a finite set of points, and let $0 < \epsilon < 1$. We say that a finite set $Y \subset {\bf R}^d$ is a weak $\epsilon$-net for $X$ (with respect to convex bodies) if, whenever $B$ is a convex body which is large in the sense that $|B \cap X| > \epsilon |X|$, then $B$ contains at least one point of $Y$. (If $Y$ is contained in $X$, we say that $Y$ is a strong $\epsilon$-net for $X$ with respect to convex bodies.)
For example, in one dimension, if $X = \{1,\ldots,N\}$, and $Y = \{ m, 2m, \ldots, km \}$ where $m = \lfloor \epsilon N \rfloor$ and $k$ is the integer part of $1/\epsilon$, then $Y$ is a weak $\epsilon$-net for $X$ with respect to convex bodies. Thus we see that even when the original set $X$ is very large, one can create an $\epsilon$-net of size as small as $O(1/\epsilon)$. Strong $\epsilon$-nets are of importance in computational learning theory, and are fairly well understood via Vapnik-Chervonenkis (or VC) theory; however, the theory of weak $\epsilon$-nets is still not completely satisfactory.
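The one-dimensional construction can be verified by brute force over all intervals (the convex bodies of dimension one). The following is an illustrative sketch, not code from the post; the function name is my own:

```python
# Check that Y = {m, 2m, ..., km}, with m = floor(eps*N), is a weak eps-net
# for X = {1,...,N}: every interval with more than eps*N points of X must
# contain a point of Y.

def is_weak_eps_net_1d(N, eps, Y):
    """Brute-force check over all integer intervals [a, b] in {1,...,N}."""
    Yset = set(Y)
    for a in range(1, N + 1):
        for b in range(a, N + 1):
            if b - a + 1 > eps * N:                 # the interval is "large"
                if not any(a <= y <= b for y in Yset):
                    return False                    # large interval missed Y
    return True

N, eps = 100, 0.1
m = int(eps * N)                                    # m = 10
Y = list(range(m, N + 1, m))                        # {10, 20, ..., 100}
print(len(Y), is_weak_eps_net_1d(N, eps, Y))        # 10 True
```

Any interval with more than $\epsilon N$ points contains at least $m+1$ consecutive integers, and hence a multiple of $m$; that is exactly what the check confirms.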
One can ask what happens in higher dimensions, for instance when $X$ is a discrete cube $X = \{1,\ldots,N\}^d$. It is not too hard to cook up $\epsilon$-nets of size $O_d(1/\epsilon^d)$ (by using tools such as Minkowski's theorem), but in fact one can create $\epsilon$-nets of size as small as $O_d(\frac{1}{\epsilon} \log \frac{1}{\epsilon})$ simply by taking a random subset of $X$ of this cardinality and observing that "up to errors of $\epsilon |X|$", the total number of essentially different ways a convex body can meet $X$ grows at most polynomially in $1/\epsilon$. (This is a very typical application of the probabilistic method.) On the other hand, since $X$ can contain roughly $1/\epsilon$ disjoint convex bodies, each of which contains at least $\epsilon |X|$ of the points in $X$, we see that no $\epsilon$-net can have size much smaller than $1/\epsilon$.
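One cannot easily enumerate all convex bodies, but the random-subset idea can be illustrated against the much simpler class of axis-parallel rectangles in a two-dimensional grid. Everything below is an illustrative sketch under that simplifying assumption, not code from the post:

```python
# Probabilistic-method illustration: a small random subset of the grid
# {1,...,N}^2 typically meets every axis-parallel rectangle containing
# more than eps*N^2 grid points (rectangles stand in for convex bodies).

import random

def is_weak_eps_net_rect(N, eps, Y):
    """Check Y against every large axis-parallel rectangle [a,b] x [c,d]."""
    # 2D prefix sums of the indicator of Y, for O(1) emptiness queries.
    Yset = set(Y)
    P = [[0] * (N + 1) for _ in range(N + 1)]
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            P[i][j] = (P[i - 1][j] + P[i][j - 1] - P[i - 1][j - 1]
                       + ((i, j) in Yset))
    for a in range(1, N + 1):
        for b in range(a, N + 1):
            for c in range(1, N + 1):
                for d in range(c, N + 1):
                    if (b - a + 1) * (d - c + 1) > eps * N * N:   # "large"
                        hits = (P[b][d] - P[a - 1][d]
                                - P[b][c - 1] + P[a - 1][c - 1])
                        if hits == 0:
                            return False
    return True

random.seed(0)
N, eps = 20, 0.25
grid = [(i, j) for i in range(1, N + 1) for j in range(1, N + 1)]
Y = random.sample(grid, 60)          # far smaller than |X| = 400
print(is_weak_eps_net_rect(N, eps, Y))
```

With these parameters a random subset of this size succeeds with high probability, though (as befits the probabilistic method) not with certainty.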
Now consider the situation in which $X$ is an arbitrary finite set, rather than a discrete cube. More precisely, let $f(\epsilon, d)$ be the least number such that every finite set $X \subset {\bf R}^d$ possesses at least one weak $\epsilon$-net for $X$ with respect to convex bodies of cardinality at most $f(\epsilon, d)$. (One can also replace the finite set $X$ with an arbitrary probability measure; the two formulations are equivalent.) Informally, $f(\epsilon, d)$ is the least number of "guards" one needs to place to prevent a convex body from covering more than $\epsilon$ of any given territory.
- Problem 1: For fixed $d$, what is the correct rate of growth of $f(\epsilon, d)$ as $\epsilon \to 0$?
It is already non-trivial (and somewhat surprising) that $f(\epsilon, d)$ is even finite. This fact was first shown by Alon, Bárány, Füredi, and Kleitman (the planar case $d = 2$ was achieved earlier by Bárány, Füredi, and Lovász), who established a bound of the form $f(\epsilon, d) = O_d(1/\epsilon^{d+1})$; this was later improved to $O_d(\frac{1}{\epsilon^d} \log^{O_d(1)} \frac{1}{\epsilon})$ by Chazelle et al. In the other direction, the best lower bound is of the form $f(\epsilon, d) \geq c(d)/\epsilon$ for some positive $c(d)$; it was shown by Matoušek that $c(d)$ grows at least as fast as $e^{c\sqrt{d}}$ for some absolute constant $c > 0$. Furthermore, many of the upper bound results come with actual (probabilistic) algorithms to locate these nets. Some better upper bounds are known when $d = 2$ (where hyperbolic geometry can be used in a beautiful way) or when special assumptions are made on the set $X$ (here, the bounds can be rather subtle; for instance, inverse Ackermann functions show up in some cases!).
The finiteness of $f(\epsilon, d)$ was an important ingredient in a paper of Alon and Kleitman which proved a Helly-type conjecture of Debrunner and Hadwiger. Recall that Helly's theorem asserts that if a finite collection of convex bodies in ${\bf R}^d$ has the property that any $d+1$ of them have non-empty intersection, then the whole family has non-empty intersection. Helly's theorem already easily implies that $f(\epsilon, d) = 1$ for $\epsilon \geq \frac{d}{d+1}$. Alon and Kleitman's theorem asserts that for every pair of integers $p$ and $q$ with $p \geq q \geq d+1$, there is a function $c = c(p, q, d)$ such that the following is true: every finite family of convex sets in ${\bf R}^d$ with the property that among every $p$ sets in the family there are $q$ with non-empty intersection can be divided into at most $c(p, q, d)$ intersecting subfamilies. (Helly's theorem thus asserts that $c(d+1, d+1, d) = 1$.)
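The easy consequence of Helly's theorem mentioned above can be sketched as follows (a reconstruction of the standard pigeonhole argument, not quoted from the post). Suppose $\epsilon \geq \frac{d}{d+1}$ and $B_1, \ldots, B_{d+1}$ are convex bodies each containing more than $\epsilon |X|$ points of $X$. Then

```latex
\sum_{i=1}^{d+1} |B_i \cap X| \;>\; (d+1) \cdot \frac{d}{d+1} |X| \;=\; d\,|X|,
```

so some point of $X$ must be counted $d+1$ times, i.e. it lies in all of $B_1, \ldots, B_{d+1}$ (otherwise each point would be counted at most $d$ times and the sum would be at most $d|X|$). Thus any $d+1$ large bodies intersect, Helly's theorem then provides a single point common to all large bodies, and that one point is a weak $\epsilon$-net.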
If we replaced weak $\epsilon$-nets by strong $\epsilon$-nets then the situation is well understood; the analogue of $f(\epsilon, d)$ would be finite if and only if the collection of convex bodies had finite Vapnik-Chervonenkis (or VC) dimension. However, it can be shown that the VC dimension of convex bodies is infinite, and so strong $\epsilon$-nets can be required to be arbitrarily large for fixed $\epsilon$ and $d$. (One can however use VC theory to show that, for instance, one can find a strong $\epsilon$-net for any set with respect to, say, convex pentagons in the plane, whose size depends only on $\epsilon$.)
One can talk about $\epsilon$-nets in the more abstract setting of hypergraphs (a.k.a. set systems). Indeed, if $H = (A, {\mathcal E})$ is a hypergraph (thus $A$ is a set of vertices, and ${\mathcal E} \subset 2^A$ is a collection of subsets (or "edges") of $A$), and $\mu$ is any probability measure on $A$, we define a weak $\epsilon$-net of $(H, \mu)$ to be a set $Y \subset A$ which meets every edge of the hypergraph which has $\mu$-measure greater than $\epsilon$. We then define $f(\epsilon, H)$ to be the least number such that every probability measure $\mu$ has a weak $\epsilon$-net of cardinality at most $f(\epsilon, H)$.
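The abstract definition translates directly into code. The following sketch (the names and the toy example are my own illustrations, not taken from the post) checks whether a given set is a weak $\epsilon$-net for a hypergraph equipped with a probability measure:

```python
# Y is a weak eps-net for (H, mu) if Y meets every edge of mu-measure > eps.

def is_weak_eps_net(edges, mu, Y, eps):
    """edges: iterable of frozensets of vertices; mu: dict vertex -> prob."""
    Yset = set(Y)
    for e in edges:
        if sum(mu[v] for v in e) > eps and not (e & Yset):
            return False       # a large edge avoids Y entirely
    return True

# Toy example: four vertices with the uniform measure.
mu = {v: 0.25 for v in (1, 2, 3, 4)}
edges = [frozenset({1, 2}), frozenset({2, 3, 4}), frozenset({1, 4})]

# Only {2,3,4} has measure 0.75 > 0.5, and it contains the vertex 2.
print(is_weak_eps_net(edges, mu, {2}, 0.5))   # True
```

Note that in this abstract setting $Y$ is necessarily a subset of the vertex set $A$, since there is no ambient space from which to borrow extra points; the weak/strong distinction of the convex setting has no analogue here.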
- Problem 2: Find conditions on the hypergraph $H$ which ensure that $f(\epsilon, H)$ is finite for every $0 < \epsilon < 1$. [It is already known that if $H$ has a finite VC dimension $D$, then $f(\epsilon, H)$ is finite, and in fact is bounded by $O(\frac{D}{\epsilon} \log \frac{1}{\epsilon})$; the more interesting case is when $H$ has infinite VC dimension.]
- Problem 3: Is it possible for $f(\epsilon, H)$ to be finite but to be significantly larger than $1/\epsilon$? More precisely, does there exist a hypergraph $H$ and a constant $c > 0$ such that $f(\epsilon, H) \geq 1/\epsilon^{1+c}$ for all sufficiently small $\epsilon$? It seems very likely that such an example exists, but no example of growth faster than $O(\frac{1}{\epsilon} \log \frac{1}{\epsilon})$ has been constructed.
An improved understanding of both strong and weak $\epsilon$-nets will help shed light on the integrality gap between vertex covering numbers (a.k.a. transversal numbers) and fractional vertex covering numbers (a.k.a. fractional transversal numbers) of hypergraphs. Roughly speaking, the gap is the phenomenon that it can be more expensive to cover some territory if the amount of resources you can put into any one location is quantized to be an integer. For instance, suppose you want to cover the pentagon graph (i.e. the cycle of length 5) by vertices so that each edge is covered by at least one vertex; such a set of vertices is known as a vertex cover or transversal. Then one clearly needs at least three vertices to accomplish this. If however one is allowed to assign fractional weights to each vertex, then it becomes more efficient to give each vertex a weight of $1/2$: each edge is still covered by a total weight of $1$, but now one has only used $5/2$ vertices instead of $3$. These problems are of course special cases of integer programming, which is in general a much harder problem than linear programming (indeed, it can be NP-hard). For further discussion of these issues see this paper of Alon et al.
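The pentagon computation can be checked directly. The sketch below (an illustration of the example above, not code from the cited paper) finds the integer optimum by brute force and exhibits the fractional solution:

```python
# Integer vs fractional vertex cover on the 5-cycle (pentagon graph).
from itertools import combinations

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

def min_vertex_cover_size(n, edges):
    """Brute-force the smallest set of vertices meeting every edge."""
    for k in range(n + 1):
        for S in combinations(range(n), k):
            S = set(S)
            if all(u in S or v in S for (u, v) in edges):
                return k

# Integer optimum: three vertices are needed (e.g. {0, 2, 3}).
print(min_vertex_cover_size(5, edges))   # 3

# Fractional relaxation: weight 1/2 on every vertex covers each edge
# with total weight 1, at total cost 5/2 < 3.
w = [0.5] * 5
assert all(w[u] + w[v] >= 1 for (u, v) in edges)
print(sum(w))                            # 2.5
```

The gap between $3$ and $5/2$ is exactly the integrality gap discussed above; for the 5-cycle the fractional solution of weight $1/2$ per vertex happens to be optimal for the relaxed problem.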
A good reference for much of the above work is Matoušek's book.