Combinatorial incidence geometry is the study of the possible combinatorial configurations between geometric objects such as lines and circles. One of the basic open problems in the subject has been the Erdős distance problem, posed in 1946:

Problem 1 (Erdős distance problem) Let {N} be a large natural number. What is the least number {\# \{ |x_i-x_j|: 1 \leq i < j \leq N \}} of distances that are determined by {N} points {x_1,\ldots,x_N} in the plane?

Erdős called this least number {g(N)}. For instance, one can check that {g(3)=1} and {g(4)=2}, although the precise computation of {g} rapidly becomes more difficult after this. By considering {N} points in arithmetic progression, we see that {g(N) \leq N-1}. By considering the slightly more sophisticated example of a {\sqrt{N} \times \sqrt{N}} lattice grid (assuming that {N} is a square number for simplicity), and using some analytic number theory, one can obtain the slightly better asymptotic bound {g(N) = O( N / \sqrt{\log N} )}.
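
To make these small examples and constructions concrete, here is a short brute-force sketch in Python (the helper distinct_distances and the choice N = 16 are my own, purely for illustration):

```python
from itertools import combinations
from math import isqrt

def distinct_distances(points):
    """Count the distinct distances determined by a finite planar point set.
    Squared distances are used so that integer inputs are compared exactly."""
    return len({(p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                for p, q in combinations(points, 2)})

# g(4) = 2: the vertices of a unit square determine two distances (side, diagonal).
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(distinct_distances(square))       # 2

# N collinear points in arithmetic progression determine exactly N - 1 distances.
N = 16
progression = [(i, 0) for i in range(N)]
print(distinct_distances(progression))  # 15

# A sqrt(N) x sqrt(N) grid does noticeably better; asymptotically it yields
# the O(N / sqrt(log N)) bound mentioned above.
m = isqrt(N)
grid = [(i, j) for i in range(m) for j in range(m)]
print(distinct_distances(grid))         # 9
```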

On the other hand, lower bounds are more difficult to obtain. As observed by Erdős, an easy argument, ultimately based on the incidence geometry fact that any two circles intersect in at most two points, gives the lower bound {g(N) \gg N^{1/2}}. The exponent {1/2} has been slowly increased over the years by a series of increasingly intricate arguments, combining this sort of incidence geometry fact with deeper results in combinatorial incidence geometry (most notably the Szemerédi-Trotter theorem) and with some tools from additive combinatorics; however, these methods seemed to fall well short of the optimal exponent of {1}. (Indeed, prior to last week, the best known lower bound was approximately {N^{0.8641}}, due to Katz and Tardos.)
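
To sketch one standard way in which the {N^{1/2}} bound follows from that fact: fix a point {p} of the configuration. The remaining {N-1} points determine at most {g(N)} distinct distances to {p}, and so lie on at most {g(N)} circles centered at {p}; by pigeonholing, some such circle {C} contains at least {(N-1)/g(N)} of the points. Now fix any second point {q \neq p}. The points of {C} at a given distance from {q} lie on a circle centered at {q}, and each such circle meets {C} in at most two points (the two circles are distinct since {q \neq p}); so {C} contains at most {2g(N)+1} points of the configuration. Comparing the two estimates gives {(N-1)/g(N) \leq 2g(N)+1}, hence {g(N) \gg N^{1/2}}.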

Very recently, though, Guth and Katz have obtained a near-optimal result:

Theorem 2 One has {g(N) \gg N / \log N}.

The proof neatly combines several powerful and modern tools in a new way: a recent geometric reformulation of the problem due to Elekes and Sharir; the polynomial method as used recently by Dvir, Guth, and Guth-Katz on related incidence geometry problems (and discussed previously on this blog); and the somewhat older method of cell decomposition (also discussed on this blog). A key new insight is that the polynomial method (and more specifically, the polynomial Ham Sandwich theorem, also discussed previously on this blog) can be used to efficiently create cells.

In this post, I thought I would sketch some of the key ideas used in the proof, though I will not give the full argument here (the paper itself is largely self-contained, well motivated, and of only moderate length). In particular I will not go through all the various cases of configuration types that one has to deal with in the full argument, but only some illustrative special cases.

To simplify the exposition, I will repeatedly rely on "pigeonholing cheats". A typical such cheat: if I have {n} objects (e.g. {n} points or {n} lines), each of which could be of one of two types, I will assume that either all {n} of the objects are of the first type, or all {n} of the objects are of the second type. (In truth, I can only assume that at least {n/2} of the objects are of the first type, or at least {n/2} of the objects are of the second type; but in practice, having {n/2} instead of {n} only ends up costing an unimportant multiplicative constant in the type of estimates used here.) A related such cheat: if one has {n} objects {A_1,\ldots,A_n} (again, think of {n} points or {n} circles), and to each object {A_i} one can associate some natural number {k_i} (e.g. some sort of "multiplicity" for {A_i}) that is of "polynomial size" (of size {O(N^{O(1)})}), then I will assume in fact that all the {k_i} are in a fixed dyadic range {[k,2k]} for some {k}. (In practice, the dyadic pigeonhole principle can only achieve this after throwing away all but about {n/\log N} of the original {n} objects; it is this type of logarithmic loss that eventually leads to the logarithmic factor in the main theorem.) Using the notation {X \sim Y} to denote the assertion that {C^{-1} Y \leq X \leq CY} for an absolute constant {C}, we thus have {k_i \sim k} for all {i}; in other words, {k_i} is morally constant.
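
As a toy illustration of the dyadic pigeonhole cheat, here is a small Python sketch (the function largest_dyadic_class, the random multiplicities, and the values of N and n are invented for illustration and are not part of the Guth-Katz argument):

```python
import math
import random
from collections import defaultdict

def largest_dyadic_class(objects, multiplicity):
    """Group objects by the dyadic range [2^j, 2^(j+1)) containing their
    multiplicity, and return the largest group together with its range."""
    buckets = defaultdict(list)
    for obj in objects:
        j = int(math.log2(multiplicity(obj)))  # 2^j <= multiplicity < 2^(j+1)
        buckets[j].append(obj)
    j, cls = max(buckets.items(), key=lambda item: len(item[1]))
    return (2 ** j, 2 ** (j + 1)), cls

# Toy data: n objects whose multiplicities are of polynomial size (at most N^2),
# so they fall into only O(log N) dyadic ranges.
N = 10_000
n = 1_000
multiplicities = {i: random.randint(1, N ** 2) for i in range(n)}
(k, two_k), cls = largest_dyadic_class(range(n), lambda i: multiplicities[i])
print(f"{len(cls)} of {n} objects have multiplicity in [{k}, {two_k})")
```

Since the multiplicities occupy only about {\log N} dyadic classes, the largest class retains roughly a {1/\log N} fraction of the objects; this is precisely the logarithmic loss mentioned above.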

I will also use asymptotic notation rather loosely, to avoid cluttering the exposition with a certain amount of routine but tedious bookkeeping of constants. In particular, I will use the informal notation {X \lll Y} or {Y \ggg X} to denote the statement that {X} is “much less than” {Y} or {Y} is “much larger than” {X}, by some large constant factor.

See also Janos Pach’s recent reaction to the Guth-Katz paper on Kalai’s blog.
