
Assaf Naor and I have just uploaded to the arXiv our joint paper “Scale-oblivious metric fragmentation and the nonlinear Dvoretzky theorem“.

Consider a finite metric space {(X,d)}, with {X} being a set of {n} points. It is not always the case that this space can be isometrically embedded into a Hilbert space such as {\ell^2}; for instance, in a Hilbert space every pair of points has a unique midpoint, but uniqueness can certainly fail for finite metric spaces (consider for instance the graph metric on a four-point diamond). The situation improves, however, if one (a) allows the embedding to have some distortion, and (b) is willing to embed just a subset of {X}, rather than all of {X}. More precisely, for any integer {n} and any distortion {D \geq 1}, let {k(n,D)} be the largest integer {k} with the property that for every {n}-point metric space {(X,d)}, there exists a {k}-point subset {Y} of {X} and a map {f: Y \rightarrow \ell^2} which has distortion at most {D}, in the sense that

\displaystyle  d(y,y') \leq \| f(y) - f(y') \|_{\ell^2} \leq D d(y,y')

for all {y,y' \in Y}.
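To make the notion of distortion concrete, here is a minimal Python sketch (names of our own choosing) that computes the distortion of a given embedding of a finite metric space into Euclidean space, after the optimal rescaling; applied to the four-point diamond mentioned above, placed at the vertices of a unit square, it returns {\sqrt{2}} (which is in fact optimal for the {4}-cycle).

```python
import itertools
import math

def distortion(points, d, f):
    """The distortion of an embedding f (a dict: point -> vector) of the
    finite metric space (points, d) into Euclidean space, i.e. the smallest
    D such that, after rescaling f, d(y,y') <= ||f(y)-f(y')|| <= D*d(y,y')."""
    ratios = [math.dist(f[y], f[yp]) / d(y, yp)
              for y, yp in itertools.combinations(points, 2)]
    return max(ratios) / min(ratios)

# the four-point diamond (graph metric on a 4-cycle) mapped to a unit square
square = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
d = lambda x, y: min(abs(x - y), 4 - abs(x - y))  # graph distance on the cycle
print(distortion(list(square), d, square))        # sqrt(2): the two diagonals
                                                  # are shorter than the two
                                                  # graph-distance-2 pairs demand
```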

Bourgain, Figiel, and Milman established that for any fixed {D>1}, one has the lower bound {k(n,D) \gg_D \log n}, and also showed that for {D} sufficiently close to {1} there is a matching upper bound {k(n,D) \ll_D \log n}. This type of result is called a nonlinear Dvoretzky theorem, in analogy with the linear Dvoretzky theorem, which asserts that given any {n}-dimensional normed vector space and any {D > 1}, there exists a {k}-dimensional subspace which embeds with distortion {D} into {\ell^2}, with {k \sim_D \log n}. In other words, one can ensure that about {\log n} of the {n} points in an arbitrary metric space behave in an essentially Euclidean manner, and this is best possible if one wants the distortion to be small.

Bartal, Linial, Mendel, and Naor observed that there is a threshold phenomenon at {D=2}. Namely, for {D < 2}, the Bourgain-Figiel-Milman bounds are sharp, with {k(n,D) \sim_D \log n}; but for {D>2} one instead has a power law

\displaystyle  n^{1-B(D)} \ll_D k(n,D) \ll n^{1-b(D)}

for some {0 < b(D) \leq B(D) < 1}. In other words, once one allows distortion by factors greater than {2}, one can now embed a polynomial portion of the points, rather than just a logarithmic portion, into a Hilbert space. This has some applications in theoretical computer science to constructing “approximate distance oracles” for high-dimensional sets of data. (The situation at the critical value {D=2} is still unknown.)

In the special case that the metric {d} is an ultrametric, so that the triangle inequality {d(x,z) \leq d(x,y) + d(y,z)} is upgraded to the ultra-triangle inequality {d(x,z) \leq \max(d(x,y), d(y,z))}, it is an easy exercise to show that {X} can be embedded isometrically into a Hilbert space, and in fact into a sphere of radius {\hbox{diam}(X)/\sqrt{2}}. Indeed, this can be established by an induction on the cardinality of {X}, using the ultrametric inequality to partition any finite ultrametric space {X} of two or more points into sets of strictly smaller diameter that are all separated from each other by exactly {\hbox{diam}(X)}, and then using the inductive hypothesis and Pythagoras’ theorem.
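The induction is concrete enough to implement directly. The following Python sketch (function and variable names are ours) recursively builds such an isometric embedding: each equivalence class of the relation {d(x,y) < \hbox{diam}(X)} is embedded in its own orthogonal block of coordinates, and one extra coordinate per class positions the classes so that cross-class distances come out to exactly {\hbox{diam}(X)} by Pythagoras.

```python
import itertools
import math
import numpy as np

def embed_ultrametric(points, d):
    """Recursively embed a finite ultrametric space (points, d) into
    Euclidean space, isometrically, with every image lying on the sphere
    of radius diam/sqrt(2) about the origin."""
    if len(points) == 1:
        return {points[0]: np.zeros(1)}
    diam = max(d(x, y) for x, y in itertools.combinations(points, 2))
    # Partition into classes of the relation d(x, y) < diam; by the
    # ultrametric inequality this is an equivalence relation, and any two
    # points in distinct classes are at distance exactly diam.
    classes = []
    for x in points:
        for cl in classes:
            if d(x, cl[0]) < diam:
                cl.append(x)
                break
        else:
            classes.append([x])
    # Each class gets its own orthogonal coordinate block (carrying its
    # recursive embedding) plus one extra coordinate (carrying its centre),
    # so cross-class distances come out to diam by Pythagoras.
    blocks = [embed_ultrametric(cl, d) for cl in classes]
    dims = [len(next(iter(b.values()))) for b in blocks]
    total, out, offset = sum(dims) + len(classes), {}, 0
    for i, b in enumerate(blocks):
        r2 = max(float(np.dot(v, v)) for v in b.values())  # = diam_i^2 / 2
        for x, v in b.items():
            w = np.zeros(total)
            w[offset:offset + dims[i]] = v
            w[sum(dims) + i] = math.sqrt(diam ** 2 / 2 - r2)
            out[x] = w
        offset += dims[i]
    return out

# sanity check on a toy ultrametric on strings
pts = ["aa", "ab", "ba", "bb", "ca"]
def d(x, y):  # distance 1 if the first letters differ, 1/2 if only the second
    return 0.0 if x == y else (1.0 if x[0] != y[0] else 0.5)
f = embed_ultrametric(pts, d)
assert all(abs(np.linalg.norm(f[x] - f[y]) - d(x, y)) < 1e-9
           for x, y in itertools.combinations(pts, 2))
```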

One can then replace the concept of embedding into a Hilbert space with the apparently stronger concept of embedding into an ultrametric space; this is useful for the computer science applications, as ultrametrics have a tree structure which allows for efficient algorithms for computing distances in such spaces. As it turns out, all the preceding constructions carry over without difficulty to this setting; thus, for {D<2}, one can embed a logarithmic number of points with distortion {D} into an ultrametric space, and for {D>2} one can embed a polynomial number of points.

One can view the task of locating a subset of a metric space that is equivalent (up to bounded distortion) to an ultrametric as that of fragmenting a metric space into a tree-like structure. For instance, the standard metric on the arithmetic progression {\{1,\ldots,3^m\}} of length {n=3^m} is not an ultrametric (and in fact requires a huge distortion factor, on the order of {3^m}, in order to embed into an ultrametric space), but if one restricts to the Cantor-like subset of integers in {\{1,\ldots,3^m\}} whose base {3} expansion consists solely of {0}s and {1}s, then one easily verifies that the resulting set is fragmented enough that the standard metric is equivalent (up to a distortion factor of {3}) to an ultrametric, namely the metric {d(x,y) := 3^{j}/2} for {x \neq y}, where {j} is the position of the most significant base {3} digit at which {x} and {y} differ.
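One can verify the claimed factor-{3} equivalence numerically; here is a quick Python check (a sketch of ours, using the translate {\{0,\ldots,3^m-1\}} for convenience, since translation does not affect distances):

```python
import itertools

m = 6
# integers in {0, ..., 3^m - 1} whose base-3 digits are all 0 or 1
cantor = [sum(((t >> i) & 1) * 3 ** i for i in range(m)) for t in range(2 ** m)]

def d_ultra(x, y):
    """3^j / 2, where j is the most significant base-3 digit at which x, y differ."""
    j = max(i for i in range(m) if (x // 3 ** i) % 3 != (y // 3 ** i) % 3)
    return 3 ** j / 2

# factor-3 equivalence with the standard metric, as claimed
for x, y in itertools.combinations(cantor, 2):
    assert d_ultra(x, y) <= abs(x - y) <= 3 * d_ultra(x, y)

# the strong (isosceles) form of the ultrametric inequality: the maximum of
# the three pairwise distances in any triangle is attained at least twice
for x, y, z in itertools.combinations(cantor, 3):
    a, b, c = d_ultra(y, z), d_ultra(x, z), d_ultra(x, y)
    assert max(a, b, c) == sorted((a, b, c))[1]
```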

The above fragmentation constructions were somewhat complicated and deterministic in nature. Mendel and Naor introduced a simpler probabilistic method, based on random partition trees of {X}, which reproved the above results in the high-distortion case {D \gg 1} (with {b(D), B(D) \sim 1/D} in the limit {D \rightarrow \infty}). However, this method had inherent limitations in the low-distortion case, in particular failing for {D < 3} (basically because the method would establish a stronger embedding result which fails in that regime).

In this paper, we introduce a variant of the random partition tree method, involving a random fragmentation at a randomly chosen set of scales, which works all the way down to {D>2} and gives a clean value for the exponent {B(D)}, namely {2e/D}; in fact we have the more precise result that we can take {B(D)} to be {1-\theta}, where {0 < \theta < 1} is the unique solution to the equation

\displaystyle  \frac{2}{D} = (1-\theta) \theta^{\theta/(1-\theta)},

and that this is the limit of the method if one chooses a “scale-oblivious” approach that applies uniformly to all metric spaces, ignoring the internal structure of each individual metric space.
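As an aside, the equation for {\theta} is straightforward to solve numerically; the following Python snippet of ours does so by bisection, and also illustrates the asymptotic {1-\theta \approx 2e/D} as {D \rightarrow \infty} (which follows since {\theta^{\theta/(1-\theta)} \rightarrow e^{-1}} as {\theta \rightarrow 1}).

```python
import math

def theta(D):
    """Solve 2/D = (1 - t) * t**(t / (1 - t)) for t in (0, 1) by bisection;
    the right-hand side decreases from 1 (as t -> 0) to 0 (as t -> 1),
    so a unique solution exists for every D > 2."""
    g = lambda t: (1 - t) * t ** (t / (1 - t))
    lo, hi = 1e-12, 1 - 1e-12
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 2 / D else (lo, mid)
    return (lo + hi) / 2

for D in (3, 10, 100, 1000):
    t = theta(D)
    print(D, 1 - t, 2 * math.e / D)  # 1 - theta approaches 2e/D as D grows
```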

The construction turns out to be relatively simple (taking about three pages). The basic idea is as follows. Pick two scales {R > r > 0}, and let {x_1, x_2, x_3, \ldots} be a sequence of points drawn independently and uniformly at random from {X} (since {X} is finite, every point of {X} will almost surely be visited infinitely often by this sequence). We can then partition {X} into pieces {Y_1, Y_2, \ldots}, by defining {Y_m} to be the set of points {x} for which {x_m} is the first point in the sequence to lie in the ball {B(x,R)}. We then let {S_m} be the subset of {Y_m} consisting of those points {x} for which {x_m} actually falls in the smaller ball {B(x,r)}. As a consequence, we see that each {S_m} is contained in a ball of radius {r} (namely {B(x_m,r)}), but that any two {S_m, S_{m'}} are separated by a distance of at least {R-r} (from the triangle inequality). This gives one layer of a quasi-ultrametric tree structure on {\bigcup_m S_m}; if one iterates this over many different pairs of scales {R, r}, one gets a full quasi-ultrametric tree structure, which one can then adjust with bounded distortion to a genuine ultrametric structure. The game is then to optimise the choice of {R, r} so as to maximise the portion of {X} that remains in the tree; it turns out that a suitably random choice of such scales is the optimal one.
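For the curious, here is a minimal Python sketch of one layer of this fragmentation (names are ours; for a finite space, a uniformly random ordering of the points yields the same clusters as an i.i.d. sequence, since only the order of first appearances matters):

```python
import math
import random

def fragment(points, d, R, r, rng=random):
    """One layer of the random fragmentation.  Returns the clusters S_m: each
    is contained in a ball B(x_m, r), and any two distinct clusters are at
    distance at least R - r from each other, by the triangle inequality."""
    assert R > r > 0
    order = list(points)
    rng.shuffle(order)   # stand-in for the i.i.d. sequence x_1, x_2, ...
    clusters, unassigned = [], set(points)
    for x_m in order:
        # Y_m: the as-yet-unclaimed points whose R-ball is first hit by x_m
        Y_m = {x for x in unassigned if d(x, x_m) <= R}
        unassigned -= Y_m
        # S_m: the part of Y_m that also lies within the smaller radius r
        S_m = {x for x in Y_m if d(x, x_m) <= r}
        if S_m:
            clusters.append(S_m)
    return clusters

# demo: random points in the unit square
rng = random.Random(0)
pts = [(rng.random(), rng.random()) for _ in range(500)]
S = fragment(pts, math.dist, R=0.2, r=0.1, rng=rng)
print(sum(len(s) for s in S), "of", len(pts), "points survive this layer")
```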

Assaf Naor and I have just uploaded to the arXiv our paper “Random Martingales and localization of maximal inequalities“, to be submitted shortly. This paper investigates the best constant {C_n} in generalisations of the classical Hardy-Littlewood maximal inequality

\displaystyle  \left\| \sup_{r > 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\ dy \right\|_{L^{1,\infty}({\mathbb R}^n)} \leq C_n \|f\|_{L^1({\mathbb R}^n)}

for any absolutely integrable {f: {\mathbb R}^n \rightarrow {\mathbb R}}, where {B(x,r)} is the Euclidean ball of radius {r} centred at {x}, and {|E|} denotes the Lebesgue measure of a subset {E} of {{\mathbb R}^n}. This inequality is fundamental to a large part of real-variable harmonic analysis, and in particular to Calderón-Zygmund theory. A similar inequality in fact holds with the Euclidean norm replaced by any other convex norm on {{\mathbb R}^n}.
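As a toy illustration (ours, not taken from the paper): in one dimension one can discretise {f} to approximate a unit point mass and compute the quantity on the left-hand side directly; in the Python sketch below, the weak-{L^1} quasinorm of the maximal function {Mf} indeed comes out comparable to {\|f\|_{L^1} = 1}.

```python
import numpy as np

N, h = 1001, 1e-3                    # grid size and mesh
f = np.zeros(N)
f[N // 2] = 1 / h                    # h * sum(f) == 1: a discrete unit mass
csum = np.concatenate(([0.0], np.cumsum(np.abs(f)))) * h  # running integral

# centered maximal function: Mf(i) = max over radii of the average of |f|
Mf = np.zeros(N)
for i in range(N):
    for k in range(1, N):
        lo, hi = max(i - k, 0), min(i + k, N - 1)
        Mf[i] = max(Mf[i], (csum[hi + 1] - csum[lo]) / ((hi - lo + 1) * h))
        if lo == 0 and hi == N - 1:
            break

# the weak-L^1 quasinorm sup_lambda lambda * |{Mf > lambda}|
weak = max(lam * h * np.count_nonzero(Mf > lam)
           for lam in np.logspace(-2, 2, 200))
print("sup_lambda lambda*|{Mf > lambda}| ~", round(float(weak), 3), "vs ||f||_1 = 1")
```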

The exact value of the constant {C_n} is known only in the case {n=1}, with a remarkable result of Melas establishing that {C_1 = \frac{11+\sqrt{61}}{12}}. Classical covering lemma arguments give the exponential upper bound {C_n \leq 2^n} when properly optimised (a direct application of the Vitali covering lemma gives {C_n \leq 5^n}, but one can reduce {5} to {2} by being careful). In an important paper of Stein and Strömberg, the improved bound {C_n = O( n \log n )} was obtained for any convex norm by a more intricate covering lemma argument, and the slight improvement {C_n = O(n)} was obtained in the Euclidean case by another argument, more adapted to the Euclidean setting, that relied on heat kernels. In the other direction, a recent result of Aldaz shows that {C_n \rightarrow \infty} in the case of the {\ell^\infty} norm, and in fact in an even more recent preprint of Aubrun, the lower bound {C_n \gg_\epsilon \log^{1-\epsilon} n} for any {\epsilon > 0} has been obtained in this case. However, these lower bounds do not apply in the Euclidean case, and one may still conjecture that {C_n} is in fact uniformly bounded in this case.

Unfortunately, we do not make direct progress on these problems here. However, we do show that the Stein-Strömberg bound {C_n = O(n \log n)} is extremely general, applying to a wide class of metric measure spaces obeying a certain “microdoubling condition at dimension {n}“; and conversely, in such level of generality, it is essentially the best estimate possible, even with additional metric measure hypotheses on the space. Thus, if one wants to improve this bound for a specific maximal inequality, one has to use specific properties of the geometry (such as the connections between Euclidean balls and heat kernels). Furthermore, in the general setting of metric measure spaces, one has a general localisation principle, which roughly speaking asserts that in order to prove a maximal inequality over all scales {r \in (0,+\infty)}, it suffices to prove such an inequality in a smaller range {r \in [R, nR]} uniformly in {R>0}. It is this localisation which ultimately explains the significance of the {n \log n} growth in the Stein-Strömberg result: under microdoubling, scales within a factor {1+\frac{1}{n}} of each other are essentially equivalent, so there are about {\log n / \log(1+\frac{1}{n}) \approx n \log n} essentially distinct scales in any range {[R,nR]}. It also shows that if one restricts the radii {r} to a lacunary range (such as powers of {2}), the best constant improves to {O(\log n)}; if one restricts the radii to an even sparser range, such as powers of {n}, the best constant becomes {O(1)}.
