
In the theory of dense graphs on {n} vertices, where {n} is large, a fundamental role is played by the Szemerédi regularity lemma:

Lemma 1 (Regularity lemma, standard version) Let {G = (V,E)} be a graph on {n} vertices, and let {\epsilon > 0} and {k_0 \geq 0}. Then there exists a partition of the vertices {V = V_1 \cup \ldots \cup V_k}, with {k_0 \leq k \leq C(k_0,\epsilon)} bounded below by {k_0} and above by a quantity {C(k_0,\epsilon)} depending only on {k_0, \epsilon}, obeying the following properties:

  • (Equitable partition) For any {1 \leq i,j \leq k}, the cardinalities {|V_i|, |V_j|} of {V_i} and {V_j} differ by at most {1}.
  • (Regularity) For all but at most {\epsilon k^2} pairs {1 \leq i < j \leq k}, the portion of the graph {G} between {V_i} and {V_j} is {\epsilon}-regular in the sense that one has

    \displaystyle  |d( A, B ) - d( V_i, V_j )| \leq \epsilon

    for any {A \subset V_i} and {B \subset V_j} with {|A| \geq \epsilon |V_i|, |B| \geq \epsilon |V_j|}, where {d(A,B) := |E \cap (A \times B)|/(|A| |B|)} is the density of edges between {A} and {B}.

This lemma becomes useful in the regime when {n} is very large compared to {k_0} or {1/\epsilon}, because all the conclusions of the lemma are uniform in {n}. Very roughly speaking, it says that “up to errors of size {\epsilon}“, a large graph can be more or less described completely by a bounded number of quantities {d(V_i, V_j)}. This can be interpreted as saying that the space of all graphs is totally bounded (and hence precompact) in a suitable metric space, thus allowing one to take formal limits of sequences (or subsequences) of graphs; see for instance this paper of Lovasz and Szegedy for a discussion.
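
To make the key quantity concrete, here is a minimal Python sketch (purely illustrative and not taken from the post; the representation of the graph as a set of ordered edge pairs is an assumption made for convenience) of the edge density {d(A,B)} that the regularity condition controls:

    # Minimal sketch (illustration only): the edge density d(A,B) from Lemma 1.
    # The graph is assumed to be given as a set `edges` of ordered pairs;
    # for an undirected graph one would store both (u, v) and (v, u).

    def density(edges, A, B):
        """Return d(A,B) = |E ∩ (A × B)| / (|A| |B|)."""
        if not A or not B:
            return 0.0
        count = sum(1 for a in A for b in B if (a, b) in edges)
        return count / (len(A) * len(B))

Note that certifying {\epsilon}-regularity of a pair {(V_i,V_j)} directly from the definition requires controlling {d(A,B)} for exponentially many subsets {A \subset V_i}, {B \subset V_j}; this inefficiency will be relevant in the discussion below.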

For various technical reasons it is easier to work with a slightly weaker version of the lemma, which allows for the cells {V_1,\ldots,V_k} to have unequal sizes:

Lemma 2 (Regularity lemma, weighted version) Let {G = (V,E)} be a graph on {n} vertices, and let {\epsilon > 0}. Then there exists a partition of the vertices {V = V_1 \cup \ldots \cup V_k}, with {1 \leq k \leq C(\epsilon)} bounded above by a quantity {C(\epsilon)} depending only on {\epsilon}, obeying the following properties:

  • (Regularity) One has

    \displaystyle  \sum_{(V_i,V_j) \hbox{ not } \epsilon-\hbox{regular}} |V_i| |V_j| = O(\epsilon |V|^2) \ \ \ \ \ (1)

    where the sum is over all pairs {1 \leq i \leq j \leq k} for which {G} is not {\epsilon}-regular between {V_i} and {V_j}.
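
To spell out condition (1) computationally, here is a small illustrative sketch (the test `is_eps_regular` is a hypothetical oracle, since deciding regularity exactly is expensive) of the weighted count of irregular pairs appearing on the left-hand side:

    # Sketch of the left-hand side of (1).  `is_eps_regular` is a hypothetical
    # oracle deciding whether G is eps-regular between two given cells.

    def irregular_weight(partition, is_eps_regular):
        """Sum of |V_i| |V_j| over pairs 1 <= i <= j <= k that are not eps-regular."""
        total = 0
        for i in range(len(partition)):
            for j in range(i, len(partition)):
                if not is_eps_regular(partition[i], partition[j]):
                    total += len(partition[i]) * len(partition[j])
        return total  # Lemma 2 asks that this be O(eps |V|^2)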

While Lemma 2 is, strictly speaking, weaker than Lemma 1 in that it does not enforce the equitable size property between the atoms, in practice it seems that the two lemmas are roughly of equal utility; most of the combinatorial consequences of Lemma 1 can also be proven using Lemma 2. The point is that one always has to remember to weight each cell {V_i} by its density {|V_i|/|V|}, rather than giving each cell an equal weight as in Lemma 1. Lemma 2 also has the advantage that one can easily generalise the result from finite vertex sets {V} to other probability spaces (for instance, one could weight {V} with something other than the uniform distribution). For applications to hypergraph regularity, it turns out to be slightly more convenient to have two partitions (coarse and fine) rather than just one; see for instance my own paper on this topic. In any event, the arguments we give below to prove Lemma 2 can be modified to give a proof of Lemma 1 as well.

The proof of the regularity lemma usually proceeds via a greedy algorithm. Very roughly speaking, one starts with the trivial partition of {V}. If this partition already regularises the graph, we are done; if not, this means that there are some sets {A} and {B} between which there is a significant density fluctuation beyond what has already been detected by the original partition. One then adds these sets to the partition and iterates the argument. Every time a new density fluctuation is incorporated into the partition that models the original graph, a certain "index" or "energy" of the partition increases. On the other hand, this energy remains bounded no matter how complex the partition becomes, so eventually one must reach a long "energy plateau" in which no further refinement is possible, at which point one can extract the regular partition.
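
As a rough illustration of the energy increment scheme just described, here is a schematic Python sketch of my own (not the argument given later in the post): the function `find_fluctuation` is a hypothetical placeholder for the expensive search for witness sets {A}, {B}, and the "energy" is a weighted mean-square density of the partition.

    from itertools import combinations

    def energy(edges, partition, n):
        """Index of a partition: weighted mean-square edge density over pairs of
        cells.  It is bounded (by 1) and does not decrease under refinement;
        incorporating a significant density fluctuation pushes it up by an
        amount depending only on eps, which forces the iteration below to halt."""
        total = 0.0
        for Vi, Vj in combinations(partition, 2):
            d = sum(1 for a in Vi for b in Vj if (a, b) in edges) / (len(Vi) * len(Vj))
            total += d * d * len(Vi) * len(Vj) / (n * n)
        return total

    def regularise(edges, V, eps, find_fluctuation):
        """Greedy regularisation: refine by witness sets until none are found."""
        partition = [set(V)]                      # start with the trivial partition
        while True:
            witness = find_fluctuation(edges, partition, eps)
            if witness is None:                   # no detectable fluctuation: done
                return partition
            A, B = witness                        # sets exhibiting a density fluctuation
            refined = []
            for cell in partition:                # refine every cell by A and B
                for piece in (cell & A & B, (cell & A) - B,
                              (cell & B) - A, cell - A - B):
                    if piece:
                        refined.append(piece)
            partition = refined                   # the energy increases at this step

Of course, all of the real work is hidden in `find_fluctuation`, whose brute-force implementation must examine exponentially many pairs of subsets; this is precisely the inefficiency addressed next.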

One disadvantage of the greedy algorithm is that it is not efficient in the limit {n \rightarrow \infty}, as it requires one to search over all pairs of subsets {A, B} of a given pair {V_i, V_j} of cells, which is an exponentially long search. There are more algorithmically efficient ways to regularise; for instance, a polynomial-time algorithm was given by Alon, Duke, Lefmann, Rödl, and Yuster. However, one can do even better if one is willing to (a) allow cells of unequal size, (b) allow a small probability of failure, (c) have the ability to sample vertices from {G} at random, and (d) allow the cells to be defined "implicitly" (via their relationships with a fixed set of reference vertices) rather than "explicitly" (as a list of vertices). In that case, one can regularise a graph in a number of operations that is bounded uniformly in {n}. Indeed, one has

Lemma 3 (Regularity lemma via random neighbourhoods) Let {\epsilon > 0}. Then there exist integers {M_1,\ldots,M_m} with the following property: whenever {G = (V,E)} is a graph on finitely many vertices, if one selects one of the integers {M_r} at random from {M_1,\ldots,M_m}, and then selects {M_r} vertices {v_1,\ldots,v_{M_r} \in V} uniformly at random from {V}, then the {2^{M_r}} vertex cells {V^{M_r}_1,\ldots,V^{M_r}_{2^{M_r}}} (some of which can be empty) generated by the vertex neighbourhoods {A_t := \{ v \in V: (v,v_t) \in E \}} for {1 \leq t \leq M_r} will obey the conclusions of Lemma 2 with probability at least {1-O(\epsilon)}.

Thus, roughly speaking, one can regularise a graph simply by taking a large number of random vertex neighbourhoods, and using the partition (or Venn diagram) generated by these neighbourhoods as the partition. The intuition is that if there is any non-uniformity in the graph (e.g. if the graph exhibits bipartite behaviour), this will bias the random neighbourhoods to seek out the partitions that would regularise that non-uniformity (e.g. vertex neighbourhoods would begin to fill out the two vertex cells associated to the bipartite property); if one takes sufficiently many such random neighbourhoods, the probability that all detectable non-uniformity is captured by the partition should converge to {1}. (It is more complicated than this, because the finer one makes the partition, the finer the types of non-uniformity one can begin to detect, but this is the basic idea.)
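
For concreteness, here is a Python sketch of the partition generated by random vertex neighbourhoods in Lemma 3, under the same illustrative edge-set representation as above: each vertex receives a signature recording which of the sampled neighbourhoods {A_t} contain it, and the cells are the level sets of that signature.

    import random
    from collections import defaultdict

    def random_neighbourhood_partition(edges, V, M):
        """Partition the vertex list V into the atoms generated by the
        neighbourhoods A_t = { v : (v, v_t) in E } of M random vertices."""
        samples = [random.choice(V) for _ in range(M)]   # v_1, ..., v_M, chosen uniformly
        cells = defaultdict(set)
        for v in V:
            # signature of v: its membership pattern in A_1, ..., A_M
            signature = tuple((v, v_t) in edges for v_t in samples)
            cells[signature].add(v)
        return list(cells.values())                      # at most 2^M nonempty cells

In the setting of Lemma 3, the parameter M would itself be drawn at random from the list {M_1,\ldots,M_m}; note also that the cells are described "implicitly" by the sampled vertices alone, in line with point (d) above.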

This fact seems to be reasonably well-known folklore, discovered independently by many authors; it is for instance quite close to the graph property testing results of Alon and Shapira, and also appears implicitly in a paper of Ishigami, as well as a paper of Austin (and perhaps even more implicitly in a paper of my own). However, in none of these papers is the above lemma stated explicitly. I was asked about this lemma recently, so I decided to provide a proof here.

