In the theory of dense graphs on $n$ vertices, where $n$ is large, a fundamental role is played by the Szemerédi regularity lemma:
Lemma 1 (Regularity lemma, standard version) Let $G = (V,E)$ be a graph on $n$ vertices, and let $\epsilon > 0$ and $k_0 \geq 1$. Then there exists a partition of the vertices $V = V_1 \cup \ldots \cup V_k$, with $k$ bounded below by $k_0$ and above by a quantity $C(k_0,\epsilon)$ depending only on $k_0, \epsilon$, obeying the following properties:
- (Equitable partition) For any $1 \leq i < j \leq k$, the cardinalities $|V_i|, |V_j|$ of $V_i$ and $V_j$ differ by at most $1$.
- (Regularity) For all but at most $\epsilon k^2$ pairs $1 \leq i < j \leq k$, the portion of the graph $G$ between $V_i$ and $V_j$ is $\epsilon$-regular in the sense that one has $|d(A,B) - d(V_i,V_j)| \leq \epsilon$ for any $A \subset V_i$ and $B \subset V_j$ with $|A| \geq \epsilon |V_i|$ and $|B| \geq \epsilon |V_j|$, where $d(A,B) := |\{ (a,b) \in A \times B : \{a,b\} \in E \}| / (|A| |B|)$ is the density of edges between $A$ and $B$.
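To make these definitions concrete, here is a minimal Python sketch of the edge density $d(A,B)$ and of the $\epsilon$-regularity condition. All names are illustrative rather than standard, and the regularity test below only samples a few random subset pairs: the true definition quantifies over all sufficiently large $A$ and $B$, so sampling can refute regularity but never certify it.

```python
import random

# Represent a graph as a set of frozenset edges {u, v} on hashable vertices.

def edge_density(edges, A, B):
    """d(A,B): the fraction of pairs (a, b) in A x B joined by an edge."""
    hits = sum(1 for a in A for b in B if frozenset((a, b)) in edges)
    return hits / (len(A) * len(B))

def looks_eps_regular(edges, Vi, Vj, eps, trials=100):
    """Randomized spot-check of eps-regularity between non-empty cells Vi, Vj.

    The definition requires |d(A,B) - d(Vi,Vj)| <= eps for *all* A, B with
    |A| >= eps|Vi| and |B| >= eps|Vj|; we merely sample such pairs, so a
    True answer is evidence of regularity, not a proof.
    """
    Vi, Vj = list(Vi), list(Vj)
    d0 = edge_density(edges, Vi, Vj)
    for _ in range(trials):
        A = random.sample(Vi, max(1, int(eps * len(Vi))))
        B = random.sample(Vj, max(1, int(eps * len(Vj))))
        if abs(edge_density(edges, A, B) - d0) > eps:
            return False  # found a large density fluctuation
    return True
```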
This lemma becomes useful in the regime when $n$ is very large compared to $k_0$ or $1/\epsilon$, because all the conclusions of the lemma are uniform in $n$. Very roughly speaking, it says that “up to errors of size $\epsilon$”, a large graph can be more or less described completely by a bounded number of quantities, namely the edge densities $d(V_i,V_j)$. This can be interpreted as saying that the space of all graphs is totally bounded (and hence precompact) in a suitable metric space, thus allowing one to take formal limits of sequences (or subsequences) of graphs; see for instance this paper of Lovász and Szegedy for a discussion.
For various technical reasons it is easier to work with a slightly weaker version of the lemma, which allows for the cells to have unequal sizes:
Lemma 2 (Regularity lemma, weighted version) Let $G = (V,E)$ be a graph on $n$ vertices, and let $\epsilon > 0$. Then there exists a partition of the vertices $V = V_1 \cup \ldots \cup V_k$, with $k$ bounded above by a quantity $C(\epsilon)$ depending only on $\epsilon$, obeying the following property:
- (Regularity) One has
$$\sum_{(V_i,V_j)\ \text{not}\ \epsilon\text{-regular}} |V_i|\, |V_j| = O( \epsilon |V|^2 ), \qquad (1)$$
where the sum is over all pairs $1 \leq i \leq j \leq k$ for which $G$ is not $\epsilon$-regular between $V_i$ and $V_j$.
While Lemma 2 is, strictly speaking, weaker than Lemma 1 in that it does not enforce the equitable size property between the atoms, in practice it seems that the two lemmas are roughly of equal utility; most of the combinatorial consequences of Lemma 1 can also be proven using Lemma 2. The point is that one always has to remember to weight each cell $V_i$ by its density $|V_i|/|V|$, rather than by giving each cell an equal weight as in Lemma 1. Lemma 2 also has the advantage that one can easily generalise the result from finite vertex sets $V$ to other probability spaces (for instance, one could weight $V$ with something other than the uniform distribution). For applications to hypergraph regularity, it turns out to be slightly more convenient to have two partitions (coarse and fine) rather than just one; see for instance my own paper on this topic. In any event, the arguments that we give below to prove Lemma 2 can also be modified to give a proof of Lemma 1.

The proof of the regularity lemma is usually conducted by a greedy algorithm. Very roughly speaking, one starts with the trivial partition of $V$. If this partition already regularises the graph, we are done; if not, this means that there are some sets $A$ and $B$ in which there is a significant density fluctuation beyond what has already been detected by the original partition. One then adds these sets to the partition and iterates the argument. Every time a new density fluctuation is incorporated into the partition that models the original graph, this increases a certain “index” or “energy” of the partition. On the other hand, this energy remains bounded no matter how complex the partition, so eventually one must reach a long “energy plateau” in which no further refinement is possible, at which point one can find the regular partition.
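In code, the “index” or “energy” driving this argument is just the mean square of the pairwise edge densities, weighted by cell sizes; here is a minimal sketch (illustrative names, reusing `edge_density` from the sketch above). Refining the partition can only increase this quantity, and it can never exceed $1$, which is what forces the energy plateau.

```python
def partition_energy(edges, cells, n):
    """Energy (index) of a partition of an n-vertex graph:
    sum over cell pairs of d(V_i, V_j)^2 |V_i||V_j| / n^2.
    Lies in [0, 1] and is nondecreasing under refinement."""
    energy = 0.0
    for Vi in cells:
        for Vj in cells:
            if Vi and Vj:  # skip empty cells
                energy += edge_density(edges, Vi, Vj) ** 2 * len(Vi) * len(Vj)
    return energy / (n * n)
```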
One disadvantage of the greedy algorithm is that it is not efficient in the limit $n \to \infty$, as it requires one to search over all pairs of subsets $A, B$ of a given pair $V_i, V_j$ of cells, which is an exponentially long search. There are more algorithmically efficient ways to regularise; for instance, a polynomial time algorithm was given by Alon, Duke, Lefmann, Rödl, and Yuster. However, one can do even better, if one is willing to (a) allow cells of unequal size, (b) allow a small probability of failure, (c) have the ability to sample vertices from $V$ at random, and (d) allow for the cells to be defined “implicitly” (via their relationships with a fixed set of reference vertices) rather than “explicitly” (as a list of vertices). In that case, one can regularise a graph in a number of operations which is bounded uniformly in $n$. Indeed, one has
Lemma 3 (Regularity lemma via random neighbourhoods) Let $\epsilon > 0$. Then there exist integers $M_1, \ldots, M_m \geq 1$ with the following property: whenever $G = (V,E)$ is a graph on finitely many vertices, if one selects one of the integers $M_r$ uniformly at random from $M_1, \ldots, M_m$, then selects $M_r$ vertices $v_1, \ldots, v_{M_r}$ uniformly from $V$ at random, then the $2^{M_r}$ vertex cells $V_1, \ldots, V_{2^{M_r}}$ (some of which can be empty) generated by the vertex neighbourhoods $N(v_t) := \{ v \in V : \{v, v_t\} \in E \}$ for $1 \leq t \leq M_r$, will obey the conclusions of Lemma 2 with probability at least $1 - O(\epsilon)$.
Thus, roughly speaking, one can regularise a graph simply by taking a large number of random vertex neighbourhoods, and using the partition (or Venn diagram) generated by these neighbourhoods as the partition. The intuition is that if there is any non-uniformity in the graph (e.g. if the graph exhibits bipartite behaviour), this will bias the random neighbourhoods to seek out the partitions that would regularise that non-uniformity (e.g. vertex neighbourhoods would begin to fill out the two vertex cells associated to the bipartite property); if one takes sufficiently many such random neighbourhoods, the probability that all detectable non-uniformity is captured by the partition should converge to $1$. (It is more complicated than this, because the finer one makes the partition, the finer the types of non-uniformity one can begin to detect, but this is the basic idea.)
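Concretely, the partition in Lemma 3 can be generated as in the following sketch (again with illustrative names): each vertex is assigned a boolean signature recording which of the sampled neighbourhoods contain it, and the at most $2^m$ distinct signatures are the cells. This is exactly the “implicit” description mentioned above: a cell is specified by its relationship to the reference vertices, not by a list of its elements.

```python
import random

def random_neighbourhood_cells(vertices, edges, m):
    """Partition the vertex list by the Venn diagram of the neighbourhoods
    N(v_1), ..., N(v_m) of m uniformly chosen reference vertices."""
    refs = [random.choice(vertices) for _ in range(m)]  # repeats are harmless
    cells = {}
    for v in vertices:
        # Signature of v: which reference neighbourhoods contain it.
        sig = tuple(frozenset((v, vt)) in edges for vt in refs)
        cells.setdefault(sig, set()).add(v)
    return list(cells.values())  # the non-empty cells among the 2^m possible
```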
This fact seems to be reasonably well-known folklore, discovered independently by many authors; it is for instance quite close to the graph property testing results of Alon and Shapira, and also appears implicitly in a paper of Ishigami, as well as a paper of Austin (and perhaps even more implicitly in a paper of mine). However, in none of these papers is the above lemma stated explicitly. I was asked about this lemma recently, so I decided to provide a proof here.
— 1. Warmup: a weak regularity lemma —
To motivate the idea, let’s first prove a weaker but simpler (and more quantitatively effective) regularity lemma, analogous to that established by Frieze and Kannan:
Lemma 4 (Weak regularity lemma via random neighbourhoods) Let $\epsilon > 0$. Then there exists an integer $M \geq 1$ with the following property: whenever $G = (V,E)$ is a graph on finitely many vertices, if one selects an integer $1 \leq m \leq M$ uniformly at random, then selects $m$ vertices $v_1, \ldots, v_m$ uniformly from $V$ at random, then the $2^m$ vertex cells $V_1, \ldots, V_{2^m}$ (some of which can be empty) generated by the vertex neighbourhoods $N(v_t)$ for $1 \leq t \leq m$, obey the following property with probability at least $1 - O(\epsilon)$: for any vertex sets $A, B \subset V$, the number of edges $|E \cap (A \times B)|$ connecting $A$ and $B$ can be approximated by the formula
$$|E \cap (A \times B)| = \sum_{1 \leq i, j \leq 2^m} d(V_i, V_j)\, |A \cap V_i|\, |B \cap V_j| + O( \epsilon |V|^2 ). \qquad (2)$$
This weaker lemma only lets us count “macroscopic” edge densities $d(A,B)$, when $A, B$ are dense subsets of $V$, whereas the full regularity lemma is stronger in that it also controls “microscopic” edge densities $d(A,B)$ where $A, B$ are now dense subsets of the cells $V_i, V_j$. Nevertheless this weaker lemma is easier to prove and already illustrates many of the ideas.
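One can test the approximation (2) numerically: the sketch below (illustrative names; `edge_density` and `random_neighbourhood_cells` as in the earlier sketches) computes the exact left-hand side and the density-model prediction on the right, counting ordered pairs in both cases; on reasonably large graphs the two should differ by $O(\epsilon |V|^2)$ with high probability.

```python
def true_edge_count(edges, A, B):
    """Left-hand side of (2): ordered pairs (a, b) in A x B with {a,b} an edge."""
    return sum(1 for a in A for b in B if frozenset((a, b)) in edges)

def model_edge_count(edges, cells, A, B):
    """Right-hand side of (2) without the error term:
    sum over cell pairs of d(V_i, V_j) |A ∩ V_i| |B ∩ V_j|.
    A, B and the cells are assumed to be Python sets."""
    total = 0.0
    for Vi in cells:
        for Vj in cells:
            a, b = len(A & Vi), len(B & Vj)
            if a and b:
                total += edge_density(edges, Vi, Vj) * a * b
    return total
```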
Let’s now prove this lemma. Fix $\epsilon$, let $M$ be an integer to be chosen later, let $G = (V,E)$ be a graph, and select $v_1, \ldots, v_{M+2}$ uniformly from $V$ at random. (There can of course be vertices selected more than once; this will not bother us.) Let $1 \leq m \leq M$ and $V_1, \ldots, V_{2^m}$ be as in the above lemma. For notational purposes it is more convenient to work with the (random) $\sigma$-algebra $\mathcal{B}_m$ generated by the $N(v_1), \ldots, N(v_m)$ (i.e. the collection of all sets that can be formed from the $N(v_1), \ldots, N(v_m)$ by boolean operations); this is an atomic $\sigma$-algebra whose atoms are precisely the (non-empty) cells $V_1, \ldots, V_{2^m}$ in the partition. Observe that these $\sigma$-algebras are nested: $\mathcal{B}_1 \subset \mathcal{B}_2 \subset \cdots$.
We will use the trick of turning sets into functions, and view the graph $E \subset V \times V$ as a function $1_E: V \times V \to \{0,1\}$. One can then form the conditional expectation $\mathbb{E}( 1_E \mid \mathcal{B}_m \times \mathcal{B}_m )$ of this function to the product $\sigma$-algebra $\mathcal{B}_m \times \mathcal{B}_m$, whose value on $V_i \times V_j$ is simply the average value of $1_E$ on the product set $V_i \times V_j$. (When $V_i$ and $V_j$ are different, this is simply the edge density $d(V_i,V_j)$.) One can view $\mathbb{E}( 1_E \mid \mathcal{B}_m \times \mathcal{B}_m )$ more combinatorially, as a weighted graph on $V$ such that all edges between two distinct cells $V_i$, $V_j$ have the same constant weight of $d(V_i,V_j)$.
We give $V$ (and $V \times V$) the uniform probability measure, and define the energy $e_m$ at time $m$ to be the (random) quantity
$$e_m := \| \mathbb{E}( 1_E \mid \mathcal{B}_m \times \mathcal{B}_m ) \|_{L^2(V \times V)}^2 = \sum_{1 \leq i,j \leq 2^m} d(V_i,V_j)^2\, \frac{|V_i|\, |V_j|}{|V|^2};$$
one can interpret this as the mean square of the edge densities $d(V_i,V_j)$, weighted by the size of the cells $V_i, V_j$.
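The monotonicity of these energies (established just below) is easy to watch numerically; the sketch here (illustrative names, helpers as in the earlier sketches) grows the list of reference vertices one at a time, rebuilds the cells, and recomputes $e_m$, producing a nondecreasing sequence up to floating-point error.

```python
import random

def energies_along_refinement(vertices, edges, M):
    """Return [e_1, ..., e_M] for the nested partitions generated by the
    neighbourhoods of M random vertices, added one at a time."""
    refs, energies = [], []
    n = len(vertices)
    for _ in range(M):
        refs.append(random.choice(vertices))
        cells = {}
        for v in vertices:
            sig = tuple(frozenset((v, vt)) in edges for vt in refs)
            cells.setdefault(sig, set()).add(v)
        e = sum(
            edge_density(edges, Vi, Vj) ** 2 * len(Vi) * len(Vj)
            for Vi in cells.values()
            for Vj in cells.values()
        ) / (n * n)
        energies.append(e)
    return energies
```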
From Pythagoras’ theorem we have the identity
$$e_{m'} = e_m + \| \mathbb{E}( 1_E \mid \mathcal{B}_{m'} \times \mathcal{B}_{m'} ) - \mathbb{E}( 1_E \mid \mathcal{B}_m \times \mathcal{B}_m ) \|_{L^2(V \times V)}^2$$
for all $m' \geq m$; in particular, the $e_m$ are increasing in $m$. This implies that the expectations $\mathbb{E} e_m$ are also increasing in $m$. On the other hand, these expectations are bounded between $0$ and $1$. Thus, if we select $1 \leq m \leq M$ uniformly at random, the expectation of $e_{m+2} - e_m$ telescopes to be $O(1/M)$. Thus, by Markov’s inequality, with probability $1 - O(\epsilon)$ we can freeze $m$ and $v_1, \ldots, v_m$ such that we have the conditional expectation bound
$$\mathbb{E}( e_{m+2} - e_m \mid m, v_1, \ldots, v_m ) = O\left( \frac{1}{\epsilon M} \right). \qquad (3)$$
Suppose $m$ and $v_1, \ldots, v_m$ have this property. We split
$$1_E = f_{U^\perp} + f_U,$$
where
$$f_{U^\perp} := \mathbb{E}( 1_E \mid \mathcal{B}_m \times \mathcal{B}_m )$$
and
$$f_U := 1_E - \mathbb{E}( 1_E \mid \mathcal{B}_m \times \mathcal{B}_m ).$$
We now assert that the partition induced by $\mathcal{B}_m$ obeys the conclusions of Lemma 4. For this, we observe various properties of the two components of $1_E$:

Lemma 5 ($f_{U^\perp}$ is structured) $f_{U^\perp}$ is constant on each product set $V_i \times V_j$.
Proof: This is clear from construction.
Lemma 6 ($f_U$ is pseudorandom) The expression
$$\mathbb{E}_{v_{m+1}, v_{m+2} \in V} \left| \mathbb{E}_{(v,w) \in V \times V} f_U(v,w)\, f_U(v, v_{m+2})\, f_U(v_{m+1}, w) \right|$$
is of size $O\left( \left( \frac{1}{\epsilon M} \right)^{1/2} \right)$.
Proof: The left-hand side can be rewritten as
$$\mathbb{E}_{v_{m+1}, v_{m+2} \in V}\, b(v_{m+1}, v_{m+2})\, \mathbb{E}_{(v,w) \in V \times V} f_U(v,w)\, f_U(v, v_{m+2})\, f_U(v_{m+1}, w)$$
for some (random) sign $b(v_{m+1}, v_{m+2}) \in \{-1,+1\}$. Observe that the function $(v,w) \mapsto f_U(v, v_{m+2})\, f_U(v_{m+1}, w)$ is measurable with respect to $\mathcal{B}_{m+2} \times \mathcal{B}_{m+2}$, so we can rewrite this expression as
$$\mathbb{E}_{v_{m+1}, v_{m+2} \in V}\, b(v_{m+1}, v_{m+2})\, \mathbb{E}_{(v,w) \in V \times V} \mathbb{E}( f_U \mid \mathcal{B}_{m+2} \times \mathcal{B}_{m+2} )(v,w)\, f_U(v, v_{m+2})\, f_U(v_{m+1}, w).$$
Applying Cauchy-Schwarz, one can bound this by
$$\mathbb{E}_{v_{m+1}, v_{m+2} \in V} \| \mathbb{E}( f_U \mid \mathcal{B}_{m+2} \times \mathcal{B}_{m+2} ) \|_{L^2(V \times V)}.$$
But from Pythagoras we have
$$\| \mathbb{E}( f_U \mid \mathcal{B}_{m+2} \times \mathcal{B}_{m+2} ) \|_{L^2(V \times V)}^2 = e_{m+2} - e_m,$$
and so the claim follows from (3) and another application of Cauchy-Schwarz.
Now we can prove Lemma 4. Observe that
$$|E \cap (A \times B)| - \sum_{1 \leq i,j \leq 2^m} d(V_i,V_j)\, |A \cap V_i|\, |B \cap V_j| = |V|^2\, \mathbb{E}_{(v,w) \in V \times V} f_U(v,w)\, 1_A(v)\, 1_B(w).$$
Applying Cauchy-Schwarz twice in $v$ and $w$ and using Lemma 6, we see that the RHS is $O\left( \left( \frac{1}{\epsilon M} \right)^{1/8} |V|^2 \right)$; choosing $M := \epsilon^{-9}$ we obtain the claim.
— 2. Strong regularity via random neighbourhoods —
We now prove Lemma 3, which of course implies Lemma 2.
Fix $\epsilon$ and a graph $G = (V,E)$ on $n$ vertices. We randomly select an infinite sequence $v_1, v_2, v_3, \ldots$ of vertices in $V$, drawn uniformly and independently at random. We define $\mathcal{B}_m$, $e_m$ as before.
Now let $m$ be a large number depending on $\epsilon$ to be chosen later, let $F: \mathbf{Z}^+ \to \mathbf{Z}^+$ be a rapidly growing function (also to be chosen later), and set $M_1 := 1$ and $M_{r+1} := M_r + F(M_r)$ for all $r \geq 1$, thus $M_r$ grows rapidly to infinity. The expected energies $\mathbb{E} e_{M_r}$ are increasing in $r$ and bounded between $0$ and $1$, thus if we pick $1 \leq r \leq m$ uniformly at random, the expectation of $\mathbb{E}( e_{M_{r+1}} - e_{M_r} )$ telescopes to be $O(1/m)$. Thus, by Markov’s inequality, with probability $1 - O(\epsilon)$ we will have
$$\mathbb{E}( e_{M_{r+1}} - e_{M_r} ) = O\left( \frac{1}{\epsilon m} \right).$$
Assume that $r$ is chosen to obey this. Then, by another application of the pigeonhole principle, we can find $M_r \leq m' < M_{r+1}$ such that
$$\mathbb{E}( e_{m'+2} - e_{m'} ) = O\left( \frac{1}{\epsilon m F(M_r)} \right).$$
Fix this $m'$. We have
$$\mathbb{E}( e_{m'} - e_{M_r} ) = O\left( \frac{1}{\epsilon m} \right) \quad \text{and} \quad \mathbb{E}\, \mathbb{E}( e_{m'+2} - e_{m'} \mid v_1, \ldots, v_{m'} ) = O\left( \frac{1}{\epsilon m F(M_r)} \right),$$
so by Markov’s inequality, with probability $1 - O(\epsilon)$, $v_1, \ldots, v_{m'}$ are such that
$$e_{m'} - e_{M_r} = O\left( \frac{1}{\epsilon^2 m} \right) \qquad (4)$$
and also obey the conditional expectation bound
$$\mathbb{E}( e_{m'+2} - e_{m'} \mid v_1, \ldots, v_{m'} ) = O\left( \frac{1}{\epsilon^2 m F(M_r)} \right). \qquad (5)$$
Assume that this is the case. We split
$$1_E = f_{U^\perp} + f_{err} + f_U,$$
where
$$f_{U^\perp} := \mathbb{E}( 1_E \mid \mathcal{B}_{M_r} \times \mathcal{B}_{M_r} ),$$
$$f_{err} := \mathbb{E}( 1_E \mid \mathcal{B}_{m'} \times \mathcal{B}_{m'} ) - \mathbb{E}( 1_E \mid \mathcal{B}_{M_r} \times \mathcal{B}_{M_r} ),$$
$$f_U := 1_E - \mathbb{E}( 1_E \mid \mathcal{B}_{m'} \times \mathcal{B}_{m'} ).$$
We now assert that the partition induced by $\mathcal{B}_{M_r}$ obeys the conclusions of Lemma 2. For this, we observe various properties of the three components of $1_E$:
Lemma 7 ($f_{U^\perp}$ locally constant) $f_{U^\perp}$ is constant on each product set $V_i \times V_j$.
Proof: This is clear from construction.
Lemma 8 ($f_{err}$ small) We have $\| f_{err} \|_{L^2(V \times V)}^2 = O\left( \frac{1}{\epsilon^2 m} \right)$.

Proof: This follows from (4) and Pythagoras’ theorem.
Lemma 9 ($f_U$ uniform) The expression
$$\mathbb{E}_{v_{m'+1}, v_{m'+2} \in V} \left| \mathbb{E}_{(v,w) \in V \times V} f_U(v,w)\, f_U(v, v_{m'+2})\, f_U(v_{m'+1}, w) \right|$$
is of size $O\left( \left( \frac{1}{\epsilon^2 m F(M_r)} \right)^{1/2} \right)$.
Proof: This follows by repeating the proof of Lemma 6, but using (5) instead of (3).
Now we verify the regularity.
First, we eliminate small atoms: the pairs for which $|V_i| \leq \epsilon 2^{-M_r} |V|$ clearly give a net contribution of at most $\epsilon |V|^2$ to (1) and are acceptable; similarly for those pairs for which $|V_j| \leq \epsilon 2^{-M_r} |V|$. So we may henceforth assume that
$$|V_i|, |V_j| > \epsilon 2^{-M_r} |V|. \qquad (6)$$
Now, let $A \subset V_i$, $B \subset V_j$ have densities $|A|/|V_i| \geq \epsilon$, $|B|/|V_j| \geq \epsilon$; then
$$d(A,B) = \frac{|V|^2}{|A||B|}\, \mathbb{E}_{(v,w) \in V \times V} 1_E(v,w)\, 1_A(v)\, 1_B(w).$$
We divide $1_E$ into the three pieces $f_{U^\perp}$, $f_{err}$, $f_U$.

The contribution of $f_{U^\perp}$ is exactly $d(V_i,V_j)$.
The contribution of $f_{err}$ can be bounded using Cauchy-Schwarz as
$$\frac{|V|^2}{|A||B|} \left| \mathbb{E}_{(v,w) \in V \times V} f_{err}(v,w)\, 1_A(v)\, 1_B(w) \right| \leq \frac{1}{\epsilon} \left( \frac{|V|^2}{|V_i||V_j|}\, \| f_{err}\, 1_{V_i \times V_j} \|_{L^2(V \times V)}^2 \right)^{1/2}.$$
Using Lemma 8 and Chebyshev’s inequality, we see that the pairs for which this quantity exceeds $\epsilon$ will contribute at most $O\left( \frac{1}{\epsilon^6 m} |V|^2 \right)$ to (1), which is acceptable if we choose $m$ so that $m \geq \epsilon^{-7}$. Let us now discard these bad pairs.
Finally, the contribution of $f_U$ can be bounded by two applications of Cauchy-Schwarz and Lemma 9 as
$$\frac{|V|^2}{|A||B|} \left| \mathbb{E}_{(v,w) \in V \times V} f_U(v,w)\, 1_A(v)\, 1_B(w) \right| = \frac{|V|^2}{|A||B|}\, O\left( \left( \frac{1}{\epsilon^2 m F(M_r)} \right)^{1/8} \right),$$
which by (6) is bounded by
$$O\left( \frac{2^{2 M_r}}{\epsilon^4} \left( \frac{1}{\epsilon^2 m F(M_r)} \right)^{1/8} \right).$$
This can be made $O(\epsilon)$ by selecting $F$ sufficiently rapidly growing depending on $\epsilon$. Putting this all together we see that
$$d(A,B) = d(V_i,V_j) + O(\epsilon)$$
for all remaining pairs $(V_i, V_j)$ and all $A \subset V_i$, $B \subset V_j$ with $|A| \geq \epsilon |V_i|$ and $|B| \geq \epsilon |V_j|$, which (after running the argument with $\epsilon$ replaced by $\epsilon/C$ for a suitable absolute constant $C$) gives the desired regularity.
Remark 1 Of course, this argument gives tower-exponential bounds (as $F$ is exponential in nature and needs to be iterated $m$ times), which will be familiar to any reader already acquainted with the regularity lemma.
Remark 2 One can take the partition induced by random neighbourhoods here and carve it up further to be both equitable and (mostly) regular, thus recovering a proof of Lemma 1, by following the arguments in this paper of mine. Of course, when one does so, one no longer has a partition created purely from random neighbourhoods, but it is pretty clear that one is not going to be able to make an equitable partition just from boolean operations applied to a few random neighbourhoods.
9 comments
27 April, 2009 at 9:09 am
Asaf
Hi Terry,
Such an O(n) algorithm appears (explicitly) in the following paper of mine with Fischer and Matsliach.
Click to access regalg.pdf
That algorithm actually has the added advantage of being able to find (more or less) the smallest regular partition in the input.
27 April, 2009 at 10:31 am
Terence Tao
Dear Asaf: thanks for the reference! It again seems to be slightly different from the random neighbourhoods algorithm (which is an O(1) algorithm rather than O(n), but only defines the partition implicitly and does not make it equitable) but certainly in the same spirit.
27 April, 2009 at 8:11 pm
Anup
Hi Terry, for Lemma 2, do you want to allow i=j in the sum?
Otherwise it seems that partitioning the graph into one part V1 = V would trivially satisfy the conclusions of the lemma.
[Hmm, you’re right. Thanks for the correction! – T.]
8 May, 2009 at 8:14 pm
Szemeredi’s regularity lemma via the correspondence principle « What’s new
[…] math.PR | Tags: correspondence principle, szemeredi regularity lemma | by Terence Tao In a previous post, we discussed the Szemerédi regularity lemma, and how a given graph could be regularised by […]
5 August, 2009 at 5:17 pm
Moser’s entropy compression argument « What’s new
[…] is often referred to as the “index”). These examples are related; see this blog post for further discussion. The general strategy here is to keep looking for useful pieces of energy […]
24 December, 2011 at 11:31 am
Szemerédi’s regularity lemma « Disquisitiones Mathematicae
[…] the book The probabilistic method of Alon and Spencer, the survey of Komlós and M. Simonovits and Tao’s perspective via random partitions. Merry Christmas!! Share this:TwitterLike this:LikeBe the first to like this […]
3 December, 2012 at 5:35 pm
The spectral proof of the Szemeredi regularity lemma « What’s new
[…] proofs of this lemma, which is actually not that difficult to establish; see for instance these previous blog posts for some examples. In this post I would like to record one further proof, based on the […]
29 May, 2014 at 7:19 pm
deep
Hi Terry,
I wanted to make sense of the Szemerédi regularity lemma (SRL) for the Erdős–Rényi random graph G(n,p).
If I understood correctly, the SRL states that any random dense graph (the adjacency matrix) can be “approximately” partitioned into block-diagonal structures (after proper rearrangement).
Let’s generate G(n=10000, p=.1), a dense random network, but the corresponding adjacency matrix A can NOT be represented in a block-diagonal form (whatever rearrangement we do). Then my question is, how to interpret the SRL in this set-up? What am I missing?
Thank you so much for your time, and apologies for my ignorance.
30 May, 2014 at 7:56 am
Terence Tao
Actually, the regularity lemma asserts (roughly speaking) that any dense graph can be approximately partitioned, after rearrangement, into blocks, in which the graph behaves like a random graph in each block (of some density p which need not be 0 or 1, but can also be something in between). So, if one starts with an Erdős–Rényi graph, one is already done: we only need one block, because the graph already exhibits random behaviour in that block.
To put it another way, the regularity lemma tells us that every large dense adjacency matrix is in some sense a “combination” of a bounded rank matrix (which divides up into a bounded number of blocks) and a random matrix; the “block-diagonal” matrices and the Erdős–Rényi matrices reflect the two possible extremes of behaviour, and every other graph is in some sense a mixture of these two extremes.