
We have now seen two ways to construct expander Cayley graphs {Cay(G,S)}. The first, discussed in Notes 2, is to use Cayley graphs that are projections of an infinite Cayley graph on a group with Kazhdan’s property (T). The second, discussed in Notes 3, is to combine a quasirandomness property of the group {G} with a flattening hypothesis for the random walk.

We now pursue the second approach more thoroughly. The main difficulty here is to figure out how to ensure flattening of the random walk, as it is then an easy matter to use quasirandomness to show that the random walk becomes mixing soon after it becomes flat. In the case of Selberg’s theorem, we achieved this through an explicit formula for the heat kernel on the hyperbolic plane (which is a proxy for the random walk). However, in most situations such an explicit formula is not available, and one must develop some other tool for forcing flattening, and specifically an estimate of the form

\displaystyle  \| \mu^{(n)} \|_{\ell^2(G)} \ll |G|^{-1/2+\epsilon} \ \ \ \ \ (1)

for some {n = O(\log |G|)}, where {\mu} is the uniform probability measure on the generating set {S}.
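To make the target estimate (1) concrete, here is a minimal numerical sketch (my own illustration, not part of the original argument): it computes the {\ell^2(G)} norms of the convolution powers {\mu^{(n)}} for the small group {G = SL_2(F_3)} with a standard unipotent generating set, and compares them against the flat benchmark {|G|^{-1/2}}. The group, generators, and number of steps are arbitrary choices made for the example.

```python
# A numerical sketch of the flattening estimate (1); the group G = SL_2(F_3),
# the generating set S, and the number of steps are illustrative choices.
import itertools
import numpy as np

p = 3

def mat_mul(g, h):
    """Multiply two 2x2 matrices over F_p, stored as 4-tuples (a, b, c, d)."""
    a, b, c, d = g
    e, f, u, v = h
    return ((a * e + b * u) % p, (a * f + b * v) % p,
            (c * e + d * u) % p, (c * f + d * v) % p)

# Enumerate SL_2(F_p): all 2x2 matrices over F_p with determinant 1.
elements = [m for m in itertools.product(range(p), repeat=4)
            if (m[0] * m[3] - m[1] * m[2]) % p == 1]
index = {g: i for i, g in enumerate(elements)}
N = len(elements)  # |SL_2(F_3)| = 24

# S = {A, A^{-1}, B, B^{-1}} with A, B the standard unipotent generators.
S = [(1, 1, 0, 1), (1, p - 1, 0, 1), (1, 0, 1, 1), (1, 0, p - 1, 1)]

# mu^(1) is the uniform measure on S; convolving with mu pushes mass
# from g to g*s for each generator s.
mu = np.zeros(N)
for s in S:
    mu[index[s]] += 1 / len(S)

for n in range(1, 13):
    print(f"n = {n:2d}   ||mu^(n)||_2 = {np.linalg.norm(mu):.4f}   "
          f"(flat benchmark |G|^(-1/2) = {N ** -0.5:.4f})")
    new = np.zeros(N)
    for g in elements:
        for s in S:
            new[index[mat_mul(g, s)]] += mu[index[g]] / len(S)
    mu = new
```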

In 2006, Bourgain and Gamburd introduced a general method for achieving this goal. The intuition here is that the main obstruction that prevents a random walk from spreading out to become flat over the entire group {G} is if the random walk gets trapped in some proper subgroup {H} of {G} (or perhaps in some coset {xH} of such a subgroup), so that {\mu^{(n)}(xH)} remains large for some moderately large {n}. Note that

\displaystyle  \mu^{(2n)}(H) \geq \mu^{(n)}(H x^{-1}) \mu^{(n)}(xH) = \mu^{(n)}(xH)^2,

since {\mu^{(2n)} = \mu^{(n)} * \mu^{(n)}}, {H = (H x^{-1}) \cdot (xH)}, and {\mu^{(n)}} is symmetric. By iterating this observation (applying it again with {x} equal to the identity, so that {\mu^{(4n)}(H) \geq \mu^{(2n)}(H)^2}, and so forth), we see that if {\mu^{(n)}(xH)} is too large (e.g. of size {|G|^{-o(1)}} for some {n} comparable to {\log |G|}), then it is not possible for the random walk {\mu^{(n)}} to converge to the uniform distribution in time {O(\log |G|)}, and so expansion does not occur.

A potentially more general obstruction of this type would be if the random walk gets trapped in (a coset of) an approximate group {H}. Recall that a {K}-approximate group is a subset {H} of a group {G} which is symmetric, contains the identity, and is such that {H \cdot H} can be covered by at most {K} left-translates (or equivalently, right-translates) of {H}. Such approximate groups were studied extensively in last quarter’s course. A similar argument to the one given previously shows (roughly speaking) that expansion cannot occur if {\mu^{(n)}(xH)} is too large for some coset {xH} of an approximate group.
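As a toy example of the definition (my own, not from the post), one can check by brute force that the interval {H = \{-N,\dots,N\}} in the additive group {{\bf Z}} is a {2}-approximate group: it is symmetric, contains the identity, and {H+H = \{-2N,\dots,2N\}} is covered by the two translates {-N+H} and {N+H}.

```python
# Brute-force check that H = {-N, ..., N} is a 2-approximate subgroup of Z;
# the size N is an arbitrary illustrative choice.
N = 10
H = set(range(-N, N + 1))

assert H == {-h for h in H}   # H is symmetric
assert 0 in H                 # H contains the identity

sumset = {a + b for a in H for b in H}            # H + H (additive notation)
cover = {-N + h for h in H} | {N + h for h in H}  # two translates of H
print("H + H covered by 2 translates of H:", sumset <= cover)
```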

It turns out that this latter observation has a converse: if a measure does not concentrate in cosets of approximate groups, then some flattening occurs. More precisely, one has the following combinatorial lemma:

Lemma 1 (Weighted Balog-Szemerédi-Gowers lemma) Let {G} be a group, let {\nu} be a finitely supported probability measure on {G} which is symmetric (thus {\nu(g)=\nu(g^{-1})} for all {g \in G}), and let {K \geq 1}. Then one of the following statements holds:

  • (i) (Flattening) One has {\| \nu * \nu \|_{\ell^2(G)} \leq \frac{1}{K} \|\nu\|_{\ell^2(G)}}.
  • (ii) (Concentration in an approximate group) There exists an {O(K^{O(1)})}-approximate group {H} in {G} with {|H| \ll K^{O(1)} / \| \nu \|_{\ell^2(G)}^2} and an element {x \in G} such that {\nu(xH) \gg K^{-O(1)}}.

This lemma is a variant of the better-known Balog-Szemerédi-Gowers lemma in additive combinatorics due to Gowers (which roughly speaking corresponds to the case when {\nu} is the uniform distribution on some set {A}), which in turn is a polynomially quantitative version of an earlier lemma of Balog and Szemerédi. We will prove it below the fold.

The lemma is particularly useful when the group {G} in question enjoys a product theorem, which roughly speaking says that the only medium-sized approximate subgroups of {G} are trapped inside genuine proper subgroups of {G} (or, contrapositively, medium-sized sets that generate the entire group {G} cannot be approximate groups). The fact that some finite groups (and specifically, the bounded rank finite simple groups of Lie type) enjoy product theorems is a non-trivial fact, and will be discussed in later notes. For now, we simply observe that the presence of the product theorem, together with quasirandomness and a non-concentration hypothesis, can be used to demonstrate expansion:

Theorem 2 (Bourgain-Gamburd expansion machine) Suppose that {G} is a finite group, that {S \subseteq G} is a symmetric set of {k} generators, and that there are constants {0 < \kappa < 1 < \Lambda} with the following properties.

  1. (Quasirandomness). The smallest dimension of a nontrivial representation {\rho: G \rightarrow GL_d({\bf C})} of {G} is at least {|G|^{\kappa}};
  2. (Product theorem). For all {\delta > 0} there is some {\delta' = \delta'(\delta) > 0} such that the following is true. If {H \subseteq G} is a {|G|^{\delta'}}-approximate subgroup with {|G|^{\delta} \leq |H| \leq |G|^{1 - \delta}} then {H} generates a proper subgroup of {G};
  3. (Non-concentration estimate). There is some even number {n \leq \Lambda\log |G|} such that

    \displaystyle  \sup_{H < G}\mu^{(n)}(H) < |G|^{-\kappa},

    where the supremum is over all proper subgroups {H < G}.

Then {Cay(G,S)} is a two-sided {\epsilon}-expander for some {\epsilon > 0} depending only on {k}, {\kappa}, {\Lambda}, and the function {\delta'(\cdot)}; this constant {\epsilon} is in principle computable in terms of these quantities.

This criterion for expansion is implicitly contained in this paper of Bourgain and Gamburd, who used it to establish the expansion of various Cayley graphs in {SL_2(F_p)} for prime {p}. This criterion has since been applied (or modified) to obtain expansion results in many other groups, as will be discussed in later notes.
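As a sanity check on the conclusion of Theorem 2 (an illustration of my own, with an arbitrary small prime; this is a direct eigenvalue computation rather than an application of the machine), one can build the Cayley graph of {SL_2(F_5)} with the standard unipotent generators and verify numerically that every eigenvalue of the adjacency operator other than the trivial one {k = |S|} is bounded away from {\pm k}, which is the two-sided expansion property:

```python
# Numerically inspect the spectral gap of Cay(SL_2(F_5), S); the prime p
# and the generating set S are illustrative choices.
import itertools
import numpy as np

p = 5

def mat_mul(g, h):
    """Multiply two 2x2 matrices over F_p, stored as 4-tuples (a, b, c, d)."""
    a, b, c, d = g
    e, f, u, v = h
    return ((a * e + b * u) % p, (a * f + b * v) % p,
            (c * e + d * u) % p, (c * f + d * v) % p)

elements = [m for m in itertools.product(range(p), repeat=4)
            if (m[0] * m[3] - m[1] * m[2]) % p == 1]
index = {g: i for i, g in enumerate(elements)}
N = len(elements)  # |SL_2(F_5)| = 120

S = [(1, 1, 0, 1), (1, p - 1, 0, 1), (1, 0, 1, 1), (1, 0, p - 1, 1)]
k = len(S)

adj = np.zeros((N, N))
for g in elements:
    for s in S:
        adj[index[g], index[mat_mul(g, s)]] += 1

# S is symmetric, so adj is a symmetric matrix; its top eigenvalue is k.
eigs = np.sort(np.linalg.eigvalsh(adj))
nontrivial = max(abs(eigs[0]), abs(eigs[-2]))
print(f"k = {k}, largest nontrivial |eigenvalue| = {nontrivial:.4f}, "
      f"two-sided gap = {1 - nontrivial / k:.4f}")
```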

Read the rest of this entry »

This month I am at MSRI, for the programs of Ergodic Theory and Additive Combinatorics, and Analysis on Singular Spaces, that are currently ongoing here.  This week I am giving three lectures on the correspondence principle, and on finitary versions of ergodic theory, for the introductory workshop in the former program.  This article broadly describes the content of these talks (which are slightly different in theme from what was announced in the abstract, due to some recent developments).  [These lectures were also recorded on video and should be available from the MSRI web site within a few months.]

Read the rest of this entry »

[This post is authored by Luca Trevisan. – T.]

Notions of “quasirandomness” for graphs and hypergraphs have many applications in combinatorics and computer science. Several past posts by Terry have addressed the role of quasirandom structures in additive combinatorics. A recurring theme is that if an object (a graph, a hypergraph, a subset of integers, …) is quasirandom, then several useful properties can be immediately deduced, but, also, if an object is not quasirandom then it possesses some “structure” that can be usefully exploited in an inductive argument. The contrapositive statements of such randomness-structure dichotomies are that certain simple properties imply quasirandomness. One can view such results as algorithms that can be used to “certify” quasirandomness, and the existence (or, in some cases, conjectural non-existence) of such algorithms has implications in computer science, as I shall discuss in this post.
Read the rest of this entry »

This month I have been at the Institute for Advanced Study, participating in their semester program on additive combinatorics. Today I gave a talk on my forthcoming paper with Tim Austin on the property testing of graphs and hypergraphs (I hope to make a preprint available here soon). There has been an immense amount of progress on these topics recently, based in large part on the graph and hypergraph regularity lemmas; but we have discovered some surprising subtleties regarding these results, namely a distinction between undirected and directed graphs, between graphs and hypergraphs, between partite hypergraphs and non-partite hypergraphs, and between monotone hypergraph properties and hereditary ones.

For simplicity let us first work with (uncoloured, undirected, loop-free) graphs G = (V,E). In the subject of graph property testing, one is given a property {\mathcal P} which any given graph G may or may not have. For example, {\mathcal P} could be one of the following properties:

  1. G is planar.
  2. G is four-colourable.
  3. G has a number of edges equal to a power of two.
  4. G contains no triangles.
  5. G is bipartite.
  6. G is empty.
  7. G is a complete bipartite graph.

We assume that the labeling of the graph is irrelevant. More precisely, we assume that whenever two graphs G, G’ are isomorphic, G satisfies {\mathcal P} if and only if G’ satisfies {\mathcal P}. For instance, all seven of the graph properties listed above are invariant under graph isomorphism.

We shall think of G as being very large (so |V| is large) and dense (so |E| \sim |V|^2). We are interested in obtaining some sort of test that can answer the question “does G satisfy {\mathcal P}?” with reasonable speed and reasonable accuracy. By “reasonable speed”, we mean that we will only make a bounded number of queries about the graph, i.e. we only look at a bounded number k of distinct vertices in V (selected at random) and base our test purely on how these vertices are connected to each other in E. (We will always assume that the number of vertices in V is at least k.) By “reasonable accuracy”, we will mean that we specify in advance some error tolerance \varepsilon > 0 and require the following:

  1. (No false negatives) If G indeed satisfies {\mathcal P}, then our test will always (correctly) accept G.
  2. (Few false positives in the \varepsilon-far case) If G fails to satisfy {\mathcal P}, and is furthermore \varepsilon-far from satisfying {\mathcal P} in the sense that one needs to add or remove at least \varepsilon |V|^2 edges in G before {\mathcal P} can be satisfied, then our test will (correctly) reject G with probability at least \varepsilon.

When a test with the above properties exists for each given \varepsilon > 0 (with the number of queried vertices k being allowed to depend on \varepsilon), we say that the graph property {\mathcal P} is testable with one-sided error. (The general notion of property testing was introduced by Rubinfeld and Sudan, and first studied for graph properties by Goldreich, Goldwasser, and Ron; see this web page of Goldreich for further references and discussion.) The rejection probability \varepsilon is not very important in this definition, since if one wants to improve the success rate of the algorithm one can simply run independent trials of that algorithm (selecting fresh random vertices each time) in order to increase the chance that G is correctly rejected. However, it is intuitively clear that one must allow some probability of failure, since one is only inspecting a small portion of the graph and so cannot say with complete certainty whether the entire graph has the property {\mathcal P} or not. For similar reasons, one cannot reasonably demand to have a low false positive rate for all graphs that fail to obey {\mathcal P}, since if the graph is only one edge modification away from obeying {\mathcal P}, this modification is extremely unlikely to be detected by only querying a small portion of the graph. This explains why we need to restrict attention to graphs that are \varepsilon-far from obeying {\mathcal P}.

An example should illustrate this definition. Consider for instance property 6 above (the property that G is empty). To test whether a graph is empty, one can perform the following obvious algorithm: take k vertices in G at random and check whether they have any edges at all between them. If they do, then the test of course rejects G as being non-empty, while if they don’t, the test accepts G as being empty. Clearly there are no false negatives in this test, and if k is large enough depending on \varepsilon one can easily see (from the law of large numbers) that we will have few false positives if G is \varepsilon-far from being empty (i.e. if it has at least \varepsilon |V|^2 edges). So the property of being empty is testable with one-sided error.
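Here is the above emptiness tester written out as code (a sketch of my own; the oracle interface, parameter choices, and test graph are all illustrative):

```python
import random

def test_empty(vertices, has_edge, k):
    """Sample k random vertices; accept iff no edge is seen among them."""
    sample = random.sample(vertices, k)
    for i in range(k):
        for j in range(i + 1, k):
            if has_edge(sample[i], sample[j]):
                return False  # reject: the graph is certainly non-empty
    return True  # accept: no edge observed (so there are no false negatives)

# Illustrative usage: a graph on n = 1000 vertices whose edges form a clique
# on half the vertices; it has about n^2/8 edges, so it is 0.1-far from empty.
n = 1000
vertices = list(range(n))
clique = set(range(n // 2))
has_edge = lambda u, v: u != v and u in clique and v in clique

rejections = sum(not test_empty(vertices, has_edge, k=10) for _ in range(100))
print(f"correctly rejected in {rejections} of 100 trials")
```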

On the other hand, it is intuitively obvious that property 3 (having a number of edges equal to a power of 2) is not testable with one-sided error.

So it is reasonable to ask: what types of graph properties are testable with one-sided error, and which ones are not?

Read the rest of this entry »

Today I’d like to discuss a beautiful inequality in graph theory, namely the crossing number inequality. This inequality gives a useful bound on how far a given graph is from being planar, and has a number of applications, for instance to sum-product estimates. Its proof is also an excellent example of the amplification trick in action; here the main source of amplification is the freedom to pass to subobjects, a freedom which I didn’t touch upon in the previous post on amplification. The crossing number inequality (and its proof) is well known among graph theorists but perhaps not among the wider mathematical community, so I thought I would publicise it here.

In this post, when I talk about a graph, I mean an abstract collection of vertices V, together with some abstract edges E joining pairs of vertices together. We will assume that the graph is undirected (the edges do not have a preferred orientation), loop-free (an edge cannot begin and end at the same vertex), and multiplicity-free (any pair of vertices is joined by at most one edge). More formally, we can model all this by viewing E as a subset of \binom{V}{2} := \{ e \subset V: |e|=2 \}, the set of 2-element subsets of V, and we view the graph G as an ordered pair G = (V,E). (The notation is set up so that |\binom{V}{2}| = \binom{|V|}{2}.)

Now one of the great features of graphs, as opposed to some other abstract maths concepts, is that they are easy to draw: the abstract vertices become dots on a plane, while the edges become line segments or curves connecting these dots. [To avoid some technicalities we do not allow these curves to pass through the dots, except where the curve terminates at that dot.] Let us informally refer to such a concrete representation D of a graph G as a drawing of that graph. Clearly, any non-trivial graph is going to have an infinite number of possible drawings. In some of these drawings, a pair of edges might cross each other; in other drawings, all edges might be disjoint (except of course at the vertices, where edges with a common endpoint are obliged to meet). If G has a drawing D of the latter type, we say that the graph G is planar.

Given an abstract graph G, or a drawing thereof, it is not always obvious whether that graph is planar; just because the drawing that you currently possess of G contains crossings, does not necessarily mean that all drawings of G do. The wonderful little web game “Planarity” illustrates this point excellently. Nevertheless, there are definitely graphs which are not planar; in particular the complete graph K_5 on five vertices, and the complete bipartite graph K_{3,3} on two sets of three vertices, are non-planar.
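One standard way to certify the non-planarity of these two examples (a supplementary check of my own, not part of the original discussion) is the edge-count consequence of Euler’s formula: a simple planar graph with |V| \geq 3 vertices has at most 3|V| - 6 edges, and at most 2|V| - 4 edges if it is additionally triangle-free.

```python
# Certify non-planarity of K_5 and K_{3,3} via Euler-formula edge bounds.
from itertools import combinations

def complete_graph(n):
    V = list(range(n))
    return V, list(combinations(V, 2))

def complete_bipartite(a, b):
    V = list(range(a + b))
    return V, [(i, a + j) for i in range(a) for j in range(b)]

V5, E5 = complete_graph(5)
print(f"K_5: |E| = {len(E5)} > 3|V| - 6 = {3 * len(V5) - 6}  -> non-planar")

V33, E33 = complete_bipartite(3, 3)  # bipartite, hence triangle-free
print(f"K_3,3: |E| = {len(E33)} > 2|V| - 4 = {2 * len(V33) - 4}  -> non-planar")
```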

There is in fact a famous theorem of Kuratowski that says that these two graphs are the only “source” of non-planarity, in the sense that any non-planar graph contains (a subdivision of) one of these graphs as a subgraph. (There is of course the even more famous four-colour theorem that asserts that every planar graph is four-colourable, but this is not the topic of my post today.)

Intuitively, if we fix the number of vertices |V|, and increase the number of edges |E|, then the graph should become “increasingly non-planar”; conversely, if we keep the same number of edges |E| but spread them amongst a greater number of vertices |V|, then the graph should become “increasingly planar”. Is there a quantitative way to measure the “non-planarity” of a graph, and to formalise the above intuition as some sort of inequality?
Read the rest of this entry »

This post is a sequel of sorts to my earlier post on hard and soft analysis, and the finite convergence principle. Here, I want to discuss a well-known theorem in infinitary soft analysis – the Lebesgue differentiation theorem – and whether there is any meaningful finitary version of this result. Along the way, it turns out that we will uncover a simple analogue of the Szemerédi regularity lemma, for subsets of the interval rather than for graphs. (Actually, regularity lemmas seem to appear in just about any context in which fine-scaled objects can be approximated by coarse-scaled ones.) The connection between regularity lemmas and results such as the Lebesgue differentiation theorem was recently highlighted by Elek and Szegedy, while the connection between the finite convergence principle and results such as the pointwise ergodic theorem (which is a close cousin of the Lebesgue differentiation theorem) was recently detailed by Avigad, Gerhardy, and Towsner.

The Lebesgue differentiation theorem has many formulations, but we will avoid the strongest versions and just stick to the following model case for simplicity:

Lebesgue differentiation theorem. If f: [0,1] \to [0,1] is Lebesgue measurable, then for almost every x \in [0,1] we have f(x) = \lim_{r \to 0} \frac{1}{r} \int_x^{x+r} f(y)\ dy. Equivalently, the fundamental theorem of calculus f(x) = \frac{d}{dy} \int_0^y f(z) dz|_{y=x} is true for almost every x in [0,1].

Here we use the oriented definite integral, thus \int_x^y = - \int_y^x. Specialising to the case where f = 1_A is an indicator function, we obtain the Lebesgue density theorem as a corollary:

Lebesgue density theorem. Let A \subset [0,1] be Lebesgue measurable. Then for almost every x \in A, we have \frac{|A \cap [x-r,x+r]|}{2r} \to 1 as r \to 0^+, where |A| denotes the Lebesgue measure of A.

In other words, almost all the points x of A are points of density of A, which roughly speaking means that as one passes to finer and finer scales, the immediate vicinity of x becomes increasingly saturated with A. (Points of density are like robust versions of interior points, thus the Lebesgue density theorem is an assertion that measurable sets are almost like open sets. This is Littlewood’s first principle.) One can also deduce the Lebesgue differentiation theorem back from the Lebesgue density theorem by approximating f by a finite linear combination of indicator functions; we leave this as an exercise.
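As a quick numerical illustration of the density theorem (my own toy computation, with an arbitrary choice of set A; of course, for a finite union of intervals every interior point is trivially a point of density, so this only illustrates the statement rather than its content for general measurable sets):

```python
# Watch the density |A ∩ [x-r, x+r]| / (2r) approach 1 as r -> 0+ for a
# point x of A; the set A and point x are arbitrary illustrative choices.
def measure_in_window(intervals, lo, hi):
    """Lebesgue measure of (a union of disjoint intervals) ∩ [lo, hi]."""
    return sum(max(0.0, min(hi, b) - max(lo, a)) for a, b in intervals)

A = [(0.2, 0.5), (0.6, 0.9)]
x = 0.3
for r in [0.2, 0.1, 0.05, 0.01, 0.001]:
    density = measure_in_window(A, x - r, x + r) / (2 * r)
    print(f"r = {r:6.3f}   density = {density:.4f}")
```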

Read the rest of this entry »

In this second lecture, I wish to talk about the dichotomy between structure and randomness as it manifests itself in four closely related areas of mathematics:

  • Combinatorial number theory, which seeks to find patterns in unstructured dense sets (or colourings) of integers;
  • Ergodic theory (or more specifically, multiple recurrence theory), which seeks to find patterns in positive-measure sets under the action of a discrete dynamical system on probability spaces (or more specifically, measure-preserving actions of the integers {\Bbb Z});
  • Graph theory, or more specifically the portion of this theory concerned with finding patterns in large unstructured dense graphs; and
  • Ergodic graph theory, a very new and undeveloped subject which, roughly speaking, seems to be concerned with the patterns within a measure-preserving action of the infinite permutation group S_\infty, one of several models we have available to study infinite “limits” of graphs.

The two “discrete” (or “finitary”, or “quantitative”) fields of combinatorial number theory and graph theory happen to be related to each other, basically by using the Cayley graph construction; I will give an example of this shortly. The two “continuous” (or “infinitary”, or “qualitative”) fields of ergodic theory and ergodic graph theory are at present only related on the level of analogy and informal intuition, but hopefully some more systematic connections between them will appear soon.
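To give a flavour of the Cayley graph construction in code (a toy sketch of my own, not the example promised above): a symmetric set S of residues modulo N determines a graph on {\Bbb Z}/N{\Bbb Z} by joining x and y whenever x - y \in S, so that additive patterns in S become combinatorial patterns in the graph; for instance, a triangle on \{x,y,z\} corresponds to the three differences x-y, y-z, x-z all lying in S.

```python
# Build the Cayley graph Cay(Z/NZ, S) and count its triangles; N and the
# initial set A are arbitrary illustrative choices.
N = 17
A = {1, 2, 4, 8, 16}
S = {a % N for a in A} | {-a % N for a in A}  # symmetrise the set

edges = {frozenset((x, y)) for x in range(N) for y in range(x + 1, N)
         if (x - y) % N in S}

triangles = [(x, y, z)
             for x in range(N) for y in range(x + 1, N)
             for z in range(y + 1, N)
             if {frozenset((x, y)), frozenset((y, z)), frozenset((x, z))}
             <= edges]
print(f"Cay(Z/{N}Z, S): {len(edges)} edges, {len(triangles)} triangles")
```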

On the other hand, we have some very rigorous connections between combinatorial number theory and ergodic theory, and also (more recently) between graph theory and ergodic graph theory, basically by the procedure of viewing the infinitary continuous setting as a limit of the finitary discrete setting. These two connections go by the names of the Furstenberg correspondence principle and the graph correspondence principle respectively. These principles allow one to tap the power of the infinitary world (for instance, the ability to take limits and perform completions or closures of objects) in order to establish results in the finitary world, or at least to take the intuition gained in the infinitary world and transfer it to a finitary setting. Conversely, the finitary world provides an excellent model setting to refine one’s understanding of infinitary objects, for instance by establishing quantitative analogues of “soft” results obtained in an infinitary manner. I will remark here that this best-of-both-worlds approach, borrowing from both the finitary and infinitary traditions of mathematics, was absolutely necessary for Ben Green and me to establish our result on long arithmetic progressions in the primes. In particular, the infinitary setting is excellent for being able to rigorously define and study concepts (such as structure or randomness) which are much “fuzzier” and harder to pin down exactly in the finitary world.

Read the rest of this entry »

The question in extremal graph theory I wish to discuss here originates from Luca Trevisan; it shows that we still don’t know everything that we should about the “local” properties of large dense graphs.

Let G = (V,E) be a large (undirected) graph, thus V is the vertex set with some large number n of vertices, and E is the collection of edges {x,y} connecting two vertices in the graph. We can allow the graph to have loops {x,x} if one wishes; it’s not terribly important for this question (since the number of loops is so small compared to the total number of edges), so let’s say there are no loops. We define three quantities of the graph G:

  • The edge density 0 \leq \alpha \leq 1, defined as the number of edges in G, divided by the total number of possible edges, i.e. n(n-1)/2;
  • The triangle density 0 \leq \beta \leq 1, defined as the number of triangles in G (i.e. unordered triplets {x,y,z} such that {x,y},{y,z}, {z,x} all lie in G), divided by the total number of possible triangles, namely n(n-1)(n-2)/6;
  • The diamond density 0 \leq \gamma \leq 1, defined as the number of diamonds in G (i.e. unordered pairs { {x,y,z}, {x,y,w} } of triangles in G which share a common edge), divided by the total number of possible diamonds, namely n(n-1)(n-2)(n-3)/4. (A computational sketch of all three densities appears after this list.)
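As promised, here is a direct computational sketch of the three densities (my own illustration, using an Erdős–Rényi random graph, for which one expects \beta \approx \alpha^3 and \gamma \approx \alpha^5, since a triangle has three edges and a diamond has five):

```python
# Compute edge, triangle and diamond densities of a random graph G(n, q);
# n, q and the seed are arbitrary illustrative choices.
import random
from itertools import combinations
from math import comb

random.seed(0)
n, q = 60, 0.5
V = range(n)
E = {frozenset(e) for e in combinations(V, 2) if random.random() < q}

alpha = len(E) / comb(n, 2)  # edge density

tri = [t for t in combinations(V, 3)
       if all(frozenset(e) in E for e in combinations(t, 2))]
beta = len(tri) / comb(n, 3)  # triangle density

# A diamond {{x,y,z}, {x,y,w}} is determined by its shared edge {x,y} and an
# unordered pair {z,w} of common neighbours of x and y.
diamonds = 0
for e in E:
    x, y = tuple(e)
    common = [w for w in V if w not in e
              and frozenset((x, w)) in E and frozenset((y, w)) in E]
    diamonds += comb(len(common), 2)
gamma = diamonds / (n * (n - 1) * (n - 2) * (n - 3) / 4)

print(f"alpha = {alpha:.3f}   beta = {beta:.3f} (alpha^3 = {alpha**3:.3f})   "
      f"gamma = {gamma:.3f} (alpha^5 = {alpha**5:.3f})")
```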

Read the rest of this entry »
