This month I have been at the Institute for Advanced Study, participating in their semester program on additive combinatorics. Today I gave a talk on my forthcoming paper with Tim Austin on the property testing of graphs and hypergraphs (I hope to make a preprint available here soon). There has been an immense amount of progress on these topics recently, based in large part on the graph and hypergraph regularity lemmas; but we have discovered some surprising subtleties regarding these results, namely a distinction between undirected and directed graphs, between graphs and hypergraphs, between partite hypergraphs and non-partite hypergraphs, and between monotone hypergraph properties and hereditary ones.

For simplicity let us first work with (uncoloured, undirected, loop-free) graphs G = (V,E). In the subject of graph property testing, one is given a property ${\mathcal P}$ which any given graph G may or may not have. For example, ${\mathcal P}$ could be one of the following properties:

1. G is planar.
2. G is four-colourable.
3. G has a number of edges equal to a power of two.
4. G contains no triangles.
5. G is bipartite.
6. G is empty.
7. G is a complete bipartite graph.

We assume that the labeling of the graph is irrelevant. More precisely, we assume that whenever two graphs G, G’ are isomorphic, that G satisfies ${\mathcal P}$ if and only if G’ satisfies ${\mathcal P}$. For instance, all seven of the graph properties listed above are invariant under graph isomorphism.

We shall think of G as being very large (so $|V|$ is large) and dense (so $|E| \sim |V|^2$). We are interested in obtaining some sort of test that can answer the question “does G satisfy ${\mathcal P}$?” with reasonable speed and reasonable accuracy. By “reasonable speed”, we mean that we will only make a bounded number of queries about the graph, i.e. we only look at a bounded number k of distinct vertices in V (selected at random) and base our test purely on how these vertices are connected to each other in E. (We will always assume that the number of vertices in V is at least k.) By “reasonable accuracy”, we will mean that we specify in advance some error tolerance $\varepsilon > 0$ and require the following:

1. (No false negatives) If G indeed satisfies ${\mathcal P}$, then our test will always (correctly) accept G.
2. (Few false positives in the $\varepsilon$-far case) If G fails to satisfy ${\mathcal P}$, and is furthermore $\varepsilon$-far from satisfying ${\mathcal P}$ in the sense that one needs to add or remove at least $\varepsilon |V|^2$ edges in G before ${\mathcal P}$ can be satisfied, then our test will (correctly) reject G with probability at least $\varepsilon$.

When a test with the above properties exists for each given $\varepsilon > 0$ (with the number of queried vertices k being allowed to depend on $\varepsilon$), we say that the graph property ${\mathcal P}$ is testable with one-sided error. (The general notion of property testing was introduced by Rubinfeld and Sudan, and first studied for graph properties by Goldreich, Goldwasser, and Ron; see this web page of Goldreich for further references and discussion.) The rejection probability $\varepsilon$ is not very important in this definition, since if one wants to improve the success rate of the algorithm one can simply run independent trials of that algorithm (selecting fresh random vertices each time) in order to increase the chance that G is correctly rejected. However, it is intuitively clear that one must allow some probability of failure, since one is only inspecting a small portion of the graph and so cannot say with complete certainty whether the entire graph has the property ${\mathcal P}$ or not. For similar reasons, one cannot reasonably demand to have a low false positive rate for all graphs that fail to obey ${\mathcal P}$, since if the graph is only one edge modification away from obeying ${\mathcal P}$, this modification is extremely unlikely to be detected by only querying a small portion of the graph. This explains why we need to restrict attention to graphs that are $\varepsilon$-far from obeying ${\mathcal P}$.

An example should illustrate this definition. Consider for instance property 6 above (the property that G is empty). To test whether a graph is empty, one can perform the following obvious algorithm: take k vertices in G at random and check whether they have any edges at all between them. If they do, then the test of course rejects G as being non-empty, while if they don’t, the test accepts G as being empty. Clearly there are no false negatives in this test, and if k is large enough depending on $\varepsilon$ one can easily see (from the law of large numbers) that we will have few false positives if G is $\varepsilon$-far from being empty (i.e. if it has at least $\varepsilon |V|^2$ edges). So the property of being empty is testable with one-sided error.

On the other hand, it is intuitively obvious that property 3 (having a number of edges equal to a power of 2) is not testable with one-sided error: the local picture seen by a bounded sample of a graph whose edge count is a power of two is indistinguishable from that of a graph whose edge count is not.

So it is reasonable to ask: what types of graph properties are testable with one-sided error, and which ones are not?

If we “deconstruct” our test for property 6 (that a graph is empty), we see that we implicitly relied on the (obvious) fact that any induced subgraph of an empty graph is again empty. This leads to a definition:

Definition. A graph property ${\mathcal P}$ is hereditary if whenever a graph G obeys ${\mathcal P}$, then all induced subgraphs of G obey ${\mathcal P}$ also.

For instance, except for property 3, all the graph properties listed previously are hereditary.

Given a hereditary property ${\mathcal P}$, and a positive integer k, there is an obvious test $\hbox{Test}_k$ for this property, which samples k vertices from a graph G at random, looks at the induced subgraph, and declares that G obeys ${\mathcal P}$ if and only if the induced subgraph does. The hereditary nature of ${\mathcal P}$ ensures that $\hbox{Test}_k$ generates no false negatives. What about false positives? A result of Alon and Shapira (building on an enormous number of partial results by several authors, too many to be listed here) shows that this is OK as long as k is large enough:

Theorem 1. Let ${\mathcal P}$ be a hereditary graph property, and let $\varepsilon > 0$. Then if k is sufficiently large depending on ${\mathcal P}$ and $\varepsilon$, then $\hbox{Test}_k$ has a false positive rate of at most $\varepsilon$ for any graph which is $\varepsilon$-far from satisfying ${\mathcal P}$.

In particular, they show that every hereditary graph property is testable with one-sided error. In the converse direction, they show that any “natural” property which is testable with one-sided error is “almost” hereditary, though the definitions of “natural” and “almost” are slightly technical and will not be discussed here. (One also needs a further technical assumption that the tester is oblivious to the number of vertices in the graph.) One amusing fact is that even if ${\mathcal P}$ is a computable property, the size of k required in order for the above theorem to apply need not be computable (for reasons related to the halting problem); indeed, it can be shown that given the right choice of ${\mathcal P}$, k can grow faster than any given function of $1/\varepsilon$ (e.g. the Ackermann function).

The result of Alon-Shapira was extended to hypergraphs by several authors (including Kohayakawa-Nagle-Rödl, Gowers, Rödl-Schacht, Avart-Rödl-Schacht, Ishigami, and Elek-Szegedy); the definitions of terms such as “hereditary” and “testable with one sided error” are straightforward. In particular one has the following result of Rödl-Schacht:

Theorem 2. For any fixed k, the results of Theorem 1 are also true for k-uniform hypergraphs (i.e. hypergraphs in which every edge consists of exactly k vertices).

These results are rather non-trivial to prove, relying heavily on the graph regularity lemma and hypergraph regularity lemma respectively. (As an indication of the depth of these results, Theorem 1 already implies Roth’s theorem on arithmetic progressions of length 3, and Theorem 2 implies Szemerédi’s theorem; this connection was first observed by Ruzsa and Szemerédi.)

In my forthcoming paper with Tim Austin, we extend Theorem 2 slightly by allowing the hypergraphs to be directed, multi-coloured, and $\leq k$-uniform rather than k-uniform (so that edges are also allowed to have fewer than k vertices). However, this is not the focus of my talk today.

The above results give a fairly definitive answer as to what properties of graphs and hypergraphs are testable with one-sided error. However, this is not the end of the story, because the Alon-Shapira and Rödl-Schacht arguments are quite different. In particular, the Alon-Shapira argument is constructive in the following sense. One can rephrase Theorem 1 in the following equivalent form:

Theorem 1, again: If ${\mathcal P}$ is a hereditary property and $\varepsilon > 0$, then there exists $k \geq 1$ and $\delta > 0$ such that whenever G=(V,E) is a graph which k-locally verifies ${\mathcal P}$ to accuracy $\delta$, in the sense that a random induced subgraph of G on k vertices will obey ${\mathcal P}$ with probability at least $1-\delta$, then G can be “repaired” to obtain a new graph G’ that globally verifies ${\mathcal P}$ by modifying at most $\varepsilon |V|^2$ edges.

The argument of Alon and Shapira in fact provides a constructive (randomised) algorithm to build G’ from G. To oversimplify enormously, the idea is to first apply the Szemerédi regularity lemma to G to create a finite complexity model for G, and then use that model to decide how to modify G to create G’. Actually there are two significant difficulties that arise in this approach. Firstly, it may be that the property ${\mathcal P}$ is not “local”, in which case a single application of the regularity lemma does not suffice due to the accumulation of error terms in the associated “counting lemma”. For instance, the property of being bipartite is equivalent to containing no odd cycles, so in order to completely check for bipartiteness one needs to deal with cycles of arbitrarily long length, which cannot be evaluated locally and so which cannot be counted accurately in terms of a Szemerédi partition. However, once one fixes the finite complexity model for G, there are only finitely many modification rules that are possible; each one of these will either create a graph that obeys ${\mathcal P}$, or will fail to do so for some local reason (e.g. a rule may fail to give a bipartite graph because it allows for cycles of length 37). Combining all these potential local failures together, we see that we can effectively replace the non-local property ${\mathcal P}$ by a local property ${\mathcal P}_{loc}$ for the purposes of modification rules based on this finite complexity model (it is at this point where one needs to solve a problem analogous to the halting problem). One then applies the regularity lemma again at a much finer degree of accuracy (depending on how local ${\mathcal P}_{loc}$ really is) in order to obtain a counting lemma that is sufficiently accurate to enable one to design a good modification rule. But then one runs into a second difficulty concerning “diagonal interactions”. 
More precisely, the problem is that the edges within a single Szemerédi cell need not be regular enough for one to be able to safely design a modification rule for these edges. In order to fix this, one needs to regularise even further, replacing each cell with a moderately large number of smaller cells, which have various edge density behaviours with respect to each other, but such that the interactions between pairs of these cells are always regular. One can then appeal to Ramsey’s theorem to make all these edge densities coherent enough to design a viable modification rule. [If the above paragraph made no sense to you, don’t worry; the only thing you really need to remember from it for the purposes of this talk is that the Alon-Shapira argument uses Ramsey’s theorem. It also uses the regularity lemma repeatedly, but the parameters involved in using that lemma do not depend on the size of the graph.]

From the nature of the Alon-Shapira argument, it turns out that the repaired graph G’ can be obtained from the original graph G by a local modification rule. More precisely, there exists an integer k and a rule A which takes graphs on k+2 labeled vertices and returns either 0 or 1, such that one can generate G’ from G by first selecting k vertices $v_1,\ldots,v_k$ from G at random, and then defining G’ by declaring two vertices $w_1$ and $w_2$ to be joined by an edge in G’ if and only if the rule A, when applied to the restriction of G to $v_1,\ldots,v_k,w_1,w_2$, gives 1.
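
The shape of such a local modification rule can be expressed in a few lines of Python. This is a hypothetical interface only (the names `apply_local_rule` and `rule_A` are mine); the actual rule A produced by the Alon-Shapira argument is far from explicit:

```python
import random

def apply_local_rule(vertices, has_edge, rule_A, k, rng=random):
    """Produce the repaired graph G' from G via a local modification
    rule: pick reference vertices v_1,...,v_k once at random; then
    w_1 w_2 is an edge of G' iff rule_A, applied to the restriction of
    G to v_1,...,v_k,w_1,w_2, returns 1."""
    refs = rng.sample(vertices, min(k, len(vertices)))
    def new_has_edge(w1, w2):
        # rule_A only sees the k reference vertices, the pair (w1, w2),
        # and the edge predicate restricted to those k+2 vertices.
        return rule_A(refs, w1, w2, has_edge) == 1
    return new_has_edge


# Degenerate illustration: a rule that ignores everything and always
# returns 1 "repairs" any graph to the complete graph.
g2 = apply_local_rule(list(range(6)), lambda u, w: False,
                      lambda refs, w1, w2, he: 1, 3, random.Random(0))
print(g2(0, 1))
```

The key point is that the reference vertices $v_1,\ldots,v_k$ are chosen once and for all, and every edge decision in G’ is then made by the same bounded-complexity rule.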

To illustrate how this type of local repair rule works (it is analogous to the notion of a locally correctable code in coding theory), let us use property 7 above, i.e. that of being a complete bipartite graph. Suppose that G is a slightly corrupted complete bipartite graph, e.g. it was obtained from a random complete bipartite graph between two classes A, B on a large number N of vertices by adding and deleting $\varepsilon N^2$ of the edges, thus G will now contain a small number of edges within one of the bipartite classes A, B, and also exclude a small number of edges between the two classes A, B. The challenge is now to locally repair this graph G to obtain a new graph G’ which is close to G and which is genuinely complete bipartite (but it does not necessarily have to be the original uncorrupted graph).

[To give an experimental analogy for this task, imagine a large collection of identical-seeming balls, half of which are positively charged and half of which are negatively charged. Suppose we can conduct experiments on pairs of these balls to see whether they attract each other or not, but due to experimental error, these experiments are occasionally unreliable: a pair of like charged particles may accidentally be reported to attract each other, and similarly there is a small probability that a pair of unlike charged particles may be reported to not attract each other. Our task is to conduct these slightly unreliable experiments so that we can efficiently separate the balls into two classes, so that balls within one class almost never attract each other, while balls in different classes almost always do.]

To do this, we do the following. We first pick a large odd number k (we take k to be odd to break ties later) and inspect k vertices $v_1,\ldots,v_k$ at random from G. The restriction $G_k$ of G to this set will also look like a corruption of the complete bipartite graph between the classes A and B. By brute force search over all the complete bipartite graphs on k vertices, one can then (with high probability) find a complete bipartite graph $G'_k$ on these vertices which is close to $G_k$. Let $A'_k$ and $B'_k$ denote the bipartite classes of $G'_k$, thus $\{v_1,\ldots,v_k\} = A'_k \cup B'_k$. These classes are likely to be very close to the restrictions $A_k := A \cap \{v_1,\ldots,v_k\}$, $B_k := B \cap \{v_1,\ldots,v_k\}$ of the classes of the original (uncorrupted) bipartite graph to $\{v_1,\ldots,v_k\}$; it is also possible that the classes become swapped (thus $B'_k$ is close to $A_k$ and $A'_k$ is close to $B_k$), although this will turn out to not be important. Because of these considerations, we see that most of the vertices in G will either be highly connected to $A'_k$ (and almost totally disconnected from $B'_k$), or vice versa. This suggests partitioning these vertices into two classes A’ and B’ by majority vote: B’ consists of those vertices which have more connections with $A'_k$ than with $B'_k$, and vice versa for A’. One can then verify that the complete bipartite graph between A’ and B’ will be very close to G (or to the original uncorrupted bipartite graph) with high probability, with the accuracy of this repair algorithm increasing with k. (Actually, this algorithm is already rather good just for k=1.)
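
Here is a minimal Python sketch of the majority-vote step (the name `repair_bipartite` and the arguments `Ak`, `Bk` are mine; the reference classes are assumed to have already been found by the brute-force search described above):

```python
def repair_bipartite(vertices, has_edge, Ak, Bk):
    """Majority-vote step of the repair: a vertex joins B' if it has
    more neighbours in the reference class A'_k (= Ak) than in B'_k
    (= Bk), and joins A' otherwise; G' is then the complete bipartite
    graph between A' and B'."""
    A2, B2 = [], []
    for w in vertices:
        votes_A = sum(1 for v in Ak if v != w and has_edge(v, w))
        votes_B = sum(1 for v in Bk if v != w and has_edge(v, w))
        (B2 if votes_A > votes_B else A2).append(w)
    # G' = the complete bipartite graph between A2 and B2.
    def new_has_edge(u, w):
        return (u in A2) != (w in A2)
    return A2, B2, new_has_edge


# Sanity check on an uncorrupted complete bipartite graph:
A, B = set(range(5)), set(range(5, 10))
he = lambda u, w: (u in A) != (w in A)
A2, B2, g2 = repair_bipartite(list(range(10)), he, [0, 1, 2], [5, 6, 7])
print(sorted(A2), sorted(B2))
```

On a lightly corrupted input, most vertices still win their majority vote correctly, which is why G’ ends up close to the uncorrupted graph with high probability.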

Let us call a graph property ${\mathcal P}$ locally repairable if, in the situation in Theorem 1, there is a local repair rule to generate from G (and from a random choice $v_1,\ldots,v_k$ of vertices) a repaired graph G’ which always obeys ${\mathcal P}$, and which will differ by at most $\varepsilon |V|^2$ edges from G with probability at least $1 - \varepsilon$.

Tim Austin and I were then able to adapt the Alon-Shapira argument to show the following strengthening of Theorem 1:

Theorem 1′. Every hereditary graph property is locally repairable. In fact, the same result holds if one allows the graph to be multi-coloured, to contain loops, or be infinite (but in the latter case one has to provide some probability distribution on the set of vertices as a substitute for the uniform distribution).

The Alon-Shapira argument used Ramsey theory, and so do we. In fact, this is an essential component of the argument, because Theorem 1′ (in particular, the generalisation to looped graphs) implies Ramsey’s theorem. To see this, let ${\mathcal P}$ be the property that a looped graph contains no monochromatic induced subgraphs of some fixed size k. This property is impossible to satisfy for any non-trivial graph, since by considering the subgraph induced by k repetitions $v, \ldots, v$ of a single vertex we necessarily obtain a monochromatic graph. Hence, it is clearly not possible to locally repair a graph to obey ${\mathcal P}$. Applying Theorem 1′ in the contrapositive, we deduce that every graph must fail to k-locally verify ${\mathcal P}$ with some positive probability, which implies Ramsey’s theorem.

In view of Theorem 2, it was of course natural to try to extend Theorem 1′ to hypergraphs. Unfortunately, the Rödl-Schacht argument does not provide such a local repair argument; their argument proceeded by an indirect method, assuming for contradiction that Theorem 2 failed, and then producing a sequence $G_n$ of increasingly large counterexamples to that theorem. One then performed the rather unusual trick of regularising (very large) hypergraphs that were late in this sequence at a level of accuracy that depended on the size of (much smaller) hypergraphs that were early in this sequence; this allowed one to “almost embed” the small hypergraphs in the large ones, which could eventually be used to repair the small hypergraph. Another indication why the Rödl-Schacht argument would be insufficient to obtain a local repair version of Theorem 2 is that that argument did not use any Ramsey theory.

On the other hand, it is tempting to extend the Alon-Shapira method to hypergraphs. This was done for instance by Ishigami for partite hypergraphs, and we were also able to extend Theorem 1′ to hypergraph properties that were not only hereditary, but also monotone (which means that deletion of edges cannot destroy the property). Monotone properties are significantly easier to study than hereditary ones, because any collection of edges which is causing some difficulty for the purposes of designing a modification rule can simply be deleted, so long as the total number of edges deleted is not too large. Partite hypergraphs were also simpler because the troublesome “diagonal interactions”, in which an edge meets a single Szemerédi cell in more than one vertex, were completely eliminated.

For the general situation of hereditary properties on non-partite hypergraphs, a large portion of the Alon-Shapira argument goes through, but we found that the Ramsey theory step (which was needed to deal with diagonal interactions) caused a lot of trouble. Eventually, we in fact found a counterexample to the Ramsey theory statement that we needed (which is unfortunately too technical to state here), which then soon propagated to a counterexample to the entire statement itself. More precisely, we were able to show the following negative results:

Theorem 3. There exists a hereditary property of 3-uniform hypergraphs which is not locally repairable. Also, there exists a hereditary property of directed graphs which is not locally repairable.

Thus there are some interesting transitions in behaviour between the testability of hereditary properties for graphs and hypergraphs, for undirected graphs and directed graphs, for partite hypergraphs and non-partite hypergraphs, and for monotone hypergraph properties and hereditary hypergraph properties. In all four cases, the former properties are locally repairable, whereas the latter are merely testable.

To sketch the ideas behind Theorem 3, it is simpler to start with the latter statement, concerning directed graphs. We can think of a directed graph as a pair $(V,<)$ consisting of a finite vertex set V, together with a relation < on that set. The property ${\mathcal P}$ we shall pick for this directed graph is very simple: it is the property that the relation < is a total ordering. This is clearly a hereditary property, and thus testable with one-sided error (by the usual testing method of sampling a few vertices from V and seeing whether < is a total ordering on those vertices).

Unfortunately, the property of being a total ordering is not locally repairable. One can intuitively see this as follows. Let $<'$ be a slight random corruption of the standard total ordering $<$ on the numbers $\{1,\ldots,N\}$, thus if $1 \leq i < j \leq N$ then the relation $i <' j$ will hold true with probability $1-\varepsilon$ and fail with probability $\varepsilon$, and vice versa for the relation $j <' i$, with all events being independent. (In particular, we will occasionally discover that $i <' j$ and $j <' i$ are both true, or both false.)

Now let us try to locally repair this corrupted total ordering $<'$ into a new total ordering $<''$ by a local repair rule. This entails selecting k numbers $v_1,\ldots,v_k$ from $\{1,\ldots,N\}$ at random, and then designing a rule for the ordering $<''$ by deciding the truth value of $w_1 <'' w_2$ based solely on how $v_1,\ldots,v_k,w_1,w_2$ are ordered via $<'$.

For the sake of this informal discussion, let us suppose that we are lucky enough that the relation $<'$ is uncorrupted for the purposes of comparing any element with $v_1,\ldots,v_k$, or comparing two elements among $v_1,\ldots,v_k$. This allows us to order $v_1,\ldots,v_k$ correctly, and then all other vertices w can be correctly positioned in one of the intervals $(v_i,v_{i+1})$ between two adjacent vertices (or in one of the boundary intervals $[1,v_1)$, $(v_k,N]$). This already allows us to design a local repair rule that will correctly rank any two vertices $w_1, w_2$ which lie in different intervals, simply by declaring all elements in the higher interval to be greater than those in the lower. However, we run into problems when deciding what to do with $w_1 <'' w_2$ in the case when $w_1, w_2$ both lie in the same interval; in that case, $w_1$ and $w_2$ are indistinguishable from each other as far as their connectivity to $v_1,\ldots,v_k$ is concerned. One still has the relations $w_1 <' w_2$ and $w_2 <' w_1$ available, but this information is corrupted and thus unreliable. Indeed, it is possible to find $w_1, w_2$ in the same interval such that $w_1 <' w_2$ and $w_2 <' w_1$ have the same truth value, so we have total indistinguishability between $w_1$ and $w_2$. In particular, any local repair rule for $<''$ will necessarily be such that $w_1 <'' w_2$ and $w_2 <'' w_1$ have the same truth value, so that this rule cannot possibly give a total ordering. This basically explains how we verify the second part of Theorem 3.
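
The indistinguishability phenomenon is easy to observe numerically. The following sketch (illustrative only; the names are mine) builds the corrupted relation $<'$ and collects the “symmetric” pairs, i.e. those on which $<'$ gives the same truth value in both directions; no repair rule can restore antisymmetry on such a pair from local information:

```python
import random

def corrupt_order(N, eps, seed=0):
    """Corrupted relation <' on {1,..,N}: for i < j, the truth values
    of i <' j and j <' i are each flipped, independently, from the
    uncorrupted ordering with probability eps."""
    rng = random.Random(seed)
    rel = {}
    for i in range(1, N + 1):
        for j in range(i + 1, N + 1):
            rel[(i, j)] = (rng.random() >= eps)  # i <' j: true unless flipped
            rel[(j, i)] = (rng.random() < eps)   # j <' i: false unless flipped
    return lambda a, b: rel[(a, b)]

def symmetric_pairs(N, less):
    """Pairs {i,j} on which <' is symmetric (both directions true, or
    both false), i.e. where antisymmetry is irreparably lost."""
    return [(i, j) for i in range(1, N + 1) for j in range(i + 1, N + 1)
            if less(i, j) == less(j, i)]


less = corrupt_order(40, 0.1, seed=1)
print(len(symmetric_pairs(40, less)))  # roughly 2*eps*(1-eps) of all pairs
```

With no corruption at all ($\varepsilon = 0$) there are of course no symmetric pairs, but for any fixed $\varepsilon > 0$ a constant fraction of pairs is affected, independently of N.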

To modify these arguments to also establish the first part of Theorem 3, we used two encoding tricks: one trick that encoded a directed graph property as a property of $\leq 3$-uniform hypergraphs, and a second trick that encoded a property of $\leq 3$-uniform hypergraphs as a property of 3-uniform hypergraphs. I will only discuss the first trick, as it accomplishes the surprising and unintuitive feat of modeling an asymmetric property by a symmetric one.

Let us first understand what a $\leq 3$-uniform hypergraph is. One can think of this hypergraph as a collection of three things: a 1-uniform hypergraph $G_1$ (i.e. a vertex colouring), a 2-uniform hypergraph $G_2$ (i.e. an ordinary graph), and a 3-uniform hypergraph $G_3$ (a collection of triples of vertices). (Strictly speaking, there is also a 0-uniform component, but this component is rather trivial and plays no role). To make things a bit easier to visualise, let us describe these objects $G_1, G_2, G_3$ using more colourful notation:

1. We call a vertex blue if it lies in $G_1$, and red otherwise.
2. We say that two vertices like each other if they lie in $G_2$, and dislike each other otherwise.
3. We say that three vertices are properly ranked iff they lie in $G_3$.

The property ${\mathcal P}$ will be some property relating the notions of blue, red, like, dislike, and proper ranking. Before we describe this property, let us first describe a model example of a hypergraph that obeys this property. It is defined as follows:

1. The vertex set consists of the 2N integers $\{1,\ldots,2N\}$.
2. The first N integers $\{1,\ldots,N\}$ are blue, the rest are red.
3. Each red vertex will like a given blue vertex with an independent probability of 1/2. (We will not care whether blue vertices like each other or not, and similarly for a pair of red vertices.)
4. We say that three vertices are properly ranked iff one vertex is red, the other two are blue, the red vertex likes one of the blue vertices and dislikes the other, and the blue vertex which is liked has a higher numerical value than the one which is disliked.
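
This model example is easy to generate concretely. A Python sketch (the name `model_hypergraph` and the representation are my own conventions):

```python
import random

def model_hypergraph(N, seed=0):
    """Build the model <=3-uniform hypergraph on {1,..,2N}: the first N
    vertices are blue, the rest red; each red vertex likes each blue
    vertex independently with probability 1/2."""
    rng = random.Random(seed)
    blue = set(range(1, N + 1))
    likes = {(r, b): rng.random() < 0.5
             for r in range(N + 1, 2 * N + 1) for b in blue}

    def properly_ranked(x, y, z):
        reds = [v for v in (x, y, z) if v not in blue]
        blues = [v for v in (x, y, z) if v in blue]
        if len(reds) != 1:
            return False  # need exactly one red and two blue vertices
        r, (b1, b2) = reds[0], sorted(blues)
        # r must like exactly one of the two blue vertices, namely the
        # numerically larger one.
        return likes[(r, b2)] and not likes[(r, b1)]

    return blue, likes, properly_ranked
```

Note that each red vertex only “sees” the relative order of a blue pair when it happens to like one and dislike the other, so each red vertex carries partial, random information about the ordering of the blue vertices.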

Now let us describe the property ${\mathcal P}$ which the above hypergraph satisfies. For every red vertex r, we define a relation $<_r$ on the blue vertices by declaring $b <_r b'$ whenever the three vertices $r, b, b'$ are properly ranked, with r liking $b'$ and disliking $b$. (In the model example, $b <_r b'$ thus holds precisely when $b < b'$ numerically and r happens to distinguish the two blue vertices.) Roughly speaking, the property ${\mathcal P}$ then asserts that these relations $<_r$ are all consistent with a single total ordering of the blue vertices; the model example obeys this, using the usual numerical ordering.

It is then possible to adapt the indistinguishability argument used for directed graphs to obtain a failure of local repairability for this property. A further adaptation then gives the first part of Theorem 3.

It may well be that this failure of local repairability can be avoided by weakening the notion of local repairability into some notion intermediate between local repairability and testability, but we have not explored this issue thoroughly.