[*This post is authored by Luca Trevisan. - T.*]

Notions of “quasirandomness” for graphs and hypergraphs have many applications in combinatorics and computer science. Several past posts by Terry have addressed the role of quasirandom structures in additive combinatorics. A recurring theme is that if an object (a graph, a hypergraph, a subset of integers, …) is quasirandom, then several useful properties can be immediately deduced, but, also, if an object is not quasirandom then it possesses some “structure” that can be usefully exploited in an inductive argument. The contrapositive statements of such randomness-structure dichotomies are that certain simple properties imply quasirandomness. One can view such results as algorithms that can be used to “certify” quasirandomness, and the existence (or, in some cases, conjectural non-existence) of such algorithms has implications in computer science, as I shall discuss in this post.

Informally, an undirected graph is *quasirandom* if, for every two large sets of vertices $S$ and $T$, the number of edges between $S$ and $T$ is approximately what it would be, on average, if we had generated the graph by picking edges uniformly at random. Quantitatively, we’ll say that a graph with $n$ vertices and $dn/2$ edges is *$\epsilon$-quasirandom* if for every two disjoint sets of vertices $S$ and $T$ the number $e(S,T)$ of edges between $S$ and $T$ differs from $\frac dn |S| |T|$ by at most $\epsilon d n$. This property holds with vanishingly small $\epsilon$ for the complete graph, and it is quite remarkable that there exist very sparse graphs that are quasirandom. Indeed it is possible to show that a random graph with $O_\epsilon(n)$ edges (that is, a graph where the average degree of a vertex is a constant depending only on $\epsilon$) is $\epsilon$-quasirandom with high probability. While this is remarkable enough, it is also true that there are efficient algorithms that, given a random graph with $O_\epsilon(n)$ edges, are able with high probability to verify that the graph is $\epsilon$-quasirandom. (This is true with some caveats, as explained below.)
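To make the definition concrete, here is a minimal sketch (the function name and parameters are mine, not from the post) that measures the discrepancy $|e(S,T) - \frac dn |S||T||$, in units of $dn$, for randomly sampled disjoint sets in an Erdős–Rényi graph. Sampling a few pairs of sets of course falls far short of checking the definition for *all* pairs, which is exactly why certification is nontrivial.

```python
import numpy as np

def discrepancy(adj, S, T, d, n):
    """|e(S,T) - (d/n)|S||T||, the quantity bounded in the definition."""
    e_ST = adj[np.ix_(S, T)].sum()  # counts each S-T edge once
    return abs(e_ST - (d / n) * len(S) * len(T))

rng = np.random.default_rng(0)
n, d = 400, 40
# Erdos-Renyi graph with average degree about d
adj = (rng.random((n, n)) < d / n).astype(int)
adj = np.triu(adj, 1)
adj = adj + adj.T

# sample random disjoint pairs (S, T) and record the worst
# normalized discrepancy seen, in units of d*n (the epsilon scale)
worst = 0.0
for _ in range(100):
    perm = rng.permutation(n)
    S, T = perm[: n // 2], perm[n // 2:]
    worst = max(worst, discrepancy(adj, S, T, d, n) / (d * n))
print(worst)  # small for a random graph
```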

Here are some implications of such algorithms. Consider the problem of finding a largest independent set in a given graph. (If vertices represent tasks to be performed, and an edge represents a conflict that makes it impossible to perform two tasks at the same time, a largest independent set is a largest choice of simultaneously doable tasks.) This problem is NP-hard, meaning that if there were an efficient optimization algorithm for it, then there would be an algorithm that, given a provable mathematical statement, finds a shortest proof for it in time polynomial in the length of the shortest proof; a number of other highly unlikely algorithmic consequences, including polynomial-time algorithms to break any possible cryptographic protocol, would also follow. (See e.g. this description of the P versus NP problem.) In addition, we know that, in an $n$-vertex graph, even achieving an $n^{1-\delta}$-approximation in polynomial time, for any fixed $\delta > 0$, would imply that $P=NP$ (as proved by Hastad) and that, in particular, exact optimization would also be possible in polynomial time. When we look at graphs of maximum degree $d$, a $d$-approximation is trivial, but it is NP-hard to approximate the problem better than $d/2^{O(\sqrt{\log d})}$ (see this paper).

Now consider a random $d$-regular graph: it is easy to find an independent set of size about $n/d$, and, with high probability, we can prove that the graph is quasirandom, and hence the largest independent set has size $O(n/\sqrt d)$. This means that we achieve with high probability an approximation ratio of $O(\sqrt d)$, and so random graphs are not the hardest instances of the problem. Typically, in combinatorics, when random graphs are not extremal for a given problem, finding extremal constructions is very hard; indeed, one has to work quite hard to come up with examples showing that certain approximation algorithms for the independent set problem (such as the Lovasz theta function) do achieve approximations as bad as $n^{1-o(1)}$ (see this paper by Feige).

The same phenomenon occurs for a number of other graph optimization problems. For example, the Max Cut problem is the problem of finding a partition of the vertices of a given graph into two sets, so that a largest number of edges are “cut,” that is, have endpoints on different sides of the partition. There is a polynomial-time algorithm that achieves a $.878\ldots$ approximation factor, it is NP-hard to approximate the problem better than $16/17$, and there is some evidence that no approximation better than $.878\ldots$ is achievable (see Section 5.1 of this paper for references). On the other hand, in every graph it is easy to find a partition that cuts half of the edges (use a random partition), and in an $\epsilon$-quasirandom graph no partition can cut more than a $\frac 12 + 2\epsilon$ fraction of edges; this means that, here too, when looking for worst-case instances for certain algorithms one must resort to non-random constructions.
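The “easy half” of this observation is that a uniformly random bipartition cuts each edge with probability $1/2$, hence half the edges in expectation. A quick sketch of this fact (my own illustration, with hypothetical names, not code from the post):

```python
import random

def random_cut_fraction(edges, n, rng):
    """Fraction of edges cut by a uniformly random bipartition."""
    side = [rng.random() < 0.5 for _ in range(n)]
    cut = sum(1 for u, v in edges if side[u] != side[v])
    return cut / len(edges)

rng = random.Random(0)
n = 1000
# a random graph with about 5000 edges
edges = list({tuple(sorted(rng.sample(range(n), 2))) for _ in range(5000)})

# average the cut fraction over many random partitions
frac = sum(random_cut_fraction(edges, n, rng) for _ in range(50)) / 50
print(frac)  # concentrates around 0.5
```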

Having motivated the problem, let me briefly describe the algorithms.

First I’d like to discuss an algorithm that gives slightly worse bounds and works only on dense graphs, but is surprisingly simple: count the number of 4-cycles in the graph. This amazing result is due to Chung, Graham, Wilson and Thomason (see the introductory sections of this paper of Gowers for further references). Suppose we have a graph with $n$ vertices and $pn^2/2$ edges; then a first observation is that the number of 4-cycles (that is, the number of 4-tuples of vertices $(u_1,u_2,u_3,u_4)$ such that the four edges $\{u_i,u_{i+1}\}$ exist for $i=1,\ldots,4$, with indices mod 4) is at least $p^4 n^4$, which follows from two applications of Cauchy-Schwarz:

$\displaystyle \sum_{u_1,u_3} \Big( \sum_{u_2} A(u_1,u_2) A(u_2,u_3) \Big)^2 \ \geq\ \frac 1{n^2} \Big( \sum_{u} \deg(u)^2 \Big)^2 \ \geq\ \frac 1{n^4} \Big( \sum_{u} \deg(u) \Big)^4 = p^4 n^4$

where we define $A(u,v) = 1$ if $\{u,v\}$ is an edge, and $A(u,v)=0$ otherwise.

Note that $p^4 n^4$ is the average number of 4-cycles in a random graph with $n$ vertices in which each edge is included with probability $p$. The surprising thing is that if, in a given graph, the number of 4-cycles is close to $p^4 n^4$, then the graph is quasirandom. This follows from applying Cauchy-Schwarz twice, in a similar way, starting from the expression

$\displaystyle \sum_{u \in S,\, v \in T} \big( A(u,v) - p \big)$

which we want to prove to be at most $\epsilon p n^2$ in absolute value for every two disjoint sets of vertices $S$ and $T$, in order to establish that the graph is $\epsilon$-quasirandom. It turns out that the fourth power of the above expression is upper bounded by $n^4$ times the difference between the number of 4-cycles in the graph and $p^4 n^4$ (up to lower-order terms).
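In code, the dense-case certificate amounts to comparing the closed-walk count $\mathrm{trace}(A^4)$, which counts the labeled 4-cycles plus lower-order degenerate walks, against $p^4 n^4$. A sketch under my own naming (not code from the post):

```python
import numpy as np

def four_cycle_certificate(adj):
    """Compare the C4 homomorphism count trace(A^4) with p^4 n^4.

    Returns the normalized excess, which is always >= 0 by
    Cauchy-Schwarz; a value close to 0 certifies quasirandomness
    (dense graphs only)."""
    n = adj.shape[0]
    p = adj.sum() / (n * n)  # edge density (adj is 0/1 symmetric)
    hom_c4 = np.trace(np.linalg.matrix_power(adj, 4))
    return (hom_c4 - p**4 * n**4) / n**4

rng = np.random.default_rng(1)
n, p = 300, 0.5
adj = (rng.random((n, n)) < p).astype(np.int64)
adj = np.triu(adj, 1)
adj = adj + adj.T
excess_random = four_cycle_certificate(adj)

# a highly non-quasirandom graph of the same density: K_{n/2,n/2}
bip = np.zeros((n, n), dtype=np.int64)
bip[: n // 2, n // 2:] = 1
bip = bip + bip.T
excess_bipartite = four_cycle_certificate(bip)
print(excess_random, excess_bipartite)  # small vs. bounded away from 0
```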

When we deal with sparse graphs, unfortunately, this approach breaks down. In fact, any approach that looks at small patterns is bound to fail: a random graph with $dn/2$ edges and a random bipartite graph with $dn/2$ edges look “locally the same,” even though the former is very likely to be quasirandom, while the latter is not quasirandom at all.

For sparse *regular* graphs (in which all vertices have the same degree), “spectral” techniques give extremely good certificates of quasirandomness. Let $G$ be a regular graph where all vertices have degree $d$, and let $A$ be the adjacency matrix of $G$, so that $A(u,v)=1$ if $\{u,v\}$ is an edge and $A(u,v)=0$ otherwise. One can see that:

- $A$ is symmetric, hence all its eigenvalues are real;
- the all-ones vector $(1,\ldots,1)$ is an eigenvector with eigenvalue $d$;
- all eigenvalues are between $-d$ and $d$.

Let now $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ be the eigenvalues of $A$, with multiplicities; as we said, $\lambda_1 = d$.

It turns out that if all the other eigenvalues are at most $\epsilon d$ in absolute value, then $G$ is $\epsilon$-quasirandom. More precisely, if $\lambda := \max_{i \geq 2} |\lambda_i|$, and $S$ and $T$ are disjoint sets of vertices, then

$\displaystyle \Big| e(S,T) - \frac dn |S| |T| \Big| \leq \lambda \sqrt{|S| |T|}$

This result is known as the *expander mixing lemma*, and the intuition of the proof is quite simple. Let $x_1,\ldots,x_n$ be a system of orthonormal eigenvectors for the eigenvalues $\lambda_1,\ldots,\lambda_n$. The number of edges between the sets $S$ and $T$ is

$\displaystyle e(S,T) = \langle \mathbf{1}_S, A \mathbf{1}_T \rangle$

where, for a set $X$, we define $\mathbf{1}_X$ as the vector such that $\mathbf{1}_X(u) = 1$ if $u \in X$ and $\mathbf{1}_X(u) = 0$ otherwise. This can be rewritten as

$\displaystyle e(S,T) = \sum_{i=1}^n \lambda_i \langle \mathbf{1}_S, x_i \rangle \langle x_i, \mathbf{1}_T \rangle$

and the first term is $\frac dn |S| |T|$ (because $x_1 = \frac 1{\sqrt n} (1,\ldots,1)$), while the sum of the other terms can be upper bounded by $\lambda \sqrt{|S||T|}$ in absolute value.

Furthermore, if one picks a $d$-regular graph at random, then all eigenvalues except $\lambda_1$ are at most $O(\sqrt d)$ in absolute value with high probability. One can in fact prove that the upper bound is close to $2\sqrt{d-1}$, which is the best possible, but such arguments are very difficult. (A nitpick here is that said results apply to the probabilistic model where we take the union of $d$ random matchings rather than a uniformly random $d$-regular graph.)
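The spectral certificate is easy to run in practice: compute the eigenvalues of the adjacency matrix and read off $\lambda = \max_{i\geq 2}|\lambda_i|$; the mixing lemma then bounds every discrepancy at once. A sketch (function name mine; the graph model is the union-of-matchings model mentioned above, with parallel edges collapsed, which is a slight cheat):

```python
import numpy as np

def spectral_certificate(adj):
    """Return lambda = max |eigenvalue| over all eigenvalues except the
    largest; by the expander mixing lemma, for disjoint S, T:
        |e(S,T) - d|S||T|/n| <= lam * sqrt(|S||T|)."""
    eigs = np.linalg.eigvalsh(adj)   # sorted ascending; eigs[-1] is ~d
    return max(abs(eigs[0]), abs(eigs[-2]))

rng = np.random.default_rng(2)
n, d = 500, 20                       # n must be even for perfect matchings
adj = np.zeros((n, n))
for _ in range(d):                   # union of d random matchings
    perm = rng.permutation(n)
    for i in range(0, n, 2):
        u, v = perm[i], perm[i + 1]
        adj[u, v] = adj[v, u] = 1    # collisions collapsed, a slight cheat

lam = spectral_certificate(adj)
print(lam)  # with high probability around 2*sqrt(d-1)
```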

To sum up, if a random graph $G$ is generated by taking the union of $d$ random matchings, we can certify that it is $O(1/\sqrt d)$-quasirandom. In the model where each edge is selected with probability $d/n$, similar arguments apply (and can certify $O(1/\sqrt d)$-quasirandomness, where $d$ is the average degree), provided that the average degree is at least logarithmic in $n$. The case of constant average degree is trickier, but it can probably be handled by standard techniques (removing high-degree and isolated vertices before computing the spectrum) that have succeeded in related problems.

Given how satisfactory the combinatorial and algorithmic theories of quasirandomness for graphs are, we may hope for similar results for hypergraphs.

In the dense case, an approach based on counting local patterns indeed works in a completely analogous way. In a 3-uniform hypergraph (a hypergraph where each hyperedge contains three vertices) with $n$ vertices and $pn^3/6$ hyperedges, we shall count the number of 6-tuples of vertices $(u_1,u_2,v_1,v_2,w_1,w_2)$ such that all the 8 hyperedges of the form $\{u_i,v_j,w_k\}$ exist for $i,j,k \in \{1,2\}$. Three applications of Cauchy-Schwarz show that this number is at least $p^8 n^6$, and that if this number is close to $p^8 n^6$ then the hypergraph is quasirandom.

Again, however, local approaches fail for sparse hypergraphs.

An algorithm for the sparse case would have remarkable applications to the kSAT problem. In the kSAT problem we are given a boolean AND-of-ORs expression where each OR term is over $k$ variables; for example, this is an instance of 3SAT:

$\displaystyle (x_1 \vee x_2 \vee \bar x_3) \wedge (\bar x_1 \vee x_3 \vee x_4) \wedge (x_2 \vee \bar x_4 \vee x_5)$

and we want to know if there is a setting of the variables that satisfies the formula. In the above example, setting all variables to True works. This problem is a canonical NP-complete problem for all $k \geq 3$: if a polynomial-time algorithm existed, then there would be polynomial-time algorithms for all problems in NP. (There is a linear-time algorithm to solve 2SAT.)

Recently, there has been considerable interest in the question of the average-case complexity of kSAT, in the model where we pick at random a formula with $n$ variables and $m$ clauses. (A “clause” is an OR term in the formula.)

In the hard graph problems we discussed above, uniformly chosen random instances were “easier” than worst-case instances, assuming $P \neq NP$. In the kSAT case, the situation seems more complex. It is conjectured that, for every $k$, there is a constant $c_k$ such that when we generate a random formula with $m \leq (c_k - \epsilon) n$ clauses then the formula is satisfiable with high probability, and when we generate a random formula with $m \geq (c_k + \epsilon) n$ clauses then the formula is unsatisfiable with high probability. Furthermore, there are algorithms that are conjectured to find satisfying assignments with high probability in the $m \leq (c_k - \epsilon) n$ regime. (See my posts here, here and here for references.)

An open question is whether there is an efficient way to certify the unsatisfiability of random formulas in the $m \geq (c_k + \epsilon) n$ regime.

One approach in this direction is to reduce to a hypergraph problem. Given a kSAT formula with $n$ variables and $m$ clauses, construct a $k$-uniform hypergraph with $2n$ vertices (one for each pair of a variable $x$ and a bit $b$, which we think of as representing the assignment $x:=b$) and $m$ hyperedges, one for each clause, connecting the unique values of the variables that contradict the clause. Suppose the formula has a satisfying assignment; then there is a set of half the vertices such that no hyperedge is completely contained in the set; this is called an independent set. Now, if one generates the formula randomly, then the associated hypergraph is a random hypergraph. The question then becomes: given a random hypergraph, can we certify that there is no large independent set? If we could certify that a random hypergraph of sufficiently large average degree is quasirandom with a small enough parameter, then the non-existence of large independent sets would follow, and we would be able to certify the unsatisfiability of comparably dense random kSAT instances.
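The clause-to-hyperedge construction above can be sketched in a few lines; the literal encoding and function names below are my own conventions, not notation from the post. Each clause becomes the hyperedge of the unique (variable, bit) values that falsify it, and a satisfying assignment picks $n$ of the $2n$ vertices containing no hyperedge:

```python
def formula_to_hypergraph(clauses):
    """Each clause of a kSAT formula becomes one hyperedge: the unique
    assignment to its k variables that falsifies it.  A literal is a
    pair (var, positive); the positive literal x is falsified by x := 0."""
    hyperedges = []
    for clause in clauses:
        edge = frozenset((var, 0 if positive else 1)
                         for var, positive in clause)
        hyperedges.append(edge)
    return hyperedges

def is_independent(assignment, hyperedges):
    """An assignment picks the n vertices (x, assignment[x]); it
    satisfies the formula iff no hyperedge lies inside this set."""
    chosen = {(x, b) for x, b in assignment.items()}
    return all(not edge <= chosen for edge in hyperedges)

# example: (x0 or not x1 or x2) and (not x0 or x1 or x2)
clauses = [[(0, True), (1, False), (2, True)],
           [(0, False), (1, True), (2, True)]]
H = formula_to_hypergraph(clauses)
print(is_independent({0: 1, 1: 1, 2: 0}, H))  # True: this assignment satisfies
```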

A natural approach to certifying the quasirandomness of a given sparse hypergraph is to look for generalizations of spectral methods. While there is no clear definition of what the “eigenvalues” of a 3-hypergraph (or, more generally, of a symmetric trilinear form) should be, Friedman and Wigderson consider the characterization of the second eigenvalue of a graph as the problem of optimizing a certain quadratic form. The problem has a natural generalization to trilinear forms, and yields a definition of “second eigenvalue” of a 3-hypergraph. They are able to prove that this parameter is bounded for random hypergraphs and that when it is bounded the hypergraph is quasirandom. Unfortunately, no algorithm is known to compute this parameter efficiently.

It is becoming a standard conjecture that, indeed, the unsatisfiability of random instances is hard to verify when $m = cn$ and $c$ is an arbitrarily large constant. (Note that the problem becomes easier, or no harder, when $m$ is larger.) On the other hand, simple refutation algorithms exist for kSAT in the setting of very high density, and the Chung-Graham-Wilson approach can certify quasirandomness of dense hypergraphs.

We can then ask what is the minimal density for which such problems admit efficient algorithmic solutions.

For kSAT, so far, the best refutation algorithms work when the number of clauses is about $n^{k/2}$, and, for even $k$, the idea of such algorithms also leads to an algorithm to verify the quasirandomness of random $k$-hypergraphs.

Translated to the hypergraph setting, the algorithm is as follows. Given a $k$-hypergraph ($k$ even) with $n$ vertices and about $n^{k/2}$ hyperedges, we create a graph with $n^{k/2}$ vertices, one per $(k/2)$-tuple. Each $k$-hyperedge is associated to an edge in the graph (by splitting, say, randomly, the $k$-tuple into two $(k/2)$-tuples). One can see that if the graph is quasirandom, then the hypergraph is quasirandom. (The contrapositive is easier to prove: suppose the hypergraph is not quasirandom, and let $S_1,\ldots,S_k$ be the sets that witness that the quasirandomness property is contradicted; then the sets $S_1 \times \cdots \times S_{k/2}$ and $S_{k/2+1} \times \cdots \times S_k$ contradict the quasirandomness of the graph.) If the hypergraph is randomly generated, then the graph is randomly generated, and we can use spectral techniques to verify its quasirandomness.
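The splitting step itself is mechanical; here is a sketch (names and the particular random-split choice are mine) that turns a random 4-uniform hypergraph with about $n^2$ hyperedges into a graph on pairs, with one graph edge per hyperedge:

```python
import random

def split_to_graph(hyperedges, k, rng):
    """Turn a k-uniform hypergraph (k even) on [n] into a graph whose
    vertices are (k/2)-tuples: split each hyperedge at random into two
    halves, as described in the post."""
    graph_edges = set()
    for edge in hyperedges:
        verts = list(edge)
        rng.shuffle(verts)
        left = tuple(sorted(verts[: k // 2]))
        right = tuple(sorted(verts[k // 2:]))
        graph_edges.add(frozenset((left, right)))
    return graph_edges

rng = random.Random(3)
n, k = 50, 4
# a random 4-uniform hypergraph with about n^2 hyperedges
hyperedges = {frozenset(rng.sample(range(n), k)) for _ in range(n * n)}
G = split_to_graph(hyperedges, k, rng)
print(len(G))  # roughly one graph edge per hyperedge, minus rare collisions
```

One would then run the spectral certificate of the earlier section on the resulting (sparse) graph on $n^{k/2}$ vertices.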

For odd $k$, there are algorithms that certify the unsatisfiability of random kSAT formulas with about $n^{k/2}$ clauses; as far as I know, the analogous result is open for hypergraphs. (The kSAT algorithms use properties of kSAT that have no analog in the general hypergraph setting, such as the fact that an assignment that satisfies both $x \vee y \vee z$ and $x \vee y \vee \bar z$ must also satisfy $x \vee y$.)

Substantial work has gone into proving limitations of spectral methods for the problem of certifying the unsatisfiability of random instances of kSAT. Bounds at least as good as those obtained by spectral methods can always be obtained by formulating a semidefinite programming (SDP) relaxation of the given problem, and there are several results showing that even very powerful SDP relaxations of kSAT cannot certify unsatisfiability even for fairly large densities (see e.g. this paper), and the hypergraph quasirandomness problem is only harder. Still, it is plausible that one can verify the quasirandomness of a random $k$-hypergraph with at least $n^{k/2}$ hyperedges, for all $k$. At least, the problem should be solvable in 3-hypergraphs with $n^{1.5+\epsilon}$ hyperedges.

## 20 comments


15 February, 2008 at 2:43 pm

Terence Tao

I have a problem that I can’t solve relating to the certifiability of hypergraph quasirandomness which is related to Luca’s post, and which potentially has various applications to breaking the parity problem in number theory.

It goes as follows. Let $[n] = \{1,\ldots,n\}$ for some large integer n. We consider two functions from the square $[n]^2$ to the square $[n]^2$ (which can be identified with two 4-uniform 4-partite hypergraphs on 4 groups of n vertices). The first function f is a random permutation from $[n]^2$ to $[n]^2$. The second function g is slightly harder to describe. One first needs a random colouring $c: [n] \to \{0,1\}$, and then g is a random permutation subject to the additional property that $c(x)+c(y) = c(z)+c(w) \pmod 2$ whenever $g(x,y) = (z,w)$. To put it another way, writing $C_b := \{x : c(x)=b\}$, the colouring c divides $[n]^2$ in a “checkerboard fashion” into two “white squares” $C_0 \times C_0$, $C_1 \times C_1$ and two “black squares” $C_0 \times C_1$, $C_1 \times C_0$, and g is a random permutation from white squares to white squares and from black squares to black squares.

My problem is as follows: suppose one is given the two functions f and g (but not c) as above, but is not told which is which. Can one build an efficient distinguisher that can determine which function is f and which is g with high probability?

I know that any purely “local” statistic (e.g. number of “octahedra”) cannot distinguish f from g; some sort of “eigenvalue” computation seems to be needed instead. Unfortunately, f and g seem to be too sparse (only $n^2$ hyperedges among $n^4$ possible hyperedges) to get anywhere. Note that the hypergraph associated to f is quasirandom, whilst the hypergraph associated to g is not, so this problem can be viewed as a special case of Luca’s general question.

16 February, 2008 at 9:38 am

Klas M.

Dear Terry,

Would not the number of cycles distinguish f from g with high probability?

A random permutation of an $N$-element set has an expectation of about $\ln N$ cycles, and small variance. The permutation g is built from independent permutations of two sets, both of size around $n^2/2$, and f is a random permutation of a set of size $n^2$, so g is expected to have about twice as many cycles as f.

16 February, 2008 at 10:06 am

Terence Tao

Hmm, fair enough, that solves my problem as stated :-) . I’ll have to think about whether that solution extends to the more complicated situations for which this problem was a toy model.

Suppose though that we change things a little bit. Instead of one colouring map $c: [n] \to \{0,1\}$, we have four maps $c_1, c_2, c_3, c_4: [n] \to \{0,1\}$, chosen independently at random (and all unknown to the distinguisher). Let’s say now that g is random subject to the constraint that $c_1(x)+c_2(y) = c_3(z)+c_4(w)$ whenever $g(x,y) = (z,w)$. It seems now that f and g have the same number and distribution of cycles, so the distinguisher problem needs a different attack in this case.

17 February, 2008 at 10:39 am

Chris

Dear Prof. Tao, I’m not sure if I understand your problem correctly, but consider the following.

I assume the distribution of random colourings is uniform on the set

I need some notation for partitionings of namely let Consider the following (very similar) procedures of sampling such colourings.

1. Choose uniformly at random and independent of choose uniformly at random Now, independent of and choose a random permutation, say from to and let for Now, if toss a coin to decide between and If toss a coin to decide between and Clearly after we defined and also

2. Choose uniformly at random and independent of choose uniformly at random Let Now, independent of and choose a random partitioning of into and such that the partitioning depends on but not on other features of Now, if toss a coin to decide between and If toss a coin to decide between and Clearly after we defined and also The last step is to choose uniformly at random mappings, say from to and from to and glue them to obtain

The observation is that in both sampling schemes follows the same distribution (and in fact follow the same distribution) whereas the defined in (1) is and the defined in (2) is

Thus, if the sampling schemes are correct, and are statistically indistinguishable.

17 February, 2008 at 10:53 am

Terence Tao

Dear Chris,

The problem is that and are functions of only one of the two variables (z,w), and so one cannot always consistently specify these functions as ranges across, say, . For instance suppose and both lie in ; the coin tosses for each pair may lead to inconsistent prescriptions of what should be.

The easiest way to show that f and g are distinguishable is to see that the graph of f (viewed as a sparse 4-uniform 4-partite hypergraph on 4n vertices) is quasirandom in the sense of the main post (in particular, for any sets A, B, C, D of size n/2 in [n], the number of quadruples (x,y,z,w) in A x B x C x D such that f(x,y)=(z,w) is always close to the expected number, which is n^2/16), whereas the graph of g is not (just take A,B,C,D to be the sets where c_1=0, c_2=0, c_3=0, c_4=0 respectively).

17 February, 2008 at 11:10 am

Chris

I see, my bad! :)

17 February, 2008 at 12:24 pm

Jun Tarui

Hi.

For both the original and revised problems, can’t we distinguish f and g with high probability in the following way?

Fix an arbitrary O(log n) by O(log n) square. Focusing on this square, check if there are balanced colorings with respect to which the function in question preserves the parity, where a coloring is balanced if the number of each color doesn’t exceed, say, a 2/3 fraction. Do naive, exhaustive checking; the number of all colorings (balanced or not) is poly(n).

Random coloring is balanced with probability at least 1-1/poly(n). The probability that a random permutation has any parity-preserving colorings is at most n^-Omega(log n) by a union bound over all balanced colorings.

17 February, 2008 at 12:59 pm

Terence TaoDear Jun Tarui,

I initially thought this worked (and it would be a great solution!) but unfortunately the problem is that f and g don’t map small squares to small squares; an O(log n) x O(log n) square gets mapped all over the place inside $[n]^2$, and in fact generically one doesn’t even expect the image to ever repeat any coordinates (one is far below the threshold for the birthday paradox to come into play). So one can’t detect parity preservation inside such a square.

18 February, 2008 at 4:10 am

hikolapola

thanks for such a usefull post!

18 February, 2008 at 6:48 am

Jun Tarui

Dear Terry,

Thank you for explaining my misunderstanding in my previous post above. Let me try again. For simplicity, I’ll consider random balanced colorings, i.e. ones that give each color to n/2 points, and general vs parity-preserving random *functions* from [n]^2 to [n]^2 as opposed to permutations on [n]^2.

Check if there are four points that form a grid-parallel rectangle and get mapped to a (possibly degenerate) rectangle with diagonal pairs going to diagonal pairs. If such four points exist, declare that the function is the parity-preserving g, and f otherwise. I think this distinguishes random f vs parity-preserving random g with constant probability. (You say above “I know that any purely ‘local’ statistic (e.g. number of ‘octahedra’) cannot distinguish f from g,” so maybe I’m again missing something; sorry if this is the case…)

Consider an arbitrary rectangle R. Prob[f(R) is a rectangle] is about 2/n^4. For g, R is either (1) monochromatic or (2) bichromatic with vertical/horizontal/diagonal pairs having the same color; in all cases, Prob[g(R) is a rectangle] is about 4/n^4, i.e., twice as large. The number of rectangles is (n choose 2)^2, i.e., about n^4/4. So the expected number of rectangles mapping to a rectangle is about 1/2 for f and about 1 for g.

We want to apply a Poisson approximation argument (Alon-Spencer: The probabilistic method, Chapter 8). So consider the probability of two rectangles R1 and R2 both getting mapped to rectangles. Consider the size of their intersection, i.e. how many points belong to both R1 and R2; it is either 0 or 1 or 2. If it is 0 or 1, we can consider the events of R1/R2 getting mapped to a rectangle as independent for both cases of f and g. If it is 2, the intersection forms a vertical or horizontal pair, and the probabilities for f and g respectively are about 2/n^7 and 8/n^7; so they are O(1/n^7). But the number of such pairs (R1, R2) is O(n^5), and thus their contribution for the sake of Poisson approximation (“Delta” in Alon-Spencer) is only O(1/n^2).

So for any balanced coloring, under a parity-preserving random permutation g, the probability that at least one rectangle gets mapped to a rectangle is 1 – e^(-1+O(1/n^2)), i.e., about 1-1/e, whereas for a random permutation f, the probability is at most about 1/2 since the expectation is about 1/2.

18 February, 2008 at 11:55 am

Terence Tao

Dear Jun Tarui,

Your analysis looks correct, and does provide a probabilistic distinguisher between f and g that has a greater than 50% chance of being accurate, though it does not seem obvious how to iterate it and create a distinguisher which is accurate with arbitrarily high probability for any fixed f and g. (At the risk of seeming to constantly move the goal posts, I actually want a little bit more than this: I want a rigorous certificate of quasirandomness which cannot be satisfied by any g, but is satisfied by most f. But a 99% accurate distinguisher would already be progress, and more than I am currently able to do.)

I guess I should have qualified my previous assertion on equivalence of local statistics; my analysis was only for those patterns which occurred quite frequently (at least n^c of them for some c > 0). In such cases, the number of independent variables exceeds twice the number of equations and so the parity constraints have only a negligible effect. But as you point out, the statistics do shift a little bit for those patterns which only occur O(1) times.

18 February, 2008 at 6:24 pm

Chris

Dear Prof. Tao,

is it OK to assume I can compute f and g in constant time?

Also, what do you mean by an efficient distinguisher in your initial post? Since $n$ is large, do you expect it to take poly(log(n)) time? Or time linear in n, or maybe poly(n) is OK?

19 February, 2008 at 10:57 am

Terence Tao

Dear Chris,

The best distinguisher I know that accepts all g and rejects most f has exponential cost in n, so anything polynomial in n or even subexponential would be an improvement. (In particular this means that lookup tables for f and g are essentially free.)

19 February, 2008 at 8:41 pm

Jun Tarui

Dear Terry,

Just one note to your remark (in your comment 1) “Unfortunately, f and g seem to be too sparse (only n^2 hyperedges among n^4 possible hyperedges) to get anywhere. ”

If there are cn^2 hyperedges for a sufficiently large constant c, I think one can certify non-quasirandomness of g by the procedure explained in the last part of the main post. In particular, for your revised problem, given 16 independent copies of f and g (independent random g that are parity-preserving with respect to fixed colorings), I think we can distinguish them with high probability.

20 February, 2008 at 4:42 am

Jun Tarui

Dear Terry,

I think that the following solves your revised problem as stated (without certifying non-quasirandomness).

Let h be the permutation in question. For x, y, z, w in [n], if h(x, y) = (z, w), write the equation c1(x) + c2(y) = c3(z) + c4(w); we thus obtain n^2 mod-2 linear equations in 4n unknowns. Solve the linear system to see if it has a solution such that at least one of c1, c2, c3, c4 is not constant, i.e., not all-0 and not all-1. (This is equivalent to the kernel having dimension at least 4; it can be easily checked.)

With high probability, a random permutation f does not have such a solution: there are 2^(4n) possible colorings (= solutions); for each coloring, the probability that a random permutation respects the coloring is smaller than n^(-Omega(n)).

20 February, 2008 at 10:15 am

Terence Tao

Dear Jun Tarui,

That is a nice solution to the problem! The one drawback (for the applications I have in mind) is that it is not too robust with respect to noise; if for instance the parity condition is violated by g for o(n^2) of the pairs x,y then it is not clear how to do “robust linear algebra over F_2” and make the distinguisher work in this case. (In other words, as you say, it is not a quasirandomness certificate.) Though if the noise was much smaller, say only O(log n) errors, then presumably your earlier random sampling trick would work to eradicate it.

(I’m also interested in the case when the functions f, g are only partially defined, say on 1% of the pairs x,y, and possibly multi-valued and non-injective (e.g. each (z,w) could be hit by as many as 100 (x,y)). But any reasonably robust distinguisher for the permutation case should extend to this more general case – for instance, your linear algebra distinguisher seems to work in this case as long as there is only a negligible amount of noise.)

21 February, 2008 at 11:56 am

Anon

A minor correction: the number of copies of the 4-cycle in a graph with density $p$ is at least $p^4 n^4$, and not at most $p^4 n^4$.

21 February, 2008 at 3:24 pm

Terence Tao

Thanks for the correction!

Best,

Terry

25 May, 2008 at 9:30 pm

Local Events, Turan’s Problem and Limits of Graphs and Hypergraphs « Combinatorics and more

[...] Related posts: “Extremal Combinatorics I,” and over Terry Tao’s blog: (Luca Trevisan) Checking the Quasirandomness of Graphs and Hypergraphs [...]

10 March, 2010 at 3:47 pm

The extremal utility belt: Cauchy-Schwarz « Portrait of the Mathematician

[...] extremal utility belt: Cauchy-Schwarz By Harrison I found this little gem in an old post on Terry Tao’s blog. There’s not really enough content in it to merit an entire [...]