The question in extremal graph theory I wish to discuss here originates from Luca Trevisan; it shows that we still don’t know everything that we should about the “local” properties of large dense graphs.

Let G = (V,E) be a large (undirected) graph, thus V is the vertex set with some large number n of vertices, and E is the collection of edges {x,y} connecting two vertices in the graph. We can allow the graph to have loops {x,x} if one wishes; it’s not terribly important for this question (since the number of loops is so small compared to the total number of edges), so let’s say there are no loops. We define three quantities of the graph G:

  • The edge density 0 \leq \alpha \leq 1, defined as the number of edges in G, divided by the total number of possible edges, i.e. n(n-1)/2;
  • The triangle density 0 \leq \beta \leq 1, defined as the number of triangles in G (i.e. unordered triplets {x,y,z} such that {x,y}, {y,z}, {z,x} all lie in E), divided by the total number of possible triangles, namely n(n-1)(n-2)/6;
  • The diamond density 0 \leq \gamma \leq 1, defined as the number of diamonds in G (i.e. unordered pairs { {x,y,z}, {x,y,w} } of triangles in G which share a common edge), divided by the total number of possible diamonds, namely n(n-1)(n-2)(n-3)/4.
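
To pin down these normalisations, here is a small Python sketch of my own (the function name densities and its interface are purely illustrative); it computes all three densities exactly by brute force, so it is only practical for small graphs:

```python
from itertools import combinations

def densities(n, edges):
    """Exact (alpha, beta, gamma) for a loopless graph on vertices
    0..n-1, with `edges` given as an iterable of vertex pairs."""
    E = {frozenset(e) for e in edges}
    # Edge density: edges over the n(n-1)/2 possible edges.
    alpha = len(E) / (n * (n - 1) / 2)
    # Triangle density: triples with all three pairs present,
    # over the n(n-1)(n-2)/6 possible triangles.
    triangles = [t for t in combinations(range(n), 3)
                 if all(frozenset(p) in E for p in combinations(t, 2))]
    beta = len(triangles) / (n * (n - 1) * (n - 2) / 6)
    # Diamond density: unordered pairs of triangles sharing two
    # vertices (hence a common edge), over n(n-1)(n-2)(n-3)/4.
    diamonds = sum(1 for t1, t2 in combinations(triangles, 2)
                   if len(set(t1) & set(t2)) == 2)
    gamma = diamonds / (n * (n - 1) * (n - 2) * (n - 3) / 4)
    return alpha, beta, gamma

print(densities(4, combinations(range(4), 2)))  # K_4 gives (1.0, 1.0, 1.0)
```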

Up to insignificant errors of o(1) (i.e. anything which goes to zero as the number of vertices goes to infinity), these densities can also be interpreted probabilistically as follows: if x,y,z,w are vertices selected uniformly and independently at random from V, then

  • {\Bbb P}( \{x, y\} \in E ) = \alpha + o(1);
  • {\Bbb P}( \{x, y\},\{y,z\}, \{z,x\} \in E ) = \beta + o(1);
  • {\Bbb P}( \{x, y\}, \{y,z\}, \{z,x\}, \{y,w\}, \{w,x\} \in E ) = \gamma + o(1).

(The errors of o(1) arise because the vertices x,y,z,w may occasionally collide with each other, though this probability becomes very small when n is large.) Thus we see that these densities are “local” quantities of the graph, as we only need to statistically sample the graph at a small number of randomly chosen vertices in order to estimate them.
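
As an illustration of this sampling interpretation, here is a minimal Monte Carlo sketch, again of my own devising: it estimates the three probabilities above by drawing random vertices, collisions included, which is exactly where the o(1) errors come from.

```python
import random

def sampled_densities(n, E, trials=100_000):
    """Estimate (alpha, beta, gamma) for a graph on vertices 0..n-1
    with edge set E (a set of frozensets of vertex pairs)."""
    has = lambda u, v: frozenset((u, v)) in E
    a = b = c = 0
    for _ in range(trials):
        x, y, z, w = (random.randrange(n) for _ in range(4))
        if has(x, y):
            a += 1
            if has(y, z) and has(z, x):
                b += 1
                if has(y, w) and has(w, x):
                    c += 1
    # Each estimate is within O(1/sqrt(trials)) of the corresponding
    # probability with high probability, plus the o(1) collision error.
    return a / trials, b / trials, c / trials
```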

A general question is to determine all the constraints relating \alpha, \beta, \gamma in the limit n \to \infty. (It is known from the work of Lovász and Szegedy that the relationships between local graph densities such as these stabilise in this limit; indeed, given any error tolerance \epsilon > 0 and any large graph G with densities \alpha,\beta,\gamma, there exists a graph with “only” O_\epsilon(1) vertices whose densities \alpha', \beta', \gamma' differ from those of G by at most \epsilon, although the best known bounds for O_\epsilon(1) are far too poor at present to be able to get any useful information on the asymptotic constraint set by direct exhaustion by computer of small graphs.)

Let us forget about diamonds for now and only look at the edge and triangle densities \alpha, \beta. Then the story is already rather non-trivial. The main concern is to figure out, for each fixed \alpha, what the best possible upper and lower bounds on \beta are (up to o(1) errors); since the collection of graphs with a given edge density is “path-connected” in some sense, it is not hard to see that every value of \beta between the upper and lower bounds is feasible modulo o(1) errors.

The best possible upper bound is easy: \beta \leq \alpha^{3/2} + o(1). This can be established by the Kruskal-Katona theorem, by the Loomis-Whitney inequality (or the closely related box theorem), or by just two applications of Hölder’s inequality; we leave this as an exercise. The bound is sharp, as can be seen by looking at a complete subgraph on (\alpha^{1/2}+o(1))n vertices. (We thank Tim Austin and Imre Leader for these observations and references, as well as those in the paragraph below.) [Update, Apr 21: There is some literature on refining the o(1) factor; see this paper for a survey.]
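
A quick numerical check of this sharpness claim (my own, using nothing beyond binomial coefficients): a clique on m of the n vertices has \binom{m}{2} edges and \binom{m}{3} triangles, so its densities obey \beta \approx \alpha^{3/2} once n is large.

```python
from math import comb

# Clique on m of n vertices: alpha ~ (m/n)^2 and beta ~ (m/n)^3,
# so beta ~ alpha^{3/2} up to o(1) errors.
n, m = 10_000, 7_000
alpha = comb(m, 2) / comb(n, 2)
beta = comb(m, 3) / comb(n, 3)
print(beta, alpha ** 1.5)  # the two values nearly agree
```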

The lower bound is trickier. The complete bipartite graph example shows that the trivial lower bound \beta \geq 0 is attainable when \alpha \leq 1/2 - o(1), and Turán’s theorem shows that this is sharp. For \alpha \geq 1/2, a classical theorem of Goodman (see also Nordhaus and Stewart) shows that \beta \geq \alpha(2\alpha-1)-o(1). When \alpha = 1 - 1/k for some integer k, this inequality is sharp, as can be seen by looking at the complete k-partite graph with parts of equal size.
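
To see this sharpness concretely, one can count directly in the complete k-partite graph with equal parts (the Turán graph); the following sketch of mine checks numerically that its triangle density matches Goodman’s bound \alpha(2\alpha-1) at \alpha = 1 - 1/k.

```python
from math import comb

def turan_densities(k, part):
    """Edge and triangle densities of the complete k-partite graph
    with k parts of size `part`."""
    n = k * part
    alpha = comb(k, 2) * part ** 2 / comb(n, 2)  # one edge per cross-part pair
    beta = comb(k, 3) * part ** 3 / comb(n, 3)   # one triangle per triple of parts
    return alpha, beta

a, b = turan_densities(4, 500)     # k = 4, so alpha ~ 1 - 1/4 = 3/4
print(b, a * (2 * a - 1))          # nearly equal: Goodman is sharp here
```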

Goodman’s result is thus sharp at infinitely many values of \alpha, but it turns out that it is not quite the best bound. After several partial results, the optimal bound was obtained recently by Razborov, who established for 1-1/k < \alpha < 1-1/(k+1) that

\beta \geq \frac{(k-1) \left(k-2\sqrt{k(k-\alpha(k+1))}\right) \left(k + \sqrt{k(k-\alpha(k+1))} \right)^2}{k^2 (k+1)^2}- o(1)

and that this is sharp (!) except for the o(1) error. [Update, Apr 21: For some work on understanding the o(1) error, see this paper of Fisher.]
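
A useful sanity check on this formula, which the following sketch of mine performs numerically: at the endpoints \alpha = 1-1/k and \alpha = 1-1/(k+1) the expression collapses to (k-1)(k-2)/k^2 and k(k-1)/(k+1)^2 respectively, agreeing with Goodman’s bound at exactly the values where the complete multipartite examples are extremal.

```python
from math import sqrt

def razborov_bound(alpha, k):
    """Razborov's lower bound for edge density alpha in [1-1/k, 1-1/(k+1)]."""
    s = sqrt(k * (k - alpha * (k + 1)))
    return (k - 1) * (k - 2 * s) * (k + s) ** 2 / (k ** 2 * (k + 1) ** 2)

def goodman_bound(alpha):
    return alpha * (2 * alpha - 1)

for k in (2, 3, 4):
    for alpha in (1 - 1 / k, 1 - 1 / (k + 1)):
        # The two bounds coincide at the endpoints of each interval.
        print(k, round(razborov_bound(alpha, k), 6), round(goodman_bound(alpha), 6))
```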

Now we consider the full problem of relating edge densities, triangle densities, and diamond densities. Given that the relationships between \alpha, \beta were already so complex, a full characterisation of the constraints connecting \alpha, \beta, \gamma is probably impossible at this time (though it might be possible to show that these constraints are decidable via some (impractical) computer algorithm, and it also looks feasible to determine the exact constraints between just \alpha and \gamma). Luca’s question, however, focuses on a specific regime in the configuration space, in which \beta is exceptionally small.

From the Cauchy-Schwarz inequality and the observation that a diamond is nothing more than a pair of triangles with a common edge, we obtain the inequality

\gamma \geq \frac{\beta^2}{\alpha} - o(1). (*)
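
To fill in the Cauchy-Schwarz step, here is a quick sketch; the notation t(e), for the number of triangles containing a given edge e, is introduced just for this computation. The sum \sum_e t(e) over the roughly \alpha n^2/2 edges is three times the number of triangles, i.e. \beta n^3/2 + o(n^3), while the number of diamonds is \sum_e \binom{t(e)}{2} = \frac{1}{2} \sum_e t(e)^2 - O(n^3). Cauchy-Schwarz then gives

\sum_e t(e)^2 \geq \frac{(\sum_e t(e))^2}{|E|} = \frac{(\beta n^3/2)^2}{\alpha n^2/2} + o(n^4) = \frac{\beta^2 n^4}{2\alpha} + o(n^4),

so there are at least \frac{\beta^2 n^4}{4\alpha} - o(n^4) diamonds; dividing by the roughly n^4/4 possible diamonds yields (*).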

Because we understand very well when equality holds in the Cauchy-Schwarz inequality, we know that (*) is only sharp when the triangles are distributed “evenly” among the edges, so that almost every edge is incident to roughly the expected number of triangles (which is about \beta n / \alpha).

However, it is a remarkable fact that this type of equidistribution is known to be impossible when \beta is very small. Indeed, the triangle removal lemma of Ruzsa and Szemerédi asserts that if \beta is small, then one can in fact make \beta vanish (i.e. delete all triangles) by removing at most c(\beta) n^2 edges, where c(\beta) \to 0 in the limit \beta \to 0. This shows that among all the roughly \alpha n^2/2 edges in the graph, some set of at most c(\beta) n^2 of them already meets every triangle in the graph. This, and Cauchy-Schwarz, gives a bound of the form

\gamma \geq \frac{\beta^2}{c(\beta)} - o(1), (**)

which is a better bound than (*) when \beta is small compared with \alpha.

Luca’s question is: can one replace c(\beta) in (**) by any more civilised function of \beta? To explain what “civilised” means, let me show you the best bound that we know of today. Let 2 \uparrow \uparrow n be the tower-exponential of height n, defined recursively by

2 \uparrow\uparrow 1 := 2; \quad 2 \uparrow\uparrow (n+1) := 2^{2 \uparrow\uparrow n};

this is a very rapidly growing function, faster than exponential, double exponential, or any other finite iterated exponential. We invert this function and define the inverse tower function \log_* n by

\log_* n := \inf \{ m: 2 \uparrow \uparrow m \geq n \}.

This function goes to infinity as n \to \infty, but very slowly – slower than \log n or even \log \log n (which, as famously stated by Carl Pomerance, “is proven to go to infinity, but has never been observed to do so”).
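
Both functions are a few lines of Python; the following sketch of mine (with illustrative names) makes the growth rates vivid.

```python
def tower(m):
    """2 ↑↑ m: a tower of m twos."""
    t = 2
    for _ in range(m - 1):
        t = 2 ** t
    return t

def log_star(n):
    """inf { m : 2 ↑↑ m >= n }."""
    m, t = 1, 2
    while t < n:
        m, t = m + 1, 2 ** t
    return m

print([tower(m) for m in range(1, 5)])  # [2, 4, 16, 65536]
print(log_star(10 ** 80))               # 5, even at this cosmic scale
```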

The best bound on c(\beta) known is of the form

c(\beta) \ll \left(\log_* \frac{1}{\beta}\right)^{-\epsilon}

for some absolute constant \epsilon > 0 (e.g. 1/10 would work here). This bound is so poor because the proof goes via the Szemerédi regularity lemma, which is known by the work of Gowers to necessarily have tower-type dependencies in the constants.

The open question is whether one can obtain a bound of the form (**) in which 1/c(\beta) is replaced by a quantity which grows better in \beta, e.g. one which grows logarithmically or double logarithmically rather than inverse-tower-exponential. Such a bound would perhaps lead the way to improving the bounds on the triangle removal lemma; we now have many proofs of this lemma, but they all rely on one form or another of the regularity lemma and so inevitably have the tower-exponential type bounds present. The triangle removal lemma is also connected to many other problems, including property testing for graphs and Szemerédi’s theorem on arithmetic progressions (indeed, the triangle removal lemma implies the length three special case of Szemerédi’s theorem, i.e. Roth’s theorem), so progress on improving (**) may well lead to much better bounds in many other problems, as well as furnishing another tool beyond the regularity lemma with which to attack these problems.

Curiously, the work of Lovász and Szegedy implies that the question can be rephrased in a purely analytic fashion, without recourse to graphs. Let W: [0,1]^2 \to [0,1] be a measurable symmetric function on the unit square, and consider the quantities

\beta := \int_0^1 \int_0^1 \int_0^1 W(x,y) W(y,z) W(z,x)\ dx dy dz

and

\gamma := \int_0^1 \int_0^1 \int_0^1 \int_0^1 W(x,y) W(y,z) W(z,x) W(y,w) W(w,x) \ dx dy dz dw.

Any bound connecting \beta and \gamma here is known to imply the same bound for triangle and diamond densities (with an error of o(1)), and vice versa. Thus, the question is now to establish the inequality \gamma \geq \beta^2/c'(\beta) for some civilised value of c'(\beta), which at present is only known to decay to zero as \beta \to 0 like an inverse tower-exponential function.
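
To make the analytic formulation concrete, here is a final Monte Carlo sketch, again entirely my own illustration, which estimates the two integrals above for a given kernel W; for the constant kernel W \equiv p (corresponding to the Erdős-Rényi random graph) one should see \beta \approx p^3 and \gamma \approx p^5.

```python
import random

def graphon_densities(W, trials=200_000):
    """Monte Carlo estimates of the triangle and diamond integrals
    for a symmetric measurable W : [0,1]^2 -> [0,1]."""
    beta = gamma = 0.0
    for _ in range(trials):
        x, y, z, w = (random.random() for _ in range(4))
        t = W(x, y) * W(y, z) * W(z, x)      # triangle integrand
        beta += t
        gamma += t * W(y, w) * W(w, x)       # diamond integrand
    return beta / trials, gamma / trials

# Constant kernel W = 1/2: expect beta ~ 1/8 = 0.125, gamma ~ 1/32 = 0.03125.
print(graphon_densities(lambda x, y: 0.5))
```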

[Update, Apr 4: wording changed to remove ambiguity about different meanings of c(\beta).][Update, Apr 21: Thanks to Vlado Nikiforov for pointing out some additional references and related questions.]