The question in extremal graph theory I wish to discuss here originates from Luca Trevisan; it shows that we still don’t know everything that we should about the “local” properties of large dense graphs.

Let $G = (V,E)$ be a large (undirected) graph, thus $V$ is the vertex set with some large number $n$ of vertices, and $E$ is the collection of edges $\{x,y\}$ connecting two vertices in the graph. We can allow the graph to have loops $\{x,x\}$ if one wishes; it’s not terribly important for this question (since the number of loops is so small compared to the total number of edges), so let’s say there are no loops. We define three quantities of the graph $G$:

- The *edge density* $q$, defined as the number of edges in $G$, divided by the total number of possible edges, i.e. $q := |E| / \binom{n}{2}$;
- The *triangle density* $t$, defined as the number of triangles in $G$ (i.e. unordered triplets $\{x,y,z\}$ such that $\{x,y\}, \{y,z\}, \{z,x\}$ all lie in $E$), divided by the total number of possible triangles, namely $\binom{n}{3}$;
- The *diamond density* $d$, defined as the number of diamonds in $G$ (i.e. unordered pairs $\{ \{x,y,z\}, \{x,y,w\} \}$ of triangles in $G$ which share a common edge), divided by the total number of possible diamonds, namely $6\binom{n}{4}$.

Up to insignificant errors of $o(1)$ (i.e. anything which goes to zero as the number of vertices goes to infinity), these densities can also be interpreted probabilistically as follows: if $x, y, z, w$ are vertices selected independently and uniformly at random from $V$, then

- $q = \mathbf{P}( \{x,y\} \in E ) + o(1)$;
- $t = \mathbf{P}( \{x,y\}, \{y,z\}, \{z,x\} \in E ) + o(1)$;
- $d = \mathbf{P}( \{x,y\}, \{y,z\}, \{z,x\}, \{x,w\}, \{y,w\} \in E ) + o(1)$.

(The errors of $o(1)$ arise because the vertices $x, y, z, w$ may occasionally collide with each other, though this probability becomes very small when $n$ is large.) Thus we see that these densities are “local” quantities of the graph, as we only need to statistically sample the graph at a small number of randomly chosen vertices in order to estimate them.
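These sampling interpretations are easy to test numerically. Here is a minimal sketch (the graph model, function name, and parameters are my own choices, not from the post) that estimates all three densities of an Erdős–Rényi random graph $G(n,p)$ by sampling random 4-tuples of vertices:

```python
import random
from itertools import combinations

def sample_densities(n=300, p=0.5, samples=20000, seed=0):
    """Estimate the edge, triangle, and diamond densities of a random
    graph G(n, p) via the probabilistic definitions above; vertex
    collisions contribute only the o(1) error discussed in the post."""
    rng = random.Random(seed)
    edges = {frozenset(e) for e in combinations(range(n), 2)
             if rng.random() < p}

    def adj(a, b):
        return frozenset((a, b)) in edges

    q_hits = t_hits = d_hits = 0
    for _ in range(samples):
        x, y, z, w = (rng.randrange(n) for _ in range(4))
        if adj(x, y):
            q_hits += 1                      # edge event
            if adj(y, z) and adj(z, x):
                t_hits += 1                  # triangle event
                if adj(x, w) and adj(y, w):
                    d_hits += 1              # diamond event
    return q_hits / samples, t_hits / samples, d_hits / samples
```

For $p = 1/2$ one expects $q \approx 1/2$, $t \approx 1/8$, and $d \approx 1/32$, up to sampling error and the $o(1)$ collision terms.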

A general question is to determine all the constraints relating $q$, $t$, and $d$ in the limit $n \to \infty$. (It is known from the work of Lovász and Szegedy that the relationships between local graph densities such as these stabilise in this limit; indeed, given any error tolerance $\epsilon > 0$ and any large graph G with densities $q, t, d$, there exists a graph with “only” $N(\epsilon)$ vertices whose densities differ from those of G by at most $\epsilon$, although the best known bounds for $N(\epsilon)$ are far too poor at present to be able to get any useful information on the asymptotic constraint set by direct exhaustion by computer of small graphs.)

Let us forget about diamonds for now and only look at the edge and triangle densities $q$ and $t$. Then the story is already rather non-trivial. The main concern is to figure out, for each fixed $q$, what the best possible upper and lower bounds on $t$ are (up to o(1) errors); since the collection of graphs with a given edge density is “path-connected” in some sense, it is not hard to see that every value of $t$ between the upper and lower bounds is feasible modulo o(1) errors.

The best possible upper bound is easy: $t \leq q^{3/2} + o(1)$. This can be established by either the Kruskal-Katona theorem, the Loomis-Whitney inequality (or the closely related box theorem), or just two applications of Hölder’s inequality; we leave this as an exercise. The bound is sharp, as can be seen by looking at a complete graph on roughly $q^{1/2} n$ of the vertices (leaving the rest isolated). (We thank Tim Austin and Imre Leader for these observations and references, as well as those in the paragraph below.) [*Update*, Apr 21: There is some literature on refining the o(1) factor; see this paper for a survey.]
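The sharpness of the $t \leq q^{3/2} + o(1)$ bound can be checked with exact binomial counts: take a complete graph on $m$ of the $n$ vertices and leave the rest isolated (a quick sketch; the function name is mine).

```python
from math import comb

def clique_densities(n, m):
    """Exact edge and triangle densities of the n-vertex graph whose
    edges form a complete graph on m vertices (the rest isolated)."""
    q = comb(m, 2) / comb(n, 2)
    t = comb(m, 3) / comb(n, 3)
    return q, t
```

With $n = 10^4$ and $m = n/2$ one finds $q \approx 1/4$ and $t$ within $10^{-3}$ of $q^{3/2} \approx 1/8$, consistent with the $o(1)$ error.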

The lower bound is trickier. The complete bipartite graph example shows that the trivial lower bound $t \geq 0$ is attainable (modulo o(1) errors) when $q \leq 1/2$, and Turán’s theorem shows that this range of $q$ is sharp. For $1/2 \leq q \leq 1$, a classical theorem of Goodman (see also Nordhaus and Stewart) shows that $t \geq q(2q-1) - o(1)$. When $q = 1 - 1/k$ for some integer $k$, this inequality is sharp, as can be seen by looking at the complete $k$-partite graph with equal-size parts.
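Sharpness at $q = 1 - 1/k$ can likewise be verified exactly on balanced complete $k$-partite graphs (a sketch; names are mine, and $k$ is assumed to divide $n$):

```python
from math import comb

def turan_densities(n, k):
    """Exact edge and triangle densities of the complete k-partite
    graph on n vertices with k equal parts (assumes k divides n)."""
    s = n // k
    edges = comb(n, 2) - k * comb(s, 2)      # all pairs minus within-part pairs
    triangles = comb(k, 3) * s ** 3          # one vertex from each of 3 parts
    return edges / comb(n, 2), triangles / comb(n, 3)
```

For each $k$, the triangle density agrees with Goodman's expression $q(2q-1)$ up to $O(1/n)$ corrections.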

Goodman’s result is thus sharp at infinitely many values of $q$, but it turns out that it is not quite the best bound. After several partial results, the optimal bound was obtained recently by Razborov, who established for $1 - \frac{1}{k} \leq q \leq 1 - \frac{1}{k+1}$ that

$\displaystyle t \geq \frac{(k-1)\left(k - 2\sqrt{k(k - q(k+1))}\right)\left(k + \sqrt{k(k - q(k+1))}\right)^2}{k^2 (k+1)^2} - o(1),$
and that this is sharp (!) except for the o(1) error. [Update, Apr 21: For some work on understanding the o(1) error, see this paper of Fisher.]
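Razborov's bound, transcribed into code below (the transcription from his paper is my own, so treat it as an assumption), can be sanity-checked numerically: it collapses to the Goodman/Turán values at the endpoints $q = 1 - 1/k$ and $q = 1 - 1/(k+1)$, and strictly improves on Goodman's bound in between.

```python
from math import sqrt

def razborov_bound(q, k):
    """Razborov's sharp lower bound on the triangle density at edge
    density q, for 1 - 1/k <= q <= 1 - 1/(k+1)."""
    s = sqrt(max(k * (k - q * (k + 1)), 0.0))  # clamp tiny negative round-off
    return (k - 1) * (k - 2 * s) * (k + s) ** 2 / (k ** 2 * (k + 1) ** 2)

def turan_triangle_density(k):
    """Limiting triangle density of the balanced complete k-partite graph."""
    return (k - 1) * (k - 2) / k ** 2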

Now we consider the full problem of relating edge densities, triangle densities, and diamond densities. Given that the relationships between $q$ and $t$ were already so complex, a full characterisation of the constraints connecting $q$, $t$, and $d$ is probably impossible at this time (though it might be possible to prove that they are decidable via some (impractical) computer algorithm, and it also looks feasible to determine the exact constraints between just $q$ and $d$). The question of Luca however focuses on a specific regime in the configuration space, in which $t$ is exceptionally small.

From the Cauchy-Schwarz inequality and the observation that a diamond is nothing more than a pair of triangles with a common edge, we obtain the inequality

$\displaystyle d \geq \frac{t^2}{q} - o(1). \qquad (*)$

Because we understand very well when equality holds in the Cauchy-Schwarz inequality, we know that (*) would only be sharp when the triangles are distributed “evenly” among the edges, so that almost every edge is incident to roughly the expected number of triangles (which is roughly $tn/q$).
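The counting identity behind (*), namely that the number of diamonds equals $\sum_e \binom{T_e}{2}$ where $T_e$ is the number of triangles through the edge $e$, is easy to verify on a random graph, where near-equality in (*) is expected (a sketch with my own choice of names and parameters):

```python
import random
from itertools import combinations
from math import comb

def density_check(n=120, p=0.4, seed=1):
    """Compute q, t, d exactly for one G(n, p) sample via per-edge
    triangle counts T_e, using diamonds = sum_e C(T_e, 2)."""
    rng = random.Random(seed)
    adj = [[False] * n for _ in range(n)]
    edges = []
    for x, y in combinations(range(n), 2):
        if rng.random() < p:
            adj[x][y] = adj[y][x] = True
            edges.append((x, y))
    T = [sum(1 for z in range(n) if adj[x][z] and adj[y][z])
         for x, y in edges]
    triangles = sum(T) // 3                      # each triangle has 3 edges
    diamonds = sum(c * (c - 1) // 2 for c in T)  # pairs of triangles per edge
    q = len(edges) / comb(n, 2)
    t = triangles / comb(n, 3)
    d = diamonds / (6 * comb(n, 4))
    return q, t, d
```

For a quasirandom graph the triangles are spread evenly over the edges, so $d$ should be close to $t^2/q$ (here both are near $p^5 \approx 0.01$), matching the equality analysis above.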

However, it is a remarkable fact that this type of equidistribution is known to be impossible when $t$ is very small. Indeed, the triangle removal lemma of Ruzsa and Szemerédi asserts that if $t$ is small, then one can in fact make the triangle density vanish (i.e. delete all triangles) by removing at most $c(t) n^2$ edges, where $c(t) \to 0$ in the limit $t \to 0$. This shows that among all the roughly $qn^2/2$ edges in the graph, there is a set of at most $c(t) n^2$ of them which is already incident to all the triangles in the graph. This, and Cauchy-Schwarz, gives a bound of the form

$\displaystyle d \gg \frac{t^2}{c(t)} - o(1), \qquad (**)$

which is a better bound than (*) when $c(t)$ is small compared with $q$.

Luca’s question is: can one replace $c(t)$ in (**) by any more civilised function of $t$? To explain what “civilised” means, let me show you the best bound that we know of today. Let $T(n)$ be the tower-exponential of height $n$, defined recursively by

$\displaystyle T(1) := 2; \quad T(n+1) := 2^{T(n)};$

this is a very rapidly growing function, faster than exponential, double exponential, or any other finite iterated exponential. We invert this function and define the inverse tower function $T^{-1}$ by

$\displaystyle T^{-1}(x) := \min \{ n: T(n) \geq x \}.$

This function goes to infinity as $x \to \infty$, but *very* slowly – slower than $\log x$ or even $\log \log x$ (which, as famously stated by Carl Pomerance, “is proven to go to infinity, but has never been observed to do so”).
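The recursion and its inverse are trivial to code, which makes the absurd growth tangible (a sketch; the base case $T(1) = 2$ is my assumption about the normalisation):

```python
def tower(n):
    """Tower-exponential of height n: T(1) = 2, T(n+1) = 2^T(n)."""
    return 2 if n == 1 else 2 ** tower(n - 1)

def inverse_tower(x):
    """T^{-1}(x): the least n with T(n) >= x."""
    n = 1
    while tower(n) < x:
        n += 1
    return n
```

One has $T(4) = 65536$, while $T(5) = 2^{65536}$ already dwarfs a googol; so $T^{-1}$ is still only 5 at $10^{100}$.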

The best bound on $c(t)$ known is of the form

$\displaystyle c(t) \ll \frac{1}{T^{-1}(1/t)^{c}}$

for some absolute constant $c > 0$ (e.g. $c = 1/10$ would work here). This bound is so poor because the proof goes via the Szemerédi regularity lemma, which is known by the work of Gowers to necessarily have tower-type dependencies in the constants.

The open question is whether one can obtain a bound of the form (**) in which $1/c(t)$ is replaced by a quantity which grows better in $1/t$, e.g. one which grows logarithmically or double logarithmically rather than like the inverse tower-exponential function. Such a bound would perhaps lead the way to improving the bounds in the triangle removal lemma; we now have many proofs of this lemma, but they all rely on one form or another of the regularity lemma and so inevitably have the tower-exponential type bounds present. The triangle removal lemma is also connected to many other problems, including property testing for graphs and Szemerédi’s theorem on arithmetic progressions (indeed, the triangle removal lemma implies the length three special case of Szemerédi’s theorem, i.e. Roth’s theorem), so progress on improving (**) may well lead to much better bounds in many other problems, as well as furnishing another tool beyond the regularity lemma with which to attack these problems.

Curiously, the work of Lovász and Szegedy implies that the question can be rephrased in a purely analytic fashion, without recourse to graphs. Let $f: [0,1]^2 \to [0,1]$ be a measurable symmetric function on the unit square, and consider the quantities

$\displaystyle t(f) := \int_0^1 \int_0^1 \int_0^1 f(x,y) f(y,z) f(z,x)\ dx\ dy\ dz$

and

$\displaystyle d(f) := \int_0^1 \int_0^1 \int_0^1 \int_0^1 f(x,y) f(y,z) f(z,x) f(x,w) f(y,w)\ dx\ dy\ dz\ dw.$

Any bound connecting $t(f)$ and $d(f)$ here is known to imply the same bound for triangle and diamond densities (with an error of o(1)), and vice versa. Thus, the question is now to establish the inequality $d(f) \gg t(f)^2 / \epsilon(t(f))$ for some civilised value of $\epsilon(t)$, which at present is only known to go to zero as $t \to 0$ at an inverse-tower-exponential rate (like $1/T^{-1}(1/t)^c$).
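The triangle and diamond functionals are pleasant to approximate by Monte Carlo, e.g. to watch the Cauchy-Schwarz analysis reappear in the analytic setting (a sketch; the function name and the two test kernels below are my own choices):

```python
import random

def analytic_densities(f, samples=100000, seed=2):
    """Monte Carlo estimates of
       t(f) = ∫∫∫ f(x,y) f(y,z) f(z,x) dx dy dz,
       d(f) = ∫∫∫∫ f(x,y) f(y,z) f(z,x) f(x,w) f(y,w) dx dy dz dw,
    for a symmetric measurable f on the unit square."""
    rng = random.Random(seed)
    t_sum = d_sum = 0.0
    for _ in range(samples):
        x, y, z, w = (rng.random() for _ in range(4))
        tri = f(x, y) * f(y, z) * f(z, x)
        t_sum += tri
        d_sum += tri * f(x, w) * f(y, w)
    return t_sum / samples, d_sum / samples
```

For the constant kernel $f \equiv 1/2$ (the limit of random graphs) one gets $t(f) = 1/8$ and $d(f) = 1/32$ exactly; for the two-block kernel equal to 1 when $x, y$ lie on the same side of $1/2$ and 0 otherwise (two disjoint cliques), one gets $t(f) \approx 1/4$ and $d(f) \approx 1/8$.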

[*Update*, Apr 4: wording changed to remove ambiguity about different meanings of .][*Update*, Apr 21: Thanks to Vlado Nikiforov for pointing out some additional references and related questions.]

## 21 comments


3 April, 2007 at 5:01 pm

Anonymous

Hi Terry,

Didn’t you use c(\beta) to denote two different things? You started by defining c(\beta) as the triangle-removal-lemma constant, but later say that a better bound for c(\beta) may improve the triangle removal lemma.

4 April, 2007 at 11:27 am

Terence Tao

You’re right; I’ve changed the wording in the post to try to clarify the point. The hope is that this problem is easier than improving the triangle removal lemma bound directly, but perhaps a solution to this problem will then lead the way to a new proof of triangle removal.

17 April, 2007 at 12:37 pm

Terence Tao

A small update: Yuval Peres has pointed out to me that there is a conjecture of Sidorenko which has a somewhat similar flavour. It asserts that given bipartite graphs $G$ and $H$, with bipartitions $A \cup B$ and $V_1 \cup V_2$ respectively (one should think of $G$ as large and $H$ as small), the number of copies of $H$ inside $G$ (mapping $V_1$ to $A$ and $V_2$ to $B$, and allowing repeated vertices) is at least $|A|^{|V_1|} |B|^{|V_2|} q^{|E(H)|}$, where $q$ is the edge density of $G$; this is the value attained asymptotically when $G$ is a large random graph. This conjecture is known to be true for instance when $H$ is a cycle, a path, or a star, and for a couple of other graphs. It also has an analytic formulation similar to the one given above for the triangle-diamond problem.

17 April, 2007 at 1:15 pm

Chris Hillman

Changing topics (but perhaps not leaving the subject of sharp inequalities), I am curious to know whether other fans of “counting with categories” a la Baez and Dolan, who are aware of Cameron’s conjectures, have seen a recent article of Pouzet.

(For those who have no idea what I am talking about: I claim one can rewrite Wilf, Generatingfunctionology in terms of “structors” or combinatorial species, a kind of “categorification” of generating functions, along the lines one would guess after reading Baez and Dolan’s paper. This is “merely” a change of perspective but I think it suggests interesting new questions. See the undergraduate text by Cameron, Permutation Groups, Cambridge University Press, LMS student texts vol. 45, 1999, for a generalization of Polya enumeration which lies at the heart of this program, plus a very nice introduction to random graphs. It would be interesting to know more about the “density of p-cycles in a random Schreier digraph”, if one can make sense of that notion. Schreier digraphs generalize Cayley graphs to general actions by a finite group.)

17 April, 2007 at 3:43 pm

Terence Tao

Dear Chris,

This is quite interesting. Certainly categorification is useful in enumerative combinatorics (being a souped up version of bijective proofs, as Baez and Dolan note), but as far as I am aware it has not really played much of a role in extremal combinatorics. In particular, the Cauchy-Schwarz inequality, arguably the most basic of the fundamental inequalities in the subject, does not appear to have a categorified version. A more concrete problem: suppose $f: X \to Y$ is a map from one finite set to another. Then the relative product $X \times_Y X := \{ (x,x'): f(x) = f(x') \}$ is known to have cardinality at least $|X|^2/|Y|$, by Cauchy-Schwarz. In other words, $|X \times_Y X| \times |Y|$ is at least as large as $|X|^2$. Is there a categorical proof of this fact (which, incidentally, is basically equivalent to Sidorenko’s conjecture for paths of length 2)? See also Tim Gowers’ discussion of the Cauchy-Schwarz inequality.
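For concreteness, the inequality in this comment, $|X \times_Y X| \cdot |Y| \geq |X|^2$ for a map $f: X \to Y$, can be checked mechanically on small examples (a sketch; names are mine):

```python
from collections import Counter

def relative_product_size(f, X):
    """|X x_Y X| = #{(x, x') : f(x) = f(x')}: the sum of the squares
    of the fibre sizes of f over X."""
    fibres = Counter(f(x) for x in X)
    return sum(a * a for a in fibres.values())
```

With $X = \{0,\dots,9\}$ and $f(x) = x \bmod 3$ the fibres have sizes 4, 3, 3, so the relative product has size $34 \geq 100/3$; equality holds exactly when all fibres have the same size, which is the Cauchy-Schwarz equality case.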

17 April, 2007 at 4:43 pm

Greg Kuperberg

It may not count as categorification, but I can think of an example in extremal combinatorics where it pays to enhance bijections, rather than to erase them as in the example of the fibered Cartesian square of a set. Namely, Peter McMullen conjectured the optimal inequalities for the f-vector (vector of face numbers) of a simple polytope; the conjecture was that the associated h-vector is unimodal. Richard Stanley proved this conjecture by building homology groups (the homology of a toric variety) whose dimension sequence is the h-vector, and then showing that the homology groups form a representation of the Lie algebra sl(2). Thus, the h-vector is unimodal because it is a weight diagram.

Another example of proof by enhancement of combinatorics is Lovasz’s celebrated proof of the Kneser conjecture, using a topological method to bound the chromatic numbers of graphs.

Perhaps what can be said about these examples is that they are fairly exacting bounds. Most, or perhaps all, of the inequalities become elementary if you relax them moderately. This is unlike Ramsey theory, where in a sense the inequalities cut far deeper than any elementary bound. On the other hand, I am not sure whether this is really a robust distinction.

17 April, 2007 at 7:57 pm

Terence Tao

Dear Greg,

These are good examples, which I wasn’t previously aware of; I concede that there are several important non-analytic (and non-algorithmic) techniques in extremal combinatorics, at least some of which should be categorifiable :-) . Certainly algebraic or geometric tools are good for proving monotonicity or positivity properties, which are of course an important approach to extremal combinatorics problems. There is also the “polynomial method” in extremal combinatorics, which is based on various generalisations of the basic fact “a polynomial of one variable of degree n can have at most n zeroes”; a particularly useful such generalisation is the combinatorial Nullstellensatz of Alon.

I hadn’t seen Lovasz’s argument before – it is very beautiful – but topological arguments have appeared in extremal combinatorics in some other contexts. For instance, the crossing number inequality is really useful in incidence geometry and in arithmetic combinatorics, and is ultimately proven by a combination of Euler’s formula and the probabilistic method.

18 April, 2007 at 8:58 am

Chris Hillman

Well, I seem to have committed the cardinal sin of posting before lurking, so I don’t know if you (plural) have said this here before! I hope you’ll all bear with me while I try to catch up. (Also, I haven’t tried to use pseudolatex in wordpress before, but I’ll take a guess and hope for the best.)

IIRC, what Terry called “relative product” and Greg called “fibered Cartesian square of a set” is called “kernel congruence” in Mac Lane, Categories for the Working Mathematician, Springer, 1971, and it is a special case of a “pullback square”. As a relation on X, the kernel congruence does indeed correspond to the kernel when f is a group epimorphism, say, in which case the preimages all have the same size and equality holds.

More generally, drawing a picture of the kernel congruence for a mapping from a finite set X to a finite set Y as an |X| by |X| by |Y| “box” made up of unit cubes, in which the ordered pairs in the kernel congruence are darkened, and where the elements of X are listed in the order first preimage, second preimage, and so on, the darkened cubes form square “briefcases” one layer thick, and the number of these cubes is seen to be the sum, over the image of f, of the squares of the sizes of the preimages. So in the CS inequality

(\sum_j a_j b_j)^2 \le (\sum_j a_j^2)(\sum_j b_j^2),

let a_j be the sizes of the preimages and let all the b_j = 1. This gives |X|^2 \le |Y| \cdot |X \times_Y X|, with equality iff (a_j) and (b_j) are linearly dependent, i.e. iff the preimages all have the same size, where the last bit is I take it the interesting part. That’s what you have in mind by “the CS argument”, right?

18 April, 2007 at 9:12 am

Terence Tao

Dear Chris,

Sorry about the latex issues; it seems that the wordpress site doesn’t let one preview comments. But I can manually repair the latex (in this case, the editor didn’t like the use of primes, x’, or of \operatorname{}). Worst case, just type in pseudolatex and I can clean it up as best I can.

That is indeed the Cauchy-Schwarz argument I had in mind; the question is whether there is a “categorical” version of this proof, in which the numbers a_j, b_j are replaced by richer objects such as sets. Incidentally, there is now a parallel discussion of this topic over at the n-category cafe.

18 April, 2007 at 1:02 pm

Chris Hillman

Thanks for fixing my markup! And for the link to n-category cafe, although unfortunately, I don’t have MathML functionality so I can hardly ever read anything there. Your blog seems to work much better for me.

I just read your ICM talk and am feeling suitably inspired. Or distracted— I seem to be getting farther OT with every word! A few comments/queries:

1. “Solutions to highly nonlinear systems should behave randomly after accounting for conservation laws, etc.” Would it be fair to guess that the KdV would be “modestly nonlinear” in this sense? (“Too many” conservation laws?)

What about the BKL conjecture in general relativity? (I don’t know of an adequate exposition— the Wikipedia article is seriously misleading— but one version says something to the effect that curvature singularities in generic vacuum solutions of the Einstein field equation resemble the strong spacelike curvature singularity which occurs in the so-called Mixmaster universe. The kicker here is that approach to the Mixmaster singularity is “regulated” by a simple continued fraction expansion!

BTW, I recommend the article by Lagarias in The Unreasonable Effectiveness of Number Theory, ed. by Stefan A. Burr, AMS, 1992 as a wonderful survey of how number theory is relevant to dynamical systems.)

2. The difficulty of “assessing” (much less “navigating”) large function spaces, such as solution spaces of nonlinear PDEs: I have found the paper by S.T.C. Siklos, “Counting solutions of Einstein’s equations”, Class. Quantum Grav. 13 (1996): 1931-1948 to be very intriguing— and perfectly maddening. If I can ever say something sensible about this I plan to post it in http://www.arsmathematica.net/ …

3. I just came across http://tosio.math.toronto.edu/wiki/index.php/Main_Page and I want one of those! (A pywiki.) I have felt for some time that the future of wikis is this: authorship tools for research groups or even single authors. So I am very glad to see this well-disciplined example of an “open notebook”!

I guess you know the website http://eqworld.ipmnet.ru/ ? As it happens, I have computed the point symmetry group of a whole bunch of PDEs, including many mentioned in these places, and AFAIK there is no website collecting this information.

19 April, 2007 at 11:29 am

Terence Tao

Dear Chris,

Thanks for the comments. I have only very fuzzy answers to some of your questions, but yes, I would consider KdV as being very close to linear in behaviour (indeed, with regard to suitable action-angle coordinates, or via the inverse scattering transform, the evolution is, formally, perfectly linear). In particular one doesn’t see turbulent behaviour, such as cascades of energy into increasingly higher frequency scales, which seems to plague other very nonlinear equations and which seems to generate a lot of “mixing” in the dynamics.

For perturbations of integrable systems one of course has KAM theory on one hand, so that there is a positive measure subset of phase space in which the dynamics continues to behave like an integrable system, and Arnold diffusion on the other, where one spreads out to cover most of the energy surface. I suppose this is some sort of intermediate regime between integrable and strongly nonlinear.

I am not an expert on mathematical relativity, though I can confidently say that the rigorous understanding of singularity formation for the vacuum Einstein equations, while it is making good progress, is still very far away from the point where we can say or even conjecture any really good structural properties of a typical blowup. If one makes some strong symmetry reductions then one might be able to say something more (cf. Christodoulou’s proof of the cosmic censorship conjecture assuming spherical symmetry) but then it is arguable whether one is now considering “generic” behaviour. It is cute to see that number theory plays a role in these special solutions, though in some other contexts (e.g. periodic dispersive equations) it seems that many of these number-theoretic structures are fragile (e.g. they go away if a torus is replaced by a slightly perturbed domain).

Thanks for the link to eqworld, which I hadn’t seen before. It is good to see many different types of experiments regarding how to make good mathematical use of the Web; I feel that this is still an under-utilised medium in our subject, though it is not entirely clear how to systematically exploit it more effectively.

24 April, 2007 at 10:36 pm

Ars Mathematica » Blog Archive » Combinatorial Nullstellensatz

[…] this thread on his weblog, Terry Tao mentioned an exciting paper by Noga Alon. The paper explains a result that […]

25 April, 2007 at 11:56 am

Vlado Nikiforov

Dear Terry,

In another tradition, diamonds are called “books of size 2”. In general, a “book of size k” is a set of k triangles sharing a common edge. The size of the maximal book in a graph is called its booksize.

Since 1962 Erdős has asked many questions about the booksize of graphs. Some of them are answered in arXiv.org:math/0405080.

More to the point is the following open question of Erdős and Rothschild (e.g., Chung and Graham, “Erdős on graphs”, p. 48):

What is the minimal booksize of a graph of order n with edge density c>0 if every edge is in a triangle?

Using the Regularity Lemma, Szemerédi showed that this value goes to infinity with n. The upper bound due to Erdős and to Alon and Trotter is proportional to the square root of n.

To see the importance of graphs with every edge in a triangle, note that the following statement, together with the Solymosi projection, implies the existence of a three-term arithmetic progression in dense subsets of the integers:

Let a graph of order n have edge density c>0, with every edge in a triangle. Then, if n is sufficiently large, the graph contains a diamond.

This statement seems weaker than the triangle removal lemma. Is this really so?

Also it seems that the functions involved in the problems of Trevisan and of Erdős-Rothschild are related. Is there any clear relationship between them?

25 April, 2007 at 1:54 pm

Jonathan Vos PostOne can fit 5 unit-edge regular tetrahedra in R^3 sharing a common edge, but no more. This might be called a 3-book of size 5. In hyperbolic space, one can have more tetrahedra share an edge on polytetrahedra.

Polytetrahedra relate loosely to tetrahedralization, and tightly to the structure of proteins and glasses. Some results are known for the Euclidean geometry, usefully in projecting from R^4 to R^3. But are results for 3-books known, equivalent to those for books as previously defined in the graph-theory sense (where the “steric hindrance” of R^3 is not a constraint)?

Search on the term “polytetrahedra” at the Online Encyclopedia of Integer Sequences for an extremely elementary and yet novel combinatorial enumeration problem I’ve posed and only scratched the surface of answering.
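[The claim above about five (but not six) unit regular tetrahedra around a common edge reduces to the dihedral angle $\arccos(1/3) \approx 70.53°$ of the regular tetrahedron, which can be checked in a few lines (my own sketch):]

```python
from math import acos, degrees

# Dihedral angle of a regular tetrahedron along any edge: arccos(1/3).
dihedral = degrees(acos(1 / 3))

# Five copies leave a small angular gap (about 7.4 degrees); six do not fit.
fits_five = 5 * dihedral < 360
fits_six = 6 * dihedral < 360
```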

31 August, 2008 at 8:55 pm

The correspondence principle and finitary ergodic theory « What’s new

[…] in passing that it actually implies the k=3 case of Szemerédi’s theorem. (See also my previous blog posts on related topics.) It turns out that this result can be deduced from some […]

13 May, 2009 at 6:03 pm

The Triangle Removal Lemma « in theory

[…] wrote a fascinating post putting this question in the broad context of extremal questions about “local patterns” […]

24 May, 2009 at 4:31 pm

Mathinfant

Dear Professor Tao:

I am a little confused about the triangle upper bound. Let $N$ denote the number of triangles; then $\mathbf{E} N = \binom{n}{3} q^3$. It seems that the average number of triangles is “larger” than the above upper bound. Can anyone tell me what’s wrong in my estimate?

Since I have no mathematical background, please excuse my naive question.

Thank you for your response.

24 May, 2009 at 4:37 pm

Mathinfant

Sorry for the typing errors above. I don’t understand how to delete the above comment, so I rewrite it here.

The expectation is $\binom{n}{3} q^3$, so one gets that the average triangle density is about $q^3$. It seems larger than $q^{3/2}$.

Thank you all again.

24 May, 2009 at 4:58 pm

Terence Tao

Dear Mathinfant,

$q^3$ is smaller than $q^{3/2}$ for $0 < q < 1$.

30 May, 2009 at 1:55 pm

Structure and Randomness: Pages from Year One of a Mathematical Blog « Abner’s Postgraduate Days

[…] Open question: triangle and diamond densities in large dense graphs […]

28 December, 2014 at 5:20 pm

Determinant of a pullback diagram | CL-UAT

[…] Terence Tao observes in one of the comments on his weblog that the product of |PB| and |Y| is always greater than or equal to |X|^2. (This is an application of the Cauchy-Schwarz inequality.) This fact may be rephrased as follows: If we ignore in the above diagram all arrows and replace the sets by their cardinalities, we obtain a 2×2 matrix with a non-negative determinant. […]