This is a continuation of several threads from Tim Gowers’ massively collaborative mathematics project, “A combinatorial approach to density Hales-Jewett”, which I discussed in my previous post.

For any positive integer n, let $[3]^n$ be the set of strings of length n consisting of 1s, 2s, and 3s, and define a combinatorial line to be a triple of such strings arising by taking a string in $\{1,2,3,x\}^n$ with at least one wildcard x, and substituting x=1, x=2, x=3 in that string (e.g. xx1x3 would give the combinatorial line $\{11113, 22123, 33133\}$). Call a set of strings *line-free* if it contains no combinatorial lines, and let $c_n$ be the size of the largest set $A \subset [3]^n$ which is line-free. We then have

Density Hales-Jewett theorem (k=3): $c_n = o(3^n)$, i.e. $\lim_{n \to \infty} c_n / 3^n = 0$.

This theorem implies several other important results, most notably Roth’s theorem on length three progressions (in an arbitrary abelian group!) and also the corners theorem of Ajtai and Szemeredi (again in an arbitrary abelian group).

Here are some of the questions we are initially exploring in this thread (and which I will comment further on below the fold):

- What are the best upper and lower bounds for $c_n$ for small n? Currently we know $c_0 = 1$, $c_1 = 2$, $c_2 = 6$, $c_3 = 18$, $c_4 = 52$, with some partial results for higher n. We also have some relationships between different $c_n$.
- What do extremal or near-extremal sets (i.e. line-free sets with cardinality close to $c_n$) look like?
- What are some good constructions of line-free sets that lead to asymptotic lower bounds on $c_n$? The best asymptotic lower bound we have currently is $c_n \gg 3^n \exp(-O(\sqrt{\log n}))$.
- Can these methods extend to other related problems, such as the Moser cube problem or the capset problem?
- How feasible is it to extend the existing combinatorial proofs of Roth’s theorem (or the corners theorem), in particular the arguments of Szemeredi, to the Density Hales-Jewett theorem?

But I imagine that further threads might develop in the course of the discussion.

As with the rest of the project, this is supposed to be an open collaboration: please feel free to pose a question or a comment, even if (or especially if) it is just barely non-trivial. (For more on the “rules of the game”, see this post.)

— I. $c_n$ for small n —

Because the Cartesian product of two line-free sets is line-free, we have a lower bound

$c_{n+m} \geq c_n c_m$ (1)

for all $n, m \geq 0$, although this seems to be an inferior bound in practice.

There is a trivial bound

$c_{n+1} \leq 3 c_n$ (2)

which comes from the observation that any line-free subset of $[3]^{n+1}$ can be split into three slices, each of which is essentially a line-free subset of $[3]^n$. In particular, since $c_1 = 2$, we see that

$c_n \leq 2 \times 3^{n-1}$ (3)

for $n \geq 1$.

If we identify $[3]^n$ with $({\bf Z}/3{\bf Z})^n$ and consider the set

$D_n := \{ x \in ({\bf Z}/3{\bf Z})^n : x_1 + \dots + x_n \neq 0 \}$, (4)

thus $|D_n| = 2 \times 3^{n-1}$. Observe that a combinatorial line with a, b, c, k 1s, 2s, 3s, and wildcards respectively lies in $D_n$ if and only if k is a multiple of 3, and a is not equal to b mod 3. In particular, $D_n$ is line-free for $n \leq 3$, thus matching the upper bound (3) and concluding that

$c_n = 2 \times 3^{n-1}$ for all $n \leq 3$. (5)

For n=4, the only combinatorial lines remaining in $D_4$ are $\{1111, 1222, 1333\}$, $\{2111, 2222, 2333\}$, and their permutations (moving the fixed digit to other positions). Thus (as observed in Neylon.83) the set $D_4 \backslash \{1111, 2222\}$ is line-free, leading to the lower bound $c_4 \geq 52$.

This bound was complemented with the upper bound $c_4 \leq 52$ in Jakobsen.90. Thus $c_4 = 52$.

For $n = 5, 6$, the best lower bounds currently are $c_5 \geq 150$ and $c_6 \geq 450$ (Neylon.83), and the best upper bounds are $c_5 \leq 154$ and $c_6 \leq 462$.

No attempt at getting reasonable bounds for $c_7$, $c_8$, etc. appears in the previous threads.
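For small n, all of the claims above can be checked directly by brute force. The following sketch (the function names are mine, not from the threads) enumerates combinatorial lines and verifies the constructions just described — that $D_3$ is line-free with 18 elements, and that $D_4$ minus 1111 and 2222 is line-free with 52 elements:

```python
from itertools import product

def combinatorial_lines(n):
    """Yield each combinatorial line in [3]^n as a triple of strings:
    take a template over {1,2,3,x} with at least one wildcard x and
    substitute x = 1, 2, 3."""
    for template in product('123x', repeat=n):
        if 'x' in template:
            yield tuple(''.join(v if ch == 'x' else ch for ch in template)
                        for v in '123')

def is_line_free(A):
    """True if the set of strings A contains no combinatorial line."""
    A = set(A)
    n = len(next(iter(A)))
    return not any(all(p in A for p in line)
                   for line in combinatorial_lines(n))

def D(n):
    """The set D_n of (4): strings whose digit sum is nonzero mod 3
    (identifying the digit 3 with 0, this is #1s + 2*#2s != 0 mod 3)."""
    return [''.join(s) for s in product('123', repeat=n)
            if (s.count('1') + 2 * s.count('2')) % 3 != 0]
```

For instance, `is_line_free(D(3))` holds with `len(D(3)) == 18`, while `D(4)` is not line-free, and deleting 1111 and 2222 from it leaves a line-free set of 52 strings.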

**Question I.A:** Can we improve the upper and lower bounds on $c_n$ for small n?

**Question I.B:** Is there some reasonably efficient way to automate this process?

**Question I.C:** What do the extremal line-free sets (sets of size close to $c_n$) look like?

At some point we should submit this sequence to the OEIS.

— II. Lower bounds on $c_n$ for large n —

For any integers a, b, c adding up to n, let $\Gamma_{a,b,c} \subset [3]^n$ be the set of all strings with a 1s, b 2s, and c 3s. Observe that if $B$ is any set of triples (a,b,c) that avoids equilateral triangles $(a+r,b,c), (a,b+r,c), (a,b,c+r)$ for $r > 0$, then the set $\Gamma_B := \bigcup_{(a,b,c) \in B} \Gamma_{a,b,c}$ is line-free. In particular, if we choose B to be the set of all triples such that $a + 2b$ lies in a set free of length three arithmetic progressions (e.g. a Behrend set), then $\Gamma_B$ is line-free. This seems to give a lower bound

$c_n \gg 3^n \exp( - O( \sqrt{\log n} ) )$. (6)
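As a sanity check of this construction, here is a sketch (my code, not from the thread) that swaps the Behrend set for a simple greedy 3-AP-free set — the Stanley sequence, which is far sparser than Behrend's construction, so the resulting numbers are much weaker than (6) — and verifies that the chosen B contains no equilateral triangle before totalling the weights:

```python
from math import comb

def ap3_free(m):
    """Greedy 3-AP-free subset of {0,...,m-1} (the Stanley sequence),
    a much sparser stand-in for a Behrend set."""
    S = []
    for x in range(m):
        # x completes an AP y, z, x (y < z < x) exactly when 2z - x in S
        if all(2 * z - x not in S for z in S):
            S.append(x)
    return set(S)

def slice_lower_bound(n):
    """Total weight of Gamma_B for B = {(a,b,c) : a + 2b in S}, which
    is line-free by the argument above; the triangle check below is a
    redundant safety net."""
    S = ap3_free(2 * n + 1)  # a + 2b ranges over 0..2n
    B = {(a, b, n - a - b) for a in range(n + 1)
         for b in range(n - a + 1) if a + 2 * b in S}
    for r in range(1, n + 1):            # verify: no equilateral triangle
        for a in range(n - r + 1):
            for b in range(n - r - a + 1):
                c = n - r - a - b
                assert not ((a + r, b, c) in B and (a, b + r, c) in B
                            and (a, b, c + r) in B)
    return sum(comb(n, a) * comb(n - a, b) for (a, b, c) in B)
```

With this weak AP-free set the bound already happens to be optimal at n=2 (`slice_lower_bound(2)` returns 6), but for larger n it lags well behind the records discussed in the comments.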

**Question II.A:** Using the Elkin bound on Behrend sets, what is the precise lower bound one gets? (Solymosi.46.5 notes that we should restrict the Behrend set to scale $\sqrt{n}$ rather than n).

**Question II.B:** What does one get for higher k than 3? Presumably one would use the O’Bryant bound (O’Bryant.126).

**Question II.C:** Do the examples from Section I extend to give competitive lower bounds for large n?

**Question II.D:** Are there better ways to avoid triangles or corners in two dimensions than simply lifting up the Behrend example from one dimension?

— III. Miscellaneous facts about $c_n$ —

Using the colouring Hales-Jewett theorem, one can show that for every n, one has $c_{n+m} \leq (3^m - 1) c_n$ for sufficiently large m, improving (2) very slightly; see (Tao.85).

**Question III.A:** Anything else we can say about the $c_n$? For instance, is there a relation to the Hales-Jewett numbers HJ(3,r) (defined as the first n such that every r-colouring of $[3]^n$ contains a monochromatic combinatorial line)?

Note: as pointed out in Gasarch.45.5, Hindman and Tressler have recently established the first nontrivial Hales-Jewett numbers, in particular that $HJ(3,2) = 4$.

— IV. Related problems —

Let $c'_n$ denote the size of the largest subset of $[3]^n$ which does not contain any geometric line (which is the same as a combinatorial line, but also allows a second wildcard y which goes from 3 to 1 whilst x goes from 1 to 3, e.g. xx2yy gives the geometric line $\{11233, 22222, 33211\}$). The Moser cube problem is to understand the behaviour of $c'_n$. The first few values are (see OEIS A003142):

$c'_0 = 1, c'_1 = 2, c'_2 = 6, c'_3 = 16, c'_4 = 43$. (7)

It is clear that $c'_n \leq c_n$.

Elsholtz.43 proposed modifying the construction of (6) to give a similar lower bound on $c'_n$, but a difficulty was found in Solymosi.48. Currently, the best known asymptotic bounds for $c'_n$ are

$3^n / \sqrt{n} \ll c'_n = o(3^n)$. (8)

**Question IV.A**. Can we improve the lower bound in (8)?

**Question IV.B.** Can we get good bounds on $c'_n$ for small n?

Define $c''_n$ to be the largest cardinality of a subset of $({\bf Z}/3{\bf Z})^n$ which contains no algebraic lines, defined as a triple $x, x+r, x+2r$ where $r \neq 0$; such sets are known as capsets. Clearly $c''_n \leq c'_n$. As noted in my previous blog post, the best asymptotic bounds are

$2.2174^n \ll c''_n \ll 3^n / n$. (9)

**Question IV.C.** Can we improve on these bounds in any way?

**Question IV.D.** What are the first few values of $c''_n$? Brute force calculation reveals $c''_1 = 2$, $c''_2 = 4$, $c''_3 = 9$, $c''_4 = 20$. (Presumably this sequence is also in the OEIS.)
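The smallest of these values can be confirmed in a few lines. This sketch (my code) brute-forces the largest capset in $({\bf Z}/3{\bf Z})^n$, using the fact that in characteristic 3, three distinct points form an algebraic line exactly when they sum to zero coordinatewise:

```python
from itertools import combinations, product

def is_capset(S):
    """No three distinct points of S sum to zero coordinatewise mod 3
    (equivalently: S contains no algebraic line x, x+r, x+2r)."""
    return not any(all((a + b + c) % 3 == 0 for a, b, c in zip(x, y, z))
                   for x, y, z in combinations(S, 3))

def max_capset(n):
    """Largest capset in (Z/3)^n by brute force over all subsets
    (only feasible for n <= 2)."""
    points = list(product(range(3), repeat=n))
    for size in range(len(points), 0, -1):
        if any(is_capset(S) for S in combinations(points, size)):
            return size
```

For example, `max_capset(2)` returns 4; beyond n = 2 the subset search explodes and smarter methods (or the published values) are needed.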

[*Update*, Feb 7: I will now maintain a running table of the current “world records” for $c_n$, $c'_n$, $c''_n$ for small n, formed by combining all the bounds already mentioned in this post and comments.]

[*Update*, Feb 8: A more complete (and up-to-date) spreadsheet can now be found here.]

n | $c_n$ | $c'_n$ | $c''_n$
--- | --- | --- | ---
0 | 1 | 1 | 1
1 | 2 | 2 | 2
2 | 6 | 6 | 4
3 | 18 | 16 | 9
4 | 52 | 43 | 20
5 | [150,154] | [96,129] | 45
6 | [450,462] | [258,387] | 112
7 | [1302,1386] | [688,1161] | [236,292]
8 | [3780,4158] | [1849,3483] | [472,773]
9 | [11340,12474] | [4128,10449] | [1008,2075]
10 | [32864,37422] | [11094,31347] | [2240,5632]
11 | [95964,112266] | [29584,94041] | [5040,15425]
12 | [287892,336798] | [79507,282123] | [12544,42569]
13 | [837850,1010394] | [175504,846369] | [26432,118237]
14 | [2458950,3031182] | [477042,2539107] | [52864,330222]
15 | [7376850,9093946] | [1272112,7617321] | [112896,926687]

- green – obtained via the inequality (1)
- orange – obtained via the inequality (2)
- lavender – uses the inequality of Edel (see comment 228)

— V. Szemeredi’s proof of Roth’s theorem —

Szemeredi provided a short proof of Roth’s theorem by purely combinatorial means, and also famously proved the full case of Szemeredi’s theorem by a much more intricate combinatorial argument. Both arguments are based on the density increment method. In the setting of $[3]^n$, the density increment method revolves around the critical density $\delta := \lim_{n \to \infty} c_n / 3^n$, which exists by (2); the density Hales-Jewett theorem is precisely the statement that $\delta$ vanishes, so suppose for contradiction that $\delta$ is positive.

Then one can find a line-free set A of density at least $\delta - o(1)$ such that A has density at most $\delta + o(1)$ on every m-dimensional subspace of $[3]^n$ (and one can complement this upper bound with a matching lower bound for “most” subspaces, see Tao.100). One can also regularise the density on other sets than subspaces (see Tao.121). So, basically, the moment one gets a significant density increment on a non-trivial subspace, one is done (see Tao.120, O’Donnell.122, Tao.127).

Szemeredi’s proof of Roth’s theorem proceeds by first showing that a dense set A contains a large cube Q, and thus (if it is free of algebraic lines) must completely avoid the set $\{2q - q' : q, q' \in Q, q \neq q'\}$. This set contains (most of) lots of parallel subspaces and so squeezes A to have higher density on some other subspace, closing the argument. (See Tao.86, Tao.118, Tao.131.)

**Question V.A**: Is there some way to adapt this argument to density Hales-Jewett, or at least to the corners problem? The sparsity of lines is a serious difficulty, see Tao.118, Solymosi.135.

One can at least generate lots of cubes inside A without difficulty (Tao.102, Solymosi.135), lots of combinatorial lines with two elements inside A (Tao.130, Bukh.132, O’Donnell.133, Solymosi.135), and many unorganised subspaces in the complement of A (Tao.118, Solymosi.135), but this does not seem to be enough as yet.

**Question V.B:** What about using the techniques from Szemeredi’s big paper on Szemeredi’s theorem? (The immediate issue is that there is a key point where one needs to regularise a graph, and the relevant graph in HJ or even for corners is far too sparse; I’ll try to clarify this at some point.)

— A note on comments —

As in the first thread, please number your comments (starting with 200, then 201, etc.) and provide a title, to assist with following the comments. (Presumably, in future projects of this type, we will use a platform that allows for comment threading; see this post for further discussion.)

## 101 comments


5 February, 2009 at 6:43 pm

Lior

200. Bounds for $c''_n$

Regarding the capset problem, here is a review of known values up to $c''_6$.

6 February, 2009 at 2:20 am

Tyler Neylon

201. Lower bounds for $c_n$

Following Jason Dyer’s suggestion, I empirically found a few more lower bounds for $c_n$ using a greedy algorithm on a bipartite graph representing combinatorial lines in $[3]^n$. Starting with $n = 3$, they are: 18, 50, 140, 420, 1155, 3346, 11340, 32676.

The greedy algorithm fails to reach an optimal result beginning with $n = 4$, which has known value 52 (as noted above).

It was also interesting to observe that the resulting graph was in some sense very skewed – a few points are in many more lines than most of the points. More specifically, if $x \in \Gamma_{a,b,c}$, then $x$ is in $2^a + 2^b + 2^c - 3$ combinatorial lines. Intuitively, most points are in the central slices, which are the least connected, and these slices “mostly” don’t share lines, hinting at why we need very dense sets to hit every line.

The source code – it’s in Python – is available here:

http://thetangentspace.com/wiki/Hales-Jewett_Theorem
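The skew count above is cheap to verify by enumeration; the sketch below (my code, not Tyler’s — his is at the link above) counts the lines through a given string and checks it against the formula $2^a + 2^b + 2^c - 3$:

```python
from itertools import product

def lines_through(x):
    """Count the combinatorial lines containing string x by brute
    force over all templates over {1,2,3,*} ('*' is the wildcard)."""
    n = len(x)
    count = 0
    for template in product('123*', repeat=n):
        if '*' not in template:
            continue
        # substituting * = 1, 2, 3 gives the three points of the line
        points = [''.join(v if ch == '*' else ch for ch in template)
                  for v in '123']
        if x in points:
            count += 1
    return count
```

The formula follows because a line through x is determined by a nonempty set of wildcard positions on which x is constant: there are $2^a - 1$ choices among the 1-positions, and similarly for the 2s and 3s.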

6 February, 2009 at 3:46 am

Tyler Neylon

202. Lower bounds for $c_5$

I made a mistake in my argument in comment 83 on the original post. In that argument, if you remove the indicated points to block the lines in $D_5$, the remaining set will still include a combinatorial line. Sorry about the mistake.

We still know, at least, that $c_5 \geq 150$, using the last post.

6 February, 2009 at 7:30 am

Jason Dyer

203. Greedy algorithm

Tyler, my Python is rusty, but am I correct in assuming the code does not randomize the choice of a node to remove when there’s a tie? If that’s the case, that would be the next step.

Also, this brings up something related to Question I.B:

**Question I.B.1:** In what exact circumstances does the greedy algorithm fail; is there some larger structural concern besides corners and triangles?

**Question I.B.2:** Would it be possible to automate the program to avoid those circumstances?

6 February, 2009 at 4:16 pm

Tyler Neylon

204. Algorithms

Jason: Yes, that code is not randomized. I guess you might get better numbers by randomizing a bit, or by trying to improve on a first guess answer through some kind of residual graph technique.

Another thought is that finding a blocking set of minimal size can be cast as a hypergraph transversal problem. I’m a little confused about how hard this problem is. In [Lund, Yannakakis], they apparently show the problem to be NP-hard to approximate:

http://portal.acm.org/citation.cfm?doid=185675.306789

On the other hand, a quick scholar.google search brings up several papers that claim to have efficient algorithms. It probably helps that we have a 3-uniform hypergraph.

If finding a minimal hypergraph transversal is feasible, that might help a lot toward Question I.B. If the algorithm is simple enough, it might help generate more bounds for $c_n$. Since it explicitly finds minimal blocking sets, it is also finding the sets considered in Question I.C.

6 February, 2009 at 6:31 pm

Jason Dyer

205. Algorithms

The problem with any NP-completeness results is that they concern general cases. Even when restricting to 3-uniform hypergraphs, our graph is a lot more symmetrical than an arbitrary one. Surely there’s something there we can exploit?

Mathematically describing those symmetries might be an interesting exercise (although perhaps someone already has in the original 150+ comment thread).

6 February, 2009 at 7:17 pm

Terence Tao

206. Algorithms

One possible way to exploit symmetries is to work in (a,b,c) space rather than the cube $[3]^n$. Indeed, note that the set of all $(a,b,c)$ with $a, b, c \geq 0$ and $a + b + c = n$ forms a triangular lattice of side length $n$, and that each triple (a,b,c) comes with a “weight” $\frac{n!}{a!\,b!\,c!}$, which is the number of strings in $[3]^n$ with a 1s, b 2s, and c 3s.

Now pick some subset B of this lattice which contains no equilateral triangles $(a+r,b,c), (a,b+r,c), (a,b,c+r)$ with $r > 0$. Then the set $\Gamma_B := \bigcup_{(a,b,c) \in B} \Gamma_{a,b,c}$ has no combinatorial lines, and has cardinality equal to the total weight of all the elements of B. So one now has a two-dimensional problem: maximise the total weight of a subset of the triangular lattice with no equilateral triangles.

The best bounds for $c_n$ for small n all take this form. For instance, the example giving $c_4 \geq 52$ takes B to be the entire triangle except for $(0,0,4)$, $(1,1,2)$, $(2,2,0)$, $(3,0,1)$, $(0,3,1)$, $(4,0,0)$, $(0,4,0)$. It looks feasible to search for optimal Bs for the next few ns. Note that the set $D_n$ in the main post corresponds to excluding those $(a,b,c)$ for which $a \equiv b \bmod 3$; after excluding those, the only equilateral triangles remaining are those whose side length is a multiple of 3, and so the hypergraph splits into disconnected components based on the residues of a, b, c mod 3, which should simplify things significantly.
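This search is easy to mechanise. The sketch below (my code, following the recipe in this comment — a greedy heuristic, not an exhaustive search) starts from the triples with $a \not\equiv b \pmod 3$ and deletes the lightest vertex of each surviving equilateral triangle; for n = 4, 5, 6 it recovers the bounds 52, 150 and 450 quoted elsewhere in the thread:

```python
from math import comb

def weight(n, a, b):
    """Multinomial n!/(a! b! c!): number of strings with a 1s, b 2s."""
    return comb(n, a) * comb(n - a, b)

def find_triangle(B, n):
    """Return some equilateral triangle contained in B, or None."""
    for r in range(1, n + 1):
        for a in range(n - r + 1):
            for b in range(n - r - a + 1):
                c = n - r - a - b
                tri = ((a + r, b, c), (a, b + r, c), (a, b, c + r))
                if all(p in B for p in tri):
                    return tri
    return None

def greedy_206(n):
    """Start from B = {a != b mod 3}, break each remaining triangle by
    removing its lightest vertex, and return the total weight left."""
    B = {(a, b, n - a - b) for a in range(n + 1)
         for b in range(n - a + 1) if (a - b) % 3 != 0}
    while (tri := find_triangle(B, n)) is not None:
        B.discard(min(tri, key=lambda p: weight(n, p[0], p[1])))
    return sum(weight(n, a, b) for (a, b, c) in B)
```

There is no guarantee the greedy deletion is optimal in general; it just happens to match the hand computations for these small n.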

7 February, 2009 at 12:33 am

Sune Kristian Jakobsen

207. Lower bounds for $c_5$ and $c_6$

The algorithm in Tao.206 is very effective. I have found the following _without_ using a computer:

For $n = 5$, take $D_5$ and remove $\Gamma_{5,0,0}$, $\Gamma_{0,5,0}$, $\Gamma_{4,0,1}$ and $\Gamma_{0,4,1}$. This gives a set without triangles and with 150 elements. Thus $c_5 \geq 150$, and this bound cannot be improved using the algorithm in 206.

For $n = 6$, take $D_6$ and remove the permutations of $(0,1,5)$. This gives $c_6 \geq 450$, and again, this cannot be improved using the algorithm in 206.

7 February, 2009 at 12:40 am

none

208. Proof theoretic analysis

Could it be interesting to try to prove there is NO combinatorial proof of the theorem, sort of like Kirby and Paris proved there is no PA proof of Goodstein’s theorem? I’m guessing “combinatorial proof” means one codeable in primitive recursive arithmetic, whose proof theoretic ordinal is $\omega^\omega$. So we’d try to use the Hales-Jewett theorem to prove some kind of well-ordering on those point collections, which has order type larger than $\omega^\omega$, which most interesting orderings probably would have.

7 February, 2009 at 4:07 am

Sune Kristian Jakobsen

209. Lower bounds for $c_5$ and $c_6$

Correction to 207: When I said that the bound could not be improved using the algorithm from Tao.206, I assumed that triples with $a \equiv b \bmod 3$ had to be left out of the set. I don’t know if it can be improved without this assumption.

7 February, 2009 at 4:47 am

Sune Kristian Jakobsen

210. Algorithms

The idea in Tao.206 can only prove lower bounds for $c_n$. But perhaps it can be modified to prove the exact values of some $c_n$:

Let $A \subset [3]^n$ be a set without combinatorial lines. Now we want some inequality in the quantities $|A \cap \Gamma_{a,b,c}|$ that holds for every such $A$. More on this later.

Now, instead of giving each triple either the weight $\frac{n!}{a!\,b!\,c!}$ or 0, we give it an integer weight between 0 and $\frac{n!}{a!\,b!\,c!}$. Instead of looking for subsets of the triangular lattice with no equilateral triangles, we look for weightings where the weights of the points in each equilateral triangle obey the above inequality.

One example of such an inequality could be a bound on the sum of the two greatest of the three weights in each triangle. I don’t think this kind of inequality is strong enough, because we want to “remove” more than 1/3 of the total weight, but perhaps we can find some stronger inequalities.

7 February, 2009 at 5:06 am

DHJ — the triangle-removal approach « Gowers’s Weblog (pingback)

[…] of the ensuing discussion that are most directly related to this initial proposal. Two other posts, one hosted by Terence Tao on his blog, and the other here, take up alternative approaches that emerged during the discussion. […]

7 February, 2009 at 7:16 am

Jason Dyer

211. Sequence for $c_n$

Sune.207 leaves this sequence as the last one standing in the OEIS.

At our current rate of progress I suspect we can at least get an exact value for $c_5$, so I’d say wait a bit longer on sending our sequence in.

7 February, 2009 at 8:11 am

Jason Dyer

212. Algorithms

Here’s a summary of the different possible algorithm tracks.

1. Randomize the greedy algorithm, then run multiple times to see if any better lower bounds come out. If this doesn’t get any result that improves over Tao.206, that suggests that Tao.206 is optimal.

2. Implement Tao.206 with the dihedral group. As Sune.207 discovered this can be done well even by hand, so it might also be worth it to generate some graphics of lattices with the weights put in to let people try to solve as a puzzle — if anything in this project could be done by a non-expert, this is it.

3. Implement Tao.206 without the dihedral group. If no improvement is found then it would be worthwhile to attempt a proof that the dihedral group can *always* be removed.

4. Write a brute force confirmation algorithm that will leave no question, once a value of $c_n$ is found, that it’s exact.

7 February, 2009 at 8:56 am

Terence Tao

213. Upper and lower bounds

I added a table to the main post to reflect the progress so far on $c_n$, $c'_n$ and $c''_n$ for n up to 7. (I may expand the table up to n=10 or so at some later point.)

7 February, 2009 at 9:21 am

Jason Dyer

(Meta) Quick fix: you put 1115 when it’s 1155 for $c_7$.

7 February, 2009 at 9:28 am

Sune Kristian Jakobsen

214. Upper and lower bounds.

Shouldn’t the upper bound for $c_5$ be 156? And I think the next upper bounds should be higher too.

7 February, 2009 at 9:45 am

Terence Tao

Thanks for the corrections! I decided to go ahead and extend the table to n=10, though I am sure some of the bounds here could be optimised further.

7 February, 2009 at 10:13 am

Sune Kristian Jakobsen

215. Algorithm / Idea for bounding the quantities in 210

This is almost a copy of 340:

In 331 Gowers gives a proof that if $\mathcal{A} \subseteq \{0,1\}^n$ is an antichain, then $\sum_{A \in \mathcal{A}} \binom{n}{|A|}^{-1} \leq 1$. Using the same idea it is possible to show that if $\mathcal{B}$ is a family that intersects every maximal chain $\emptyset \subset \{1\} \subset \{1,2\} \subset \dots \subset \{1,\dots,n\}$, then $\sum_{B \in \mathcal{B}} \binom{n}{|B|}^{-1} \geq 1$. Notice that when the elements of $\mathcal{A}$ are only allowed to have one of two given sizes, these two theorems are equivalent. The latter theorem seems to be easier to generalize to the k=3 case, and it might be useful, at least in the 200-thread.

7 February, 2009 at 10:40 am

Sune Kristian Jakobsen

216. Algorithm.

This is almost the same as I suggested in 215, but simpler and weaker:

The chance that a random line with a 1s, b 2s, c 3s, and r wildcards has its first point (the one with the wildcards set to 1) in $A$ is $|A \cap \Gamma_{a+r,b,c}| / |\Gamma_{a+r,b,c}|$. Using this and the similar expressions for b and c we get

$\frac{|A \cap \Gamma_{a+r,b,c}|}{|\Gamma_{a+r,b,c}|} + \frac{|A \cap \Gamma_{a,b+r,c}|}{|\Gamma_{a,b+r,c}|} + \frac{|A \cap \Gamma_{a,b,c+r}|}{|\Gamma_{a,b,c+r}|} \leq 2$,

if $A$ doesn’t contain any lines. Still, I don’t think this inequality is strong enough. I think something stronger is true when the three intersections are all non-empty.
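An inequality of this kind can be sanity-checked numerically on a known line-free set. In the sketch below (my code), for the 52-element line-free subset of $[3]^4$ from the main post, the maximum over all signatures (a, b, c, r) of the sum of the three slice densities turns out to be exactly 2:

```python
from fractions import Fraction
from itertools import product

n = 4
# the 52-element line-free set in [3]^4: D_4 minus {1111, 2222}
A = {''.join(s) for s in product('123', repeat=n)
     if (s.count('1') + 2 * s.count('2')) % 3 != 0} - {'1111', '2222'}

def density(a, b):
    """Density of A inside the slice of strings with a 1s and b 2s."""
    slice_ = [''.join(s) for s in product('123', repeat=n)
              if s.count('1') == a and s.count('2') == b]
    return Fraction(sum(s in A for s in slice_), len(slice_))

# for every signature (a, b, c, r) with a+b+c+r = n and r >= 1,
# sum the densities of the three slices met by lines of that signature
worst = max(density(a + r, b) + density(a, b + r) + density(a, b)
            for r in range(1, n + 1)
            for a in range(n - r + 1)
            for b in range(n - r - a + 1))
```

The bound 2 follows from the uniform-marginal argument: a random line of a given signature has each of its three points uniformly distributed in the corresponding slice, and a line-free set contains at most two points of any line.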

7 February, 2009 at 10:43 am

Frankl-Rodl’s Theorem and Variations on the Cap Set Problem: A Recent Research Project with Roy Meshulam (A) « Combinatorics and more (pingback)

[…] to the density Hales-Jewett project discussed over Gowers’s blog (and even more to this thread at Tao’s blog), I decided to go ahead with […]

7 February, 2009 at 1:54 pm

Michael Peake

217. Lower bounds for c_n

Following Jakobsen.207, who used Tao.206:

I got a lower bound for c_7 of 1302: Remove {016,106,052,502,151,511,160,610} from D_7.

and a lower bound for c_8 of 3780: Remove {017,107,026,206,125,215,071,701,161,611,080,800,260,620} from D_8

7 February, 2009 at 9:26 pm

Michael Peake

218. Lower bounds for c_n

It is curious that the current lower bounds for c_2, c_5 and c_8 are respectively one-third of the current lower bounds for c_3, c_6 and c_9.

I have a lower bound for c_12 of 287892, built from (abc) = (A02,705,423,165,462) and permutations, with A=10;

and a lower bound for c_15 of 7376850, built from (abc) = (40B,13B,708,168,735,465,762) and permutations, with B=11.

7 February, 2009 at 10:05 pm

Terence Tao

219. Lower bounds for $c_n$

Hi Michael! I propagated your lower bounds back in n using (2), and they seem to give quite good bounds. Certainly our accuracy for $c_n$ is much superior to what we currently have for $c'_n$ or $c''_n$.

I also like the suggestion in Jakobsen.210 to try to understand the ways in which a line-free set can intersect $\Gamma_{a+r,b,c} \cup \Gamma_{a,b+r,c} \cup \Gamma_{a,b,c+r}$. A test problem that might be susceptible to brute force attack is the following: consider the set $S := \Gamma_{2,1,1} \cup \Gamma_{1,2,1} \cup \Gamma_{1,1,2} \subset [3]^4$. This is the union of three groups of 12 vertices, and it contains 24 triangles (combinatorial lines), each of which meets each of the three groups in exactly one vertex, with each vertex belonging to two triangles. Now let $A$ be a line-free subset of S, where $\alpha := |A \cap \Gamma_{2,1,1}|$, $\beta := |A \cap \Gamma_{1,2,1}|$, $\gamma := |A \cap \Gamma_{1,1,2}|$. The question is: what constraints are there on the triple $(\alpha, \beta, \gamma)$? A simple double counting argument shows that $\alpha + \beta + \gamma \leq 24$ (this is already alluded to in 210). And it is easy to see that $(\alpha, \beta, \gamma)$ can attain the values $(12,12,0)$, $(12,0,12)$, $(0,12,12)$. But are these the only ways that $\alpha + \beta + \gamma$ can be as large as 24? If so, this suggests that the most efficient way to build line-free sets is to zero out some of the $\Gamma_{a,b,c}$ and pile all the vertices into the rest – in short, to perform the type of strategy we are already doing from 206 onwards.

8 February, 2009 at 12:13 am

Michael Peake

219. Lower bounds for c_n

I wish to correct the triples for c_12 in my previous post (see below). I have found a way to automate construction of lower bounds when n is a multiple of 3. The current lower bounds for c_{3m} are built like this, written in terms of Tao.206’s (a,b,c):

- c_3 from (012) and permutations
- c_6 from (123,024) and perms
- c_9 from (234,135,045) and perms
- c_12 from (345,246,156,02A,057) and perms (A=10)
- c_15 from (456,357,267,13B,168,04B,078) and perms (B=11)

To get the triples in each row, add 1 to the triples in the previous row; then include new triples that have a zero. This method gives a lower bound for c_99 that is bigger than 3^98.

8 February, 2009 at 2:29 am

Michael Peake

220. Lower bounds for c_n

Hi Terry. An answer to your test question is that you can keep 8 from each group of 12: remove those, such as 0201, whose doubled digits are in positions 1 and 3, or positions 2 and 4. This removes all lines, because the wild-card digit has to match the digit two positions away once in each line.

8 February, 2009 at 3:24 am

Sune Kristian Jakobsen

221. Algorithm / Upper bounds.

One way to show that the idea in 210 can _not_ work would be the following:

Find a weighting w such that for every a, b, c and r there exist sets $A \subseteq \Gamma_{a+r,b,c}$, $B \subseteq \Gamma_{a,b+r,c}$ and $C \subseteq \Gamma_{a,b,c+r}$ with no lines in $A \cup B \cup C$, but with combined size exceeding what the weighting allows.

I don’t know if such a weighting exists, but in the k=2 case (Sperner’s theorem) it does: for a given a, b and r we can choose a set A of elements x of one size and a set B of elements x of the other size, each cut down by a condition on a fixed coordinate. These sets are big enough, and don’t contain any lines, and the sum of the weight function is almost the full weight.

8 February, 2009 at 4:52 am

Sune Kristian Jakobsen

222. Algorithm / Upper bounds

The obvious generalization to the k=3 case works: we can choose a set A of elements x in the first slice and a set C of elements x in the third slice (each cut down by a condition on a fixed coordinate), together with a suitable set B, so that $A \cup B \cup C$ doesn’t contain any lines.

This weight gives a density of almost ¼. So the idea in 210 won’t work for large n. Still, I think it might be interesting to look at the questions Tao asked in 219 for general a, b, c and r.

8 February, 2009 at 5:43 am

Michael Peake

223. Lower bounds of c_n

There is a connection between c_4, c_5 and c_6: of the 450 length-six sequences defined in Peake.219, 150 begin with a 1….., and 52 begin with 12….. The same thing applies with the current lower bounds of c_7, c_8 and c_9. It also gives lower bounds of 32864 for c_10, and 837850 for c_13.

8 February, 2009 at 7:36 am

Terence Tao

224. Lower bounds for $c_n$

@Peake.219: This is quite interesting. I wonder if it is possible to extract an asymptotic for the lower bound obtained by this method as $n \to \infty$ (i.e. Question II.C). It is also interesting to see that $c_n / 3^n$ is really decaying very slowly as n increases.

It may be time to set up some sort of communal spreadsheet for all this data – I’ll have a look into this.

@Peake.220: Ah well, it was worth a shot. (Note, by the way, that the bound in 219 implies the sharp bound $c_4 \leq 52$, since a line-free set must omit at least 12 elements from $\Gamma_{2,1,1} \cup \Gamma_{1,2,1} \cup \Gamma_{1,1,2}$, and similarly for cyclic permutations; so a better understanding of when equality occurs in 219 would lead to a classification of the line-free sets of maximal cardinality, which should assist with the upper bound for $c_5$, as with Jakobsen.90.)

[*Update*, Feb 8: I made an arithmetic error here; this inequality is not enough by itself to get to 52. One also needs to figure out how to bring the other slices $\Gamma_{a,b,c}$ into the mix. But the general point may still be valid.]

In light of the discussion in Gowers’ threads, it seems that the Kruskal-Katona theorem,

http://en.wikipedia.org/wiki/Kruskal%E2%80%93Katona_theorem

is likely to be relevant here, at least for large n.

@Peake.223: Hmm. Does this mean that all of our optimal counterexamples are in fact nested (e.g. is the $c_5 \geq 150$ example a slice of the $c_6 \geq 450$ example)? This may be a transient phenomenon, but still an interesting one.

8 February, 2009 at 8:06 am

DHJ — quasirandomness and obstructions to uniformity « Gowers’s Weblog (pingback)

[…] after the post entitled A combinatorial approach to density Hales-Jewett. The other two are Upper and lower bounds for the density Hales-Jewett problem, hosted by Terence Tao on his blog, and DHJ — the triangle-removal approach, which is on […]

8 February, 2009 at 8:09 am

Terence Tao

225. Spreadsheet

OK, I _think_ I’ve set up a collaborative spreadsheet at

http://spreadsheets.google.com/ccc?key=p5T0SktZY9DsU-uZ1tK7VEg

to track the current data we have for $c_n$, $c'_n$ and $c''_n$. It should be editable to anyone with a google account. It should automatically propagate the bounds (1), (2) to any new bound entered. (I also have a backup copy in case something goes wrong.) One should presumably be able to program the computations in 219 into here too. Please feel free to contribute to it.

8 February, 2009 at 1:47 pm

Jason Dyer

226.

Just a quick note: $c''_n$ is *not* in the OEIS.

8 February, 2009 at 6:19 pm

Michael Nielsen

227. This has perhaps already been pointed out, but there is at most one entry in the Encyclopedia of Integer Sequences consistent with what is already known about $c_n$. That is A052979. It matches exactly where we already know the value, and matches the (current) bounds for at least the first fifteen entries.

8 February, 2009 at 7:05 pm

Michael Peake

227. Spreadsheet

Hi, I looked at the spreadsheet, but can’t edit it. I logged into Google, but the Edit menu is grayed out. Does anyone else have this problem?

8 February, 2009 at 7:05 pm

Terence Tao

228. Edel

I’ve added the lower bounds from this paper of Edel. In this other paper of Edel, an inequality is obtained which, in our notation, slightly improves on (2) for this problem. I’ve updated the spreadsheet and the table accordingly.

Michael.227: oops, I didn’t set the permissions properly. I think it should be OK now…

8 February, 2009 at 8:07 pm

KS Chua

229. $c_n$ via optimization

For HJ2, there is a well known method for expressing the analogous maximal line-free size as the maximization of a quadratic polynomial over a box, so that one can apply a continuous optimization method.

Let $G$ be a graph with vertex set given by all subsets of $\{1,\dots,n\}$, and let a pair $\{A,B\}$ be an edge if and only if it defines a line. Then the quantity in question is the size of a maximal independent set in $G$.

Let $x$ be a variable vector of length $2^n$, which we think of as $(x_A)$ with $A$ indexing the subsets of $\{1,\dots,n\}$. The known quadratic programming problem defining the independence number is

$\max_{x \in [0,1]^{2^n}} \sum_A x_A - \sum_{\{A,B\} \in E} x_A x_B$.

It just occurs to me that a similar result appears to hold for HJ3 and in fact for all k. For HJ3, let $H$ be the 3-hypergraph with vertex set $V = [3]^n$ and set $\{u,v,w\}$ to be a 3-edge if and only if it is a line. $c_n$ is then the size of a maximal independent set as before.

Let

$P(x) = \sum_{v \in V} x_v - \sum_{\{u,v,w\} \in E} x_u x_v x_w$.

The following then holds:

$c_n = \max_{x \in \{0,1\}^V} P(x) = \max_{x \in [0,1]^V} P(x)$.

To check the first equality, if $S$ is an independent set in $H$ of size $c_n$, set $x_v = 1$ for $v \in S$, and zero otherwise. Then $P(x) = c_n$ because $S$ is independent, so $\max P \geq c_n$. Conversely, if $x \in \{0,1\}^V$ and there are triplets $\{u,v,w\} \in E$ with $x_u = x_v = x_w = 1$, we change $x_u$ to 0. This reduces $\sum_v x_v$ by 1 but also reduces the second term of $P(x)$ by at least one. So, repeating, we can find an $x'$ with $P(x') \geq P(x)$ whose support is an independent set, of size at least $P(x)$; hence $c_n \geq \max_{\{0,1\}^V} P$. The 2nd equality follows from the maximum principle, since $P$ is a square-free (multilinear) polynomial.

So an algorithm which can always determine the maximum of such a polynomial in a box will be able to determine $c_n$, but the problem is hard. Maybe one can try to find $c_n$ this way using quadratic programming for small $n$.

The same should hold for any $k$, with products of degree $k$ in the second term.
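For a tiny instance this equivalence can be verified exhaustively: since P is multilinear, its maximum over the box is attained at a vertex, so it suffices to scan $\{0,1\}^V$. The sketch below (my code, not from the comment) does this for $[3]^2$, where the maximum line-free set has the known size $c_2 = 6$:

```python
from itertools import product

# vertices: the 9 strings of [3]^2; 3-edges: the 7 combinatorial lines
V = [''.join(s) for s in product('123', repeat=2)]
E = [tuple(''.join(v if ch == 'x' else ch for ch in t) for v in '123')
     for t in product('123x', repeat=2) if 'x' in t]

def P(x):
    """P(x) = sum_v x_v - sum over 3-edges of x_u * x_v * x_w."""
    return (sum(x[v] for v in V)
            - sum(x[u] * x[v] * x[w] for (u, v, w) in E))

# P is multilinear, so its maximum over the box [0,1]^V is attained
# at a corner; scan all 2^9 corners of {0,1}^V
best = max(P(dict(zip(V, bits))) for bits in product((0, 1), repeat=len(V)))
```

Here `best` comes out to 6, matching $c_2$; of course, for realistic n the box has $2^{3^n}$ corners and one would need an actual quadratic (here cubic) programming solver, which is the hard part.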

8 February, 2009 at 8:58 pm

Kristal Cantwell

A052979 is within the current bounds for c_n for the first 27 entries of the spreadsheet.

8 February, 2009 at 9:38 pm

Kristal Cantwell

The sequence c_n’ matches the sequence A027994 for the first five values and lies between the upper and lower bounds of c_n’ as calculated in the spreadsheet for the next 23 values; it matches the sequence A027068 for the first five values and lies within the bounds for the next 22 values. These are the only two sequences in the OEIS which could possibly be equal to c_n’.

9 February, 2009 at 1:04 am

Boris

232. Can one modify the Behrend construction to give a lower bound on $c'_n$?

In a geometric line, the numbers of 1’s, 2’s and 3’s in the three points are $(a+p, b, c+q)$, $(a, b+p+q, c)$ and $(a+q, b, c+p)$, where $p$ and $q$ denote the numbers of wildcards of the two types. In other words, if the first two signatures are given, then the third is determined by them. This is similar to the situation where Behrend works: one point is determined by the other two in a linear manner. Here the “linear manner” is somewhat crooked, and I do not see how to apply Behrend, or any of the standard heuristics.

Hence, question: how large is the largest subset of the triangular lattice not containing a configuration of the above form?

9 February, 2009 at 1:42 am

Michael Peake

232. Lower bounds of c_n

I have added some more lower bounds to Terry.225’s spreadsheet. The sets of (abc) that I have used are the following, for N a multiple of 3; I think that they are triangle-free. For N=3M-1, restrict the first digit of a 3M sequence to be 1; for N=3M-2, restrict the first two digits of a 3M sequence to be 12. For N<21, ignore any triple with a negative entry.

For N=6M:

(2x, 2x+2, N-4x-2) and permutations (x=0..M-4)

(2x, 2x+5, N-4x-5) and perms (x=0..M-4)

(2x, 3M-x-4, 3M+x+4) and perms (x=0..M-4)

(2x, 3M-x-1, 3M+x+1) and perms (x=0..M-4)

(2x+1, 2x+5, N-4x-6) and perms (x=0..M-5)

(2x+1, 2x+8, N-4x-9) and perms (x=0..M-5)

(2x+1, 3M-x-1, 3M-x) and perms (x=0..M-5)

(2x+1, 3M-x-4, 3M-x+3) and perms (x=0..M-5)

(N/3-7, N/3-3, N/3+10) and perms

(N/3-7, N/3, N/3+7) and perms

(N/3-7, N/3+3, N/3+4) and perms

(N/3-6, N/3-4, N/3+10) and perms

(N/3-6, N/3-1, N/3+7) and perms

(N/3-6, N/3+2, N/3+4) and perms

(N/3-5, N/3-1, N/3+6) and perms

(N/3-5, N/3+2, N/3+3) and perms

(N/3-4, N/3-2, N/3+6) and perms

(N/3-4, N/3+1, N/3+3) and perms

(N/3-3, N/3+1, N/3+2) and perms

(N/3-2, N/3, N/3+2) and perms

(N/3-1, N/3, N/3+1) and perms

For N=6M+3:

(2x, 2x+4, N-4x-4) and perms, x=0..M-3

(2x, 2x+7, N-4x-7) and perms, x=0..M-3

(2x, 3M+1-x, 3M+2-x) and perms, x=0..M-3

(2x, 3M-2-x, 3M+5-x) and perms, x=0..M-3

(2x+1, 2x+3, N-4x-4) and perms, x=0..M-4

(2x+1, 2x+6, N-4x-7) and perms, x=0..M-4

(2x+1, 3M-x, 3M-x+2) and perms, x=0..M-4

(2x+1, 3M-x-3, 3M-x+5) and perms, x=0..M-4

(N/3-7, N/3-3, N/3+10) and perms

(N/3-7, N/3, N/3+7) and perms

(N/3-7, N/3+3, N/3+4) and perms

(N/3-6, N/3-4, N/3+10) and perms

(N/3-6, N/3-1, N/3+7) and perms

(N/3-6, N/3+2, N/3+4) and perms

(N/3-5, N/3-1, N/3+6) and perms

(N/3-5, N/3+2, N/3+3) and perms

(N/3-4, N/3-2, N/3+6) and perms

(N/3-4, N/3+1, N/3+3) and perms

(N/3-3, N/3+1, N/3+2) and perms

(N/3-2, N/3, N/3+2) and perms

(N/3-1, N/3, N/3+1) and perms

9 February, 2009 at 3:38 am

Gil: Even if we are tentatively happy about the bounds (like (6)) based on AP-free sets for k=3, for problem II.b for higher k it looks like we should be able to do much better (than just replacing Behrend by the examples for larger k).

At least this is suggested by the way our bound changes between the k=2 case and the k=3 case.

E.g. we can try something like this: map words of length n in an alphabet of 4 letters to words in 3 letters, say with letter a representing 1 or 2, letter b representing 2 or 3, and letter c representing 3 or 4.

Then try to find a large set of words in a, b and c whose preimage automatically excludes combinatorial lines of length 4.
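Gil’s projection can be made concrete; here is a small sketch (the letter-to-digit assignment is the one described above; the function and variable names are mine) that enumerates the preimage of a word over {a,b,c}:

```python
from itertools import product

# Each letter of the 3-letter alphabet stands for two of the four digits,
# with adjacent ranges overlapping, as in Gil's suggestion.
LETTERS = {'a': (1, 2), 'b': (2, 3), 'c': (3, 4)}

def preimage(word):
    """All words over {1,2,3,4} that project onto the given word over {a,b,c}."""
    return list(product(*(LETTERS[ch] for ch in word)))

# A word of length n has a preimage of size 2^n.
print(len(preimage("abc")), (1, 2, 3) in preimage("abc"))  # 8 True
```

The question is then which sets of words in {a,b,c}^n have preimages avoiding every length-4 combinatorial line.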

9 February, 2009 at 7:55 am

Jason Dyer (Meta)

Kristal, Terry’s original post already specified this sequence is .

9 February, 2009 at 8:12 am

Jason Dyer234. via optimization

Regarding Chua.229, shouldn’t that be rather than in the k=3 version of the algorithm? (Otherwise I’m confused how the indexing works.)

It looks like this could be the best algorithm to get exact values of , although the algorithm is elaborate enough I would be worried about the integrity of the code. Is there at least some implementation for k=2 out there already?

9 February, 2009 at 8:27 am

Boris235.

I want to just add that if we restrict the triple to lie in a Cartesian product , then “crookedness” disappears and we end up with a system of three linear equations in nine variables, whose solutions we want to avoid in . For the next few hours I will not have the time to try adapting Behrend to this setting though.

9 February, 2009 at 1:44 pm

Boris236. Restriction to Cartesian products is too rough a proposal. It does not even work for corners. Indeed the points have four distinct coordinates among them, . They are linked by the equation . The largest set avoiding solutions to this equation has size √n·(1+o(1)) (this equation usually goes by the name of the Sidon equation, and is extremely well-studied).

Similarly, a straightforward implementation of my proposal in 235 gives the equation where . The pigeonhole principle tells us that the largest solution-free set is of size .

The question remains: is there a Behrend-type construction?

9 February, 2009 at 4:56 pm

Terence Tao237.

Boris, this was discussed a little bit in 43, 48 of the original thread. The basic problem is the case (in the notation of your 232) when two of the points on the crooked line coincide. It basically means that one cannot have more than one point on any line . Conversely, any configuration with this property will be free of crooked lines, which I guess is what leads to the known lower bound.

A subproblem may be to work out whether one can find denser sets on the set that are free of geometric lines than just a single , and have a better density than asymptotically. If so, then we can glue together such sets as ranges over a Behrend set and get a competitive example.

9 February, 2009 at 8:01 pm

Michael Peake238. Lower bounds of c_n

The pattern I described in My.232 has a slight error when N=6M+3: x ranges from 0 to M-4, not M-3.

This pattern, for N a multiple of 3, leaves no room for any more points – any new point would complete a line.

I don’t know the asymptotic for this pattern’s value of c_n/3^n, but 2.7 sqrt(log(n)/n) is a good fit when n is between 4000 and 10000.

9 February, 2009 at 9:25 pm

KS Chua239. by optimization.

Sorry, I made a mistake in 229. The box should be for instead of , as Jason Dyer noted. Also for it is no longer a QP problem and one needs a general optimization routine.

10 February, 2009 at 3:56 am

Tyler Neylon240. Algorithms

I think finding a minimum-sized hypergraph transversal is NP-hard, even for 3-uniform hypergraphs.

We can reduce 3-SAT to this problem. We are given a boolean expression where each is the OR of 3 terms, each term either a boolean variable from , or the negation thereof. Form a hypergraph with vertices , and each as a hyperedge. Add a dummy node , and a hyperedge for each boolean variable. These last hyperedges ensure there is no smaller solution than the one corresponding to a satisfying boolean assignment, if one exists. It’s not hard to check that a minimum transversal has size (the number of boolean variables) iff there is a satisfying assignment; this completes the reduction.

What does this mean for us? If we want a polynomial time algorithm (in the size of the graph), we have to use more of the structure than just the fact that every line has 3 points, assuming P ≠ NP. Also recall that the size of our graph itself is already exponential in n.
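Tyler’s reduction can be sketched and sanity-checked by brute force. The formulas in the comment did not survive, so the details below (the literal encoding, and one dummy node per variable with a hyperedge {x_i, ¬x_i, d_i}) are my reading of the construction, not a verbatim transcription:

```python
from itertools import combinations

def sat_to_transversal(n_vars, clauses):
    """3-SAT instance -> 3-uniform hypergraph; literals encoded as +i / -i.

    One reading of the construction: a dummy node ('d', i) per variable,
    with hyperedge {x_i, not-x_i, d_i} forcing one pick per variable.
    """
    edges = [frozenset(cl) for cl in clauses]
    edges += [frozenset({i, -i, ('d', i)}) for i in range(1, n_vars + 1)]
    return edges

def min_transversal_size(edges):
    """Brute-force the smallest vertex set hitting every hyperedge."""
    vertices = sorted({v for e in edges for v in e}, key=repr)
    for size in range(len(vertices) + 1):
        for cand in combinations(vertices, size):
            s = set(cand)
            if all(s & e for e in edges):
                return size

# (x1 or x2 or x3) and (not-x1 or not-x2 or x3) is satisfiable,
# so the minimum transversal has size n_vars = 3 (e.g. {x1, x2, x3}).
edges = sat_to_transversal(3, [(1, 2, 3), (-1, -2, 3)])
print(min_transversal_size(edges))  # 3
```

With a single shared dummy node the count would break (the dummy could cover all the variable edges at once), which is why the sketch uses one dummy per variable.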

10 February, 2009 at 10:06 am

Michael Peake241. Upper bound for c_5

This proof is along the lines of Sune.90. Sune shows there is just one way to place 18 points in a cube, and that at most 52 points fit in [3]^4.

Suppose 156 points may be chosen in [3]^5. There must then be 52 points in each [3]^4 slice.

Divide into nine cubes. There are 17 or 18 points in each cube, and one cube in each row and column has 18 points. For example

17, 17, 18

18, 17, 17

17, 18, 17

Cut the cubes further, into squares. The 18-point cubes must cut into three six-point slices x, y and z, but the 17-point cubes have more variation:

pqr, stu, xyz

xyz, abc, def

ghk, xyz, lmn

Take the three cubes in the top row, and slice them along a different axis so they are psx, qty and ruz. One of these cubes has 18 points, and so is xyz; so r=x and u=y. Similar logic in the second column gives u=x and c=y. So there is a contradiction.

There are four other ways to place the 17-point and 18-point cubes, but they all lead to contradictions. So 156 points can’t be placed in [3]^5.

10 February, 2009 at 11:39 am

Terence Tao242.

Michael: Nice! I’ve updated the spreadsheet and the table accordingly. Computing c_5 exactly may well be within reach now.

10 February, 2009 at 9:53 pm

Michael Peake243.

The same proof works for 155 points, except for the case

17 17 18

17 17 17

18 17 17

and permutations of that. I can assume that, when sliced along any axis, each cube has at least 17 points.

Cut the cubes into slices as in Michael.240. Some of the slices must be x, y or z, as shown there:

pqx rsy xyz

tuy abc yde

xyz yfg zhk

When the pqx cube is cut into squares along any of its three axes, it has an x slice in third position. That forces the placement of thirteen points in the cube, and needs four points to be placed in the remaining 2x2x2 sub-cube. It turns out that pqx is either yz’x or y’zx, where z’ is a z slice with one point removed.

If pqx = yz’x when sliced along one axis, then it is yz’x when sliced along all three axes. Then the cube paz, from reslicing the diagonal, is yaz along all three axes, which is impossible.

So pqx = y’zx when sliced along each axis, and rsy = tuy = zx’y. For the same reasons, zhk = zxy’. So cube qbh = zbx along all three axes, which is impossible.

I think this contradiction completes the proof.

10 February, 2009 at 10:09 pm

Michael Peake244.

In 243., I made an error in the paragraph that begins “If pqx = yz’x”.

I just need the fact that a can’t have five points if yaz has no lines.

Also, at the end, b can’t have five points if zbx has no lines.

11 February, 2009 at 5:25 am

Michael Peake245.

Recall that there is just one pattern to fit 18 points in a cube; the three square slices of this pattern (along any axis) are x, y and z. To fit 17 points in a cube, the only way is to remove one point from either xyz, yzx or zxy.

This makes the proof in 243. easier, because the slices are formed by removing points from

yzx zxy xyz

zxy xyz yzx

xyz yzx zxy

Now the major diagonal of the cube is yyy, and six points must be removed from that. Four of the off-diagonal cubes must also lose points. That leaves 152 points, which contradicts the 155 points we started with.

11 February, 2009 at 9:46 am

Terence Tao246.

Great! I’ve updated the table again (I see that the spreadsheet was already updated, excellent).

It would be interesting to see what the 150 lower bound example looks like in xyz notation, this may clarify what needs to be done to narrow the upper and lower bounds further.

11 February, 2009 at 12:46 pm

Kristal Cantwell247. Let us look at the set of 150 points which form the lower bound for c_5.

It would consist of all points whose coordinate sum is not zero mod 3, with the point with all 5 coordinates equal to 1 and the point with all 5 coordinates equal to 2 removed, combined with the removal of all points with one coordinate equal to 3 and the rest equal to 1, and of all points with one coordinate equal to 3 and the rest equal to 2.

This gives the following:

17 17 18

18 14 17

14 18 17

which gives (letting u equal x with one point removed, v equal y with one point removed, and t equal z with one point removed):

uyz xvz xyz

xyz ??? xvz

??? xyz uyz

Note we have chosen three coordinates to be within the cube and two others to determine the position of the cube within the square. Coordinates go from one to three from left to right, and one to three upwards. x, y and z are squares determined by one of the coordinates of the cube: x (or any letter in the leftmost position) corresponds to the square with coordinate equal to one; the letters in the next two positions correspond to the squares with the coordinate equal to 2 and 3.

The question marks are for the cubes with 14 points. Are there problems with having all the cubes have a value that is close to 18? Is there a set of points, having no combinatorial lines, with 5 coordinates varying from 1 to 3, such that all of the cubes have value 17 or 18?

11 February, 2009 at 1:22 pm

Terence Tao248. Conventions for xyz notation

I think it will be helpful to fix the conventions for the xyz notation. If we use Cartesian coordinates

13 23 33

12 22 32

11 21 31

to parameterise , then (if I understand Michael’s posts correctly), x is the six-element set

* _ *

* * _

_ * *

(where * denotes elements of x and _ denotes non-elements), y is the six-element set

_ * *

* _ *

* * _

and z is the six-element set

* * _

_ * *

* _ *

and xyz is the unique 18-element line-free set in (with the convention that abc is the set whose first slice is a, second slice is b, and third slice is c).

Michael, does this match your conventions?

Also, when you say “to fit 17 points on the cube, the only way is to remove one point from xyz, yzx, or zxy”, are there some restrictions on what point one can remove? Clearly one can remove any point one wishes from xyz, but there seems to be more constraints for yzx and zxy.

11 February, 2009 at 1:41 pm

Terence Tao249. A hyper-optimistic conjecture from 400-499

Over at the 400-499 thread, Gil Kalai.455 and Tim Gowers.459 have proposed a “hyper-optimistic” conjecture, and asked whether it could be falsified using the examples from this thread. Let me rephrase it as follows. Given a set , define the

weighted size of A by the formula ; thus each slice has weighted size 1 (and we have been referring to as “slices-equal measure” for this reason), and the whole cube has weighted size equal to the triangular number .

Example: in , the diagonal points 11, 22, 33 each have weighted size 1, whereas the other six off-diagonal points have weighted size 1/2. The total weighted size of is 6.
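Assuming, as the “slices-equal measure” phrase suggests, that a string with a 1s, b 2s and c 3s receives weight 1/|Γ_{a,b,c}| (the function names below are mine), the weighted size is easy to compute, and the n=2 example comes out to 6 as claimed:

```python
from itertools import product
from math import factorial

def slice_size(a, b, c):
    """Number of strings in the slice Gamma_{a,b,c} (a multinomial coefficient)."""
    return factorial(a + b + c) // (factorial(a) * factorial(b) * factorial(c))

def weighted_size(strings):
    """Slices-equal weighted size: each string counts 1 / |its slice|."""
    total = 0.0
    for s in strings:
        total += 1.0 / slice_size(s.count('1'), s.count('2'), s.count('3'))
    return total

# The whole cube [3]^2: diagonal strings weigh 1, off-diagonal strings 1/2.
cube2 = [''.join(w) for w in product('123', repeat=2)]
print(weighted_size(cube2))  # 6.0, the triangular number for n = 2
```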

Let be the largest weighted size of a line-free set. For instance, , , and . As in the unweighted case, every time we find a subset B of the grid without equilateral triangles, it gives a line-free set . The weighted size of this set is precisely the cardinality of B. Thus we have the lower bound , where is the largest size of a subset of without equilateral triangles.

Hyper-optimistic conjecture: We in fact have . In other words, to get the optimal weighted size for a line-free set, one should take a set which is a union of slices . This conjecture, if true, will imply the DHJ theorem (see the 400-499 discussion for details). Note also that all our best lower bounds for the unweighted problem to date have been unions of slices. Also, the k=2 analogue of the conjecture is true, and is known as the LYM inequality (in fact, for k=2 we have for all n).

If the conjecture is false, then perhaps this will be visible by computing upper and lower bounds for for small n. By hand I can check that for n=0,1,2… but perhaps with all the above progress we can also get good bounds for n=3,4,5?

11 February, 2009 at 4:25 pm

Tyler Neylon250. A hyper-optimistic conjecture

Could someone clarify the definition of ?

As far as generalizations of LYM go, let , and suppose for all line-free . Then Michael’s examples seem to suggest grows quickly – as opposed to in LYM – so, intuitively, how do we hope for something like LYM?

I don’t think this is exactly what’s needed, but we do have where is the percent of lines passing through any single element of .

11 February, 2009 at 4:45 pm

Terence Tao251. A hyper-optimistic conjecture

One can define in two equivalent ways. One of them is that it is the largest weighted size of a line-free set which is the union of slices . (This is in contrast to , which is the largest weighted size of any line-free set, not necessarily the union of slices.) Another equivalent definition is that is the size of the largest subset of the triangular grid which contains no equilateral triangles. For instance, by deleting the two points (0,0,2) and (1,1,0) from one removes all triangles, whereas removing one point is not enough, and so .

As for your second question, I don’t think we have yet a strategy for this, but one could imagine some sort of extremal combinatorics argument by showing that a line-free set which is not a union of slices can be “improved” to increase its weighted size, without losing the line-free property. Showing this rigorously, though, would be, well, hyper-optimistic.

(Incidentally, what you call f(n) would be what I would call an upper bound for .)

11 February, 2009 at 7:15 pm

Jason Dyer252. Hyper-optimistic conjecture

Am I understanding this right?

Minimal deletion for removes (0,3,0) (0,2,1) (2,1,0) (1,0,2), leaving 6 points. So ?

11 February, 2009 at 7:48 pm

Terence Tao253.

Well, that certainly makes the set triangle-free, so . If you know that you cannot make triangle-free by removing only three points, then yes, would equal 6.

11 February, 2009 at 8:12 pm

Jason Dyer254. Proof cannot be triangle free removing only 3 points

Yes, with only three removals each of these (non-overlapping) triangles must have one removal:

set A: (0,3,0) (0,2,1) (1,2,0)

set B: (0,1,2) (0,0,3) (1,0,2)

set C: (2,1,0) (2,0,1) (3,0,0)

Consider choices from set A:

(0,3,0) leaves triangle (0,2,1) (1,2,0) (1,1,1)

(0,2,1) forces a second removal at (2,1,0) [otherwise there is triangle at (1,2,0) (1,1,1) (2,1,0)] but then none of the choices for third removal work

(1,2,0) is symmetrical with (0,2,1)
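Jason’s case analysis can be cross-checked by brute force; the sketch below (function names mine) enumerates the upward triangles (a+r,b,c), (a,b+r,c), (a,b,c+r) with r ≥ 1 and searches for the largest triangle-free subset of the grid:

```python
from itertools import combinations

def grid(n):
    """All (a, b, c) with a + b + c = n and a, b, c >= 0."""
    return [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]

def triangles(n):
    """Upward equilateral triangles (a+r, b, c), (a, b+r, c), (a, b, c+r), r >= 1."""
    return [((a + r, b, c), (a, b + r, c), (a, b, c + r))
            for r in range(1, n + 1) for (a, b, c) in grid(n - r)]

def max_triangle_free(n):
    """Size of the largest triangle-free subset of the grid (exhaustive search)."""
    pts, tris = grid(n), triangles(n)
    for size in range(len(pts), -1, -1):
        for cand in combinations(pts, size):
            s = set(cand)
            if not any(all(v in s for v in t) for t in tris):
                return size

print([max_triangle_free(n) for n in range(5)])  # [1, 2, 4, 6, 9]
```

The values for n = 0..4 match the ones found by hand in this thread (in particular 6 for n=3 and 9 for n=4).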

12 February, 2009 at 12:49 am

Michael Peake255.

Kristal.247: I think my 245 shows that, if all the cubes have 17 or 18 points, then you have to delete ten points from a full 9×18 pattern of xs, ys and zs. That leaves 152 points, which is less than 9*17. So at least one cube has to have 16 or fewer points.

Another pattern of 150 points is this: Take the 450 points in which are and permutations, then select the 150 whose final coordinate is 1. That gives this many points in each cube:

17 18 17

17 17 18

12 17 17

Terry.248: Yes, that matches what I meant by x, y and z; yzx and zxy have a full line on the cube’s major diagonal, so you have to remove one of those three points to leave a valid seventeen points.

12 February, 2009 at 7:32 am

Jason Dyer256. Hyper-optimistic conjecture

The case was originally proposed as a puzzle by Kobon Fujimura (who is best known for Kobon triangles).

I have been searching the recreational mathematics literature for any other mention of our problem but I haven’t found a reference.

12 February, 2009 at 7:51 am

Jason Dyer (Meta) Coin puzzles

Fujimura’s version of is simply: given a pyramid of 10 coins, what is the smallest number you need to remove so that there are no equilateral triangles?

The most common type of coin puzzle (which goes back to Dudeney) involves changing one configuration of coins into another with a certain number of moves. For example, invert a pyramid of 10 coins in the smallest number of moves.

This article from the (highly recommended) book Games of No Chance analyzes this type of puzzle in detail.

12 February, 2009 at 10:46 am

Sune Kristian Jakobsen257.

:

The set of all (a,b,c) in with exactly one of a,b,c = 0 has 9 elements and doesn’t contain any equilateral triangles.

Let be a set without equilateral triangles. If , there can only be one of (0,x,4-x) and (x,0,4-x) in S for x=1,2,3,4. Thus there can only be 5 elements in S with a=0 or b=0. The set of elements with a,b>0 is isomorphic to , so S can have at most 4 elements in this set. So . Similarly if S contains (0,4,0) or (4,0,0). So if S doesn’t contain any of these. Also, S can’t contain all of (0,1,3), (0,3,1), (2,1,1). Similarly for (3,0,1), (1,0,3), (1,2,1) and (1,3,0), (3,1,0), (1,1,2). So now we have found 6 elements not in S, but , so .

12 February, 2009 at 11:32 am

Sune Kristian Jakobsen258.

:

The set of all (a,b,c) in with exactly one of a,b,c=0 has 12 elements and doesn’t contain any equilateral triangles.

Let be a set without equilateral triangles. If , there can only be one of (0,x,5-x) and (x,0,5-x) in S for x=1,2,3,4,5. Thus there can only be 6 elements in S with a=0 or b=0. The set of elements with a,b>0 is isomorphic to , so S can have at most 6 elements in this set. So . Similarly if S contains (0,5,0) or (5,0,0). So if S doesn’t contain any of these. S can only contain 2 points in each of the following equilateral triangles:

(3,1,1),(0,4,1),(0,1,4)

(4,1,0),(1,4,0),(1,1,3)

(4,0,1),(1,3,1),(1,0,4)

(1,2,2),(0,3,2),(0,2,3)

(3,2,0),(2,3,0),(2,2,1)

(3,0,2),(2,1,2),(2,0,3)

So now we have found 9 elements not in S, but , so .
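Sune’s “exactly one coordinate zero” sets are easy to verify by machine; this sketch (function names mine) checks the claimed sizes (9 and 12) and the absence of upward equilateral triangles for n = 4 and 5:

```python
def one_zero_set(n):
    """Points (a, b, c) with a + b + c = n and exactly one coordinate zero."""
    pts = [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]
    return [p for p in pts if sum(x == 0 for x in p) == 1]

def is_triangle_free(pts):
    """No upward triangle (a+r, b, c), (a, b+r, c), (a, b, c+r) with r >= 1."""
    s = set(pts)
    n = sum(pts[0])
    for r in range(1, n + 1):
        for a in range(n - r + 1):
            for b in range(n - r + 1 - a):
                c = n - r - a - b
                if {(a + r, b, c), (a, b + r, c), (a, b, c + r)} <= s:
                    return False
    return True

for n in (4, 5):
    s = one_zero_set(n)
    print(n, len(s), is_triangle_free(s))  # 4 9 True / 5 12 True
```

Any upward triangle must have a vertex with no zero coordinate or with two zero coordinates, which is why the construction works for every n.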

12 February, 2009 at 1:03 pm

Christian Elsholtz259. About the constant c”(n): the value of c”(6)=112 has been proved by

Aaron Potechin: Maximal caps in AG (6, 3)

Journal Designs, Codes and Cryptography, Volume 46, Number 3 / March, 2008

http://www.springerlink.com/content/h003577g11112308/

This gives a new set of slightly improved upper bounds: for n=7 to 15 as follows:

{7, 292}, {8, 773}, {9, 2075}, {10, 5632}, {11, 15425},

{12, 42569}, {13, 118237}, {14, 330222}, {15, 926687}

A discussion of the problem on caps, with plenty of references, can also be found in:

Yves Edel, Christian Elsholtz, Alfred Geroldinger, Silke Kubertin, Laurence Rackham: Zero-sum problems in finite abelian groups and affine caps.

Quarterly Journal of Mathematics 58 (2007), 159-186.

http://qjmath.oxfordjournals.org/cgi/content/abstract/58/2/159

12 February, 2009 at 1:29 pm

Jason Dyer260.

Sune, could you give your explicit list for removals from ? I am unable to reproduce a triangle-free configuration from your description.

For example, (4,0,0) (0,4,0) (0,0,4) (2,1,1) (1,1,2) (1,2,1) leaves the triangle (2,2,0) (0,2,2) (2,0,2).

12 February, 2009 at 1:30 pm

Terence Tao260. Updates to spreadsheet

I’ve updated the spreadsheet to take into account the above developments (in particular, adding entries for upper and lower bounds for and ). I’m using the trivial bounds and , but clearly we should be able to do better than these bounds.

12 February, 2009 at 1:56 pm

Jason Dyer261. Lower bound for

One easy lower bound is to note that one can always go triangle free by removing (n,0,0), the triangle (n-2,1,1) (0,n-1,1) (0,1,n-1), and all points on the edges of and inside the same triangle.

For example, with remove (0,4,0), (1,2,1), (2,1,1), (1,1,2), (3,0,1), (2,0,2), and (1,0,3).

Therefore a lower bound would be . (That is, the triangular number of n+1 with (n,0,0) removed and then the triangular number of n-1 removed, referring to the triangle (n-2,1,1) (0,n-1,1) (0,1,n-1).)

12 February, 2009 at 4:46 pm

Michael Peake262. Three solutions for

There are exactly three solutions for 52 points in

I use recent notation from , x, y and z defined in Terry.248

x’ is x with one point removed; that point in is either 2222 or 3333. Likewise for y’ and z’

All three solutions are made from full Γ_{abc} slices.

Each row is a different solution.

xyz yz’x zxy’

y’zx zx’y xyz

z’xy xyz yzx’

12 February, 2009 at 6:27 pm

Jason Dyer263. Finishing the calculations

I guess that’s what I get for typing and running — the bound at 261 is just 2n.

Also, the proof in 254 can be much, much simpler: just note that (3,0,0) or something symmetrical has to be removed, leaving 3 triangles which do not intersect, so 3 more removals are required.

12 February, 2009 at 8:04 pm

Michael Peake264. Lower bound for

Another lower bound is 3(n-1), as you keep most of all three sides of the triangle. Remove the corners and the inner triangle.

12 February, 2009 at 8:23 pm

Jason Dyer265. Lower bound for \overline{c}^\mu_n

Michael, I have a counterexample for that at 260.

12 February, 2009 at 8:44 pm

Terence Tao266. Wiki

We’ve started up a wiki for the polymath project at

http://michaelnielsen.org/polymath1/index.php?title=Main_Page

I’ve put in a page for the proofs of the best bounds for c_n for small n at

http://michaelnielsen.org/polymath1/index.php?title=Upper_and_lower_bounds

The last few sections still need some work. Of course, everyone is invited to contribute. (For lengthy new computations, it may be best to put the details on the wiki and announce an abridged version here.)

At some point I’ll try to do something similar for .

12 February, 2009 at 8:52 pm

Michael Peake266. Lower bound for

You can fit nine points in a 15-point triangle (n=4): (013, 022, 031, 103, 202, 301, 130, 220, 310). But the best result for only uses seven points: (022, 112, 121, 202, 220, 310, 301).

12 February, 2009 at 9:04 pm

Michael Peake267. Lower bound for

Jason, the triangle made up by (2,2,0), (2,0,2) and (0,2,2) is upside down. It has a constant r=-2, so doesn’t apply to HJ sequences, which use r as a number of columns. I think you were right the first time, in 254., to do the long-winded proof.

12 February, 2009 at 9:12 pm

Terence Tao268. Wiki

Here is a page for , the computation of which I have dubbed “Fujimura’s problem”:

http://michaelnielsen.org/polymath1/index.php?title=Fujimura%27s_problem

Again, some cleanup is required to convert the blog comments into wiki format and to integrate the text smoothly.

12 February, 2009 at 10:49 pm

Michael Peake269. Upper limit for

From looking at triangles in the lowest two rows, we have

12 February, 2009 at 11:29 pm

Michael Peake270. Hyper-optimistic

My point in 266. was, does that contradict the hyper-optimistic conjecture?

13 February, 2009 at 4:25 am

Michael Peake271. Upper limit for Fujimura ( )

There are n(n+1)(n+2)/6 triangles, and each point is in exactly n of them, so you must remove at least (n+2)(n+1)/6 points to remove all triangles.

So an upper limit for is (n+1)(n+2)/3.
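Both counts in this double-counting argument can be checked numerically: summing over the side length r gives n(n+1)(n+2)/6 upward triangles in total, and a point (a,b,c) is a vertex of a + b + c = n of them (the helper names below are mine):

```python
from collections import Counter

def grid(n):
    """All (a, b, c) with a + b + c = n and a, b, c >= 0."""
    return [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]

def triangles(n):
    """Upward triangles (a+r, b, c), (a, b+r, c), (a, b, c+r) with r >= 1."""
    return [((a + r, b, c), (a, b + r, c), (a, b, c + r))
            for r in range(1, n + 1) for (a, b, c) in grid(n - r)]

for n in (3, 4, 5):
    tris = triangles(n)
    incidence = Counter(v for t in tris for v in t)
    # Total count n(n+1)(n+2)/6, and each point lies in exactly n triangles.
    assert len(tris) == n * (n + 1) * (n + 2) // 6
    assert all(incidence[p] == n for p in grid(n))
    print(n, len(tris))  # 3 10 / 4 20 / 5 35
```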

13 February, 2009 at 7:39 am

Terence Tao272. Hyper-optimistic

Dear Michael, it’s not clear yet whether this example violates the hyper-optimistic conjecture, because the extremiser for the problem is not going to be the extremiser for the problem. Indeed, the example, involving only seven of the slices, will have a weighted size of 7, whereas the example has a weighted size of 9. To contradict the conjecture we would need to find a line-free set in with a weighted size exceeding 9.

Incidentally, I found the lower bound for , since one can always take an extremiser in , add a new edge to the triangular grid and put two points on it so that the triangle these two points generate doesn’t have the third point in the extremiser.

Given how this thread is filling up, I will probably start a fresh thread in a day or so. The wiki is very helpful in this regard, it reduces the extent to which I will have to summarise all the progress…

13 February, 2009 at 7:49 am

Jason Dyer272. When r<0

re: Peake.267, yes, I see now. I don’t remember r ever being defined as positive so I never thought about the case. (2,0,2) (2,2,0) (0,2,2) would be 1133, 1122, and 2233 which form no combinatorial lines.

That does bring the interesting question of: is there anything important about this problem relating to the cases of r<0? They might affect the “subcubes” from O’Donnell.422 which seem to be working with “partial” combinatorial lines. Calling the points a, b and c: a and b form a partial combinatorial line, b and c do, and a and c do, but a, b, and c do not form a line together. I believe this situation may be unique for the r<0 case.

Also, it means there are two variants of Fujimura, and while r<0 doesn’t apply to our problem it might be worth looking into as a side project (or at least noting it as a separate case).

13 February, 2009 at 8:48 am

Sune Kristian Jakobsen273.

Michael.219: “This method gives a lower bound for c_99 that is bigger than 3^98.” and

Terry.467: “We also have an example of a subset of of density that still has no combinatorial line.”

Then why is c_96/3^96 = 0.006155058319549? Shouldn’t it be >1/3?

13 February, 2009 at 8:49 am

Sune Kristian Jakobsen273b

The number I mentioned is L98 in the spreadsheet: Lower bound for

13 February, 2009 at 12:04 pm

Kristal Cantwell: The two examples we have for a lower bound for c_5 being 150 have cubes with 14 and 12 points. Is there an example with all cubes 16 or higher and total 150 or higher? Or, failing that, all cubes 15 or higher and total 150 or higher?

13 February, 2009 at 4:14 pm

Thomas Sauvaget275. : an example

Inspired by the previous discussion, I think I’ve found an example showing that ; I’d be grateful if someone could check it.

I’m using slightly different notations: instead of Cartesian coordinates I’m using left-to-right ordering of words recursively to make visualisation easy. Namely, when is even increase upwards, when is odd increase rightwards. In this way we can deal with 2-dimensional patterns only. For example looks like this:

131 132 133 231 232 233 331 332 333

121 122 123 221 222 223 321 322 323

111 112 113 211 212 213 311 312 313

Given those notations here is what I think is a 154-element line-free pattern in :

The bottom-left black element is 11111 (not in the set), and so on. The two lone red elements are removed (Michael’s idea, here these are 12222 and 13333), then I found that removing the 6 other red elements leads to a line-free set (note that these 6 red elements are all on a super-diagonal). Total is thus . This would also fit with the OEIS sequence A052979 mentioned above.

I’ve built the example recursively by matching lower conditions starting from the bottom left square, first killing straight lines then diagonals, then doing the same with the other squares from . In fact there is still some freedom in the pattern: the two squares I’ve circled with blue could have any of the 3 elementary patterns, but here I’ve chosen the most symmetric ones.

It seems that this construction is fairly topological and systematic, it may well be possible to get an explicit recursive formulation, and thus a formula for estimating . I’ll try to come back to it if indeed the example is correct.

13 February, 2009 at 5:07 pm

Terence Tao276:

Dear Thomas,

It unfortunately seems to me that the bottom row of the middle contains some horizontal lines (e.g. the three centres of the three ‘s in this bottom row). There are also some lines on the top row and on the left and right columns of this middle .

(If you write each of the ‘s in xyz notation, one sees the problem; there are a bunch of rows and columns which only have two of the three patterns x, y, z and so let some lines slip in. The ‘s with three red squares in them are the intersection of two of the x, y, z and can kill more lines, but unfortunately this doesn’t cover all the possible lines.)

But it could be that some permutation of this sort of strategy could work… it may provide the first good example we have which is not based on the set.

13 February, 2009 at 8:13 pm

Bounds for the first few density Hales-Jewett numbers, and related quantities « What’s new[…] 2009 in math.CO | Tags: polymath1 | by Terence Tao This thread is a continuation of the previous thread here on the polymath1 project. Currently, activity is focusing on the following […]

13 February, 2009 at 8:14 pm

Terence Tao277. Thread moving

As this thread is beginning to get quite long, I am moving it to a new thread, starting at 700:

https://terrytao.wordpress.com/2009/02/13/bounds-for-the-first-few-density-hales-jewett-numbers-and-related-quantities/

14 February, 2009 at 12:24 am

Michael Peake277. Lower bound of c_n

I tried a pattern of points based on Terry.39’s suggestion. It seems to give asymptotic results similar to what he thought.

First define a sequence of all nonnegative numbers which, in base 3, do not contain a 1. Add 1 to all multiples of 3 in this sequence. This sequence does not contain a length-3 arithmetic progression.

It starts 1,2,7,8,19,20,25,26,55, …
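The sequence is easy to generate; in the sketch below (function names mine) I treat 0 as a multiple of 3, which is what produces the leading 1:

```python
def no_one_base3(k):
    """True if the base-3 expansion of k uses only the digits 0 and 2."""
    while k:
        if k % 3 == 1:
            return False
        k //= 3
    return True

def sequence(limit):
    """Digit-{0,2} numbers below limit, with 1 added to each multiple of 3."""
    return [k + 1 if k % 3 == 0 else k
            for k in range(limit) if no_one_base3(k)]

print(sequence(56))  # [1, 2, 7, 8, 19, 20, 25, 26, 55]

# Spot-check the no-3-term-AP claim on an initial segment.
s = sequence(500)
assert not any(2 * s[j] == s[i] + s[k]
               for i in range(len(s))
               for j in range(i + 1, len(s))
               for k in range(j + 1, len(s)))
```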

Second, list all the (abc) triples for which the larger two differ by a number from the sequence, excluding the case when the smaller two differ by 1, but then including the case when (a,b,c) is a permutation of N/3+(-1,0,1).

This had numerical asymptotics for close to 1.2 sqrt(log(n)/n) between n=1000 and n=10000.

Sune.273: Sorry, I haven’t written all the numbers in yet for my pattern – I am only up to 70, so another formula is used to give lower bounds after that.

19 February, 2009 at 4:16 pm

A quick review of the polymath project « What Is Research?[…] By this time, Gowers’ blog was receiving hundreds of comments, mostly comments by Gowers himself, but also including comments from distinguished mathematicians such as Terence Tao. Tao has his own blog, and he published a post giving the background of the Hales-Jewett theorem and a later post with some of his own ideas about the problem. […]

25 March, 2009 at 12:13 am

An Open Discussion and Polls: Around Roth’s Theorem « Combinatorics and more[…] (as a little spin off to the polymath1 project, (look especially here and here)) if you have any thoughts about where the truth is for these problems, or about how to […]

11 April, 2009 at 11:17 am

Seva278. Edel?

Just to remark that the upper bound for , ascribed in 228. to Edel, is actually present in Meshulam’s paper, in the most explicit form.

12 May, 2009 at 6:26 am

Around the Cap-Set problem (B) « Combinatorics and more[…] An affine line are three vectors so that for every . A cap set is a subset of without an affine line. The cap set problem asks for the maximum size of a cap set. For information on the cap set problem itself look here. For questions regarding density of sets avoiding other types of lines look here. […]

16 July, 2009 at 1:06 pm

The Polynomial Hirsch Conjecture: A proposal for Polymath3 « Combinatorics and more[…] the idea of attempting a “polymath”-style open collaboration (see here, here and here) aiming to have some progress for these conjectures. (The Hirsch conjecture and the polynomial […]

1 May, 2010 at 10:42 am

Michael Nielsen » Introduction to the Polymath Project and “Density Hales-Jewett and Moser Numbers”[…] evolved. Four days after Gowers opened his blog up for discussion, Terence Tao used his blog to start a discussion aimed at understanding the behaviour of for small . This discussion rapidly gained momentum, and […]