
A family {A_1,\dots,A_r} of sets for some {r \geq 1} is a sunflower if there is a core set {A_0} contained in each of the {A_i} such that the petal sets {A_i \backslash A_0, i=1,\dots,r} are disjoint. If {k,r \geq 1}, let {\mathrm{Sun}(k,r)} denote the smallest natural number with the property that any family of {\mathrm{Sun}(k,r)} distinct sets of cardinality at most {k} contains {r} distinct sets {A_1,\dots,A_r} that form a sunflower. The celebrated Erdös-Rado theorem asserts that {\mathrm{Sun}(k,r)} is finite; in fact Erdös and Rado gave the bounds

\displaystyle  (r-1)^k \leq \mathrm{Sun}(k,r) \leq (r-1)^k k! + 1. \ \ \ \ \ (1)
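
To make the definitions concrete, here is a small brute-force Python sketch (the helper names is_sunflower and has_sunflower, and the tiny parameters, are illustrative choices of mine rather than anything from the post) which tests the sunflower property directly and checks that a standard construction, selecting one element from each of {k} disjoint blocks of {r-1} elements, produces {(r-1)^k} distinct sets of cardinality {k} with no sunflower of {r} petals, consistent with the lower bound in (1).

from itertools import combinations, product

def is_sunflower(family):
    # For two or more sets, a sunflower core (if one exists) must equal the
    # common intersection, so it suffices to test that single candidate core.
    core = frozenset.intersection(*family)
    petals = [A - core for A in family]
    return all(p.isdisjoint(q) for p, q in combinations(petals, 2))

def has_sunflower(sets, r):
    # Brute force over all r-element subfamilies of distinct sets.
    return any(is_sunflower(fam) for fam in combinations(sets, r))

k, r = 2, 3
blocks = [[(b, j) for j in range(r - 1)] for b in range(k)]
family = [frozenset(choice) for choice in product(*blocks)]
assert len(family) == (r - 1) ** k
assert not has_sunflower(family, r)
print(f"{len(family)} sets of size {k} with no {r}-petal sunflower")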

The sunflower conjecture asserts in fact that the upper bound can be improved to {\mathrm{Sun}(k,r) \leq O(1)^k r^k}. This remains open at present despite much effort (including a Polymath project); after a long series of improvements to the upper bound, the best general bound known currently is

\displaystyle  \mathrm{Sun}(k,r) \leq O( r \log(kr) )^k \ \ \ \ \ (2)

for all {k,r \geq 2}, established in 2019 by Rao (building upon a breakthrough obtained a month previously by Alweiss, Lovett, Wu, and Zhang). Here we exclude the easy cases {k=1} and {r=1} in order to make the logarithmic factor {\log(kr)} a little cleaner.

Rao’s argument used the Shannon noiseless coding theorem. It turns out that the argument can be arranged in the very slightly different language of Shannon entropy, and I would like to present it here. The argument proceeds by locating the core and petals of the sunflower separately (this strategy is also followed in Alweiss-Lovett-Wu-Zhang). In both cases the following definition will be key. In this post all random variables, such as random sets, will be understood to be discrete random variables taking values in a finite range. We always use boldface symbols to denote random variables, and non-boldface for deterministic quantities.

Definition 1 (Spread set) Let {R > 1}. A random set {{\bf A}} is said to be {R}-spread if one has

\displaystyle  {\mathbb P}( S \subset {\bf A}) \leq R^{-|S|}

for all sets {S}. A family {(A_i)_{i \in I}} of sets is said to be {R}-spread if {I} is non-empty and the random variable {A_{\bf i}} is {R}-spread, where {{\bf i}} is drawn uniformly from {I}.
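
To illustrate Definition 1 for a finite family sampled uniformly, here is a small Python sketch (the helper name spread_parameter is a hypothetical name introduced here just for illustration) which computes the largest {R} for which a given family is {R}-spread, namely the minimum over non-empty {S} with {{\mathbb P}(S \subset {\bf A}) > 0} of {{\mathbb P}(S \subset {\bf A})^{-1/|S|}}.

from itertools import combinations

def spread_parameter(family):
    # Minimum over non-empty S contained in some member of
    # (fraction of members containing S) ** (-1/|S|); the family is then
    # R-spread precisely for 1 < R <= this value.
    family = [frozenset(A) for A in family]
    universe = frozenset().union(*family)
    best = float("inf")
    for size in range(1, max(len(A) for A in family) + 1):
        for S in map(frozenset, combinations(universe, size)):
            frac = sum(S <= A for A in family) / len(family)
            if frac > 0:
                best = min(best, frac ** (-1.0 / size))
    return best

# All 2-element subsets of a 6-element set: each point lies in 1/3 of the
# members, so the family is 3-spread but not R-spread for any R > 3.
example = list(combinations(range(6), 2))
print(round(spread_parameter(example), 6))  # 3.0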

The core can then be selected greedily in such a way that the remainder of a family becomes spread:

Lemma 2 (Locating the core) Let {(A_i)_{i \in I}} be a family of subsets of a finite set {X}, each of cardinality at most {k}, and let {R > 1}. Then there exists a “core” set {S_0} of cardinality at most {k} such that the set

\displaystyle  J := \{ i \in I: S_0 \subset A_i \} \ \ \ \ \ (3)

has cardinality at least {R^{-|S_0|} |I|}, and such that the family {(A_j \backslash S_0)_{j \in J}} is {R}-spread. Furthermore, if {|I| > R^k} and the {A_i} are distinct, then {|S_0| < k}.

Proof: We may assume {I} is non-empty, as the claim is trivial otherwise. For any {S \subset X}, define the quantity

\displaystyle  Q(S) := R^{|S|} |\{ i \in I: S \subset A_i\}|,

and let {S_0} be a subset of {X} that maximizes {Q(S_0)}. Since {Q(\emptyset) = |I| > 0} and {Q(S)=0} when {|S| > k}, we see that {0 \leq |S_0| \leq k}. If the {A_i} are distinct and {|I| > R^k}, then we also have {Q(S) \leq R^k < |I| = Q(\emptyset)} when {|S|=k}, thus in this case we have {|S_0| < k}.

Let {J} be the set (3). Since {Q(S_0) \geq Q(\emptyset)>0}, {J} is non-empty. It remains to check that the family {(A_j \backslash S_0)_{j \in J}} is {R}-spread. Let {S \subset X}; we may assume that {S} is disjoint from {S_0}, since otherwise the probability below vanishes and there is nothing to prove. For {{\bf j}} drawn uniformly at random from {J} one has

\displaystyle  {\mathbb P}( S \subset A_{\bf j} \backslash S_0 ) = \frac{|\{ i \in I: S_0 \cup S \subset A_i\}|}{|\{ i \in I: S_0 \subset A_i\}|} = R^{|S_0|-|S_0 \cup S|} \frac{Q(S_0 \cup S)}{Q(S_0)}.

Since {Q(S_0 \cup S) \leq Q(S_0)} and {|S_0|-|S_0 \cup S| \geq - |S|}, we obtain the claim. \Box
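
For concreteness, here is a deliberately brute-force Python sketch of the greedy core selection from this proof (the function name locate_core and the toy family are illustrative choices, not from the post); it maximizes {Q(S)} by exhaustive search over all small subsets of the ground set.

from itertools import combinations

def locate_core(family, R):
    # Exhaustively maximize Q(S) = R^|S| * #{i : S subset of A_i} over all
    # S of size at most k, as in the proof of Lemma 2.
    family = [frozenset(A) for A in family]
    universe = sorted(frozenset().union(*family))
    k = max(len(A) for A in family)
    def Q(S):
        return R ** len(S) * sum(S <= A for A in family)
    candidates = [frozenset(S) for size in range(k + 1)
                  for S in combinations(universe, size)]
    S0 = max(candidates, key=Q)
    J = [i for i, A in enumerate(family) if S0 <= A]
    return S0, J  # the family (A_j \ S0) for j in J is then R-spread

core, J = locate_core([{1, 2}, {1, 3}, {1, 4}, {2, 3}], R=1.5)
print(sorted(core), J)  # [1] [0, 1, 2]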

In view of the above lemma, the bound (2) will then follow from

Proposition 3 (Locating the petals) Let {r, k \geq 2} be natural numbers, and suppose that {R \geq C r \log(kr)} for a sufficiently large constant {C}. Let {(A_i)_{i \in I}} be a finite family of subsets of a finite set {X}, each of cardinality at most {k}, which is {R}-spread. Then there exist {i_1,\dots,i_r \in I} such that the sets {A_{i_1},\dots,A_{i_r}} are disjoint.

Indeed, to prove (2), we assume that {(A_i)_{i \in I}} is a family of more than {R^k} distinct sets, each of cardinality at most {k}, for some {R \geq Cr \log(kr)}; by discarding redundant elements and sets we may assume that {I} is finite and that all the {A_i} are contained in a common finite set {X}. Apply Lemma 2 to find a set {S_0 \subset X} of cardinality {|S_0| < k} such that the family {(A_j \backslash S_0)_{j \in J}} is {R}-spread. By Proposition 3 we can find {j_1,\dots,j_r \in J} such that the petals {A_{j_1} \backslash S_0,\dots,A_{j_r} \backslash S_0} are disjoint; as the {A_j} are distinct sets that each contain {S_0}, at most one of these petals can be empty, and so the indices {j_1,\dots,j_r} are distinct. Hence {A_{j_1},\dots,A_{j_r}} form a sunflower with core {S_0}, as required.

Remark 4 Proposition 3 is easy to prove if we strengthen the condition on {R} to {R > k(r-1)}. In this case, we have {{\mathbb P}_{i \in I}( x \in A_i) < \frac{1}{k(r-1)}} for every {x \in X}, hence by the union bound we see that for any {i_1,\dots,i_j \in I} with {j \leq r-1} there exists {i_{j+1} \in I} such that {A_{i_{j+1}}} is disjoint from the set {A_{i_1} \cup \dots \cup A_{i_j}}, which has cardinality at most {k(r-1)}. Iterating this, we obtain the conclusion of Proposition 3 in this case. This recovers a bound of the form {\mathrm{Sun}(k,r) \leq (k(r-1))^k+1}, and by pursuing this idea a little further one can recover the original upper bound (1) of Erdös and Rado.
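
In more detail, the union bound step in Remark 4 reads

\displaystyle  {\mathbb P}_{i \in I}\big( A_i \cap (A_{i_1} \cup \dots \cup A_{i_j}) \neq \emptyset \big) \leq \sum_{x \in A_{i_1} \cup \dots \cup A_{i_j}} {\mathbb P}_{i \in I}( x \in A_i ) < k(r-1) \cdot \frac{1}{k(r-1)} = 1,

so that a uniformly chosen {A_{\bf i}} avoids {A_{i_1} \cup \dots \cup A_{i_j}} with positive probability, and in particular some {i_{j+1} \in I} will do.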

It remains to prove Proposition 3. In fact we can locate the petals one at a time, placing each petal inside a random set.

Proposition 5 (Locating a single petal) Let the notation and hypotheses be as in Proposition 3. Let {{\bf V}} be a random subset of {X}, such that each {x \in X} lies in {{\bf V}} with an independent probability of {1/r}. Then with probability greater than {1-1/r}, {{\bf V}} contains one of the {A_i}.

To see that Proposition 5 implies Proposition 3, we randomly partition {X} into {{\bf V}_1 \cup \dots \cup {\bf V}_r} by placing each {x \in X} into one of the {{\bf V}_j}, {j=1,\dots,r}, chosen uniformly and independently at random; in particular, each {x \in X} lies in any given {{\bf V}_j} with an independent probability of {1/r}, so Proposition 5 applies to each {{\bf V}_j}. By the union bound, we see that with positive probability, it is simultaneously true for all {j=1,\dots,r} that {{\bf V}_j} contains one of the {A_i}. Selecting one such {A_i} for each {{\bf V}_j}, we obtain the required disjoint petals (as they lie in the disjoint sets {{\bf V}_1,\dots,{\bf V}_r}).
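
This random partition step is easy to simulate on small examples; the following Python sketch (the name find_disjoint_members and the retry loop are illustrative conveniences, not part of the proof) colours the ground set with {r} colours uniformly at random and looks for a member of the family inside each colour class, so that any members found are automatically disjoint.

import random

def find_disjoint_members(family, X, r, trials=1000):
    family = [frozenset(A) for A in family]
    for _ in range(trials):
        # place each point of X into one of r colour classes uniformly
        colour = {x: random.randrange(r) for x in X}
        parts = [frozenset(x for x in X if colour[x] == j) for j in range(r)]
        picks = []
        for V in parts:
            inside = next((A for A in family if A <= V), None)
            if inside is None:
                break  # this colouring failed; try a fresh one
            picks.append(inside)
        if len(picks) == r:
            return picks  # pairwise disjoint, as the colour classes are
    return None

# Example: the singletons of a 10-element set form a 10-spread family.
print(find_disjoint_members([{i} for i in range(10)], range(10), 3))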

We will prove Proposition 5 by gradually increasing the density of the random set and arranging the sets {A_i} to get quickly absorbed by this random set. The key iteration step is

Proposition 6 (Refinement inequality) Let {R > 1} and {0 < \delta < 1}. Let {{\bf A}} be a random subset of a finite set {X} which is {R}-spread, and let {{\bf V}} be a random subset of {X} independent of {{\bf A}}, such that each {x \in X} lies in {{\bf V}} with an independent probability of {\delta}. Then there exists another random subset {{\bf A}'} of {X} with the same distribution as {{\bf A}}, such that {{\bf A}' \backslash {\bf V} \subset {\bf A}} and

\displaystyle  {\mathbb E} |{\bf A}' \backslash {\bf V}| \leq \frac{5}{\log(R\delta)} {\mathbb E} |{\bf A}|.

Note that a direct application of the first moment method gives only the bound

\displaystyle  {\mathbb E} |{\bf A} \backslash {\bf V}| \leq (1-\delta) {\mathbb E} |{\bf A}|,

but the point is that by switching from {{\bf A}} to an equivalent {{\bf A}'} we can replace the {1-\delta} factor by a quantity significantly smaller than {1}.
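
Here the first moment bound is simply linearity of expectation combined with the independence of {{\bf A}} and {{\bf V}}:

\displaystyle  {\mathbb E} |{\bf A} \backslash {\bf V}| = \sum_{x \in X} {\mathbb P}( x \in {\bf A} ) {\mathbb P}( x \notin {\bf V} ) = (1-\delta) {\mathbb E} |{\bf A}|.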

One can iterate the above proposition, repeatedly replacing {{\bf A}, X} with {{\bf A}' \backslash {\bf V}, X \backslash {\bf V}} (noting that the replacement random set remains {R}-spread), to conclude

Corollary 7 (Iterated refinement inequality) Let {R > 1}, {0 < \delta < 1}, and {m \geq 1}. Let {{\bf A}} be a random subset of a finite set {X} which is {R}-spread, and let {{\bf V}} be a random subset of {X} independent of {{\bf A}}, such that each {x \in X} lies in {{\bf V}} with an independent probability of {1-(1-\delta)^m}. Then there exists another random subset {{\bf A}'} of {X} with the same distribution as {{\bf A}}, such that

\displaystyle  {\mathbb E} |{\bf A}' \backslash {\bf V}| \leq (\frac{5}{\log(R\delta)})^m {\mathbb E} |{\bf A}|.
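
Schematically, if {{\bf V}_1,\dots,{\bf V}_m} denote the density-{\delta} random sets used in the successive applications of Proposition 6 (so that their union covers each point of {X} with probability {1-(1-\delta)^m}), and {{\bf A} = {\bf A}^{(0)}, {\bf A}^{(1)}, \dots, {\bf A}^{(m)}} denote the successive copies produced, then the iteration gives

\displaystyle  {\mathbb E} |{\bf A}^{(m)} \backslash ({\bf V}_1 \cup \dots \cup {\bf V}_m)| \leq \frac{5}{\log(R\delta)} {\mathbb E} |{\bf A}^{(m-1)} \backslash ({\bf V}_1 \cup \dots \cup {\bf V}_{m-1})| \leq \dots \leq (\frac{5}{\log(R\delta)})^m {\mathbb E} |{\bf A}|,

glossing over some conditioning issues that a fully rigorous version of this iteration has to address.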

Now we can prove Proposition 5. Let {m \geq 1} be a natural number to be chosen shortly. Applying Corollary 7 with {{\bf A}} drawn uniformly at random from the {(A_i)_{i \in I}}, and setting {1-(1-\delta)^m = 1/r}, or equivalently {\delta = 1 - (1 - 1/r)^{1/m}}, we have (noting that {{\mathbb E} |{\bf A}| \leq k})

\displaystyle  {\mathbb E} |{\bf A}' \backslash {\bf V}| \leq (\frac{5}{\log(R\delta)})^m k.

In particular, if we set {m = \lceil \log kr \rceil}, so that {\delta \sim \frac{1}{r \log kr}}, then by choice of {R} we have {\frac{5}{\log(R\delta)} < \frac{1}{2}}, hence

\displaystyle  {\mathbb E} |{\bf A}' \backslash {\bf V}| < \frac{1}{r}.

In particular, with probability greater than {1 - \frac{1}{r}}, there must exist {A_i} such that {|A_i \backslash {\bf V}| = 0}, giving the proposition.
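
Spelled out, this last step is Markov's inequality applied to the non-negative integer-valued random variable {|{\bf A}' \backslash {\bf V}|}:

\displaystyle  {\mathbb P}( {\bf A}' \not \subset {\bf V} ) = {\mathbb P}( |{\bf A}' \backslash {\bf V}| \geq 1 ) \leq {\mathbb E} |{\bf A}' \backslash {\bf V}| < \frac{1}{r},

and on the complementary event the value of {{\bf A}'}, which is one of the {A_i}, is contained in {{\bf V}}.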

It remains to establish Proposition 6. This is the difficult step, and requires a clever way to find the variant {{\bf A}'} of {{\bf A}} that has better containment properties in {{\bf V}} than {{\bf A}} does. The main trick is to make a conditional copy {({\bf A}', {\bf V}')} of {({\bf A}, {\bf V})} that is conditionally independent of {({\bf A}, {\bf V})} subject to the constraint {{\bf A} \cup {\bf V} = {\bf A}' \cup {\bf V}'}. The point here is that this constraint implies the inclusions

\displaystyle  {\bf A}' \backslash {\bf V} \subset {\bf A} \cap {\bf A}' \subset {\bf A} \ \ \ \ \ (4)

and

\displaystyle  {\bf A}' \backslash {\bf A} \subset {\bf V}. \ \ \ \ \ (5)
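
For instance, to verify (4): if {x \in {\bf A}' \backslash {\bf V}}, then {x \in {\bf A}' \cup {\bf V}' = {\bf A} \cup {\bf V}} while {x \notin {\bf V}}, hence {x \in {\bf A}}; since also {x \in {\bf A}'}, we conclude {x \in {\bf A} \cap {\bf A}'}, and of course {{\bf A} \cap {\bf A}' \subset {\bf A}}. The inclusion (5) follows similarly: any {x \in {\bf A}' \backslash {\bf A}} lies in {{\bf A}' \cup {\bf V}' = {\bf A} \cup {\bf V}} but not in {{\bf A}}, and hence lies in {{\bf V}}.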

Because of the {R}-spread hypothesis, it is hard for {{\bf A}} to contain any fixed large set. If we could apply this observation in the contrapositive to {{\bf A} \cap {\bf A}'}, we could hope to get a good upper bound on the size of {{\bf A} \cap {\bf A}'} and hence on {{\bf A}' \backslash {\bf V}}, thanks to (4). One can hope to improve such an upper bound further by also employing (5), since it is also hard for the random set {{\bf V}} to contain a fixed large set. There are however difficulties with implementing this approach due to the fact that the random sets {{\bf A} \cap {\bf A}', {\bf A}' \backslash {\bf A}} are coupled with {{\bf A}, {\bf V}} in a moderately complicated fashion. In Rao’s argument a somewhat complicated encoding scheme was created to give information-theoretic control on these random variables; below the fold we accomplish a similar effect by using Shannon entropy inequalities in place of explicit encoding. A certain amount of information-theoretic sleight of hand is required to decouple certain random variables to the extent that the Shannon inequalities can be effectively applied. The argument bears some resemblance to the “entropy compression method” discussed in this previous blog post; there may be a way to more explicitly express the argument below in terms of that method. (There is also some kinship with the method of dependent random choice, which is used for instance to establish the Balog-Szemerédi-Gowers lemma, and was also translated into information theoretic language in these unpublished notes of Van Vu and myself.)
