Below the fold is a version of my talk “Recent progress on the Kakeya conjecture” that I gave at the Fefferman conference.

One of my favourite problems in mathematics is the Kakeya family of conjectures. There are many versions of these conjectures, but one of the simplest to state is the following:

Let {E \subset {\mathbb R}^n} be a compact subset of {{\mathbb R}^n} which contains a unit line segment in every direction. Then {E} has Hausdorff and Minkowski dimension {n}.

Sets {E \subset {\mathbb R}^n} which contain a unit line segment in every direction are known as Kakeya sets. It was observed by Besicovitch that for {n \geq 2}, Kakeya sets can have arbitrarily small Lebesgue measure; in fact they can have Lebesgue measure zero. This in turn implies that the solution to the Kakeya needle problem (what is the least amount of area in the plane needed to rotate a unit line segment by {360^\circ}?) is that a unit needle can be rotated in arbitrarily small area (see this previous blog post of mine for further discussion).

The conjecture is trivial in one dimension, and also proven in two dimensions (a result of Davies), but remains open in three and higher dimensions. Nevertheless, there are a number of partial results, typically of the form “Kakeya sets in {{\mathbb R}^n} have Hausdorff or Minkowski dimension at least {d}” for various values of {n} and {d} (with the objective being to get {d} all the way up to {n}). One can also phrase such results in a largely equivalent discrete fashion, as follows. Let {0 < \delta < 1} be a small number, and let {T_1,\ldots,T_N} be a collection of {1 \times \delta} tubes which are oriented in a {\delta}-separated set of directions (thus if {T_i}, {T_j} are oriented in the directions {\omega_i}, {\omega_j} respectively for some {1 \leq i < j \leq N}, then {\omega_i} and {\omega_j} make an angle of at least {\delta} with each other). The Kakeya conjecture is then essentially equivalent to the assertion that

\displaystyle |\bigcup_{i=1}^N T_i| \gg_\epsilon \delta^\epsilon

for all {\epsilon > 0}, where {|E|} denotes the volume of the set {E}, and {A \gg_\epsilon B} denotes the estimate {A \geq c_\epsilon B} for some {c_\epsilon > 0} depending only on {\epsilon}. Similarly, a partial result of the form

\displaystyle |\bigcup_{i=1}^N T_i| \gg_\epsilon \delta^{n-d+\epsilon} \ \ \ \ \ (1)


for all {\epsilon > 0} and some {0 \leq d \leq n} would imply (and is basically equivalent to) the assertion that Kakeya sets have (lower) Minkowski dimension at least {d}.

There is also a somewhat stronger Kakeya maximal function conjecture which is also of interest; with the same hypotheses as above, the conjecture asserts that

\displaystyle \| \sum_{i=1}^N 1_{T_i} \|_{L^{d/(d-1)}({\mathbb R}^n)} \ll_\epsilon (\frac{1}{\delta})^{\frac{n}{d}-1+\epsilon} \ \ \ \ \ (2)


for all {1 \leq d \leq n} and {\epsilon > 0}. This conjecture is trivial for {d=1}, but the difficulty increases as {d} approaches {n}; for any fixed {d}, the estimate (2) easily implies (1), and hence that Kakeya sets in {{\mathbb R}^n} have Minkowski dimension at least {d}; it also can be used to imply that such sets have Hausdorff dimension at least {d} as well. (The terminology “Kakeya maximal function conjecture” comes from the fact that (2) is dual to a certain {L^p} type bound on the Kakeya maximal function

\displaystyle f^*_\delta(\omega) := \sup_{T // \omega} \frac{1}{|T|} \int_T |f|,

a variant of the Hardy-Littlewood maximal function which averages over {1 \times \delta} tubes oriented in a direction {\omega}, rather than on balls, but we will not discuss the maximal function further here.)

The Kakeya conjecture is known to have applications to other fields of mathematics, for instance to Fourier analysis (as observed by Fefferman), to wave equations (as observed by Wolff), to analytic number theory (as observed by Bourgain), to cryptography (as observed also by Bourgain) and to random number generation in computer science (as observed by Dvir and Wigderson). However, the focus of this talk is not on the fields of mathematics impacted by the Kakeya conjecture, but rather on the fields of mathematics that are used to make progress on this conjecture, as there is a striking diversity of mathematical techniques which have been usefully applied to produce a number of non-trivial partial results on the problem. In particular, I wish to discuss how the following areas of mathematics have been used to attack the Kakeya problem:

  • Incidence geometry;
  • Additive combinatorics;
  • Multiscale analysis;
  • Heat flows;
  • Algebraic geometry;
  • Algebraic topology.

— 1. Incidence geometry —

The earliest positive results on the Kakeya problem, when interpreted from a modern perspective, were based primarily on exploiting elementary quantitative results in incidence geometry – the study of how points, lines, planes, and the like (or more precisely, {\delta}-thickened versions of such concepts, such as balls, tubes, and slabs) intersect each other. The incidence geometry approach was greatly clarified by the introduction by Wolff in 1995 of the finite field model of the Kakeya conjectures, in which the underlying field {{\mathbb R}} is replaced by a field {{\Bbb F}_q} of some large but finite order {q} (which of course must be a prime, or a power of a prime). The analogue of the Kakeya set bound (1) is then an estimate of the form

\displaystyle \# \bigcup_{\ell \in L} \ell \gg_\epsilon q^{d-\epsilon} \ \ \ \ \ (3)


for all {\epsilon > 0}, where {L} is a family of lines in {{\Bbb F}_q^n} that contains one line in every direction (thus, {L} will have cardinality comparable to {q^{n-1}}), and {\# E} denotes the cardinality of a set {E}. In a similar vein, the analogue of (2) is the assertion that

\displaystyle \| \sum_{\ell \in L} 1_{\ell} \|_{\ell^{d/(d-1)}({\Bbb F}_q^n)} \ll_\epsilon q^{n-1+\epsilon} \ \ \ \ \ (4)


under the same hypothesis on {L}. The fact that (4) implies (3) can be easily seen from the identity

\displaystyle \| \sum_{\ell \in L} 1_{\ell} \|_{\ell^{1}({\Bbb F}_q^n)} = q \# L \sim q^n

and Hölder’s inequality, using the fact that the multiplicity function {\sum_{\ell \in L} 1_{\ell}} is supported in {\bigcup_{\ell \in L} \ell}.
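For the record, this deduction can be written out in one line:

```latex
% Deduction of (3) from (4): the function f := \sum_{\ell \in L} 1_\ell is
% supported on E := \bigcup_{\ell \in L} \ell, so Holder's inequality gives
q^n \sim \| f \|_{\ell^1({\Bbb F}_q^n)}
    \leq \| f \|_{\ell^{d/(d-1)}({\Bbb F}_q^n)} \, (\# E)^{1/d}
    \ll_\epsilon q^{n-1+\epsilon} \, (\# E)^{1/d},
% and rearranging gives \# E \gg_\epsilon q^{d - d\epsilon}, which is (3)
% after adjusting \epsilon.
```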

There is a close analogy between the finite field Kakeya problems and the Euclidean ones; arguments that make progress in one setting can often be adapted to make progress in the other. However, there is no formal correspondence, and there are some arguments that seem to be specific to the finite field setting, and other arguments that are specific to the Euclidean one. Nevertheless, the finite field setting has proven to be an extremely useful toy model to gain intuition and insight into the (more complicated) Euclidean problem.

Very roughly speaking, every incidence geometry fact in classical Euclidean geometry can be converted (at least in principle) via elementary combinatorial arguments into a lower bound on Kakeya sets. Consider for instance the Euclidean axiom that any two lines intersect in at most one point. This can be converted to the bound {d \geq 2} for any dimension {n \geq 2}, for any one of the versions of the Kakeya problem mentioned above; this was established by Davies for the Euclidean dimension problem and by Córdoba for the Euclidean maximal function problem. We sketch a heuristic proof of the bound {d \geq 2} for the finite field set problem (3). Suppose for simplicity that {\# \bigcup_{\ell \in L} \ell} has size exactly {q^d}. There are about {q^{n-1}} lines {\ell} in {L}, each containing {q} points; if we adopt the heuristic that the {q^{n-1} \times q = q^n} incidences described this way are spread out uniformly over the union {\bigcup_{\ell \in L} \ell}, then each point in that union should be incident to about {q^{n-d}} lines. Now, let us count the number of configurations consisting of two distinct lines in {L} intersecting at a single point. On the one hand, since {L} has about {q^{n-1}} lines, and any two lines intersect in at most one point, the number of such configurations is at most {q^{2(n-1)}}; on the other hand, since there are about {q^d} points, and each point is incident to about {q^{n-d}} lines, the number of configurations is at least {q^d \times q^{n-d} \times q^{n-d}}. Comparing the lower and upper bounds gives the desired bound {d \geq 2}. (This argument assumed uniform distribution of the multiplicity function, but the general case can be handled similarly by applying the Cauchy-Schwarz inequality.)
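As a toy illustration of this counting heuristic, one can compute the union of a family of lines in {{\Bbb F}_q^2} pointing in distinct directions. The following sketch (the field size and the particular family of lines, based on the classical parabola construction, are illustrative choices) produces a union that is only about half the plane, but never smaller than {\sim q^2/2}, consistent with the bound {d \geq 2}:

```python
# Toy check of the bound d >= 2 in F_q^2.  We take one line per slope t,
# namely y = t*x - t^2/4 (a classical parabola-based example of a small
# Kakeya-type set; q = 13 is an illustrative choice of odd prime).
q = 13
inv4 = pow(4, -1, q)         # the inverse of 4 in F_q

points = set()
for t in range(q):           # one line for each of the q slopes t
    for x in range(q):
        points.add((x, (t * x - t * t * inv4) % q))

# the union is only about half the plane, but no smaller: |E| >> q^2
print(len(points), q * q)    # 91 points out of 169
assert len(points) == q * (q + 1) // 2
assert 2 * len(points) >= q * q
```

The union has size exactly {q(q+1)/2} here, since a point {(x,y)} is covered precisely when {x^2-y} is a square in {{\Bbb F}_q}.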

There are several other instances of this incidence geometry strategy in action:

  • The axiom that two distinct points determine a line can be used to give the bound {d \geq \frac{n+1}{2}} in any dimension {n}. This was done (implicitly) by Drury for the Euclidean set and maximal problems, and explicitly by Christ, Duoandikoetxea, and Rubio de Francia, with a refined endpoint estimate at {d=\frac{n+1}{2}}.
  • The fact that any three lines in general position determine a regulus (ruled surface), which contains a one-dimensional family of lines incident to all three of the original lines, can be used to give the bound {d \geq \frac{7}{3}} in the three-dimensional case {n=3}; this was achieved by Schlag (and the bound was also obtained by a slightly different argument earlier by Bourgain).
  • The axiom that any two intersecting lines determine a plane, which contains a one-dimensional family of possible directions of lines, was used by Wolff to give the bound {d \geq \frac{n+2}{2}} in all dimensions {n \geq 2}. This argument relied crucially on the fact that the lines pointed in different directions; note that if one allows lines to be parallel, then a plane contains a two-dimensional family of lines rather than a one-dimensional one.
  • In four dimensions {n=4} (and for the finite field set problem), I managed to improve the Wolff bound of {d \geq 3} slightly to {d \geq 3 + \frac{1}{16}}, by exploiting a more complicated fact from incidence geometry, namely that any three reguli in general position determine an algebraic hypersurface, the lines in which only point in a two-dimensional family of directions.

From this sequence, it seems that one needs to exploit increasingly “high-degree” facts from incidence geometry to make deeper progress on the Kakeya conjecture; in particular, one begins to transition from incidence geometry to algebraic geometry. Indeed, one can view the polynomial method of Dvir (discussed later in this post) as the natural continuation of these methods. However, there is some evidence that the incidence geometry approach, by itself, is not sufficient to establish the full conjecture. For instance, if one heuristically inserts an arbitrary incidence geometry fact involving bounded-degree algebraic varieties into the above method, it appears that one cannot obtain a lower bound on {d} of better than {n/2 + O(1)}. Also, there are near-counterexamples to the Kakeya conjecture (such as the Heisenberg group {\{ (z_1,z_2,z_3): \hbox{Im}(z_3) = \hbox{Im}(z_1 \overline{z_2}) \}} when the field {{\Bbb F}_q} admits a non-trivial conjugation {z \mapsto \overline{z}}) which only fail to contradict that conjecture due to the parallel nature of some of the lines, so any argument establishing the conjecture must make essential use of the fact that lines point in different directions.

— 2. Additive combinatorics —

In 1998, Bourgain introduced a somewhat different approach to the Kakeya problem, which relied on elementary facts of arithmetic (and in particular, addition and subtraction), rather than those of geometry, to obtain new bounds on the Kakeya problem. This additive-combinatorics method fared better than the geometric method in higher dimensions, basically because the additive structure of high-dimensional spaces is much the same as that of low-dimensional spaces, even though the geometric structure becomes quite different (and less intuitive).

It is convenient to illustrate the method on the finite field model problem (3), using the technique of “slices” introduced by Bourgain (though it is also possible to carry out the additive combinatorial method without slicing the set); the method can also be adapted to the other variants of the Kakeya problem. As before, we argue heuristically in order to simplify the discussion. Let {E \subset {\Bbb F}_q^n} be a finite field Kakeya set, and suppose it has cardinality about {q^d}. We then write {{\Bbb F}_q^n} as {{\Bbb F}_q \times {\Bbb F}_q^{n-1}} and consider the three “slices” {A, B, C} of {E}, defined as the intersection of {E} with the horizontal hyperplanes {\{0\} \times {\Bbb F}_q^{n-1}}, {\{1\} \times {\Bbb F}_q^{n-1}}, and {\{1/2\} \times {\Bbb F}_q^{n-1}} respectively (let us assume {q} is odd for the sake of discussion, so that {1/2} is well-defined). Then we expect {A, B, C} to all have cardinality about {q^{d-1}}. On the other hand, {E} contains about {q^{n-1}} lines, each of which connects a point {a \in A} to a point {b \in B}, with the midpoint {\frac{a+b}{2}} lying in {C}. Since two points uniquely determine a line, we thus have about {q^{n-1}} many pairs {(a,b)} in {A \times B} whose sums {a+b} are contained in a small set, namely {2 \cdot C}. (This fact alone already leads to the lower bound {d \geq \frac{n+1}{2}} mentioned earlier.) On the other hand, since the lines all point in different directions, the differences {a-b} of all these pairs {(a,b)} are distinct. The additive combinatorial strategy is then to play off the compressed nature of the sums on one hand, and the dispersed nature of the differences on the other. One of the main ingredients here is the use of elementary additive identities, such as

\displaystyle a+b = a'+b' \implies a-b' = a'-b,

which suggests that collisions of sums should imply collisions of differences. This identity, by itself, is insufficient to obtain any new bound on the Kakeya problem, because even if the pairs {(a,b)} and {(a',b')} come from lines in {E}, there is no reason why the pairs {(a,b')} or {(a',b)} should also. But one can then combine the above identity with further identities, such as

\displaystyle a-b = (a-b') - (a'-b') + (a'-b)

which can be used to convert collisions of some differences to collisions of other differences. Implementing this strategy rigorously using a combinatorial tool now known as the Balog-Szemerédi-Gowers lemma, Bourgain improved the bound {d \geq \frac{n+1}{2} = \frac{n-1}{2}+1} to {d \geq \frac{n-1}{2-1/13}+1} (for the Minkowski dimension in both Euclidean and finite field settings), which was superior to the bounds obtained by incidence geometry methods in high dimensions. By inserting more and more such additive identities into this framework (and taking more and more slices), the constants improved somewhat; for instance, in high dimensions, the best result so far (on the Minkowski Euclidean problem) is {d \geq \frac{n-1}{\alpha}+1}, where {\alpha = 1.675\ldots} is the largest root of {\alpha^3-4\alpha+2}, a result of Nets Katz and myself; see this survey article of Nets and myself for further discussion.
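For concreteness, the exponent {\alpha} above can be computed numerically. The following sketch locates the largest root of {\alpha^3-4\alpha+2} by bisection (the bracket {[1.5, 2.0]} is an assumption based on the stated value {\alpha = 1.675\ldots}) and evaluates the resulting bound in a sample dimension:

```python
# Locate the largest root alpha of alpha^3 - 4*alpha + 2 by bisection.
# The bracket [1.5, 2.0] is an assumption based on the stated value
# alpha = 1.675..., and f changes sign there: f(1.5) < 0 < f(2.0).
def f(a):
    return a ** 3 - 4 * a + 2

lo, hi = 1.5, 2.0
for _ in range(60):                  # bisect to high precision
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
alpha = (lo + hi) / 2

print(round(alpha, 4))               # 1.6751
# sample consequence: in dimension n = 7, d >= (n-1)/alpha + 1 ~ 4.58,
# slightly beating Wolff's bound (n+2)/2 = 4.5
print(round((7 - 1) / alpha + 1, 2))
```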

It may well be that the arithmetic approach could eventually settle the entire Kakeya conjecture (this would correspond to lowering the exponent {\alpha} appearing in the above results all the way to {1}); there is a formalisation of this assertion, known as the “arithmetic Kakeya conjecture”, which has some amusing relationships with group theory. However, we have not been able to find additive combinatorial arguments that are so efficient that they do not lose anything in the exponents, and it is not clear whether this conjecture is within reach of known technology. One possible direction to pursue is to move from additive combinatorics (the combinatorics of addition and subtraction) to arithmetic combinatorics (the combinatorics of addition, subtraction, multiplication, and division). For instance, the sum-product phenomenon in finite fields was used by Bourgain, Katz, and myself to improve the bound on three-dimensional Kakeya sets slightly from {d \geq 5/2} to {d \geq 5/2+\epsilon} for some {\epsilon>0}. Nets Katz and I are exploring this direction further, and I hope to report on our results at some point in the future.

— 3. Multiscale analysis —

The finite field model is considered simpler than the Euclidean model for a number of reasons, but one of the main ones is that the finite field model does not have the infinite number of scales that are present in the Euclidean setting. But one can reverse this viewpoint, and instead look for ways to exploit the multitude of scales available in the Euclidean case. One promising strategy in this direction is the induction on scales strategy, introduced by Bourgain for the closely related restriction and Bochner-Riesz problems, and developed further by Wolff. The basic idea here is to deduce a Kakeya estimate at scale {\delta} from the corresponding Kakeya estimate at a coarser scale, such as {\sqrt{\delta}}. We sketch the basic idea as follows. Suppose that the Kakeya conjecture was already established at scale {\sqrt{\delta}}; roughly speaking, this means that any collection of {\sqrt{\delta}\times 1} tubes that point in different directions (i.e. {\sqrt{\delta}}-separated directions) must be essentially disjoint. One can then argue that this should imply the same assertion for {\delta \times 1} tubes. To justify this, we assume that these “thin” {\delta \times 1} tubes are arranged in a sufficiently “self-similar” (or “sticky”) fashion that they can be organised into “fat” {\sqrt{\delta} \times 1} tubes, which themselves point in different (i.e. {\sqrt{\delta}}-separated) directions. By the Kakeya hypothesis, these fat tubes are essentially disjoint. What about the thin tubes inside any given fat tube? Well, if one rescales a fat tube about its axis, dilating its lateral dimensions by {1/\sqrt{\delta}} so that it becomes a {1 \times 1} cylinder, then the thin tubes inside that fat tube essentially expand into {\sqrt{\delta} \times 1} tubes pointing in {\sqrt{\delta}}-separated directions, which by hypothesis are again essentially disjoint.
Since the thin tubes inside each fat tube are essentially disjoint, and the fat tubes are themselves essentially disjoint, the entire collection of thin tubes should be essentially disjoint as well.

This argument is remarkably difficult to make rigorous, in part because it is not obvious at all why the tubes should stick together in a self-similar way, but also because the fat tubes can intersect each other in a coplanar fashion, allowing the thin tubes in each fat tube to line up with an unusually high multiplicity. In a 63-page paper, Nets Katz, Izabella Laba, and I managed to implement this idea, in conjunction with many of the incidence geometry and additive combinatorics arguments discussed in previous sections, but only managed to improve the (Euclidean Minkowski dimension) bound for three-dimensional Kakeya sets from {d \geq 5/2} to {d \geq 5/2+10^{-10}} (with similar small improvements in higher dimensions). Nevertheless, the multiscale method did lead to another method which gave further results, namely the heat flow method which we now discuss.

— 4. Heat flow —

It is possible to eliminate the coplanarity and stickiness issues arising in the multiscale approach by replacing a linear Kakeya problem such as (2) with a multilinear variant, namely

\displaystyle \| \prod_{j=1}^n \sum_{i=1}^N 1_{T^{(j)}_i} \|_{L^{1/(n-1)}({\mathbb R}^n)} \ll_\epsilon \delta^{-\epsilon} \ \ \ \ \ (5)


whenever {T^{(j)}_i} for {1 \leq j \leq n} are families of {\delta \times 1}-tubes with {\delta}-separated directions, pointing close to the {j^{th}} cardinal direction {e_j}. Roughly speaking, the difference between (5) and (2) is that “coplanar” interactions have been abolished from (5) by fiat; the expression inside the norm consists only of products of tubes whose directions are in general position. With these difficulties eliminated, the induction on scales argument comes much closer to working properly. However, the constants degrade too severely: each passage from scale {\sqrt{\delta}} to scale {\delta} squares the losses, leading to a net loss of {\delta^{-C}} for some constant {C}, which is unacceptable. However, it turns out (as shown by Bennett, Carbery, and myself) that one can avoid all such losses by performing a continuous analogue of the induction on scales procedure, in which the tubes are replaced by distorted Gaussian functions, which are then continuously dilated about their major axis by heat flow, effectively increasing their thickness from {\delta} towards {1}. One then shows that (a suitable modification of) the expression in (5) increases along this heat flow, allowing one to deduce (5) at scale {\delta} from (5) at scale {1}. (One still incurs a {\delta^{-\epsilon}} loss because, for technical reasons, one can only obtain a monotonicity formula if one works with {L^{1/(n-1)-\epsilon}} norms rather than {L^{1/(n-1)}} norms, requiring an interpolation to finish the job.)

Another way to view the heat flow argument is to take the (Gaussian-smoothed) tubes {1_{T^{(j)}_i}} and slide them continuously toward the origin, so that at the final stage one obtains a “bush” of (smoothed) tubes through the origin, for which (5) is easy to establish. One can use essentially the same monotonicity computations as before to show that the {L^{1/(n-1)}} norm (or more precisely, a slightly weighted {L^{1/(n-1)-\epsilon}} norm) in (5) increases along this sliding procedure, thus providing a sort of variational proof of (5).

The multilinear Kakeya conjecture (5) implies its usual counterpart (2) in two dimensions by a standard angular rescaling argument, but unfortunately it seems to be strictly weaker than the linear Kakeya conjecture in three and higher dimensions, because it says nothing about the coplanar intersections which seem to be a major feature in the higher-dimensional Kakeya problem. (Indeed, (5) suggests, roughly speaking, that if there is a counterexample to the Kakeya conjecture, it will come from “plany” Kakeya sets, in which the lines that pass through a typical point will essentially all lie on a hyperplane.) Thus far we have been unable to find a monotonicity formula that can handle the coplanar case (and perhaps one should not expect to find one, given that the extremisers for the linear maximal function are likely to resemble Besicovitch sets and thus not be amenable to a simple variational argument). Nevertheless (5) is one of the few Kakeya-type estimates we have in higher dimensions whose exponents are sharp (up to epsilons).

— 5. Algebraic geometry —

A striking breakthrough on the finite field side of the Kakeya problem was achieved recently by Dvir, by introducing a high-degree analogue of the low-degree algebraic geometry used in the incidence geometry approaches (see also this blog post for more discussion). Indeed, one can view Dvir’s argument in the incidence geometry framework, with the two key incidence geometry inputs involving high-degree hypersurfaces. The first is the following (a high-degree analogue of Wolff’s observation that a plane contains only a one-dimensional family of lines; see also my paper with Mockenhaupt, or another paper of mine, for similar lemmas):

Lemma 1 If {S} is a degree {k} hypersurface in {{\Bbb F}_q^d} with {k < q}, then the number of possible directions of lines in {S} is at most {O( k q^{d-2} )}.

Indeed, if one extends the degree {k} hypersurface {S} in {{\Bbb F}_q^d} to a degree {\leq k} hypersurface {S_\infty} at the hyperplane at infinity, then every line in {S} will extend to a point in {S_\infty}, representing the direction of that line. (Here we use the fact that a polynomial of degree {k < q} cannot vanish at every point of a line unless it is identically zero (in the algebraic sense) on that line, and in particular on the extension of that line to the hyperplane at infinity.) So the number of possible directions is bounded by the cardinality of {S_\infty}, which is {O(k q^{d-2})} by the Schwartz-Zippel lemma.
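One can verify Lemma 1 by brute force in a small planar example ({d=2}, where the bound becomes {O(k)} directions). The following sketch takes a degree-{3} curve that is a product of three linear forms (the prime and the particular factors are illustrative choices) and confirms that the lines contained in it point in only {3} of the {p+1} possible directions:

```python
# Brute-force check of Lemma 1 in F_p^2: a degree-k curve contains lines
# in at most O(k) directions.  Here the curve is a product of k = 3
# linear forms a*x + b*y + c (illustrative choices, as is p = 7).
p, kdeg = 7, 3
factors = [(1, 0, 0), (0, 1, 1), (1, 1, 2)]

def P(x, y):
    out = 1
    for a, b, c in factors:
        out = out * (a * x + b * y + c) % p
    return out

curve = {(x, y) for x in range(p) for y in range(p) if P(x, y) == 0}

directions = set()
for m in range(p):                  # non-vertical lines y = m*x + c
    for c in range(p):
        if all((x, (m * x + c) % p) in curve for x in range(p)):
            directions.add(m)
for c in range(p):                  # vertical lines x = c
    if all((c, y) in curve for y in range(p)):
        directions.add("inf")

print(len(directions), "directions for a degree", kdeg, "curve")
assert len(directions) <= kdeg
```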

The second ingredient can be viewed as a high-degree analogue of such facts as “two points determine a line” or “three points determine a plane”, and is as follows:

Lemma 2 If {E} is a set of points in {{\Bbb F}_q^d}, then {E} is contained inside a hypersurface of degree {O( |E|^{1/d} )}.

Indeed, to prove this lemma one needs to find a non-trivial polynomial {P} of degree at most {D}, for some {D = O(|E|^{1/d})}, which vanishes at every point in {E}. But the vector space of polynomials of degree at most {D} has dimension about {D^d}, and the requirement that {P} vanish at every point in {E} imposes {|E|} linear constraints, so by linear algebra one can find such a polynomial as soon as {D^d} is much larger than {|E|}, and the claim follows.
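The linear algebra step is entirely constructive. Here is a minimal sketch in the plane ({d=2}) over a small prime field (the prime, the point set, and the degree are illustrative choices): with {D=2} there are {6} monomials but only {5} constraints, so a nonzero polynomial vanishing on the point set must exist, and Gaussian elimination finds one.

```python
# Lemma 2 in F_p^2: find a nonzero polynomial of degree <= D vanishing on
# a point set E, via exact Gaussian elimination mod p.  The choices of
# p, E, and D are illustrative; what matters is #monomials > |E|.
p = 11
E = [(1, 2), (3, 5), (4, 4), (7, 1), (9, 8)]      # |E| = 5 points
D = 2                                             # 6 monomials x^i y^j
monomials = [(i, j) for i in range(D + 1) for j in range(D + 1 - i)]
ncols = len(monomials)

# one linear constraint per point: the polynomial must vanish there
rows = [[pow(x, i, p) * pow(y, j, p) % p for (i, j) in monomials]
        for (x, y) in E]

pivots = {}          # column -> pivot row, kept in reduced row echelon form
for row in rows:
    row = row[:]
    for c, pr in pivots.items():                  # reduce by known pivots
        f = row[c]
        row = [(a - f * b) % p for a, b in zip(row, pr)]
    for c in range(ncols):
        if row[c]:                                # new pivot found
            inv = pow(row[c], -1, p)
            row = [a * inv % p for a in row]
            for c0, pr in pivots.items():         # keep RREF: clear column c
                f = pr[c]
                pivots[c0] = [(a - f * b) % p for a, b in zip(pr, row)]
            pivots[c] = row
            break

# build a null vector: set one free variable to 1, back-substitute pivots
free = next(c for c in range(ncols) if c not in pivots)
coeff = [0] * ncols
coeff[free] = 1
for c, pr in pivots.items():
    coeff[c] = (-pr[free]) % p

def P(x, y):
    return sum(cf * pow(x, i, p) * pow(y, j, p)
               for cf, (i, j) in zip(coeff, monomials)) % p

assert any(coeff)                                 # nonzero polynomial
print("vanishes on E:", all(P(x, y) == 0 for (x, y) in E))
```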

Putting the two lemmas together, we see that any Kakeya set {E} in {{\Bbb F}_q^n} (which, by definition, contains lines in {\gg q^{n-1}} different directions) cannot be contained in any hypersurface of degree much less than {q}, and thus must have cardinality {\gg q^n}. (In fact, the optimal size of a Kakeya set is now known to be within a factor of {2} of {2^{-n} q^n}, a result of Dvir, Kopparty, Saraf, and Sudan.) The argument can also be adapted to establish the maximal function estimate in finite field vector spaces, and also generalises to other families of curves in algebraic varieties, as was done recently by Ellenberg, Oberlin, and myself (see this blog post for further discussion). There are even some tentative indications that these algebraic geometry methods will adapt well to more abstract algebraic settings, such as that of schemes.

— 6. Algebraic topology —

It was thought that the high-degree algebraic geometry methods of Dvir and others were strongly dependent on the discrete nature of the finite field setting, and would not extend to continuous settings such as that of Euclidean space. Very recently, however, Larry Guth managed to partially extend Dvir’s results to this case. One key observation is to replace Lemma 2 by a topological counterpart, namely

Lemma 3 (Polynomial Ham Sandwich theorem) If {B_1,\ldots,B_N} are a collection of bounded open sets in {{\mathbb R}^d}, then there exists a hypersurface {\{ x \in {\mathbb R}^d: P(x)=0\}} of degree {O(N^{1/d})} which bisects each of these open sets (thus the sets {\{ x \in B_i: P(x) > 0 \}} and {\{ x \in B_i: P(x) < 0 \}} have equal volume).

Note that if one shrinks these bounded open sets {B_i} to a tiny neighbourhood of points {x_i}, then in the limit one recovers the analogue of Lemma 2 over the reals {{\mathbb R}} (note that the proof of Lemma 2 is valid over any field). This allows one to adapt Dvir’s method to continuous settings, basically by replacing points by balls. (There is another ingredient needed, which is an isoperimetric-type inequality that asserts that if a hypersurface bisects a ball, then the intersection of that hypersurface with the ball has a large area; see this blog post for further discussion.)

Unfortunately, these arguments, like the heat flow arguments, are so far restricted to the non-coplanar case, as otherwise certain Jacobian factors begin to enter in an unfavorable fashion in the estimates, so the Euclidean linear Kakeya problem has so far not been impacted by these results. However, the arguments of Guth are able to recover the multilinear estimate (5), and in fact can also remove the {\delta^{-\epsilon}} loss, thus obtaining a very sharp estimate. Even more recently, Guth and Katz have managed to solve a discrete analogue of the Euclidean Kakeya problem in which coplanarity has been eliminated by fiat, namely the joints problem of Sharir. Define a joint in {{\mathbb R}^d} to be a configuration of {d} concurrent lines, which do not all lie in a hyperplane. Sharir conjectured that a collection of {N} lines in {{\mathbb R}^d} could form at most {O( N^{d/(d-1)} )} joints (this is optimal, as can be seen by looking at lines parallel to the coordinate axes passing through a discrete cube of sidelength {O(N^{1/(d-1)})} and unit spacing); this conjecture has now been verified by Guth and Katz, in the three-dimensional case at least. (The estimate (5) was already observed to be closely related to the joints problem by Bennett, Carbery, and myself, although it only gives a strong result in the case when one has a quantitative lower bound on the non-coplanarity of the joints.)
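The grid example showing the sharpness of Sharir's conjectured bound is easy to check by direct computation. The following sketch (with an illustrative grid size {k=4} in {{\mathbb R}^3}) builds the {3k^2} axis-parallel lines through a {k \times k \times k} grid and counts the joints, obtaining exactly {k^3 = (N/3)^{3/2}}, of the same order as {N^{d/(d-1)} = N^{3/2}}:

```python
# Sharpness example for the joints problem in R^3: the 3*k^2 axis-parallel
# lines through a k x k x k grid (k = 4 is an illustrative choice) form
# exactly k^3 = (N/3)^{3/2} joints, matching the O(N^{3/2}) bound.
from itertools import product

k = 4
# record each axis-parallel line by its direction and two fixed coordinates
lines = [("x", b, c) for b, c in product(range(k), repeat=2)] \
      + [("y", a, c) for a, c in product(range(k), repeat=2)] \
      + [("z", a, b) for a, b in product(range(k), repeat=2)]
N = len(lines)                             # 3*k^2 = 48 lines
lineset = set(lines)

def is_joint(a, b, c):
    # (a,b,c) is a joint if lines in all three coordinate directions pass
    # through it: such a triple is concurrent and does not lie in a plane
    return (("x", b, c) in lineset and ("y", a, c) in lineset
            and ("z", a, b) in lineset)

joints = [pt for pt in product(range(k), repeat=3) if is_joint(*pt)]

print(N, len(joints))                      # 48 lines, 64 joints
assert len(joints) == k ** 3
assert len(joints) == round((N / 3) ** 1.5)
```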

In summary, there has been an influx of techniques from many different areas of mathematics that have each contributed significant progress on the Kakeya conjecture. One may still need a couple more key ideas before the problem is finally solved in full, and such ideas may come from a quite unexpected source, but with the current rate of progress I am now optimistic that we will continue to see significant advances in this area in the near future.