
There are a number of ways to construct the real numbers {{\bf R}}, for instance

  • as the metric completion of {{\bf Q}} (thus, {{\bf R}} is defined as the set of Cauchy sequences of rationals, modulo Cauchy equivalence);
  • as the space of Dedekind cuts on the rationals {{\bf Q}};
  • as the space of quasimorphisms {\phi: {\bf Z} \rightarrow {\bf Z}} on the integers, quotiented by bounded functions. (I believe this construction first appears in this paper of Street, who credits the idea to Schanuel, though the germ of this construction arguably goes all the way back to Eudoxus.)

There is also a fourth family of constructions that proceeds via nonstandard analysis, as a special case of what is known as the nonstandard hull construction. (Here I will assume some basic familiarity with nonstandard analysis and ultraproducts, as covered for instance in this previous blog post.) Given an unbounded nonstandard natural number {N \in {}^* {\bf N} \backslash {\bf N}}, one can define two external additive subgroups of the nonstandard integers {{}^* {\bf Z}}:

  • The group {O(N) := \{ n \in {}^* {\bf Z}: |n| \leq CN \hbox{ for some } C \in {\bf N} \}} of all nonstandard integers of magnitude less than or comparable to {N}; and
  • The group {o(N) := \{ n \in {}^* {\bf Z}: |n| \leq C^{-1} N \hbox{ for all } C \in {\bf N} \}} of nonstandard integers of magnitude infinitesimally smaller than {N}.

The group {o(N)} is a subgroup of {O(N)}, so we may form the quotient group {O(N)/o(N)}. This space is isomorphic to the reals {{\bf R}}, and can in fact be used to construct the reals:

Proposition 1 For any coset {n + o(N)} of {O(N)/o(N)}, there is a unique real number {\hbox{st} \frac{n}{N}} with the property that {\frac{n}{N} = \hbox{st} \frac{n}{N} + o(1)}. The map {n + o(N) \mapsto \hbox{st} \frac{n}{N}} is then an isomorphism between the additive groups {O(N)/o(N)} and {{\bf R}}.

Proof: Uniqueness is clear. For existence, observe that the set {\{ x \in {\bf R}: Nx \leq n + o(N) \}} is a Dedekind cut, and its supremum can be verified to have the required properties for {\hbox{st} \frac{n}{N}}. \Box

In a similar vein, we can view the unit interval {[0,1]} in the reals as the quotient

\displaystyle  [0,1] \equiv [N] / o(N) \ \ \ \ \ (1)

where {[N]} is the nonstandard (i.e. internal) set {\{ n \in {\bf N}: n \leq N \}}; of course, {[N]} is not a group, so one should interpret {[N]/o(N)} as the image of {[N]} under the quotient map {{}^* {\bf Z} \rightarrow {}^* {\bf Z} / o(N)} (or {O(N) \rightarrow O(N)/o(N)}, if one prefers). Or to put it another way, (1) asserts that {[0,1]} is the image of {[N]} with respect to the map {\pi: n \mapsto \hbox{st} \frac{n}{N}}.

In this post I would like to record a nice measure-theoretic version of the equivalence (1), which essentially appears already in standard texts on Loeb measure (see e.g. this text of Cutland). To describe the results, we must first quickly recall the construction of Loeb measure on {[N]}. Given an internal subset {A} of {[N]}, we may define the elementary measure {\mu_0(A)} of {A} by the formula

\displaystyle  \mu_0(A) := \hbox{st} \frac{|A|}{N}.

This is a finitely additive probability measure on the Boolean algebra of internal subsets of {[N]}. We can then construct the Loeb outer measure {\mu^*(A)} of any subset {A \subset [N]} in complete analogy with Lebesgue outer measure by the formula

\displaystyle  \mu^*(A) := \inf \sum_{n=1}^\infty \mu_0(A_n)

where {(A_n)_{n=1}^\infty} ranges over all sequences of internal subsets of {[N]} that cover {A}. We say that a subset {A} of {[N]} is Loeb measurable if, for any (standard) {\epsilon>0}, one can find an internal subset {B} of {[N]} which differs from {A} by a set of Loeb outer measure at most {\epsilon}, and in that case we define the Loeb measure {\mu(A)} of {A} to be {\mu^*(A)}. It is a routine matter to show (e.g. using the Carathéodory extension theorem) that the space {{\mathcal L}} of Loeb measurable sets is a {\sigma}-algebra, and that {\mu} is a countably additive probability measure on this space that extends the elementary measure {\mu_0}. Thus {[N]} now has the structure of a probability space {([N], {\mathcal L}, \mu)}.

Now, the group {o(N)} acts (Loeb-almost everywhere) on the probability space {[N]} by the addition map, thus {T^h n := n+h} for {n \in [N]} and {h \in o(N)} (excluding a set of Loeb measure zero where {n+h} exits {[N]}). This action is clearly seen to be measure-preserving. As such, we can form the invariant factor {Z^0_{o(N)}([N]) = ([N], {\mathcal L}^{o(N)}, \mu\downharpoonright_{{\mathcal L}^{o(N)}})}, defined by restricting attention to those Loeb measurable sets {A \subset [N]} with the property that {T^h A} is equal {\mu}-almost everywhere to {A} for each {h \in o(N)}.

The claim is then that this invariant factor is equivalent (up to almost everywhere equivalence) to the unit interval {[0,1]} with Lebesgue measure {m} (and the trivial action of {o(N)}), by the same factor map {\pi: n \mapsto \hbox{st} \frac{n}{N}} used in (1). More precisely:

Theorem 2 Given a set {A \in {\mathcal L}^{o(N)}}, there exists a Lebesgue measurable set {B \subset [0,1]}, unique up to {m}-a.e. equivalence, such that {A} is {\mu}-a.e. equivalent to the set {\pi^{-1}(B) := \{ n \in [N]: \hbox{st} \frac{n}{N} \in B \}}. Conversely, if {B \subset [0,1]} is Lebesgue measurable, then {\pi^{-1}(B)} is in {{\mathcal L}^{o(N)}}, and {\mu( \pi^{-1}(B) ) = m( B )}.

More informally, we have the measure-theoretic version

\displaystyle  [0,1] \equiv Z^0_{o(N)}( [N] )

of (1).

Proof: We first prove the converse. It is clear that {\pi^{-1}(B)} is {o(N)}-invariant, so it suffices to show that {\pi^{-1}(B)} is Loeb measurable with Loeb measure {m(B)}. This is easily verified when {B} is an elementary set (a finite union of intervals). By countable subadditivity of outer measure, this implies that the Loeb outer measure of {\pi^{-1}(E)} is bounded by the Lebesgue outer measure of {E} for any set {E \subset [0,1]}; since every Lebesgue measurable set differs from an elementary set by a set of arbitrarily small Lebesgue outer measure, the claim follows.

Now we establish the forward claim. Uniqueness is clear from the converse claim, so it suffices to show existence. Let {A \in {\mathcal L}^{o(N)}}. Let {\epsilon>0} be an arbitrary standard real number; then we can find an internal set {A_\epsilon \subset [N]} which differs from {A} by a set of Loeb measure at most {\epsilon}. As {A} is {o(N)}-invariant, we conclude that for every {h \in o(N)}, {A_\epsilon} and {T^h A_\epsilon} differ by a set of Loeb measure (and hence elementary measure) at most {2\epsilon}. By the (contrapositive of the) underspill principle, there must exist a standard {\delta>0} such that {A_\epsilon} and {T^h A_\epsilon} differ by a set of elementary measure at most {2\epsilon} for all {|h| \leq \delta N}. If we then define the nonstandard function {f: [N] \rightarrow {}^* {\bf R}} by the formula

\displaystyle  f(n) := \frac{1}{\delta N} \sum_{m \in [N]: m \leq n \leq m+\delta N} 1_{A_\epsilon}(m),

then from the (nonstandard) triangle inequality we have

\displaystyle  \frac{1}{N} \sum_{n \in [N]} |f(n) - 1_{A_\epsilon}(n)| \leq 3\epsilon

(say). On the other hand, {f} has the Lipschitz continuity property

\displaystyle  |f(n)-f(m)| \leq \frac{2|n-m|}{\delta N}

and so in particular we see that

\displaystyle  \hbox{st} f(n) = \tilde f( \hbox{st} \frac{n}{N} )

for some Lipschitz continuous function {\tilde f: [0,1] \rightarrow [0,1]}. If we then let {E_\epsilon} be the set where {\tilde f \geq 1 - \sqrt{\epsilon}}, one can check that {A_\epsilon} differs from {\pi^{-1}(E_\epsilon)} by a set of Loeb outer measure {O(\sqrt{\epsilon})}, and hence {A} does so also. Sending {\epsilon} to zero, we see (from the converse claim) that {1_{E_\epsilon}} is a Cauchy sequence in {L^1} and thus converges in {L^1} to {1_E} for some Lebesgue measurable set {E}. The sets {A_\epsilon} then converge in Loeb outer measure to {\pi^{-1}(E)}, giving the claim. \Box

Thanks to the Lebesgue differentiation theorem, the conditional expectation {{\bf E}( f | Z^0_{o(N)}([N]))} of a bounded Loeb-measurable function {f: [N] \rightarrow {\bf R}} can be expressed (as a function on {[0,1]}, defined {m}-a.e.) as

\displaystyle  {\bf E}( f | Z^0_{o(N)}([N]))(x) := \lim_{\epsilon \rightarrow 0} \frac{1}{2\epsilon} \int_{[(x-\epsilon) N,(x+\epsilon) N]} f\ d\mu.

By the abstract ergodic theorem from the previous post, one can also view this conditional expectation as the element in the closed convex hull of the shifts {T^h f}, {h \in o(N)}, of minimal {L^2} norm. In particular, we obtain a form of the von Neumann ergodic theorem in this context: the averages {\frac{1}{H} \sum_{h=1}^H T^h f} for {H = o(N)} converge (as a net, rather than a sequence) in {L^2} to {{\bf E}( f | Z^0_{o(N)}([N]))}.

If {f: [N] \rightarrow [-1,1]} is (the standard part of) an internal function, that is to say the ultralimit of a sequence {f_n: [N_n] \rightarrow [-1,1]} of finitary bounded functions, one can view the measurable function {F := {\bf E}( f | Z^0_{o(N)}([N]))} as a limit of the {f_n} that is analogous to the “graphons” that emerge as limits of graphs (see e.g. the recent text of Lovasz on graph limits). Indeed, the measurable function {F: [0,1] \rightarrow [-1,1]} is related to the discrete functions {f_n: [N_n] \rightarrow [-1,1]} by the formula

\displaystyle  \int_a^b F(x)\ dx = \hbox{st} \lim_{n \rightarrow p} \frac{1}{N_n} \sum_{a N_n \leq m \leq b N_n} f_n(m)

for all {0 \leq a < b \leq 1}, where {p} is the nonprincipal ultrafilter used to define the nonstandard universe. In particular, from the Arzela-Ascoli diagonalisation argument there is a subsequence {n_j} such that

\displaystyle  \int_a^b F(x)\ dx = \lim_{j \rightarrow \infty} \frac{1}{N_{n_j}} \sum_{a N_{n_j} \leq m \leq b N_{n_j}} f_{n_j}(m),

thus {F} is the asymptotic density function of the {f_n}. For instance, if {f_n} is the indicator function of a randomly chosen subset of {[N_n]}, then the asymptotic density function would equal {1/2} (almost everywhere, at least).
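As an illustrative numerical sketch of this last example (with hypothetical helper names, and a simulation rather than an ultralimit), one can take a random subset of a large finite interval and check that the windowed densities concentrate near {1/2}:

```python
# Simulate f_n = indicator of a random subset of a long interval, and check
# that (1/|window|) sum of f over windows [aN, bN] concentrates near 1/2.
# (Names and setup are illustrative only; the post's f_n live on [N_n].)
import random

random.seed(42)
N = 100000
f = [random.randint(0, 1) for _ in range(N)]   # random 0/1 values

def density(a, b):
    lo, hi = int(a * N), int(b * N)
    window = f[lo:hi]
    return sum(window) / len(window)

for (a, b) in [(0.0, 1.0), (0.2, 0.5), (0.7, 0.9)]:
    d = density(a, b)
    assert abs(d - 0.5) < 0.02     # concentration around 1/2 (w.h.p.)
    print((a, b), round(d, 3))
```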

I’m continuing to look into understanding the ergodic theory of {o(N)} actions, as I believe this may allow one to apply ergodic theory methods to the “single-scale” or “non-asymptotic” setting (in which one averages only over scales comparable to a large parameter {N}, rather than the traditional asymptotic approach of letting the scale go to infinity). I’m planning some further posts in this direction, though this is still a work in progress.

The von Neumann ergodic theorem (the Hilbert space version of the mean ergodic theorem) asserts that if {U: H \rightarrow H} is a unitary operator on a Hilbert space {H}, and {v \in H} is a vector in that Hilbert space, then one has

\displaystyle  \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N U^n v = \pi_{H^U} v

in the strong topology, where {H^U := \{ w \in H: Uw = w \}} is the {U}-invariant subspace of {H}, and {\pi_{H^U}} is the orthogonal projection to {H^U}. (See e.g. these previous lecture notes for a proof.) The same proof extends to more general amenable groups: if {G} is a countable amenable group acting on a Hilbert space {H} by unitary transformations {g: H \rightarrow H}, and {v \in H} is a vector in that Hilbert space, then one has

\displaystyle  \lim_{N \rightarrow \infty} \frac{1}{|\Phi_N|} \sum_{g \in \Phi_N} gv = \pi_{H^G} v \ \ \ \ \ (1)

for any Folner sequence {\Phi_N} of {G}, where {H^G := \{ w \in H: gw = w \hbox{ for all }g \in G \}} is the {G}-invariant subspace. Thus one can interpret {\pi_{H^G} v} as a certain average of elements of the orbit {Gv := \{ gv: g \in G \}} of {v}.

I recently discovered that there is a simple variant of this ergodic theorem that holds even when the group {G} is not amenable (or not discrete), using a more abstract notion of averaging:

Theorem 1 (Abstract ergodic theorem) Let {G} be an arbitrary group acting unitarily on a Hilbert space {H}, and let {v} be a vector in {H}. Then {\pi_{H^G} v} is the element in the closed convex hull of {Gv := \{ gv: g \in G \}} of minimal norm, and is also the unique element of {H^G} in this closed convex hull.

Proof: As the closed convex hull of {Gv} is closed, convex, and non-empty in a Hilbert space, it is a classical fact (see e.g. Proposition 1 of this previous post) that it has a unique element {F} of minimal norm. If {T_g F \neq F} for some {g}, then the midpoint of {T_g F} and {F} would lie in the closed convex hull and (by the parallelogram law, since {\|T_g F\| = \|F\|}) have strictly smaller norm, a contradiction; thus {F} is {G}-invariant. To finish the first claim, it suffices to show that {v-F} is orthogonal to every element {h} of {H^G}. But if this were not the case for some such {h}, we would have {\langle T_g v - F, h \rangle = \langle v-F,h\rangle \neq 0} for all {g \in G}, and thus on taking convex hulls and limits {\langle F-F,h\rangle = \langle v-F,h\rangle \neq 0}, a contradiction.

Finally, since {T_g v - F} is orthogonal to {H^G}, the same is true for {F'-F} for any {F'} in the closed convex hull of {Gv}, and this gives the second claim. \Box

This result is due to Alaoglu and Birkhoff. It implies the amenable ergodic theorem (1); indeed, given any {\epsilon>0}, Theorem 1 implies that there is a finite convex combination {v_\epsilon} of shifts {gv} of {v} which lies within {\epsilon} (in the {H} norm) of {\pi_{H^G} v}. By the triangle inequality, all the averages {\frac{1}{|\Phi_N|} \sum_{g \in \Phi_N} gv_\epsilon} also lie within {\epsilon} of {\pi_{H^G} v}, but by the Folner property this implies that the averages {\frac{1}{|\Phi_N|} \sum_{g \in \Phi_N} gv} are eventually within {2\epsilon} (say) of {\pi_{H^G} v}, giving the claim.
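For finite groups, Theorem 1 can be checked directly by hand; here is a small numerical sketch (illustrative only, with hypothetical helper names) for the cyclic group {C_4} acting on {{\bf R}^4} by coordinate shifts, where the invariant subspace is the constants, and the uniform orbit average realises the projection as a convex combination of minimal norm:

```python
# Theorem 1 for G = C_4 acting on R^4 by coordinate shifts: H^G is the
# constant vectors, so pi_{H^G} v has every coordinate equal to the mean of v.
# The uniform average over the orbit Gv is a convex combination realising it.
import random

def shift(u, k):
    return u[k:] + u[:k]

def norm2(u):
    return sum(x * x for x in u)

v = [3.0, -1.0, 4.0, 0.0]
orbit = [shift(v, k) for k in range(4)]          # the orbit Gv

proj = [sum(v) / 4] * 4                          # projection onto the constants
avg = [sum(u[i] for u in orbit) / 4 for i in range(4)]
print(avg == proj)                               # the orbit average is the projection

# every convex combination of the orbit has norm at least ||proj||
random.seed(0)
for _ in range(100):
    w = [random.random() for _ in range(4)]
    s = sum(w); w = [x / s for x in w]
    combo = [sum(wi * u[i] for wi, u in zip(w, orbit)) for i in range(4)]
    assert norm2(combo) >= norm2(proj) - 1e-9
```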

It turns out to be possible to use Theorem 1 as a substitute for the mean ergodic theorem in a number of contexts, thus removing the need for an amenability hypothesis. Here is a basic application:

Corollary 2 (Relative orthogonality) Let {G} be a group acting unitarily on a Hilbert space {H}, and let {V} be a {G}-invariant subspace of {H}. Then {V} and {H^G} are relatively orthogonal over their common subspace {V^G}, that is to say the restrictions of {V} and {H^G} to the orthogonal complement of {V^G} are orthogonal to each other.

Proof: By Theorem 1, we have {\pi_{H^G} v = \pi_{V^G} v} for all {v \in V}, and the claim follows. (Thanks to Gergely Harcos for this short argument.) \Box

Now we give a more advanced application of Theorem 1, to establish some “Mackey theory” over arbitrary groups {G}. Define a {G}-system {(X, {\mathcal X}, \mu, (T_g)_{g \in G})} to be a probability space {X = (X, {\mathcal X}, \mu)} together with a measure-preserving action {(T_g)_{g \in G}} of {G} on {X}; this gives an action of {G} on {L^2(X) = L^2(X,{\mathcal X},\mu)}, which by abuse of notation we also call {T_g}:

\displaystyle  T_g f := f \circ T_{g^{-1}}.

(In this post we follow the usual convention of defining the {L^p} spaces by quotienting out by almost everywhere equivalence.) We say that a {G}-system is ergodic if {L^2(X)^G} consists only of the constants.

(A technical point: the theory becomes slightly cleaner if we interpret our measure spaces abstractly (or “pointlessly”), removing the underlying space {X} and quotienting {{\mathcal X}} by the {\sigma}-ideal of null sets, and considering maps such as {T_g} only on this quotient {\sigma}-algebra (or on the associated von Neumann algebra {L^\infty(X)} or Hilbert space {L^2(X)}). However, we will stick with the more traditional setting of classical probability spaces here to keep the notation familiar, but with the understanding that many of the statements below should be understood modulo null sets.)

A factor {Y = (Y, {\mathcal Y}, \nu, (S_g)_{g \in G})} of a {G}-system {X = (X,{\mathcal X},\mu, (T_g)_{g \in G})} is another {G}-system together with a factor map {\pi: X \rightarrow Y} which commutes with the {G}-action (thus {T_g \pi = \pi S_g} for all {g \in G}) and respects the measure in the sense that {\mu(\pi^{-1}(E)) = \nu(E)} for all {E \in {\mathcal Y}}. For instance, the {G}-invariant factor {Z^0_G(X) := (X, {\mathcal X}^G, \mu\downharpoonright_{{\mathcal X}^G}, (T_g)_{g \in G})}, formed by restricting {X} to the invariant algebra {{\mathcal X}^G := \{ E \in {\mathcal X}: T_g E = E \hbox{ a.e. for all } g \in G \}}, is a factor of {X}. (This factor is the first factor in an important hierarchy, the next element of which is the Kronecker factor {Z^1_G(X)}, but we will not discuss higher elements of this hierarchy further here.) If {Y} is a factor of {X}, we refer to {X} as an extension of {Y}.

From Corollary 2 we have

Corollary 3 (Relative independence) Let {X} be a {G}-system for a group {G}, and let {Y} be a factor of {X}. Then {Y} and {Z^0_G(X)} are relatively independent over their common factor {Z^0_G(Y)}, in the sense that the spaces {L^2(Y)} and {L^2(Z^0_G(X))} are relatively orthogonal over {L^2(Z^0_G(Y))} when all these spaces are embedded into {L^2(X)}.

This has a simple consequence regarding the product {X \times Y = (X \times Y, {\mathcal X} \times {\mathcal Y}, \mu \times \nu, (T_g \oplus S_g)_{g \in G})} of two {G}-systems {X = (X, {\mathcal X}, \mu, (T_g)_{g \in G})} and {Y = (Y, {\mathcal Y}, \nu, (S_g)_{g \in G})}, in the case when the {Y} action is trivial:

Lemma 4 If {X,Y} are two {G}-systems, with the action of {G} on {Y} trivial, then {Z^0_G(X \times Y)} is isomorphic to {Z^0_G(X) \times Y} in the obvious fashion.

This lemma is immediate for countable {G}, since for a {G}-invariant function {f}, one can ensure that {T_g f = f} holds simultaneously for all {g \in G} outside of a null set, but is a little trickier for uncountable {G}.

Proof: It is clear that {Z^0_G(X) \times Y} is a factor of {Z^0_G(X \times Y)}. To obtain the reverse inclusion, suppose that it fails, thus there is a non-zero {f \in L^2(Z^0_G(X \times Y))} which is orthogonal to {L^2(Z^0_G(X) \times Y)}. In particular, we have {fg} orthogonal to {L^2(Z^0_G(X))} for any {g \in L^\infty(Y)}. Since {fg} lies in {L^2(Z^0_G(X \times Y))}, we conclude from Corollary 3 (viewing {X} as a factor of {X \times Y}) that {fg} is also orthogonal to {L^2(X)}. Since {g} is an arbitrary element of {L^\infty(Y)}, we conclude that {f} is orthogonal to {L^2(X \times Y)} and in particular is orthogonal to itself, a contradiction. (Thanks to Gergely Harcos for this argument.) \Box

Now we discuss the notion of a group extension.

Definition 5 (Group extension) Let {G} be an arbitrary group, let {Y = (Y, {\mathcal Y}, \nu, (S_g)_{g \in G})} be a {G}-system, and let {K} be a compact metrisable group. A {K}-extension of {Y} is an extension {X = (X, {\mathcal X}, \mu, (T_g)_{g \in G})} whose underlying space is {X = Y \times K} (with {{\mathcal X}} the product of {{\mathcal Y}} and the Borel {\sigma}-algebra on {K}), the factor map is {\pi: (y,k) \mapsto y}, and the shift maps {T_g} are given by

\displaystyle  T_g ( y, k ) = (S_g y, \rho_g(y) k )

where for each {g \in G}, {\rho_g: Y \rightarrow K} is a measurable map (known as the cocycle associated to the {K}-extension {X}).

An important special case of a {K}-extension arises when the measure {\mu} is the product of {\nu} with the Haar measure {dk} on {K}. In this case, {X} also has a {K}-action {k': (y,k) \mapsto (y,k(k')^{-1})} that commutes with the {G}-action, making {X} a {G \times K}-system. More generally, {\mu} could be the product of {\nu} with the Haar measure {dh} of some closed subgroup {H} of {K}, with {\rho_g} taking values in {H}; then {X} is now a {G \times H}-system. In this latter case we will call {X} {H}-uniform.

If {X} is a {K}-extension of {Y} and {U: Y \rightarrow K} is a measurable map, we can define the gauge transform {X_U} of {X} to be the {K}-extension of {Y} whose measure {\mu_U} is the pushforward of {\mu} under the map {(y,k) \mapsto (y, U(y) k)}, and whose cocycles {\rho_{g,U}: Y \rightarrow K} are given by the formula

\displaystyle  \rho_{g,U}(y) := U(S_g y) \rho_g(y) U(y)^{-1}.

It is easy to see that {X_U} is a {K}-extension that is isomorphic to {X} as a {K}-extension of {Y}; we will refer to {X_U} and {X} as equivalent systems, and {\rho_{g,U}} as cohomologous to {\rho_g}. We then have the following fundamental result of Mackey and of Zimmer:

Theorem 6 (Mackey-Zimmer theorem) Let {G} be an arbitrary group, let {Y} be an ergodic {G}-system, and let {K} be a compact metrisable group. Then every ergodic {K}-extension {X} of {Y} is equivalent to an {H}-uniform extension of {Y} for some closed subgroup {H} of {K}.

This theorem is usually stated for amenable groups {G}, but by using Theorem 1 (or more precisely, Corollary 3) the result is in fact also valid for arbitrary groups; we give the proof below the fold. (In the usual formulations of the theorem, {X} and {Y} are also required to be Lebesgue spaces, or at least standard Borel, but again with our abstract approach here, such hypotheses will be unnecessary.) Among other things, this theorem plays an important role in the Furstenberg-Zimmer structural theory of measure-preserving systems (as well as subsequent refinements of this theory by Host and Kra); see this previous blog post for some relevant discussion. One can obtain similar descriptions of non-ergodic extensions via the ergodic decomposition, but the result becomes more complicated to state, and we will not do so here.


Given two unit vectors {v,w} in a real inner product space, one can define the correlation between these vectors to be their inner product {\langle v, w \rangle}, or in more geometric terms, the cosine of the angle {\angle(v,w)} subtended by {v} and {w}. By the Cauchy-Schwarz inequality, this is a quantity between {-1} and {+1}, with the extreme positive correlation {+1} occurring when {v,w} are identical, the extreme negative correlation {-1} occurring when {v,w} are diametrically opposite, and the zero correlation {0} occurring when {v,w} are orthogonal. This notion is closely related to the notion of correlation between two non-constant square-integrable real-valued random variables {X,Y}, which is the same as the correlation between two unit vectors {v,w} lying in the Hilbert space {L^2(\Omega)} of square-integrable random variables, with {v} being the normalisation of {X} defined by subtracting off the mean {\mathbf{E} X} and then dividing by the standard deviation of {X}, and similarly for {w} and {Y}.

One can also define correlation for complex (Hermitian) inner product spaces by taking the real part {\hbox{Re} \langle \cdot, \cdot \rangle} of the complex inner product to recover a real inner product.

While reading the (highly recommended) recent popular maths book “How not to be wrong”, by my friend and co-author Jordan Ellenberg, I came across the (important) point that correlation is not necessarily transitive: if {X} correlates with {Y}, and {Y} correlates with {Z}, then this does not imply that {X} correlates with {Z}. A simple geometric example is provided by the three unit vectors

\displaystyle  u := (1,0); v := (\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}); w := (0,1)

in the Euclidean plane {{\bf R}^2}: {u} and {v} have a positive correlation of {\frac{1}{\sqrt{2}}}, as do {v} and {w}, but {u} and {w} are not correlated with each other. Or: for a typical undergraduate course, it is generally true that good exam scores are correlated with a deep understanding of the course material, and that memorising flash cards is correlated with good exam scores, but this does not imply that memorising flash cards is correlated with deep understanding of the course material.
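The failure of transitivity in this example is easy to confirm numerically; the following snippet (illustrative only) just evaluates the three inner products:

```python
# The three unit vectors from the example above: u correlates with v,
# v correlates with w, yet u and w are orthogonal.
import math

u = (1.0, 0.0)
v = (1 / math.sqrt(2), 1 / math.sqrt(2))
w = (0.0, 1.0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

print(dot(u, v), dot(v, w), dot(u, w))   # roughly 0.707, 0.707, 0.0
```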

However, there are at least two situations in which some partial version of transitivity of correlation can be recovered. The first is in the “99%” regime in which the correlations are very close to {1}: if {u,v,w} are unit vectors such that {u} is very highly correlated with {v}, and {v} is very highly correlated with {w}, then this does imply that {u} is very highly correlated with {w}. Indeed, from the identity

\displaystyle  \| u-v \| = 2^{1/2} (1 - \langle u,v\rangle)^{1/2}

(and similarly for {v-w} and {u-w}) and the triangle inequality

\displaystyle  \|u-w\| \leq \|u-v\| + \|v-w\|,

we see that

\displaystyle  (1 - \langle u,w \rangle)^{1/2} \leq (1 - \langle u,v\rangle)^{1/2} + (1 - \langle v,w\rangle)^{1/2}. \ \ \ \ \ (1)

Thus, for instance, if {\langle u, v \rangle \geq 1-\epsilon} and {\langle v,w \rangle \geq 1-\epsilon}, then {\langle u,w \rangle \geq 1-4\epsilon}. This is of course closely related to (though slightly weaker than) the triangle inequality for angles:

\displaystyle  \angle(u,w) \leq \angle(u,v) + \angle(v,w).

Remark 1 (Thanks to Andrew Granville for conversations leading to this observation.) The inequality (1) also holds for sub-unit vectors, i.e. vectors {u,v,w} with {\|u\|, \|v\|, \|w\| \leq 1}. This comes by extending {u,v,w} in directions orthogonal to all three original vectors and to each other in order to make them unit vectors, enlarging the ambient Hilbert space {H} if necessary. More concretely, one can apply (1) to the unit vectors

\displaystyle  (u, \sqrt{1-\|u\|^2}, 0, 0), (v, 0, \sqrt{1-\|v\|^2}, 0), (w, 0, 0, \sqrt{1-\|w\|^2})

in {H \times {\bf R}^3}.
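Inequality (1) is also easy to stress-test numerically; the following sketch (illustrative, with hypothetical helper names) checks it on random unit vectors drawn from a Gaussian distribution:

```python
# Check (1 - <u,w>)^{1/2} <= (1 - <u,v>)^{1/2} + (1 - <v,w>)^{1/2}
# for random unit vectors u, v, w in R^5.  This is equivalent to the
# triangle inequality ||u-w|| <= ||u-v|| + ||v-w||, so it always holds.
import math, random

def unit(d, rng):
    x = [rng.gauss(0, 1) for _ in range(d)]
    n = math.sqrt(sum(t * t for t in x))
    return [t / n for t in x]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

rng = random.Random(1)
for _ in range(1000):
    u, v, w = (unit(5, rng) for _ in range(3))
    lhs = math.sqrt(max(0.0, 1 - dot(u, w)))
    rhs = math.sqrt(max(0.0, 1 - dot(u, v))) + math.sqrt(max(0.0, 1 - dot(v, w)))
    assert lhs <= rhs + 1e-9
print("inequality (1) verified on 1000 random triples")
```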

But even in the “{1\%}” regime in which correlations are very weak, there is still a version of transitivity of correlation, known as the van der Corput lemma, which basically asserts that if a unit vector {v} is correlated with many unit vectors {u_1,\dots,u_n}, then many of the pairs {u_i,u_j} will then be correlated with each other. Indeed, from the Cauchy-Schwarz inequality

\displaystyle  |\langle v, \sum_{i=1}^n u_i \rangle|^2 \leq \|v\|^2 \| \sum_{i=1}^n u_i \|^2

we see that

\displaystyle  (\sum_{i=1}^n \langle v, u_i \rangle)^2 \leq \sum_{1 \leq i,j \leq n} \langle u_i, u_j \rangle. \ \ \ \ \ (2)

Thus, for instance, if {\langle v, u_i \rangle \geq \epsilon} for all {i=1,\dots,n}, then {\sum_{1 \leq i,j \leq n} \langle u_i, u_j \rangle} must be at least {\epsilon^2 n^2}, which implies that {\langle u_i, u_j \rangle \geq \epsilon^2/2} for at least {\epsilon^2 n^2/2} pairs {(i,j)}. Or as another example: if a random variable {X} exhibits at least {1\%} positive correlation with {n} other random variables {Y_1,\dots,Y_n}, then if {n > 10,000}, at least two distinct {Y_i,Y_j} must have positive correlation with each other (although this argument does not tell you which pair {Y_i,Y_j} are so correlated). Thus one can view this inequality as a sort of “pigeonhole principle” for correlation.
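The van der Corput inequality (2) can likewise be checked numerically; here is a small sketch (illustrative only, with hypothetical helper names) comparing both sides for random unit vectors:

```python
# Check (sum_i <v,u_i>)^2 <= sum_{i,j} <u_i,u_j>, which follows from
# Cauchy-Schwarz since the right-hand side is ||u_1 + ... + u_n||^2.
import math, random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def unit(d, rng):
    x = [rng.gauss(0, 1) for _ in range(d)]
    s = math.sqrt(sum(t * t for t in x))
    return [t / s for t in x]

rng = random.Random(7)
d, n = 6, 8
v = unit(d, rng)
us = [unit(d, rng) for _ in range(n)]

lhs = sum(dot(v, u) for u in us) ** 2
rhs = sum(dot(ui, uj) for ui in us for uj in us)
assert lhs <= rhs + 1e-9
print(lhs, "<=", rhs)
```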

A similar argument (multiplying each {u_i} by an appropriate sign {\pm 1}) shows the related van der Corput inequality

\displaystyle  (\sum_{i=1}^n |\langle v, u_i \rangle|)^2 \leq \sum_{1 \leq i,j \leq n} |\langle u_i, u_j \rangle|, \ \ \ \ \ (3)

and this inequality is also true for complex inner product spaces. (Also, the {u_i} do not need to be unit vectors for this inequality to hold.)

Geometrically, the picture is this: if {v} positively correlates with all of the {u_1,\dots,u_n}, then the {u_1,\dots,u_n} are all squashed into a somewhat narrow cone centred at {v}. The cone is still wide enough to allow a few pairs {u_i, u_j} to be orthogonal (or even negatively correlated) with each other, but (when {n} is large enough) it is not wide enough to allow all of the {u_i,u_j} to be so widely separated. Remarkably, the bound here does not depend on the dimension of the ambient inner product space; while increasing the number of dimensions should in principle add more “room” to the cone, this effect is counteracted by the fact that in high dimensions, almost all pairs of vectors are close to orthogonal, and the exceptional pairs that are even weakly correlated to each other become exponentially rare. (See this previous blog post for some related discussion; in particular, Lemma 2 from that post is closely related to the van der Corput inequality presented here.)

A particularly common special case of the van der Corput inequality arises when {v} is a unit vector fixed by some unitary operator {T}, and the {u_i} are shifts {u_i = T^i u} of a single unit vector {u}. In this case, the inner products {\langle v, u_i \rangle} are all equal, and we arrive at the useful van der Corput inequality

\displaystyle  |\langle v, u \rangle|^2 \leq \frac{1}{n^2} \sum_{1 \leq i,j \leq n} |\langle T^i u, T^j u \rangle|. \ \ \ \ \ (4)

(In fact, one can even remove the absolute values from the right-hand side, by using (2) instead of (3).) Thus, to show that {v} has negligible correlation with {u}, it suffices to show that the shifts of {u} have negligible correlation with each other.

Here is a basic application of the van der Corput inequality:

Proposition 1 (Weyl equidistribution estimate) Let {P: {\bf Z} \rightarrow {\bf R}/{\bf Z}} be a polynomial with at least one non-constant coefficient irrational. Then one has

\displaystyle  \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N e( P(n) ) = 0,

where {e(x) := e^{2\pi i x}}.

Note that this assertion implies the more general assertion

\displaystyle  \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N e( kP(n) ) = 0

for any non-zero integer {k} (simply by replacing {P} by {kP}), which by the Weyl equidistribution criterion is equivalent to the sequence {P(1), P(2),\dots} being asymptotically equidistributed in {{\bf R}/{\bf Z}}.
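Before turning to the proof, here is a quick numerical illustration (not a proof, and with a hypothetical helper name) of Proposition 1 for the quadratic polynomial {P(n) = \sqrt{2}\, n^2}, whose exponential averages visibly decay:

```python
# Exponential averages (1/N) |sum_{n<=N} e(sqrt(2) n^2)| should tend to zero
# as N grows, in accordance with Weyl equidistribution.  (Illustration only;
# the decay is roughly of order N^{-1/2} for this badly approximable alpha.)
import cmath, math

def weyl_avg(N, alpha=math.sqrt(2)):
    s = sum(cmath.exp(2j * math.pi * alpha * n * n) for n in range(1, N + 1))
    return abs(s) / N

for N in (100, 1000, 10000):
    print(N, weyl_avg(N))   # magnitudes shrink as N increases
```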

Proof: We induct on the degree {d} of the polynomial {P}, which must be at least one. If {d} is equal to one, the claim is easily established from the geometric series formula, so suppose that {d>1} and that the claim has already been proven for {d-1}. If the top coefficient {a_d} of {P(n) = a_d n^d + \dots + a_0} is rational, say {a_d = \frac{p}{q}}, then by partitioning the natural numbers into residue classes modulo {q}, we see that the claim follows from the induction hypothesis; so we may assume that the top coefficient {a_d} is irrational.

In order to use the van der Corput inequality as stated above (i.e. in the formalism of inner product spaces) we will need a non-principal ultrafilter {p} (see e.g. this previous blog post for basic theory of ultrafilters); we leave it as an exercise to the reader to figure out how to present the argument below without the use of ultrafilters (or similar devices, such as Banach limits). The ultrafilter {p} defines an inner product {\langle, \rangle_p} on bounded complex sequences {z = (z_1,z_2,z_3,\dots)} by setting

\displaystyle  \langle z, w \rangle_p := \hbox{st} \lim_{N \rightarrow p} \frac{1}{N} \sum_{n=1}^N z_n \overline{w_n}.

Strictly speaking, this inner product is only positive semi-definite rather than positive definite, but one can quotient out by the null vectors to obtain a positive-definite inner product. To establish the claim, it will suffice to show that

\displaystyle  \langle 1, e(P) \rangle_p = 0

for every non-principal ultrafilter {p}.

Note that the space of bounded sequences (modulo null vectors) admits a shift {T}, defined by

\displaystyle  T (z_1,z_2,\dots) := (z_2,z_3,\dots).

This shift becomes unitary once we quotient out by null vectors, and the constant sequence {1} is clearly a unit vector that is invariant with respect to the shift. So by the van der Corput inequality, we have

\displaystyle  |\langle 1, e(P) \rangle_p|^2 \leq \frac{1}{n^2} \sum_{1 \leq i,j \leq n} |\langle T^i e(P), T^j e(P) \rangle_p|

for any {n \geq 1}. But we may rewrite {\langle T^i e(P), T^j e(P) \rangle_p = \langle 1, e(T^i P - T^j P) \rangle_p}. Then observe that if {i \neq j}, {T^i P - T^j P} is a polynomial of degree {d-1} whose top coefficient {d a_d (i-j)} is irrational, so by the induction hypothesis we have {\langle T^i e(P), T^j e(P) \rangle_p = 0} for {i \neq j}. For {i=j} we of course have {\langle T^i e(P), T^j e(P) \rangle_p = 1}, and so

\displaystyle  |\langle 1, e(P) \rangle_p|^2 \leq \frac{1}{n^2} \times n

for any {n}. Letting {n \rightarrow \infty}, we obtain the claim. \Box
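
As a quick numerical sanity check of this equidistribution claim (a toy Python illustration, not part of the argument; the sample polynomial {\sqrt{2} n^2} and the cutoffs are choices made here), one can compare the averages {\frac{1}{N} \sum_{n \leq N} e(P(n))} for an irrational versus a rational top coefficient:

```python
import cmath, math

def weyl_average(alpha, N):
    """(1/N) * sum_{n=1}^N e(alpha * n^2), where e(x) = exp(2*pi*i*x)."""
    return sum(cmath.exp(2j * math.pi * alpha * n * n) for n in range(1, N + 1)) / N

# Irrational top coefficient: the averages decay (equidistribution).
w_irr = weyl_average(math.sqrt(2), 100000)
assert abs(w_irr) < 0.05

# Rational top coefficient 1/3: splitting n into residue classes mod 3
# (as in the induction above), the average tends to (1 + 2 e(1/3))/3 != 0,
# which has magnitude sqrt(3)/3.
w_rat = weyl_average(1 / 3, 99999)
assert abs(abs(w_rat) - math.sqrt(3) / 3) < 1e-3
```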

Let {V} be a quasiprojective variety defined over a finite field {{\bf F}_q}, thus for instance {V} could be an affine variety

\displaystyle  V = \{ x \in {\bf A}^d: P_1(x) = \dots = P_m(x) = 0\} \ \ \ \ \ (1)

where {{\bf A}^d} is {d}-dimensional affine space and {P_1,\dots,P_m: {\bf A}^d \rightarrow {\bf A}} are a finite collection of polynomials with coefficients in {{\bf F}_q}. Then one can define the set {V[{\bf F}_q]} of {{\bf F}_q}-rational points, and more generally the set {V[{\bf F}_{q^n}]} of {{\bf F}_{q^n}}-rational points for any {n \geq 1}, since {{\bf F}_{q^n}} can be viewed as a field extension of {{\bf F}_q}. Thus for instance in the affine case (1) we have

\displaystyle  V[{\bf F}_{q^n}] := \{ x \in {\bf F}_{q^n}^d: P_1(x) = \dots = P_m(x) = 0\}.

The Weil conjectures are concerned with understanding the number

\displaystyle  S_n := |V[{\bf F}_{q^n}]| \ \ \ \ \ (2)

of {{\bf F}_{q^n}}-rational points of a variety {V}. The first of these conjectures was proven by Dwork, and can be phrased as follows.

Theorem 1 (Rationality of the zeta function) Let {V} be a quasiprojective variety defined over a finite field {{\bf F}_q}, and let {S_n} be given by (2). Then there exist a finite number of algebraic integers {\alpha_1,\dots,\alpha_k, \beta_1,\dots,\beta_{k'} \in O_{\overline{{\bf Q}}}} (known as characteristic values of {V}), such that

\displaystyle  S_n = \alpha_1^n + \dots + \alpha_k^n - \beta_1^n - \dots - \beta_{k'}^n

for all {n \geq 1}.
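
One can see Theorem 1 in action in a toy case by brute force (a Python sketch; the bit-encoding of field elements and the particular irreducible polynomials are choices made here, not taken from the theorem). For the "variety" {V = \{xy=1\}} (the multiplicative group) over {{\bf F}_{2^n}} one has {S_n = 2^n - 1^n}, so the characteristic values are {\alpha_1 = 2} and {\beta_1 = 1}:

```python
def gf2n_mul(a, b, n, mod_poly):
    """Multiply a and b in GF(2^n), with field elements encoded as n-bit ints
    (bit i = coefficient of x^i), reducing modulo an irreducible mod_poly."""
    r = 0
    while b:                       # carry-less (polynomial) multiplication
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    for shift in range(2 * n - 2, n - 1, -1):   # reduce modulo mod_poly
        if r & (1 << shift):
            r ^= mod_poly << (shift - n)
    return r

# Standard irreducible polynomials over F_2 (a choice made here):
irred = {2: 0b111, 3: 0b1011, 4: 0b10011}   # x^2+x+1, x^3+x+1, x^4+x+1

# Count the F_{2^n}-points of V = {xy = 1}.
counts = {}
for n, poly in irred.items():
    counts[n] = sum(1 for x in range(1 << n) for y in range(1 << n)
                    if gf2n_mul(x, y, n, poly) == 1)
    # S_n = 2^n - 1^n, matching characteristic values alpha_1 = 2, beta_1 = 1.
    assert counts[n] == 2**n - 1
```

In this example the zeta function works out to {\zeta_V(T) = (1-T)/(1-2T)}, with the pole at {T = 1/2} coming from {\alpha_1} and the zero at {T=1} from {\beta_1}.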

After cancelling, we may of course assume that {\alpha_i \neq \beta_j} for any {i=1,\dots,k} and {j=1,\dots,k'}, and then it is easy to see (and we will show below) that the {\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'}} become uniquely determined up to permutations of the {\alpha_1,\dots,\alpha_k} and {\beta_1,\dots,\beta_{k'}}. These values are known as the characteristic values of {V}. Since {S_n} is a rational integer (i.e. an element of {{\bf Z}}) rather than merely an algebraic integer (i.e. an element of the ring of integers {O_{\overline{{\bf Q}}}} of the algebraic closure {\overline{{\bf Q}}} of {{\bf Q}}), we conclude from the above-mentioned uniqueness that the set of characteristic values is invariant with respect to the Galois group {Gal(\overline{{\bf Q}} / {\bf Q} )}. To emphasise this Galois invariance, we will not fix a specific embedding {\iota_\infty: \overline{{\bf Q}} \rightarrow {\bf C}} of the algebraic numbers into the complex field {{\bf C} = {\bf C}_\infty}, but work with all such embeddings simultaneously. (Thus, for instance, {\overline{{\bf Q}}} contains three cube roots of {2}, but which of these is assigned to the complex numbers {2^{1/3}}, {e^{2\pi i/3} 2^{1/3}}, {e^{4\pi i/3} 2^{1/3}} will depend on the choice of embedding {\iota_\infty}.)

An equivalent way of phrasing Dwork’s theorem is that the ({T}-form of the) zeta function

\displaystyle \zeta_V(T) := \exp( \sum_{n=1}^\infty \frac{S_n}{n} T^n )

associated to {V} (which is well defined as a formal power series in {T}, at least) is equal to a rational function of {T} (with the {\alpha_1,\dots,\alpha_k} and {\beta_1,\dots,\beta_{k'}} being the poles and zeroes of {\zeta_V} respectively). Here, we use the formal exponential

\displaystyle  \exp(X) := 1 + X + \frac{X^2}{2!} + \frac{X^3}{3!} + \dots.

Equivalently, the ({s}-form of the) zeta-function {s \mapsto \zeta_V(q^{-s})} is a meromorphic function on the complex numbers {{\bf C}} which is also periodic with period {2\pi i/\log q}, and which has only finitely many poles and zeroes up to this periodicity.
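
The formal exponential can also be manipulated numerically: differentiating {\zeta'(T) = \exp(A(T))} gives the recurrence {k c_k = \sum_{j=1}^k S_j c_{k-j}} for its coefficients. The following sketch (exact rational arithmetic; the toy data {S_n = 2^n - 1} is the {\{xy=1\}} example, not part of the post's argument) checks rationality of the zeta function in that case:

```python
from fractions import Fraction

def zeta_coeffs(S, K):
    """Coefficients c_0..c_K of exp(sum_{n>=1} S(n)/n * T^n), via the
    recurrence k*c_k = sum_{j=1}^k S(j)*c_{k-j}, obtained by
    differentiating the formal exponential."""
    c = [Fraction(1)]
    for k in range(1, K + 1):
        c.append(sum(Fraction(S(j)) * c[k - j] for j in range(1, k + 1)) / k)
    return c

# Toy data S_n = 2^n - 1: then zeta_V(T) = exp(-log(1-2T) + log(1-T))
# = (1 - T)/(1 - 2T) = 1 + T + 2 T^2 + 4 T^3 + ..., i.e. c_k = 2^(k-1) for k >= 1.
c = zeta_coeffs(lambda n: 2**n - 1, 10)
assert c[0] == 1 and all(c[k] == 2**(k - 1) for k in range(1, 11))
```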

Dwork’s argument relies primarily on {p}-adic analysis – an analogue of complex analysis, but over an algebraically closed (and metrically complete) extension {{\bf C}_p} of the {p}-adic field {{\bf Q}_p}, rather than over the Archimedean complex numbers {{\bf C}}. The argument is quite effective, and in particular gives explicit upper bounds for the number {k+k'} of characteristic values in terms of the complexity of the variety {V}; for instance, in the affine case (1) with {V} of degree {D}, Bombieri used Dwork’s methods (in combination with Deligne’s theorem below) to obtain the bound {k+k' \leq (4D+9)^{2d+1}}, and a subsequent paper of Hooley established the slightly weaker bound {k+k' \leq (11D+11)^{d+m+2}} purely from Dwork’s methods (a similar bound had also been pointed out in unpublished work of Dwork). In particular, one has bounds that are uniform in the field {{\bf F}_q}, which is an important fact for many analytic number theory applications.

These {p}-adic arguments stand in contrast with Deligne’s resolution of the last (and deepest) of the Weil conjectures:

Theorem 2 (Riemann hypothesis) Let {V} be a quasiprojective variety defined over a finite field {{\bf F}_q}, and let {\lambda \in \overline{{\bf Q}}} be a characteristic value of {V}. Then there exists a natural number {w} such that {|\iota_\infty(\lambda)|_\infty = q^{w/2}} for every embedding {\iota_\infty: \overline{{\bf Q}} \rightarrow {\bf C}}, where {| |_\infty} denotes the usual absolute value on the complex numbers {{\bf C} = {\bf C}_\infty}. (Informally: {\lambda} and all of its Galois conjugates have complex magnitude {q^{w/2}}.)

To put it another way that closely resembles the classical Riemann hypothesis, all the zeroes and poles of the {s}-form {s \mapsto \zeta_V(q^{-s})} lie on the critical lines {\{ s \in {\bf C}: \hbox{Re}(s) = \frac{w}{2} \}} for {w=0,1,2,\dots}. (See this previous blog post for further comparison of various instantiations of the Riemann hypothesis.) Whereas Dwork uses {p}-adic analysis, Deligne uses the essentially orthogonal technique of {\ell}-adic cohomology to establish his theorem. However, {\ell}-adic methods can be used (via the Grothendieck-Lefschetz trace formula) to establish rationality, and conversely, in this paper of Kedlaya, {p}-adic methods are used to establish the Riemann hypothesis. As pointed out by Kedlaya, the {\ell}-adic methods are tied to the intrinsic geometry of {V} (such as the structure of sheaves and covers over {V}), while the {p}-adic methods are more tied to the extrinsic geometry of {V} (how {V} sits inside its ambient affine or projective space).

In this post, I would like to record my notes on Dwork’s proof of Theorem 1, drawing heavily on the expositions of Serre, Hooley, Koblitz, and others.

The basic strategy is to control the rational integers {S_n} both in an “Archimedean” sense (embedding the rational integers inside the complex numbers {{\bf C}_\infty} with the usual norm {||_\infty}) as well as in the “{p}-adic” sense, with {p} the characteristic of {{\bf F}_q} (embedding the integers now in the “complexification” {{\bf C}_p} of the {p}-adic numbers {{\bf Q}_p}, which is equipped with a norm {||_p} that we will recall later). (This is in contrast to the methods of ell-adic cohomology, in which one primarily works over an {\ell}-adic field {{\bf Q}_\ell} with {\ell \neq p,\infty}.) The Archimedean control is trivial:

Proposition 3 (Archimedean control of {S_n}) With {S_n} as above, and any embedding {\iota_\infty: \overline{{\bf Q}} \rightarrow {\bf C}}, we have

\displaystyle  |\iota_\infty(S_n)|_\infty \leq C q^{A n}

for all {n} and some {C, A >0} independent of {n}.

Proof: Since {S_n} is a rational integer, {|\iota_\infty(S_n)|_\infty} is just {|S_n|_\infty}. By decomposing {V} into affine pieces, we may assume that {V} is of the affine form (1); then we trivially have {|S_n|_\infty \leq q^{nd}}, and the claim follows. \Box

Another way of thinking about this Archimedean control is that it guarantees that the zeta function {T \mapsto \zeta_V(T)} can be defined holomorphically on the open disk in {{\bf C}_\infty} of radius {q^{-A}} centred at the origin.

The {p}-adic control is significantly more difficult, and is the main component of Dwork’s argument:

Proposition 4 ({p}-adic control of {S_n}) With {S_n} as above, and using an embedding {\iota_p: \overline{{\bf Q}} \rightarrow {\bf C}_p} (defined later) with {p} the characteristic of {{\bf F}_q}, we can find for any real {A > 0} a finite number of elements {\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'} \in {\bf C}_p} such that

\displaystyle  |\iota_p(S_n) - (\alpha_1^n + \dots + \alpha_k^n - \beta_1^n - \dots - \beta_{k'}^n)|_p \leq q^{-An}

for all {n}.

Another way of thinking about this {p}-adic control is that it guarantees that the zeta function {T \mapsto \zeta_V(T)} can be defined meromorphically on the entire {p}-adic complex field {{\bf C}_p}.

Proposition 4 is ostensibly much weaker than Theorem 1 because of (a) the error term of {p}-adic magnitude at most {q^{-An}}; (b) the fact that the number {k+k'} of potential characteristic values here may go to infinity as {A \rightarrow \infty}; and (c) the potential characteristic values {\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'}} only exist inside the complexified {p}-adics {{\bf C}_p}, rather than in the algebraic integers {O_{\overline{{\bf Q}}}}. However, it turns out that by combining {p}-adic control on {S_n} in Proposition 4 with the trivial control on {S_n} in Proposition 3, one can obtain Theorem 1 by an elementary argument that does not use any further properties of {S_n} (other than the obvious fact that the {S_n} are rational integers), with the {A} in Proposition 4 chosen to exceed the {A} in Proposition 3. We give this argument (essentially due to Borel) below the fold.

The proof of Proposition 4 can be split into two pieces. The first piece, which can be viewed as the number-theoretic component of the proof, uses external descriptions of {V} such as (1) to obtain the following decomposition of {S_n}:

Proposition 5 (Decomposition of {S_n}) With {\iota_p} and {S_n} as above, we can decompose {\iota_p(S_n)} as a finite linear combination (over the integers) of sequences {S'_n \in {\bf C}_p}, such that for each such sequence {n \mapsto S'_n}, the zeta functions

\displaystyle  \zeta'(T) := \exp( \sum_{n=1}^\infty \frac{S'_n}{n} T^n ) = \sum_{n=0}^\infty c_n T^n

are entire in {{\bf C}_p}, by which we mean that

\displaystyle  |c_n|_p^{1/n} \rightarrow 0

as {n \rightarrow \infty}.

This proposition will ultimately be a consequence of the properties of the Teichmuller lifting {\tau: \overline{{\bf F}_p}^\times \rightarrow {\bf C}_p^\times}.
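
To get some hands-on feeling for the Teichmuller lifting (a toy Python sketch, restricted for concreteness to {{\bf Z}_p^\times} rather than the full setting {{\bf C}_p^\times} of the post, and assuming the standard construction of the lift by iterating the Frobenius map):

```python
def teichmuller(a, p, k):
    """Teichmuller representative tau(a) mod p^k: the unique (p-1)-st root of
    unity in Z_p congruent to a mod p.  Iterating the Frobenius x -> x^p
    converges p-adically, gaining at least one p-adic digit per step."""
    m = p**k
    t = a % m
    for _ in range(k):
        t = pow(t, p, m)
    return t

t = teichmuller(2, 7, 10)
assert t % 7 == 2                 # reduces to 2 mod 7
assert pow(t, 7, 7**10) == t      # fixed by Frobenius mod 7^10
assert pow(t, 6, 7**10) == 1      # a (p-1)-st root of unity
```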

The second piece, which can be viewed as the “{p}-adic complex analytic” component of the proof, relates the {p}-adic entire nature of a zeta function with control on the associated sequence {S'_n}, and can be interpreted (after some manipulation) as a {p}-adic version of the Weierstrass preparation theorem:

Proposition 6 ({p}-adic Weierstrass preparation theorem) Let {S'_n} be a sequence in {{\bf C}_p}, such that the zeta function

\displaystyle  \zeta'(T) := \exp( \sum_{n=1}^\infty \frac{S'_n}{n} T^n )

is entire in {{\bf C}_p}. Then for any real {A > 0}, there exist a finite number of elements {\beta_1,\dots,\beta_{k'} \in {\bf C}_p} such that

\displaystyle  |S'_n + \beta_1^n + \dots + \beta_{k'}^n|_p \leq q^{-An}

for all {n}.

Clearly, the combination of Proposition 5 and Proposition 6 (and the non-Archimedean nature of the {||_p} norm) imply Proposition 4.

Read the rest of this entry »

This is a blog version of a talk I recently gave at the IPAM workshop on “The Kakeya Problem, Restriction Problem, and Sum-product Theory”.

Note: the discussion here will be highly non-rigorous in nature, being extremely loose in particular with asymptotic notation and with the notion of dimension. Caveat emptor.

One of the most infamous unsolved problems at the intersection of geometric measure theory, incidence combinatorics, and real-variable harmonic analysis is the Kakeya set conjecture. We will focus on the following three-dimensional case of the conjecture, stated informally as follows:

Conjecture 1 (Kakeya conjecture) Let {E} be a subset of {{\bf R}^3} that contains a unit line segment in every direction. Then {\hbox{dim}(E) = 3}.

This conjecture is not precisely formulated here, because we have not specified exactly what type of set {E} is (e.g. measurable, Borel, compact, etc.) and what notion of dimension we are using. We will deliberately ignore these technical details in this post. It is slightly more convenient for us here to work with lines instead of unit line segments, so we work with the following slight variant of the conjecture (which is essentially equivalent):

Conjecture 2 (Kakeya conjecture, again) Let {{\cal L}} be a family of lines in {{\bf R}^3} that meet {B(0,1)} and contain a line in each direction. Let {E} be the union of the restriction {\ell \cap B(0,2)} to {B(0,2)} of every line {\ell} in {{\cal L}}. Then {\hbox{dim}(E) = 3}.

As the space of all directions in {{\bf R}^3} is two-dimensional, we thus see that {{\cal L}} is an (at least) two-dimensional subset of the four-dimensional space of lines in {{\bf R}^3} (actually, it lies in a compact subset of this space, since we have constrained the lines to meet {B(0,1)}). One could then ask if this is the only property of {{\cal L}} that is needed to establish the Kakeya conjecture, that is to say if any subset of {B(0,2)} which contains a two-dimensional family of lines (restricted to {B(0,2)}, and meeting {B(0,1)}) is necessarily three-dimensional. Here we have an easy counterexample, namely a plane in {B(0,2)} (passing through the origin), which contains a two-dimensional collection of lines. However, we can exclude this case by adding an additional axiom, leading to what one might call a “strong” Kakeya conjecture:

Conjecture 3 (Strong Kakeya conjecture) Let {{\cal L}} be a two-dimensional family of lines in {{\bf R}^3} that meet {B(0,1)}, and assume the Wolff axiom that no (affine) plane contains more than a one-dimensional family of lines in {{\cal L}}. Let {E} be the union of the restriction {\ell \cap B(0,2)} of every line {\ell} in {{\cal L}}. Then {\hbox{dim}(E) = 3}.

Actually, to make things work out we need a more quantitative version of the Wolff axiom in which we constrain the metric entropy (and not just dimension) of lines that lie close to a plane, rather than exactly on the plane. However, for the informal discussion here we will ignore these technical details. Families of lines that point in distinct directions will obey the Wolff axiom, but the converse is not true in general.

In 1995, Wolff established the important lower bound {\hbox{dim}(E) \geq 5/2} (for various notions of dimension, e.g. Hausdorff dimension) for sets {E} in Conjecture 3 (and hence also for the other forms of the Kakeya problem). However, there is a key obstruction to going beyond the {5/2} barrier, coming from the possible existence of half-dimensional (approximate) subfields of the reals {{\bf R}}. To explain this problem, it is easiest to first discuss the complex version of the strong Kakeya conjecture, in which all relevant (real) dimensions are doubled:

Conjecture 4 (Strong Kakeya conjecture over {{\bf C}}) Let {{\cal L}} be a four (real) dimensional family of complex lines in {{\bf C}^3} that meet the unit ball {B(0,1)} in {{\bf C}^3}, and assume the Wolff axiom that no four (real) dimensional (affine) subspace contains more than a two (real) dimensional family of complex lines in {{\cal L}}. Let {E} be the union of the restriction {\ell \cap B(0,2)} of every complex line {\ell} in {{\cal L}}. Then {E} has real dimension {6}.

The argument of Wolff can be adapted to the complex case to show that all sets {E} occurring in Conjecture 4 have real dimension at least {5}. Unfortunately, this is sharp, due to the following fundamental counterexample:

Proposition 5 (Heisenberg group counterexample) Let {H \subset {\bf C}^3} be the Heisenberg group

\displaystyle  H = \{ (z_1,z_2,z_3) \in {\bf C}^3: \hbox{Im}(z_1) = \hbox{Im}(z_2 \overline{z_3}) \}

and let {{\cal L}} be the family of complex lines

\displaystyle  \ell_{s,t,\alpha} := \{ (\overline{\alpha} z + t, z, sz + \alpha): z \in {\bf C} \}

with {s,t \in {\bf R}} and {\alpha \in {\bf C}}. Then {H} is a five (real) dimensional subset of {{\bf C}^3} that contains every line in the four (real) dimensional set {{\cal L}}; however each four real dimensional (affine) subspace contains at most a two (real) dimensional set of lines in {{\cal L}}. In particular, the strong Kakeya conjecture over the complex numbers is false.
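
The first assertion of this routine computation can be spot-checked numerically (a toy Python check, not a proof; the remaining assertions on dimensions and planes require more work). Writing {z_1 = \overline{\alpha} z + t}, {z_2 = z}, {z_3 = sz+\alpha}, one has {\hbox{Im}(z_1) = \hbox{Im}(\overline{\alpha} z)} and {\hbox{Im}(z_2 \overline{z_3}) = \hbox{Im}(s|z|^2 + z \overline{\alpha}) = \hbox{Im}(z \overline{\alpha})}, which agree:

```python
import random

def in_H(z1, z2, z3, tol=1e-9):
    """Defining equation of the Heisenberg group H: Im(z1) = Im(z2 * conj(z3))."""
    return abs(z1.imag - (z2 * z3.conjugate()).imag) < tol

random.seed(0)
for _ in range(100):
    s, t = random.uniform(-5, 5), random.uniform(-5, 5)          # real parameters
    alpha = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))    # line parameter
    # the point of ell_{s,t,alpha} at parameter z lies in H
    assert in_H(alpha.conjugate() * z + t, z, s * z + alpha)
```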

This proposition is proven by a routine computation, which we omit here. The group structure on {H} is given by the group law

\displaystyle  (z_1,z_2,z_3) \cdot (w_1,w_2,w_3) = (z_1 + w_1 + z_2 \overline{w_3} - z_3 \overline{w_2}, z_2 +w_2, z_3+w_3),

giving {H} the structure of a {2}-step simply-connected nilpotent Lie group, isomorphic to the usual Heisenberg group over {{\bf R}^2}. Note that while the Heisenberg group is a counterexample to the complex strong Kakeya conjecture, it is not a counterexample to the complex form of the original Kakeya conjecture, because the complex lines {{\cal L}} in the Heisenberg counterexample do not point in distinct directions, but instead only point in a three (real) dimensional subset of the four (real) dimensional space of available directions for complex lines. For instance, one has the one real-dimensional family of parallel lines

\displaystyle  \ell_{0,t,0} = \{ (t, z, 0): z \in {\bf C}\}

with {t \in {\bf R}}; multiplying this family of lines on the right by a group element in {H} gives other families of parallel lines, which in fact sweep out all of {{\cal L}}.

The Heisenberg counterexample ultimately arises from the “half-dimensional” (and hence degree two) subfield {{\bf R}} of {{\bf C}}, which induces an involution {z \mapsto \overline{z}} which can then be used to define the Heisenberg group {H} through the formula

\displaystyle  H = \{ (z_1,z_2,z_3) \in {\bf C}^3: z_1 - \overline{z_1} = z_2 \overline{z_3} - z_3 \overline{z_2} \}.

Analogous Heisenberg counterexamples can also be constructed if one works over finite fields {{\bf F}_{q^2}} that contain a “half-dimensional” subfield {{\bf F}_q}; we leave the details to the interested reader. Morally speaking, if {{\bf R}} in turn contained a subfield of dimension {1/2} (or even a subring or “approximate subring”), then one ought to be able to use this field to generate a counterexample to the strong Kakeya conjecture over the reals. Fortunately, such subfields do not exist; this was a conjecture of Erdos and Volkmann that was proven by Edgar and Miller, and more quantitatively by Bourgain (answering a question of Nets Katz and myself). However, this fact is not entirely trivial to prove, being a key example of the sum-product phenomenon.
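
Here is a discrete caricature of the sum-product phenomenon in Python (the actual theorems concern {\delta}-discretised or finite-field settings, and the exponent {1.3} below is just an illustrative threshold chosen here): a finite set of reals cannot be simultaneously additively structured (small sum set) and multiplicatively structured (small product set).

```python
def sum_set(A):
    return {a + b for a in A for b in A}

def prod_set(A):
    return {a * b for a in A for b in A}

n = 100
AP = set(range(1, n + 1))        # additively structured: |A+A| = 2n-1
GP = {2**k for k in range(n)}    # multiplicatively structured: |A.A| = 2n-1
for A in (AP, GP):
    # in each case the *other* set operation produces a large set
    assert max(len(sum_set(A)), len(prod_set(A))) >= len(A)**1.3
```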

We thus see that to go beyond the {5/2} dimension bound of Wolff for the 3D Kakeya problem over the reals, one must do at least one of two things:

  • (a) Exploit the distinct directions of the lines in {{\mathcal L}} in a way that goes beyond the Wolff axiom; or
  • (b) Exploit the fact that {{\bf R}} does not contain half-dimensional subfields (or more generally, intermediate-dimensional approximate subrings).

(The situation is more complicated in higher dimensions, as there are more obstructions than the Heisenberg group; for instance, in four dimensions quadric surfaces are an important obstruction, as discussed in this paper of mine.)

Various partial or complete results on the Kakeya problem over various fields have been obtained through route (a) or route (b). For instance, in 2000, Nets Katz, Izabella Laba and myself used route (a) to improve Wolff’s lower bound of {5/2} for Kakeya sets very slightly to {5/2+10^{-10}} (for a weak notion of dimension, namely upper Minkowski dimension). In 2004, Bourgain, Katz, and myself established a sum-product estimate which (among other things) ruled out approximate intermediate-dimensional subrings of {{\bf F}_p}, and then pursued route (b) to obtain a corresponding improvement {5/2+\epsilon} to the Kakeya conjecture over finite fields of prime order. The analogous (discretised) sum-product estimate over the reals was established by Bourgain in 2003, which in principle would allow one to extend the result of Katz, Laba and myself to the strong Kakeya setting, but this has not been carried out in the literature. Finally, in 2009, Dvir used route (a) and introduced the polynomial method (as discussed previously here) to completely settle the Kakeya conjecture in finite fields.

Below the fold, I present a heuristic argument of Nets Katz and myself, which in principle would use route (b) to establish the full (strong) Kakeya conjecture. In broad terms, the strategy is as follows:

  1. Assume that the (strong) Kakeya conjecture fails, so that there are sets {E} of the form in Conjecture 3 of dimension {3-\sigma} for some {\sigma>0}. Assume that {E} is “optimal”, in the sense that {\sigma} is as large as possible.
  2. Use the optimality of {E} (and suitable non-isotropic rescalings) to establish strong forms of standard structural properties expected of such sets {E}, namely “stickiness”, “planiness”, “local graininess” and “global graininess” (we will roughly describe these properties below). Heuristically, these properties are constraining {E} to “behave like” a putative Heisenberg group counterexample.
  3. By playing all these structural properties off of each other, show that {E} can be parameterised locally by a one-dimensional set which generates a counterexample to Bourgain’s sum-product theorem. This contradiction establishes the Kakeya conjecture.

Nets and I have had an informal version of this argument for many years, but were never able to make a satisfactory theorem (or even a partial Kakeya result) out of it, because we could not rigorously establish anywhere near enough of the necessary structural properties (stickiness, planiness, etc.) on the optimal set {E} for a large number of reasons (one of which being that we did not have a good notion of dimension that did everything that we wished to demand of it). However, there is beginning to be movement in these directions (e.g. in this recent result of Guth using the polynomial method obtaining a weak version of local graininess on certain Kakeya sets). In view of this (and given that neither Nets nor I have been actively working in this direction for some time now, due to many other projects), we’ve decided to distribute these ideas more widely than before, and in particular on this blog.

Read the rest of this entry »

Roth’s theorem on arithmetic progressions asserts that every subset of the integers {{\bf Z}} of positive upper density contains infinitely many arithmetic progressions of length three. There are many versions and variants of this theorem. Here is one of them:

Theorem 1 (Roth’s theorem) Let {G = (G,+)} be a compact abelian group, with Haar probability measure {\mu}, which is {2}-divisible (i.e. the map {x \mapsto 2x} is surjective) and let {A} be a measurable subset of {G} with {\mu(A) \geq \alpha} for some {0 < \alpha < 1}. Then we have

\displaystyle  \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r)\ d\mu(x) d\mu(r) \gg_\alpha 1,

where {X \gg_\alpha Y} denotes the bound {X \geq c_\alpha Y} for some {c_\alpha > 0} depending only on {\alpha}.

This theorem is usually formulated in the case that {G} is a finite abelian group of odd order (in which case the result is essentially due to Meshulam) or more specifically a cyclic group {G = {\bf Z}/N{\bf Z}} of odd order (in which case it is essentially due to Varnavides), but is also valid for the more general setting of {2}-divisible compact abelian groups, as we shall shortly see. One can be more precise about the dependence of the implied constant {c_\alpha} on {\alpha}, but to keep the exposition simple we will work at the qualitative level here, without trying at all to get good quantitative bounds. The theorem is also true without the {2}-divisibility hypothesis, but the proof we will discuss runs into some technical issues due to the degeneracy of the {2r} shift in that case.

We can deduce Theorem 1 from the following more general Khintchine-type statement. Let {\hat G} denote the Pontryagin dual of a compact abelian group {G}, that is to say the set of all continuous homomorphisms {\xi: x \mapsto \xi \cdot x} from {G} to the (additive) unit circle {{\bf R}/{\bf Z}}. Thus {\hat G} is a discrete abelian group, and functions {f \in L^2(G)} have a Fourier transform {\hat f \in \ell^2(\hat G)} defined by

\displaystyle  \hat f(\xi) := \int_G f(x) e^{-2\pi i \xi \cdot x}\ d\mu(x).

If {G} is {2}-divisible, then {\hat G} is {2}-torsion-free in the sense that the map {\xi \mapsto 2 \xi} is injective. For any finite set {S \subset \hat G} and any radius {\rho>0}, define the Bohr set

\displaystyle  B(S,\rho) := \{ x \in G: \sup_{\xi \in S} \| \xi \cdot x \|_{{\bf R}/{\bf Z}} < \rho \}

where {\|\theta\|_{{\bf R}/{\bf Z}}} denotes the distance of {\theta} to the nearest integer. We refer to the cardinality {|S|} of {S} as the rank of the Bohr set. We record a simple volume bound on Bohr sets:

Lemma 2 (Volume packing bound) Let {G} be a compact abelian group with Haar probability measure {\mu}. For any Bohr set {B(S,\rho)}, we have

\displaystyle  \mu( B( S, \rho ) ) \gg_{|S|, \rho} 1.

Proof: We can cover the torus {({\bf R}/{\bf Z})^S} by {O_{|S|,\rho}(1)} translates {\theta+Q} of the cube {Q := \{ (\theta_\xi)_{\xi \in S} \in ({\bf R}/{\bf Z})^S: \sup_{\xi \in S} \|\theta_\xi\|_{{\bf R}/{\bf Z}} < \rho/2 \}}. Then the sets {\{ x \in G: (\xi \cdot x)_{\xi \in S} \in \theta + Q \}} form a cover of {G}. But all of these sets lie in a translate of {B(S,\rho)}, and the claim then follows from the translation invariance of {\mu}. \Box
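
The lemma is easy to explore numerically in the cyclic case {G = {\bf Z}/N{\bf Z}} (a toy Python sketch; the modulus, frequency set, and radius are choices made here):

```python
def bohr_set(N, S, rho):
    """The Bohr set B(S, rho) in G = Z/NZ: all x with ||xi*x/N|| < rho for
    every frequency xi in S, ||.|| being distance to the nearest integer."""
    def dist(theta):
        return abs(theta - round(theta))
    return [x for x in range(N) if max(dist(xi * x / N) for xi in S) < rho]

N, S, rho = 10007, (1, 123, 4567), 0.1
B = bohr_set(N, S, rho)
density = len(B) / N
assert 0 in B                          # Bohr sets always contain the origin
# the covering argument of Lemma 2 guarantees density >= (rho/2)^|S|
# (here 2/rho is an integer, so exactly (2/rho)^|S| cubes suffice);
# heuristically the density is closer to (2*rho)^|S|.
assert density >= (rho / 2) ** len(S)
```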

Given any Bohr set {B(S,\rho)}, we define a normalised “Lipschitz” cutoff function {\nu_{B(S,\rho)}: G \rightarrow {\bf R}} by the formula

\displaystyle  \nu_{B(S,\rho)}(x) = c_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})_+ \ \ \ \ \ (1)

where {c_{B(S,\rho)}} is the constant such that

\displaystyle  \int_G \nu_{B(S,\rho)}\ d\mu = 1,

thus

\displaystyle c_{B(S,\rho)} = \left( \int_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})\ d\mu(x) \right)^{-1}.

The function {\nu_{B(S,\rho)}} should be viewed as an {L^1}-normalised “tent function” cutoff to {B(S,\rho)}. Note from Lemma 2 that

\displaystyle  1 \ll_{|S|,\rho} c_{B(S,\rho)} \ll_{|S|,\rho} 1. \ \ \ \ \ (2)

We then have the following sharper version of Theorem 1:

Theorem 3 (Roth-Khintchine theorem) Let {G = (G,+)} be a {2}-divisible compact abelian group, with Haar probability measure {\mu}, and let {\epsilon>0}. Then for any measurable function {f: G \rightarrow [0,1]}, there exists a Bohr set {B(S,\rho)} with {|S| \ll_\epsilon 1} and {\rho \gg_\epsilon 1} such that

\displaystyle  \int_G \int_G f(x) f(x+r) f(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \ \ \ \ \ (3)

\displaystyle  \geq (\int_G f\ d\mu)^3 - O(\epsilon)

where {*} denotes the convolution operation

\displaystyle  f*g(x) := \int_G f(y) g(x-y)\ d\mu(y).

A variant of this result (expressed in the language of ergodic theory) appears in this paper of Bergelson, Host, and Kra; a combinatorial version of the Bergelson-Host-Kra result that is closer to Theorem 3 subsequently appeared in this paper of Ben Green and myself, but this theorem arguably appears implicitly in a much older paper of Bourgain. To see why Theorem 3 implies Theorem 1, we apply the theorem with {f := 1_A} and {\epsilon} equal to a small multiple of {\alpha^3} to conclude that there is a Bohr set {B(S,\rho)} with {|S| \ll_\alpha 1} and {\rho \gg_\alpha 1} such that

\displaystyle  \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \gg \alpha^3.

But from (2) we have the pointwise bound {\nu_{B(S,\rho)}*\nu_{B(S,\rho)} \ll_\alpha 1}, and Theorem 1 follows.
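
The counting in Theorem 1 can also be demonstrated directly in a small cyclic group (a toy Python illustration, not a proof; the modulus, density, and random seed are choices made here):

```python
import random

def triple_average(A, N):
    """(1/N^2) * sum_{x,r in Z/NZ} 1_A(x) 1_A(x+r) 1_A(x+2r)."""
    ind = [1 if x in A else 0 for x in range(N)]
    total = sum(ind[x] * ind[(x + r) % N] * ind[(x + 2 * r) % N]
                for x in range(N) for r in range(N))
    return total / N**2

N, alpha = 301, 0.3                    # N odd, so Z/NZ is 2-divisible
A_int = set(range(int(alpha * N)))     # an interval: many 3-term progressions
assert triple_average(A_int, N) > 0

random.seed(1)
A_rand = {x for x in range(N) if random.random() < alpha}
avg = triple_average(A_rand, N)        # for random A, concentrates near alpha^3
assert 0.003 < avg < 0.1
```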

Below the fold, we give a short proof of Theorem 3, using an “energy pigeonholing” argument that essentially dates back to the 1986 paper of Bourgain mentioned previously (not to be confused with a later 1999 paper of Bourgain on Roth’s theorem that was highly influential, for instance in emphasising the importance of Bohr sets). The idea is to use the pigeonhole principle to choose the Bohr set {B(S,\rho)} to capture all the “large Fourier coefficients” of {f}, but such that a certain “dilate” of {B(S,\rho)} does not capture much more Fourier energy of {f} than {B(S,\rho)} itself. The bound (3) may then be obtained through elementary Fourier analysis, without much need to explicitly compute things like the Fourier transform of an indicator function of a Bohr set. (However, the bound obtained by this argument is going to be quite poor – of tower-exponential type.) To do this we perform a structural decomposition of {f} into “structured”, “small”, and “highly pseudorandom” components, as is common in the subject (e.g. in this previous blog post), but even though we crucially need to retain non-negativity of one of the components in this decomposition, we can avoid recourse to conditional expectation with respect to a partition (or “factor”) of the space, using instead convolution with one of the {\nu_{B(S,\rho)}} considered above to achieve a similar effect.

Read the rest of this entry »

Throughout this post, we will work only at the formal level of analysis, ignoring issues of convergence of integrals, justifying differentiation under the integral sign, and so forth. (Rigorous justification of the conservation laws and other identities arising from the formal manipulations below can usually be established in an a posteriori fashion once the identities are in hand, without the need to rigorously justify the manipulations used to come up with these identities).

It is a remarkable fact in the theory of differential equations that many of the ordinary and partial differential equations that are of interest (particularly in geometric PDE, or PDE arising from mathematical physics) admit a variational formulation; thus, a collection {\Phi: \Omega \rightarrow M} of one or more fields on a domain {\Omega} taking values in a space {M} will solve the differential equation of interest if and only if {\Phi} is a critical point of the functional

\displaystyle  J[\Phi] := \int_\Omega L( x, \Phi(x), D\Phi(x) )\ dx \ \ \ \ \ (1)

involving the fields {\Phi} and their first derivatives {D\Phi}, where the Lagrangian {L: \Sigma \rightarrow {\bf R}} is a function on the vector bundle {\Sigma} over {\Omega \times M} consisting of triples {(x, q, \dot q)} with {x \in \Omega}, {q \in M}, and {\dot q: T_x \Omega \rightarrow T_q M} a linear transformation; we also usually keep the boundary data of {\Phi} fixed in case {\Omega} has a non-trivial boundary, although we will ignore these issues here. (We also ignore the possibility of having additional constraints imposed on {\Phi} and {D\Phi}, which require the machinery of Lagrange multipliers to deal with, but which will only serve as a distraction for the current discussion.) It is common to use local coordinates to parameterise {\Omega} as {{\bf R}^d} and {M} as {{\bf R}^n}, in which case {\Sigma} can be viewed locally as {{\bf R}^d \times {\bf R}^n \times {\bf R}^{dn}}, and {L} as a function on this space.

Example 1 (Geodesic flow) Take {\Omega = [0,1]} and {M = (M,g)} to be a Riemannian manifold, which we will write locally in coordinates as {{\bf R}^n} with metric {g_{ij}(q)} for {i,j=1,\dots,n}. A geodesic {\gamma: [0,1] \rightarrow M} is then a critical point (keeping {\gamma(0),\gamma(1)} fixed) of the energy functional

\displaystyle  J[\gamma] := \frac{1}{2} \int_0^1 g_{\gamma(t)}( D\gamma(t), D\gamma(t) )\ dt

or in coordinates (ignoring coordinate patch issues, and using the usual summation conventions)

\displaystyle  J[\gamma] = \frac{1}{2} \int_0^1 g_{ij}(\gamma(t)) \dot \gamma^i(t) \dot \gamma^j(t)\ dt.

As discussed in this previous post, both the Euler equations for rigid body motion, and the Euler equations for incompressible inviscid flow, can be interpreted as geodesic flow (though in the latter case, one has to work really formally, as the manifold {M} is now infinite dimensional).
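
In the flat special case {M = {\bf R}^n} with {g_{ij} = \delta_{ij}}, geodesics are straight line segments, and the critical (indeed minimising) property is visible in a discretisation (a toy Python sketch under that flat-metric assumption; the discretisation scheme is a choice made here):

```python
import random

def energy(path, h):
    """Discrete Dirichlet energy (1/2) * sum_i |(p_{i+1} - p_i)/h|^2 * h
    for a path in R^2 with the flat metric g_ij = delta_ij."""
    return sum(0.5 * ((x1 - x0)**2 + (y1 - y0)**2) / h
               for (x0, y0), (x1, y1) in zip(path, path[1:]))

M = 100
h = 1.0 / M
straight = [(i * h, 2 * i * h) for i in range(M + 1)]  # segment (0,0) -> (1,2)
E0 = energy(straight, h)                               # = (1/2)(1^2 + 2^2) = 2.5

random.seed(0)
perturbed = list(straight)
for i in range(1, M):                                  # endpoints stay fixed
    x, y = perturbed[i]
    perturbed[i] = (x + random.uniform(-0.05, 0.05),
                    y + random.uniform(-0.05, 0.05))
# any perturbation with fixed endpoints strictly increases the energy
assert energy(perturbed, h) > E0
```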

More generally, if {\Omega = (\Omega,h)} is itself a Riemannian manifold, which we write locally in coordinates as {{\bf R}^d} with metric {h_{ab}(x)} for {a,b=1,\dots,d}, then a harmonic map {\Phi: \Omega \rightarrow M} is a critical point of the energy functional

\displaystyle  J[\Phi] := \frac{1}{2} \int_\Omega h^{-1}(x) \otimes g_{\Phi(x)}( D\Phi(x), D\Phi(x) )\ dh(x)

or in coordinates (again ignoring coordinate patch issues)

\displaystyle  J[\Phi] = \frac{1}{2} \int_{{\bf R}^d} h^{ab}(x) g_{ij}(\Phi(x)) (\partial_a \Phi^i(x)) (\partial_b \Phi^j(x))\ \sqrt{\det(h(x))}\ dx.

If we replace the Riemannian manifold {\Omega} by a Lorentzian manifold, such as Minkowski space {{\bf R}^{1+3}}, then the notion of a harmonic map is replaced by that of a wave map, which generalises the scalar wave equation (which corresponds to the case {M={\bf R}}).

Example 2 ({N}-particle interactions) Take {\Omega = {\bf R}} and {M = {\bf R}^3 \otimes {\bf R}^N}; then a function {\Phi: \Omega \rightarrow M} can be interpreted as a collection of {N} trajectories {q_1,\dots,q_N: {\bf R} \rightarrow {\bf R}^3} in space, which we give a physical interpretation as the trajectories of {N} particles. If we assign each particle a positive mass {m_1,\dots,m_N > 0}, and also introduce a potential energy function {V: M \rightarrow {\bf R}}, then it turns out that Newton’s laws of motion {F=ma} in this context (with the force {F_i} on the {i^{th}} particle being given by the conservative force {-\nabla_{q_i} V}) are equivalent to the trajectories {q_1,\dots,q_N} being a critical point of the action functional

\displaystyle  J[\Phi] := \int_{\bf R} \sum_{i=1}^N \frac{1}{2} m_i |\dot q_i(t)|^2 - V( q_1(t),\dots,q_N(t) )\ dt.
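
This equivalence is easy to probe numerically: discretising the action of a single one-dimensional particle of unit mass with {V(q) = q^2/2}, the exact Newtonian trajectory {q(t) = \cos t} should make the discrete action stationary, while a perturbed path with the same endpoints should not. A minimal sketch (the discretisation, time interval, and test paths are ad hoc illustrative choices):

```python
import numpy as np

# Discretise the action J[q] = ∫ (1/2) q̇² − V(q) dt for a single 1D particle
# of unit mass with V(q) = q²/2, and check that the exact Newtonian
# trajectory q(t) = cos t makes the discrete action stationary (gradient with
# respect to the interior points ≈ 0), while a perturbed path with the same
# endpoints does not.
dt = 1e-3
t = np.arange(0.0, 2.0 + dt, dt)

def action_gradient(q):
    # ∂J/∂q_k at interior k: −(q_{k+1} − 2 q_k + q_{k−1})/dt − V'(q_k) dt
    return -(q[2:] - 2 * q[1:-1] + q[:-2]) / dt - q[1:-1] * dt

exact = np.cos(t)                                # solves q̈ = −q (Newton's law)
wrong = np.cos(t) + 0.1 * np.sin(np.pi * t / 2)  # same endpoints, not a solution

print(np.abs(action_gradient(exact)).max())      # tiny (discretisation error only)
print(np.abs(action_gradient(wrong)).max())      # several orders of magnitude larger
```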

Formally, if {\Phi = \Phi_0} is a critical point of a functional {J[\Phi]}, this means that

\displaystyle  \frac{d}{ds} J[ \Phi[s] ]|_{s=0} = 0

whenever {s \mapsto \Phi[s]} is a (smooth) deformation with {\Phi[0]=\Phi_0} (and with {\Phi[s]} respecting whatever boundary conditions are appropriate). Interchanging the derivative and integral, we (formally, at least) arrive at

\displaystyle  \int_\Omega \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0}\ dx = 0. \ \ \ \ \ (2)

Write {\delta \Phi := \frac{d}{ds} \Phi[s]|_{s=0}} for the infinitesimal deformation of {\Phi_0}. By the chain rule, {\frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0}} can be expressed in terms of {x, \Phi_0(x), \delta \Phi(x), D\Phi_0(x), D \delta \Phi(x)}. In coordinates, we have

\displaystyle  \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0} = \delta \Phi^i(x) L_{q^i}(x,\Phi_0(x), D\Phi_0(x)) \ \ \ \ \ (3)

\displaystyle  + \partial_{x^a} \delta \Phi^i(x) L_{\partial_{x^a} q^i} (x,\Phi_0(x), D\Phi_0(x)),

where we parameterise {\Sigma} by {x, (q^i)_{i=1,\dots,n}, (\partial_{x^a} q^i)_{a=1,\dots,d; i=1,\dots,n}}, and we use subscripts on {L} to denote partial derivatives in the various coefficients. (One can of course work in a coordinate-free manner here if one really wants to, but the notation becomes a little cumbersome due to the need to carefully split up the tangent space of {\Sigma}, and we will not do so here.) Thus we can view (2) as an integral identity that asserts the vanishing of a certain integral, whose integrand involves {x, \Phi_0(x), \delta \Phi(x), D\Phi_0(x), D \delta \Phi(x)}, where {\delta \Phi} vanishes at the boundary but is otherwise unconstrained.

A general rule of thumb in PDE and calculus of variations is that whenever one has an integral identity of the form {\int_\Omega F(x)\ dx = 0} for some class of functions {F} that vanishes on the boundary, then there must be an associated differential identity {F = \hbox{div} X} that justifies this integral identity through Stokes’ theorem. This rule of thumb helps explain why integration by parts is used so frequently in PDE to justify integral identities. The rule of thumb can fail when one is dealing with “global” or “cohomologically non-trivial” integral identities of a topological nature, such as the Gauss-Bonnet or Kazhdan-Warner identities, but is quite reliable for “local” or “cohomologically trivial” identities, such as those arising from calculus of variations.

In any case, if we apply this rule to (2), we expect that the integrand {\frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0}} should be expressible as a spatial divergence. This is indeed the case:

Proposition 1 (Formal) Let {\Phi = \Phi_0} be a critical point of the functional {J[\Phi]} defined in (1). Then for any deformation {s \mapsto \Phi[s]} with {\Phi[0] = \Phi_0}, we have

\displaystyle  \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0} = \hbox{div} X \ \ \ \ \ (4)

where {X} is the vector field that is expressible in coordinates as

\displaystyle  X^a := \delta \Phi^i(x) L_{\partial_{x^a} q^i}(x,\Phi_0(x), D\Phi_0(x)). \ \ \ \ \ (5)

Proof: Comparing (4) with (3), we see that the claim is equivalent to the Euler-Lagrange equation

\displaystyle  L_{q^i}(x,\Phi_0(x), D\Phi_0(x)) - \partial_{x^a} L_{\partial_{x^a} q^i}(x,\Phi_0(x), D\Phi_0(x)) = 0. \ \ \ \ \ (6)

The same computation, together with an integration by parts, shows that (2) may be rewritten as

\displaystyle  \int_\Omega ( L_{q^i}(x,\Phi_0(x), D\Phi_0(x)) - \partial_{x^a} L_{\partial_{x^a} q^i}(x,\Phi_0(x), D\Phi_0(x)) ) \delta \Phi^i(x)\ dx = 0.

Since {\delta \Phi^i(x)} is unconstrained on the interior of {\Omega}, the claim (6) follows (at a formal level, at least). \Box
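
For instance, specialising (6) to the setting of Example 2 (so {\Omega \subset {\bf R}} with time variable {t}, and {L = \sum_{i=1}^N \frac{1}{2} m_i |\dot q_i|^2 - V(q_1,\dots,q_N)}), we have {L_{q_i} = -\nabla_{q_i} V} and {L_{\dot q_i} = m_i \dot q_i}, so (6) becomes

\displaystyle  m_i \ddot q_i(t) = - \nabla_{q_i} V( q_1(t), \dots, q_N(t) ),

which is precisely Newton's law {F=ma} with the conservative force {F_i = -\nabla_{q_i} V}, as claimed in that example.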

Many variational problems also enjoy one-parameter continuous symmetries: given any field {\Phi_0} (not necessarily a critical point), one can place that field in a one-parameter family {s \mapsto \Phi[s]} with {\Phi[0] = \Phi_0}, such that

\displaystyle  J[ \Phi[s] ] = J[ \Phi[0] ]

for all {s}; in particular,

\displaystyle  \frac{d}{ds} J[ \Phi[s] ]|_{s=0} = 0,

which can be written as (2) as before. Applying the previous rule of thumb, we thus expect another divergence identity

\displaystyle  \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0} = \hbox{div} Y \ \ \ \ \ (7)

whenever {s \mapsto \Phi[s]} arises from a continuous one-parameter symmetry. This expectation is indeed borne out in many examples. For instance, if the spatial domain {\Omega} is the Euclidean space {{\bf R}^d}, and the Lagrangian (when expressed in coordinates) has no direct dependence on the spatial variable {x}, thus

\displaystyle  L( x, \Phi(x), D\Phi(x) ) = L( \Phi(x), D\Phi(x) ), \ \ \ \ \ (8)

then we obtain {d} translation symmetries

\displaystyle  \Phi[s](x) := \Phi(x - s e^a )

for {a=1,\dots,d}, where {e^1,\dots,e^d} is the standard basis for {{\bf R}^d}. For a fixed {a}, the left-hand side of (7) then becomes

\displaystyle  \frac{d}{ds} L( \Phi(x-se^a), D\Phi(x-se^a) )|_{s=0} = -\partial_{x^a} [ L( \Phi(x), D\Phi(x) ) ]

\displaystyle  = \hbox{div} Y

where {Y(x) = - L(\Phi(x), D\Phi(x)) e^a}. Another common type of symmetry is a pointwise symmetry, in which

\displaystyle  L( x, \Phi[s](x), D\Phi[s](x) ) = L( x, \Phi[0](x), D\Phi[0](x) ) \ \ \ \ \ (9)

for all {x}, in which case (7) clearly holds with {Y=0}.

If we subtract (4) from (7), we obtain the celebrated theorem of Noether linking symmetries with conservation laws:

Theorem 2 (Noether’s theorem) Suppose that {\Phi_0} is a critical point of the functional (1), and let {\Phi[s]} be a one-parameter continuous symmetry with {\Phi[0] = \Phi_0}. Let {X} be the vector field in (5), and let {Y} be the vector field in (7). Then we have the pointwise conservation law

\displaystyle  \hbox{div}(X-Y) = 0.

In particular, for one-dimensional variational problems, in which {\Omega \subset {\bf R}}, we have the conservation law {(X-Y)(t) = (X-Y)(0)} for all {t \in \Omega} (assuming of course that {\Omega} is connected and contains {0}).

Noether’s theorem gives a systematic way to locate conservation laws for solutions to variational problems. For instance, if {\Omega \subset {\bf R}} and the Lagrangian has no explicit time dependence, thus

\displaystyle  L(t, \Phi(t), \dot \Phi(t)) = L(\Phi(t), \dot \Phi(t)),

then by using the time translation symmetry {\Phi[s](t) := \Phi(t-s)}, we have

\displaystyle  Y(t) = - L( \Phi(t), \dot\Phi(t) )

as discussed previously, whereas we have {\delta \Phi(t) = - \dot \Phi(t)}, and hence by (5)

\displaystyle  X(t) := - \dot \Phi^i(t) L_{\dot q^i}(\Phi(t), \dot \Phi(t)),

and so Noether’s theorem gives conservation of the Hamiltonian

\displaystyle  H(t) := \dot \Phi^i(t) L_{\dot q^i}(\Phi(t), \dot \Phi(t)) - L(\Phi(t), \dot \Phi(t)). \ \ \ \ \ (10)

For instance, for geodesic flow, the Hamiltonian works out to be

\displaystyle  H(t) = \frac{1}{2} g_{ij}(\gamma(t)) \dot \gamma^i(t) \dot \gamma^j(t),

so we see that the speed of the geodesic is conserved over time.
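
These conservation laws are also easy to observe numerically. As an illustration (with an arbitrarily chosen quartic potential; none of the parameters below come from the text), one can integrate Newton's law for a single particle of unit mass and watch the Hamiltonian (10), which here is {\frac{1}{2} |\dot q|^2 + V(q)}, stay constant up to integrator error:

```python
import numpy as np

# Integrate Newton's law q̈ = −∇V(q) for a single particle of unit mass in
# the quartic potential V(q) = |q|⁴/4 with a standard RK4 step, and check
# that the Hamiltonian H = (1/2)|q̇|² + V(q) of (10) is conserved along the
# flow (up to the integrator's truncation error).
def accel(q):
    return -np.dot(q, q) * q                 # −∇V for V(q) = |q|⁴/4

def rk4(q, v, dt):
    k1q, k1v = v, accel(q)
    k2q, k2v = v + dt/2*k1v, accel(q + dt/2*k1q)
    k3q, k3v = v + dt/2*k2v, accel(q + dt/2*k2q)
    k4q, k4v = v + dt*k3v, accel(q + dt*k3q)
    return (q + dt/6*(k1q + 2*k2q + 2*k3q + k4q),
            v + dt/6*(k1v + 2*k2v + 2*k3v + k4v))

def H(q, v):
    return 0.5*np.dot(v, v) + 0.25*np.dot(q, q)**2

q, v = np.array([1.0, 0.0]), np.array([0.0, 0.5])
H0 = H(q, v)
for _ in range(20000):
    q, v = rk4(q, v, 1e-3)
print(abs(H(q, v) - H0))   # tiny: set by integrator accuracy, not the physics
```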

For pointwise symmetries (9), {Y} vanishes, and so Noether’s theorem simplifies to {\hbox{div} X = 0}; in the one-dimensional case {\Omega \subset {\bf R}}, we thus see from (5) that the quantity

\displaystyle  \delta \Phi^i(t) L_{\dot q^i}(t,\Phi_0(t), \dot \Phi_0(t)) \ \ \ \ \ (11)

is conserved in time. For instance, for the {N}-particle system in Example 2, if we have the translation invariance

\displaystyle  V( q_1 + h, \dots, q_N + h ) = V( q_1, \dots, q_N )

for all {q_1,\dots,q_N,h \in {\bf R}^3}, then we have the pointwise translation symmetry

\displaystyle  q_i[s](t) := q_i(t) + s e^j

for all {i=1,\dots,N}, {s \in {\bf R}} and some {j=1,\dots,3}, in which case {\delta q_i(t) = e^j}, and the conserved quantity (11) becomes

\displaystyle  \sum_{i=1}^N m_i \dot q_i^j(t);

as {j=1,\dots,3} was arbitrary, this establishes conservation of the total momentum

\displaystyle  \sum_{i=1}^N m_i \dot q_i(t).
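
One can see this momentum conservation directly in a simulation: with a translation-invariant pair potential (here {V = \frac{1}{2}|q_1-q_2|^2}, an arbitrary choice, as are the masses and initial data), the internal forces cancel in pairs, and the total momentum {\sum_i m_i \dot q_i} of a velocity Verlet integration is conserved to rounding error:

```python
import numpy as np

# Two particles in R³ with translation-invariant potential V = |q₁ − q₂|²/2.
# The internal forces are equal and opposite (Newton's third law), so the
# velocity Verlet scheme conserves the total momentum Σ mᵢ q̇ᵢ exactly,
# up to floating-point rounding.
m = np.array([1.0, 2.0])

def forces(q):                       # q has shape (2, 3)
    f = q[1] - q[0]                  # force on particle 0 from the spring
    return np.array([f, -f])         # forces sum to zero

q = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, -0.5]])
v = np.array([[0.3, 0.0, 0.1], [-0.2, 0.4, 0.0]])
p0 = (m[:, None] * v).sum(axis=0)    # initial total momentum

dt = 1e-3
a = forces(q) / m[:, None]
for _ in range(10000):               # velocity Verlet
    q = q + dt*v + 0.5*dt*dt*a
    a_new = forces(q) / m[:, None]
    v = v + 0.5*dt*(a + a_new)
    a = a_new

drift = np.abs((m[:, None] * v).sum(axis=0) - p0).max()
print(drift)                         # rounding error only
```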

Similarly, if we have the rotation invariance

\displaystyle  V( R q_1, \dots, Rq_N ) = V( q_1, \dots, q_N )

for any {q_1,\dots,q_N \in {\bf R}^3} and {R \in SO(3)}, then we have the pointwise rotation symmetry

\displaystyle  q_i[s](t) := \exp( s A ) q_i(t)

for any skew-symmetric real {3 \times 3} matrix {A}, in which case {\delta q_i(t) = A q_i(t)}, and the conserved quantity (11) becomes

\displaystyle  \sum_{i=1}^N m_i \langle A q_i(t), \dot q_i(t) \rangle;

since {A} is an arbitrary skew-symmetric matrix, this establishes conservation of the total angular momentum

\displaystyle  \sum_{i=1}^N m_i q_i(t) \wedge \dot q_i(t).
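
The angular momentum conservation can likewise be checked numerically for a single particle in a rotation-invariant potential (here {V(q) = |q|^4/4}, again an arbitrary illustrative choice). The symplectic Euler scheme used below happens to conserve {q \wedge \dot q} exactly for central forces, since each update changes {q \wedge \dot q} only by multiples of {q \wedge F(q) = 0} or {\dot q \wedge \dot q = 0}; the observed drift is thus pure rounding error:

```python
import numpy as np

# Single particle in the rotation-invariant potential V(q) = |q|⁴/4.
# Symplectic Euler (update v first, then q) conserves the angular momentum
# q ∧ q̇ exactly for central forces: the v-update changes it by
# dt · q × F(q) = 0 (F parallel to q), and the q-update by dt · v × v = 0.
def accel(q):
    return -np.dot(q, q) * q           # central force, parallel to q

q = np.array([1.0, 0.0, 0.2])
v = np.array([0.0, 0.6, 0.1])
L0 = np.cross(q, v)

dt = 1e-3
for _ in range(10000):                 # symplectic Euler
    v = v + dt * accel(q)
    q = q + dt * v

drift = np.abs(np.cross(q, v) - L0).max()
print(drift)                           # rounding error only
```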

Below the fold, I will describe how Noether’s theorem can be used to locate all of the conserved quantities for the Euler equations of inviscid fluid flow, discussed in this previous post, by interpreting that flow as geodesic flow in an infinite dimensional manifold.

Read the rest of this entry »

I’ve just uploaded to the arXiv the paper “Finite time blowup for an averaged three-dimensional Navier-Stokes equation“, submitted to J. Amer. Math. Soc.. The main purpose of this paper is to formalise the “supercriticality barrier” for the global regularity problem for the Navier-Stokes equation, which roughly speaking asserts that it is not possible to establish global regularity by any “abstract” approach which only uses upper bound function space estimates on the nonlinear part of the equation, combined with the energy identity. This is done by constructing a modification of the Navier-Stokes equations with a nonlinearity that obeys essentially all of the function space estimates that the true Navier-Stokes nonlinearity does, and which also obeys the energy identity, but for which one can construct solutions that blow up in finite time. Results of this type had been previously established by Montgomery-Smith, Gallagher-Paicu, and Li-Sinai for variants of the Navier-Stokes equation without the energy identity, and by Katz-Pavlovic and by Cheskidov for dyadic analogues of the Navier-Stokes equations in five and higher dimensions that obeyed the energy identity (see also the work of Plechac and Sverak and of Hou and Lei that also suggest blowup for other Navier-Stokes type models obeying the energy identity in five and higher dimensions), but to my knowledge this is the first blowup result for a Navier-Stokes type equation in three dimensions that also obeys the energy identity. Intriguingly, the method of proof in fact hints at a possible route to establishing blowup for the true Navier-Stokes equations, which I am now increasingly inclined to believe is the case (albeit for a very small set of initial data).

To state the results more precisely, recall that the Navier-Stokes equations can be written in the form

\displaystyle  \partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p

for a divergence-free velocity field {u} and a pressure field {p}, where {\nu>0} is the viscosity, which we will normalise to be one. We will work in the non-periodic setting, so the spatial domain is {{\bf R}^3}, and for sake of exposition I will not discuss matters of regularity or decay of the solution (but we will always be working with strong notions of solution here rather than weak ones). Applying the Leray projection {P} onto divergence-free vector fields to this equation, we can eliminate the pressure, and obtain an evolution equation

\displaystyle  \partial_t u = \Delta u + B(u,u) \ \ \ \ \ (1)

purely for the velocity field, where {B} is a certain bilinear operator on divergence-free vector fields (specifically, {B(u,v) = -\frac{1}{2} P( (u \cdot \nabla) v + (v \cdot \nabla) u)}). The global regularity problem for Navier-Stokes is then equivalent to the global regularity problem for the evolution equation (1).

An important feature of the bilinear operator {B} appearing in (1) is the cancellation law

\displaystyle  \langle B(u,u), u \rangle = 0

(using the {L^2} inner product on divergence-free vector fields), which leads in particular to the fundamental energy identity

\displaystyle  \frac{1}{2} \int_{{\bf R}^3} |u(T,x)|^2\ dx + \int_0^T \int_{{\bf R}^3} |\nabla u(t,x)|^2\ dx dt = \frac{1}{2} \int_{{\bf R}^3} |u(0,x)|^2\ dx.
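
Formally, this identity arises by taking the {L^2} inner product of (1) with {u} and applying the cancellation law:

\displaystyle  \partial_t \frac{1}{2} \int_{{\bf R}^3} |u(t,x)|^2\ dx = \langle \Delta u, u \rangle + \langle B(u,u), u \rangle = - \int_{{\bf R}^3} |\nabla u(t,x)|^2\ dx;

integrating this in time from {0} to {T} then gives the identity as stated.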

This identity (and its consequences) provide essentially the only known a priori bound on solutions to the Navier-Stokes equations from large data and arbitrary times. Unfortunately, as discussed in this previous post, the quantities controlled by the energy identity are supercritical with respect to scaling, which is the fundamental obstacle that has defeated all attempts to solve the global regularity problem for Navier-Stokes without any additional assumptions on the data or solution (e.g. perturbative hypotheses, or a priori control on a critical norm such as the {L^\infty_t L^3_x} norm).

Our main result is then (slightly informally stated) as follows

Theorem 1 There exists an averaged version {\tilde B} of the bilinear operator {B}, of the form

\displaystyle  \tilde B(u,v) := \int_\Omega m_{3,\omega}(D) Rot_{3,\omega}

\displaystyle B( m_{1,\omega}(D) Rot_{1,\omega} u, m_{2,\omega}(D) Rot_{2,\omega} v )\ d\mu(\omega)

for some probability space {(\Omega, \mu)}, some spatial rotation operators {Rot_{i,\omega}} for {i=1,2,3}, and some Fourier multipliers {m_{i,\omega}} of order {0}, for which one still has the cancellation law

\displaystyle  \langle \tilde B(u,u), u \rangle = 0

and for which the averaged Navier-Stokes equation

\displaystyle  \partial_t u = \Delta u + \tilde B(u,u) \ \ \ \ \ (2)

admits solutions that blow up in finite time.

(There are some integrability conditions on the Fourier multipliers {m_{i,\omega}} required in the above theorem in order for the conclusion to be non-trivial, but I am omitting them here for sake of exposition.)

Because spatial rotations and Fourier multipliers of order {0} are bounded on most function spaces, {\tilde B} automatically obeys almost all of the upper bound estimates that {B} does. Thus, this theorem blocks any attempt to prove global regularity for the true Navier-Stokes equations which relies purely on the energy identity and on upper bound estimates for the nonlinearity; one must use some additional structure of the nonlinear operator {B} which is not shared by an averaged version {\tilde B}. Such additional structure certainly exists – for instance, the Navier-Stokes equation has a vorticity formulation involving only differential operators rather than pseudodifferential ones, whereas a general equation of the form (2) does not. However, “abstract” approaches to global regularity generally do not exploit such structure, and thus cannot be used to affirmatively answer the Navier-Stokes problem.

It turns out that the particular averaged bilinear operator {\tilde B} that we will use will be a finite linear combination of local cascade operators, which take the form

\displaystyle  C(u,v) := \sum_{n \in {\bf Z}} (1+\epsilon_0)^{5n/2} \langle u, \psi_{1,n} \rangle \langle v, \psi_{2,n} \rangle \psi_{3,n}

where {\epsilon_0>0} is a small parameter, {\psi_1,\psi_2,\psi_3} are Schwartz vector fields whose Fourier transform is supported on an annulus, and {\psi_{i,n}(x) := (1+\epsilon_0)^{3n/2} \psi_i( (1+\epsilon_0)^n x)} is an {L^2}-rescaled version of {\psi_i} (basically a “wavelet” of wavelength about {(1+\epsilon_0)^{-n}} centred at the origin). Such operators were essentially introduced by Katz and Pavlovic as dyadic models for {B}; they have essentially the same scaling property as {B} (except that one can only scale along powers of {1+\epsilon_0}, rather than over all positive reals), and in fact they can be expressed as an average of {B} in the sense of the above theorem, as can be shown after a somewhat tedious amount of Fourier-analytic symbol manipulations.

If we consider nonlinearities {\tilde B} which are a finite linear combination of local cascade operators, then the equation (2) more or less collapses to a system of ODE in certain “wavelet coefficients” of {u}. The precise ODE that shows up depends on what precise combination of local cascade operators one is using. Katz and Pavlovic essentially considered a single cascade operator together with its “adjoint” (needed to preserve the energy identity), and arrived (more or less) at the system of ODE

\displaystyle  \partial_t X_n = - (1+\epsilon_0)^{2n} X_n + (1+\epsilon_0)^{\frac{5}{2}(n-1)} X_{n-1}^2 - (1+\epsilon_0)^{\frac{5}{2} n} X_n X_{n+1} \ \ \ \ \ (3)

where {X_n: [0,T] \rightarrow {\bf R}} are scalar fields for each integer {n}. (Actually, Katz-Pavlovic worked with a technical variant of this particular equation, but the differences are not so important for this current discussion.) Note that the quadratic terms on the RHS carry a higher exponent of {1+\epsilon_0} than the dissipation term; this reflects the supercritical nature of this evolution (the energy {\frac{1}{2} \sum_n X_n^2} is monotone decreasing in this flow, so the natural size of {X_n} given the control on the energy is {O(1)}). There is a slight technical issue with the dissipation if one wishes to embed (3) into an equation of the form (2), but it is minor and I will not discuss it further here.
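
To get a feel for the dynamics of (3), one can simply integrate a truncated version of the system numerically. The sketch below (with the illustrative, non-canonical choices {\epsilon_0 = 0.1} and a truncation to {21} modes) exhibits the two features just mentioned: the quadratic terms telescope, so the total energy {\frac{1}{2} \sum_n X_n^2} decreases in time, while energy placed in the lowest mode begins to cascade upwards:

```python
import numpy as np

# Truncated simulation of the Katz-Pavlovic-type system (3),
#   ∂_t X_n = −λ^{2n} X_n + λ^{5(n−1)/2} X_{n−1}² − λ^{5n/2} X_n X_{n+1},
# with λ = 1 + ε₀.  The quadratic terms telescope, so the total energy
# (1/2) Σ X_n² is monotone non-increasing; the simulation also shows energy
# flowing from the lowest mode to higher ones.  (The truncation to 21 modes
# and the choice ε₀ = 0.1 are illustrative, not taken from the paper.)
eps0 = 0.1
lam = 1 + eps0
N = 20                                       # modes X_0, ..., X_N

def rhs(X):
    dX = -lam**(2*np.arange(N+1)) * X                        # dissipation
    dX[1:]  += lam**(2.5*np.arange(0, N)) * X[:-1]**2        # inflow from X_{n-1}
    dX[:-1] -= lam**(2.5*np.arange(0, N)) * X[:-1] * X[1:]   # outflow to X_{n+1}
    return dX

X = np.zeros(N+1); X[0] = 1.0
energy = lambda X: 0.5*np.sum(X**2)
E0 = energy(X)
dt = 1e-4
for _ in range(20000):                       # RK4 up to time t = 2
    k1 = rhs(X); k2 = rhs(X + dt/2*k1)
    k3 = rhs(X + dt/2*k2); k4 = rhs(X + dt*k3)
    X = X + dt/6*(k1 + 2*k2 + 2*k3 + k4)

print(E0, energy(X))    # energy strictly decreases (dissipation)
print(X[:4])            # some energy has cascaded to higher modes
```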

In principle, if the {X_n} mode has size comparable to {1} at some time {t_n}, then energy should flow from {X_n} to {X_{n+1}} at a rate comparable to {(1+\epsilon_0)^{\frac{5}{2} n}}, so that by time {t_{n+1} \approx t_n + (1+\epsilon_0)^{-\frac{5}{2} n}} or so, most of the energy of {X_n} should have drained into the {X_{n+1}} mode (with hardly any energy dissipated). Since the series {\sum_{n \geq 1} (1+\epsilon_0)^{-\frac{5}{2} n}} is summable, this suggests finite time blowup for this ODE as the energy races ever more quickly to higher and higher modes. Such a scenario was indeed established by Katz and Pavlovic (and refined by Cheskidov) if the dissipation strength {(1+\epsilon_0)^{2n}} was weakened somewhat (the exponent {2} has to be lowered to be less than {\frac{5}{3}}). As mentioned above, this is enough to give a version of Theorem 1 in five and higher dimensions.

On the other hand, it was shown a few years ago by Barbato, Morandin, and Romito that (3) in fact admits global smooth solutions (at least in the dyadic case {\epsilon_0=1}, and assuming non-negative initial data). Roughly speaking, the problem is that as energy is being transferred from {X_n} to {X_{n+1}}, energy is also simultaneously being transferred from {X_{n+1}} to {X_{n+2}}, and as such the solution races off to higher modes a bit too prematurely, without absorbing all of the energy from lower modes. This weakens the strength of the blowup to the point where the moderately strong dissipation in (3) is enough to kill the high frequency cascade before a true singularity occurs. Because of this, the original Katz-Pavlovic model cannot quite be used to establish Theorem 1 in three dimensions. (Actually, the original Katz-Pavlovic model had some additional dispersive features which allowed for another proof of global smooth solutions, which is an unpublished result of Nazarov.)

To get around this, I had to “engineer” an ODE system with similar features to (3) (namely, a quadratic nonlinearity, a monotone total energy, and the indicated exponents of {(1+\epsilon_0)} for both the dissipation term and the quadratic terms), but for which the cascade of energy from scale {n} to scale {n+1} was not interrupted by the cascade of energy from scale {n+1} to scale {n+2}. To do this, I needed to insert a delay in the cascade process (so that after energy was dumped into scale {n}, it would take some time before the energy would start to transfer to scale {n+1}), but the process also needed to be abrupt (once the process of energy transfer started, it needed to conclude very quickly, before the delayed transfer for the next scale kicked in). It turned out that one could build a “quadratic circuit” out of some basic “quadratic gates” (analogous to how an electrical circuit could be built out of basic gates such as amplifiers or resistors) that achieved this task, leading to an ODE system essentially of the form

\displaystyle \partial_t X_{1,n} = - (1+\epsilon_0)^{2n} X_{1,n}

\displaystyle  + (1+\epsilon_0)^{5n/2} (- \epsilon^{-2} X_{3,n} X_{4,n} - \epsilon X_{1,n} X_{2,n} - \epsilon^2 \exp(-K^{10}) X_{1,n} X_{3,n}

\displaystyle  + K X_{4,n-1}^2)

\displaystyle  \partial_t X_{2,n} = - (1+\epsilon_0)^{2n} X_{2,n} + (1+\epsilon_0)^{5n/2} (\epsilon X_{1,n}^2 - \epsilon^{-1} K^{10} X_{3,n}^2)

\displaystyle  \partial_t X_{3,n} = - (1+\epsilon_0)^{2n} X_{3,n} + (1+\epsilon_0)^{5n/2} (\epsilon^2 \exp(-K^{10}) X_{1,n}^2

\displaystyle + \epsilon^{-1} K^{10} X_{2,n} X_{3,n} )

\displaystyle  \partial_t X_{4,n} =- (1+\epsilon_0)^{2n} X_{4,n} + (1+\epsilon_0)^{5n/2} (\epsilon^{-2} X_{3,n} X_{1,n}

\displaystyle - (1+\epsilon_0)^{5/2} K X_{4,n} X_{1,n+1})

where {K \geq 1} is a suitable large parameter and {\epsilon > 0} is a suitable small parameter (much smaller than {1/K}). To visualise the dynamics of such a system, I found it useful to describe this system graphically by a “circuit diagram” that is analogous (but not identical) to the circuit diagrams arising in electrical engineering:

circuit-1

The coupling constants here range widely from being very large to very small; in practice, this makes the {X_{2,n}} and {X_{3,n}} modes absorb very little energy, but exert a sizeable influence on the remaining modes. If a lot of energy is suddenly dumped into {X_{1,n}}, what happens next is roughly as follows: for a moderate period of time, nothing much happens other than a trickle of energy into {X_{2,n}}, which in turn causes a rapid exponential growth of {X_{3,n}} (from a very low base). After this delay, {X_{3,n}} suddenly crosses a certain threshold, at which point it causes {X_{1,n}} and {X_{4,n}} to exchange energy back and forth with extreme speed. The energy from {X_{4,n}} then rapidly drains into {X_{1,n+1}}, and the process begins again (with a slight loss in energy due to the dissipation). If one plots the total energy {E_n := \frac{1}{2} ( X_{1,n}^2 + X_{2,n}^2 + X_{3,n}^2 + X_{4,n}^2 )} as a function of time, it looks schematically like this:

energy-blowup

As in the previous heuristic discussion, the time between cascades from one frequency scale to the next decays exponentially, leading to blowup at some finite time {T}. (One could describe the dynamics here as being similar to the famous “lighting the beacons” scene in the Lord of the Rings movies, except that (a) as each beacon gets ignited, the previous one is extinguished, as per the energy identity; (b) the time between beacon lightings decreases exponentially; and (c) there is no soundtrack.)

There is a real (but remote) possibility that this sort of construction can be adapted to the true Navier-Stokes equations. The basic blowup mechanism in the averaged equation is that of a von Neumann machine, or more precisely a construct (built within the laws of the inviscid evolution {\partial_t u = \tilde B(u,u)}) that, after some time delay, manages to suddenly create a replica of itself at a finer scale (and to largely erase its original instantiation in the process). In principle, such a von Neumann machine could also be built out of the laws of the inviscid form of the Navier-Stokes equations (i.e. the Euler equations). In physical terms, one would have to build the machine purely out of an ideal fluid (i.e. an inviscid incompressible fluid). If one could somehow create enough “logic gates” out of ideal fluid, one could presumably build a sort of “fluid computer”, at which point the task of building a von Neumann machine appears to reduce to a software engineering exercise rather than a PDE problem (providing that the gates are suitably stable with respect to perturbations, but (as with actual computers) this can presumably be done by converting the analog signals of fluid mechanics into a more error-resistant digital form). The key thing missing in this program (in both senses of the word) to establish blowup for Navier-Stokes is to construct the logic gates within the laws of ideal fluids. (Compare with the situation for cellular automata such as Conway’s “Game of Life“, in which Turing complete computers, universal constructors, and replicators have all been built within the laws of that game.)

This is the sixth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page (which has recently returned to full functionality, after a partial outage).

The current focus is on improving the upper bound on {H_1} under the assumption of the generalised Elliott-Halberstam conjecture (GEH) from {H_1 \leq 8} to {H_1 \leq 6}, which looks to be the limit of the method (see this previous comment for a semi-rigorous reason as to why {H_1 \leq 4} is not possible with this method). With the most general Selberg sieve available, the problem reduces to the following three-dimensional variational one:

Problem 1 Does there exist a (not necessarily convex) polytope {R \subset [0,1]^3} with quantities {0 \leq \varepsilon_1,\varepsilon_2,\varepsilon_3 \leq 1}, and a non-trivial square-integrable function {F: {\bf R}^3 \rightarrow {\bf R}} supported on {R} such that

  • {R + R \subset \{ (x,y,z) \in [0,2]^3: \min(x+y,y+z,z+x) \leq 2 \},}
  • {\int_0^\infty F(x,y,z)\ dx = 0} when {y+z \geq 1+\varepsilon_1};
  • {\int_0^\infty F(x,y,z)\ dy = 0} when {x+z \geq 1+\varepsilon_2};
  • {\int_0^\infty F(x,y,z)\ dz = 0} when {x+y \geq 1+\varepsilon_3};

and such that we have the inequality

\displaystyle  \int_{y+z \leq 1-\varepsilon_1} (\int_{\bf R} F(x,y,z)\ dx)^2\ dy dz

\displaystyle + \int_{z+x \leq 1-\varepsilon_2} (\int_{\bf R} F(x,y,z)\ dy)^2\ dz dx

\displaystyle + \int_{x+y \leq 1-\varepsilon_3} (\int_{\bf R} F(x,y,z)\ dz)^2\ dx dy

\displaystyle  > 2 \int_R F(x,y,z)^2\ dx dy dz?

(Initially it was assumed that {R} was convex, but we have now realised that this is not necessary.)

An affirmative answer to this question will imply {H_1 \leq 6} on GEH. We are “within almost two percent” of this claim; we cannot quite reach {2} yet, but have got as far as {1.959633}. However, we have not yet fully optimised {F} in the above problem.

The most promising route so far is to take the symmetric polytope

\displaystyle  R = \{ (x,y,z) \in [0,1]^3: x+y+z \leq 3/2 \}

with {F} symmetric as well, and {\varepsilon_1=\varepsilon_2=\varepsilon_3=\varepsilon} (we suspect that the optimal {\varepsilon} will be roughly {1/6}). (However, it is certainly worth also taking a look at easier model problems, such as the polytope {{\cal R}'_3 := \{ (x,y,z) \in [0,1]^3: x+y,y+z,z+x \leq 1\}}, which has no vanishing marginal conditions to contend with; more recently we have been looking at the non-convex polytope {R = \{x+y,x+z \leq 1 \} \cup \{ x+y,y+z \leq 1 \} \cup \{ x+z,y+z \leq 1\}}.) Some further details of this particular case are given below the fold.

There should still be some progress to be made in the other regimes of interest – the unconditional bound on {H_1} (currently at {270}), and on any further progress in asymptotic bounds for {H_m} for larger {m} – but the current focus is certainly on the bound on {H_1} on GEH, as we seem to be tantalisingly close to an optimal result here.

Read the rest of this entry »

This is the fifth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page (which has recently returned to full functionality, after a partial outage). In particular, the upper bound for {H_1} has been shaved a little from {272} to {270}, and we have very recently achieved the bound {H_1 \leq 8} on the generalised Elliott-Halberstam conjecture GEH, formulated as Conjecture 1 of this paper of Bombieri, Friedlander, and Iwaniec. We also have explicit bounds for {H_m} for {m \leq 5}, both with and without the assumption of the Elliott-Halberstam conjecture, as well as slightly sharper asymptotics for the upper bound for {H_m} as {m \rightarrow \infty}.

The basic strategy for bounding {H_m} still follows the general paradigm first laid out by Goldston, Pintz, Yildirim: given an admissible {k}-tuple {(h_1,\dots,h_k)}, one needs to locate a non-negative sieve weight {\nu: {\bf Z} \rightarrow {\bf R}^+}, supported on an interval {[x,2x]} for a large {x}, such that the ratio

\displaystyle  \frac{\sum_{i=1}^k \sum_n \nu(n) 1_{n+h_i \hbox{ prime}}}{\sum_n \nu(n)} \ \ \ \ \ (1)

is asymptotically larger than {m} as {x \rightarrow \infty}; this will show that {H_m \leq h_k-h_1}. Thus one wants to locate a sieve weight {\nu} for which one has good lower bounds on the numerator and good upper bounds on the denominator.

One can modify this paradigm slightly, for instance by adding the additional term {\sum_n \nu(n) 1_{n+h_1,\dots,n+h_k \hbox{ composite}}} to the numerator, or by subtracting the term {\sum_n \nu(n) 1_{n+h_1,n+h_k \hbox{ prime}}} from the numerator (which allows one to reduce the bound {h_k-h_1} to {\max(h_k-h_2,h_{k-1}-h_1)}); however, the numerical impact of these tweaks has proven to be negligible thus far.

Despite a number of experiments with other sieves, we are still relying primarily on the Selberg sieve

\displaystyle  \nu(n) := 1_{n=b\ (W)} 1_{[x,2x]}(n) \lambda(n)^2

where {\lambda(n)} is the divisor sum

\displaystyle  \lambda(n) := \sum_{d_1|n+h_1, \dots, d_k|n+h_k} \mu(d_1) \dots \mu(d_k) f( \frac{\log d_1}{\log R}, \dots, \frac{\log d_k}{\log R})

with {R = x^{\theta/2}}, where {\theta} is the level of distribution ({\theta=1/2-} if relying on Bombieri-Vinogradov, {\theta=1-} if assuming Elliott-Halberstam, and (in principle) {\theta = \frac{1}{2} + \frac{13}{540}-} if using Polymath8a technology), and {f: [0,+\infty)^k \rightarrow {\bf R}} is a smooth, compactly supported function. Most of the progress has come by enlarging the class of cutoff functions {f} one is permitted to use.

The baseline bounds for the numerator and denominator in (1) (as established for instance in this previous post) are as follows. If {f} is supported on the simplex

\displaystyle  {\cal R}_k := \{ (t_1,\dots,t_k) \in [0,+\infty)^k: t_1+\dots+t_k < 1 \},

and we define the mixed partial derivative {F: [0,+\infty)^k \rightarrow {\bf R}} by

\displaystyle  F(t_1,\dots,t_k) = \frac{\partial^k}{\partial t_1 \dots \partial t_k} f(t_1,\dots,t_k)

then the denominator in (1) is

\displaystyle  \frac{Bx}{W} (I_k(F) + o(1)) \ \ \ \ \ (2)

where

\displaystyle  B := (\frac{W}{\phi(W) \log R})^k

and

\displaystyle  I_k(F) := \int_{[0,+\infty)^k} F(t_1,\dots,t_k)^2\ dt_1 \dots dt_k.

Similarly, the numerator of (1) is

\displaystyle  \frac{Bx}{W} \frac{2}{\theta} (\sum_{m=1}^k J^{(m)}_k(F) + o(1)) \ \ \ \ \ (3)

where

\displaystyle  J_k^{(m)}(F) := \int_{[0,+\infty)^{k-1}} (\int_0^\infty F(t_1,\ldots,t_k)\ dt_m)^2\ dt_1 \dots dt_{m-1} dt_{m+1} \dots dt_k.

Thus, if we let {M_k} be the supremum of the ratio

\displaystyle  \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}

as {F} ranges over functions supported on {{\cal R}_k} that are not identically zero, then one can prove {H_m \leq h_k - h_1} whenever

\displaystyle  M_k > \frac{2m}{\theta}.
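To see this criterion in action on a toy example, the following Monte Carlo sketch estimates the ratio {\sum_{m=1}^k J_k^{(m)}(F)/I_k(F)} for {k=3} and the simple choice {F(t) = 1 - t_1 - t_2 - t_3} on {{\cal R}_3}. The exact value here is {3/2}, well short of {2m/\theta = 4} at {m=1}, {\theta=1/2}, which is consistent with the need for much larger {k} (and better cutoffs) in practice.

```python
import random

def ratio_estimate(k=3, samples=200_000, seed=0):
    """Monte Carlo estimate of sum_m J_k^{(m)}(F) / I_k(F) for the toy
    cutoff F(t) = max(0, 1 - t_1 - ... - t_k), supported on the simplex."""
    rng = random.Random(seed)
    F = lambda t: max(0.0, 1.0 - sum(t))
    I = J = 0.0
    for _ in range(samples):
        t = [rng.random() for _ in range(k)]
        I += F(t) ** 2
        # By symmetry of this F, all k marginals J_k^{(m)} agree; estimate one.
        # The inner integral is int_0^inf F dt_k = (1 - t_1 - ... - t_{k-1})^2 / 2.
        s = sum(t[:k - 1])
        if s < 1:
            J += ((1 - s) ** 2 / 2) ** 2
    # I is averaged over [0,1]^k and J over [0,1]^{k-1}; both cubes have
    # unit volume, so the sample averages directly estimate the integrals.
    return k * (J / samples) / (I / samples)

print(ratio_estimate())  # the exact ratio for this F is 1.5
```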

We can improve this baseline in a number of ways. Firstly, with regards to the denominator in (1), if one upgrades the Elliott-Halberstam hypothesis {EH[\theta]} to the generalised Elliott-Halberstam hypothesis {GEH[\theta]} (currently known for {\theta < 1/2}, thanks to Motohashi, but conjectured for {\theta < 1}), the asymptotic (2) holds under the more general hypothesis that {F} is supported in a polytope {R}, as long as {R} obeys the inclusion

\displaystyle  R + R \subset \bigcup_{m=1}^k \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: \ \ \ \ \ (4)

\displaystyle  t_1+\dots+t_{m-1}+t_{m+1}+\dots+t_k < 2; t_m < 2/\theta \} \cup \frac{2}{\theta} \cdot {\cal R}_k;

examples of polytopes {R} obeying this constraint include the modified simplex

\displaystyle  {\cal R}'_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\dots+t_{m-1}+t_{m+1}+\dots+t_k < 1

\displaystyle \hbox{ for all } 1 \leq m \leq k \},

the prism

\displaystyle  {\cal R}_{k-1} \times [0, 1/\theta),

the dilated simplex

\displaystyle  \frac{1}{\theta} \cdot {\cal R}_k,

and the truncated simplex

\displaystyle  \frac{k}{k-1} \cdot {\cal R}_k \cap [0,1/\theta)^k.

See this previous post for a proof of these claims.
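One can also sanity-check the inclusion (4) numerically. The sketch below (a randomized spot check, not a proof) samples pairs of points from the prism {{\cal R}_{k-1} \times [0, 1/\theta)} with the illustrative parameters {k=3}, {\theta=1/2}, and verifies that their sums land in the right-hand side of (4).

```python
import random

def in_union(s, k, theta):
    """Membership in the right-hand side of (4)."""
    if sum(s) < 2 / theta:  # the dilated simplex (2/theta) * R_k
        return True
    for m in range(k):
        if sum(s) - s[m] < 2 and s[m] < 2 / theta:
            return True
    return False

def sample_prism(rng, k, theta):
    """Sample from the prism R_{k-1} x [0, 1/theta) by rejection."""
    while True:
        t = [rng.random() for _ in range(k - 1)]
        if sum(t) < 1:
            return t + [rng.random() / theta]

rng = random.Random(1)
k, theta = 3, 0.5
assert all(in_union([a + b for a, b in zip(sample_prism(rng, k, theta),
                                           sample_prism(rng, k, theta))],
                    k, theta)
           for _ in range(10_000))
print("inclusion (4) holds on all sampled pairs")
```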

With regards to the numerator, the asymptotic (3) is valid whenever, for each {1 \leq m \leq k}, the marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanish outside of {{\cal R}_{k-1}}. This is automatic if {F} is supported on {{\cal R}_k}, or on the slightly larger region {{\cal R}'_k}, but is an additional constraint when {F} is supported on one of the other polytopes {R} mentioned above.

More recently, we have obtained a more flexible version of the above asymptotic: if the marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanish outside of {(1+\varepsilon) \cdot {\cal R}_{k-1}} for some {0 < \varepsilon < 1}, then the numerator of (1) has a lower bound of

\displaystyle  \frac{Bx}{W} \frac{2}{\theta} (\sum_{m=1}^k J^{(m)}_{k,\varepsilon}(F) + o(1))

where

\displaystyle  J_{k,\varepsilon}^{(m)}(F) := \int_{(1-\varepsilon) \cdot {\cal R}_{k-1}} (\int_0^\infty F(t_1,\ldots,t_k)\ dt_m)^2\ dt_1 \dots dt_{m-1} dt_{m+1} \dots dt_k.

A proof is given here. Putting all this together, we can conclude

Theorem 1 Suppose we can find {0 \leq \varepsilon < 1} and a function {F} supported on a polytope {R} obeying (4), not identically zero and with all marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanishing outside of {(1+\varepsilon) \cdot {\cal R}_{k-1}}, and with

\displaystyle  \frac{\sum_{m=1}^k J_{k,\varepsilon}^{(m)}(F)}{I_k(F)} > \frac{2m}{\theta}.

Then {GEH[\theta]} implies {H_m \leq h_k-h_1}.

In principle, this very flexible criterion for upper bounding {H_m} should lead to better bounds than before, and in particular we have now established {H_1 \leq 8} on GEH.

Another promising direction is to try to improve the analysis at medium {k} (more specifically, in the regime {k \sim 50}), which is where the current bounds without EH or GEH come from, via numerical quadratic programming. Right now we are only using {\theta=1/2} and the baseline {M_k} analysis, basically for two reasons:

  • We do not have good numerical formulae for integrating polynomials on any region more complicated than the simplex {{\cal R}_k} in medium dimension.
  • The estimates {MPZ^{(i)}[\varpi,\delta]} produced by Polymath8a involve a {\delta} parameter, which introduces additional restrictions on the support of {F} (conservatively, it restricts {F} to {[0,\delta']^k} where {\delta' := \frac{\delta}{1/4+\varpi}} and {\theta = 1/2 + 2 \varpi}; it should be possible to be looser than this (as was done in Polymath8a) but this has not been fully explored yet). This then triggers the previous obstacle of having to integrate on something other than a simplex.

However, these look like solvable problems, and so I would expect that further unconditional improvement for {H_1} should be possible.
