
Let ${{\bf F}_q}$ be a finite field of order ${q = p^n}$, and let ${C}$ be an absolutely irreducible smooth projective curve defined over ${{\bf F}_q}$ (and hence over the algebraic closure ${k := \overline{{\bf F}_q}}$ of that field). For instance, ${C}$ could be the projective elliptic curve

$\displaystyle C = \{ [x,y,z]: y^2 z = x^3 + ax z^2 + b z^3 \}$

in the projective plane ${{\bf P}^2 = \{ [x,y,z]: (x,y,z) \neq (0,0,0) \}}$, where ${a,b \in {\bf F}_q}$ are coefficients whose discriminant ${-16(4a^3+27b^2)}$ is non-vanishing, which is the projective version of the affine elliptic curve

$\displaystyle \{ (x,y): y^2 = x^3 + ax + b \}.$

To each such curve ${C}$ one can associate a genus ${g}$, which we will define later; for instance, elliptic curves have genus ${1}$. We can also count the cardinality ${|C({\bf F}_q)|}$ of the set ${C({\bf F}_q)}$ of ${{\bf F}_q}$-points of ${C}$. The Hasse-Weil bound relates the two:

Theorem 1 (Hasse-Weil bound) ${||C({\bf F}_q)| - q - 1| \leq 2g\sqrt{q}}$.
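As a quick numerical illustration (not needed for any of the arguments below), one can brute-force the point count for a small elliptic curve over a prime field ${{\bf F}_p}$ and verify the Hasse-Weil bound with ${g=1}$; the parameters below are arbitrary.

```python
import math

def count_points(p, a, b):
    """|C(F_p)| for y^2 z = x^3 + a x z^2 + b z^3: affine points plus [0,1,0]."""
    squares = {}
    for y in range(p):
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    count = 1  # the point at infinity [0,1,0]
    for x in range(p):
        count += squares.get((x * x * x + a * x + b) % p, 0)
    return count

p, a, b = 101, 2, 3
assert -16 * (4 * a**3 + 27 * b**2) % p != 0   # discriminant non-vanishing mod p
N = count_points(p, a, b)
assert abs(N - p - 1) <= 2 * math.sqrt(p)      # Hasse-Weil bound with g = 1
```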

The usual proofs of this bound proceed by first establishing a trace formula of the form

$\displaystyle |C({\bf F}_{p^n})| = p^n - \sum_{i=1}^{2g} \alpha_i^n + 1 \ \ \ \ \ (1)$

for some complex numbers ${\alpha_1,\dots,\alpha_{2g}}$ independent of ${n}$; this is in fact a special case of the Lefschetz-Grothendieck trace formula, and can be interpreted as an assertion that the zeta function associated to the curve ${C}$ is rational. The task is then to establish a bound ${|\alpha_i| \leq \sqrt{p}}$ for all ${i=1,\dots,2g}$; this (or more precisely, the slightly stronger assertion ${|\alpha_i| = \sqrt{p}}$) is the Riemann hypothesis for such curves. This can be done either by passing to the Jacobian variety of ${C}$ and using a certain duality available on the cohomology of such varieties, known as the Rosati involution; alternatively, one can pass to the product surface ${C \times C}$ and apply the Riemann-Roch theorem for that surface.

In 1969, Stepanov introduced an elementary method (a version of what is now known as the polynomial method) to count (or at least to upper bound) the quantity ${|C({\bf F}_q)|}$. The method was initially restricted to hyperelliptic curves, but was soon extended to general curves. In particular, Bombieri used this method to give a short proof of the following weaker version of the Hasse-Weil bound:

Theorem 2 (Weak Hasse-Weil bound) If ${q}$ is a perfect square, and ${q \geq (g+1)^4}$, then ${|C({\bf F}_q)| \leq q + (2g+1) \sqrt{q} + 1}$.

In fact, the bound on ${|C({\bf F}_q)|}$ can be sharpened a little bit further, as we will soon see.

Theorem 2 is only an upper bound on ${|C({\bf F}_q)|}$, but there is a Galois-theoretic trick to convert (a slight generalisation of) this upper bound to a matching lower bound, and if one then uses the trace formula (1) (and the “tensor power trick” of sending ${n}$ to infinity to control the weights ${\alpha_i}$) one can then recover the full Hasse-Weil bound. We discuss these steps below the fold.

I’ve discussed Bombieri’s proof of Theorem 2 in this previous post (in the special case of hyperelliptic curves), but now wish to present the full proof, with some minor simplifications from Bombieri’s original presentation; it is mostly elementary, with the deepest fact from algebraic geometry needed being Riemann’s inequality (a weak form of the Riemann-Roch theorem).

The first step is to reinterpret ${|C({\bf F}_q)|}$ as the number of points of intersection between two curves ${C_1,C_2}$ in the surface ${C \times C}$. Indeed, if we define the Frobenius endomorphism ${\hbox{Frob}_q}$ on any projective space by

$\displaystyle \hbox{Frob}_q( [x_0,\dots,x_n] ) := [x_0^q, \dots, x_n^q]$

then this map preserves the curve ${C}$, and the fixed points of this map are precisely the ${{\bf F}_q}$ points of ${C}$:

$\displaystyle C({\bf F}_q) = \{ z \in C: \hbox{Frob}_q(z) = z \}.$

Thus one can interpret ${|C({\bf F}_q)|}$ as the number of points of intersection between the diagonal curve

$\displaystyle \{ (z,z): z \in C \}$

and the Frobenius graph

$\displaystyle \{ (z, \hbox{Frob}_q(z)): z \in C \}$

which are copies of ${C}$ inside ${C \times C}$. But we can use the additional hypothesis that ${q}$ is a perfect square to write this more symmetrically, by taking advantage of the fact that the Frobenius map has a square root

$\displaystyle \hbox{Frob}_q = \hbox{Frob}_{\sqrt{q}}^2$

with ${\hbox{Frob}_{\sqrt{q}}}$ also preserving ${C}$. One can then also interpret ${|C({\bf F}_q)|}$ as the number of points of intersection between the curve

$\displaystyle C_1 := \{ (z, \hbox{Frob}_{\sqrt{q}}(z)): z \in C \} \ \ \ \ \ (2)$

and its transpose

$\displaystyle C_2 := \{ (\hbox{Frob}_{\sqrt{q}}(w), w): w \in C \}.$
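As a toy sanity check of the fixed-point description of ${C({\bf F}_q)}$ (applied here to the affine line rather than to a curve, with hypothetical small parameters): modelling ${{\bf F}_{p^2}}$ as ${{\bf F}_p[t]/(t^2+1)}$ for a prime ${p \equiv 3 \pmod 4}$ (so that ${t^2+1}$ is irreducible), the fixed points of ${\hbox{Frob}_p: x \mapsto x^p}$ are exactly the ${p}$ elements of the subfield ${{\bf F}_p}$.

```python
p = 7  # a prime with p ≡ 3 (mod 4), so t^2 + 1 is irreducible over F_p

def mul(u, v):
    # multiply a + b*t and c + d*t in F_p[t]/(t^2 + 1), i.e. with t^2 = -1
    a, b = u
    c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def power(u, n):
    # binary exponentiation in the field F_{p^2}
    r = (1, 0)
    while n:
        if n & 1:
            r = mul(r, u)
        u = mul(u, u)
        n >>= 1
    return r

# fixed points of Frob_p : x -> x^p are exactly the p elements of F_p
fixed = [(a, b) for a in range(p) for b in range(p) if power((a, b), p) == (a, b)]
assert fixed == [(a, 0) for a in range(p)]
```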

Let ${k(C \times C)}$ be the field of rational functions on ${C \times C}$ (with coefficients in ${k}$), and define ${k(C_1)}$, ${k(C_2)}$, and ${k(C_1 \cap C_2)}$ analogously (although ${C_1 \cap C_2}$ is likely to be disconnected, so ${k(C_1 \cap C_2)}$ will just be a ring rather than a field). We then (morally) have the commuting square

$\displaystyle \begin{array}{ccccc} && k(C \times C) && \\ & \swarrow & & \searrow & \\ k(C_1) & & & & k(C_2) \\ & \searrow & & \swarrow & \\ && k(C_1 \cap C_2) && \end{array},$

if we ignore the issue that a rational function on, say, ${C \times C}$, might blow up on all of ${C_1}$ and thus not have a well-defined restriction to ${C_1}$. We use ${\pi_1: k(C \times C) \rightarrow k(C_1)}$ and ${\pi_2: k(C \times C) \rightarrow k(C_2)}$ to denote the restriction maps. Furthermore, we have obvious isomorphisms ${\iota_1: k(C_1) \rightarrow k(C)}$, ${\iota_2: k(C_2) \rightarrow k(C)}$ coming from composing with the graphing maps ${z \mapsto (z, \hbox{Frob}_{\sqrt{q}}(z))}$ and ${w \mapsto (\hbox{Frob}_{\sqrt{q}}(w), w)}$.

The idea now is to find a rational function ${f \in k(C \times C)}$ on the surface ${C \times C}$ of controlled degree which vanishes when restricted to ${C_2}$, but is non-vanishing (and not blowing up) when restricted to ${C_1}$. On ${C_1}$, we thus get a non-zero rational function ${f \downharpoonright_{C_1}}$ of controlled degree which vanishes on ${C_1 \cap C_2}$ – which then lets us bound the cardinality of ${C_1 \cap C_2}$ in terms of the degree of ${f \downharpoonright_{C_1}}$. (In Bombieri’s original argument, one required vanishing to high order on the ${C_1}$ side, but in our presentation, we have factored out a ${\hbox{Frob}_{\sqrt{q}}}$ term which removes this high order vanishing condition.)

To find this ${f}$, we will use linear algebra. Namely, we will locate a finite-dimensional subspace ${V}$ of ${k(C \times C)}$ (consisting of certain “controlled degree” rational functions) which projects injectively to ${k(C_1)}$, but whose projection to ${k(C_2)}$ has strictly smaller dimension than ${V}$ itself. The rank-nullity theorem then forces the existence of a non-zero element ${f}$ of ${V}$ whose projection to ${k(C_2)}$ vanishes, but whose projection to ${k(C_1)}$ is non-zero.

Now we build ${V}$. Pick an ${{\bf F}_q}$-point ${P_\infty}$ of ${C}$, which we will think of as being a point at infinity. (For the purposes of proving Theorem 2, we may clearly assume that ${C({\bf F}_q)}$ is non-empty.) Thus ${P_\infty}$ is fixed by ${\hbox{Frob}_q}$. To simplify the exposition, we will also assume that ${P_\infty}$ is fixed by the square root ${\hbox{Frob}_{\sqrt{q}}}$ of ${\hbox{Frob}_q}$; in the opposite case, when ${\hbox{Frob}_{\sqrt{q}}}$ acts on ${P_\infty}$ with order two, the argument is essentially the same, but all references to ${P_\infty}$ in the second factor of ${C \times C}$ need to be replaced by ${\hbox{Frob}_{\sqrt{q}} P_\infty}$ (we leave the details to the interested reader).

For any natural number ${n}$, define ${R_n}$ to be the set of rational functions ${f \in k(C)}$ which are allowed to have a pole of order up to ${n}$ at ${P_\infty}$, but have no other poles on ${C}$; note that as we are assuming ${C}$ to be smooth, it is unambiguous what a pole is (and what order it will have). (In the fancier language of divisors and Cech cohomology, we have ${R_n = H^0( C, {\mathcal O}_C(n P_\infty) )}$.) The space ${R_n}$ is clearly a vector space over ${k}$; one can view it intuitively as the space of “polynomials” on ${C}$ of “degree” at most ${n}$. When ${n=0}$, ${R_0}$ consists just of the constant functions. Indeed, if ${f \in R_0}$, then the image ${f(C)}$ of ${f}$ avoids ${\infty}$ and so lies in the affine line ${k = {\mathbf P}^1 \backslash \{\infty\}}$; but as ${C}$ is projective, the image ${f(C)}$ needs to be compact (hence closed) in ${{\mathbf P}^1}$, and must therefore be a point, giving the claim.

For higher ${n \geq 1}$, we have the easy relations

$\displaystyle \hbox{dim}(R_{n-1}) \leq \hbox{dim}(R_n) \leq \hbox{dim}(R_{n-1})+1. \ \ \ \ \ (3)$

The former inequality just comes from the trivial inclusion ${R_{n-1} \subset R_n}$. For the latter, observe that if two functions ${f, g}$ lie in ${R_n}$, so that they each have a pole of order at most ${n}$ at ${P_\infty}$, then some linear combination of these functions must have a pole of order at most ${n-1}$ at ${P_\infty}$; thus ${R_{n-1}}$ has codimension at most one in ${R_n}$, giving the claim.

From (3) and induction we see that each of the ${R_n}$ are finite dimensional, with the trivial upper bound

$\displaystyle \hbox{dim}(R_n) \leq n+1. \ \ \ \ \ (4)$

Riemann’s inequality complements this with the lower bound

$\displaystyle \hbox{dim}(R_n) \geq n+1-g, \ \ \ \ \ (5)$

thus one has ${\hbox{dim}(R_n) = \hbox{dim}(R_{n-1})+1}$ for all but at most ${g}$ exceptions (in fact, exactly ${g}$ exceptions as it turns out). This is a consequence of the Riemann-Roch theorem; it can be proven from abstract nonsense (the snake lemma) if one defines the genus ${g}$ in a non-standard fashion (as the dimension of the first Cech cohomology ${H^1(C, {\mathcal O}_C)}$ of the structure sheaf ${{\mathcal O}_C}$ of ${C}$), but to obtain this inequality with a standard definition of ${g}$ (e.g. as the dimension of the zeroth Cech cohomology ${H^0(C, \Omega_C^1)}$ of the line bundle of differentials) requires the more non-trivial tool of Serre duality.
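To make the spaces ${R_n}$ concrete, consider an elliptic curve in Weierstrass form with ${P_\infty}$ the point at infinity: then ${x}$ has a pole of order ${2}$ and ${y}$ a pole of order ${3}$ there, and ${R_n}$ is spanned by the monomials ${x^i y^j}$ with ${j \leq 1}$ and ${2i+3j \leq n}$. Since functions with distinct pole orders are linearly independent, the dimension of ${R_n}$ is the number of attainable pole orders ${\leq n}$; a sketch of the bookkeeping, using this standard description:

```python
def dim_R(n):
    """dim R_n for a Weierstrass elliptic curve: count the pole orders
    2i + 3j <= n (with j <= 1) attainable by monomials x^i y^j."""
    orders = {2 * i + 3 * j for i in range(n + 1) for j in (0, 1)}
    return len([m for m in orders if m <= n])

g = 1
for n in range(12):
    assert dim_R(n) <= n + 1        # the trivial upper bound (4)
    assert dim_R(n) >= n + 1 - g    # Riemann's inequality (5)
# exactly g exceptional n with dim(R_n) = dim(R_{n-1}); here the sole gap is n = 1
assert [n for n in range(1, 12) if dim_R(n) == dim_R(n - 1)] == [1]
```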

At any rate, now that we have these vector spaces ${R_n}$, we will define ${V \subset k(C \times C)}$ to be a tensor product space

$\displaystyle V = R_\ell \otimes R_m$

for some natural numbers ${\ell, m \geq 0}$ which we will optimise later. That is to say, ${V}$ is spanned by functions of the form ${(z,w) \mapsto f(z) g(w)}$ with ${f \in R_\ell}$ and ${g \in R_m}$. This is clearly a linear subspace of ${k(C \times C)}$ of dimension ${\hbox{dim}(R_\ell) \hbox{dim}(R_m)}$, and hence by Riemann’s inequality we have

$\displaystyle \hbox{dim}(V) \geq (\ell+1-g) (m+1-g) \ \ \ \ \ (6)$

if

$\displaystyle \ell,m \geq g-1. \ \ \ \ \ (7)$

Observe that ${\iota_1 \circ \pi_1}$ maps a tensor product ${(z,w) \mapsto f(z) g(w)}$ to a function ${z \mapsto f(z) g(\hbox{Frob}_{\sqrt{q}} z)}$. If ${f \in R_\ell}$ and ${g \in R_m}$, then we see that the function ${z \mapsto f(z) g(\hbox{Frob}_{\sqrt{q}} z)}$ has a pole of order at most ${\ell+m\sqrt{q}}$ at ${P_\infty}$. We conclude that

$\displaystyle \iota_1 \circ \pi_1( V ) \subset R_{\ell + m\sqrt{q}} \ \ \ \ \ (8)$

and in particular by (4)

$\displaystyle \hbox{dim}(\pi_1(V)) \leq \ell + m \sqrt{q} + 1 \ \ \ \ \ (9)$

and similarly

$\displaystyle \hbox{dim}(\pi_2(V)) \leq \ell \sqrt{q} + m + 1. \ \ \ \ \ (10)$

We will choose ${m}$ to be a bit bigger than ${\ell}$, to make the ${\pi_2}$ image of ${V}$ smaller than that of ${\pi_1}$. From (6), (10) we see that if we have the inequality

$\displaystyle (\ell+1-g) (m+1-g) > \ell \sqrt{q}+m + 1 \ \ \ \ \ (11)$

(together with (7)) then ${\pi_2}$ cannot be injective.

On the other hand, we have the following basic fact:

Lemma 3 (Injectivity) If

$\displaystyle \ell < \sqrt{q}, \ \ \ \ \ (12)$

then ${\pi_1: V \rightarrow \pi_1(V)}$ is injective.

Proof: From (3), we can find a linear basis ${f_1,\dots,f_a}$ of ${R_\ell}$ such that each of the ${f_i}$ has a distinct order ${d_i}$ of pole at ${P_\infty}$ (somewhere between ${0}$ and ${\ell}$ inclusive). Similarly, we may find a linear basis ${g_1,\dots,g_b}$ of ${R_m}$ such that each of the ${g_j}$ has a distinct order ${e_j}$ of pole at ${P_\infty}$ (somewhere between ${0}$ and ${m}$ inclusive). The functions ${z \mapsto f_i(z) g_j(\hbox{Frob}_{\sqrt{q}} z)}$ then span ${\iota_1(\pi_1(V))}$, and the order of pole at ${P_\infty}$ is ${d_i + \sqrt{q} e_j}$. But since ${\ell < \sqrt{q}}$, these orders are all distinct, and so these functions must be linearly independent. The claim follows. $\Box$
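The distinctness of the pole orders here is just the uniqueness of “base-${\sqrt{q}}$ digits”: since ${0 \leq d_i \leq \ell < \sqrt{q}}$, one can recover the pair ${(d_i, e_j)}$ from the combined order ${d_i + \sqrt{q} e_j}$. A toy check with hypothetical small parameters:

```python
s, l = 5, 3                  # s plays the role of sqrt(q), with l < s
ds = [0, 1, 2, 3]            # distinct pole orders of the f_i, all in [0, l]
es = [0, 2, 3, 5, 7]         # distinct pole orders of the g_j
orders = [d + s * e for d in ds for e in es]
# since each d < s, all combined pole orders d + s*e are distinct
assert len(set(orders)) == len(ds) * len(es)
```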

This gives us the following bound:

Proposition 4 Let ${\ell,m}$ be natural numbers such that (7), (11), (12) hold. Then ${|C({\bf F}_q)| \leq \ell + m \sqrt{q}}$.

Proof: As ${\pi_2}$ is not injective, we can find a non-zero ${f \in V}$ with ${\pi_2(f)}$ vanishing. By the above lemma, the function ${\iota_1(\pi_1(f))}$ is then non-zero, but it must also vanish on ${\iota_1(C_1 \cap C_2)}$, which has cardinality ${|C({\bf F}_q)|}$. On the other hand, by (8), ${\iota_1(\pi_1(f))}$ has a pole of order at most ${\ell+m\sqrt{q}}$ at ${P_\infty}$ and no other poles. Since a non-zero rational function on a projective curve has exactly as many zeroes as poles (counted with multiplicity), the claim follows. $\Box$

If ${q \geq (g+1)^4}$, we may make the explicit choice

$\displaystyle m := \sqrt{q}+2g; \quad \ell := \lfloor \frac{g}{g+1} \sqrt{q} \rfloor + g + 1$

and a brief calculation then gives Theorem 2. In some cases one can optimise things a bit further. For instance, in the genus zero case ${g=0}$ (e.g. if ${C}$ is just the projective line ${{\mathbf P}^1}$) one may take ${\ell=1, m = \sqrt{q}}$ and conclude the absolutely sharp bound ${|C({\bf F}_q)| \leq q+1}$ in this case; in the case of the projective line ${{\mathbf P}^1}$, the function ${f}$ is in fact the very concrete function ${f(z,w) := z - w^{\sqrt{q}}}$.

Remark 1 When ${q = p^{2n+1}}$ is not a perfect square, one can try to run the above argument using the factorisation ${\hbox{Frob}_q = \hbox{Frob}_{p^n} \hbox{Frob}_{p^{n+1}}}$ instead of ${\hbox{Frob}_q = \hbox{Frob}_{\sqrt{q}} \hbox{Frob}_{\sqrt{q}}}$. This gives a weaker version of the above bound, of the shape ${|C({\bf F}_q)| \leq q + O( \sqrt{p} \sqrt{q} )}$. In the hyperelliptic case at least, one can erase this loss by working with a variant of the argument in which one requires ${f}$ to vanish to high order at ${C_1}$, rather than just to first order; see this survey article of mine for details.

Roth’s theorem on arithmetic progressions asserts that every subset of the integers ${{\bf Z}}$ of positive upper density contains infinitely many arithmetic progressions of length three. There are many versions and variants of this theorem. Here is one of them:

Theorem 1 (Roth’s theorem) Let ${G = (G,+)}$ be a compact abelian group, with Haar probability measure ${\mu}$, which is ${2}$-divisible (i.e. the map ${x \mapsto 2x}$ is surjective) and let ${A}$ be a measurable subset of ${G}$ with ${\mu(A) \geq \alpha}$ for some ${0 < \alpha < 1}$. Then we have

$\displaystyle \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r)\ d\mu(x) d\mu(r) \gg_\alpha 1,$

where ${X \gg_\alpha Y}$ denotes the bound ${X \geq c_\alpha Y}$ for some ${c_\alpha > 0}$ depending only on ${\alpha}$.

This theorem is usually formulated in the case that ${G}$ is a finite abelian group of odd order (in which case the result is essentially due to Meshulam) or more specifically a cyclic group ${G = {\bf Z}/N{\bf Z}}$ of odd order (in which case it is essentially due to Varnavides), but is also valid for the more general setting of ${2}$-divisible compact abelian groups, as we shall shortly see. One can be more precise about the dependence of the implied constant ${c_\alpha}$ on ${\alpha}$, but to keep the exposition simple we will work at the qualitative level here, without trying at all to get good quantitative bounds. The theorem is also true without the ${2}$-divisibility hypothesis, but the proof we will discuss runs into some technical issues due to the degeneracy of the ${2r}$ shift in that case.

We can deduce Theorem 1 from the following more general Khintchine-type statement. Let ${\hat G}$ denote the Pontryagin dual of a compact abelian group ${G}$, that is to say the set of all continuous homomorphisms ${\xi: x \mapsto \xi \cdot x}$ from ${G}$ to the (additive) unit circle ${{\bf R}/{\bf Z}}$. Thus ${\hat G}$ is a discrete abelian group, and functions ${f \in L^2(G)}$ have a Fourier transform ${\hat f \in \ell^2(\hat G)}$ defined by

$\displaystyle \hat f(\xi) := \int_G f(x) e^{-2\pi i \xi \cdot x}\ d\mu(x).$
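In the model case ${G = {\bf Z}/N{\bf Z}}$ (so that ${\hat G}$ can also be identified with ${{\bf Z}/N{\bf Z}}$), one can check numerically the standard identity expressing the triple average in Theorem 1 in Fourier terms, namely ${\int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r)\ d\mu(x) d\mu(r) = \sum_\xi \hat 1_A(\xi)^2 \hat 1_A(-2\xi)}$. The set ${A}$ below is an arbitrary sample:

```python
import cmath

N = 31
A = {0, 1, 2, 4, 5, 9, 16, 18, 25, 28}   # an arbitrary sample subset of Z/31Z
f = [1.0 if x in A else 0.0 for x in range(N)]

def fhat(xi):
    # Fourier coefficient with respect to normalised counting (Haar) measure
    return sum(f[x] * cmath.exp(-2j * cmath.pi * xi * x / N) for x in range(N)) / N

# triple average over x, r with the uniform probability measure on Z/NZ
ap_count = sum(f[x] * f[(x + r) % N] * f[(x + 2 * r) % N]
               for x in range(N) for r in range(N)) / N**2
fourier_side = sum(fhat(xi) ** 2 * fhat(-2 * xi % N) for xi in range(N))
assert abs(ap_count - fourier_side.real) < 1e-9
assert abs(fourier_side.imag) < 1e-9
```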

If ${G}$ is ${2}$-divisible, then ${\hat G}$ is ${2}$-torsion-free in the sense that the map ${\xi \mapsto 2 \xi}$ is injective. For any finite set ${S \subset \hat G}$ and any radius ${\rho>0}$, define the Bohr set

$\displaystyle B(S,\rho) := \{ x \in G: \sup_{\xi \in S} \| \xi \cdot x \|_{{\bf R}/{\bf Z}} < \rho \}$

where ${\|\theta\|_{{\bf R}/{\bf Z}}}$ denotes the distance of ${\theta}$ to the nearest integer. We refer to the cardinality ${|S|}$ of ${S}$ as the rank of the Bohr set. We record a simple volume bound on Bohr sets:

Lemma 2 (Volume packing bound) Let ${G}$ be a compact abelian group with Haar probability measure ${\mu}$. For any Bohr set ${B(S,\rho)}$, we have

$\displaystyle \mu( B( S, \rho ) ) \gg_{|S|, \rho} 1.$

Proof: We can cover the torus ${({\bf R}/{\bf Z})^S}$ by ${O_{|S|,\rho}(1)}$ translates ${\theta+Q}$ of the cube ${Q := \{ (\theta_\xi)_{\xi \in S} \in ({\bf R}/{\bf Z})^S: \sup_{\xi \in S} \|\theta_\xi\|_{{\bf R}/{\bf Z}} < \rho/2 \}}$. Then the sets ${\{ x \in G: (\xi \cdot x)_{\xi \in S} \in \theta + Q \}}$ form a cover of ${G}$. But each of these sets lies in a translate of ${B(S,\rho)}$, and the claim then follows from the translation invariance of ${\mu}$. $\Box$
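Concretely, in ${G = {\bf Z}/N{\bf Z}}$ the covering argument yields the explicit bound ${\mu(B(S,\rho)) \geq (\rho/2)^{|S|}}$ when ${2/\rho}$ is an integer (cover each circle by ${2/\rho}$ closed arcs of length ${\rho/2}$). A numerical spot check with hypothetical parameters:

```python
def bohr_measure(N, S, rho):
    """mu(B(S, rho)) in Z/NZ with the uniform probability (Haar) measure."""
    def dist(t):  # the distance || t ||_{R/Z} to the nearest integer
        return abs(t - round(t))
    return sum(all(dist(xi * x / N) < rho for xi in S) for x in range(N)) / N

N, S, rho = 997, [3, 41], 0.1
# volume bound from Lemma 2 (2/rho = 20 is an integer here)
assert bohr_measure(N, S, rho) >= (rho / 2) ** len(S)
```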

Given any Bohr set ${B(S,\rho)}$, we define a normalised “Lipschitz” cutoff function ${\nu_{B(S,\rho)}: G \rightarrow {\bf R}}$ by the formula

$\displaystyle \nu_{B(S,\rho)}(x) = c_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})_+ \ \ \ \ \ (1)$

where ${c_{B(S,\rho)}}$ is the constant such that

$\displaystyle \int_G \nu_{B(S,\rho)}\ d\mu = 1,$

thus

$\displaystyle c_{B(S,\rho)} = \left( \int_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})\ d\mu(x) \right)^{-1}.$

The function ${\nu_{B(S,\rho)}}$ should be viewed as an ${L^1}$-normalised “tent function” cutoff to ${B(S,\rho)}$. Note from Lemma 2 that

$\displaystyle 1 \ll_{|S|,\rho} c_{B(S,\rho)} \ll_{|S|,\rho} 1. \ \ \ \ \ (2)$

We then have the following sharper version of Theorem 1:

Theorem 3 (Roth-Khintchine theorem) Let ${G = (G,+)}$ be a ${2}$-divisible compact abelian group, with Haar probability measure ${\mu}$, and let ${\epsilon>0}$. Then for any measurable function ${f: G \rightarrow [0,1]}$, there exists a Bohr set ${B(S,\rho)}$ with ${|S| \ll_\epsilon 1}$ and ${\rho \gg_\epsilon 1}$ such that

$\displaystyle \int_G \int_G f(x) f(x+r) f(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \ \ \ \ \ (3)$

$\displaystyle \geq (\int_G f\ d\mu)^3 - O(\epsilon)$

where ${*}$ denotes the convolution operation

$\displaystyle f*g(x) := \int_G f(y) g(x-y)\ d\mu(y).$
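As a quick sketch in ${G = {\bf Z}/N{\bf Z}}$ (arbitrary sample parameters): building the tent cutoff (1) for a rank-one Bohr set and normalising, one can verify ${\int_G \nu\ d\mu = 1}$, and hence ${\int_G \nu*\nu\ d\mu = 1}$ as well.

```python
N, xi, rho = 101, 7, 0.2

def dist(t):  # || t ||_{R/Z}
    return abs(t - round(t))

# un-normalised tent function from (1), for the rank-one Bohr set B({xi}, rho)
tent = [max(0.0, 1 - dist(xi * x / N) / rho) for x in range(N)]
c = N / sum(tent)                     # the normalising constant c_{B(S,rho)}
nu = [c * v for v in tent]
assert abs(sum(nu) / N - 1) < 1e-9    # \int nu d(mu) = 1
conv = [sum(nu[y] * nu[(x - y) % N] for y in range(N)) / N for x in range(N)]
assert abs(sum(conv) / N - 1) < 1e-9  # \int nu*nu d(mu) = 1
```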

A variant of this result (expressed in the language of ergodic theory) appears in this paper of Bergelson, Host, and Kra; a combinatorial version of the Bergelson-Host-Kra result that is closer to Theorem 3 subsequently appeared in this paper of Ben Green and myself, but this theorem arguably appears implicitly in a much older paper of Bourgain. To see why Theorem 3 implies Theorem 1, we apply the theorem with ${f := 1_A}$ and ${\epsilon}$ equal to a small multiple of ${\alpha^3}$ to conclude that there is a Bohr set ${B(S,\rho)}$ with ${|S| \ll_\alpha 1}$ and ${\rho \gg_\alpha 1}$ such that

$\displaystyle \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \gg \alpha^3.$

But from (2) we have the pointwise bound ${\nu_{B(S,\rho)}*\nu_{B(S,\rho)} \ll_\alpha 1}$, and Theorem 1 follows.

Below the fold, we give a short proof of Theorem 3, using an “energy pigeonholing” argument that essentially dates back to the 1986 paper of Bourgain mentioned previously (not to be confused with a later 1999 paper of Bourgain on Roth’s theorem that was highly influential, for instance in emphasising the importance of Bohr sets). The idea is to use the pigeonhole principle to choose the Bohr set ${B(S,\rho)}$ to capture all the “large Fourier coefficients” of ${f}$, but such that a certain “dilate” of ${B(S,\rho)}$ does not capture much more Fourier energy of ${f}$ than ${B(S,\rho)}$ itself. The bound (3) may then be obtained through elementary Fourier analysis, without much need to explicitly compute things like the Fourier transform of an indicator function of a Bohr set. (However, the bound obtained by this argument is going to be quite poor – of tower-exponential type.) To do this we perform a structural decomposition of ${f}$ into “structured”, “small”, and “highly pseudorandom” components, as is common in the subject (e.g. in this previous blog post), but even though we crucially need to retain non-negativity of one of the components in this decomposition, we can avoid recourse to conditional expectation with respect to a partition (or “factor”) of the space, using instead convolution with one of the ${\nu_{B(S,\rho)}}$ considered above to achieve a similar effect.

Let ${f: {\bf R}^3 \rightarrow {\bf R}}$ be an irreducible polynomial in three variables. As ${{\bf R}}$ is not algebraically closed, the zero set ${Z_{\bf R}(f) = \{ x \in {\bf R}^3: f(x)=0\}}$ can split into various components of dimension between ${0}$ and ${2}$. For instance, if ${f(x_1,x_2,x_3) = x_1^2+x_2^2}$, the zero set ${Z_{\bf R}(f)}$ is a line; more interestingly, if ${f(x_1,x_2,x_3) = x_3^2 + x_2^2 - x_2^3}$, then ${Z_{\bf R}(f)}$ is the union of a line and a surface (being the product of an acnodal cubic curve with a line). We will assume that the ${2}$-dimensional component ${Z_{{\bf R},2}(f)}$ is non-empty, thus defining a real surface in ${{\bf R}^3}$. In particular, this hypothesis implies that ${f}$ is not just irreducible over ${{\bf R}}$, but is in fact absolutely irreducible (i.e. irreducible over ${{\bf C}}$), since otherwise one could use the complex factorisation of ${f}$ to contain ${Z_{\bf R}(f)}$ inside the intersection ${{\bf Z}_{\bf C}(g) \cap {\bf Z}_{\bf C}(\bar{g})}$ of the complex zero locus of a complex polynomial ${g}$ and its complex conjugate, with ${g,\bar{g}}$ having no common factor, forcing ${Z_{\bf R}(f)}$ to be at most one-dimensional. (For instance, in the case ${f(x_1,x_2,x_3)=x_1^2+x_2^2}$, one can take ${g(z_1,z_2,z_3) = z_1 + i z_2}$.) Among other things, this makes ${{\bf Z}_{{\bf R},2}(f)}$ a Zariski-dense subset of ${{\bf Z}_{\bf C}(f)}$, thus any polynomial identity which holds true at every point of ${{\bf Z}_{{\bf R},2}(f)}$, also holds true on all of ${{\bf Z}_{\bf C}(f)}$. This allows us to easily use tools from algebraic geometry in this real setting, even though the reals are not quite algebraically closed.

The surface ${Z_{{\bf R},2}(f)}$ is said to be ruled if, for a Zariski open dense set of points ${x \in Z_{{\bf R},2}(f)}$, there exists a line ${l_x = \{ x+tv_x: t \in {\bf R} \}}$ through ${x}$ for some non-zero ${v_x \in {\bf R}^3}$ which is completely contained in ${Z_{{\bf R},2}(f)}$, thus

$\displaystyle f(x+tv_x)=0$

for all ${t \in {\bf R}}$. Also, a point ${x \in {\bf Z}_{{\bf R},2}(f)}$ is said to be a flecnode if there exists a line ${l_x = \{ x+tv_x: t \in {\bf R}\}}$ through ${x}$ for some non-zero ${v_x \in {\bf R}^3}$ which is tangent to ${Z_{{\bf R},2}(f)}$ to third order, in the sense that

$\displaystyle f(x+tv_x)=O(t^4)$

as ${t \rightarrow 0}$, or equivalently that

$\displaystyle \frac{d^j}{dt^j} f(x+tv_x)|_{t=0} = 0 \ \ \ \ \ (1)$

for ${j=0,1,2,3}$. Clearly, if ${Z_{{\bf R},2}(f)}$ is a ruled surface, then a Zariski open dense set of points on ${Z_{{\bf R},2}(f)}$ are flecnodes. We then have the remarkable theorem of Cayley and Salmon asserting the converse:

Theorem 1 (Cayley-Salmon theorem) Let ${f: {\bf R}^3 \rightarrow {\bf R}}$ be an irreducible polynomial with ${Z_{{\bf R},2}(f)}$ non-empty. Suppose that a Zariski dense set of points in ${Z_{{\bf R},2}(f)}$ are flecnodes. Then ${Z_{{\bf R},2}(f)}$ is a ruled surface.

Among other things, this theorem was used in the celebrated result of Guth and Katz that almost solved the Erdos distance problem in two dimensions, as discussed in this previous blog post. Vanishing to third order is necessary: observe that in a surface of negative curvature, such as the saddle ${\{ (x_1,x_2,x_3): x_3 = x_1^2 - x_2^2 \}}$, every point on the surface is tangent to second order to a line (the line in the direction for which the second fundamental form vanishes).
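For a concrete check of condition (1), consider the saddle just mentioned (a toy example; this particular surface happens to be doubly ruled): along a ruling direction the function ${t \mapsto f(x+tv)}$ vanishes identically, so all the derivatives in (1) certainly vanish.

```python
def f(x1, x2, x3):
    # the saddle x3 = x1^2 - x2^2, expressed as a zero set f = 0
    return x3 - x1**2 + x2**2

# the line (2,1,3) + t(1,1,2) lies entirely in the surface (a ruling direction)
x, v = (2.0, 1.0, 3.0), (1.0, 1.0, 2.0)
for t in (-2.0, -1.0, 0.0, 1.0, 2.0):
    # f(x + t v) is a polynomial in t; vanishing identically gives (1) for all j
    assert f(x[0] + t * v[0], x[1] + t * v[1], x[2] + t * v[2]) == 0.0
```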

The original proof of the Cayley-Salmon theorem, dating back to at least 1915, is not easily accessible and not written in modern language. A modern proof of this theorem (together with substantial generalisations, for instance to higher dimensions) is given by Landsberg; the proof uses the machinery of modern algebraic geometry. The purpose of this post is to record an alternate proof of the Cayley-Salmon theorem based on classical differential geometry (in particular, the notion of torsion of a curve) and basic ODE methods (in particular, Gronwall’s inequality and the Picard existence theorem). The idea is to “integrate” the lines ${l_x}$ indicated by the flecnode to produce smooth curves ${\gamma}$ on the surface ${{\bf Z}_{{\bf R},2}}$; one then uses the vanishing (1) and some basic calculus to conclude that these curves have zero torsion and are thus planar curves. Some further manipulation using (1) (now just to second order instead of third) then shows that these curves are in fact straight lines, giving the ruling on the surface.

Update: Janos Kollar has informed me that the above theorem was essentially known to Monge in 1809; see his recent arXiv note for more details.

I thank Larry Guth and Micha Sharir for conversations leading to this post.

A core foundation of the subject now known as arithmetic combinatorics (and particularly the subfield of additive combinatorics) is the collection of elementary sum set estimates (sometimes known as “Ruzsa calculus”) that relate the cardinality of various sum sets

$\displaystyle A+B := \{ a+b: a \in A, b \in B \}$

and difference sets

$\displaystyle A-B := \{ a-b: a \in A, b \in B \},$

as well as iterated sumsets such as ${3A=A+A+A}$, ${2A-2A=A+A-A-A}$, and so forth. Here, ${A, B}$ are finite non-empty subsets of some additive group ${G = (G,+)}$ (classically one took ${G={\bf Z}}$ or ${G={\bf R}}$, but nowadays one usually considers more general additive groups). Some basic estimates in this vein are the following:

Lemma 1 (Ruzsa covering lemma) Let ${A, B}$ be finite non-empty subsets of ${G}$. Then ${A}$ may be covered by at most ${\frac{|A+B|}{|B|}}$ translates of ${B-B}$.

Proof: Consider a maximal set of disjoint translates ${a+B}$ of ${B}$ by elements ${a \in A}$. These translates have cardinality ${|B|}$, are disjoint, and lie in ${A+B}$, so there are at most ${\frac{|A+B|}{|B|}}$ of them. By maximality, for any ${a' \in A}$, ${a'+B}$ must intersect at least one of the selected ${a+B}$, thus ${a' \in a+B-B}$, and the claim follows. $\Box$
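The greedy selection in this proof is easy to stress-test by brute force in ${G = {\bf Z}}$ (a sketch; the random parameters are arbitrary):

```python
import random

def ruzsa_cover(A, B):
    """Centres a of a maximal disjoint family of translates a+B, a in A."""
    selected, used = [], set()
    for a in sorted(A):
        translate = {a + b for b in B}
        if translate.isdisjoint(used):
            selected.append(a)
            used |= translate
    return selected

random.seed(1)
for _ in range(100):
    A = set(random.sample(range(40), 6))
    B = set(random.sample(range(40), 6))
    sel = ruzsa_cover(A, B)
    sumset = {a + b for a in A for b in B}
    BmB = {b1 - b2 for b1 in B for b2 in B}
    assert len(sel) <= len(sumset) / len(B)          # at most |A+B|/|B| translates
    assert all(any(x - a in BmB for a in sel) for x in A)  # the a + (B-B) cover A
```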

Lemma 2 (Ruzsa triangle inequality) Let ${A,B,C}$ be finite non-empty subsets of ${G}$. Then ${|A-C| \leq \frac{|A-B| |B-C|}{|B|}}$.

Proof: Consider the addition map ${+: (x,y) \mapsto x+y}$ from ${(A-B) \times (B-C)}$ to ${G}$. Every element ${a-c}$ of ${A - C}$ has a preimage in ${(A-B) \times (B-C)}$ under this map of cardinality at least ${|B|}$, thanks to the obvious identity ${a-c = (a-b) + (b-c)}$ for each ${b \in B}$. Since ${(A-B) \times (B-C)}$ has cardinality ${|A-B| |B-C|}$, the claim follows. $\Box$
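Likewise, the triangle inequality can be spot checked by brute force on random subsets of ${{\bf Z}}$ (arbitrary sample parameters):

```python
import random

def diff(A, B):
    # the difference set A - B
    return {a - b for a in A for b in B}

random.seed(0)
for _ in range(100):
    A, B, C = (set(random.sample(range(50), 8)) for _ in range(3))
    # Ruzsa triangle inequality: |A-C| |B| <= |A-B| |B-C|
    assert len(diff(A, C)) * len(B) <= len(diff(A, B)) * len(diff(B, C))
```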

Such estimates (which are covered, incidentally, in Section 2 of my book with Van Vu) are particularly useful for controlling finite sets ${A}$ of small doubling, in the sense that ${|A+A| \leq K|A|}$ for some bounded ${K}$. (There are deeper theorems, most notably Freiman’s theorem, which give more control than what elementary Ruzsa calculus does, however the known bounds in the latter theorem are worse than polynomial in ${K}$ (although it is conjectured otherwise), whereas the elementary estimates are almost all polynomial in ${K}$.)

However, there are some settings in which the standard sum set estimates are not quite applicable. One such setting is the continuous setting, where one is dealing with bounded open sets in an additive Lie group (e.g. ${{\bf R}^n}$ or a torus ${({\bf R}/{\bf Z})^n}$) rather than a finite setting. Here, one can largely replicate the discrete sum set estimates by working with a Haar measure in place of cardinality; this is the approach taken for instance in this paper of mine. However, there is another setting, which one might dub the “discretised” setting (as opposed to the “discrete” setting or “continuous” setting), in which the sets ${A}$ remain finite (or at least discretisable to be finite), but for which there is a certain amount of “roundoff error” coming from the discretisation. As a typical example (working now in a non-commutative multiplicative setting rather than an additive one), consider the orthogonal group ${O_n({\bf R})}$ of orthogonal ${n \times n}$ matrices, and let ${A}$ be the matrices obtained by starting with all of the orthogonal matrices in ${O_n({\bf R})}$ and rounding each coefficient of each matrix in this set to the nearest multiple of ${\epsilon}$, for some small ${\epsilon>0}$. This forms a finite set (whose cardinality grows as ${\epsilon\rightarrow 0}$ like a certain negative power of ${\epsilon}$). In the limit ${\epsilon \rightarrow 0}$, the set ${A}$ is not a set of small doubling in the discrete sense. However, ${A \cdot A}$ is still close to ${A}$ in a metric sense, being contained in the ${O_n(\epsilon)}$-neighbourhood of ${A}$. Another key example comes from graphs ${\Gamma := \{ (x, f(x)): x \in G \}}$ of maps ${f: G \rightarrow H}$ from one additive group ${G = (G,+)}$ to another ${H = (H,+)}$. 
If ${f}$ is “approximately additive” in the sense that for all ${x,y \in G}$, ${f(x+y)}$ is close to ${f(x)+f(y)}$ in some metric, then ${\Gamma}$ might not have small doubling in the discrete sense (because ${f(x+y)-f(x)-f(y)}$ could take a large number of values), but could be considered a set of small doubling in a discretised sense.

One would like to have a sum set (or product set) theory that can handle these cases, particularly in “high-dimensional” settings in which the standard methods of passing back and forth between continuous, discrete, or discretised settings behave poorly from a quantitative point of view due to the exponentially large doubling constant of balls. One way to do this is to impose a translation invariant metric ${d}$ on the underlying group ${G = (G,+)}$ (reverting back to additive notation), and replace the notion of cardinality by that of metric entropy. There are a number of almost equivalent ways to define this concept:

Definition 3 Let ${(X,d)}$ be a metric space, let ${E}$ be a subset of ${X}$, and let ${r>0}$ be a radius.

• The packing number ${N^{pack}_r(E)}$ is the largest number of points ${x_1,\dots,x_n}$ one can pack inside ${E}$ such that the balls ${B(x_1,r),\dots,B(x_n,r)}$ are disjoint.
• The internal covering number ${N^{int}_r(E)}$ is the fewest number of points ${x_1,\dots,x_n \in E}$ such that the balls ${B(x_1,r),\dots,B(x_n,r)}$ cover ${E}$.
• The external covering number ${N^{ext}_r(E)}$ is the fewest number of points ${x_1,\dots,x_n \in X}$ such that the balls ${B(x_1,r),\dots,B(x_n,r)}$ cover ${E}$.
• The metric entropy ${N^{ent}_r(E)}$ is the largest number of points ${x_1,\dots,x_n}$ one can find in ${E}$ that are ${r}$-separated, thus ${d(x_i,x_j) \geq r}$ for all ${i \neq j}$.

It is an easy exercise to verify the inequalities

$\displaystyle N^{ent}_{2r}(E) \leq N^{pack}_r(E) \leq N^{ext}_r(E) \leq N^{int}_r(E) \leq N^{ent}_r(E)$

for any ${r>0}$, and that ${N^*_r(E)}$ is non-increasing in ${r}$ and non-decreasing in ${E}$ for the three choices ${* = pack,ext,ent}$ (but monotonicity in ${E}$ can fail for ${*=int}$!). It turns out that the external covering number ${N^{ext}_r(E)}$ is slightly more convenient than the other notions of metric entropy, so we will abbreviate ${N_r(E) = N^{ext}_r(E)}$. The cardinality ${|E|}$ can be viewed as the limit of the entropies ${N^*_r(E)}$ as ${r \rightarrow 0}$.
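To make Definition 3 concrete, here is a small sketch that computes all four quantities for a finite subset of ${{\bf R}}$ and checks the chain of inequalities. An assumption of this code: in one dimension, with closed balls and generic (tie-free) data, each of the four quantities is computed exactly by a greedy left-to-right scan.

```python
import random

def ent(E, r):
    # Largest r-separated subset; in one dimension the greedy scan is optimal.
    count, last = 0, None
    for x in sorted(E):
        if last is None or x - last >= r:
            count, last = count + 1, x
    return count

def pack(E, r):
    # Most points of E with pairwise disjoint closed r-balls, i.e. with
    # pairwise distances exceeding 2r.
    count, last = 0, None
    for x in sorted(E):
        if last is None or x - last > 2 * r:
            count, last = count + 1, x
    return count

def ext_cover(E, r):
    # Fewest closed r-balls (centres anywhere) covering E: greedy interval cover.
    count, covered = 0, -float("inf")
    for x in sorted(E):
        if x > covered:
            count, covered = count + 1, x + 2 * r  # centre the ball at x + r
    return count

def int_cover(E, r):
    # Fewest closed r-balls with centres in E covering E.
    E = sorted(E)
    count, covered = 0, -float("inf")
    for p in E:
        if p > covered:
            # rightmost admissible centre in E within distance r of p
            c = max(x for x in E if x <= p + r)
            count, covered = count + 1, c + r
    return count

random.seed(1)
E = [random.uniform(0, 10) for _ in range(200)]
for r in (0.05, 0.2, 1.0):
    assert ent(E, 2 * r) <= pack(E, r) <= ext_cover(E, r) \
        <= int_cover(E, r) <= ent(E, r)
```

As ${r \rightarrow 0}$ all four quantities tend to the cardinality ${|E| = 200}$, in accordance with the last sentence above.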

If we have the bounded doubling property that ${B(0,2r)}$ is covered by ${O(1)}$ translates of ${B(0,r)}$ for each ${r>0}$, and one has a Haar measure ${m}$ on ${G}$ which assigns a positive finite mass to each ball, then any of the above entropies ${N^*_r(E)}$ is comparable to ${m( E + B(0,r) ) / m(B(0,r))}$, as can be seen by simple volume packing arguments. Thus in the bounded doubling setting one can usually use the measure-theoretic sum set theory to derive entropy-theoretic sumset bounds (see e.g. this paper of mine for an example of this). However, it turns out that even in the absence of bounded doubling, one still has an entropy analogue of most of the elementary sum set theory, except that one has to accept some degradation in the radius parameter ${r}$ by some absolute constant. Such losses can be acceptable in applications in which the underlying sets ${A}$ are largely “transverse” to the balls ${B(0,r)}$, so that the ${N_r}$-entropy of ${A}$ is largely independent of ${r}$; this is a situation which arises in particular in the case of graphs ${\Gamma = \{ (x,f(x)): x \in G \}}$ discussed above, if one works with “vertical” metrics whose balls extend primarily in the vertical direction. (I hope to present a specific application of this type here in the near future.)

Henceforth we work in an additive group ${G}$ equipped with a translation-invariant metric ${d}$. (One can also generalise things slightly by allowing the metric to attain the values ${0}$ or ${+\infty}$, without changing much of the analysis below.) By the Heine-Borel theorem, any precompact set ${E}$ will have finite entropy ${N_r(E)}$ for any ${r>0}$. We now have analogues of the two basic Ruzsa lemmas above:

Lemma 4 (Ruzsa covering lemma) Let ${A, B}$ be precompact non-empty subsets of ${G}$, and let ${r>0}$. Then ${A}$ may be covered by at most ${\frac{N_r(A+B)}{N_r(B)}}$ translates of ${B-B+B(0,2r)}$.

Proof: Let ${a_1,\dots,a_n \in A}$ be a maximal set of points such that the sets ${a_i + B + B(0,r)}$ are all disjoint. Then the sets ${a_i+B}$ are disjoint subsets of ${A+B}$, each with entropy ${N_r(a_i+B)=N_r(B)}$ by translation invariance, and furthermore any ball of radius ${r}$ can intersect at most one of the ${a_i+B}$. We conclude that ${N_r(A+B) \geq n N_r(B)}$, so ${n \leq \frac{N_r(A+B)}{N_r(B)}}$. If ${a \in A}$, then by maximality ${a+B+B(0,r)}$ must intersect one of the ${a_i + B + B(0,r)}$, so ${a \in a_i + B-B + B(0,2r)}$, and the claim follows. $\Box$

Lemma 5 (Ruzsa triangle inequality) Let ${A,B,C}$ be precompact non-empty subsets of ${G}$, and let ${r>0}$. Then ${N_{4r}(A-C) \leq \frac{N_r(A-B) N_r(B-C)}{N_r(B)}}$.

Proof: Consider the addition map ${+: (x,y) \mapsto x+y}$ from ${(A-B) \times (B-C)}$ to ${G}$. The domain ${(A-B) \times (B-C)}$ may be covered by ${N_r(A-B) N_r(B-C)}$ product balls ${B(x,r) \times B(y,r)}$. Every element ${a-c}$ of ${A - C}$ has a preimage under this map containing the set ${\{ (a-b, b-c): b \in B \}}$, which projects to a translate of ${B}$, and thus must meet at least ${N_r(B)}$ of these product balls. However, if two elements of ${A-C}$ are separated by a distance of at least ${4r}$, then no product ball can intersect both preimages. We thus see that ${N_{4r}^{ent}(A-C) \leq \frac{N_r(A-B) N_r(B-C)}{N_r(B)}}$, and the claim follows. $\Box$
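Lemma 5 mirrors, with a degraded radius, the classical Ruzsa triangle inequality for cardinalities, ${|A-C| |B| \leq |A-B| |B-C|}$. A quick brute-force check of that discrete counterpart (the random integer sets below are illustrative choices, not from the text):

```python
import random

def diff(X, Y):
    # Difference set X - Y.
    return {x - y for x in X for y in Y}

random.seed(2)
for _ in range(100):
    A = set(random.sample(range(100), 12))
    B = set(random.sample(range(100), 8))
    C = set(random.sample(range(100), 12))
    # Ruzsa triangle inequality for cardinalities: |A-C| |B| <= |A-B| |B-C|
    assert len(diff(A, C)) * len(B) <= len(diff(A, B)) * len(diff(B, C))
```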

Below the fold we will record some further metric entropy analogues of sum set estimates (basically redoing much of Chapter 2 of my book with Van Vu). Unfortunately there does not seem to be a direct way to abstractly deduce metric entropy results from their sum set analogues (basically due to the failure of a certain strong version of Freiman’s theorem, as discussed in this previous post); nevertheless, the proofs of the discrete estimates are elementary enough that they can be modified with a small amount of effort to handle the entropy case. (In fact, there should be a very general model-theoretic framework in which both the discrete and entropy arguments can be processed in a unified manner; see this paper of Hrushovski for one such framework.)

It is also likely that many of the arguments here extend to the non-commutative setting, but for simplicity we will not pursue such generalisations here.

As in the previous post, all computations here are at the formal level only.

In the previous blog post, the Euler equations for inviscid incompressible fluid flow were interpreted in a Lagrangian fashion, and then Noether’s theorem invoked to derive the known conservation laws for these equations. In a bit more detail: starting with Lagrangian space ${{\cal L} = ({\bf R}^n, \hbox{vol})}$ and Eulerian space ${{\cal E} = ({\bf R}^n, \eta, \hbox{vol})}$, we let ${M}$ be the space of volume-preserving, orientation-preserving maps ${\Phi: {\cal L} \rightarrow {\cal E}}$ from Lagrangian space to Eulerian space. Given a curve ${\Phi: {\bf R} \rightarrow M}$, we can define the Lagrangian velocity field ${\dot \Phi: {\bf R} \times {\cal L} \rightarrow T{\cal E}}$ as the time derivative of ${\Phi}$, and the Eulerian velocity field ${u := \dot \Phi \circ \Phi^{-1}: {\bf R} \times {\cal E} \rightarrow T{\cal E}}$. The volume-preserving nature of ${\Phi}$ ensures that ${u}$ is a divergence-free vector field:

$\displaystyle \nabla \cdot u = 0. \ \ \ \ \ (1)$

If we formally define the functional

$\displaystyle J[\Phi] := \frac{1}{2} \int_{\bf R} \int_{{\cal E}} |u(t,x)|^2\ dx dt = \frac{1}{2} \int_{\bf R} \int_{{\cal L}} |\dot \Phi(t,x)|^2\ dx dt$

then one can show that the critical points of this functional (with appropriate boundary conditions) obey the Euler equations

$\displaystyle [\partial_t + u \cdot \nabla] u = - \nabla p$

$\displaystyle \nabla \cdot u = 0$

for some pressure field ${p: {\bf R} \times {\cal E} \rightarrow {\bf R}}$. As discussed in the previous post, the time translation symmetry of this functional yields conservation of the Hamiltonian

$\displaystyle \frac{1}{2} \int_{{\cal E}} |u(t,x)|^2\ dx = \frac{1}{2} \int_{{\cal L}} |\dot \Phi(t,x)|^2\ dx;$

the rigid motion symmetries of Eulerian space give conservation of the total momentum

$\displaystyle \int_{{\cal E}} u(t,x)\ dx$

and total angular momentum

$\displaystyle \int_{{\cal E}} x \wedge u(t,x)\ dx;$

and the diffeomorphism symmetries of Lagrangian space give conservation of circulation

$\displaystyle \int_{\Phi(\gamma)} u^*$

for any closed loop ${\gamma}$ in ${{\cal L}}$, or equivalently pointwise conservation of the Lagrangian vorticity ${\Phi^* \omega = \Phi^* du^*}$, where ${u^*}$ is the ${1}$-form associated with the vector field ${u}$ using the Euclidean metric ${\eta}$ on ${{\cal E}}$, with ${\Phi^*}$ denoting pullback by ${\Phi}$.

It turns out that one can generalise the above calculations. Given any self-adjoint operator ${A}$ on divergence-free vector fields ${u: {\cal E} \rightarrow {\bf R}^n}$, we can define the functional

$\displaystyle J_A[\Phi] := \frac{1}{2} \int_{\bf R} \int_{{\cal E}} u(t,x) \cdot A u(t,x)\ dx dt;$

as we shall see below the fold, critical points of this functional (with appropriate boundary conditions) obey the generalised Euler equations

$\displaystyle [\partial_t + u \cdot \nabla] Au + (\nabla u) \cdot Au= - \nabla \tilde p \ \ \ \ \ (2)$

$\displaystyle \nabla \cdot u = 0$

for some pressure field ${\tilde p: {\bf R} \times {\cal E} \rightarrow {\bf R}}$, where ${(\nabla u) \cdot Au}$ in coordinates is ${\partial_i u_j Au_j}$ with the usual summation conventions. (When ${A=1}$, ${(\nabla u) \cdot Au = \nabla(\frac{1}{2} |u|^2)}$, and this term can be absorbed into the pressure ${\tilde p}$, and we recover the usual Euler equations.) Time translation symmetry then gives conservation of the Hamiltonian

$\displaystyle \frac{1}{2} \int_{{\cal E}} u(t,x) \cdot A u(t,x)\ dx.$

If the operator ${A}$ commutes with rigid motions on ${{\cal E}}$, then we have conservation of total momentum

$\displaystyle \int_{{\cal E}} Au(t,x)\ dx$

and total angular momentum

$\displaystyle \int_{{\cal E}} x \wedge Au(t,x)\ dx,$

and the diffeomorphism symmetries of Lagrangian space give conservation of circulation

$\displaystyle \int_{\Phi(\gamma)} (Au)^*$

or pointwise conservation of the Lagrangian vorticity ${\Phi^* \theta := \Phi^* d(Au)^*}$. These applications of Noether’s theorem proceed exactly as in the previous post; we leave the details to the interested reader.

One particular special case of interest arises in two dimensions ${n=2}$, when ${A}$ is the inverse derivative ${A = |\nabla|^{-1} = (-\Delta)^{-1/2}}$. The vorticity ${\theta = d(Au)^*}$ is a ${2}$-form, which in the two-dimensional setting may be identified with a scalar. In coordinates, if we write ${u = (u_1,u_2)}$, then

$\displaystyle \theta = \partial_{x_1} |\nabla|^{-1} u_2 - \partial_{x_2} |\nabla|^{-1} u_1.$

Since ${u}$ is also divergence-free, we may therefore write

$\displaystyle u = (- \partial_{x_2} \psi, \partial_{x_1} \psi )$

where the stream function ${\psi}$ is given by the formula

$\displaystyle \psi = |\nabla|^{-1} \theta.$

If we take the curl of the generalised Euler equation (2), we obtain (after some computation) the surface quasi-geostrophic equation

$\displaystyle [\partial_t + u \cdot \nabla] \theta = 0 \ \ \ \ \ (3)$

$\displaystyle u = (-\partial_{x_2} |\nabla|^{-1} \theta, \partial_{x_1} |\nabla|^{-1} \theta).$

This equation has strong analogies with the three-dimensional incompressible Euler equations, and can be viewed as a simplified model for that system; see this paper of Constantin, Majda, and Tabak for details.
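As a quick structural check on (3) (an illustration, not taken from the text): for a single plane wave ${\theta(x) = \cos(k \cdot x)}$ one has ${|\nabla|^{-1} \theta = |k|^{-1} \cos(k \cdot x)}$, and the resulting velocity ${u}$ is parallel to the level sets of ${\theta}$, so that ${u \cdot \nabla \theta = 0}$ and the plane wave is a steady state of SQG.

```python
import math

k1, k2 = 3.0, 2.0
norm_k = math.hypot(k1, k2)

def theta(x, y):
    return math.cos(k1 * x + k2 * y)

def u(x, y):
    # psi = |nabla|^{-1} theta acts on a plane wave by dividing by |k|;
    # then u = (-d_y psi, d_x psi).
    s = math.sin(k1 * x + k2 * y) / norm_k
    return (k2 * s, -k1 * s)

def grad_theta(x, y):
    s = -math.sin(k1 * x + k2 * y)
    return (k1 * s, k2 * s)

for (x, y) in [(0.1, 0.7), (1.3, -0.4), (2.0, 2.0)]:
    ux, uy = u(x, y)
    gx, gy = grad_theta(x, y)
    # The transport term u . grad(theta) vanishes identically: a steady state.
    assert abs(ux * gx + uy * gy) < 1e-12
```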

Now we can specialise the general conservation laws derived previously to this setting. The conserved Hamiltonian is

$\displaystyle \frac{1}{2} \int_{{\bf R}^2} u\cdot |\nabla|^{-1} u\ dx = \frac{1}{2} \int_{{\bf R}^2} \theta \psi\ dx = \frac{1}{2} \int_{{\bf R}^2} \theta |\nabla|^{-1} \theta\ dx$

(a law previously observed for this equation in the abovementioned paper of Constantin, Majda, and Tabak). As ${A}$ commutes with rigid motions, we also have (formally, at least) conservation of momentum

$\displaystyle \int_{{\bf R}^2} Au\ dx$

(which up to trivial transformations is also expressible in impulse form as ${\int_{{\bf R}^2} \theta x\ dx}$, after integration by parts), and conservation of angular momentum

$\displaystyle \int_{{\bf R}^2} x \wedge Au\ dx$

(which up to trivial transformations is ${\int_{{\bf R}^2} \theta |x|^2\ dx}$). Finally, diffeomorphism invariance gives pointwise conservation of Lagrangian vorticity ${\Phi^* \theta}$, thus ${\theta}$ is transported by the flow (which is also evident from (3)). In particular, all integrals of the form ${\int F(\theta)\ dx}$ for a fixed function ${F}$ are conserved by the flow.

Throughout this post, we will work only at the formal level of analysis, ignoring issues of convergence of integrals, justifying differentiation under the integral sign, and so forth. (Rigorous justification of the conservation laws and other identities arising from the formal manipulations below can usually be established in an a posteriori fashion once the identities are in hand, without the need to rigorously justify the manipulations used to come up with these identities).

It is a remarkable fact in the theory of differential equations that many of the ordinary and partial differential equations that are of interest (particularly in geometric PDE, or PDE arising from mathematical physics) admit a variational formulation; thus, a collection ${\Phi: \Omega \rightarrow M}$ of one or more fields on a domain ${\Omega}$ taking values in a space ${M}$ will solve the differential equation of interest if and only if ${\Phi}$ is a critical point of the functional

$\displaystyle J[\Phi] := \int_\Omega L( x, \Phi(x), D\Phi(x) )\ dx \ \ \ \ \ (1)$

involving the fields ${\Phi}$ and their first derivatives ${D\Phi}$, where the Lagrangian ${L: \Sigma \rightarrow {\bf R}}$ is a function on the vector bundle ${\Sigma}$ over ${\Omega \times M}$ consisting of triples ${(x, q, \dot q)}$ with ${x \in \Omega}$, ${q \in M}$, and ${\dot q: T_x \Omega \rightarrow T_q M}$ a linear transformation; we also usually keep the boundary data of ${\Phi}$ fixed in case ${\Omega}$ has a non-trivial boundary, although we will ignore these issues here. (We also ignore the possibility of having additional constraints imposed on ${\Phi}$ and ${D\Phi}$, which require the machinery of Lagrange multipliers to deal with, but which will only serve as a distraction for the current discussion.) It is common to use local coordinates to parameterise ${\Omega}$ as ${{\bf R}^d}$ and ${M}$ as ${{\bf R}^n}$, in which case ${L}$ can be viewed locally as a function on ${{\bf R}^d \times {\bf R}^n \times {\bf R}^{dn}}$.

Example 1 (Geodesic flow) Take ${\Omega = [0,1]}$ and ${M = (M,g)}$ to be a Riemannian manifold, which we will write locally in coordinates as ${{\bf R}^n}$ with metric ${g_{ij}(q)}$ for ${i,j=1,\dots,n}$. A geodesic ${\gamma: [0,1] \rightarrow M}$ is then a critical point (keeping ${\gamma(0),\gamma(1)}$ fixed) of the energy functional

$\displaystyle J[\gamma] := \frac{1}{2} \int_0^1 g_{\gamma(t)}( D\gamma(t), D\gamma(t) )\ dt$

or in coordinates (ignoring coordinate patch issues, and using the usual summation conventions)

$\displaystyle J[\gamma] = \frac{1}{2} \int_0^1 g_{ij}(\gamma(t)) \dot \gamma^i(t) \dot \gamma^j(t)\ dt.$

As discussed in this previous post, both the Euler equations for rigid body motion, and the Euler equations for incompressible inviscid flow, can be interpreted as geodesic flow (though in the latter case, one has to work really formally, as the manifold ${M}$ is now infinite dimensional).

More generally, if ${\Omega = (\Omega,h)}$ is itself a Riemannian manifold, which we write locally in coordinates as ${{\bf R}^d}$ with metric ${h_{ab}(x)}$ for ${a,b=1,\dots,d}$, then a harmonic map ${\Phi: \Omega \rightarrow M}$ is a critical point of the energy functional

$\displaystyle J[\Phi] := \frac{1}{2} \int_\Omega h(x) \otimes g_{\Phi(x)}( D\Phi(x), D\Phi(x) )\ dh(x)$

or in coordinates (again ignoring coordinate patch issues)

$\displaystyle J[\Phi] = \frac{1}{2} \int_{{\bf R}^d} h^{ab}(x) g_{ij}(\Phi(x)) (\partial_a \Phi^i(x)) (\partial_b \Phi^j(x))\ \sqrt{\det(h(x))}\ dx.$

If we replace the Riemannian manifold ${\Omega}$ by a Lorentzian manifold, such as Minkowski space ${{\bf R}^{1+3}}$, then the notion of a harmonic map is replaced by that of a wave map, which generalises the scalar wave equation (which corresponds to the case ${M={\bf R}}$).

Example 2 (${N}$-particle interactions) Take ${\Omega = {\bf R}}$ and ${M = {\bf R}^3 \otimes {\bf R}^N}$; then a function ${\Phi: \Omega \rightarrow M}$ can be interpreted as a collection of ${N}$ trajectories ${q_1,\dots,q_N: {\bf R} \rightarrow {\bf R}^3}$ in space, which we give a physical interpretation as the trajectories of ${N}$ particles. If we assign each particle a positive mass ${m_1,\dots,m_N > 0}$, and also introduce a potential energy function ${V: M \rightarrow {\bf R}}$, then it turns out that Newton’s laws of motion ${F=ma}$ in this context (with the force ${F_i}$ on the ${i^{th}}$ particle being given by the conservative force ${-\nabla_{q_i} V}$) are equivalent to the trajectories ${q_1,\dots,q_N}$ being a critical point of the action functional

$\displaystyle J[\Phi] := \int_{\bf R} \sum_{i=1}^N \frac{1}{2} m_i |\dot q_i(t)|^2 - V( q_1(t),\dots,q_N(t) )\ dt.$

Formally, if ${\Phi = \Phi_0}$ is a critical point of a functional ${J[\Phi]}$, this means that

$\displaystyle \frac{d}{ds} J[ \Phi[s] ]|_{s=0} = 0$

whenever ${s \mapsto \Phi[s]}$ is a (smooth) deformation with ${\Phi[0]=\Phi_0}$ (and with ${\Phi[s]}$ respecting whatever boundary conditions are appropriate). Interchanging the derivative and integral, we (formally, at least) arrive at

$\displaystyle \int_\Omega \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0}\ dx = 0. \ \ \ \ \ (2)$

Write ${\delta \Phi := \frac{d}{ds} \Phi[s]|_{s=0}}$ for the infinitesimal deformation of ${\Phi_0}$. By the chain rule, ${\frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0}}$ can be expressed in terms of ${x, \Phi_0(x), \delta \Phi(x), D\Phi_0(x), D \delta \Phi(x)}$. In coordinates, we have

$\displaystyle \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0} = \delta \Phi^i(x) L_{q^i}(x,\Phi_0(x), D\Phi_0(x)) \ \ \ \ \ (3)$

$\displaystyle + \partial_{x^a} \delta \Phi^i(x) L_{\partial_{x^a} q^i} (x,\Phi_0(x), D\Phi_0(x)),$

where we parameterise ${\Sigma}$ by ${x, (q^i)_{i=1,\dots,n}, (\partial_{x^a} q^i)_{a=1,\dots,d; i=1,\dots,n}}$, and we use subscripts on ${L}$ to denote partial derivatives in the various coefficients. (One can of course work in a coordinate-free manner here if one really wants to, but the notation becomes a little cumbersome due to the need to carefully split up the tangent space of ${\Sigma}$, and we will not do so here.) Thus we can view (2) as an integral identity that asserts the vanishing of a certain integral, whose integrand involves ${x, \Phi_0(x), \delta \Phi(x), D\Phi_0(x), D \delta \Phi(x)}$, where ${\delta \Phi}$ vanishes at the boundary but is otherwise unconstrained.

A general rule of thumb in PDE and calculus of variations is that whenever one has an integral identity of the form ${\int_\Omega F(x)\ dx = 0}$ for some class of functions ${F}$ that vanishes on the boundary, then there must be an associated differential identity ${F = \hbox{div} X}$ that justifies this integral identity through Stokes’ theorem. This rule of thumb helps explain why integration by parts is used so frequently in PDE to justify integral identities. The rule of thumb can fail when one is dealing with “global” or “cohomologically non-trivial” integral identities of a topological nature, such as the Gauss-Bonnet or Kazhdan-Warner identities, but is quite reliable for “local” or “cohomologically trivial” identities, such as those arising from calculus of variations.

In any case, if we apply this rule to (2), we expect that the integrand ${\frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0}}$ should be expressible as a spatial divergence. This is indeed the case:

Proposition 1 (Formal) Let ${\Phi = \Phi_0}$ be a critical point of the functional ${J[\Phi]}$ defined in (1). Then for any deformation ${s \mapsto \Phi[s]}$ with ${\Phi[0] = \Phi_0}$, we have

$\displaystyle \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0} = \hbox{div} X \ \ \ \ \ (4)$

where ${X}$ is the vector field that is expressible in coordinates as

$\displaystyle X^a := \delta \Phi^i(x) L_{\partial_{x^a} q^i}(x,\Phi_0(x), D\Phi_0(x)). \ \ \ \ \ (5)$

Proof: Comparing (4) with (3), we see that the claim is equivalent to the Euler-Lagrange equation

$\displaystyle L_{q^i}(x,\Phi_0(x), D\Phi_0(x)) - \partial_{x^a} L_{\partial_{x^a} q^i}(x,\Phi_0(x), D\Phi_0(x)) = 0. \ \ \ \ \ (6)$

The same computation, together with an integration by parts, shows that (2) may be rewritten as

$\displaystyle \int_\Omega ( L_{q^i}(x,\Phi_0(x), D\Phi_0(x)) - \partial_{x^a} L_{\partial_{x^a} q^i}(x,\Phi_0(x), D\Phi_0(x)) ) \delta \Phi^i(x)\ dx = 0.$

Since ${\delta \Phi^i(x)}$ is unconstrained on the interior of ${\Omega}$, the claim (6) follows (at a formal level, at least). $\Box$
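To make Proposition 1 concrete in the simplest case ${\Omega \subset {\bf R}}$, ${M = {\bf R}}$, consider the harmonic oscillator Lagrangian ${L(q,\dot q) = \frac{1}{2} \dot q^2 - \frac{1}{2} q^2}$ (an illustrative example, not one from the text). The Euler-Lagrange equation (6) then reads ${-q - \ddot q = 0}$, which ${q(t) = \cos t}$ satisfies; a finite-difference check of the residual:

```python
import math

def L_q(q):
    # Partial derivative L_q of the Lagrangian L = qdot^2/2 - q^2/2.
    return -q

def q(t):
    # Candidate critical point.
    return math.cos(t)

h = 1e-4
for t in (0.3, 1.1, 2.5):
    # d/dt L_{qdot} = qddot, computed by a central second difference.
    qdd = (q(t + h) - 2 * q(t) + q(t - h)) / h**2
    # Euler-Lagrange residual L_q - d/dt L_{qdot} should vanish along q(t).
    assert abs(L_q(q(t)) - qdd) < 1e-6
```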

Many variational problems also enjoy one-parameter continuous symmetries: given any field ${\Phi_0}$ (not necessarily a critical point), one can place that field in a one-parameter family ${s \mapsto \Phi[s]}$ with ${\Phi[0] = \Phi_0}$, such that

$\displaystyle J[ \Phi[s] ] = J[ \Phi[0] ]$

for all ${s}$; in particular,

$\displaystyle \frac{d}{ds} J[ \Phi[s] ]|_{s=0} = 0,$

which can be written as (2) as before. Applying the previous rule of thumb, we thus expect another divergence identity

$\displaystyle \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0} = \hbox{div} Y \ \ \ \ \ (7)$

whenever ${s \mapsto \Phi[s]}$ arises from a continuous one-parameter symmetry. This expectation is indeed borne out in many examples. For instance, if the spatial domain ${\Omega}$ is the Euclidean space ${{\bf R}^d}$, and the Lagrangian (when expressed in coordinates) has no direct dependence on the spatial variable ${x}$, thus

$\displaystyle L( x, \Phi(x), D\Phi(x) ) = L( \Phi(x), D\Phi(x) ), \ \ \ \ \ (8)$

then we obtain ${d}$ translation symmetries

$\displaystyle \Phi[s](x) := \Phi(x - s e^a )$

for ${a=1,\dots,d}$, where ${e^1,\dots,e^d}$ is the standard basis for ${{\bf R}^d}$. For a fixed ${a}$, the left-hand side of (7) then becomes

$\displaystyle \frac{d}{ds} L( \Phi(x-se^a), D\Phi(x-se^a) )|_{s=0} = -\partial_{x^a} [ L( \Phi(x), D\Phi(x) ) ]$

$\displaystyle = \hbox{div} Y$

where ${Y(x) = - L(\Phi(x), D\Phi(x)) e^a}$. Another common type of symmetry is a pointwise symmetry, in which

$\displaystyle L( x, \Phi[s](x), D\Phi[s](x) ) = L( x, \Phi[0](x), D\Phi[0](x) ) \ \ \ \ \ (9)$

for all ${x}$, in which case (7) clearly holds with ${Y=0}$.

If we subtract (4) from (7), we obtain the celebrated theorem of Noether linking symmetries with conservation laws:

Theorem 2 (Noether’s theorem) Suppose that ${\Phi_0}$ is a critical point of the functional (1), and let ${\Phi[s]}$ be a one-parameter continuous symmetry with ${\Phi[0] = \Phi_0}$. Let ${X}$ be the vector field in (5), and let ${Y}$ be the vector field in (7). Then we have the pointwise conservation law

$\displaystyle \hbox{div}(X-Y) = 0.$

In particular, for one-dimensional variational problems, in which ${\Omega \subset {\bf R}}$, we have the conservation law ${(X-Y)(t) = (X-Y)(0)}$ for all ${t \in \Omega}$ (assuming of course that ${\Omega}$ is connected and contains ${0}$).

Noether’s theorem gives a systematic way to locate conservation laws for solutions to variational problems. For instance, if ${\Omega \subset {\bf R}}$ and the Lagrangian has no explicit time dependence, thus

$\displaystyle L(t, \Phi(t), \dot \Phi(t)) = L(\Phi(t), \dot \Phi(t)),$

then by using the time translation symmetry ${\Phi[s](t) := \Phi(t-s)}$, we have

$\displaystyle Y(t) = - L( \Phi(t), \dot\Phi(t) )$

as discussed previously, whereas we have ${\delta \Phi(t) = - \dot \Phi(t)}$, and hence by (5)

$\displaystyle X(t) := - \dot \Phi^i(t) L_{\dot q^i}(\Phi(t), \dot \Phi(t)),$

and so Noether’s theorem gives conservation of the Hamiltonian

$\displaystyle H(t) := \dot \Phi^i(t) L_{\dot q^i}(\Phi(t), \dot \Phi(t))- L(\Phi(t), \dot \Phi(t)). \ \ \ \ \ (10)$

For instance, for geodesic flow, the Hamiltonian works out to be

$\displaystyle H(t) = \frac{1}{2} g_{ij}(\gamma(t)) \dot \gamma^i(t) \dot \gamma^j(t),$

so we see that the speed of the geodesic is conserved over time.
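For a quick numerical illustration of (10) (again with the stock Lagrangian ${L = \frac{1}{2}\dot q^2 - \frac{1}{2} q^2}$, a choice made here for concreteness rather than taken from the text), the Hamiltonian ${\dot q L_{\dot q} - L = \frac{1}{2}\dot q^2 + \frac{1}{2} q^2}$ is constant along the exact solution ${q(t)=\cos t}$:

```python
import math

def hamiltonian(t):
    # H = qdot * L_{qdot} - L for L = qdot^2/2 - q^2/2, along q(t) = cos t.
    q, qdot = math.cos(t), -math.sin(t)
    return qdot * qdot - (qdot**2 / 2 - q**2 / 2)

values = [hamiltonian(0.1 * k) for k in range(100)]
# The Hamiltonian equals 1/2 at every sampled time, up to rounding error.
assert max(values) - min(values) < 1e-12
```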

For pointwise symmetries (9), ${Y}$ vanishes, and so Noether’s theorem simplifies to ${\hbox{div} X = 0}$; in the one-dimensional case ${\Omega \subset {\bf R}}$, we thus see from (5) that the quantity

$\displaystyle \delta \Phi^i(t) L_{\dot q^i}(t,\Phi_0(t), \dot \Phi_0(t)) \ \ \ \ \ (11)$

is conserved in time. For instance, for the ${N}$-particle system in Example 2, if we have the translation invariance

$\displaystyle V( q_1 + h, \dots, q_N + h ) = V( q_1, \dots, q_N )$

for all ${q_1,\dots,q_N,h \in {\bf R}^3}$, then we have the pointwise translation symmetry

$\displaystyle q_i[s](t) := q_i(t) + s e^j$

for all ${i=1,\dots,N}$, ${s \in{\bf R}}$ and some ${j=1,\dots,3}$, in which case ${\delta q_i(t) = e^j}$, and the conserved quantity (11) becomes

$\displaystyle \sum_{i=1}^N m_i \dot q_i^j(t);$

as ${j=1,\dots,3}$ was arbitrary, this establishes conservation of the total momentum

$\displaystyle \sum_{i=1}^N m_i \dot q_i(t).$
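As a numerical companion to this conservation law (a sketch with an illustrative quadratic potential, not a system taken from the text), a velocity-Verlet simulation of two particles on a line with a translation-invariant potential preserves the total momentum essentially exactly:

```python
# Two particles with V(q1, q2) = (q1 - q2)^2 / 2, integrated by velocity
# Verlet; translation invariance makes the forces sum to zero, so the total
# momentum m1*v1 + m2*v2 is conserved to rounding error.
m1, m2 = 1.0, 3.0
q1, q2 = 0.0, 1.0
v1, v2 = 0.5, -0.2
dt = 1e-3

def forces(q1, q2):
    # F_i = -dV/dq_i; the two forces are equal and opposite.
    return q2 - q1, q1 - q2

p0 = m1 * v1 + m2 * v2
f1, f2 = forces(q1, q2)
for _ in range(10000):
    v1 += 0.5 * dt * f1 / m1
    v2 += 0.5 * dt * f2 / m2
    q1 += dt * v1
    q2 += dt * v2
    f1, f2 = forces(q1, q2)
    v1 += 0.5 * dt * f1 / m1
    v2 += 0.5 * dt * f2 / m2

assert abs(m1 * v1 + m2 * v2 - p0) < 1e-9
```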

Similarly, if we have the rotation invariance

$\displaystyle V( R q_1, \dots, Rq_N ) = V( q_1, \dots, q_N )$

for any ${q_1,\dots,q_N \in {\bf R}^3}$ and ${R \in SO(3)}$, then we have the pointwise rotation symmetry

$\displaystyle q_i[s](t) := \exp( s A ) q_i(t)$

for any skew-symmetric real ${3 \times 3}$ matrix ${A}$, in which case ${\delta q_i(t) = A q_i(t)}$, and the conserved quantity (11) becomes

$\displaystyle \sum_{i=1}^N m_i \langle A q_i(t), \dot q_i(t) \rangle;$

since ${A}$ is an arbitrary skew-symmetric matrix, this establishes conservation of the total angular momentum

$\displaystyle \sum_{i=1}^N m_i q_i(t) \wedge \dot q_i(t).$
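The same experiment works for angular momentum (again with an illustrative central potential, not a system from the text): for a planar two-body system with forces directed along the separation vector, velocity Verlet preserves ${\sum_i m_i q_i \wedge \dot q_i}$ to rounding error.

```python
def cross(ax, ay, bx, by):
    # Scalar (2d) cross product a ^ b.
    return ax * by - ay * bx

# Two planar particles with central potential V = |q1 - q2|^2 / 2.
m = [1.0, 2.0]
q = [[1.0, 0.0], [-0.5, 0.3]]
v = [[0.0, 0.4], [0.1, -0.1]]
dt = 1e-3

def forces(q):
    dx, dy = q[0][0] - q[1][0], q[0][1] - q[1][1]
    # Equal and opposite forces along the separation: zero total torque.
    return [[-dx, -dy], [dx, dy]]

def ang_mom():
    return sum(m[i] * cross(q[i][0], q[i][1], v[i][0], v[i][1])
               for i in range(2))

L0 = ang_mom()
f = forces(q)
for _ in range(10000):
    for i in range(2):
        v[i][0] += 0.5 * dt * f[i][0] / m[i]
        v[i][1] += 0.5 * dt * f[i][1] / m[i]
        q[i][0] += dt * v[i][0]
        q[i][1] += dt * v[i][1]
    f = forces(q)
    for i in range(2):
        v[i][0] += 0.5 * dt * f[i][0] / m[i]
        v[i][1] += 0.5 * dt * f[i][1] / m[i]

assert abs(ang_mom() - L0) < 1e-8
```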

Below the fold, I will describe how Noether’s theorem can be used to locate all of the conserved quantities for the Euler equations of inviscid fluid flow, discussed in this previous post, by interpreting that flow as geodesic flow in an infinite dimensional manifold.

The Euler equations for incompressible inviscid fluids may be written as

$\displaystyle \partial_t u + (u \cdot \nabla) u = -\nabla p$

$\displaystyle \nabla \cdot u = 0$

where ${u: [0,T] \times {\bf R}^n \rightarrow {\bf R}^n}$ is the velocity field, and ${p: [0,T] \times {\bf R}^n \rightarrow {\bf R}}$ is the pressure field. To avoid technicalities we will assume that both fields are smooth, and that ${u}$ is bounded. We will take the dimension ${n}$ to be at least two, with the three-dimensional case ${n=3}$ being of course especially interesting.

The Euler equations are the inviscid limit of the Navier-Stokes equations; as discussed in my previous post, one potential route to establishing finite time blowup for the latter equations when ${n=3}$ is to be able to construct “computers” solving the Euler equations, which generate smaller replicas of themselves in a noise-tolerant manner (as the viscosity term in the Navier-Stokes equation is to be viewed as perturbative noise).

Perhaps the most prominent obstacles to this route are the conservation laws for the Euler equations, which limit the types of final states that a putative computer could reach from a given initial state. Most famously, we have the conservation of energy

$\displaystyle \int_{{\bf R}^n} |u|^2\ dx \ \ \ \ \ (1)$

(assuming sufficient decay of the velocity field at infinity); thus for instance it would not be possible for a computer to generate a replica of itself which had greater total energy than the initial computer. This by itself is not a fatal obstruction (in this paper of mine, I constructed such a “computer” for an averaged Euler equation that still obeyed energy conservation). However, there are other conservation laws also, for instance in three dimensions one also has conservation of helicity

$\displaystyle \int_{{\bf R}^3} u \cdot (\nabla \times u)\ dx \ \ \ \ \ (2)$

and (formally, at least) one has conservation of momentum

$\displaystyle \int_{{\bf R}^3} u\ dx$

and angular momentum

$\displaystyle \int_{{\bf R}^3} x \times u\ dx$

(although, as we shall discuss below, due to the slow decay of ${u}$ at infinity, these integrals have to either be interpreted in a principal value sense, or else replaced with their vorticity-based formulations, namely impulse and moment of impulse). Total vorticity

$\displaystyle \int_{{\bf R}^3} \nabla \times u\ dx$

is also conserved, although it turns out in three dimensions that this quantity vanishes when one assumes sufficient decay at infinity. Then there are the pointwise conservation laws: the vorticity and the volume form are both transported by the fluid flow, while the velocity field (when viewed as a covector) is transported up to a gradient; among other things, this gives the transport of vortex lines as well as Kelvin’s circulation theorem, and can also be used to deduce the helicity conservation law mentioned above. In my opinion, none of these laws actually prohibits a self-replicating computer from existing within the laws of ideal fluid flow, but they do significantly complicate the task of actually designing such a computer, or of the basic “gates” that such a computer would consist of.
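To make the helicity (2) concrete (a standard example, not discussed in the text): the Arnold-Beltrami-Childress field ${u = (A \sin z + C \cos y, B \sin x + A \cos z, C \sin y + B \cos x)}$ is divergence-free and satisfies ${\nabla \times u = u}$, so its helicity density ${u \cdot (\nabla \times u) = |u|^2}$ is pointwise non-negative. A finite-difference check:

```python
import math

A, B, C = 1.0, 0.7, 0.4

def u(x, y, z):
    # Arnold-Beltrami-Childress (ABC) field: an eigenfunction of curl.
    return (A * math.sin(z) + C * math.cos(y),
            B * math.sin(x) + A * math.cos(z),
            C * math.sin(y) + B * math.cos(x))

def partial(f, i, x, h=1e-5):
    # Central difference of the vector field f in coordinate direction i.
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return [(a - b) / (2 * h) for a, b in zip(f(*xp), f(*xm))]

def curl_u(x):
    ux, uy, uz = (partial(u, i, x) for i in range(3))
    # curl = (d_y u3 - d_z u2, d_z u1 - d_x u3, d_x u2 - d_y u1)
    return (uy[2] - uz[1], uz[0] - ux[2], ux[1] - uy[0])

for pt in [(0.2, 1.1, -0.5), (2.0, 0.3, 0.9)]:
    w, v = curl_u(pt), u(*pt)
    div = sum(partial(u, i, pt)[i] for i in range(3))
    assert abs(div) < 1e-8                        # divergence-free
    for a, b in zip(w, v):
        assert abs(a - b) < 1e-8                  # Beltrami: curl u = u
    assert sum(a * b for a, b in zip(v, w)) >= 0  # helicity density >= 0
```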

Below the fold I would like to record and derive all the conservation laws mentioned above, which to my knowledge essentially form the complete set of known conserved quantities for the Euler equations. The material here (although not the notation) is drawn from this text of Majda and Bertozzi.

Mertens’ theorems are a set of classical estimates concerning the asymptotic distribution of the prime numbers:

Theorem 1 (Mertens’ theorems) In the asymptotic limit ${x \rightarrow \infty}$, we have

$\displaystyle \sum_{p\leq x} \frac{\log p}{p} = \log x + O(1), \ \ \ \ \ (1)$

$\displaystyle \sum_{p\leq x} \frac{1}{p} = \log \log x + O(1), \ \ \ \ \ (2)$

and

$\displaystyle \sum_{p\leq x} \log(1-\frac{1}{p}) = -\log \log x - \gamma + o(1) \ \ \ \ \ (3)$

where ${\gamma}$ is the Euler-Mascheroni constant, defined by requiring that

$\displaystyle 1 + \frac{1}{2} + \ldots + \frac{1}{n} = \log n + \gamma + o(1) \ \ \ \ \ (4)$

in the limit ${n \rightarrow \infty}$.
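The definition (4) is easy to test numerically (a quick illustration; the numerical value of ${\gamma}$ quoted below is the standard one, and the next-order correction to (4) is about ${1/(2n)}$):

```python
import math

gamma = 0.5772156649015329  # Euler-Mascheroni constant
n = 100000
# Harmonic number H_n = 1 + 1/2 + ... + 1/n.
H = sum(1.0 / k for k in range(1, n + 1))
assert abs(H - math.log(n) - gamma) < 1e-4
```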

The third theorem (3) is usually stated in exponentiated form

$\displaystyle \prod_{p \leq x} (1-\frac{1}{p}) = \frac{e^{-\gamma}+o(1)}{\log x},$

but in the logarithmic form (3) we see that it is strictly stronger than (2), in view of the asymptotic ${\log(1-\frac{1}{p}) = -\frac{1}{p} + O(\frac{1}{p^2})}$.
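Both (2) and the exponentiated form of (3) are easy to test numerically. (The values of ${\gamma}$ and of the Mertens constant ${M = 0.2614\dots}$, the ${O(1)}$ term in (2), are standard numerical constants, quoted here for illustration.)

```python
import math

# Sieve of Eratosthenes up to x.
x = 100000
is_prime = bytearray([1]) * (x + 1)
is_prime[0] = is_prime[1] = 0
for n in range(2, int(x**0.5) + 1):
    if is_prime[n]:
        is_prime[n * n::n] = bytearray((x - n * n) // n + 1)
primes = [n for n in range(2, x + 1) if is_prime[n]]

gamma = 0.5772156649015329
M = 0.2614972128476428  # Mertens constant
# Second theorem (2), with the O(1) constant made explicit:
assert abs(sum(1.0 / p for p in primes) - math.log(math.log(x)) - M) < 0.01
# Third theorem in exponentiated form: prod(1 - 1/p) ~ e^{-gamma} / log x.
log_prod = sum(math.log(1 - 1.0 / p) for p in primes)
assert abs(math.exp(log_prod) * math.log(x) - math.exp(-gamma)) < 0.01
```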

Remarkably, these theorems can be proven without the assistance of the prime number theorem

$\displaystyle \sum_{p \leq x} 1 = \frac{x}{\log x} + o( \frac{x}{\log x} ),$

which was proven about two decades after Mertens’ work. (But one can certainly use versions of the prime number theorem with good error term, together with summation by parts, to obtain good estimates on the various errors in Mertens’ theorems.) Roughly speaking, the reason for this is that Mertens’ theorems only require control on the Riemann zeta function ${\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}}$ in the neighbourhood of the pole at ${s=1}$, whereas (as discussed in this previous post) the prime number theorem requires control on the zeta function on (a neighbourhood of) the line ${\{ 1+it: t \in {\bf R} \}}$. Specifically, Mertens’ theorem is ultimately deduced from the Euler product formula

$\displaystyle \zeta(s) = \prod_p (1-\frac{1}{p^s})^{-1}, \ \ \ \ \ (5)$

valid in the region ${\hbox{Re}(s) > 1}$ (which is ultimately a Fourier-Dirichlet transform of the fundamental theorem of arithmetic), and the following crude asymptotics:

Proposition 2 (Simple pole) For ${s}$ sufficiently close to ${1}$ with ${\hbox{Re}(s) > 1}$, we have

$\displaystyle \zeta(s) = \frac{1}{s-1} + O(1) \ \ \ \ \ (6)$

and

$\displaystyle \zeta'(s) = \frac{-1}{(s-1)^2} + O(1).$

Proof: For ${s}$ as in the proposition, we have ${\frac{1}{n^s} = \frac{1}{t^s} + O(\frac{1}{n^2})}$ for any natural number ${n}$ and ${n \leq t \leq n+1}$, and hence

$\displaystyle \frac{1}{n^s} = \int_n^{n+1} \frac{1}{t^s}\ dt + O( \frac{1}{n^2} ).$

Summing in ${n}$ and using the identity ${\int_1^\infty \frac{1}{t^s}\ dt = \frac{1}{s-1}}$, we obtain the first claim. Similarly, we have

$\displaystyle \frac{-\log n}{n^s} = \int_n^{n+1} \frac{-\log t}{t^s}\ dt + O( \frac{\log n}{n^2} ),$

and by summing in ${n}$ and using the identity ${\int_1^\infty \frac{-\log t}{t^s}\ dt = \frac{-1}{(s-1)^2}}$ (the derivative of the previous identity) we obtain the claim. $\Box$
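Proposition 2 can be checked numerically; the Euler-Maclaurin truncation used below to evaluate ${\zeta(s)}$ near ${s=1}$ is a standard device, not taken from the text. The numerics also reveal that the ${O(1)}$ term in (6) tends to the Euler-Mascheroni constant as ${s \rightarrow 1}$, a refinement that will be relevant shortly.

```python
import math

def zeta(s, N=10000):
    # Truncated Dirichlet series plus the first Euler-Maclaurin correction
    # terms; accurate for real s > 0 near 1 (with s != 1).
    tail = N**(1 - s) / (s - 1) + 0.5 * N**(-s) + s * N**(-s - 1) / 12
    return sum(n**(-s) for n in range(1, N)) + tail

gamma = 0.5772156649015329
for eps in (0.1, 0.01, 0.001):
    s = 1 + eps
    # (6): zeta(s) = 1/(s-1) + O(1); numerically the O(1) term is
    # gamma + O(|s-1|).
    assert abs(zeta(s) - 1 / eps - gamma) < eps
```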

The first two of Mertens’ theorems (1), (2) are relatively easy to prove, and imply the third theorem (3) except with ${\gamma}$ replaced by an unspecified absolute constant. To get the specific constant ${\gamma}$ requires a little bit of additional effort. From (4), one might expect that the appearance of ${\gamma}$ arises from the refinement

$\displaystyle \zeta(s) = \frac{1}{s-1} + \gamma + O(|s-1|) \ \ \ \ \ (7)$

that one can obtain to (6). However, it turns out that the connection is not so much with the zeta function, but with the Gamma function, and specifically with the identity ${\Gamma'(1) = - \gamma}$ (which is of course related to (7) through the functional equation for zeta, but can be proven without any reference to zeta functions). More specifically, we have the following asymptotic for the exponential integral:

Proposition 3 (Exponential integral asymptotics) For sufficiently small ${\epsilon}$, one has

$\displaystyle \int_\epsilon^\infty \frac{e^{-t}}{t}\ dt = \log \frac{1}{\epsilon} - \gamma + O(\epsilon).$

A routine integration by parts shows that this asymptotic is equivalent to the identity

$\displaystyle \int_0^\infty e^{-t} \log t\ dt = -\gamma$

which is the identity ${\Gamma'(1)=-\gamma}$ mentioned previously.
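The identity ${\Gamma'(1) = -\gamma}$ is also easy to confirm numerically (a sanity check, not a proof) by taking a central difference of the standard library Gamma function:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

# Central-difference approximation to Gamma'(1); the identity
# Gamma'(1) = -gamma asserts this should be close to -GAMMA.
h = 1e-6
deriv = (math.gamma(1 + h) - math.gamma(1 - h)) / (2 * h)
print(deriv, -GAMMA)
```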

Proof: We start by using the identity ${\frac{1}{i} = \int_0^1 x^{i-1}\ dx}$ to express the harmonic series ${H_n := 1+\frac{1}{2}+\ldots+\frac{1}{n}}$ as

$\displaystyle H_n = \int_0^1 1 + x + \ldots + x^{n-1}\ dx$

or on summing the geometric series

$\displaystyle H_n = \int_0^1 \frac{1-x^n}{1-x}\ dx.$

Since ${\int_0^{1-1/n} \frac{1}{1-x}\ dx = \log n}$, we thus have

$\displaystyle H_n - \log n = \int_0^1 \frac{1_{[1-1/n,1]}(x) - x^n}{1-x}\ dx;$

making the change of variables ${x = 1-\frac{t}{n}}$, this becomes

$\displaystyle H_n - \log n = \int_0^n \frac{1_{[0,1]}(t) - (1-\frac{t}{n})^n}{t}\ dt.$

As ${n \rightarrow \infty}$, ${\frac{1_{[0,1]}(t) - (1-\frac{t}{n})^n}{t}}$ converges pointwise to ${\frac{1_{[0,1]}(t) - e^{-t}}{t}}$ and is pointwise dominated by ${O( e^{-t} )}$. Taking limits as ${n \rightarrow \infty}$ using dominated convergence, we conclude that

$\displaystyle \gamma = \int_0^\infty \frac{1_{[0,1]}(t) - e^{-t}}{t}\ dt,$

or equivalently

$\displaystyle \int_0^\infty \frac{e^{-t} - 1_{[0,\epsilon]}(t)}{t}\ dt = \log \frac{1}{\epsilon} - \gamma.$

The claim then follows by bounding the ${\int_0^\epsilon}$ portion of the integral on the left-hand side. $\Box$
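Both the limit ${H_n - \log n \rightarrow \gamma}$ used in this proof and the asymptotic of Proposition 3 are easy to confirm numerically. Here is a small Python sketch; the truncation point and step count in the quadrature are ad hoc choices, not canonical ones:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

def exp_integral(eps, T=40.0, steps=400_000):
    """Trapezoidal approximation to int_eps^infty e^(-t)/t dt,
    truncated at T (the tail beyond T is negligibly small)."""
    f = lambda t: math.exp(-t) / t
    h = (T - eps) / steps
    total = 0.5 * (f(eps) + f(T)) + sum(f(eps + i * h) for i in range(1, steps))
    return total * h

# H_n - log n converges to gamma:
print(harmonic(100_000) - math.log(100_000))  # close to 0.5772...

# Proposition 3: the integral is log(1/eps) - gamma + O(eps):
eps = 0.01
print(exp_integral(eps) - (math.log(1 / eps) - GAMMA))  # O(eps), here about 0.01
```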

Below the fold I would like to record how Proposition 2 and Proposition 3 imply Theorem 1; the computations are utterly standard, and can be found in most analytic number theory texts, but I wanted to write them down for my own benefit (I always keep forgetting, in particular, how the third of Mertens’ theorems is proven).
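(As a quick empirical check of the third of Mertens' theorems, which in one formulation asserts that ${\prod_{p \leq x} (1-\frac{1}{p})^{-1} = e^\gamma \log x (1+o(1))}$, one can sieve the primes up to a moderate ${x}$ and compare the two sides; the cutoff ${x = 10^6}$ below is an arbitrary choice:)

```python
import math

def primes_upto(n):
    """Sieve of Eratosthenes: all primes up to and including n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

x = 10 ** 6
product = 1.0
for p in primes_upto(x):
    product *= 1.0 / (1.0 - 1.0 / p)

# Mertens' third theorem predicts product ~ e^gamma * log x:
print(product / (math.exp(GAMMA) * math.log(x)))  # ratio close to 1
```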

(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.

The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

| Discrete | Continuous | Limit method |
| --- | --- | --- |
| Ramsey theory | Topological dynamics | Compactness |
| Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle |
| Graph/hypergraph regularity | Measure theory | Graph limits |
| Polynomial regularity | Linear algebra | Ultralimits |
| Structural decompositions | Hilbert space geometry | Ultralimits |
| Fourier analysis | Spectral theory | Direct and inverse limits |
| Quantitative algebraic geometry | Algebraic geometry | Schemes |
| Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits |
| Approximate group theory | Topological group theory | Model theory |

As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

• Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects ${x_n}$ in a common space ${X}$, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object ${\lim_{n \rightarrow \infty} x_n}$, which remains in the same space, and is “close” to many of the original objects ${x_n}$ with respect to the given metric or topology.
• Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects ${x_n}$ in a category ${X}$, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit ${\varinjlim x_n}$ or the inverse limit ${\varprojlim x_n}$ of these objects, which is another object in the same category ${X}$, and is connected to the original objects ${x_n}$ by various morphisms.
• Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects ${x_{\bf n}}$ or of spaces ${X_{\bf n}}$, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, ${X_{\bf n}}$ might be groups and ${x_{\bf n}}$ might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$ or a new space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$, which is still a model of the same language (e.g. if the spaces ${X_{\bf n}}$ were all groups, then the limiting space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ is an abelian group, then the ${X_{\bf n}}$ will also be abelian groups for many ${{\bf n}}$.)

The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects ${x_{\bf n}}$ to all lie in a common space ${X}$ in order to form an ultralimit ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; they are permitted to lie in different spaces ${X_{\bf n}}$; this is more natural in many discrete contexts, e.g. when considering graphs on ${{\bf n}}$ vertices in the limit when ${{\bf n}}$ goes to infinity. Also, no convergence properties on the ${x_{\bf n}}$ are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces ${X_{\bf n}}$ involved are required in order to construct the ultraproduct.

With so few requirements on the objects ${x_{\bf n}}$ or spaces ${X_{\bf n}}$, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two properties which make it particularly useful for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the ${x_{\bf n}}$, will be exactly obeyed by the limit object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is closely analogous to compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”).
To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.

Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.

Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.

The classical foundations of probability theory (discussed for instance in this previous blog post) rest on the notion of a probability space ${(\Omega, {\cal E}, {\bf P})}$ – a space ${\Omega}$ (the sample space) equipped with a ${\sigma}$-algebra ${{\cal E}}$ (the event space), together with a countably additive probability measure ${{\bf P}: {\cal E} \rightarrow [0,1]}$ that assigns a real number in the interval ${[0,1]}$ to each event.

One can generalise the concept of a probability space to a finitely additive probability space, in which the event space ${{\cal E}}$ is now only a Boolean algebra rather than a ${\sigma}$-algebra, and the measure ${{\bf P}}$ is now only finitely additive instead of countably additive, thus ${{\bf P}( E \vee F ) = {\bf P}(E) + {\bf P}(F)}$ when ${E,F}$ are disjoint events. By giving up countable additivity, one loses a fair amount of measure and integration theory, and in particular the notion of the expectation of a random variable becomes problematic (unless the random variable takes only finitely many values). Nevertheless, one can still perform a fair amount of probability theory in this weaker setting.
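A standard concrete example of a finitely (but not countably) additive probability-like measure is natural density on the positive integers: every singleton has density zero, yet the whole set has density one, so countable additivity fails, while finite additivity holds on sets that possess a density. The Python sketch below only approximates densities by counting up to an arbitrary cutoff ${N}$:

```python
def density(pred, N=100_000):
    """Approximate the natural density of {n >= 1 : pred(n)} by counting up to N."""
    return sum(1 for n in range(1, N + 1) if pred(n)) / N

evens = lambda n: n % 2 == 0                       # density 1/2
odd_mult_3 = lambda n: n % 2 == 1 and n % 3 == 0   # density 1/6

# Finite additivity on a disjoint union: the density of the union
# equals the sum of the densities of the two disjoint pieces.
union = lambda n: evens(n) or odd_mult_3(n)
print(density(evens), density(odd_mult_3), density(union))
```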

In this post I would like to describe a further weakening of probability theory, which I will call qualitative probability theory, in which one does not assign a precise numerical probability value ${{\bf P}(E)}$ to each event, but instead merely records whether this probability is zero, one, or something in between. Thus ${{\bf P}}$ is now a function from ${{\cal E}}$ to the set ${\{0, I, 1\}}$, where ${I}$ is a new symbol that replaces all the elements of the open interval ${(0,1)}$. In this setting, one can no longer compute quantitative expressions, such as the mean or variance of a random variable; but one can still talk about whether an event holds almost surely, with positive probability, or with zero probability, and there are still usable notions of independence. (I will refer to classical probability theory as quantitative probability theory, to distinguish it from its qualitative counterpart.)
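To make the three-valued setup concrete, here is a hypothetical Python sketch; the rule table is my own illustration of how qualitative probabilities combine for *independent* events, not a definition taken from the text:

```python
from enum import Enum

class QProb(Enum):
    """Qualitative probability values: zero, strictly between 0 and 1, or one."""
    ZERO = "0"
    I = "I"
    ONE = "1"

def and_independent(p, q):
    """Qualitative probability of (E and F) for independent events E, F:
    a product with a zero factor is zero; a product of two ones is one;
    any other product of values in (0,1] lands strictly between 0 and 1."""
    if p == QProb.ZERO or q == QProb.ZERO:
        return QProb.ZERO
    if p == QProb.ONE and q == QProb.ONE:
        return QProb.ONE
    return QProb.I

print(and_independent(QProb.I, QProb.ONE).value)  # I
```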

The main reason I want to introduce this weak notion of probability theory is that it is well suited to talking about random variables living inside algebraic varieties, even if these varieties are defined over fields other than ${{\bf R}}$ or ${{\bf C}}$. In algebraic geometry one often talks about a “generic” element of a variety ${V}$ defined over a field ${k}$, which does not lie in any specified variety of lower dimension defined over ${k}$. Once ${V}$ has positive dimension, such generic elements do not exist as classical, deterministic ${k}$-points ${x}$ in ${V}$, since of course any such point lies in the ${0}$-dimensional subvariety ${\{x\}}$ of ${V}$. There are of course several established ways to deal with this problem. One way (which one might call the “Weil” approach to generic points) is to extend the field ${k}$ to a sufficiently transcendental extension ${\tilde k}$, in order to locate a sufficient number of generic points in ${V(\tilde k)}$. Another approach (which one might dub the “Zariski” approach to generic points) is to work scheme-theoretically, and interpret a generic point in ${V}$ as being associated to the zero ideal in the function ring of ${V}$. However I want to discuss a third perspective, in which one interprets a generic point not as a deterministic object, but rather as a random variable ${{\bf x}}$ taking values in ${V}$, but which lies in any given lower-dimensional subvariety of ${V}$ with probability zero. This interpretation is intuitive, but difficult to implement in classical probability theory (except perhaps when considering varieties over ${{\bf R}}$ or ${{\bf C}}$) due to the lack of a natural probability measure to place on algebraic varieties; however it works just fine in qualitative probability theory. In particular, the algebraic geometry notion of being “generically true” can now be interpreted probabilistically as an assertion that something is “almost surely true”.

It turns out that just as qualitative random variables may be used to interpret the concept of a generic point, they can also be used to interpret the concept of a type in model theory; the type of a random variable ${{\bf x}}$ is the set of all predicates ${\phi({\bf x})}$ that are almost surely obeyed by ${{\bf x}}$. In contrast, model theorists often adopt a Weil-type approach to types, in which one works with deterministic representatives of a type, which often do not occur in the original structure of interest, but only in a sufficiently saturated extension of that structure (this is the analogue of working in a sufficiently transcendental extension of the base field). However, it seems that (in some cases at least) one can equivalently view types in terms of (qualitative) random variables on the original structure, avoiding the need to extend that structure. (Instead, one reserves the right to extend the sample space of one’s probability theory whenever necessary, as part of the “probabilistic way of thinking” discussed in this previous blog post.) We illustrate this below the fold with two related theorems that I will interpret through the probabilistic lens: the “group chunk theorem” of Weil (and later developed by Hrushovski), and the “group configuration theorem” of Zilber (and again later developed by Hrushovski). For sake of concreteness we will only consider these theorems in the theory of algebraically closed fields, although the results are quite general and can be applied to many other theories studied in model theory.