In graph theory, the recently developed theory of graph limits has proven to be a useful tool for analysing large dense graphs, being a convenient reformulation of the Szemerédi regularity lemma. Roughly speaking, the theory asserts that given any sequence ${G_n = (V_n, E_n)}$ of finite graphs, one can extract a subsequence ${G_{n_j} = (V_{n_j}, E_{n_j})}$ which converges (in a specific sense) to a continuous object known as a “graphon” – a symmetric measurable function ${p\colon [0,1] \times [0,1] \rightarrow [0,1]}$. What “converges” means in this context is that subgraph densities converge to the associated integrals of the graphon ${p}$. For instance, the edge density

$\displaystyle \frac{1}{|V_{n_j}|^2} |E_{n_j}|$

converges to the integral

$\displaystyle \int_0^1 \int_0^1 p(x,y)\ dx dy,$

the triangle density

$\displaystyle \frac{1}{|V_{n_j}|^3} \lvert \{ (v_1,v_2,v_3) \in V_{n_j}^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_{n_j} \} \rvert$

converges to the integral

$\displaystyle \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ dx_1 dx_2 dx_3,$

the four-cycle density

$\displaystyle \frac{1}{|V_{n_j}|^4} \lvert \{ (v_1,v_2,v_3,v_4) \in V_{n_j}^4: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_4\}, \{v_4,v_1\} \in E_{n_j} \} \rvert$

converges to the integral

$\displaystyle \int_0^1 \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_4) p(x_4,x_1)\ dx_1 dx_2 dx_3 dx_4,$

and so forth. One can use graph limits to prove many results in graph theory that were traditionally proven using the regularity lemma, such as the triangle removal lemma, and can also reduce many asymptotic graph theory problems to continuous problems involving multilinear integrals (although the latter problems are not necessarily easy to solve!). See this text of Lovasz for a detailed study of graph limits and their applications.
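As a quick numerical sanity check (my addition, not part of the original discussion), one can verify these limit statements for the simplest graphon, the constant function ${p \equiv 1/2}$, which is the limit of Erdős–Rényi random graphs: the sampled edge and triangle densities should approach the integrals ${\int\int p = 1/2}$ and ${\int\int\int p(x_1,x_2)p(x_2,x_3)p(x_3,x_1) = 1/8}$. An illustrative Python sketch:

```python
import random

random.seed(1)
n, p = 120, 0.5

# sample an Erdos-Renyi graph G(n, p); its graphon limit is the constant function p
adj = [[False] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            adj[i][j] = adj[j][i] = True

# edge density over ordered pairs, matching the normalisation |E|/|V|^2
edge = sum(adj[i][j] for i in range(n) for j in range(n)) / n**2

# triangle density over ordered triples, matching the normalisation above
tri = sum(adj[i][j] and adj[j][k] and adj[k][i]
          for i in range(n) for j in range(n) for k in range(n)) / n**3

assert abs(edge - p) < 0.05       # close to the integral of p over [0,1]^2
assert abs(tri - p**3) < 0.05     # close to the triple integral of p p p
```

(Edges are counted as ordered pairs here, so that the normalisations agree exactly with the formulas above.)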

One can also express graph limits (and more generally hypergraph limits) in the language of nonstandard analysis (or of ultraproducts); see for instance this paper of Elek and Szegedy, Section 6 of this previous blog post, or this paper of Towsner. (In this post we assume some familiarity with nonstandard analysis, as reviewed for instance in the previous blog post.) Here, one starts as before with a sequence ${G_n = (V_n,E_n)}$ of finite graphs, and then takes an ultraproduct (with respect to some arbitrarily chosen non-principal ultrafilter ${\alpha \in\beta {\bf N} \backslash {\bf N}}$) to obtain a nonstandard graph ${G_\alpha = (V_\alpha,E_\alpha)}$, where ${V_\alpha = \prod_{n\rightarrow \alpha} V_n}$ is the ultraproduct of the ${V_n}$, and similarly for the ${E_\alpha}$. The set ${E_\alpha}$ can then be viewed as a symmetric subset of ${V_\alpha \times V_\alpha}$ which is measurable with respect to the Loeb ${\sigma}$-algebra ${{\mathcal L}_{V_\alpha \times V_\alpha}}$ of the product ${V_\alpha \times V_\alpha}$ (see this previous blog post for the construction of Loeb measure). A crucial point is that this ${\sigma}$-algebra is larger than the product ${{\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha}}$ of the Loeb ${\sigma}$-algebra of the individual vertex set ${V_\alpha}$. This leads to a decomposition

$\displaystyle 1_{E_\alpha} = p + e$

where the “graphon” ${p}$ is the orthogonal projection of ${1_{E_\alpha}}$ onto ${L^2( {\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha} )}$, and the “regular error” ${e}$ is orthogonal to all product sets ${A \times B}$ for ${A, B \in {\mathcal L}_{V_\alpha}}$. The graphon ${p\colon V_\alpha \times V_\alpha \rightarrow [0,1]}$ then captures the statistics of the nonstandard graph ${G_\alpha}$, in exact analogy with the more traditional graph limits: for instance, the edge density

$\displaystyle \hbox{st} \frac{1}{|V_\alpha|^2} |E_\alpha|$

(or equivalently, the limit of the ${\frac{1}{|V_n|^2} |E_n|}$ along the ultrafilter ${\alpha}$) is equal to the integral

$\displaystyle \int_{V_\alpha} \int_{V_\alpha} p(x,y)\ d\mu_{V_\alpha}(x) d\mu_{V_\alpha}(y)$

where ${d\mu_V}$ denotes Loeb measure on a nonstandard finite set ${V}$; the triangle density

$\displaystyle \hbox{st} \frac{1}{|V_\alpha|^3} \lvert \{ (v_1,v_2,v_3) \in V_\alpha^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_\alpha \} \rvert$

(or equivalently, the limit along ${\alpha}$ of the triangle densities of ${E_n}$) is equal to the integral

$\displaystyle \int_{V_\alpha} \int_{V_\alpha} \int_{V_\alpha} p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ d\mu_{V_\alpha}(x_1) d\mu_{V_\alpha}(x_2) d\mu_{V_\alpha}(x_3),$

and so forth. Note that with this construction, the graphon ${p}$ is living on the Cartesian square of an abstract probability space ${V_\alpha}$, which is likely to be inseparable; but it is possible to cut down the Loeb ${\sigma}$-algebra on ${V_\alpha}$ to the minimal countable ${\sigma}$-algebra for which ${p}$ remains measurable (up to null sets), and then one can identify ${V_\alpha}$ with ${[0,1]}$, bringing this construction of a graphon in line with the traditional notion of a graphon. (See Remark 5 of this previous blog post for more discussion of this point.)

Additive combinatorics, which studies things like the additive structure of finite subsets ${A}$ of an abelian group ${G = (G,+)}$, has many analogies and connections with asymptotic graph theory; in particular, there is the arithmetic regularity lemma of Green which is analogous to the graph regularity lemma of Szemerédi. (There is also a higher order arithmetic regularity lemma analogous to hypergraph regularity lemmas, but this is not the focus of the discussion here.) Given this, it is natural to suspect that there is a theory of “additive limits” for large additive sets of bounded doubling, analogous to the theory of graph limits for large dense graphs. The purpose of this post is to record a candidate for such an additive limit. This limit can be used as a substitute for the arithmetic regularity lemma in certain results in additive combinatorics, at least if one is willing to settle for qualitative results rather than quantitative ones; I give a few examples of this below the fold.

It seems that to allow for the most flexible and powerful manifestation of this theory, it is convenient to use the nonstandard formulation (among other things, it allows for full use of the transfer principle, whereas a more traditional limit formulation would only allow for a transfer of those quantities continuous with respect to the notion of convergence). Here, the analogue of a nonstandard graph is an ultra approximate group ${A_\alpha}$ in a nonstandard group ${G_\alpha = \prod_{n \rightarrow \alpha} G_n}$, defined as the ultraproduct of finite ${K}$-approximate groups ${A_n \subset G_n}$ for some standard ${K}$. (A ${K}$-approximate group ${A_n}$ is a symmetric set containing the origin such that ${A_n+A_n}$ can be covered by ${K}$ or fewer translates of ${A_n}$.) We then let ${O(A_\alpha)}$ be the external subgroup of ${G_\alpha}$ generated by ${A_\alpha}$; equivalently, ${O(A_\alpha)}$ is the union of ${A_\alpha^m}$ over all standard ${m}$. This space has a Loeb measure ${\mu_{O(A_\alpha)}}$, defined by setting

$\displaystyle \mu_{O(A_\alpha)}(E_\alpha) := \hbox{st} \frac{|E_\alpha|}{|A_\alpha|}$

whenever ${E_\alpha}$ is an internal subset of ${A_\alpha^m}$ for any standard ${m}$, and extended to a countably additive measure; the arguments in Section 6 of this previous blog post can be easily modified to give a construction of this measure.

The Loeb measure ${\mu_{O(A_\alpha)}}$ is a translation invariant measure on ${O(A_{\alpha})}$, normalised so that ${A_\alpha}$ has Loeb measure one. As such, one should think of ${O(A_\alpha)}$ as being analogous to a locally compact abelian group equipped with a Haar measure. It should be noted though that ${O(A_\alpha)}$ is not actually a locally compact group with Haar measure, for two reasons:

• There is not an obvious topology on ${O(A_\alpha)}$ that makes it simultaneously locally compact, Hausdorff, and ${\sigma}$-compact. (One can get one or two out of three without difficulty, though.)
• The addition operation ${+\colon O(A_\alpha) \times O(A_\alpha) \rightarrow O(A_\alpha)}$ is not measurable from the product Loeb algebra ${{\mathcal L}_{O(A_\alpha)} \times {\mathcal L}_{O(A_\alpha)}}$ to ${{\mathcal L}_{O(A_\alpha)}}$. Instead, it is measurable from the larger Loeb algebra ${{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}}$ to ${{\mathcal L}_{O(A_\alpha)}}$ (compare with the analogous situation for nonstandard graphs).

Nevertheless, the analogy is a useful guide for the arguments that follow.

Let ${L(O(A_\alpha))}$ denote the space of bounded Loeb measurable functions ${f\colon O(A_\alpha) \rightarrow {\bf C}}$ (modulo almost everywhere equivalence) that are supported on ${A_\alpha^m}$ for some standard ${m}$; this is a complex algebra with respect to pointwise multiplication. There is also a convolution operation ${\star\colon L(O(A_\alpha)) \times L(O(A_\alpha)) \rightarrow L(O(A_\alpha))}$, defined by setting

$\displaystyle \hbox{st} f \star \hbox{st} g(x) := \hbox{st} \frac{1}{|A_\alpha|} \sum_{y \in A_\alpha^m} f(y) g(x-y)$

whenever ${f\colon A_\alpha^m \rightarrow {}^* {\bf C}}$, ${g\colon A_\alpha^l \rightarrow {}^* {\bf C}}$ are bounded nonstandard functions (extended by zero to all of ${O(A_\alpha)}$), and then extending to arbitrary elements of ${L(O(A_\alpha))}$ by density. Equivalently, ${f \star g}$ is the pushforward of the ${{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}}$-measurable function ${(x,y) \mapsto f(x) g(y)}$ under the map ${(x,y) \mapsto x+y}$.
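As an illustrative finitary sketch (my addition, not from the original text), one can compute this normalised convolution for the interval approximate group ${A_n = \{x \in {\bf Z}: |x| \leq N\}}$: the convolution ${1_{A_n} \star 1_{A_n}}$ is a discrete tent function, matching the limiting convolution ${1_{[-1,1]} \star 1_{[-1,1]}}$ after rescaling. The helper name `conv` is of course hypothetical.

```python
N = 300
A = range(-N, N + 1)                    # the 2-approximate group A_n = {x : |x| <= N}
one_A = {x: 1 for x in A}

def conv(f, g):
    """Normalised convolution: (f * g)(x) = (1/|A_n|) * sum_y f(y) g(x - y)."""
    size = 2 * N + 1                    # |A_n|, the normalising factor
    out = {}
    for x in range(-2 * N, 2 * N + 1):
        out[x] = sum(v * g.get(x - y, 0) for y, v in f.items()) / size
    return out

c = conv(one_A, one_A)

assert abs(c[0] - 1.0) < 1e-9                       # every y in A_n represents 0
assert abs(c[N] - (N + 1) / (2 * N + 1)) < 1e-9     # the tent shape at x = N
assert c[2 * N] > 0                                 # the support is 2A_n = [-2N, 2N]
```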

The basic structural theorem is then as follows.

Theorem 1 (Kronecker factor) Let ${A_\alpha}$ be an ultra approximate group. Then there exists a (standard) locally compact abelian group ${G}$ of the form

$\displaystyle G = {\bf R}^d \times {\bf Z}^m \times T$

for some standard ${d,m}$ and some compact abelian group ${T}$, equipped with a Haar measure ${\mu_G}$ and a measurable homomorphism ${\pi\colon O(A_\alpha) \rightarrow G}$ (using the Loeb ${\sigma}$-algebra on ${O(A_\alpha)}$ and the Borel ${\sigma}$-algebra on ${G}$), with the following properties:

• (i) ${\pi}$ is surjective, and ${\mu_G}$ is the pushforward of Loeb measure ${\mu_{O(A_\alpha)}}$ by ${\pi}$.
• (ii) There exist sets ${\{0\} \subset U_0 \subset K_0 \subset G}$ with ${U_0}$ open and ${K_0}$ compact, such that

$\displaystyle \pi^{-1}(U_0) \subset 4A_\alpha \subset \pi^{-1}(K_0). \ \ \ \ \ (1)$

• (iii) Whenever ${K \subset U \subset G}$ with ${K}$ compact and ${U}$ open, there exists a nonstandard finite set ${B}$ such that

$\displaystyle \pi^{-1}(K) \subset B \subset \pi^{-1}(U). \ \ \ \ \ (2)$

• (iv) If ${f, g \in L}$, then we have the convolution formula

$\displaystyle f \star g = \pi^*( (\pi_* f) \star (\pi_* g) ) \ \ \ \ \ (3)$

where ${\pi_* f,\pi_* g}$ are the pushforwards of ${f,g}$ to ${L^2(G, \mu_G)}$, the convolution ${\star}$ on the right-hand side is convolution using ${\mu_G}$, and ${\pi^*}$ is the pullback map from ${L^2(G,\mu_G)}$ to ${L^2(O(A_\alpha), \mu_{O(A_\alpha)})}$. In particular, if ${\pi_* f = 0}$, then ${f \star g = 0}$ for all ${g \in L}$.

One can view the locally compact abelian group ${G}$ as a “model” or “Kronecker factor” for the ultra approximate group ${A_\alpha}$ (in close analogy with the Kronecker factor from ergodic theory). In the case that ${A_\alpha}$ is a genuine nonstandard finite group rather than an ultra approximate group, the non-compact components ${{\bf R}^d \times {\bf Z}^m}$ of the Kronecker group ${G}$ are trivial, and this theorem was implicitly established by Szegedy. The compact group ${T}$ is quite large, and in particular is likely to be inseparable; but as with the case of graphons, when one is only studying at most countably many functions ${f}$, one can cut down the size of this group to be separable (or equivalently, second countable or metrisable) if desired, so one often works with a “reduced Kronecker factor” which is a quotient of the full Kronecker factor ${G}$.

Given any sequence of uniformly bounded functions ${f_n\colon A_n^m \rightarrow {\bf C}}$ for some fixed ${m}$, we can view the function ${f \in L^2(G,\mu_G)}$ defined by

$\displaystyle f := \pi_* \hbox{st} \lim_{n \rightarrow \alpha} f_n \ \ \ \ \ (4)$

as an “additive limit” of the ${f_n}$, in much the same way that graphons ${p\colon V_\alpha \times V_\alpha \rightarrow [0,1]}$ are limits of the indicator functions ${1_{E_n}\colon V_n \times V_n \rightarrow \{0,1\}}$. The additive limits capture some of the statistics of the ${f_n}$, for instance the normalised means

$\displaystyle \frac{1}{|A_n|} \sum_{x \in A_n^m} f_n(x)$

converge (along the ultrafilter ${\alpha}$) to the mean

$\displaystyle \int_G f(x)\ d\mu_G(x),$

and for three sequences ${f_n,g_n,h_n\colon A_n^m \rightarrow {\bf C}}$ of functions, the normalised correlation

$\displaystyle \frac{1}{|A_n|^2} \sum_{x,y \in A_n^m} f_n(x) g_n(y) h_n(x+y)$

converges along ${\alpha}$ to the correlation

$\displaystyle \int_G \int_G f(x) g(y) h(x+y)\ d\mu_G(x) d\mu_G(y),$

the normalised ${U^2}$ Gowers norm

$\displaystyle ( \frac{1}{|A_n|^3} \sum_{x,y,z,w \in A_n^m: x+w=y+z} f_n(x) \overline{f_n(y)} \overline{f_n(z)} f_n(w))^{1/4}$

converges along ${\alpha}$ to the ${U^2}$ Gowers norm

$\displaystyle ( \int_{G \times G \times G} f(x) \overline{f(y)} \overline{f(z)} f(x+y-z)\ d\mu_G(x) d\mu_G(y) d\mu_G(z))^{1/4}$

and so forth. We caution however that some correlations that involve evaluating more than one function at the same point will not necessarily be preserved in the additive limit; for instance the normalised ${\ell^2}$ norm

$\displaystyle (\frac{1}{|A_n|} \sum_{x \in A_n^m} |f_n(x)|^2)^{1/2}$

does not necessarily converge to the ${L^2}$ norm

$\displaystyle (\int_G |f(x)|^2\ d\mu_G(x))^{1/2},$

but can converge instead to a larger quantity, due to the presence of the orthogonal projection ${\pi_*}$ in the definition (4) of ${f}$.

An important special case of an additive limit occurs when the functions ${f_n\colon A_n^m \rightarrow {\bf C}}$ involved are indicator functions ${f_n = 1_{E_n}}$ of some subsets ${E_n}$ of ${A_n^m}$. The additive limit ${f \in L^2(G,\mu_G)}$ does not necessarily remain an indicator function, but instead takes values in ${[0,1]}$ (much as a graphon ${p}$ takes values in ${[0,1]}$ even though the original indicators ${1_{E_n}}$ take values in ${\{0,1\}}$). The convolution ${f \star f\colon G \rightarrow [0,1]}$ is then the ultralimit of the normalised convolutions ${\frac{1}{|A_n|} 1_{E_n} \star 1_{E_n}}$; in particular, the measure of the support of ${f \star f}$ provides a lower bound on the limiting normalised cardinality ${\frac{1}{|A_n|} |E_n + E_n|}$ of a sumset. In many situations this lower bound is an equality, but this is not necessarily the case, because the sumset ${2E_n = E_n + E_n}$ could contain a large number of elements which have very few (${o(|A_n|)}$) representations as the sum of two elements of ${E_n}$, and in the limit these portions of the sumset fall outside of the support of ${f \star f}$. (One can think of the support of ${f \star f}$ as describing the “essential” sumset of ${2E_n = E_n + E_n}$, discarding those elements that have only very few representations.) Similarly for higher convolutions of ${f}$. Thus one can use additive limits to partially control the growth ${k E_n}$ of iterated sumsets of subsets ${E_n}$ of approximate groups ${A_n}$, in the regime where ${k}$ stays bounded and ${n}$ goes to infinity.

Theorem 1 can be proven by Fourier-analytic means (combined with Freiman’s theorem from additive combinatorics), and we will do so below the fold. For now, we give some illustrative examples of additive limits.

Example 1 (Bohr sets) We take ${A_n}$ to be the intervals ${A_n := \{ x \in {\bf Z}: |x| \leq N_n \}}$, where ${N_n}$ is a sequence going to infinity; these are ${2}$-approximate groups for all ${n}$. Let ${\theta}$ be an irrational real number, let ${I}$ be an interval in ${{\bf R}/{\bf Z}}$, and for each natural number ${n}$ let ${B_n}$ be the Bohr set

$\displaystyle B_n := \{ x \in A_n: \theta x \hbox{ mod } 1 \in I \}.$

In this case, the (reduced) Kronecker factor ${G}$ can be taken to be the infinite cylinder ${{\bf R} \times {\bf R}/{\bf Z}}$ with the usual Lebesgue measure ${\mu_G}$. The additive limits of ${1_{A_n}}$ and ${1_{B_n}}$ end up being ${1_A}$ and ${1_B}$, where ${A}$ is the finite cylinder

$\displaystyle A := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]\}$

and ${B}$ is the rectangle

$\displaystyle B := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]; t \in I \}.$

Geometrically, one should think of ${A_n}$ and ${B_n}$ as being wrapped around the cylinder ${{\bf R} \times {\bf R}/{\bf Z}}$ via the homomorphism ${x \mapsto (\frac{x}{N_n}, \theta x \hbox{ mod } 1)}$, and then one sees that ${B_n}$ is converging in some normalised weak sense to ${B}$, and similarly for ${A_n}$ and ${A}$. In particular, the additive limit predicts the growth rate of the iterated sumsets ${kB_n}$ to be quadratic in ${k}$ until ${k|I|}$ becomes comparable to ${1}$, at which point the growth becomes linear, in the regime where ${k}$ is bounded and ${n}$ is large.

If ${\theta = \frac{p}{q}}$ were rational instead of irrational, then one would need to replace ${{\bf R}/{\bf Z}}$ by the finite subgroup ${\frac{1}{q}{\bf Z}/{\bf Z}}$ here.
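The predicted quadratic growth of ${|kB_n|}$ (for ${k|I|}$ small) can be checked numerically. The following Python sketch, with the hypothetical parameters ${N = 5000}$, ${\theta = \sqrt{2}}$ and ${|I| = 0.05}$ (all my own choices, for illustration only), computes the first few iterated sumsets of a Bohr set and compares with the ${k^2}$ prediction:

```python
import math

N, theta, iota = 5000, math.sqrt(2), 0.05

# the Bohr set B_n = { |x| <= N : theta * x mod 1 in [0, iota) }
B = {x for x in range(-N, N + 1) if (theta * x) % 1.0 < iota}

def sumset(S, T):
    return {s + t for s in S for t in T}

B2 = sumset(B, B)          # 2B
B3 = sumset(B2, B)         # 3B

# quadratic growth |kB| ~ k^2 |B| while k * iota is small, as the cylinder picture predicts
assert 3.2 < len(B2) / len(B) < 4.6
assert 7.0 < len(B3) / len(B) < 10.5
```

(The ratios fall slightly short of exactly ${4}$ and ${9}$ because of boundary elements with few representations, the same phenomenon discussed above for essential sumsets.)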

Example 2 (Structured subsets of progressions) We take ${A_n}$ to be the rank two progression

$\displaystyle A_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|, |b| \leq N_n \},$

where ${N_n}$ is a sequence going to infinity; these are ${4}$-approximate groups for all ${n}$. Let ${B_n}$ be the subset

$\displaystyle B_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|^2 + |b|^2 \leq N_n^2 \}.$

Then the (reduced) Kronecker factor can be taken to be ${G = {\bf R}^2}$ with Lebesgue measure ${\mu_G}$, and the additive limits of the ${1_{A_n}}$ and ${1_{B_n}}$ are then ${1_A}$ and ${1_B}$, where ${A}$ is the square

$\displaystyle A := \{ (a,b) \in {\bf R}^2: |a|, |b| \leq 1 \}$

and ${B}$ is the disk

$\displaystyle B := \{ (a,b) \in {\bf R}^2: a^2+b^2 \leq 1 \}.$

Geometrically, the picture is similar to the Bohr set one, except now one uses a Freiman homomorphism ${a + b N_n^2 \mapsto (\frac{a}{N_n}, \frac{b}{N_n})}$ for ${a,b = O( N_n )}$ to embed the original sets ${A_n, B_n}$ into the plane ${{\bf R}^2}$. In particular, one now expects the growth rate of the iterated sumsets ${k A_n}$ and ${k B_n}$ to be quadratic in ${k}$, in the regime where ${k}$ is bounded and ${n}$ is large.

Example 3 (Dissociated sets) Let ${d}$ be a fixed natural number, and take

$\displaystyle A_n = \{0, v_1,\dots,v_d,-v_1,\dots,-v_d \}$

where ${v_1,\dots,v_d}$ are randomly chosen elements of a large cyclic group ${{\bf Z}/p_n{\bf Z}}$, where ${p_n}$ is a sequence of primes going to infinity. These are ${O(d)}$-approximate groups. The (reduced) Kronecker factor ${G}$ can (almost surely) then be taken to be ${{\bf Z}^d}$ with counting measure, and the additive limit of ${1_{A_n}}$ is ${1_A}$, where ${A = \{ 0, e_1,\dots,e_d,-e_1,\dots,-e_d\}}$ and ${e_1,\dots,e_d}$ is the standard basis of ${{\bf Z}^d}$. In particular, the growth rates of ${k A_n}$ should grow approximately like ${k^d}$ for ${k}$ bounded and ${n}$ large.
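One can test this prediction directly: in the limit, ${kA}$ is the ${\ell^1}$ ball of radius ${k}$ in ${{\bf Z}^d}$, of size ${2k^2+2k+1}$ when ${d=2}$, and a random finitary model in a large cyclic group should (almost surely) exhibit the same sizes for bounded ${k}$. An illustrative Python sketch with ${d = 2}$ (my choice of parameters, not from the original text):

```python
import random

# the additive limit: A = {0, +-e_1, +-e_2} in Z^2; kA is the l^1 ball of radius k
A = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}

def sumset2(S, T):
    return {(s0 + t0, s1 + t1) for (s0, s1) in S for (t0, t1) in T}

kA, sizes = A, [len(A)]
for _ in range(2):
    kA = sumset2(kA, A)
    sizes.append(len(kA))
assert sizes == [5, 13, 25]            # |kA| = 2k^2 + 2k + 1: growth like k^d with d = 2

# the finitary model: random +-v_1, +-v_2 in Z/pZ matches the limit for bounded k
random.seed(0)
p = 10**9 + 7
v1, v2 = random.randrange(1, p), random.randrange(1, p)
An = {0, v1, p - v1, v2, p - v2}

kAn = An
for _ in range(2):
    kAn = {(a + b) % p for a in kAn for b in An}
assert len(kAn) == 25                  # (almost surely) no collisions, so |3A_n| = |3A|
```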

Example 4 (Random subsets of groups) Let ${A_n = G_n}$ be a sequence of finite additive groups whose order is going to infinity. Let ${B_n}$ be a random subset of ${G_n}$ of some fixed density ${0 \leq \lambda \leq 1}$. Then (almost surely) the Kronecker factor here can be reduced all the way to the trivial group ${\{0\}}$, and the additive limit of the ${1_{B_n}}$ is the constant function ${\lambda}$. The convolutions ${\frac{1}{|G_n|} 1_{B_n} * 1_{B_n}}$ then converge in the ultralimit (modulo almost everywhere equivalence) to the pullback of ${\lambda^2}$; this reflects the fact that ${(1-o(1))|G_n|}$ of the elements of ${G_n}$ can be represented as the sum of two elements of ${B_n}$ in ${(\lambda^2 + o(1)) |G_n|}$ ways. In particular, ${B_n+B_n}$ occupies a proportion ${1-o(1)}$ of ${G_n}$.
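A quick numerical illustration of this example (my addition, with a modest cyclic group and density ${\lambda = 0.3}$ chosen for illustration):

```python
import random

random.seed(0)
n, lam = 1009, 0.3                     # G_n = Z/1009Z, density lambda = 0.3
B = {x for x in range(n) if random.random() < lam}

reps = [0] * n                         # representations of each z as b1 + b2 with b1, b2 in B
for b1 in B:
    for b2 in B:
        reps[(b1 + b2) % n] += 1

assert sum(reps) == len(B) ** 2        # |B|^2 pairs in total, about lam^2 * n per element
assert sum(1 for r in reps if r > 0) >= 0.99 * n   # B + B is essentially all of G_n
```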

Example 5 (Trigonometric series) Take ${A_n = G_n = {\bf Z}/p_n {\bf Z}}$ for a sequence ${p_n}$ of primes going to infinity, and for each ${n}$ let ${\xi_{n,1},\xi_{n,2},\dots}$ be an infinite sequence of frequencies chosen uniformly and independently from ${{\bf Z}/p_n{\bf Z}}$. Let ${f_n\colon {\bf Z}/p_n{\bf Z} \rightarrow {\bf C}}$ denote the random trigonometric series

$\displaystyle f_n(x) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i \xi_{n,j} x / p_n }.$

Then (almost surely) we can take the reduced Kronecker factor ${G}$ to be the infinite torus ${({\bf R}/{\bf Z})^{\bf N}}$ (with the Haar probability measure ${\mu_G}$), and the additive limit of the ${f_n}$ then becomes the function ${f\colon ({\bf R}/{\bf Z})^{\bf N} \rightarrow {\bf C}}$ defined by the formula

$\displaystyle f( (x_j)_{j=1}^\infty ) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i x_j}.$

In fact, the pullback ${\pi^* f}$ is the ultralimit of the ${f_n}$. As such, for any standard exponent ${1 \leq q < \infty}$, the normalised ${l^q}$ norm

$\displaystyle (\frac{1}{p_n} \sum_{x \in {\bf Z}/p_n{\bf Z}} |f_n(x)|^q)^{1/q}$

can be seen to converge to the limit

$\displaystyle (\int_{({\bf R}/{\bf Z})^{\bf N}} |f(x)|^q\ d\mu_G(x))^{1/q}.$

The reader is invited to consider combinations of the above examples, e.g. random subsets of Bohr sets, to get a sense of the general case of Theorem 1.

It is likely that this theorem can be extended to the noncommutative setting, using the noncommutative Freiman theorem of Emmanuel Breuillard, Ben Green, and myself, but I have not attempted to do so here (see though this recent preprint of Anush Tserunyan for some related explorations); in a separate direction, there should be extensions that can control higher Gowers norms, in the spirit of the work of Szegedy.

Note: the arguments below will presume some familiarity with additive combinatorics and with nonstandard analysis, and will be a little sketchy in places.

One of the first basic theorems in group theory is Cayley’s theorem, which links abstract finite groups with concrete finite groups (otherwise known as permutation groups).

Theorem 1 (Cayley’s theorem) Let ${G}$ be a group of some finite order ${n}$. Then ${G}$ is isomorphic to a subgroup ${\tilde G}$ of the symmetric group ${S_n}$ on ${n}$ elements ${\{1,\dots,n\}}$. Furthermore, this subgroup is simply transitive: given two elements ${x,y}$ of ${\{1,\dots,n\}}$, there is precisely one element ${\sigma}$ of ${\tilde G}$ such that ${\sigma(x)=y}$.

One can therefore think of ${S_n}$ as a sort of “universal” group that contains (up to isomorphism) all the possible groups of order ${n}$.

Proof: The group ${G}$ acts on itself by multiplication on the left, thus each element ${g \in G}$ may be identified with a permutation ${\sigma_g: G \rightarrow G}$ on ${G}$ given by the map ${\sigma_g(h) := gh}$. This can be easily verified to identify ${G}$ with a simply transitive permutation group on ${G}$. The claim then follows by arbitrarily identifying ${G}$ with ${\{1,\dots,n\}}$. $\Box$

More explicitly, the permutation group ${\tilde G}$ arises by arbitrarily enumerating ${G}$ as ${\{s_1,\dots,s_n\}}$ and then associating to each group element ${g \in G}$ the permutation ${\sigma_g: \{1,\dots,n\} \rightarrow \{1,\dots,n\}}$ defined by the formula

$\displaystyle g s_i = s_{\sigma_g(i)}.$

The simply transitive group ${\tilde G}$ given by Cayley’s theorem is not unique, due to the arbitrary choice of identification of ${G}$ with ${\{1,\dots,n\}}$, but is unique up to conjugation by an element of ${S_n}$. On the other hand, it is easy to see that every simply transitive subgroup of ${S_n}$ is of order ${n}$, and that two such groups are isomorphic if and only if they are conjugate by an element of ${S_n}$. Thus Cayley’s theorem in fact identifies the moduli space of groups of order ${n}$ (up to isomorphism) with the simply transitive subgroups of ${S_n}$ (up to conjugacy by elements of ${S_n}$).
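For a concrete illustration (my addition, not part of the original argument), the following Python sketch carries out this construction for ${G = S_3}$, viewed abstractly as a group of order ${6}$, and verifies that ${\tilde G \leq S_6}$ is closed under composition and simply transitive:

```python
from itertools import permutations

def comp(a, b):
    # composition of permutations written as tuples: (a o b)[i] = a[b[i]]
    return tuple(a[b[i]] for i in range(len(a)))

G = list(permutations(range(3)))       # S_3, playing the role of an abstract group of order n = 6
n = len(G)
index = {g: i for i, g in enumerate(G)}

def sigma(g):
    # left multiplication by g permutes the enumeration s_0, ..., s_{n-1} of G
    return tuple(index[comp(g, s)] for s in G)

Gtilde = {sigma(g) for g in G}
assert len(Gtilde) == n                                            # the embedding is injective
assert all(comp(a, b) in Gtilde for a in Gtilde for b in Gtilde)   # closed, hence a subgroup of S_n

# simple transitivity: for each x, y there is exactly one element of Gtilde mapping x to y
for x in range(n):
    for y in range(n):
        assert sum(1 for s in Gtilde if s[x] == y) == 1
```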

One can generalise Cayley’s theorem to groups of infinite order without much difficulty. But in this post, I would like to note an (easy) generalisation of Cayley’s theorem in a different direction, in which the group ${G}$ is no longer assumed to be of order ${n}$, but rather to have an index ${n}$ subgroup that is isomorphic to a fixed group ${H}$. The generalisation is:

Theorem 2 (Cayley’s theorem for ${H}$-sets) Let ${H}$ be a group, and let ${G}$ be a group that contains an index ${n}$ subgroup isomorphic to ${H}$. Then ${G}$ is isomorphic to a subgroup ${\tilde G}$ of the semidirect product ${S_n \ltimes H^n}$, defined explicitly as the set of tuples ${(\sigma, (h_i)_{i=1}^n)}$ with product

$\displaystyle (\sigma, (h_i)_{i=1}^n) (\rho, (k_i)_{i=1}^n) := (\sigma \circ \rho, (h_{\rho(i)} k_i)_{i=1}^n )$

and inverse

$\displaystyle (\sigma, (h_i)_{i=1}^n)^{-1} := (\sigma^{-1}, (h_{\sigma^{-1}(i)}^{-1})_{i=1}^n).$

(This group is a wreath product of ${H}$ with ${S_n}$, and is sometimes denoted ${H \wr S_n}$, or more precisely ${H \wr_{\{1,\dots,n\}} S_n}$.) Furthermore, ${\tilde G}$ is simply transitive in the following sense: given any two elements ${x,y}$ of ${\{1,\dots,n\}}$ and ${h,k \in H}$, there is precisely one ${(\sigma, (h_i)_{i=1}^n)}$ in ${\tilde G}$ such that ${\sigma(x)=y}$ and ${k = h_x h}$.

Of course, Theorem 1 is the special case of Theorem 2 when ${H}$ is trivial. This theorem allows one to view ${S_n \ltimes H^n}$ as a “universal” group for modeling all groups containing a copy of ${H}$ as an index ${n}$ subgroup, in exactly the same way that ${S_n}$ is a universal group for modeling groups of order ${n}$. This observation is not at all deep, but I had not seen it before, so I thought I would record it here. (EDIT: as pointed out in comments, this is a slight variant of the universal embedding theorem of Krasner and Kaloujnine, which covers the case when ${H}$ is normal, in which case one can embed ${G}$ into the wreath product ${H \wr G/H}$, which is a subgroup of ${H \wr S_n}$.)

Proof: The basic idea here is to replace the category of sets in Theorem 1 by the category of ${H}$-sets, by which we mean sets ${X}$ with a right-action of the group ${H}$. A morphism between two ${H}$-sets ${X,Y}$ is a function ${f: X \rightarrow Y}$ which respects the right action of ${H}$, thus ${f(x)h = f(xh)}$ for all ${x \in X}$ and ${h \in H}$.

Observe that if ${G}$ contains a copy of ${H}$ as a subgroup, then one can view ${G}$ as an ${H}$-set, using the right-action of ${H}$ (which we identify with the indicated subgroup of ${G}$). The left action of ${G}$ on itself commutes with the right-action of ${H}$, and so we can represent ${G}$ by ${H}$-set automorphisms on the ${H}$-set ${G}$.

As ${H}$ has index ${n}$ in ${G}$, we see that ${G}$ is (non-canonically) isomorphic (as an ${H}$-set) to the ${H}$-set ${\{1,\dots,n\} \times H}$ with the obvious right action of ${H}$: ${(i,h) k := (i,hk)}$. It is easy to see that the group of ${H}$-set automorphisms of ${\{1,\dots,n\} \times H}$ can be identified with ${S_n \ltimes H^n}$, with the latter group acting on the former ${H}$-set by the rule

$\displaystyle (\sigma, (h_i)_{i=1}^n) (i,h) := (\sigma(i), h_i h)$

(it is routine to verify that this is indeed an action of ${S_n \ltimes H^n}$ by ${H}$-set automorphisms). It is then a routine matter to verify the claims (the simple transitivity of ${\tilde G}$ follows from the simple transitivity of the action of ${G}$ on itself). $\Box$

More explicitly, the group ${\tilde G}$ arises by arbitrarily enumerating the left-cosets of ${H}$ in ${G}$ as ${\{s_1H,\dots,s_nH\}}$ and then associating to each group element ${g \in G}$ the element ${(\sigma_g, (h_{g,i})_{i=1}^n )}$, where the permutation ${\sigma_g: \{1,\dots,n\} \rightarrow \{1,\dots,n\}}$ and the elements ${h_{g,i} \in H}$ are defined by the formula

$\displaystyle g s_i = s_{\sigma_g(i)} h_{g,i}.$

By noting that ${H^n}$ is an index ${n!}$ normal subgroup of ${S_n \ltimes H^n}$, we recover the classical result of Poincaré that any group ${G}$ that contains ${H}$ as an index ${n}$ subgroup, contains a normal subgroup ${N}$ of index dividing ${n!}$ that is contained in ${H}$. (Quotienting out the ${H}$ right-action, we recover also the classical proof of this result, as the action of ${G}$ on itself then collapses to the action of ${G}$ on the quotient space ${G/H}$, the stabiliser of which is ${N}$.)
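The explicit construction above, together with the wreath product law, can be checked mechanically on a small example. The Python sketch below (illustrative only; the names `embed` and `wreath_mul` are my own) takes ${G = S_3}$ and ${H}$ generated by a transposition (so ${n = 3}$), computes the data ${(\sigma_g, (h_{g,i})_{i=1}^n)}$ from ${g s_i = s_{\sigma_g(i)} h_{g,i}}$, and verifies that the map is an injective homomorphism into ${S_n \ltimes H^n}$:

```python
from itertools import permutations

def comp(a, b):
    # composition of permutations written as tuples: (a o b)[i] = a[b[i]]
    return tuple(a[b[i]] for i in range(len(a)))

S3 = list(permutations(range(3)))
e = (0, 1, 2)
H = [e, (1, 0, 2)]                     # the subgroup generated by the transposition (0 1)

# left-coset representatives s_1, ..., s_n of H in G = S_3 (so n = 3)
reps, seen = [], set()
for g in S3:
    coset = frozenset(comp(g, h) for h in H)
    if coset not in seen:
        seen.add(coset)
        reps.append(g)
n = len(reps)

def embed(g):
    # solve g s_i = s_{sigma(i)} h_i for the permutation sigma and the tuple (h_i)
    sigma, hs = [None] * n, [None] * n
    for i, s in enumerate(reps):
        gs = comp(g, s)
        for j, t in enumerate(reps):
            for h in H:
                if gs == comp(t, h):
                    sigma[i], hs[i] = j, h
    return tuple(sigma), tuple(hs)

def wreath_mul(x, y):
    # (sigma, (h_i)) (rho, (k_i)) = (sigma o rho, (h_{rho(i)} k_i))
    (sigma, hs), (rho, ks) = x, y
    return (tuple(sigma[rho[i]] for i in range(n)),
            tuple(comp(hs[rho[i]], ks[i]) for i in range(n)))

assert n == 3
assert len({embed(g) for g in S3}) == len(S3)        # the embedding is injective
for g1 in S3:
    for g2 in S3:
        assert embed(comp(g1, g2)) == wreath_mul(embed(g1), embed(g2))   # a homomorphism
```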

Exercise 1 Show that a simply transitive subgroup ${\tilde G}$ of ${S_n \ltimes H^n}$ contains a copy of ${H}$ as an index ${n}$ subgroup; in particular, there is a canonical embedding of ${H}$ into ${\tilde G}$, and ${\tilde G}$ can be viewed as an ${H}$-set.

Exercise 2 Show that any two simply transitive subgroups ${\tilde G_1, \tilde G_2}$ of ${S_n \ltimes H^n}$ are isomorphic simultaneously as groups and as ${H}$-sets (that is, there is a bijection ${\phi: \tilde G_1 \rightarrow \tilde G_2}$ that is simultaneously a group isomorphism and an ${H}$-set isomorphism) if and only if they are conjugate by an element of ${S_n \ltimes H^n}$.

The (presumably) final article arising from the Polymath8 project has now been uploaded to the arXiv as “The “bounded gaps between primes” Polymath project – a retrospective“.  This article, submitted to the Newsletter of the European Mathematical Society, consists of personal contributions from ten different participants (at varying stages of career, and intensities of participation) on their own experiences with the project, and some thoughts as to what lessons to draw for any subsequent Polymath projects.  (At present, I do not know of any such projects being proposed, but from recent experience I would imagine that some opportunity suitable for a Polymath approach will present itself at some point in the near future.)

This post will also serve as the latest (and probably last) of the Polymath8 threads (rolling over this previous post), to wrap up any remaining discussion about any aspect of this project.

Analytic number theory is often concerned with the asymptotic behaviour of various arithmetic functions: functions ${f: {\bf N} \rightarrow {\bf R}}$ or ${f: {\bf N} \rightarrow {\bf C}}$ from the natural numbers ${{\bf N} = \{1,2,\dots\}}$ to the real numbers ${{\bf R}}$ or complex numbers ${{\bf C}}$. In this post, we will focus on the purely algebraic properties of these functions, and for reasons that will become clear later, it will be convenient to generalise the notion of an arithmetic function to functions ${f: {\bf N} \rightarrow R}$ taking values in some abstract commutative ring ${R}$. In this setting, we can add or multiply two arithmetic functions ${f,g: {\bf N} \rightarrow R}$ to obtain further arithmetic functions ${f+g, fg: {\bf N} \rightarrow R}$, and we can also form the Dirichlet convolution ${f*g: {\bf N} \rightarrow R}$ by the usual formula

$\displaystyle f*g(n) := \sum_{d|n} f(d) g(\frac{n}{d}).$

Regardless of which commutative ring ${R}$ is used here, we observe that Dirichlet convolution is commutative, associative, and bilinear over ${R}$.
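For readers who like to experiment, here is a short illustrative Python implementation of Dirichlet convolution, sieving over multiples of each divisor (the helper name `dirichlet` is my own, not from the text):

```python
def dirichlet(f, g, N):
    """Dirichlet convolution: (f*g)(n) = sum over divisors d of n of f(d) g(n/d), n = 1..N."""
    h = [0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):   # m runs over the multiples of d
            h[m] += f(d) * g(m // d)
    return h[1:]                       # index k holds the value at n = k + 1

N = 100
one = lambda n: 1
ident = lambda n: n

tau = dirichlet(one, one, N)           # one * one = number-of-divisors function
sigma = dirichlet(one, ident, N)       # one * ident = sum-of-divisors function
assert tau[11] == 6 and sigma[11] == 28        # n = 12 has divisors 1, 2, 3, 4, 6, 12
assert dirichlet(ident, one, N) == sigma       # commutativity
```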

An important class of arithmetic functions in analytic number theory are the multiplicative functions, that is to say the arithmetic functions ${f: {\bf N} \rightarrow R}$ such that ${f(1)=1}$ and

$\displaystyle f(nm) = f(n) f(m)$

for all coprime ${n,m \in {\bf N}}$. A subclass of these functions are the completely multiplicative functions, in which the restriction that ${n,m}$ be coprime is dropped. Basic examples of completely multiplicative functions (in the classical setting ${R={\bf C}}$) include

• the Kronecker delta ${\delta}$, defined by setting ${\delta(n)=1}$ for ${n=1}$ and ${\delta(n)=0}$ otherwise;
• the constant function ${1: n \mapsto 1}$ and the linear function ${n \mapsto n}$ (which by abuse of notation we denote by ${n}$);
• more generally monomials ${n \mapsto n^s}$ for any fixed complex number ${s}$ (in particular, the “Archimedean characters” ${n \mapsto n^{it}}$ for any fixed ${t \in {\bf R}}$), which by abuse of notation we denote by ${n^s}$;
• Dirichlet characters ${\chi}$;
• the Liouville function ${\lambda}$;
• the indicator function of the ${z}$-smooth numbers (numbers whose prime factors are all at most ${z}$), for some given ${z}$; and
• the indicator function of the ${z}$-rough numbers (numbers whose prime factors are all greater than ${z}$), for some given ${z}$.

Examples of multiplicative functions that are not completely multiplicative include

• the Möbius function ${\mu}$;
• the divisor function ${\tau}$ (also referred to as ${d}$);
• more generally, the higher order divisor functions ${\tau_k(n) = \sum_{d_1,\dots,d_k: d_1 \dots d_k = n} 1}$ for ${k \geq 1}$;
• the Euler totient function ${\phi}$;
• the number of roots ${n \mapsto \# \{ a \in {\bf Z}/n{\bf Z}: P(a) = 0\}}$ of a given polynomial ${P}$ defined over ${{\bf Z}}$;
• more generally, the point counting function ${n \mapsto \# V[{\bf Z}/n{\bf Z}]}$ of a given algebraic variety ${V}$ defined over ${{\bf Z}}$ (closely tied to the Hasse-Weil zeta function of ${V}$);
• the function ${r: n \mapsto r(n)}$ that counts the number of representations of ${n}$ as the sum of two squares;
• more generally, the function that maps a natural number ${n}$ to the number of ideals in a given number field ${K}$ of absolute norm ${n}$ (closely tied to the Dedekind zeta function of ${K}$).

These multiplicative functions interact well with the multiplication and convolution operations: if ${f,g: {\bf N} \rightarrow R}$ are multiplicative, then so are ${fg}$ and ${f * g}$, and if ${\psi}$ is completely multiplicative, then we also have

$\displaystyle \psi (f*g) = (\psi f) * (\psi g). \ \ \ \ \ (1)$

Finally, the product of completely multiplicative functions is again completely multiplicative. On the other hand, the sum of two multiplicative functions will never be multiplicative (just look at what happens at ${n=1}$), and the convolution of two completely multiplicative functions will usually just be multiplicative rather than completely multiplicative.

The specific multiplicative functions listed above are also related to each other by various important identities, for instance

$\displaystyle \delta * f = f; \quad \mu * 1 = \delta; \quad 1 * 1 = \tau; \quad \phi * 1 = n$

where ${f}$ is an arbitrary arithmetic function.
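All of these identities are finite computations for any fixed ${n}$, so they can be verified numerically. Here is a brute-force sketch (all helper functions are naive implementations written for this check, not optimised library routines) confirming ${\mu * 1 = \delta}$, ${1*1 = \tau}$, ${\phi*1 = n}$, and the distributivity identity (1) for the completely multiplicative ${\psi(n) = n^2}$:

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def conv(f, g, n):
    """Dirichlet convolution (f*g)(n), computed from the definition."""
    return sum(f(d) * g(n // d) for d in divisors(n))

def mobius(n):
    """Moebius function by trial division."""
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # n has a square factor
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def phi(n):
    """Euler totient, by direct count."""
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

delta = lambda n: 1 if n == 1 else 0
one = lambda n: 1
tau = lambda n: len(divisors(n))

for n in range(1, 80):
    assert conv(mobius, one, n) == delta(n)   # mu * 1 = delta
    assert conv(one, one, n) == tau(n)        # 1 * 1 = tau
    assert conv(phi, one, n) == n             # phi * 1 = n

# identity (1): psi (f*g) = (psi f) * (psi g), with psi completely multiplicative
psi = lambda n: n * n
for n in range(1, 80):
    lhs = psi(n) * conv(mobius, one, n)
    rhs = conv(lambda d: psi(d) * mobius(d), lambda d: psi(d), n)
    assert lhs == rhs

print("classical identities verified up to 80")
```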

On the other hand, analytic number theory is also very interested in certain arithmetic functions that are not exactly multiplicative (and certainly not completely multiplicative). One particularly important such function is the von Mangoldt function ${\Lambda}$. This function is certainly not multiplicative, but is clearly closely related to such functions via such identities as ${\Lambda = \mu * L}$ and ${L = \Lambda * 1}$, where ${L: n\mapsto \log n}$ is the natural logarithm function. The purpose of this post is to point out that functions such as the von Mangoldt function lie in a class closely related to multiplicative functions, which I will call the derived multiplicative functions. More precisely:

Definition 1 A derived multiplicative function ${f: {\bf N} \rightarrow R}$ is an arithmetic function that can be expressed as the formal derivative

$\displaystyle f(n) = \frac{d}{d\epsilon} F_\epsilon(n) |_{\epsilon=0}$

at the origin of a family ${F_\epsilon: {\bf N} \rightarrow R}$ of multiplicative functions parameterised by a formal parameter ${\epsilon}$. Equivalently, ${f: {\bf N} \rightarrow R}$ is a derived multiplicative function if it is the ${\epsilon}$ coefficient of a multiplicative function in the extension ${R[\epsilon]/(\epsilon^2)}$ of ${R}$ by a nilpotent infinitesimal ${\epsilon}$; in other words, there exists an arithmetic function ${F: {\bf N} \rightarrow R}$ such that the arithmetic function ${F + \epsilon f: {\bf N} \rightarrow R[\epsilon]/(\epsilon^2)}$ is multiplicative, or equivalently that ${F}$ is multiplicative and one has the Leibniz rule

$\displaystyle f(nm) = f(n) F(m) + F(n) f(m) \ \ \ \ \ (2)$

for all coprime ${n,m \in {\bf N}}$.

More generally, for any ${k\geq 0}$, a ${k}$-derived multiplicative function ${f: {\bf N} \rightarrow R}$ is an arithmetic function that can be expressed as the formal derivative

$\displaystyle f(n) = \frac{d^k}{d\epsilon_1 \dots d\epsilon_k} F_{\epsilon_1,\dots,\epsilon_k}(n) |_{\epsilon_1,\dots,\epsilon_k=0}$

at the origin of a family ${F_{\epsilon_1,\dots,\epsilon_k}: {\bf N} \rightarrow R}$ of multiplicative functions parameterised by formal parameters ${\epsilon_1,\dots,\epsilon_k}$. Equivalently, ${f}$ is the ${\epsilon_1 \dots \epsilon_k}$ coefficient of a multiplicative function in the extension ${R[\epsilon_1,\dots,\epsilon_k]/(\epsilon_1^2,\dots,\epsilon_k^2)}$ of ${R}$ by ${k}$ nilpotent infinitesimals ${\epsilon_1,\dots,\epsilon_k}$.

We define the notion of a ${k}$-derived completely multiplicative function similarly by replacing “multiplicative” with “completely multiplicative” in the above discussion.

There are Leibniz rules similar to (2) but they are harder to state; for instance, a doubly derived multiplicative function ${f: {\bf N} \rightarrow R}$ comes with singly derived multiplicative functions ${F_1, F_2: {\bf N} \rightarrow R}$ and a multiplicative function ${G: {\bf N} \rightarrow R}$ such that

$\displaystyle f(nm) = f(n) G(m) + F_1(n) F_2(m) + F_2(n) F_1(m) + G(n) f(m)$

for all coprime ${n,m \in {\bf N}}$.

One can then check that the von Mangoldt function ${\Lambda}$ is a derived multiplicative function, because ${\delta + \epsilon \Lambda}$ is multiplicative in the ring ${{\bf C}[\epsilon]/(\epsilon^2)}$ with one infinitesimal ${\epsilon}$. Similarly, the logarithm function ${L}$ is derived completely multiplicative because ${\exp( \epsilon L ) := 1 + \epsilon L}$ is completely multiplicative in ${{\bf C}[\epsilon]/(\epsilon^2)}$. More generally, any additive function ${\omega: {\bf N} \rightarrow R}$ is derived multiplicative because it is the top order coefficient of ${\exp(\epsilon \omega) := 1 + \epsilon \omega}$.
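One can check the multiplicativity of ${\delta + \epsilon \Lambda}$ directly by working with dual numbers ${a + b\epsilon}$ represented as pairs. The sketch below (with ad hoc names, and using the observation that multiplicativity of ${\delta + \epsilon \Lambda}$ on coprime pairs is exactly the Leibniz rule (2) with ${F = \delta}$) verifies this on a small range:

```python
from math import gcd, log

def von_mangoldt(n):
    """Lambda(n) = log p if n is a power of the prime p, and 0 otherwise."""
    if n < 2:
        return 0.0
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            return log(p) if m == 1 else 0.0
        p += 1
    return log(n)  # n itself is prime

def dual_mul(x, y):
    """Multiply a + b*eps and c + d*eps in R[eps]/(eps^2): eps^2 = 0."""
    a, b = x
    c, d = y
    return (a * c, a * d + b * c)

def F(n):
    """The function delta + eps*Lambda, as a dual number."""
    return ((1 if n == 1 else 0), von_mangoldt(n))

# multiplicativity on coprime pairs: F(nm) = F(n) F(m)
for n in range(1, 40):
    for m in range(1, 40):
        if gcd(n, m) == 1:
            prod = dual_mul(F(n), F(m))
            target = F(n * m)
            assert abs(prod[0] - target[0]) < 1e-9
            assert abs(prod[1] - target[1]) < 1e-9

print("delta + eps*Lambda is multiplicative on the tested range")
```

Note that the check degenerates correctly: when ${n, m > 1}$ are coprime, ${nm}$ is not a prime power, so both sides vanish in the ${\epsilon}$ coefficient.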

Remark 1 One can also phrase these concepts in terms of the formal Dirichlet series ${F(s) = \sum_n \frac{f(n)}{n^s}}$ associated to an arithmetic function ${f}$. A function ${f}$ is multiplicative if ${F}$ admits a (formal) Euler product; ${f}$ is derived multiplicative if ${F}$ is the (formal) first derivative of an Euler product with respect to some parameter (not necessarily ${s}$, although this is certainly an option); and so forth.

Using the definition of a ${k}$-derived multiplicative function as the top order coefficient of a multiplicative function of a ring with ${k}$ infinitesimals, it is easy to see that the product or convolution of a ${k}$-derived multiplicative function ${f: {\bf N} \rightarrow R}$ and an ${l}$-derived multiplicative function ${g: {\bf N} \rightarrow R}$ is necessarily a ${k+l}$-derived multiplicative function (again taking values in ${R}$). Thus, for instance, the higher-order von Mangoldt functions ${\Lambda_k := \mu * L^k}$ are ${k}$-derived multiplicative functions, because ${L^k}$ is a ${k}$-derived completely multiplicative function. More explicitly, ${L^k}$ is the top order coefficient of the completely multiplicative function ${\prod_{i=1}^k \exp(\epsilon_i L)}$, and ${\Lambda_k}$ is the top order coefficient of the multiplicative function ${\mu * \prod_{i=1}^k \exp(\epsilon_i L)}$, with both functions taking values in the ring ${{\bf C}[\epsilon_1,\dots,\epsilon_k]/(\epsilon_1^2,\dots,\epsilon_k^2)}$ of complex numbers with ${k}$ infinitesimals ${\epsilon_1,\dots,\epsilon_k}$ attached.

It then turns out that most (if not all) of the basic identities used by analytic number theorists concerning derived multiplicative functions, can in fact be viewed as coefficients of identities involving purely multiplicative functions, with the latter identities being provable primarily from multiplicative identities, such as (1). This phenomenon is analogous to the one in linear algebra discussed in this previous blog post, in which many of the trace identities used there are derivatives of determinant identities. For instance, the Leibniz rule

$\displaystyle L (f * g) = (Lf)*g + f*(Lg)$

for any arithmetic functions ${f,g}$ can be viewed as the top order term in

$\displaystyle \exp(\epsilon L) (f*g) = (\exp(\epsilon L) f) * (\exp(\epsilon L) g)$

in the ring with one infinitesimal ${\epsilon}$, and then we see that the Leibniz rule is a special case (or a derivative) of (1), since ${\exp(\epsilon L)}$ is completely multiplicative. Similarly, the formulae

$\displaystyle \Lambda = \mu * L; \quad L = \Lambda * 1$

are top order terms of

$\displaystyle (\delta + \epsilon \Lambda) = \mu * \exp(\epsilon L); \quad \exp(\epsilon L) = (\delta + \epsilon \Lambda) * 1,$

and the variant formula ${\Lambda = - (L\mu) * 1}$ is the top order term of

$\displaystyle (\delta + \epsilon \Lambda) = (\exp(-\epsilon L)\mu) * 1,$

which can then be deduced from the previous identities by noting that the completely multiplicative function ${\exp(-\epsilon L)}$ inverts ${\exp(\epsilon L)}$ multiplicatively, and also noting that ${L}$ annihilates ${\mu*1=\delta}$. The Selberg symmetry formula

$\displaystyle \Lambda_2 = \Lambda*\Lambda + \Lambda L, \ \ \ \ \ (3)$

which plays a key role in the Erdös-Selberg elementary proof of the prime number theorem (as discussed in this previous blog post), is the top order term of the identity

$\displaystyle \delta + \epsilon_1 \Lambda + \epsilon_2 \Lambda + \epsilon_1\epsilon_2 \Lambda_2 = (\exp(\epsilon_2 L) (\delta + \epsilon_1 \Lambda)) * (\delta + \epsilon_2 \Lambda)$

involving the multiplicative functions ${\delta + \epsilon_1 \Lambda + \epsilon_2 \Lambda + \epsilon_1\epsilon_2 \Lambda_2}$, ${\exp(\epsilon_2 L)}$, ${\delta+\epsilon_1 \Lambda}$, ${\delta+\epsilon_2 \Lambda}$ with two infinitesimals ${\epsilon_1,\epsilon_2}$, and this identity can be proven while staying purely within the realm of multiplicative functions, by using the identities

$\displaystyle \delta + \epsilon_1 \Lambda + \epsilon_2 \Lambda + \epsilon_1\epsilon_2 \Lambda_2 = \mu * (\exp(\epsilon_1 L) \exp(\epsilon_2 L))$

$\displaystyle \exp(\epsilon_1 L) = 1 * (\delta + \epsilon_1 \Lambda)$

$\displaystyle \delta + \epsilon_2 \Lambda = \mu * \exp(\epsilon_2 L)$

and (1). Similarly for higher identities such as

$\displaystyle \Lambda_3 = \Lambda L^2 + 3 \Lambda L * \Lambda + \Lambda * \Lambda * \Lambda$

which arise from expanding out ${\mu * (\exp(\epsilon_1 L) \exp(\epsilon_2 L) \exp(\epsilon_3 L))}$ using (1) and the above identities; we leave this as an exercise to the interested reader.
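The Selberg symmetry formula (3) is also easy to test numerically, using ${\Lambda_2 = \mu * L^2}$ as the definition of the left-hand side. This is a brute-force sketch with naive helper implementations (not the post's notation, just a consistency check):

```python
from math import log

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def conv(f, g, n):
    """Dirichlet convolution (f*g)(n)."""
    return sum(f(d) * g(n // d) for d in divisors(n))

def mobius(n):
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def von_mangoldt(n):
    if n < 2:
        return 0.0
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            return log(p) if m == 1 else 0.0
        p += 1
    return log(n)

# Lambda_2 = mu * L^2
Lambda2 = lambda n: conv(mobius, lambda d: log(d) ** 2, n)

# Selberg symmetry formula (3): Lambda_2 = Lambda*Lambda + Lambda L
for n in range(1, 300):
    lhs = Lambda2(n)
    rhs = conv(von_mangoldt, von_mangoldt, n) + von_mangoldt(n) * log(n)
    assert abs(lhs - rhs) < 1e-9

print("Selberg symmetry formula checked up to 300")
```

For example, at ${n=6}$ both sides equal ${2 \log 2 \log 3}$, coming entirely from the ${\Lambda * \Lambda}$ term.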

An analogous phenomenon arises for identities that are not purely multiplicative in nature due to the presence of truncations, such as the Vaughan identity

$\displaystyle \Lambda_{> V} = \mu_{\leq U} * L - \mu_{\leq U} * \Lambda_{\leq V} * 1 + \mu_{>U} * \Lambda_{>V} * 1 \ \ \ \ \ (4)$

for any ${U,V \geq 1}$, where ${f_{>V} = f 1_{>V}}$ is the restriction of an arithmetic function ${f}$ to the natural numbers greater than ${V}$, and similarly for ${f_{\leq V}}$, ${f_{>U}}$, ${f_{\leq U}}$. In this particular case, (4) is the top order coefficient of the identity

$\displaystyle (\delta + \epsilon \Lambda)_{>V} = \mu_{\leq U} * \exp(\epsilon L) - \mu_{\leq U} * (\delta + \epsilon \Lambda)_{\leq V} * 1$

$\displaystyle + \mu_{>U} * (\delta+\epsilon \Lambda)_{>V} * 1$

which can be easily derived from the identities ${\delta = \mu_{\leq U} * 1 + \mu_{>U} * 1}$ and ${\exp(\epsilon L) = (\delta + \epsilon \Lambda)_{>V} * 1 + (\delta + \epsilon \Lambda)_{\leq V} * 1}$. Similarly for the Heath-Brown identity

$\displaystyle \Lambda = \sum_{j=1}^K (-1)^{j-1} \binom{K}{j} \mu_{\leq U}^{*j} * 1^{*j-1} * L \ \ \ \ \ (5)$

valid for natural numbers up to ${U^K}$, where ${U \geq 1}$ and ${K \geq 1}$ are arbitrary parameters and ${f^{*j}}$ denotes the ${j}$-fold convolution of ${f}$, and discussed in this previous blog post; this is the top order coefficient of

$\displaystyle \delta + \epsilon \Lambda = \sum_{j=1}^K (-1)^{j-1} \binom{K}{j} \mu_{\leq U}^{*j} * 1^{*j-1} * \exp( \epsilon L )$

and arises by first observing that

$\displaystyle (\mu - \mu_{\leq U})^{*K} * 1^{*K-1} * \exp(\epsilon L) = \mu_{>U}^{*K} * 1^{*K-1} * \exp( \epsilon L )$

vanishes up to ${U^K}$, and then expanding the left-hand side using the binomial formula and the identity ${\mu^{*K} * 1^{*K-1} * \exp(\epsilon L) = \delta + \epsilon \Lambda}$.
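Since the Vaughan identity (4) holds exactly for every ${n}$ (not just asymptotically), it too can be checked by direct computation. The sketch below uses naive helpers and small truncation parameters ${U = V = 5}$, chosen arbitrarily for the test:

```python
from math import log

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def conv(f, g):
    """Curried Dirichlet convolution, so triple convolutions compose."""
    return lambda n: sum(f(d) * g(n // d) for d in divisors(n))

def mobius(n):
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def von_mangoldt(n):
    if n < 2:
        return 0.0
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            return log(p) if m == 1 else 0.0
        p += 1
    return log(n)

U, V = 5, 5
one = lambda n: 1
Llog = lambda n: log(n)
mu_le = lambda n: mobius(n) if n <= U else 0
mu_gt = lambda n: mobius(n) if n > U else 0
Lam_le = lambda n: von_mangoldt(n) if n <= V else 0.0
Lam_gt = lambda n: von_mangoldt(n) if n > V else 0.0

# identity (4): Lambda_{>V} = mu_{<=U}*L - mu_{<=U}*Lambda_{<=V}*1 + mu_{>U}*Lambda_{>V}*1
rhs = lambda n: (conv(mu_le, Llog)(n)
                 - conv(conv(mu_le, Lam_le), one)(n)
                 + conv(conv(mu_gt, Lam_gt), one)(n))

for n in range(1, 200):
    assert abs(Lam_gt(n) - rhs(n)) < 1e-9

print("Vaughan identity verified up to 200 for U = V = 5")
```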

One consequence of this phenomenon is that identities involving derived multiplicative functions tend to have a dimensional consistency property: all terms in the identity have the same order of derivation in them. For instance, all the terms in the Selberg symmetry formula (3) are doubly derived functions, all the terms in the Vaughan identity (4) or the Heath-Brown identity (5) are singly derived functions, and so forth. One can then use dimensional analysis to help ensure that one has written down a key identity involving such functions correctly, much as is done in physics.

In addition to the dimensional analysis arising from the order of derivation, there is another dimensional analysis coming from the value of multiplicative functions at primes ${p}$ (which is more or less equivalent to the order of pole of the Dirichlet series at ${s=1}$). Let us say that a multiplicative function ${f: {\bf N} \rightarrow R}$ has a pole of order ${j}$ if one has ${f(p)=j}$ on the average for primes ${p}$, where we will be a bit vague as to what “on the average” means as it usually does not matter in applications. Thus for instance, ${1}$ or ${\exp(\epsilon L)}$ has a pole of order ${1}$ (a simple pole), ${\delta}$ or ${\delta + \epsilon \Lambda}$ has a pole of order ${0}$ (i.e. neither a zero nor a pole), Dirichlet characters also have a pole of order ${0}$ (although this is slightly nontrivial, requiring Dirichlet’s theorem), ${\mu}$ has a pole of order ${-1}$ (a simple zero), ${\tau}$ has a pole of order ${2}$, and so forth. Note that the convolution of a multiplicative function with a pole of order ${j}$ with a multiplicative function with a pole of order ${j'}$ will be a multiplicative function with a pole of order ${j+j'}$. If there is no oscillation in the primes ${p}$ (e.g. if ${f(p)=j}$ for all primes ${p}$, rather than on the average), it is also true that the product of a multiplicative function with a pole of order ${j}$ with a multiplicative function with a pole of order ${j'}$ will be a multiplicative function with a pole of order ${jj'}$. The situation is significantly different though in the presence of oscillation; for instance, if ${\chi}$ is a quadratic character then ${\chi^2}$ has a pole of order ${1}$ even though ${\chi}$ has a pole of order ${0}$.

A ${k}$-derived multiplicative function will then be said to have an underived pole of order ${j}$ if it is the top order coefficient of a multiplicative function with a pole of order ${j}$; in terms of Dirichlet series, this roughly means that the Dirichlet series has a pole of order ${j+k}$ at ${s=1}$. For instance, the singly derived multiplicative function ${\Lambda}$ has an underived pole of order ${0}$, because it is the top order coefficient of ${\delta + \epsilon \Lambda}$, which has a pole of order ${0}$; similarly ${L}$ has an underived pole of order ${1}$, being the top order coefficient of ${\exp(\epsilon L)}$. More generally, ${\Lambda_k}$ and ${L^k}$ have underived poles of order ${0}$ and ${1}$ respectively for any ${k}$.

By taking top order coefficients, we then see that the convolution of a ${k}$-derived multiplicative function with underived pole of order ${j}$ and a ${k'}$-derived multiplicative function with underived pole of order ${j'}$ is a ${k+k'}$-derived multiplicative function with underived pole of order ${j+j'}$. If there is no oscillation in the primes, the product of these functions will similarly have an underived pole of order ${jj'}$, for instance ${\Lambda L}$ has an underived pole of order ${0}$. We then have the dimensional consistency property that in any of the standard identities involving derived multiplicative functions, all terms not only have the same derived order, but also the same underived pole order. For instance, in (3), (4), (5) all terms have underived pole order ${0}$ (with any Möbius function terms being counterbalanced by a matching term of ${1}$ or ${L}$). This gives a second way to use dimensional analysis as a consistency check. For instance, any identity that involves a linear combination of ${\mu_{\leq U} * L}$ and ${\Lambda_{>V} * 1}$ is suspect because the underived pole orders do not match (being ${0}$ and ${1}$ respectively), even though the derived orders match (both are ${1}$).

One caveat, though: this latter dimensional consistency breaks down for identities that involve infinitely many terms, such as Linnik’s identity

$\displaystyle \Lambda = \sum_{i=0}^\infty (-1)^{i} L * 1_{>1}^{*i}.$

In this case, one can still rewrite things in terms of multiplicative functions as

$\displaystyle \delta + \epsilon \Lambda = \sum_{i=0}^\infty (-1)^i \exp(\epsilon L) * 1_{>1}^{*i},$

so the former dimensional consistency is still maintained.
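Although Linnik's identity is formally an infinite sum, for each fixed ${n}$ only finitely many terms survive, since ${1_{>1}^{*i}(d)}$ vanishes whenever ${d < 2^i}$. This makes the identity directly checkable; the following sketch (naive helpers again) does so:

```python
from math import log

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def conv(f, g):
    return lambda n: sum(f(d) * g(n // d) for d in divisors(n))

def von_mangoldt(n):
    if n < 2:
        return 0.0
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            return log(p) if m == 1 else 0.0
        p += 1
    return log(n)

one_gt1 = lambda n: 1 if n > 1 else 0
delta = lambda n: 1 if n == 1 else 0

N = 100
for n in range(1, N + 1):
    total = 0.0
    term = delta          # 1_{>1}^{*0} = delta
    i = 0
    while 2 ** i <= n:    # 1_{>1}^{*i}(d) = 0 for d < 2^i, so later terms vanish
        total += (-1) ** i * conv(lambda d: log(d), term)(n)
        term = conv(one_gt1, term)   # bump to 1_{>1}^{*(i+1)}
        i += 1
    assert abs(total - von_mangoldt(n)) < 1e-9

print("Linnik's identity verified up to", N)
```

For example, at ${n=4}$ the ${i=0}$ term contributes ${\log 4}$ and the ${i=1}$ term subtracts ${\log 2}$, leaving ${\Lambda(4) = \log 2}$; all higher terms vanish.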

I thank Andrew Granville, Kannan Soundararajan, and Emmanuel Kowalski for helpful conversations on these topics.

Tamar Ziegler and I have just uploaded to the arXiv our paper “Narrow progressions in the primes“, submitted to the special issue “Analytic Number Theory” in honor of the 60th birthday of Helmut Maier. The results here are vaguely reminiscent of the recent progress on bounded gaps in the primes, but use different methods.

About a decade ago, Ben Green and I showed that the primes contained arbitrarily long arithmetic progressions: given any ${k}$, one could find a progression ${n, n+r, \dots, n+(k-1)r}$ with ${r>0}$ consisting entirely of primes. In fact we showed the same statement was true if the primes were replaced by any subset of the primes of positive relative density.

A little while later, Tamar Ziegler and I obtained the following generalisation: given any ${k}$ and any polynomials ${P_1,\dots,P_k: {\bf Z} \rightarrow {\bf Z}}$ with ${P_1(0)=\dots=P_k(0)=0}$, one could find a “polynomial progression” ${n+P_1(r),\dots,n+P_k(r)}$ with ${r>0}$ consisting entirely of primes. Furthermore, we could make this progression somewhat “narrow” by taking ${r = n^{o(1)}}$ (where ${o(1)}$ denotes a quantity that goes to zero as ${n}$ goes to infinity). Again, the same statement also applies if the primes were replaced by a subset of positive relative density. My previous result with Ben corresponds to the linear case ${P_i(r) = (i-1)r}$.

In this paper we were able to make the progressions a bit narrower still: given any ${k}$ and any polynomials ${P_1,\dots,P_k: {\bf Z} \rightarrow {\bf Z}}$ with ${P_1(0)=\dots=P_k(0)=0}$, one could find a “polynomial progression” ${n+P_1(r),\dots,n+P_k(r)}$ with ${r>0}$ consisting entirely of primes, and such that ${r \leq \log^L n}$, where ${L}$ depends only on ${k}$ and ${P_1,\dots,P_k}$ (in fact it depends only on ${k}$ and the degrees of ${P_1,\dots,P_k}$). The result is still true if the primes are replaced by a subset of positive density ${\delta}$, but unfortunately in our arguments we must then let ${L}$ depend on ${\delta}$. However, in the linear case ${P_i(r) = (i-1)r}$, we were able to make ${L}$ independent of ${\delta}$ (although it is still somewhat large, of the order of ${k 2^k}$).

The polylogarithmic factor is somewhat necessary: using an upper bound sieve, one can easily construct a subset of the primes of density, say, ${90\%}$, whose arithmetic progressions ${n,n+r,\dots,n+(k-1)r}$ of length ${k}$ all obey the lower bound ${r \gg \log^{k-1} n}$. On the other hand, the prime tuples conjecture predicts that if one works with the actual primes rather than dense subsets of the primes, then one should have infinitely many length ${k}$ arithmetic progressions of bounded width for any fixed ${k}$. The ${k=2}$ case of this is precisely the celebrated theorem of Yitang Zhang that was the focus of the recently concluded Polymath8 project here. The higher ${k}$ case is conjecturally true, but appears to be out of reach of known methods. (Using the multidimensional Selberg sieve of Maynard, one can get ${m}$ primes inside an interval of length ${O( \exp(O(m)) )}$, but this is such a sparse set of primes that one would not expect to find even a progression of length three within such an interval.)

The argument in the previous paper was unable to obtain a polylogarithmic bound on the width of the progressions, due to the reliance on a certain technical “correlation condition” on a certain Selberg sieve weight ${\nu}$. This correlation condition required one to control arbitrarily long correlations of ${\nu}$, which was not compatible with a bounded value of ${L}$ (particularly if one wanted to keep ${L}$ independent of ${\delta}$).

However, thanks to recent advances in this area by Conlon, Fox, and Zhao (who introduced a very nice “densification” technique), it is now possible (in principle, at least) to delete this correlation condition from the arguments. Conlon-Fox-Zhao did this for my original theorem with Ben; and in the current paper we apply the densification method to our previous argument to similarly remove the correlation condition. This method does not fully eliminate the need to control arbitrarily long correlations, but allows most of the factors in such a long correlation to be bounded, rather than merely controlled by an unbounded weight such as ${\nu}$. This turns out to be significantly easier to control, although in the non-linear case we still unfortunately had to make ${L}$ large compared to ${\delta}$ due to a certain “clearing denominators” step arising from the complicated nature of the Gowers-type uniformity norms that we were using to control polynomial averages. We believe though that this is an artefact of our method, and one should be able to prove our theorem with an ${L}$ that is uniform in ${\delta}$.

Here is a simple instance of the densification trick in action. Suppose that one wishes to establish an estimate of the form

$\displaystyle {\bf E}_n {\bf E}_r f(n) g(n+r) h(n+r^2) = o(1) \ \ \ \ \ (1)$

for some real-valued functions ${f,g,h}$ which are bounded in magnitude by a weight function ${\nu}$, but which are not expected to be bounded; this average will naturally arise when trying to locate the pattern ${(n,n+r,n+r^2)}$ in a set such as the primes. Here I will be vague as to exactly what range the parameters ${n,r}$ are being averaged over. Suppose that the factor ${g}$ (say) has enough uniformity that one can already show a smallness bound

$\displaystyle {\bf E}_n {\bf E}_r F(n) g(n+r) H(n+r^2) = o(1) \ \ \ \ \ (2)$

whenever ${F, H}$ are bounded functions. (One should think of ${F,H}$ as being like the indicator functions of “dense” sets, in contrast to ${f,h}$ which are like the normalised indicator functions of “sparse” sets). The bound (2) cannot be directly applied to control (1) because of the unbounded (or “sparse”) nature of ${f}$ and ${h}$. However one can “densify” ${f}$ and ${h}$ as follows. Since ${f}$ is bounded in magnitude by ${\nu}$, we can bound the left-hand side of (1) as

$\displaystyle {\bf E}_n \nu(n) | {\bf E}_r g(n+r) h(n+r^2) |.$

The weight function ${\nu}$ will be normalised so that ${{\bf E}_n \nu(n) = O(1)}$, so by the Cauchy-Schwarz inequality it suffices to show that

$\displaystyle {\bf E}_n \nu(n) | {\bf E}_r g(n+r) h(n+r^2) |^2 = o(1).$

The left-hand side expands as

$\displaystyle {\bf E}_n {\bf E}_r {\bf E}_s \nu(n) g(n+r) h(n+r^2) g(n+s) h(n+s^2).$

Now, it turns out that after an enormous (but finite) number of applications of the Cauchy-Schwarz inequality to steadily eliminate the ${g,h}$ factors, as well as a certain “polynomial forms condition” hypothesis on ${\nu}$, one can show that

$\displaystyle {\bf E}_n {\bf E}_r {\bf E}_s (\nu-1)(n) g(n+r) h(n+r^2) g(n+s) h(n+s^2) = o(1).$

(Because of the polynomial shifts, this requires a method known as “PET induction”, but let me skip over this point here.) In view of this estimate, we now just need to show that

$\displaystyle {\bf E}_n {\bf E}_r {\bf E}_s g(n+r) h(n+r^2) g(n+s) h(n+s^2) = o(1).$

Now we can reverse the previous steps. First, we collapse back to

$\displaystyle {\bf E}_n | {\bf E}_r g(n+r) h(n+r^2) |^2 = o(1).$

One can bound ${|{\bf E}_r g(n+r) h(n+r^2)|}$ by ${{\bf E}_r \nu(n+r) \nu(n+r^2)}$, which can be shown to be “bounded on average” in a suitable sense (e.g. bounded ${L^4}$ norm) via the aforementioned polynomial forms condition. Because of this and the Hölder inequality, the above estimate is equivalent to

$\displaystyle {\bf E}_n | {\bf E}_r g(n+r) h(n+r^2) | = o(1).$

By setting ${F}$ to be the signum of ${{\bf E}_r g(n+r) h(n+r^2)}$, this is equivalent to

$\displaystyle {\bf E}_n {\bf E}_r F(n) g(n+r) h(n+r^2) = o(1).$

This is halfway between (1) and (2); the sparsely supported function ${f}$ has been replaced by its “densification” ${F}$, but we have not yet densified ${h}$ to ${H}$. However, one can shift ${n}$ by ${r^2}$ and repeat the above arguments to achieve a similar densification of ${h}$, at which point one has reduced (1) to (2).

Kevin Ford, Ben Green, Sergei Konyagin, and myself have just posted to the arXiv our preprint “Large gaps between consecutive prime numbers“. This paper concerns the “opposite” problem to that considered by the recently concluded Polymath8 project, which was concerned with very small values of the prime gap ${p_{n+1}-p_n}$. Here, we wish to consider the largest prime gap ${G(X) = p_{n+1}-p_n}$ that one can find in the interval ${[X] = \{1,\dots,X\}}$ as ${X}$ goes to infinity.

Finding lower bounds on ${G(X)}$ is more or less equivalent to locating long strings of consecutive composite numbers that are not too large compared to the length of the string. A classic (and quite well known) construction here starts with the observation that for any natural number ${n}$, the consecutive numbers ${n!+2, n!+3,\dots,n!+n}$ are all composite, because each ${n!+i}$, ${i=2,\dots,n}$ is divisible by some prime ${p \leq n}$, while being strictly larger than that prime ${p}$. From this and Stirling’s formula, it is not difficult to obtain the bound

$\displaystyle G(X) \gg \frac{\log X}{\log\log X}. \ \ \ \ \ (1)$

A more efficient bound comes from the prime number theorem: there are only ${(1+o(1)) \frac{X}{\log X}}$ primes up to ${X}$, so just from the pigeonhole principle one can locate a string of consecutive composite numbers up to ${X}$ of length at least ${(1-o(1)) \log X}$, thus

$\displaystyle G(X) \gtrsim \log X \ \ \ \ \ (2)$

where we use ${X \gtrsim Y}$ or ${Y \lesssim X}$ as shorthand for ${X \geq (1-o(1)) Y}$ or ${Y \leq (1+o(1)) X}$.
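To get a feel for how the pigeonhole bound (2) compares with actual prime gaps, one can tabulate the maximal gap with a simple sieve. This is a brute-force sketch (not optimised, and the function name is ad hoc):

```python
from math import log

def max_prime_gap(X):
    """Largest gap between consecutive primes up to X, via a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (X + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(X ** 0.5) + 1):
        if sieve[p]:
            # strike out the multiples of p starting at p^2
            sieve[p * p :: p] = b"\x00" * len(range(p * p, X + 1, p))
    primes = [n for n in range(2, X + 1) if sieve[n]]
    return max(q - p for p, q in zip(primes, primes[1:]))

for X in (10 ** 4, 10 ** 5, 10 ** 6):
    G = max_prime_gap(X)
    print(X, G, round(G / log(X), 2))
```

The ratio ${G(X)/\log X}$ grows slowly with ${X}$, consistent with (2) being far from the truth and with the logarithmic corrections appearing in (3).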

What about upper bounds? The Cramér random model predicts that the primes up to ${X}$ are distributed like a random subset of ${\{1,\dots,X\}}$ of density ${1/\log X}$. Using this model, Cramér arrived at the conjecture

$\displaystyle G(X) \ll \log^2 X.$

In fact, if one makes the extremely optimistic assumption that the random model perfectly describes the behaviour of the primes, one would arrive at the even more precise prediction

$\displaystyle G(X) \sim \log^2 X.$

However, it is no longer widely believed that this optimistic version of the conjecture is true, due to some additional irregularities in the primes coming from the basic fact that large primes cannot be divisible by very small primes. Using the Maier matrix method to capture some of this irregularity, Granville was led to the conjecture that

$\displaystyle G(X) \gtrsim 2e^{-\gamma} \log^2 X$

(note that ${2e^{-\gamma} = 1.1229\dots}$ is slightly larger than ${1}$). For comparison, the known upper bounds on ${G(X)}$ are quite weak; unconditionally one has ${G(X) \ll X^{0.525}}$ by the work of Baker, Harman, and Pintz, and even on the Riemann hypothesis one only gets down to ${G(X) \ll X^{1/2} \log X}$, as shown by Cramér (a slight improvement is also possible if one additionally assumes the pair correlation conjecture; see this article of Heath-Brown and the references therein).

This conjecture remains out of reach of current methods. In 1931, Westzynthius managed to improve the bound (2) slightly to

$\displaystyle G(X) \gg \frac{\log\log\log X}{\log\log\log\log X} \log X ,$

which Erdös in 1935 improved to

$\displaystyle G(X) \gg \frac{\log\log X}{(\log\log\log X)^2} \log X$

and Rankin in 1938 improved slightly further to

$\displaystyle G(X) \gtrsim c \frac{\log\log X (\log\log\log\log X)}{(\log\log\log X)^2} \log X \ \ \ \ \ (3)$

with ${c=1/3}$. Remarkably, this rather strange bound then proved extremely difficult to improve upon; until recently, the only improvements were to the constant ${c}$, which was raised to ${c=\frac{1}{2} e^\gamma}$ in 1963 by Schönhage, to ${c= e^\gamma}$ in 1963 by Rankin, to ${c = 1.31256 e^\gamma}$ by Maier and Pomerance, and finally to ${c = 2e^\gamma}$ in 1997 by Pintz.

Erdös listed the problem of making ${c}$ arbitrarily large as one of his favourite open problems, even offering (“somewhat rashly”, in his words) a cash prize for the solution. Our main result answers this question in the affirmative:

Theorem 1 The bound (3) holds for arbitrarily large ${c>0}$.

In principle, we thus have a bound of the form

$\displaystyle G(X) \geq f(X) \frac{\log\log X (\log\log\log\log X)}{(\log\log\log X)^2} \log X$

for some ${f(X)}$ that grows to infinity. Unfortunately, due to various sources of ineffectivity in our methods, we cannot provide any explicit rate of growth on ${f(X)}$ at all.

We decided to announce this result the old-fashioned way, as part of a research lecture; more precisely, Ben Green announced the result in his ICM lecture this Tuesday. (The ICM staff have very efficiently put up video of his talks (and most of the other plenary and prize talks) online; Ben’s talk is here, with the announcement beginning at about 0:48. Note a slight typo in his slides, in that the exponent of ${\log\log\log X}$ in the denominator is ${3}$ instead of ${2}$.) Ben’s lecture slides may be found here.

By coincidence, an independent proof of this theorem has also been obtained very recently by James Maynard.

I discuss our proof method below the fold.

[This guest post is authored by Matilde Lalin, an Associate Professor in the Département de mathématiques et de statistique at the Université de Montréal.  I have lightly edited the text, mostly by adding some HTML formatting. -T.]

Mathematicians (and likely other academics!) with small children face some unique challenges when traveling to conferences and workshops. The goal of this post is to reflect on these, and to start a constructive discussion about what institutions and event organizers could do to improve the experiences of such participants.

The first necessary step is to recognize that different families have different needs. While it is hard to completely address everybody’s needs, there are some general measures that have a good chance of helping most people traveling with young children. In this post, I will mostly focus on nursing mothers with infants ($\leq 24$ months old) because that is my personal experience. Many of the suggestions will apply to other cases such as non-nursing babies, children of single parents, children of couples of mathematicians who are interested in attending the same conference, etc.

The mother of a nursing infant who wishes to attend a conference has three options:

1. Bring the infant and a relative/friend to help care for the infant. The main challenge in this case is to fund the trip expenses of the relative. This involves trip costs, lodging, and food. The family may need a hotel room with some special amenities such as a crib, fridge, microwave, etc. Location is also important, with easy access to facilities such as a grocery store, pharmacy, etc. The mother will need to take regular breaks from the conference in order to nurse the baby (this could be as often as every three hours or so). Depending on personal preferences, she may need to nurse privately. It is thus convenient to make a private room available, located as close to the conference venue as possible. The relative may need a place near the conference to stay with the baby, such as a playground or a room with toys, particularly if the hotel room is far away.
2. Bring the infant and hire someone local (a nanny) to help care for the infant. There are two main challenges in this case: finding the caregiver and paying for such services. Finding a caregiver in a place where one does not live is hard, as it is difficult to conduct interviews or get references. There are agencies that can do this for a (quite expensive) fee: they will find a professional caregiver with background checks, CPR certification, many references, etc. It may be worth it, though, as professional caregivers tend to provide high-quality services, and peace of mind is priceless for the mother mathematician attending a conference. As in the previous case, the mother may have particular needs regarding the hotel room, location, and all the other facilities mentioned for Option 1.
3. Travel without the infant and pump milk regularly. This can be very challenging for the mother, the baby, and the person who stays behind taking care of the baby, but the costs of this arrangement are much lower than in Option 1 or 2 (I am ignoring the possibility that the family needs to hire help at home, which is necessary in some cases). A nursing mother away from her baby has no option but to pump her milk to prevent pain and serious health complications. This mother may have to pump milk very often. Pumping is less efficient than nursing, so she will be gone for longer in each break, or will have more breaks, compared to a mother who travels with her baby. For pumping, one needs a room that should ideally be private, have a sink, and be located as close to the conference venue as possible. It is often impossible for these three conditions to be met at the same time, so different mothers give priority to different features. Some people pump milk in washrooms, to have easy access to water. Others might prefer to pump in a more comfortable setting, such as an office, and go to the washroom afterwards to wash the breast pump accessories. If the mother expects that the baby will drink breastmilk while she is away, then she will also have to pump milk in advance of her trip. This requires some careful planning.

Many pumping mothers try to store the pumped milk and bring it back home. In this case the mother needs a hotel room with a fridge which (ideally, but hard to find) has a freezer. In a perfect world there would also be a fridge in the place where she pumps/where the conference is held.

It is important to keep in mind that each option has its own set of challenges (even when expenses and facilities are all covered) and that different families may be restricted in their choice of options for a variety of reasons. It is therefore important that all these three options be facilitated.

As for the effect these choices have on the conference experience for the mother, Option 1 means that she has to balance her time between the conference and spending time with her relative/friend. This pressure disappears with Option 2, so that option may lead to more participation in the conference activities. In Option 3, the mother is in principle free to participate in all the conference activities, but the frequent breaks may limit the type of activity. A mother may choose different options depending on the nature of the conference.

I want to stress, for the three options, that having to make choices about what to miss in the conference is very hard. While talks are important, so are the opportunities to meet people and discuss mathematics that happen during breaks and social events. It is very difficult to balance all of this. This is particularly difficult for the pumping mother in Option 3: because she travels without her baby, she is not perceived to be in a special situation or in need of accommodation. However, this mother is probably choosing between going to the last lecture in the morning or having lunch alone, because if she goes to pump right after the last lecture, by the time she is back, everybody has left for lunch.

Here is the Hall of Fame for those organizations that are already supporting nursing mothers’ travels in mathematics:

• The Natural Sciences and Engineering Research Council of Canada (NSERC) (search for “child care”) allows the costs of child care under Option 2 to be reimbursed out of the mother’s grants. They will also reimburse the travel expenses of a relative under Option 1, up to the amount it would cost to hire a local caregiver.
• The ENFANT/ELEFANT conference (co-organized by Lillian Pierce and Damaris Schindler) provided a good model to follow regarding accommodation for parents with children during conferences: it included funding to cover the travel costs of accompanying caretakers (provided by the Deutsche Forschungsgemeinschaft), as well as lactation rooms and play rooms near the conference venue (provided by the Hausdorff Center for Mathematics).

Additional information (where to go with kids, etc) was provided on site by the organizers and was made available to all participants all the time, by means of a display board that was left standing during the whole week of the conference.
• The American Institute of Mathematics (AIM) reimburses up to 500 dollars on childcare for visitors and they have some online resources that assist in finding childcare and nannies.

[UPDATED] Added a few more things to the Hall of Fame

In closing, here is a (possibly incomplete) list of resources that institutes, funding agencies, and conferences could consider providing for nursing mother mathematicians:

1. Funding (for costs associated with child care, whether professional or by an accompanying relative).
2. List of childcare resources (nannies, nanny agencies, drop-in childcare centre, etc).
3. Nursing rooms and playrooms near the conference venue. Nearby fridge.
4. Breaks of at least 20 minutes every 2-3 hours.
5. Information about transportation with infants. More specifically, taxi and/or shuttle companies that provide infant car seats, and information regarding the law on infant seats in taxis and other public transportation.
6. Accessibility for strollers.
7. [UPDATED] A nearby playground location. (comment from Peter).

I also find it important that these resources be listed publicly on the institute/conference website. This serves a double purpose: first, it helps those in need of the resources to access them easily, and second, it contributes to making these accommodations normal, setting a good model and inspiring the organizers of future events.

Finally, I am pretty sure that the options and solutions I described do not cover all cases. I would like to finish this note by inviting readers to make suggestions, share experiences, and/or pose questions about this topic.

In addition to the Fields medallists mentioned in the previous post, the IMU also awarded the Nevanlinna prize to Subhash Khot, the Gauss prize to Stan Osher (my colleague here at UCLA!), and the Chern medal to Phillip Griffiths. Like I did in 2010, I’ll try to briefly discuss one result of each of the prize winners, though the fields of mathematics here are even further from my expertise than those discussed in the previous post (and all the caveats from that post apply here also).

Subhash Khot is best known for his Unique Games Conjecture, a problem in complexity theory that is perhaps second in importance only to the ${P \neq NP}$ problem for the purposes of demarcating the mysterious line between “easy” and “hard” problems (if one follows standard practice and uses “polynomial time” as the definition of “easy”). The ${P \neq NP}$ problem can be viewed as an assertion that it is difficult to find exact solutions to certain standard theoretical computer science problems (such as ${k}$-SAT); thanks to the NP-completeness phenomenon, it turns out that the precise problem posed here is not of critical importance, and ${k}$-SAT may be substituted with one of the many other problems known to be NP-complete. The unique games conjecture is similarly an assertion about the difficulty of finding even approximate solutions to certain standard problems, in particular “unique games” problems in which one needs to colour the vertices of a graph in such a way that the colour of one vertex of an edge is determined uniquely (via a specified matching) by the colour of the other vertex. This is an easy problem to solve if one insists on exact solutions (in which 100% of the edges have a colouring compatible with the specified matching), but becomes extremely difficult if one permits approximate solutions, with no exact solution available. In analogy with the NP-completeness phenomenon, the threshold for approximate satisfiability of many other problems (such as the MAX-CUT problem) is closely connected with the truth of the unique games conjecture; remarkably, the truth of the unique games conjecture would imply asymptotically sharp thresholds for many of these problems. This has implications for many theoretical computer science constructions which rely on hardness of approximation, such as probabilistically checkable proofs. For a more detailed survey of the unique games conjecture and its implications, see this Bulletin article of Trevisan.
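To make the notion of a unique game concrete, here is a small illustrative sketch in Python (my own toy construction, not taken from any of the referenced papers): each edge carries a permutation of the colour set, a colouring satisfies an edge if the permutation maps the first endpoint’s colour to the second’s, and the value of the game is the best achievable fraction of satisfied edges, here brute-forced on a tiny instance.

```python
from itertools import product

def satisfied_fraction(edges, colouring):
    """Fraction of edges (u, v, perm) with colouring[v] == perm[colouring[u]]."""
    good = sum(1 for (u, v, perm) in edges if colouring[v] == perm[colouring[u]])
    return good / len(edges)

def best_fraction(n_vertices, n_colours, edges):
    """Brute-force the value of a unique game (feasible only for tiny instances)."""
    return max(satisfied_fraction(edges, c)
               for c in product(range(n_colours), repeat=n_vertices))

# A fully satisfiable instance: fix a hidden colouring and build each edge's
# permutation to be consistent with it (a cyclic shift by the colour difference).
k = 3                      # number of colours
hidden = [0, 2, 1, 2]      # hidden colouring of 4 vertices
edges = []
for (u, v) in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]:
    shift = (hidden[v] - hidden[u]) % k
    perm = tuple((c + shift) % k for c in range(k))  # colour c -> c + shift mod k
    edges.append((u, v, perm))

print(best_fraction(4, k, edges))   # a fully satisfiable game has value 1.0
```

The hard regime described above corresponds to deciding, for large instances, whether such a game has value close to $1$ or bounded away from $1$.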

My colleague Stan Osher has worked in many areas of applied mathematics, ranging from image processing to modeling fluids for major animation studios such as Pixar or Dreamworks, but today I would like to talk about one of his contributions that is close to an area of my own expertise, namely compressed sensing. One of the basic reconstruction problems in compressed sensing is the basis pursuit problem of finding the vector ${x \in {\bf R}^n}$ in an affine space ${\{ x \in {\bf R}^n: Ax = b \}}$ (where ${b \in {\bf R}^m}$ and ${A \in {\bf R}^{m\times n}}$ are given, and ${m}$ is typically somewhat smaller than ${n}$) which minimises the ${\ell^1}$-norm ${\|x\|_{\ell^1} := \sum_{i=1}^n |x_i|}$ of the vector ${x}$. This is a convex optimisation problem, and thus solvable in principle (it is a polynomial time problem, and thus “easy” in the above theoretical computer science sense). However, once ${n}$ and ${m}$ get moderately large (e.g. of the order of ${10^6}$), standard linear optimisation routines begin to become computationally expensive; also, it is difficult for off-the-shelf methods to exploit any additional structure (e.g. sparsity) in the measurement matrix ${A}$. Much of the problem comes from the fact that the functional ${x \mapsto \|x\|_1}$ is only barely convex. One way to speed up the optimisation problem is to relax it by replacing the constraint ${Ax=b}$ with a convex penalty term ${\frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2}$, thus one is now trying to minimise the unconstrained functional

$\displaystyle \|x\|_1 + \frac{1}{2\epsilon} \|Ax-b\|_{\ell^2}^2.$

This functional is more convex, and is over a computationally simpler domain ${{\bf R}^n}$ than the affine space ${\{x \in {\bf R}^n: Ax=b\}}$, so is easier (though still not entirely trivial) to optimise over. However, the minimiser ${x^\epsilon}$ to this problem need not match the minimiser ${x^0}$ to the original problem, particularly if the (sub-)gradient ${\partial \|x\|_1}$ of the original functional ${\|x\|_1}$ is large at ${x^0}$, and if ${\epsilon}$ is not set to be small. (And setting ${\epsilon}$ too small will cause other difficulties with numerically solving the optimisation problem, due to the need to divide by very small denominators.) However, if one modifies the objective function by an additional linear term

$\displaystyle \|x\|_1 - \langle p, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2$

then some simple convexity considerations reveal that the minimiser to this new problem will match the minimiser ${x^0}$ to the original problem, provided that ${p}$ is (or more precisely, lies in) the (sub-)gradient ${\partial \|x\|_1}$ of ${\|x\|_1}$ at ${x^0}$ – even if ${\epsilon}$ is not small. But, one does not know in advance what the correct value of ${p}$ should be, because one does not know what the minimiser ${x^0}$ is.

With Yin, Goldfarb and Darbon, Osher introduced a Bregman iteration method in which one solves for ${x}$ and ${p}$ simultaneously; given the current iterates ${x^k, p^k}$, one first updates ${x^k}$ to the minimiser ${x^{k+1} \in {\bf R}^n}$ of the convex functional

$\displaystyle \|x\|_1 - \langle p^k, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2 \ \ \ \ \ (1)$

and then updates ${p^{k+1}}$ to the natural value of the subgradient ${\partial \|x\|_1}$ at ${x^{k+1}}$, namely

$\displaystyle p^{k+1} := p^k - \nabla_x \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2|_{x=x^{k+1}} = p^k - \frac{1}{\epsilon} A^T (Ax^{k+1} - b)$

(note upon taking the first variation of (1) that ${p^{k+1}}$ is indeed in the subgradient). This procedure converges remarkably quickly (both in theory and in practice) to the true minimiser ${x^0}$ even for non-small values of ${\epsilon}$, and also has some ability to be parallelised, and has led to actual performance improvements of an order of magnitude or more in certain compressed sensing problems (such as reconstructing an MRI image).
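As a rough numerical illustration of the iteration just described (my own sketch, not code from the Osher–Yin–Goldfarb–Darbon paper; all parameter choices are ad hoc), one can solve the inner minimisation (1) with a simple proximal-gradient (ISTA) loop built on soft-thresholding, and then perform the outer Bregman update of ${p}$:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t*||.||_1: componentwise shrinkage toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def minimise_subproblem(A, b, p, eps, x0, n_iter=1500):
    """ISTA for min_x ||x||_1 - <p, x> + (1/(2*eps)) * ||Ax - b||^2."""
    L = np.linalg.norm(A, 2) ** 2 / eps       # Lipschitz constant of the smooth part
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) / eps - p    # gradient of the smooth part
        x = soft_threshold(x - grad / L, 1.0 / L)
    return x

def bregman_basis_pursuit(A, b, eps=1.0, n_outer=30):
    """Alternate the x-minimisation with the update p <- p - (1/eps) A^T (Ax - b)."""
    n = A.shape[1]
    x, p = np.zeros(n), np.zeros(n)
    for _ in range(n_outer):
        x = minimise_subproblem(A, b, p, eps, x)
        p = p - A.T @ (A @ x - b) / eps       # p stays in the subgradient of ||.||_1
    return x

# Toy sparse recovery problem with a random measurement matrix.
rng = np.random.default_rng(0)
m, n = 40, 80
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true

x_hat = bregman_basis_pursuit(A, b)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small recovery error
```

Note that ${\epsilon = 1}$ is deliberately not small here; it is the accumulating dual variable ${p}$, not a tiny penalty parameter, that drives the iterates to the constrained minimiser.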

Phillip Griffiths has made many contributions to complex, algebraic and differential geometry, and I am not qualified to describe most of these; my primary exposure to his work is through his text on algebraic geometry with Harris, but, excellent though that text is, it is not really representative of his research. But I thought I would mention one cute result of his related to the famous Nash embedding theorem. Suppose that one has a smooth ${n}$-dimensional Riemannian manifold that one wants to embed locally into a Euclidean space ${{\bf R}^m}$. The Nash embedding theorem guarantees that one can do this if ${m}$ is large enough depending on ${n}$, but what is the minimal value of ${m}$ one can get away with? Many years ago, my colleague Robert Greene showed that ${m = \frac{n(n+1)}{2} + n}$ sufficed (a simplified proof was subsequently given by Gunther). However, this is not believed to be sharp; if one replaces “smooth” with “real analytic” then a standard Cauchy-Kovalevski argument shows that ${m = \frac{n(n+1)}{2}}$ is possible, and no better. So this suggests that ${m = \frac{n(n+1)}{2}}$ is the threshold for the smooth problem also, but this remains open in general. The case ${n=1}$ is trivial, and the ${n=2}$ case is not too difficult (if the curvature is non-zero) as the codimension ${m-n}$ is one in this case, and the problem reduces to that of solving a Monge-Ampere equation. With Bryant and Yang, Griffiths settled the ${n=3}$ case, under a non-degeneracy condition on the Einstein tensor. This is quite a serious paper – over 100 pages combining differential geometry, PDE methods (e.g. Nash-Moser iteration), and even some harmonic analysis (e.g. they rely at one key juncture on an extension theorem of my advisor, Elias Stein). The main difficulty is that the relevant PDE degenerates along a certain characteristic submanifold of the cotangent bundle, which then requires an extremely delicate analysis to handle.

The 2014 Fields medallists have just been announced as (in alphabetical order of surname) Artur Avila, Manjul Bhargava, Martin Hairer, and Maryam Mirzakhani (see also these nice video profiles for the winners, which is a new initiative of the IMU and the Simons foundation). This time four years ago, I wrote a blog post discussing one result from each of the 2010 medallists; I thought I would try to repeat the exercise here, although the work of the medallists this time around is a little bit further away from my own direct area of expertise than last time, and so my discussion will unfortunately be a bit superficial (and possibly not completely accurate) in places. As before, I am picking these results based on my own idiosyncratic tastes, and they should not be viewed as necessarily being the “best” work of these medallists. (See also the press releases for Avila, Bhargava, Hairer, and Mirzakhani.)

Artur Avila works in dynamical systems and in the study of Schrödinger operators. The work of Avila that I am most familiar with is his solution with Svetlana Jitomirskaya of the ten martini problem of Kac, for whose solution Kac (according to Barry Simon) offered ten martinis, hence the name. The problem involves perhaps the simplest example of a Schrödinger operator with non-trivial spectral properties, namely the almost Mathieu operator ${H^{\lambda,\alpha}_\omega: \ell^2({\bf Z}) \rightarrow \ell^2({\bf Z})}$ defined for parameters ${\alpha,\omega \in {\bf R}/{\bf Z}}$ and ${\lambda>0}$ by a discrete one-dimensional Schrödinger operator with cosine potential:

$\displaystyle (H^{\lambda,\alpha}_\omega u)_n := u_{n+1} + u_{n-1} + 2\lambda (\cos 2\pi(\omega+n\alpha)) u_n.$

This is a bounded self-adjoint operator and thus has a spectrum ${\sigma( H^{\lambda,\alpha}_\omega )}$ that is a compact subset of the real line; it arises in a number of physical contexts, most notably in the theory of the integer quantum Hall effect, though I will not discuss these applications here. Remarkably, the structure of this spectrum depends crucially on the Diophantine properties of the frequency ${\alpha}$. For instance, if ${\alpha = p/q}$ is a rational number, then the operator is periodic with period ${q}$, and then basic (discrete) Floquet theory tells us that the spectrum is simply the union of ${q}$ (possibly touching) intervals. But for irrational ${\alpha}$ (in which case the spectrum is independent of the phase ${\omega}$), the situation is much more fractal in nature, for instance in the critical case ${\lambda=1}$ the spectrum (as a function of ${\alpha}$) gives rise to the Hofstadter butterfly. The “ten martini problem” asserts that for every irrational ${\alpha}$ and every choice of coupling constant ${\lambda > 0}$, the spectrum is homeomorphic to a Cantor set. Prior to the work of Avila and Jitomirskaya, there were a number of partial results on this problem, notably the result of Puig establishing Cantor spectrum for a full measure set of parameters ${(\lambda,\alpha)}$, as well as results requiring a perturbative hypothesis, such as ${\lambda}$ being very small or very large. The result was also already known for ${\alpha}$ being either very close to rational (i.e. a Liouville number) or very far from rational (a Diophantine number), although the analyses for these two cases failed to meet in the middle, leaving some cases untreated. 
The argument uses a wide variety of existing techniques, both perturbative and non-perturbative, to attack this problem, as well as an amusing argument by contradiction: they assume (in certain regimes) that the spectrum fails to be a Cantor set, and use this hypothesis to obtain additional Lipschitz control on the spectrum (as a function of the frequency ${\alpha}$), which they can then use (after much effort) to improve existing arguments and conclude that the spectrum was in fact Cantor after all!
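One can get a crude numerical feel for the almost Mathieu spectrum by truncating the operator to a finite symmetric matrix and computing its eigenvalues (a finite-volume approximation only, with Dirichlet boundary conditions; this sketch is my own illustration, not taken from the Avila–Jitomirskaya paper):

```python
import numpy as np

def almost_mathieu(N, lam, alpha, omega=0.0):
    """N x N truncation of (Hu)_n = u_{n+1} + u_{n-1}
    + 2*lam*cos(2*pi*(omega + n*alpha)) u_n, with Dirichlet boundary conditions."""
    diag = 2 * lam * np.cos(2 * np.pi * (omega + alpha * np.arange(N)))
    off = np.ones(N - 1)
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

N, lam = 400, 1.0
alpha = (np.sqrt(5) - 1) / 2            # the golden ratio: a Diophantine frequency
H = almost_mathieu(N, lam, alpha)
evals = np.linalg.eigvalsh(H)           # truncated approximation to the spectrum

# The crude operator norm bound gives |E| <= 2 + 2*lam for every eigenvalue.
print(evals.min(), evals.max())
```

Plotting `evals` against a range of frequencies ${\alpha}$ (at ${\lambda = 1}$) reproduces an approximation of the Hofstadter butterfly mentioned above; the finite truncation of course cannot literally exhibit a Cantor set.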

Manjul Bhargava produces amazingly beautiful mathematics, though most of it is outside of my own area of expertise. One part of his work that touches on an area of my own interest (namely, random matrix theory) is his ongoing work with many co-authors on modeling (both conjecturally and rigorously) the statistics of various key number-theoretic features of elliptic curves (such as their rank, their Selmer group, or their Tate-Shafarevich groups). For instance, with Kane, Lenstra, Poonen, and Rains, Manjul has proposed a very general random matrix model that predicts all of these statistics (for instance, predicting that the ${p}$-component of the Tate-Shafarevich group is distributed like the cokernel of a certain random ${p}$-adic matrix, very much in the spirit of the Cohen-Lenstra heuristics discussed in this previous post). But what is even more impressive is that Manjul and his coauthors have been able to verify several non-trivial fragments of this model (e.g. showing that certain moments have the predicted asymptotics), giving for the first time non-trivial upper and lower bounds for various statistics, for instance obtaining lower bounds on how often an elliptic curve has rank ${0}$ or rank ${1}$, leading most recently (in combination with existing work of Gross-Zagier and of Kolyvagin, among others) to his amazing result with Skinner and Zhang that at least ${66\%}$ of all elliptic curves over ${{\bf Q}}$ (ordered by height) obey the Birch and Swinnerton-Dyer conjecture. Previously it was not even known that a positive proportion of curves obeyed the conjecture. 
This is still a fair ways from resolving the conjecture fully (in particular, the situation with the presumably small number of curves of rank ${2}$ and higher is still very poorly understood, and the theory of Gross-Zagier and Kolyvagin that this work relies on, which was initially only available for ${{\bf Q}}$, has only been extended to totally real number fields thus far, by the work of Zhang), but it certainly does provide hope that the conjecture could be within reach in a statistical sense at least.

Martin Hairer works at the interface between probability and partial differential equations, and in particular in the theory of stochastic differential equations (SDEs). The result of his that is closest to my own interests is his remarkable demonstration with Jonathan Mattingly of a unique invariant measure for the two-dimensional stochastically forced Navier-Stokes equation

$\displaystyle \partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p + \xi$

$\displaystyle \nabla \cdot u = 0$

on the two-torus ${({\bf R}/{\bf Z})^2}$, where ${\xi}$ is a Gaussian field that forces a fixed set of frequencies. It is expected that for any reasonable choice of initial data, the solution to this equation should asymptotically be distributed according to Kolmogorov’s power law, as discussed in this previous post. This is still far from established rigorously (although there are some results in this direction for dyadic models, see e.g. this paper of Cheskidov, Shvydkoy, and Friedlander). However, Hairer and Mattingly were able to show that there was a unique probability distribution to which the solution arising from almost every choice of initial data would converge asymptotically; by the ergodic theorem, this is equivalent to demonstrating the existence and uniqueness of an invariant measure for the flow. Existence can be established using standard methods, but uniqueness is much more difficult. One of the standard routes to uniqueness is to establish a “strong Feller property” that enforces some continuity on the transition operators; among other things, this would mean that two ergodic probability measures with intersecting supports would in fact have a non-trivial common component, contradicting the ergodic theorem (which forces different ergodic measures to be mutually singular). Since all ergodic measures for Navier-Stokes can be seen to contain the origin in their support, this would give uniqueness. Unfortunately, the strong Feller property is unlikely to hold in the infinite-dimensional phase space for Navier-Stokes; but Hairer and Mattingly develop a clean abstract substitute for this property, which they call the asymptotic strong Feller property, which is again a regularity property on the transition operator; this in turn is then demonstrated by a careful application of Malliavin calculus.

Maryam Mirzakhani has mostly focused on the geometry and dynamics of Teichmuller-type moduli spaces, such as the moduli space of Riemann surfaces with a fixed genus and a fixed number of cusps (or with a fixed number of boundaries that are geodesics of a prescribed length). These spaces have an incredibly rich structure, ranging from geometric structure (such as the Kahler geometry given by the Weil-Petersson metric), to dynamical structure (through the action of the mapping class group on this and related spaces), to algebraic structure (viewing these spaces as algebraic varieties), and are thus connected to many other objects of interest in geometry and dynamics. For instance, by developing a new recursive formula for the Weil-Petersson volume of this space, Mirzakhani was able to asymptotically count the number of simple prime geodesics of length up to some threshold ${L}$ in a hyperbolic surface (or more precisely, she obtained asymptotics for the number of such geodesics in a given orbit of the mapping class group); the answer turns out to be polynomial in ${L}$, in contrast to the much larger class of non-simple prime geodesics, whose asymptotics are exponential in ${L}$ (the “prime number theorem for geodesics”, developed in a classic series of works by Delsarte, Huber, Selberg, and Margulis); she also used this formula to establish a new proof of a conjecture of Witten on intersection numbers that was first proven by Kontsevich. More recently, in two lengthy papers with Eskin and with Eskin-Mohammadi, Mirzakhani established rigidity theorems for the action of ${SL_2({\bf R})}$ on such moduli spaces that are close analogues of Ratner’s celebrated rigidity theorems for unipotently generated groups (discussed in this previous blog post). 
Ratner’s theorems are already notoriously difficult to prove, and rely very much on the polynomial stability properties of unipotent flows; in this even more complicated setting, the unipotent flows are no longer tractable, and Mirzakhani instead uses a recent “exponential drift” method of Benoist and Quint as a substitute. Ratner’s theorems are incredibly useful for all sorts of problems connected to homogeneous dynamics, and the analogous theorems established by Mirzakhani, Eskin, and Mohammadi have a similarly broad range of applications, for instance in counting periodic billiard trajectories in rational polygons.

I’ve just uploaded to the arXiv the D.H.J. Polymath paper “Variants of the Selberg sieve, and bounded intervals containing many primes“, which is the second paper to be produced from the Polymath8 project (the first one being discussed here). We’ll refer to this latter paper here as the Polymath8b paper, and the former as the Polymath8a paper. As with Polymath8a, the Polymath8b paper is concerned with the smallest asymptotic prime gap

$\displaystyle H_1 := \liminf_{n \rightarrow \infty}(p_{n+1}-p_n),$

where ${p_n}$ denotes the ${n^{th}}$ prime, as well as the more general quantities

$\displaystyle H_m := \liminf_{n \rightarrow \infty}(p_{n+m}-p_n).$
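The liminfs themselves are of course not accessible to finite computation, but the quantities ${p_{n+m}-p_n}$ being minimised are easy to tabulate for small ${n}$; here is a quick illustrative Python sketch (my own, not part of the paper):

```python
def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(1000)

# Smallest value of p_{n+m} - p_n among primes below 1000, for m = 1 and m = 2.
min_gap_1 = min(primes[n + 1] - primes[n] for n in range(len(primes) - 1))
min_gap_2 = min(primes[n + 2] - primes[n] for n in range(len(primes) - 2))
print(min_gap_1, min_gap_2)  # 1 (from 3 - 2) and 3 (from 5 - 2)
```

The content of the bounds discussed below is that small values of these gaps (not just the anomalous ones near the start of the sequence) recur infinitely often.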

In the breakthrough paper of Goldston, Pintz, and Yildirim, the bound ${H_1 \leq 16}$ was obtained under the strong hypothesis of the Elliott-Halberstam conjecture. An unconditional bound on ${H_1}$, however, remained elusive until the celebrated work of Zhang last year, who showed that

$\displaystyle H_1 \leq 70{,}000{,}000.$

The Polymath8a paper then improved this to ${H_1 \leq 4{,}680}$. After that, Maynard introduced a new multidimensional Selberg sieve argument that gave the substantial improvement

$\displaystyle H_1 \leq 600$

unconditionally, and ${H_1 \leq 12}$ on the Elliott-Halberstam conjecture; furthermore, bounds on ${H_m}$ for higher ${m}$ were obtained for the first time, and specifically that ${H_m \ll m^3 e^{4m}}$ for all ${m \geq 1}$, with the improvements ${H_2 \leq 600}$ and ${H_m \ll m^3 e^{2m}}$ on the Elliott-Halberstam conjecture. (I had independently discovered the multidimensional sieve idea, although I did not obtain Maynard’s specific numerical results, and my asymptotic bounds were a bit weaker.)

In Polymath8b, we obtain some further improvements. Unconditionally, we have ${H_1 \leq 246}$ and ${H_m \ll m e^{(4 - \frac{28}{157}) m}}$, together with some explicit bounds on ${H_2,H_3,H_4,H_5}$; on the Elliott-Halberstam conjecture we have ${H_m \ll m e^{2m}}$ and some numerical improvements to the ${H_2,H_3,H_4,H_5}$ bounds; and assuming the generalised Elliott-Halberstam conjecture we have the bound ${H_1 \leq 6}$, which is best possible from sieve-theoretic methods thanks to the parity problem obstruction.

There were a variety of methods used to establish these results. Maynard’s paper obtained a criterion for bounding ${H_m}$ which reduced to finding a good solution to a certain multidimensional variational problem. When the dimension parameter ${k}$ was relatively small (e.g. ${k \leq 100}$), we were able to obtain good numerical solutions either by continuing the method of Maynard (using a basis of symmetric polynomials) or by using a Krylov iteration scheme. For large ${k}$, we refined the asymptotics and obtained near-optimal solutions of the variational problem. For the ${H_1}$ bounds, we extended the reach of the multidimensional Selberg sieve (particularly under the assumption of the generalised Elliott-Halberstam conjecture) by allowing the function ${F}$ in the multidimensional variational problem to extend to a larger region of space than was previously admissible, albeit with some tricky new constraints on ${F}$ (and penalties in the variational problem). This required some unusual sieve-theoretic manipulations, notably an “epsilon trick”, ultimately relying on the elementary inequality ${(a+b)^2 \geq a^2 + 2ab}$, that allowed one to get non-trivial lower bounds for sums such as ${\sum_n (a(n)+b(n))^2}$ even if the sum ${\sum_n b(n)^2}$ had no non-trivial estimates available; and a way to estimate divisor sums such as ${\sum_{n\leq x} \sum_{d|n} \lambda_d}$ even if ${d}$ was permitted to be comparable to or even exceed ${x}$, by using the fundamental theorem of arithmetic to factorise ${n}$ (after restricting to the case when ${n}$ is almost prime). I hope that these sieve-theoretic tricks will be useful in future work in the subject.

With this paper, the Polymath8 project is almost complete; there is still a little bit of scope to push our methods further and get some modest improvement for instance to the ${H_1 \leq 246}$ bound, but this would require a substantial amount of effort, and it is probably best to instead wait for some new breakthrough in the subject to come along. One final task we are performing is to write up a retrospective article on both the 8a and 8b experiences, an incomplete writeup of which can be found here. If anyone wishes to contribute some commentary on these projects (whether you were an active contributor, an occasional contributor, or a silent “lurker” in the online discussion), please feel free to do so in the comments to this post.