
In the modern theory of higher order Fourier analysis, a key role is played by the Gowers uniformity norms ${\| \|_{U^k}}$ for ${k=1,2,\dots}$. For finitely supported functions ${f: {\bf Z} \rightarrow {\bf C}}$, one can define the (non-normalised) Gowers norm ${\|f\|_{\tilde U^k({\bf Z})}}$ by the formula

$\displaystyle \|f\|_{\tilde U^k({\bf Z})}^{2^k} := \sum_{n,h_1,\dots,h_k \in {\bf Z}} \prod_{\omega_1,\dots,\omega_k \in \{0,1\}} {\mathcal C}^{\omega_1+\dots+\omega_k} f(n+\omega_1 h_1 + \dots + \omega_k h_k)$

where ${{\mathcal C}}$ denotes complex conjugation, and then on any discrete interval ${[N] = \{1,\dots,N\}}$ and any function ${f: [N] \rightarrow {\bf C}}$ we can define the (normalised) Gowers norm

$\displaystyle \|f\|_{U^k([N])} := \| f 1_{[N]} \|_{\tilde U^k({\bf Z})} / \|1_{[N]} \|_{\tilde U^k({\bf Z})}$

where ${f 1_{[N]}: {\bf Z} \rightarrow {\bf C}}$ is the extension of ${f}$ by zero to all of ${{\bf Z}}$. Thus for instance

$\displaystyle \|f\|_{U^1([N])} = |\mathop{\bf E}_{n \in [N]} f(n)|$

(which technically makes ${\| \|_{U^1([N])}}$ a seminorm rather than a norm), and one can calculate

$\displaystyle \|f\|_{U^2([N])} \asymp (N \int_0^1 |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)|^4\ d\alpha)^{1/4} \ \ \ \ \ (1)$

where ${e(\theta) := e^{2\pi i \theta}}$, and we use the averaging notation ${\mathop{\bf E}_{n \in A} f(n) = \frac{1}{|A|} \sum_{n \in A} f(n)}$.
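To get a feel for these definitions, one can compute the norms by brute force for small ${N}$. The following Python sketch (helper names are mine, not standard) sums over all parallelepipeds directly; it confirms that the parity function ${n \mapsto (-1)^n}$ has vanishing ${U^1([12])}$ norm, while its ${U^2([12])}$ norm is ${1}$, as befits a linear phase.

```python
import itertools

def gowers_unnorm(f, k, N):
    """2^k-th root of the Gowers sum for f supported on [1, N], extended by zero."""
    def val(n):
        return f(n) if 1 <= n <= N else 0.0
    total = 0.0 + 0.0j
    shifts = range(-(N - 1), N)   # shifts outside this range contribute nothing
    for n in range(1, N + 1):     # the omega = (0,...,0) factor forces n into [1, N]
        for hs in itertools.product(shifts, repeat=k):
            term = 1.0 + 0.0j
            for omega in itertools.product((0, 1), repeat=k):
                v = val(n + sum(w * h for w, h in zip(omega, hs)))
                if v == 0:
                    term = 0.0
                    break
                term *= v.conjugate() if sum(omega) % 2 else v
            total += term
    return abs(total) ** (1.0 / 2 ** k)

def gowers(f, k, N):
    """Normalised Gowers norm U^k([N])."""
    return gowers_unnorm(f, k, N) / gowers_unnorm(lambda n: 1.0 + 0.0j, k, N)

par = lambda n: complex((-1) ** n)
print(gowers(par, 1, 12))   # vanishes: the parity function has mean zero on [1, 12]
print(gowers(par, 2, 12))   # equals 1: a linear phase is fully U^2-structured
```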

The significance of the Gowers norms is that they control other multilinear forms that show up in additive combinatorics. Given any polynomials ${P_1,\dots,P_m: {\bf Z}^d \rightarrow {\bf Z}}$ and functions ${f_1,\dots,f_m: [N] \rightarrow {\bf C}}$, we define the multilinear form

$\displaystyle \Lambda^{P_1,\dots,P_m}(f_1,\dots,f_m) := \sum_{n \in {\bf Z}^d} \prod_{j=1}^m f_j 1_{[N]}(P_j(n)) / \sum_{n \in {\bf Z}^d} \prod_{j=1}^m 1_{[N]}(P_j(n))$

(assuming that the denominator is finite and non-zero). Thus for instance

$\displaystyle \Lambda^{\mathrm{n}}(f) = \mathop{\bf E}_{n \in [N]} f(n)$

$\displaystyle \Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}}(f,g) = (\mathop{\bf E}_{n \in [N]} f(n)) (\mathop{\bf E}_{n \in [N]} g(n))$

$\displaystyle \Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+2\mathrm{r}}(f,g,h) \asymp \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [-N,N]} f(n) g(n+r) h(n+2r)$

$\displaystyle \Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+\mathrm{r}^2}(f,g,h) \asymp \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [-N^{1/2},N^{1/2}]} f(n) g(n+r) h(n+r^2)$

where we view ${\mathrm{n}, \mathrm{r}}$ as formal (indeterminate) variables, and ${f,g,h: [N] \rightarrow {\bf C}}$ are understood to be extended by zero to all of ${{\bf Z}}$. These forms are used to count patterns in various sets; for instance, the quantity ${\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+2\mathrm{r}}(1_A,1_A,1_A)}$ is closely related to the number of length three arithmetic progressions contained in ${A}$. Let us informally say that a form ${\Lambda^{P_1,\dots,P_m}(f_1,\dots,f_m)}$ is controlled by the ${U^k[N]}$ norm if the form is small whenever ${f_1,\dots,f_m: [N] \rightarrow {\bf C}}$ are ${1}$-bounded functions with at least one of the ${f_j}$ small in ${U^k[N]}$ norm. This definition was made more precise by Gowers and Wolf, who then defined the true complexity of a form ${\Lambda^{P_1,\dots,P_m}}$ to be the least ${s}$ such that ${\Lambda^{P_1,\dots,P_m}}$ is controlled by the ${U^{s+1}[N]}$ norm. For instance,
• ${\Lambda^{\mathrm{n}}}$ and ${\Lambda^{\mathrm{n}, \mathrm{n} + \mathrm{r}}}$ have true complexity ${0}$;
• ${\Lambda^{\mathrm{n}, \mathrm{n} + \mathrm{r}, \mathrm{n} + \mathrm{2r}}}$ has true complexity ${1}$;
• ${\Lambda^{\mathrm{n}, \mathrm{n} + \mathrm{r}, \mathrm{n} + \mathrm{2r}, \mathrm{n} + \mathrm{3r}}}$ has true complexity ${2}$;
• The form ${\Lambda^{\mathrm{n}, \mathrm{n}+2}}$ (which among other things could be used to count twin primes) has infinite true complexity (which is quite unfortunate for applications).
Roughly speaking, patterns of complexity ${1}$ or less are amenable to being studied by classical Fourier analytic tools (the Hardy-Littlewood circle method); patterns of higher complexity can be handled (in principle, at least) by the methods of higher order Fourier analysis; and patterns of infinite complexity are out of range of both methods and are generally quite difficult to study. See these recent slides of myself (or this video of the lecture) for some further discussion.
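These normalised forms are easy to evaluate numerically for small ${N}$. Here is a minimal sketch (the name `ap3_form` is mine) of the three-term form ${\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+2\mathrm{r}}}$; applied to the indicator of the odd numbers it returns exactly ${1/4}$, since a three-term progression of odd numbers needs ${n}$ odd and ${r}$ even.

```python
def ap3_form(f, g, h, N):
    """Lambda^{n, n+r, n+2r}(f, g, h): sum over pairs (n, r) with all three
    points in [1, N], normalised by the same sum with f = g = h = 1."""
    def triple(u, v, w):
        total = 0.0
        for n in range(1, N + 1):
            for r in range(-(N - 1), N):
                if 1 <= n + r <= N and 1 <= n + 2 * r <= N:
                    total += u(n) * v(n + r) * w(n + 2 * r)
        return total
    one = lambda n: 1.0
    return triple(f, g, h) / triple(one, one, one)

odds = lambda n: 1.0 if n % 2 else 0.0
print(ap3_form(odds, odds, odds, 40))   # 0.25: density of three-term APs in the odds
```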

Gowers and Wolf formulated a conjecture on what this complexity should be, at least for linear polynomials ${P_1,\dots,P_m}$; Ben Green and I thought we had resolved this conjecture back in 2010, though it turned out there was a subtle gap in our arguments and we were only able to resolve the conjecture in a partial range of cases. However, the full conjecture was recently resolved by Daniel Altman.

The ${U^1}$ (semi-)norm is so weak that it barely controls any averages at all. For instance the average

$\displaystyle \Lambda^{2\mathrm{n}}(f) = \mathop{\bf E}_{n \in [N], \hbox{ even}} f(n)$

is not controlled by the ${U^1[N]}$ semi-norm: it is perfectly possible for a ${1}$-bounded function ${f: [N] \rightarrow {\bf C}}$ to have vanishing ${U^1([N])}$ norm and yet attain a large value of ${\Lambda^{2\mathrm{n}}(f)}$ (consider for instance the parity function ${f(n) := (-1)^n}$).

Because of this, I propose inserting an additional norm in the Gowers uniformity norm hierarchy between the ${U^1}$ and ${U^2}$ norms, which I will call the ${U^{1^+}}$ (or “profinite ${U^1}$”) norm:

$\displaystyle \| f\|_{U^{1^+}[N]} := \frac{1}{N} \sup_P |\sum_{n \in P} f(n)| = \sup_P | \mathop{\bf E}_{n \in [N]} f 1_P(n)|$

where ${P}$ ranges over all arithmetic progressions in ${[N]}$. This can easily be seen to be a norm on functions ${f: [N] \rightarrow {\bf C}}$ that controls the ${U^1[N]}$ norm. It is also basically controlled by the ${U^2[N]}$ norm for ${1}$-bounded functions ${f}$; indeed, if ${P}$ is an arithmetic progression in ${[N]}$ of some spacing ${q \geq 1}$, then we can write ${P}$ as the intersection of an interval ${I}$ with a residue class modulo ${q}$, and from Fourier expansion we have

$\displaystyle \mathop{\bf E}_{n \in [N]} f 1_P(n) \ll \sup_\alpha |\mathop{\bf E}_{n \in [N]} f 1_I(n) e(\alpha n)|.$

If we let ${\psi}$ be a standard bump function supported on ${[-1,1]}$ with total mass one, and ${\delta>0}$ is a parameter, then

$\displaystyle \mathop{\bf E}_{n \in [N]} f 1_I(n) e(\alpha n)$

$\displaystyle \ll |\mathop{\bf E}_{n \in [-N,2N]; h, k \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N})$

$\displaystyle 1_I(n+h+k) f(n+h+k) e(\alpha(n+h+k))|$

$\displaystyle \ll |\mathop{\bf E}_{n \in [-N,2N]; h, k \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N}) 1_I(n+k) f(n+h+k) e(\alpha(n+h+k))|$

$\displaystyle + \delta$

(extending ${f}$ by zero outside of ${[N]}$), as can be seen by using the triangle inequality and the estimate

$\displaystyle \mathop{\bf E}_{h \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N}) 1_I(n+h+k) - \mathop{\bf E}_{h \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N}) 1_I(n+k)$

$\displaystyle \ll (1 + \mathrm{dist}(n+k, I) / \delta N)^{-2}.$

After some Fourier expansion of ${\frac{1}{\delta} \psi(\frac{h}{\delta N})}$ we now have

$\displaystyle \mathop{\bf E}_{n \in [N]} f 1_P(n) \ll \frac{1}{\delta} \sup_{\alpha,\beta} |\mathop{\bf E}_{n \in [N]; h, k \in [-N,N]} e(\beta h + \alpha (n+h+k))$

$\displaystyle 1_P(n+k) f(n+h+k)| + \delta.$

Writing ${\beta h + \alpha(n+h+k)}$ as a linear combination of ${n, n+h, n+k}$ and using the Gowers–Cauchy–Schwarz inequality, we conclude

$\displaystyle \mathop{\bf E}_{n \in [N]} f 1_P(n) \ll \frac{1}{\delta} \|f\|_{U^2([N])} + \delta$

hence on optimising in ${\delta}$ we have

$\displaystyle \| f\|_{U^{1^+}[N]} \ll \|f\|_{U^2[N]}^{1/2}.$

Forms which are controlled by the ${U^{1^+}}$ norm (but not ${U^1}$) would then have their true complexity adjusted to ${0^+}$ with this insertion.
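For small ${N}$ the ${U^{1^+}}$ norm can be evaluated by exhaustive search, since every arithmetic progression in ${[N]}$ is an initial segment of some ${a, a+q, a+2q, \dots}$; the sketch below (names of my choosing) takes a running maximum over partial sums to enumerate them all. It confirms the parity example: the ${U^1}$-type average vanishes, while the even numbers witness a ${U^{1^+}}$ norm of ${1/2}$.

```python
def u1plus(f, N):
    """Brute-force U^{1+}[N] norm: sup over progressions P in [1, N] of |sum_P f| / N.
    Each prefix of {a, a+q, a+2q, ...} is itself a progression, so a running
    maximum over partial sums covers every P."""
    best = 0.0
    for a in range(1, N + 1):
        for q in range(1, N + 1):
            s = 0.0
            for n in range(a, N + 1, q):
                s += f(n)
                best = max(best, abs(s))
    return best / N

par = lambda n: (-1.0) ** n
print(sum(par(n) for n in range(1, 17)) / 16)   # the U^1 average: 0
print(u1plus(par, 16))                          # 0.5, witnessed by the even numbers
```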

The ${U^{1^+}}$ norm recently appeared implicitly in work of Peluse and Prendiville, who showed that the form ${\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+\mathrm{r}^2}(f,g,h)}$ had true complexity ${0^+}$ in this notation (with polynomially strong bounds). [Actually, strictly speaking this control was only shown for the third function ${h}$; for the first two functions ${f,g}$ one needs to localize the ${U^{1^+}}$ norm to intervals of length ${\sim \sqrt{N}}$. But I will ignore this technical point to keep the exposition simple.] The weaker claim that ${\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}^2}(f,g)}$ has true complexity ${0^+}$ is substantially easier to prove (one can apply the circle method together with Gauss sum estimates).

The well known inverse theorem for the ${U^2}$ norm tells us that if a ${1}$-bounded function ${f}$ has ${U^2[N]}$ norm at least ${\eta}$ for some ${0 < \eta < 1}$, then there is a Fourier phase ${n \mapsto e(\alpha n)}$ such that

$\displaystyle |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \gg \eta^2;$

this follows easily from (1) and Plancherel’s theorem. Conversely, from the Gowers–Cauchy–Schwarz inequality one has

$\displaystyle |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \ll \|f\|_{U^2[N]}.$

For ${U^1[N]}$ one has a trivial inverse theorem; by definition, the ${U^1[N]}$ norm of ${f}$ is at least ${\eta}$ if and only if

$\displaystyle |\mathop{\bf E}_{n \in [N]} f(n)| \geq \eta.$

Thus the frequency ${\alpha}$ appearing in the ${U^2}$ inverse theorem can be taken to be zero when working instead with the ${U^1}$ norm.

For ${U^{1^+}}$ one has the intermediate situation in which the frequency ${\alpha}$ is not taken to be zero, but is instead major arc. Indeed, suppose that ${f}$ is ${1}$-bounded with ${\|f\|_{U^{1^+}[N]} \geq \eta}$, thus

$\displaystyle |\mathop{\bf E}_{n \in [N]} 1_P(n) f(n)| \geq \eta$

for some progression ${P}$. This forces the spacing ${q}$ of this progression to be ${\ll 1/\eta}$. We write the above inequality as

$\displaystyle |\mathop{\bf E}_{n \in [N]} 1_{n=b\ (q)} 1_I(n) f(n)| \geq \eta$

for some residue class ${b\ (q)}$ and some interval ${I}$. By Fourier expansion and the triangle inequality we then have

$\displaystyle |\mathop{\bf E}_{n \in [N]} e(-an/q) 1_I(n) f(n)| \geq \eta$

for some integer ${a}$. Convolving ${1_I}$ by ${\psi_\delta: n \mapsto \frac{1}{N\delta} \psi(\frac{n}{N\delta})}$ for ${\delta}$ a small multiple of ${\eta}$ and ${\psi}$ a Schwartz function of unit mass with Fourier transform supported on ${[-1,1]}$, we have

$\displaystyle |\mathop{\bf E}_{n \in [N]} e(-an/q) (1_I * \psi_\delta)(n) f(n)| \gg \eta.$

The Fourier transform ${\xi \mapsto \sum_n 1_I * \psi_\delta(n) e(- \xi n)}$ of ${1_I * \psi_\delta}$ is bounded by ${O(N)}$ and supported on ${[-\frac{1}{\delta N},\frac{1}{\delta N}]}$, thus by Fourier expansion and the triangle inequality we have

$\displaystyle |\mathop{\bf E}_{n \in [N]} e(-an/q) e(-\xi n) f(n)| \gg \eta^2$

for some ${\xi \in [-\frac{1}{\delta N},\frac{1}{\delta N}]}$, so in particular ${\xi = O(\frac{1}{\eta N})}$. Thus we have

$\displaystyle |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \gg \eta^2 \ \ \ \ \ (2)$

for some ${\alpha}$ of the major arc form ${\alpha = \frac{a}{q} + O(\frac{1}{\eta N})}$ with ${1 \leq q \leq 1/\eta}$. Conversely, for ${\alpha}$ of this form, some routine summation by parts gives the bound

$\displaystyle |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \ll \frac{q}{\eta} \|f\|_{U^{1^+}[N]} \ll \frac{1}{\eta^2} \|f\|_{U^{1^+}[N]}$

so if (2) holds for a ${1}$-bounded ${f}$ then one must have ${\|f\|_{U^{1^+}[N]} \gg \eta^4}$.
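As a numerical illustration of this inverse theorem (a toy example of my own, not from the argument above): the major arc phase ${f(n) = e(n/3)}$ correlates perfectly with the frequency ${\alpha = 1/3}$, and a single progression, the multiples of ${3}$, already witnesses a ${U^{1^+}}$ norm of at least ${1/3}$, since ${f}$ is constant on each residue class mod ${3}$.

```python
import cmath

def e(theta):
    """e(theta) = exp(2 pi i theta)."""
    return cmath.exp(2j * cmath.pi * theta)

N = 99
f = lambda n: e(n / 3)   # 1-bounded, with major arc frequency a/q = 1/3

# Correlation (2) with the major arc phase is as large as possible here.
corr = abs(sum(f(n) * e(-n / 3) for n in range(1, N + 1))) / N

# The single progression {3, 6, ..., 99} witnesses a large U^{1+}[N] norm.
witness = abs(sum(f(n) for n in range(3, N + 1, 3))) / N

print(corr, witness)   # approximately 1 and 1/3
```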

Here is a diagram showing some of the control relationships between various Gowers norms, multilinear forms, and duals of classes ${{\mathcal F}}$ of functions (where each class of functions ${{\mathcal F}}$ induces a dual norm ${\| f \|_{{\mathcal F}^*} := \sup_{\phi \in {\mathcal F}} \mathop{\bf E}_{n \in[N]} f(n) \overline{\phi(n)}}$):

Here I have included the three classes of functions that one can choose from for the ${U^3}$ inverse theorem, namely degree two nilsequences, bracket quadratic phases, and local quadratic phases, as well as the more narrow class of globally quadratic phases.

The Gowers norms have counterparts for measure-preserving systems ${(X,T,\mu)}$, known as Host-Kra seminorms. The ${U^1(X)}$ norm can be defined for ${f \in L^\infty(X)}$ as

$\displaystyle \|f\|_{U^1(X)} := \lim_{N \rightarrow \infty} \int_X |\mathop{\bf E}_{n \in [N]} T^n f|\ d\mu$

and the ${U^2}$ norm can be defined as

$\displaystyle \|f\|_{U^2(X)}^4 := \lim_{N \rightarrow \infty} \mathop{\bf E}_{n \in [N]} \| T^n f \overline{f} \|_{U^1(X)}^2.$

The ${U^1(X)}$ seminorm is orthogonal to the invariant factor ${Z^0(X)}$ (generated by the (almost everywhere) invariant measurable subsets of ${X}$) in the sense that a function ${f \in L^\infty(X)}$ has vanishing ${U^1(X)}$ seminorm if and only if it is orthogonal to all ${Z^0(X)}$-measurable (bounded) functions. Similarly, the ${U^2(X)}$ norm is orthogonal to the Kronecker factor ${Z^1(X)}$, generated by the eigenfunctions of ${X}$ (that is to say, those ${f}$ obeying an identity ${Tf = \lambda f}$ for some ${T}$-invariant ${\lambda}$); for ergodic systems, it is the largest factor isomorphic to rotation on a compact abelian group. In analogy to the Gowers ${U^{1^+}[N]}$ norm, one can then define the Host-Kra ${U^{1^+}(X)}$ seminorm by

$\displaystyle \|f\|_{U^{1^+}(X)} := \sup_{q \geq 1} \frac{1}{q} \lim_{N \rightarrow \infty} \int_X |\mathop{\bf E}_{n \in [N]} T^{qn} f|\ d\mu;$

it is orthogonal to the profinite factor ${Z^{0^+}(X)}$, generated by the periodic sets of ${X}$ (or equivalently, by those eigenfunctions whose eigenvalue is a root of unity); for ergodic systems, it is the largest factor isomorphic to rotation on a profinite abelian group.
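These seminorms can be computed exactly in a toy finite system (the setup is mine): take ${X = {\bf Z}/4{\bf Z}}$ with the rotation ${Tx = x+1}$ and uniform measure, and let ${f}$ be the eigenfunction with eigenvalue ${i}$, a root of unity. Then the ${U^1(X)}$ seminorm vanishes, but the ${U^{1^+}(X)}$ seminorm does not, consistent with ${f}$ being measurable with respect to the profinite factor.

```python
import cmath
from math import gcd

q0 = 4   # the system X = Z/4Z, T x = x + 1, uniform probability measure
f = [cmath.exp(2j * cmath.pi * x / q0) for x in range(q0)]   # eigenfunction, Tf = i f

def birkhoff(g, step):
    """x -> lim_N E_{n in [N]} g(x + step * n): the average over one full period."""
    per = q0 // gcd(step, q0)
    return [sum(g[(x + step * n) % q0] for n in range(1, per + 1)) / per
            for x in range(q0)]

def u1(g):
    """U^1(X) seminorm: int_X |lim_N E_{n in [N]} T^n g| dmu."""
    return sum(abs(v) for v in birkhoff(g, 1)) / q0

def u1plus(g, qmax=12):
    """U^{1+}(X) seminorm: sup_q (1/q) int_X |lim_N E_{n in [N]} T^{qn} g| dmu."""
    return max(sum(abs(v) for v in birkhoff(g, q)) / q0 / q
               for q in range(1, qmax + 1))

print(u1(f))       # 0: f is orthogonal to the invariant factor
print(u1plus(f))   # 1/4, attained at q = 4, where T^{4n} acts trivially
```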

Given a function ${f: {\bf N} \rightarrow \{-1,+1\}}$ on the natural numbers taking values in ${+1, -1}$, one can invoke the Furstenberg correspondence principle to locate a measure preserving system ${T \circlearrowright (X, \mu)}$ – a probability space ${(X,\mu)}$ together with a measure-preserving shift ${T: X \rightarrow X}$ (or equivalently, a measure-preserving ${{\bf Z}}$-action on ${(X,\mu)}$) – together with a measurable function (or “observable”) ${F: X \rightarrow \{-1,+1\}}$ that has essentially the same statistics as ${f}$ in the sense that

$\displaystyle \liminf_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N f(n+h_1) \dots f(n+h_k)$

$\displaystyle \leq \int_X F(T^{h_1} x) \dots F(T^{h_k} x)\ d\mu(x)$

$\displaystyle \leq \limsup_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N f(n+h_1) \dots f(n+h_k)$

for any integers ${h_1,\dots,h_k}$. In particular, one has

$\displaystyle \int_X F(T^{h_1} x) \dots F(T^{h_k} x)\ d\mu(x) = \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N f(n+h_1) \dots f(n+h_k) \ \ \ \ \ (1)$

whenever the limit on the right-hand side exists. We will refer to the system ${T \circlearrowright (X,\mu)}$ together with the designated function ${F}$ as a Furstenberg limit of the sequence ${f}$. These Furstenberg limits capture some, but not all, of the asymptotic behaviour of ${f}$; roughly speaking, they control the typical “local” behaviour of ${f}$, involving correlations such as ${\frac{1}{N} \sum_{n=1}^N f(n+h_1) \dots f(n+h_k)}$ in the regime where ${h_1,\dots,h_k}$ are much smaller than ${N}$. However, the control on error terms here is usually only qualitative at best, and one usually does not obtain non-trivial control on correlations in which the ${h_1,\dots,h_k}$ are allowed to grow at some significant rate with ${N}$ (e.g. like some power ${N^\theta}$ of ${N}$).

The correspondence principle is discussed in these previous blog posts. One way to establish the principle is by introducing a Banach limit ${p\!-\!\lim: \ell^\infty({\bf N}) \rightarrow {\bf R}}$ that extends the usual limit functional on the subspace of ${\ell^\infty({\bf N})}$ consisting of convergent sequences while still having operator norm one. Such functionals cannot be constructed explicitly, but can be proven to exist (non-constructively and non-uniquely) using the Hahn-Banach theorem; one can also use a non-principal ultrafilter here if desired. One can then seek to construct a system ${T \circlearrowright (X,\mu)}$ and a measurable function ${F: X \rightarrow \{-1,+1\}}$ for which one has the statistics

$\displaystyle \int_X F(T^{h_1} x) \dots F(T^{h_k} x)\ d\mu(x) = p\!-\!\lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N f(n+h_1) \dots f(n+h_k) \ \ \ \ \ (2)$

for all ${h_1,\dots,h_k}$. One can explicitly construct such a system as follows. One can take ${X}$ to be the Cantor space ${\{-1,+1\}^{\bf Z}}$ with the product ${\sigma}$-algebra and the shift

$\displaystyle T ( (x_n)_{n \in {\bf Z}} ) := (x_{n+1})_{n \in {\bf Z}}$

with the function ${F: X \rightarrow \{-1,+1\}}$ being the coordinate function at zero:

$\displaystyle F( (x_n)_{n \in {\bf Z}} ) := x_0$

(so in particular ${F( T^h (x_n)_{n \in {\bf Z}} ) = x_h}$ for any ${h \in {\bf Z}}$). The only thing remaining is to construct the invariant measure ${\mu}$. In order to be consistent with (2), one must have

$\displaystyle \mu( \{ (x_n)_{n \in {\bf Z}}: x_{h_j} = \epsilon_j \forall 1 \leq j \leq k \} )$

$\displaystyle = p\!-\!\lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N 1_{f(n+h_1)=\epsilon_1} \dots 1_{f(n+h_k)=\epsilon_k}$

for any distinct integers ${h_1,\dots,h_k}$ and signs ${\epsilon_1,\dots,\epsilon_k}$. One can check that this defines a premeasure on the Boolean algebra of ${\{-1,+1\}^{\bf Z}}$ defined by cylinder sets, and the existence of ${\mu}$ then follows from the Hahn-Kolmogorov extension theorem (or the closely related Kolmogorov extension theorem). One can then check that the correspondence (2) holds, and that ${\mu}$ is translation-invariant; the latter comes from the translation invariance of the (Banach-)Cesàro averaging operation ${f \mapsto p\!-\!\lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N f(n)}$. A variant of this construction shows that the Furstenberg limit is unique up to equivalence if and only if all the limits appearing in (1) actually exist.
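One can see the finitary shadow of this construction numerically: replacing the Banach limit by a plain Cesàro average at a fixed finite ${N}$ (and using, purely for illustration, the Thue-Morse sign sequence as the input ${f}$), the empirical cylinder frequencies are already shift-invariant up to a telescoping ${O(1/N)}$ error. All names below are my own.

```python
def thue_morse_sign(n):
    """A concrete {-1,+1} test sequence: (-1)^(number of binary 1s of n)."""
    return -1 if bin(n).count("1") % 2 else 1

def cylinder_freq(f, N, pattern, offset=0):
    """Empirical Cesaro frequency of the cylinder set
    { x : x_{offset + j} = pattern[j] for each j }."""
    hits = sum(
        1 for n in range(1, N + 1)
        if all(f(n + offset + j) == eps for j, eps in enumerate(pattern))
    )
    return hits / N

N = 2000
diffs = [
    abs(cylinder_freq(thue_morse_sign, N, pat, 0)
        - cylinder_freq(thue_morse_sign, N, pat, 1))
    for pat in [(1,), (1, -1), (-1, -1, 1)]
]
print(diffs)   # each entry is at most 1/N: approximate translation invariance
```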

One can obtain a slightly tighter correspondence by using a smoother average than the Cesàro average. For instance, one can use the logarithmic Cesàro averages ${\lim_{N \rightarrow \infty} \frac{1}{\log N}\sum_{n=1}^N \frac{f(n)}{n}}$ in place of the Cesàro average ${\lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N f(n)}$, thus one replaces (2) by

$\displaystyle \int_X F(T^{h_1} x) \dots F(T^{h_k} x)\ d\mu(x)$

$\displaystyle = p\!-\!\lim_{N \rightarrow \infty} \frac{1}{\log N} \sum_{n=1}^N \frac{f(n+h_1) \dots f(n+h_k)}{n}.$

Whenever the Cesàro average of a bounded sequence ${f: {\bf N} \rightarrow {\bf R}}$ exists, the logarithmic Cesàro average also exists and equals it. Thus, a Furstenberg limit constructed using logarithmic Banach-Cesàro averaging still obeys (1) for all ${h_1,\dots,h_k}$ when the right-hand side limit exists, but also obeys the more general assertion

$\displaystyle \int_X F(T^{h_1} x) \dots F(T^{h_k} x)\ d\mu(x)$

$\displaystyle = \lim_{N \rightarrow \infty} \frac{1}{\log N} \sum_{n=1}^N \frac{f(n+h_1) \dots f(n+h_k)}{n}$

whenever the limit of the right-hand side exists.
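The distinction is not vacuous: logarithmic averaging is strictly more powerful. A classical example (my choice of illustration) is ${f(n) = \cos(\log n)}$, whose Cesàro averages oscillate forever while its logarithmic averages tend to zero; the value of ${N}$ below is chosen near a peak of the Cesàro oscillation.

```python
import math

def cesaro(f, N):
    """Plain Cesaro average (1/N) sum_{n <= N} f(n)."""
    return sum(f(n) for n in range(1, N + 1)) / N

def log_avg(f, N):
    """Logarithmic average (1/log N) sum_{n <= N} f(n)/n."""
    return sum(f(n) / n for n in range(1, N + 1)) / math.log(N)

f = lambda n: math.cos(math.log(n))
N = 629000   # log N is close to pi/4 modulo 2*pi, near a Cesaro oscillation peak
print(cesaro(f, N))    # about 0.7: the Cesaro averages do not converge
print(log_avg(f, N))   # small: the logarithmic averages tend to 0
```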

In a recent paper of Frantzikinakis, the Furstenberg limits of the Liouville function ${\lambda}$ (with logarithmic averaging) were studied. Some (but not all) of the known facts and conjectures about the Liouville function can be interpreted in the Furstenberg limit. For instance, in a recent breakthrough result of Matomäki and Radziwiłł (discussed previously here), it was shown that the Liouville function exhibited cancellation on short intervals in the sense that

$\displaystyle \lim_{H \rightarrow \infty} \limsup_{X \rightarrow \infty} \frac{1}{X} \int_X^{2X} \frac{1}{H} |\sum_{x \leq n \leq x+H} \lambda(n)|\ dx = 0.$

In terms of Furstenberg limits of the Liouville function, this assertion is equivalent to the assertion that

$\displaystyle \lim_{H \rightarrow \infty} \int_X |\frac{1}{H} \sum_{h=1}^H F(T^h x)|\ d\mu(x) = 0$

for all Furstenberg limits ${T \circlearrowright (X,\mu), F}$ of Liouville (including those without logarithmic averaging). Invoking the mean ergodic theorem (discussed in this previous post), this assertion is in turn equivalent to the observable ${F}$ that corresponds to the Liouville function being orthogonal to the invariant factor ${L^\infty(X,\mu)^{\bf Z} = \{ g \in L^\infty(X,\mu): g \circ T = g \}}$ of ${X}$; equivalently, the first Gowers-Host-Kra seminorm ${\|F\|_{U^1(X)}}$ of ${F}$ (as defined for instance in this previous post) vanishes. The Chowla conjecture, which asserts that

$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \lambda(n+h_1) \dots \lambda(n+h_k) = 0$

for all distinct integers ${h_1,\dots,h_k}$, is equivalent to the assertion that all the Furstenberg limits of Liouville are equivalent to the Bernoulli system (${\{-1,+1\}^{\bf Z}}$ with the product measure arising from the uniform distribution on ${\{-1,+1\}}$, with the shift ${T}$ and observable ${F}$ as before). Similarly, the logarithmically averaged Chowla conjecture

$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{\log N} \sum_{n=1}^N \frac{\lambda(n+h_1) \dots \lambda(n+h_k)}{n} = 0$

is equivalent to the assertion that all the Furstenberg limits of Liouville with logarithmic averaging are equivalent to the Bernoulli system. Recently, I was able to prove the two-point version

$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{\log N} \sum_{n=1}^N \frac{\lambda(n) \lambda(n+h)}{n} = 0 \ \ \ \ \ (3)$

of the logarithmically averaged Chowla conjecture, for any non-zero integer ${h}$; this is equivalent to the perfect strong mixing property

$\displaystyle \int_X F(x) F(T^h x)\ d\mu(x) = 0$

for any Furstenberg limit of Liouville with logarithmic averaging, and any ${h \neq 0}$.
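One can watch (3) numerically, at least in a finite-range illustration with my own helper names: sieving the Liouville function up to ${2 \times 10^5}$ and forming the logarithmically averaged correlation at shift ${h=1}$ already gives a small value.

```python
import math

def liouville_upto(N):
    """Liouville lambda(n) = (-1)^Omega(n) for n <= N, via a smallest-prime-factor sieve."""
    spf = list(range(N + 1))
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0, 1] + [0] * (N - 1)
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]   # Omega(n) = Omega(n / spf(n)) + 1
    return lam

N = 200000
lam = liouville_upto(N + 1)
corr = sum(lam[n] * lam[n + 1] / n for n in range(1, N + 1)) / math.log(N)
print(corr)   # small, consistent with the logarithmically averaged Chowla conjecture
```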

The situation is more delicate with regards to the Sarnak conjecture, which is equivalent to the assertion that

$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \lambda(n) f(n) = 0$

for any zero-entropy sequence ${f: {\bf N} \rightarrow {\bf R}}$ (see this previous blog post for more discussion). Morally speaking, this conjecture should be equivalent to the assertion that any Furstenberg limit of Liouville is disjoint from any zero entropy system, but I was not able to formally establish an implication in either direction due to some technical issues regarding the fact that the Furstenberg limit does not directly control long-range correlations, only short-range ones. (There are however ergodic theoretic interpretations of the Sarnak conjecture that involve the notion of generic points; see this paper of El Abdalaoui, Lemańczyk, and de la Rue.) But the situation is currently better with the logarithmically averaged Sarnak conjecture

$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{\log N} \sum_{n=1}^N \frac{\lambda(n) f(n)}{n} = 0,$

as I was able to show that this conjecture was equivalent to the logarithmically averaged Chowla conjecture, and hence to all Furstenberg limits of Liouville with logarithmic averaging being Bernoulli; I also showed the conjecture was equivalent to local Gowers uniformity of the Liouville function, which is in turn equivalent to the function ${F}$ having all Gowers-Host-Kra seminorms vanishing in every Furstenberg limit with logarithmic averaging. In this recent paper of Frantzikinakis, this analysis was taken further, showing that the logarithmically averaged Chowla and Sarnak conjectures were in fact equivalent to the much milder seeming assertion that all Furstenberg limits with logarithmic averaging were ergodic.

Actually, the logarithmically averaged Furstenberg limits have more structure than just a ${{\bf Z}}$-action on a measure preserving system ${(X,\mu)}$ with a single observable ${F}$. Let ${Aff_+({\bf Z})}$ denote the semigroup of affine maps ${n \mapsto an+b}$ on the integers with ${a,b \in {\bf Z}}$ and ${a}$ positive. Also, let ${\hat {\bf Z}}$ denote the profinite integers (the inverse limit of the cyclic groups ${{\bf Z}/q{\bf Z}}$). Observe that ${Aff_+({\bf Z})}$ acts on ${\hat {\bf Z}}$ by taking the inverse limit of the obvious actions of ${Aff_+({\bf Z})}$ on ${{\bf Z}/q{\bf Z}}$.

Proposition 1 (Enriched logarithmically averaged Furstenberg limit of Liouville) Let ${p\!-\!\lim}$ be a Banach limit. Then there exists a probability space ${(X,\mu)}$ with an action ${\phi \mapsto T^\phi}$ of the affine semigroup ${Aff_+({\bf Z})}$, as well as measurable functions ${F: X \rightarrow \{-1,+1\}}$ and ${M: X \rightarrow \hat {\bf Z}}$, with the following properties:

• (i) (Affine Furstenberg limit) For any ${\phi_1,\dots,\phi_k \in Aff_+({\bf Z})}$, and any congruence class ${a\ (q)}$, one has

$\displaystyle p\!-\!\lim_{N \rightarrow \infty} \frac{1}{\log N} \sum_{n=1}^N \frac{\lambda(\phi_1(n)) \dots \lambda(\phi_k(n)) 1_{n = a\ (q)}}{n}$

$\displaystyle = \int_X F( T^{\phi_1}(x) ) \dots F( T^{\phi_k}(x) ) 1_{M(x) = a\ (q)}\ d\mu(x).$

• (ii) (Equivariance of ${M}$) For any ${\phi \in Aff_+({\bf Z})}$, one has

$\displaystyle M( T^\phi(x) ) = \phi( M(x) )$

for ${\mu}$-almost every ${x \in X}$.

• (iii) (Multiplicativity at fixed primes) For any prime ${p}$, one has

$\displaystyle F( T^{p\cdot} x ) = - F(x)$

for ${\mu}$-almost every ${x \in X}$, where ${p \cdot \in Aff_+({\bf Z})}$ is the dilation map ${n \mapsto pn}$.

• (iv) (Measure pushforward) If ${\phi \in Aff_+({\bf Z})}$ is of the form ${\phi(n) = an+b}$ and ${S_\phi \subset X}$ is the set ${S_\phi = \{ x \in X: M(x) \in \phi(\hat {\bf Z}) \}}$, then the pushforward ${T^\phi_* \mu}$ of ${\mu}$ by ${T^\phi}$ is equal to ${a \mu\downharpoonright_{S_\phi}}$, that is to say one has

$\displaystyle \mu( (T^\phi)^{-1}(E) ) = a \mu( E \cap S_\phi )$

for every measurable ${E \subset X}$.

Note that ${{\bf Z}}$ can be viewed as the subgroup of ${Aff_+({\bf Z})}$ consisting of the translations ${n \mapsto n + b}$. If one only keeps the ${{\bf Z}}$-portion of the ${Aff_+({\bf Z})}$ action and forgets the rest (as well as the function ${M}$) then the action becomes measure-preserving, and we recover an ordinary Furstenberg limit with logarithmic averaging. However, the additional structure here can be quite useful; for instance, one can transfer the proof of (3) to this setting, which we sketch below the fold, after proving the proposition.

The observable ${M}$, roughly speaking, means that points ${x}$ in the Furstenberg limit ${X}$ constructed by this proposition are still “virtual integers” in the sense that one can meaningfully compute the residue class of ${x}$ modulo any natural number modulus ${q}$, by first applying ${M}$ and then reducing mod ${q}$. The action of ${Aff_+({\bf Z})}$ means that one can also meaningfully multiply ${x}$ by any natural number, and translate it by any integer. As with other applications of the correspondence principle, the main advantage of moving to this more “virtual” setting is that one now acquires a probability measure ${\mu}$, so that the tools of ergodic theory can be readily applied.
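Property (iii) is just the finitary fact that ${\lambda}$ is completely multiplicative with ${\lambda(p) = -1}$, i.e. ${\lambda(pn) = -\lambda(n)}$ for every prime ${p}$, transferred to the limit object. A quick sieve check (helper names mine):

```python
def liouville_upto(N):
    """Liouville lambda(n) = (-1)^Omega(n), via smallest prime factors."""
    spf = list(range(N + 1))
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0, 1] + [0] * (N - 1)
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]
    return lam

N = 10000
lam = liouville_upto(5 * N)
ok = all(lam[p * n] == -lam[n] for p in (2, 3, 5) for n in range(1, N + 1))
print(ok)   # True: lambda(p n) = -lambda(n) for p = 2, 3, 5 and all n <= 10000
```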

Given a random variable ${X}$ that takes on only finitely many values, we can define its Shannon entropy by the formula

$\displaystyle H(X) := \sum_x \mathbf{P}(X=x) \log \frac{1}{\mathbf{P}(X=x)}$

with the convention that ${0 \log \frac{1}{0} = 0}$. (In some texts, one uses the logarithm to base ${2}$ rather than the natural logarithm, but the choice of base will not be relevant for this discussion.) This is clearly a nonnegative quantity. Given two random variables ${X,Y}$ taking on finitely many values, the joint variable ${(X,Y)}$ is also a random variable taking on finitely many values, and also has an entropy ${H(X,Y)}$. It obeys the Shannon inequalities

$\displaystyle H(X), H(Y) \leq H(X,Y) \leq H(X) + H(Y)$

so we can define some further nonnegative quantities, the mutual information

$\displaystyle I(X:Y) := H(X) + H(Y) - H(X,Y)$

and the conditional entropies

$\displaystyle H(X|Y) := H(X,Y) - H(Y); \quad H(Y|X) := H(X,Y) - H(X).$

More generally, given three random variables ${X,Y,Z}$, one can define the conditional mutual information

$\displaystyle I(X:Y|Z) := H(X|Z) + H(Y|Z) - H(X,Y|Z)$

and the final of the Shannon entropy inequalities asserts that this quantity is also non-negative.

The mutual information ${I(X:Y)}$ is a measure of the extent to which ${X}$ and ${Y}$ fail to be independent; indeed, it is not difficult to show that ${I(X:Y)}$ vanishes if and only if ${X}$ and ${Y}$ are independent. Similarly, ${I(X:Y|Z)}$ vanishes if and only if ${X}$ and ${Y}$ are conditionally independent relative to ${Z}$. At the other extreme, ${H(X|Y)}$ is a measure of the extent to which ${X}$ fails to depend on ${Y}$; indeed, it is not difficult to show that ${H(X|Y)=0}$ if and only if ${X}$ is determined by ${Y}$ in the sense that there is a deterministic function ${f}$ such that ${X = f(Y)}$. In a related vein, if ${X}$ and ${X'}$ are equivalent in the sense that there are deterministic functional relationships ${X = f(X')}$, ${X' = g(X)}$ between the two variables, then ${X}$ is interchangeable with ${X'}$ for the purposes of computing the above quantities, thus for instance ${H(X) = H(X')}$, ${H(X,Y) = H(X',Y)}$, ${I(X:Y) = I(X':Y)}$, ${I(X:Y|Z) = I(X':Y|Z)}$, etc..
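All of these quantities are finite sums over the joint distribution, so they are easy to compute directly. The sketch below (function names mine) encodes a joint law as a dictionary from outcome tuples to probabilities, and checks the two vanishing criteria just mentioned on a toy example.

```python
import math

def H(pmf):
    """Shannon entropy (natural log) of a law given as {outcome: probability}."""
    return sum(p * math.log(1 / p) for p in pmf.values() if p > 0)

def marginal(joint, idx):
    """Law of the selected coordinates of a joint law on tuples."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_info(joint, a, b):
    """I(A:B) = H(A) + H(B) - H(A,B)."""
    return H(marginal(joint, a)) + H(marginal(joint, b)) - H(marginal(joint, a + b))

def cond_entropy(joint, a, b):
    """H(A|B) = H(A,B) - H(B)."""
    return H(marginal(joint, a + b)) - H(marginal(joint, b))

# Toy law: X a uniform bit, Y = X (so Y determines X), W an independent uniform bit.
joint = {(x, x, w): 0.25 for x in (0, 1) for w in (0, 1)}
print(cond_entropy(joint, (0,), (1,)))   # 0: X is determined by Y
print(mutual_info(joint, (0,), (2,)))    # 0: X and W are independent
print(mutual_info(joint, (0,), (1,)))    # log 2: X and Y are identical
```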

One can get some initial intuition for these information-theoretic quantities by specialising to a simple situation in which all the random variables ${X}$ being considered come from restricting a single random (and uniformly distributed) boolean function ${F: \Omega \rightarrow \{0,1\}}$ on a given finite domain ${\Omega}$ to some subset ${A}$ of ${\Omega}$:

$\displaystyle X = F \downharpoonright_A.$

In this case, ${X}$ has the law of a random uniformly distributed boolean function from ${A}$ to ${\{0,1\}}$, and the entropy here can be easily computed to be ${|A| \log 2}$, where ${|A|}$ denotes the cardinality of ${A}$. If ${X}$ is the restriction of ${F}$ to ${A}$, and ${Y}$ is the restriction of ${F}$ to ${B}$, then the joint variable ${(X,Y)}$ is equivalent to the restriction of ${F}$ to ${A \cup B}$. If one discards the normalisation factor ${\log 2}$, one then obtains the following dictionary between entropy and the combinatorics of finite sets:

• Random variables ${X,Y,Z}$ ↔ Finite sets ${A,B,C}$
• Entropy ${H(X)}$ ↔ Cardinality ${|A|}$
• Joint variable ${(X,Y)}$ ↔ Union ${A \cup B}$
• Mutual information ${I(X:Y)}$ ↔ Intersection cardinality ${|A \cap B|}$
• Conditional entropy ${H(X|Y)}$ ↔ Set difference cardinality ${|A \backslash B|}$
• Conditional mutual information ${I(X:Y|Z)}$ ↔ ${|(A \cap B) \backslash C|}$
• ${X, Y}$ independent ↔ ${A, B}$ disjoint
• ${X}$ determined by ${Y}$ ↔ ${A}$ a subset of ${B}$
• ${X,Y}$ conditionally independent relative to ${Z}$ ↔ ${A \cap B \subset C}$

Every (linear) inequality or identity about entropy (and related quantities, such as mutual information) then specialises to a combinatorial inequality or identity about finite sets that is easily verified. For instance, the Shannon inequality ${H(X,Y) \leq H(X)+H(Y)}$ becomes the union bound ${|A \cup B| \leq |A| + |B|}$, and the definition of mutual information becomes the inclusion-exclusion formula

$\displaystyle |A \cap B| = |A| + |B| - |A \cup B|.$

For a more advanced example, consider the data processing inequality that asserts that if ${X, Z}$ are conditionally independent relative to ${Y}$, then ${I(X:Z) \leq I(X:Y)}$. Specialising to sets, this now says that if ${A, C}$ are disjoint outside of ${B}$, then ${|A \cap C| \leq |A \cap B|}$; this can be made apparent by considering the corresponding Venn diagram. This dictionary also suggests how to prove the data processing inequality using the existing Shannon inequalities. Firstly, if ${A}$ and ${C}$ are not necessarily disjoint outside of ${B}$, then a consideration of Venn diagrams gives the more general inequality

$\displaystyle |A \cap C| \leq |A \cap B| + |(A \cap C) \backslash B|$

and a further inspection of the diagram then reveals the more precise identity

$\displaystyle |A \cap C| + |(A \cap B) \backslash C| = |A \cap B| + |(A \cap C) \backslash B|.$

Using the dictionary in the reverse direction, one is then led to conjecture the identity

$\displaystyle I( X : Z ) + I( X : Y | Z ) = I( X : Y ) + I( X : Z | Y )$

which (together with non-negativity of conditional mutual information) implies the data processing inequality, and this identity is in turn easily established from the definition of mutual information.
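As a sanity check (an illustrative computation of mine, not part of the argument), one can verify this identity on a randomly generated joint distribution of three variables, writing all mutual informations in terms of joint entropies:

```python
# Check the identity I(X:Z) + I(X:Y|Z) = I(X:Y) + I(X:Z|Y) on a random
# joint distribution of (X, Y, Z), each variable taking values in {0,1,2}.
import itertools, math, random

random.seed(0)
support = list(itertools.product(range(3), repeat=3))
weights = [random.random() for _ in support]
total = sum(weights)
p = {xyz: w / total for xyz, w in zip(support, weights)}

def H(coords):
    # Shannon entropy (nats) of the marginal on the given coordinate indices.
    marg = {}
    for xyz, q in p.items():
        key = tuple(xyz[i] for i in coords)
        marg[key] = marg.get(key, 0.0) + q
    return -sum(q * math.log(q) for q in marg.values() if q > 0)

# coordinates: 0 = X, 1 = Y, 2 = Z
I_XZ = H((0,)) + H((2,)) - H((0, 2))
I_XY = H((0,)) + H((1,)) - H((0, 1))
I_XY_given_Z = H((0, 2)) + H((1, 2)) - H((0, 1, 2)) - H((2,))
I_XZ_given_Y = H((0, 1)) + H((2, 1)) - H((0, 2, 1)) - H((1,))

assert abs((I_XZ + I_XY_given_Z) - (I_XY + I_XZ_given_Y)) < 1e-9
```

Indeed, both sides expand to ${H(X) + H(Y,Z) - H(X,Y,Z)}$, which makes the identity transparent.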

On the other hand, not every assertion about cardinalities of sets generalises to entropies of random variables that do not arise from restricting random boolean functions to sets. For instance, a basic property of sets is that disjointness from a given set ${C}$ is preserved by unions:

$\displaystyle A \cap C = B \cap C = \emptyset \implies (A \cup B) \cap C = \emptyset.$

Indeed, one has the union bound

$\displaystyle |(A \cup B) \cap C| \leq |A \cap C| + |B \cap C|. \ \ \ \ \ (1)$

Applying the dictionary in the reverse direction, one might now conjecture that if ${X}$ was independent of ${Z}$ and ${Y}$ was independent of ${Z}$, then ${(X,Y)}$ should also be independent of ${Z}$, and furthermore that

$\displaystyle I(X,Y:Z) \leq I(X:Z) + I(Y:Z)$

but these statements are well known to be false (for reasons related to pairwise independence of random variables being strictly weaker than joint independence). For a concrete counterexample, one can take ${X, Y \in {\bf F}_2}$ to be independent, uniformly distributed random elements of the finite field ${{\bf F}_2}$ of two elements, and take ${Z := X+Y}$ to be the sum of these two field elements. One can easily check that each of ${X}$ and ${Y}$ is separately independent of ${Z}$, but the joint variable ${(X,Y)}$ determines ${Z}$ and thus is not independent of ${Z}$.
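This counterexample is easily checked by direct computation; here is a short script doing so (my own illustration), measuring mutual information in bits:

```python
# The XOR counterexample: X, Y independent uniform on F_2, Z = X + Y mod 2.
# Each of X, Y is independent of Z, but (X,Y) determines Z.
import itertools, math

p = {}
for x, y in itertools.product((0, 1), repeat=2):
    p[(x, y, (x + y) % 2)] = 0.25   # coordinates: (X, Y, Z)

def H(coords):
    # Shannon entropy (bits) of the marginal on the given coordinates.
    marg = {}
    for xyz, q in p.items():
        key = tuple(xyz[i] for i in coords)
        marg[key] = marg.get(key, 0.0) + q
    return -sum(q * math.log(q, 2) for q in marg.values() if q > 0)

I_XZ = H((0,)) + H((2,)) - H((0, 2))        # = 0: X independent of Z
I_YZ = H((1,)) + H((2,)) - H((1, 2))        # = 0: Y independent of Z
I_XY_Z = H((0, 1)) + H((2,)) - H((0, 1, 2)) # = 1 bit: (X,Y) determines Z
assert abs(I_XZ) < 1e-9 and abs(I_YZ) < 1e-9
assert abs(I_XY_Z - 1.0) < 1e-9
```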

From the inclusion-exclusion identities

$\displaystyle |A \cap C| = |A| + |C| - |A \cup C|$

$\displaystyle |B \cap C| = |B| + |C| - |B \cup C|$

$\displaystyle |(A \cup B) \cap C| = |A \cup B| + |C| - |A \cup B \cup C|$

$\displaystyle |A \cap B \cap C| = |A| + |B| + |C| - |A \cup B| - |B \cup C| - |A \cup C|$

$\displaystyle + |A \cup B \cup C|$

one can check that (1) is equivalent to the trivial lower bound ${|A \cap B \cap C| \geq 0}$. The basic issue here is that in the dictionary between entropy and combinatorics, there is no satisfactory entropy analogue of the notion of a triple intersection ${A \cap B \cap C}$. (Even the double intersection ${A \cap B}$ only exists information theoretically in a “virtual” sense; the mutual information ${I(X:Y)}$ allows one to “compute the entropy” of this “intersection”, but does not actually describe this intersection itself as a random variable.)

However, this issue only arises with three or more variables; it is not too difficult to show that the only linear equalities and inequalities that are necessarily obeyed by the information-theoretic quantities ${H(X), H(Y), H(X,Y), I(X:Y), H(X|Y), H(Y|X)}$ associated to just two variables ${X,Y}$ are those that are also necessarily obeyed by their combinatorial analogues ${|A|, |B|, |A \cup B|, |A \cap B|, |A \backslash B|, |B \backslash A|}$. (See for instance the Venn diagram at the Wikipedia page for mutual information for a pictorial summary of this statement.)

One can work with a larger class of special cases of Shannon entropy by working with random linear functions rather than random boolean functions. Namely, let ${S}$ be some finite-dimensional vector space over a finite field ${{\mathbf F}}$, and let ${f: S \rightarrow {\mathbf F}}$ be a random linear functional on ${S}$, selected uniformly among all such functionals. Every subspace ${U}$ of ${S}$ then gives rise to a random variable ${X = X_U: U \rightarrow {\mathbf F}}$ formed by restricting ${f}$ to ${U}$. This random variable is also distributed uniformly amongst all linear functionals on ${U}$, and its entropy can be easily computed to be ${\mathrm{dim}(U) \log |\mathbf{F}|}$. Given two random variables ${X, Y}$ formed by restricting ${f}$ to ${U, V}$ respectively, the joint random variable ${(X,Y)}$ determines the random linear function ${f}$ on the union ${U \cup V}$ of the two spaces, and thus by linearity on the Minkowski sum ${U+V}$ as well; thus ${(X,Y)}$ is equivalent to the restriction of ${f}$ to ${U+V}$. In particular, ${H(X,Y) = \mathrm{dim}(U+V) \log |\mathbf{F}|}$. This implies that ${I(X:Y) = \mathrm{dim}(U \cap V) \log |\mathbf{F}|}$ and also ${H(X|Y) = \mathrm{dim}(\pi_V(U)) \log |\mathbf{F}|}$, where ${\pi_V: S \rightarrow S/V}$ is the quotient map. After discarding the normalising constant ${\log |\mathbf{F}|}$, this leads to the following dictionary between information theoretic quantities and linear algebra quantities, analogous to the previous dictionary:

Random variables ${X,Y,Z}$ | Subspaces ${U,V,W}$
Entropy ${H(X)}$ | Dimension ${\mathrm{dim}(U)}$
Joint variable ${(X,Y)}$ | Sum ${U+V}$
Mutual information ${I(X:Y)}$ | Dimension of intersection ${\mathrm{dim}(U \cap V)}$
Conditional entropy ${H(X|Y)}$ | Dimension of projection ${\mathrm{dim}(\pi_V(U))}$
Conditional mutual information ${I(X:Y|Z)}$ | ${\mathrm{dim}(\pi_W(U) \cap \pi_W(V))}$
${X, Y}$ independent | ${U, V}$ transverse (${U \cap V = \{0\}}$)
${X}$ determined by ${Y}$ | ${U}$ a subspace of ${V}$
${X,Y}$ conditionally independent relative to ${Z}$ | ${\pi_W(U)}$, ${\pi_W(V)}$ transverse

The combinatorial dictionary can be regarded as a specialisation of the linear algebra dictionary, by taking ${S}$ to be the vector space ${\mathbf{F}_2^\Omega}$ over the finite field ${\mathbf{F}_2}$ of two elements, and only considering those subspaces ${U}$ that are coordinate subspaces ${U = {\bf F}_2^A}$ associated to various subsets ${A}$ of ${\Omega}$.

As before, every linear inequality or equality that is valid for the information-theoretic quantities discussed above, is automatically valid for the linear algebra counterparts for subspaces of a vector space over a finite field by applying the above specialisation (and dividing out by the normalising factor of ${\log |\mathbf{F}|}$). In fact, the requirement that the field be finite can be removed by applying the compactness theorem from logic (or one of its relatives, such as Los’s theorem on ultraproducts, as done in this previous blog post).

The linear algebra model captures more of the features of Shannon entropy than the combinatorial model. For instance, in contrast to the combinatorial case, it is possible in the linear algebra setting to have subspaces ${U,V,W}$ such that ${U}$ and ${V}$ are separately transverse to ${W}$, but their sum ${U+V}$ is not; for instance, in a two-dimensional vector space ${{\bf F}^2}$, one can take ${U,V,W}$ to be the one-dimensional subspaces spanned by ${(0,1)}$, ${(1,0)}$, and ${(1,1)}$ respectively. Note that this is essentially the same counterexample from before (which took ${{\bf F}}$ to be the field of two elements). Indeed, one can show that any necessarily true linear inequality or equality involving the dimensions of three subspaces ${U,V,W}$ (as well as the various other quantities on the above table) will also be necessarily true when applied to the entropies of three discrete random variables ${X,Y,Z}$ (as well as the corresponding quantities on the above table).
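This subspace counterexample can be checked by brute force over ${{\bf F}_2}$; the following small script (an illustration of mine) enumerates the subspaces as explicit point sets:

```python
# Transversality counterexample in F_2^2: U = <(0,1)>, V = <(1,0)>, W = <(1,1)>.
# U and V are each transverse to W, but U + V is not.
def span(v):
    # the F_2-span of a single nonzero vector: {0, v}
    return {(0, 0), v}

U, V, W = span((0, 1)), span((1, 0)), span((1, 1))

def plus(S, T):
    # Minkowski sum of two subspaces of F_2^2 (addition mod 2)
    return {((s[0] + t[0]) % 2, (s[1] + t[1]) % 2) for s in S for t in T}

assert U & W == {(0, 0)}     # U transverse to W
assert V & W == {(0, 0)}     # V transverse to W
assert plus(U, V) & W == W   # but U + V is all of F_2^2, so it contains W
```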

However, the linear algebra model does not completely capture the subtleties of Shannon entropy once one works with four or more variables (or subspaces). This was first observed by Ingleton, who established the dimensional inequality

$\displaystyle \mathrm{dim}(U \cap V) \leq \mathrm{dim}(\pi_W(U) \cap \pi_W(V)) + \mathrm{dim}(\pi_X(U) \cap \pi_X(V)) + \mathrm{dim}(W \cap X) \ \ \ \ \ (2)$

for any subspaces ${U,V,W,X}$. This is easiest to see when the three terms on the right-hand side vanish; then ${\pi_W(U), \pi_W(V)}$ are transverse, which implies that ${U\cap V \subset W}$; similarly ${U \cap V \subset X}$. But ${W}$ and ${X}$ are transverse, and this clearly implies that ${U}$ and ${V}$ are themselves transverse. To prove the general case of Ingleton’s inequality, one can define ${Y := U \cap V}$ and use ${\mathrm{dim}(\pi_W(Y)) \leq \mathrm{dim}(\pi_W(U) \cap \pi_W(V))}$ (and similarly for ${X}$ instead of ${W}$) to reduce to establishing the inequality

$\displaystyle \mathrm{dim}(Y) \leq \mathrm{dim}(\pi_W(Y)) + \mathrm{dim}(\pi_X(Y)) + \mathrm{dim}(W \cap X) \ \ \ \ \ (3)$

which can be rearranged using ${\mathrm{dim}(\pi_W(Y)) = \mathrm{dim}(Y) - \mathrm{dim}(W) + \mathrm{dim}(\pi_Y(W))}$ (and similarly for ${X}$ instead of ${W}$) and ${\mathrm{dim}(W \cap X) = \mathrm{dim}(W) + \mathrm{dim}(X) - \mathrm{dim}(W + X)}$ as

$\displaystyle \mathrm{dim}(W + X ) \leq \mathrm{dim}(\pi_Y(W)) + \mathrm{dim}(\pi_Y(X)) + \mathrm{dim}(Y)$

but this is clear since ${\mathrm{dim}(W + X ) \leq \mathrm{dim}(\pi_Y(W) + \pi_Y(X)) + \mathrm{dim}(Y)}$.

Returning to the entropy setting, the analogue

$\displaystyle H( V ) \leq H( V | Z ) + H(V | W ) + I(Z:W)$

of (3) is true (exercise!), but the analogue

$\displaystyle I(X:Y) \leq I(X:Y|Z) + I(X:Y|W) + I(Z:W) \ \ \ \ \ (4)$

of Ingleton’s inequality is false in general. Again, this is easiest to see when all the terms on the right-hand side vanish; then ${X,Y}$ are conditionally independent relative to ${Z}$, and relative to ${W}$, and ${Z}$ and ${W}$ are independent, and the claim (4) would then be asserting that ${X}$ and ${Y}$ are independent. While there is no linear counterexample to this statement, there are simple non-linear ones: for instance, one can take ${Z,W}$ to be independent uniform variables from ${\mathbf{F}_2}$, and take ${X}$ and ${Y}$ to be (say) ${ZW}$ and ${(1-Z)(1-W)}$ respectively (thus ${X, Y}$ are the indicators of the events ${Z=W=1}$ and ${Z=W=0}$ respectively). Once one conditions on either ${Z}$ or ${W}$, one of ${X,Y}$ has positive conditional entropy and the other has zero entropy, and so ${X, Y}$ are conditionally independent relative to either ${Z}$ or ${W}$; also, ${Z}$ and ${W}$ are independent of each other. But ${X}$ and ${Y}$ are not independent of each other (they cannot be simultaneously equal to ${1}$). Somehow, the feature of the linear algebra model that is not present in general is that in the linear algebra setting, every pair of subspaces ${U, V}$ has a well-defined intersection ${U \cap V}$ that is also a subspace, whereas for arbitrary random variables ${X, Y}$, there does not necessarily exist the analogue of an intersection, namely a “common information” random variable ${V}$ that has entropy ${I(X:Y)}$ and is determined both by ${X}$ and by ${Y}$.
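Here is a direct numerical check of this non-linear counterexample to (4) (again my own illustration, with entropies in bits):

```python
# Counterexample to the entropy Ingleton inequality (4):
# Z, W independent uniform bits; X = Z*W, Y = (1-Z)*(1-W).
import itertools, math

p = {}
for z, w in itertools.product((0, 1), repeat=2):
    x, y = z * w, (1 - z) * (1 - w)
    p[(x, y, z, w)] = 0.25   # coordinates: (X, Y, Z, W)

def H(coords):
    # Shannon entropy (bits) of the marginal on the given coordinates.
    marg = {}
    for v, q in p.items():
        key = tuple(v[i] for i in coords)
        marg[key] = marg.get(key, 0.0) + q
    return -sum(q * math.log(q, 2) for q in marg.values() if q > 0)

I_XY = H((0,)) + H((1,)) - H((0, 1))
I_XY_given_Z = H((0, 2)) + H((1, 2)) - H((0, 1, 2)) - H((2,))
I_XY_given_W = H((0, 3)) + H((1, 3)) - H((0, 1, 3)) - H((3,))
I_ZW = H((2,)) + H((3,)) - H((2, 3))

rhs = I_XY_given_Z + I_XY_given_W + I_ZW
assert rhs < 1e-9    # every term on the right-hand side of (4) vanishes
assert I_XY > 0.1    # but X and Y are correlated, so (4) fails
```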

I do not know if there is any simpler model of Shannon entropy that captures all the inequalities available for four variables. One significant complication is that there exist some information inequalities in this setting that are not of Shannon type, such as the Zhang-Yeung inequality

$\displaystyle I(X:Y) \leq 2 I(X:Y|Z) + I(X:Z|Y) + I(Y:Z|X)$

$\displaystyle + I(X:Y|W) + I(Z:W).$

One can however still use these simpler models of Shannon entropy to guess arguments that would work for general random variables. An example of this comes from my paper on the logarithmically averaged Chowla conjecture, in which I showed among other things that

$\displaystyle |\sum_{n \leq x} \frac{\lambda(n) \lambda(n+1)}{n}| \leq \varepsilon \log x \ \ \ \ \ (5)$

whenever ${x}$ was sufficiently large depending on ${\varepsilon>0}$, where ${\lambda}$ is the Liouville function. The information-theoretic part of the proof was as follows. Given some intermediate scale ${H}$ between ${1}$ and ${x}$, one can form certain random variables ${X_H, Y_H}$. The random variable ${X_H}$ is a sign pattern of the form ${(\lambda(n+1),\dots,\lambda(n+H))}$ where ${n}$ is a random number chosen from ${1}$ to ${x}$ (with logarithmic weighting). The random variable ${Y_H}$ is the tuple ${(n \hbox{ mod } p)_{p \sim \varepsilon^2 H}}$ of reductions of ${n}$ to primes ${p}$ comparable to ${\varepsilon^2 H}$. Roughly speaking, what was implicitly shown in the paper (after using the multiplicativity of ${\lambda}$, the circle method, and the Matomaki-Radziwill theorem on short averages of multiplicative functions) is that if the inequality (5) fails, then there was a lower bound

$\displaystyle I( X_H : Y_H ) \gg \varepsilon^7 \frac{H}{\log H}$

on the mutual information between ${X_H}$ and ${Y_H}$. From translation invariance, this also gives the more general lower bound

$\displaystyle I( X_{H_0,H} : Y_H ) \gg \varepsilon^7 \frac{H}{\log H} \ \ \ \ \ (6)$

for any ${H_0}$, where ${X_{H_0,H}}$ denotes the shifted sign pattern ${(\lambda(n+H_0+1),\dots,\lambda(n+H_0+H))}$. On the other hand, one had the entropy bounds

$\displaystyle H( X_{H_0,H} ), H(Y_H) \ll H$

and from concatenating sign patterns one could see that ${X_{H_0,H+H'}}$ is equivalent to the joint random variable ${(X_{H_0,H}, X_{H_0+H,H'})}$ for any ${H_0,H,H'}$. Applying these facts and using an “entropy decrement” argument, I was able to obtain a contradiction once ${H}$ was allowed to become sufficiently large compared to ${\varepsilon}$, but the bound was quite weak (coming ultimately from the unboundedness of ${\sum_{\log H_- \leq j \leq \log H_+} \frac{1}{j \log j}}$ as the interval ${[H_-,H_+]}$ of values of ${H}$ under consideration becomes large), something of the order of ${H \sim \exp\exp\exp(\varepsilon^{-7})}$; the quantity ${H}$ needs at various junctures to be less than a small power of ${\log x}$, so the relationship between ${x}$ and ${\varepsilon}$ becomes essentially quadruple exponential in nature, ${x \sim \exp\exp\exp\exp(\varepsilon^{-7})}$. The basic strategy was to observe that the lower bound (6) causes some slowdown in the growth rate ${H(X_{kH})/kH}$ of the mean entropy, in that this quantity decreased by ${\gg \frac{\varepsilon^7}{\log H}}$ as ${k}$ increased from ${1}$ to ${\log H}$, basically by dividing ${X_{kH}}$ into ${k}$ components ${X_{jH, H}}$, ${j=0,\dots,k-1}$ and observing from (6) that each of these shares a bit of common information with the same variable ${Y_H}$. This is relatively clear when one works in a set model, in which ${Y_H}$ is modeled by a set ${B_H}$ of size ${O(H)}$, and ${X_{H_0,H}}$ is modeled by a set of the form

$\displaystyle X_{H_0,H} = \bigcup_{H_0 < h \leq H_0+H} A_h$

for various sets ${A_h}$ of size ${O(1)}$ (also there is some translation symmetry that maps ${A_h}$ to a shift ${A_{h+1}}$ while preserving all of the ${B_H}$).

However, on considering the set model recently, I realised that one can be a little more efficient by exploiting the fact (basically the Chinese remainder theorem) that the random variables ${Y_H}$ are basically jointly independent as ${H}$ ranges over dyadic values that are much smaller than ${\log x}$, which in the set model corresponds to the ${B_H}$ all being disjoint. One can then establish a variant

$\displaystyle I( X_{H_0,H} : Y_H | (Y_{H'})_{H' < H}) \gg \varepsilon^7 \frac{H}{\log H} \ \ \ \ \ (7)$

of (6), which in the set model roughly speaking asserts that each ${B_H}$ claims a portion of ${\bigcup_{H_0 < h \leq H_0+H} A_h}$ of cardinality ${\gg \varepsilon^7 \frac{H}{\log H}}$ that is not claimed by previous choices of ${B_H}$. This leads to a more efficient contradiction (relying on the unboundedness of ${\sum_{\log H_- \leq j \leq \log H_+} \frac{1}{j}}$ rather than ${\sum_{\log H_- \leq j \leq \log H_+} \frac{1}{j \log j}}$) that looks like it removes one order of exponential growth, thus the relationship between ${x}$ and ${\varepsilon}$ is now ${x \sim \exp\exp\exp(\varepsilon^{-7})}$. Returning to the entropy model, one can use (7) and Shannon inequalities to establish an inequality of the form

$\displaystyle \frac{1}{2H} H(X_{2H} | (Y_{H'})_{H' \leq 2H}) \leq \frac{1}{H} H(X_{H} | (Y_{H'})_{H' \leq H}) - \frac{c \varepsilon^7}{\log H}$

for a small constant ${c>0}$, which on iterating and using the boundedness of ${\frac{1}{H} H(X_{H} | (Y_{H'})_{H' \leq H})}$ gives the claim. (A modification of this analysis, at least on the level of the back of the envelope calculation, suggests that the Matomaki-Radziwill theorem is needed only for ranges ${H}$ greater than ${\exp( (\log\log x)^{\varepsilon^{7}} )}$ or so, although at this range the theorem is not significantly simpler than the general case).
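The gain of one exponential can be seen in the following toy computation (a back-of-the-envelope sketch of mine, with an arbitrary starting point ${a}$): for the sum of ${1/j}$ over ${j \in [a,b]}$ to exceed ${1}$ one only needs ${b}$ comparable to ${e \cdot a}$, whereas for ${1/(j \log j)}$ one needs ${b}$ comparable to ${a^e}$, one exponential larger.

```python
# Compare how far the range [a, b] must extend before the partial sums of
# 1/j, versus 1/(j log j), accumulate to 1.
import math

def range_needed(term, a, target=1.0):
    # smallest b such that sum of term(j) over a <= j <= b exceeds target
    s, j = 0.0, a
    while s < target:
        s += term(j)
        j += 1
    return j

a = 100
b1 = range_needed(lambda j: 1.0 / j, a)
b2 = range_needed(lambda j: 1.0 / (j * math.log(j)), a)
assert b1 < 3 * a        # harmonic sum: b1 is comparable to e*a ~ 272
assert b2 > a ** 2       # 1/(j log j): b2 is comparable to a**e, far larger
```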

Klaus Roth, who made fundamental contributions to analytic number theory, died this Tuesday, aged 90.

I never met or communicated with Roth personally, but was certainly influenced by his work; he wrote relatively few papers, but they tended to have outsized impact. For instance, he was one of the key people (together with Bombieri) to work on simplifying and generalising the large sieve, taking it from the technically formidable original formulation of Linnik and Rényi to the clean and general almost orthogonality principle that we have today (discussed for instance in these lecture notes of mine). The paper of Roth that had the most impact on my own personal work was his three-page paper proving what is now known as Roth’s theorem on arithmetic progressions:

Theorem 1 (Roth’s theorem on arithmetic progressions) Let ${A}$ be a set of natural numbers of positive upper density (thus ${\limsup_{N \rightarrow\infty} |A \cap \{1,\dots,N\}|/N > 0}$). Then ${A}$ contains infinitely many arithmetic progressions ${a,a+r,a+2r}$ of length three (with ${r}$ non-zero of course).

At the heart of Roth’s elegant argument was the following (surprising at the time) dichotomy: if ${A}$ had some moderately large density within some arithmetic progression ${P}$, either one could use Fourier-analytic methods to detect the presence of an arithmetic progression of length three inside ${A \cap P}$, or else one could locate a long subprogression ${P'}$ of ${P}$ on which ${A}$ had increased density. Iterating this dichotomy by an argument now known as the density increment argument, one eventually obtains Roth’s theorem, no matter which side of the dichotomy actually holds. This argument (and the many descendants of it), based on various “dichotomies between structure and randomness”, became essential in many other results of this type, most famously perhaps in Szemerédi’s proof of his celebrated theorem on arithmetic progressions that generalised Roth’s theorem to progressions of arbitrary length. More recently, my recent work on the Chowla and Elliott conjectures that was a crucial component of the solution of the Erdös discrepancy problem, relies on an entropy decrement argument which was directly inspired by the density increment argument of Roth.

The Erdös discrepancy problem also is connected with another well known theorem of Roth:

Theorem 2 (Roth’s discrepancy theorem for arithmetic progressions) Let ${f(1),\dots,f(n)}$ be a sequence in ${\{-1,+1\}}$. Then there exists an arithmetic progression ${a+r, a+2r, \dots, a+kr}$ in ${\{1,\dots,n\}}$ with ${r}$ positive such that

$\displaystyle |\sum_{j=1}^k f(a+jr)| \geq c n^{1/4}$

for an absolute constant ${c>0}$.

In fact, Roth proved a stronger estimate regarding mean square discrepancy, which I am not writing down here; as with Roth’s theorem on arithmetic progressions, his proof was short and Fourier-analytic in nature (although non-Fourier-analytic proofs have since been found, for instance the semidefinite programming proof of Lovasz). The exponent ${1/4}$ is known to be sharp (a result of Matousek and Spencer).

As a particular corollary of the above theorem, for an infinite sequence ${f(1), f(2), \dots}$ of signs, the sums ${|\sum_{j=1}^k f(a+jr)|}$ are unbounded in ${a,r,k}$. The Erdös discrepancy problem asks whether the same statement holds when ${a}$ is restricted to be zero. (Roth also established discrepancy theorems for other sets, such as rectangles, which will not be discussed here.)
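To get a concrete feel for Theorem 2, here is a small brute-force computation (my own illustration, not from the post): even the maximally “balanced” alternating sequence ${f(j) = (-1)^j}$ has discrepancy ${n/2}$ along the progression of even numbers, far above the ${n^{1/4}}$ guarantee.

```python
# Brute-force search over all arithmetic progressions a+r, a+2r, ... in [1, n]
# for the largest partial sum of a +-1 sequence.
def max_ap_discrepancy(f):
    n = len(f)
    best = 0
    for r in range(1, n + 1):          # common difference r > 0
        for a in range(0, n):          # progression a+r, a+2r, ... within [1, n]
            s, j = 0, 1
            while a + j * r <= n:
                s += f[a + j * r - 1]  # f is 0-indexed, representing f(1..n)
                best = max(best, abs(s))
                j += 1
    return best

n = 64
alternating = [(-1) ** j for j in range(1, n + 1)]   # f(j) = (-1)^j
disc = max_ap_discrepancy(alternating)
# the progression 2, 4, ..., 64 consists entirely of +1's, so disc = n/2 = 32
assert disc == 32
assert disc >= n ** 0.25
```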

Finally, one has to mention Roth’s most famous result, cited for instance in his Fields medal citation:

Theorem 3 (Roth’s theorem on Diophantine approximation) Let ${\alpha}$ be an irrational algebraic number. Then for any ${\varepsilon > 0}$ there is a quantity ${c_{\alpha,\varepsilon}}$ such that

$\displaystyle |\alpha - \frac{a}{q}| > \frac{c_{\alpha,\varepsilon}}{q^{2+\varepsilon}}$

for every rational number ${\frac{a}{q}}$.

From the Dirichlet approximation theorem (or from the theory of continued fractions) we know that the exponent ${2+\varepsilon}$ in the denominator cannot be reduced to ${2}$ or below. A classical and easy theorem of Liouville gives the claim with the exponent ${2+\varepsilon}$ replaced by the degree of the algebraic number ${\alpha}$; work of Thue and Siegel reduced this exponent, but Roth was the one who obtained the near-optimal result. An important point is that the constant ${c_{\alpha,\varepsilon}}$ is ineffective – it is a major open problem in Diophantine approximation to produce any bound significantly stronger than Liouville’s theorem with effective constants. This is because the proof of Roth’s theorem does not exclude any single rational ${a/q}$ from being close to ${\alpha}$, but instead very ingeniously shows that one cannot have two different rationals ${a/q}$, ${a'/q'}$ that are unusually close to ${\alpha}$, even when the denominators ${q,q'}$ are very different in size. (I refer to this sort of argument as a “dueling conspiracies” argument; they are strangely prevalent throughout analytic number theory.)
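One can see both halves of this picture numerically for the quadratic irrational ${\sqrt{2}}$ (where everything is classical and effective, since Liouville’s theorem already gives the exponent ${2}$): the continued fraction convergents ${p/q}$ of ${\sqrt{2} = [1;2,2,2,\dots]}$ come within ${1/q^2}$ of ${\sqrt{2}}$, so the exponent ${2}$ cannot be improved, yet no convergent gets closer than ${1/(4q^2)}$. The following is a quick illustrative check of mine:

```python
# Convergents p/q of sqrt(2) via the recurrence for [1; 2, 2, 2, ...]:
# p_{n+1} = 2 p_n + p_{n-1}, q_{n+1} = 2 q_n + q_{n-1}.
import math

alpha = math.sqrt(2)
p_prev, q_prev, p, q = 1, 0, 1, 1
for _ in range(10):
    p_prev, q_prev, p, q = p, q, 2 * p + p_prev, 2 * q + q_prev
    err = abs(alpha - p / q)
    assert err < 1.0 / q ** 2        # within 1/q^2: Dirichlet-quality approximation
    assert err > 1.0 / (4 * q ** 2)  # but no closer than c/q^2, with c = 1/4 here
```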

The most fundamental unsolved problem in complexity theory is undoubtedly the P=NP problem, which asks (roughly speaking) whether a problem which can be solved by a non-deterministic polynomial-time (NP) algorithm, can also be solved by a deterministic polynomial-time (P) algorithm. The general belief is that ${P \neq NP}$, i.e. there exist problems which can be solved by non-deterministic polynomial-time algorithms but not by deterministic polynomial-time algorithms.

One reason why the ${P \neq NP}$ question is so difficult to resolve is that a certain generalisation of this question has an affirmative answer in some cases, and a negative answer in other cases. More precisely, if we give all the algorithms access to an oracle, then for one choice ${A}$ of this oracle, all the problems that are solvable by non-deterministic polynomial-time algorithms that call ${A}$ (${NP^A}$), can also be solved by a deterministic polynomial-time algorithm that calls ${A}$ (${P^A}$), thus ${P^A = NP^A}$; but for another choice ${B}$ of this oracle, there exist problems solvable by non-deterministic polynomial-time algorithms that call ${B}$, which cannot be solved by a deterministic polynomial-time algorithm that calls ${B}$, thus ${P^B \neq NP^B}$. One particular consequence of this result (which is due to Baker, Gill, and Solovay) is that there cannot be any relativisable proof of either ${P=NP}$ or ${P \neq NP}$, where “relativisable” means that the proof would also work without any changes in the presence of an oracle.

The Baker-Gill-Solovay result was quite surprising, but the idea of the proof turns out to be rather simple. To get an oracle ${A}$ such that ${P^A=NP^A}$, one basically sets ${A}$ to be a powerful simulator that can simulate non-deterministic machines (and, furthermore, can also simulate itself); it turns out that any PSPACE-complete oracle would suffice for this task. To get an oracle ${B}$ for which ${P^B \neq NP^B}$, one has to be a bit sneakier, setting ${B}$ to be a query device for a sparse set of random (or high-complexity) strings, which are too complex to be guessed at by any deterministic polynomial-time algorithm.

Unfortunately, the simple idea of the proof can be obscured by various technical details (e.g. using Turing machines to define ${P}$ and ${NP}$ precisely), which require a certain amount of time to properly absorb. To help myself try to understand this result better, I have decided to give a sort of “allegory” of the proof, based around a (rather contrived) story about various students trying to pass a multiple choice test, which avoids all the technical details but still conveys the basic ideas of the argument. This allegory was primarily for my own benefit, but I thought it might also be of interest to some readers here (and also has some tangential relation to the proto-polymath project of deterministically finding primes), so I reproduce it below.

This week I am in Bremen, where the 50th International Mathematical Olympiad is being held.  A number of former Olympians (Béla Bollobás, Tim Gowers, Laci Lovasz, Stas Smirnov, Jean-Christophe Yoccoz, and myself) were invited to give a short talk (20 minutes in length) at the celebratory event for this anniversary.  I chose to talk on a topic I have spoken about several times before, on “Structure and randomness in the prime numbers“.  Given the time constraints, there was a limit as to how much substance I could put into the talk; but I try to describe, in very general terms, what we know about the primes, and what we suspect to be true, but cannot yet establish.  As I have mentioned in previous talks, the key problem is that we suspect the distribution of the primes to obey no significant patterns (other than “local” structure, such as having a strong tendency to be odd (which is local information at the 2 place), or obeying the prime number theorem (which is local information at the infinity place)), but we still do not have fully satisfactory tools for establishing the absence of a pattern. (This is in contrast with many types of Olympiad problems, where the key to solving a problem often lies in discovering the right pattern or structure in the problem to exploit.)

The PDF of the talk is here; I decided to try out the Beamer LaTeX package for a change.

A remarkable phenomenon in probability theory is that of universality – that many seemingly unrelated probability distributions, which ostensibly involve large numbers of unknown parameters, can end up converging to a universal law that may only depend on a small handful of parameters. One of the most famous examples of the universality phenomenon is the central limit theorem; another rich source of examples comes from random matrix theory, which is one of the areas of my own research.

Analogous universality phenomena also show up in empirical distributions – the distributions of a statistic ${X}$ from a large population of “real-world” objects. Examples include Benford’s law, Zipf’s law, and the Pareto distribution (of which the Pareto principle or 80-20 law is a special case). These laws govern the asymptotic distribution of many statistics ${X}$ which

• (i) take values as positive numbers;
• (ii) range over many different orders of magnitude;
• (iii) arise from a complicated combination of largely independent factors (with different samples of ${X}$ arising from different independent factors); and
• (iv) have not been artificially rounded, truncated, or otherwise constrained in size.

Examples here include the population of countries or cities, the frequency of occurrence of words in a language, the mass of astronomical objects, or the net worth of individuals or corporations. The laws are then as follows:

• Benford’s law: For ${k=1,\ldots,9}$, the proportion of ${X}$ whose first digit is ${k}$ is approximately ${\log_{10} \frac{k+1}{k}}$. Thus, for instance, ${X}$ should have a first digit of ${1}$ about ${30\%}$ of the time, but a first digit of ${9}$ only about ${5\%}$ of the time.
• Zipf’s law: The ${n^{th}}$ largest value of ${X}$ should obey an approximate power law, i.e. it should be approximately ${C n^{-\alpha}}$ for the first few ${n=1,2,3,\ldots}$ and some parameters ${C, \alpha > 0}$. In many cases, ${\alpha}$ is close to ${1}$.
• Pareto distribution: The proportion of ${X}$ with at least ${m}$ digits (before the decimal point), where ${m}$ is above the median number of digits, should obey an approximate exponential law, i.e. be approximately of the form ${c 10^{-m/\alpha}}$ for some ${c, \alpha > 0}$. Again, in many cases ${\alpha}$ is close to ${1}$.
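For reference, the Benford proportions quoted above are simply ${\log_{10} \frac{k+1}{k}}$; a short computation (an illustration of mine) recovers the ${30\%}$ and ${5\%}$ figures and confirms that the nine proportions sum to ${1}$:

```python
# Benford's law in base 10: proportion of first digit k is log10((k+1)/k).
import math

props = {k: math.log10((k + 1) / k) for k in range(1, 10)}
assert abs(sum(props.values()) - 1.0) < 1e-9   # the proportions sum to 1
assert abs(props[1] - 0.301) < 0.001           # first digit 1: about 30%
assert abs(props[9] - 0.0458) < 0.001          # first digit 9: about 5%
```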

Benford’s law and the Pareto distribution are stated here for base ${10}$, which is what we are most familiar with, but the laws hold for any base (after replacing all the occurrences of ${10}$ in the above laws with the new base, of course). The laws tend to break down if the hypotheses (i)-(iv) are dropped. For instance, if the statistic ${X}$ concentrates around its mean (as opposed to being spread over many orders of magnitude), then the normal distribution tends to be a much better model (as indicated by such results as the central limit theorem). If instead the various samples of the statistics are highly correlated with each other, then other laws can arise (for instance, the eigenvalues of a random matrix, as well as many empirically observed matrices, are correlated to each other, with the behaviour of the largest eigenvalues being governed by laws such as the Tracy-Widom law rather than Zipf’s law, and the bulk distribution being governed by laws such as the semicircular law rather than the normal or Pareto distributions).

To illustrate these laws, let us take as a data set the populations of 235 countries and regions of the world in 2007 (using the CIA world factbook); I have put the raw data here. This is a relatively small sample (cf. my previous post), but is already enough to discern these laws in action. For instance, here is how the data set tracks with Benford’s law (rounded to three significant figures):

• ${k}=1$ (59 countries, ${25.1\%}$; Benford prediction 71, ${30.1\%}$): Angola, Anguilla, Aruba, Bangladesh, Belgium, Botswana, Brazil, Burkina Faso, Cambodia, Cameroon, Chad, Chile, China, Christmas Island, Cook Islands, Cuba, Czech Republic, Ecuador, Estonia, Gabon, (The) Gambia, Greece, Guam, Guatemala, Guinea-Bissau, India, Japan, Kazakhstan, Kiribati, Malawi, Mali, Mauritius, Mexico, (Federated States of) Micronesia, Nauru, Netherlands, Niger, Nigeria, Niue, Pakistan, Portugal, Russia, Rwanda, Saint Lucia, Saint Vincent and the Grenadines, Senegal, Serbia, Swaziland, Syria, Timor-Leste (East-Timor), Tokelau, Tonga, Trinidad and Tobago, Tunisia, Tuvalu, (U.S.) Virgin Islands, Wallis and Futuna, Zambia, Zimbabwe
• ${k}=2$ (44 countries, ${18.7\%}$; Benford prediction 41, ${17.6\%}$): Armenia, Australia, Barbados, British Virgin Islands, Cote d’Ivoire, French Polynesia, Ghana, Gibraltar, Indonesia, Iraq, Jamaica, (North) Korea, Kosovo, Kuwait, Latvia, Lesotho, Macedonia, Madagascar, Malaysia, Mayotte, Mongolia, Mozambique, Namibia, Nepal, Netherlands Antilles, New Caledonia, Norfolk Island, Palau, Peru, Romania, Saint Martin, Samoa, San Marino, Sao Tome and Principe, Saudi Arabia, Slovenia, Sri Lanka, Svalbard, Taiwan, Turks and Caicos Islands, Uzbekistan, Vanuatu, Venezuela, Yemen
• ${k}=3$ (29 countries, ${12.3\%}$; Benford prediction 29, ${12.5\%}$): Afghanistan, Albania, Algeria, (The) Bahamas, Belize, Brunei, Canada, (Rep. of the) Congo, Falkland Islands (Islas Malvinas), Iceland, Kenya, Lebanon, Liberia, Liechtenstein, Lithuania, Maldives, Mauritania, Monaco, Morocco, Oman, (Occupied) Palestinian Territory, Panama, Poland, Puerto Rico, Saint Kitts and Nevis, Uganda, United States of America, Uruguay, Western Sahara
• ${k}=4$ (27 countries, ${11.4\%}$; Benford prediction 22, ${9.7\%}$): Argentina, Bosnia and Herzegovina, Burma (Myanmar), Cape Verde, Cayman Islands, Central African Republic, Colombia, Costa Rica, Croatia, Faroe Islands, Georgia, Ireland, (South) Korea, Luxembourg, Malta, Moldova, New Zealand, Norway, Pitcairn Islands, Singapore, South Africa, Spain, Sudan, Suriname, Tanzania, Ukraine, United Arab Emirates
• ${k}=5$ (16 countries, ${6.8\%}$; Benford prediction 19, ${7.9\%}$): (Macao SAR) China, Cocos Islands, Denmark, Djibouti, Eritrea, Finland, Greenland, Italy, Kyrgyzstan, Montserrat, Nicaragua, Papua New Guinea, Slovakia, Solomon Islands, Togo, Turkmenistan
• ${k}=6$ (17 countries, ${7.2\%}$; Benford prediction 16, ${6.7\%}$): American Samoa, Bermuda, Bhutan, (Dem. Rep. of the) Congo, Equatorial Guinea, France, Guernsey, Iran, Jordan, Laos, Libya, Marshall Islands, Montenegro, Paraguay, Sierra Leone, Thailand, United Kingdom
• ${k}=7$ (17 countries, ${7.2\%}$; Benford prediction 14, ${5.8\%}$): Bahrain, Bulgaria, (Hong Kong SAR) China, Comoros, Cyprus, Dominica, El Salvador, Guyana, Honduras, Israel, (Isle of) Man, Saint Barthelemy, Saint Helena, Saint Pierre and Miquelon, Switzerland, Tajikistan, Turkey
• ${k}=8$ (15 countries, ${6.4\%}$; Benford prediction 12, ${5.1\%}$): Andorra, Antigua and Barbuda, Austria, Azerbaijan, Benin, Burundi, Egypt, Ethiopia, Germany, Haiti, Holy See (Vatican City), Northern Mariana Islands, Qatar, Seychelles, Vietnam
• ${k}=9$ (11 countries, ${4.5\%}$; Benford prediction 11, ${4.6\%}$): Belarus, Bolivia, Dominican Republic, Fiji, Grenada, Guinea, Hungary, Jersey, Philippines, Somalia, Sweden

Here is how the same data tracks Zipf’s law for the first twenty values of ${n}$, with the parameters ${C \approx 1.28 \times 10^9}$ and ${\alpha \approx 1.03}$ (selected by log-linear regression), again rounding to three significant figures:

Below, each row lists the rank ${n}$, the country, its population, the Zipf prediction, and the deviation from the prediction.

• ${n=1}$: China; 1,330,000,000; prediction 1,280,000,000; deviation ${+4.1\%}$.
• ${n=2}$: India; 1,150,000,000; prediction 626,000,000; deviation ${+83.5\%}$.
• ${n=3}$: USA; 304,000,000; prediction 412,000,000; deviation ${-26.3\%}$.
• ${n=4}$: Indonesia; 238,000,000; prediction 307,000,000; deviation ${-22.5\%}$.
• ${n=5}$: Brazil; 196,000,000; prediction 244,000,000; deviation ${-19.4\%}$.
• ${n=6}$: Pakistan; 173,000,000; prediction 202,000,000; deviation ${-14.4\%}$.
• ${n=7}$: Bangladesh; 154,000,000; prediction 172,000,000; deviation ${-10.9\%}$.
• ${n=8}$: Nigeria; 146,000,000; prediction 150,000,000; deviation ${-2.6\%}$.
• ${n=9}$: Russia; 141,000,000; prediction 133,000,000; deviation ${+5.8\%}$.
• ${n=10}$: Japan; 128,000,000; prediction 120,000,000; deviation ${+6.7\%}$.
• ${n=11}$: Mexico; 110,000,000; prediction 108,000,000; deviation ${+1.7\%}$.
• ${n=12}$: Philippines; 96,100,000; prediction 98,900,000; deviation ${-2.9\%}$.
• ${n=13}$: Vietnam; 86,100,000; prediction 91,100,000; deviation ${-5.4\%}$.
• ${n=14}$: Ethiopia; 82,600,000; prediction 84,400,000; deviation ${-2.1\%}$.
• ${n=15}$: Germany; 82,400,000; prediction 78,600,000; deviation ${+4.8\%}$.
• ${n=16}$: Egypt; 81,700,000; prediction 73,500,000; deviation ${+11.1\%}$.
• ${n=17}$: Turkey; 71,900,000; prediction 69,100,000; deviation ${+4.1\%}$.
• ${n=18}$: Congo; 66,500,000; prediction 65,100,000; deviation ${+2.2\%}$.
• ${n=19}$: Iran; 65,900,000; prediction 61,600,000; deviation ${+6.9\%}$.
• ${n=20}$: Thailand; 65,500,000; prediction 58,400,000; deviation ${+12.1\%}$.

As one sees, Zipf’s law is not particularly precise at the extreme edge of the statistics (when ${n}$ is very small), but becomes reasonably accurate (given the small sample size, and given that we are fitting twenty data points using only two parameters) for moderate sizes of ${n}$.
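For concreteness, the Zipf predictions in the table can be reproduced from just the two fitted parameters. A sketch (the constants below are the rounded values quoted above, so the last digit of a prediction may differ slightly from the table):

```python
C = 1.28e9    # fitted constant from the log-linear regression above
ALPHA = 1.03  # fitted exponent

def zipf_prediction(n: int) -> float:
    """Zipf's law: predicted population of the n-th most populous country."""
    return C * n ** (-ALPHA)

for n, actual in [(1, 1_330_000_000), (3, 304_000_000), (10, 128_000_000)]:
    pred = zipf_prediction(n)
    print(f"n={n}: predicted {pred:.3g}, actual {actual:.3g}, "
          f"deviation {100 * (actual - pred) / pred:+.1f}%")
```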

This data set has too few scales in base ${10}$ to illustrate the Pareto distribution effectively: over half of the country populations have either seven or eight digits in that base. But if we instead work in base ${2}$, then country populations range over a decent number of scales (the majority of countries have population between ${2^{23}}$ and ${2^{32}}$), and we begin to see the Pareto law emerge. In the table below, ${m}$ is the number of binary digits, and the best-fit parameters are ${\alpha \approx 1.18}$ and ${c \approx 1.7 \times 2^{26} / 235}$:

Below, each row lists ${m}$, the countries whose population has at least ${m}$ binary digits, the observed number of such countries, and the Pareto prediction.

• ${m=31}$: China, India.  Number: 2; Pareto prediction: 1.
• ${m=30}$: all of the above.  Number: 2; Pareto prediction: 2.
• ${m=29}$: all of the above, together with the United States of America.  Number: 3; Pareto prediction: 5.
• ${m=28}$: all of the above, together with Indonesia, Brazil, Pakistan, Bangladesh, Nigeria, Russia.  Number: 9; Pareto prediction: 8.
• ${m=27}$: all of the above, together with Japan, Mexico, Philippines, Vietnam, Ethiopia, Germany, Egypt, Turkey.  Number: 17; Pareto prediction: 15.
• ${m=26}$: all of the above, together with (Dem. Rep. of the) Congo, Iran, Thailand, France, United Kingdom, Italy, South Africa, (South) Korea, Burma (Myanmar), Ukraine, Colombia, Spain, Argentina, Sudan, Tanzania, Poland, Kenya, Morocco, Algeria.  Number: 36; Pareto prediction: 27.
• ${m=25}$: all of the above, together with Canada, Afghanistan, Uganda, Nepal, Peru, Iraq, Saudi Arabia, Uzbekistan, Venezuela, Malaysia, (North) Korea, Ghana, Yemen, Taiwan, Romania, Mozambique, Sri Lanka, Australia, Cote d’Ivoire, Madagascar, Syria, Cameroon.  Number: 58; Pareto prediction: 49.
• ${m=24}$: all of the above, together with Netherlands, Chile, Kazakhstan, Burkina Faso, Cambodia, Malawi, Ecuador, Niger, Guatemala, Senegal, Angola, Mali, Zambia, Cuba, Zimbabwe, Greece, Portugal, Belgium, Tunisia, Czech Republic, Rwanda, Serbia, Chad, Hungary, Guinea, Belarus, Somalia, Dominican Republic, Bolivia, Sweden, Haiti, Burundi, Benin.  Number: 91; Pareto prediction: 88.
• ${m=23}$: all of the above, together with Austria, Azerbaijan, Honduras, Switzerland, Bulgaria, Tajikistan, Israel, El Salvador, (Hong Kong SAR) China, Paraguay, Laos, Sierra Leone, Jordan, Libya, Papua New Guinea, Togo, Nicaragua, Eritrea, Denmark, Slovakia, Kyrgyzstan, Finland, Turkmenistan, Norway, Georgia, United Arab Emirates, Singapore, Bosnia and Herzegovina, Croatia, Central African Republic, Moldova, Costa Rica.  Number: 123; Pareto prediction: 159.

Thus, with each new scale, the number of countries introduced increases by a factor of a little less than ${2}$, on the average. This approximate doubling of countries with each new scale begins to falter at about the population ${2^{23}}$ (i.e. at around ${4}$ million), for the simple reason that one has begun to run out of countries. (Note that the median-population country in this set, Singapore, has a population with ${23}$ binary digits.)
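The doubling-per-scale heuristic can be sketched numerically. Assuming an idealised Pareto tail in which the number of countries with population at least ${x}$ is proportional to ${x^{-\alpha}}$ (the parameters below are illustrative, not the actual fit from the table), each step down one binary scale multiplies the count by ${2^\alpha}$, which is an exact doubling when ${\alpha = 1}$:

```python
def pareto_tail_count(total: float, alpha: float, x: float, x_min: float) -> float:
    """Idealised Pareto tail: expected number of items of size at least x,
    normalised so that all `total` items have size at least x_min."""
    return total * (x / x_min) ** (-alpha)

TOTAL, ALPHA = 235, 1.0  # illustrative values only
counts = [pareto_tail_count(TOTAL, ALPHA, 2 ** m, x_min=2 ** 22) for m in range(22, 31)]
ratios = [counts[i] / counts[i + 1] for i in range(len(counts) - 1)]
print([round(c, 1) for c in counts])
print(ratios)  # every ratio equals 2**ALPHA, i.e. an exact doubling here
```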

These laws are not merely interesting statistical curiosities; for instance, Benford’s law is often used to help detect fraudulent statistics (such as those arising from accounting fraud), as many such statistics are invented by choosing digits at random, and will therefore deviate significantly from Benford’s law. (This is nicely discussed in Robert Matthews’ New Scientist article “The power of one“; this article can also be found on the web at a number of other places.) In a somewhat analogous spirit, Zipf’s law and the Pareto distribution can be used to mathematically test various models of real-world systems (e.g. formation of astronomical objects, accumulation of wealth, population growth of countries, etc.), without necessarily having to fit all the parameters of that model with the actual data.

Being empirically observed phenomena rather than abstract mathematical facts, Benford’s law, Zipf’s law, and the Pareto distribution cannot be “proved” the same way a mathematical theorem can be proved. However, one can still support these laws mathematically in a number of ways, for instance showing how these laws are compatible with each other, and with other plausible hypotheses on the source of the data. In this post I would like to describe a number of ways (both technical and non-technical) in which one can do this; these arguments do not fully explain these laws (in particular, the empirical fact that the exponent ${\alpha}$ in Zipf’s law or the Pareto distribution is often close to ${1}$ is still quite a mysterious phenomenon), and do not always have the same universal range of applicability as these laws seem to have, but I hope that they do demonstrate that these laws are not completely arbitrary, and ought to have a satisfactory basis of mathematical support. Read the rest of this entry »

One further paper in this stream: László Erdős, José Ramírez, Benjamin Schlein, Van Vu, Horng-Tzer Yau, and myself have just uploaded to the arXiv the paper “Bulk universality for Wigner hermitian matrices with subexponential decay“, submitted to Mathematical Research Letters.  (Incidentally, this is the first six-author paper I have been involved in, not counting the polymath projects of course, though I have had a number of five-author papers.)

This short paper (9 pages) combines the machinery from two recent papers on the universality conjecture for the eigenvalue spacings in the bulk for Wigner random matrices (see my earlier blog post for more discussion).  On the one hand, the paper of Erdős-Ramírez-Schlein-Yau established this conjecture under the additional hypothesis that the distribution of the individual entries obeyed some smoothness and exponential decay conditions.  Meanwhile, the paper of Van Vu and myself (which I discussed in my earlier blog post) established the conjecture under a somewhat different set of hypotheses, namely that the distribution of the individual entries obeyed some moment conditions (in particular, the third moment had to vanish), a support condition (the entries had to have real part supported in at least three points), and an exponential decay condition.

After comparing our results, the six of us realised that our methods could in fact be combined rather easily to obtain a stronger result, establishing the universality conjecture assuming only an exponential decay (or more precisely, subexponential decay) bound ${\Bbb P}(|x_{\ell k}| > t ) \ll \exp( - t^c )$ on the coefficients; thus all regularity, moment, and support conditions have been eliminated.  (There is one catch, namely that we can no longer control a single spacing $\lambda_{i+1}-\lambda_i$ for a single fixed i, but must now average over all $1 \leq i \leq n$ before recovering the universality.  This is an annoying technical issue but it may be resolvable in the future with further refinements to the method.)

I can describe the main idea behind the unified approach here.  One can arrange the Wigner matrices in a hierarchy, from most structured to least structured:

• The most structured (or special) ensemble is the Gaussian Unitary Ensemble (GUE), in which the coefficients are gaussian. Here, one has very explicit and tractable formulae for the eigenvalue distributions, gap spacing, etc.
• The next most structured ensemble of Wigner matrices are the Gaussian-divisible or Johansson matrices, which are matrices H of the form $H = e^{-t/2} \hat H + (1-e^{-t})^{1/2} V$, where $\hat H$ is another Wigner matrix, V is a GUE matrix independent of $\hat H$, and $t > 0$ is a fixed parameter independent of n.  Here, one still has quite explicit (though not quite as tractable) formulae for the joint eigenvalue distribution and related statistics.  Note that the limiting case $t = \infty$ is GUE.
• After this, one has the Ornstein-Uhlenbeck-evolved matrices, which are also of the form $H = e^{-t/2} \hat H + (1-e^{-t})^{1/2} V$, but now $t = n^{-1+\delta}$ decays at a power rate with n, rather than being comparable to 1.  Explicit formulae still exist for these matrices, but extracting universality out of this is hard work (and occupies the bulk of the paper of Erdős-Ramírez-Schlein-Yau).
• Finally, one has arbitrary Wigner matrices, which can be viewed as the t=0 limit of the above Ornstein-Uhlenbeck process.
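To make the hierarchy concrete, here is a small numerical sketch (using numpy, and taking $\hat H$ itself to be GUE purely for illustration) of the interpolation $H = e^{-t/2} \hat H + (1-e^{-t})^{1/2} V$. The only point being checked is the elementary one that the construction stays Hermitian for every value of t, so that its spectrum remains real:

```python
import numpy as np

def gue(n: int, rng: np.random.Generator) -> np.ndarray:
    """Sample an n x n GUE-type matrix: Hermitian with Gaussian entries."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

def ou_evolved(H0: np.ndarray, t: float, rng: np.random.Generator) -> np.ndarray:
    """The interpolated matrix e^{-t/2} H0 + (1 - e^{-t})^{1/2} V."""
    V = gue(H0.shape[0], rng)
    return np.exp(-t / 2) * H0 + np.sqrt(1 - np.exp(-t)) * V

rng = np.random.default_rng(0)
n = 200
H0 = gue(n, rng)                             # stand-in for an arbitrary Wigner matrix
H = ou_evolved(H0, t=n ** (-0.99), rng=rng)  # t = n^{-1+0.01}, as in the text
assert np.allclose(H, H.conj().T)            # Hermitian for every t
eigs = np.linalg.eigvalsh(H)                 # hence a real spectrum
print(eigs.shape)
```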

The arguments in the paper of Erdős-Ramírez-Schlein-Yau can be summarised as follows (I assume subexponential decay throughout this discussion):

1. (Structured case) The universality conjecture is true for Ornstein-Uhlenbeck-evolved matrices with $t = n^{-1+\delta}$ for any $0 < \delta \leq 1$.  (The case $1/4 < \delta \leq 1$ was treated in an earlier paper of Erdős-Ramírez-Schlein-Yau, while the case where t is comparable to 1 was treated by Johansson.)
2. (Matching) Every Wigner matrix with suitable smoothness conditions can be “matched” with an Ornstein-Uhlenbeck-evolved matrix, in the sense that the eigenvalue statistics for the two matrices are asymptotically identical.  (This is relatively easy due to the fact that $\delta$ can be taken arbitrarily close to zero.)
3. Combining 1. and 2. one obtains universality for all Wigner matrices obeying suitable smoothness conditions.

The arguments in the paper of Van and myself can be summarised as follows:

1. (Structured case) The universality conjecture is true for Johansson matrices, by the paper of Johansson.
2. (Matching) Every Wigner matrix with some moment and support conditions can be “matched” with a Johansson matrix, in the sense that the first four moments of the entries agree, and hence (by the Lindeberg strategy in our paper) have asymptotically identical statistics.
3. Combining 1. and 2. one obtains universality for all Wigner matrices obeying suitable moment and support conditions.
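The moment-matching in step 2 can be illustrated by a toy computation (a sketch; the dictionaries map values to probabilities, and the particular distributions are my own illustrative choices). A symmetric three-point distribution, consistent with the support condition mentioned earlier, can match the first four moments ${0, 1, 0, 3}$ of a standard Gaussian, whereas a random sign fails at the fourth moment:

```python
import math

def moment(dist: dict[float, float], k: int) -> float:
    """k-th moment of a discrete distribution given as {value: probability}."""
    return sum(p * x ** k for x, p in dist.items())

# Takes the values -sqrt(3), 0, sqrt(3) with probabilities 1/6, 2/3, 1/6.
three_point = {-math.sqrt(3): 1 / 6, 0.0: 2 / 3, math.sqrt(3): 1 / 6}
print([round(moment(three_point, k), 9) for k in range(1, 5)])  # matches 0, 1, 0, 3

# A random sign has fourth moment 1, not 3, so it fails to match.
sign = {-1.0: 1 / 2, 1.0: 1 / 2}
print([moment(sign, k) for k in range(1, 5)])
```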

What we realised is that by combining the hard part 1. of the paper of Erdős-Ramírez-Schlein-Yau with the hard part 2. of the paper of Van and myself, we can remove all regularity, moment, and support conditions.  Roughly speaking, the unified argument proceeds as follows:

1. (Structured case) By the arguments of Erdős-Ramírez-Schlein-Yau, the universality conjecture is true for Ornstein-Uhlenbeck-evolved matrices with $t = n^{-1+\delta}$ for any $0 < \delta \leq 1$.
2. (Matching) Every Wigner matrix $H$ can be “matched” with an Ornstein-Uhlenbeck-evolved matrix $e^{-t/2} H + (1-e^{-t})^{1/2} V$ for $t= n^{-1+0.01}$ (say), in the sense that the first four moments of the entries almost agree, which is enough (by the arguments of Van and myself) to show that these two matrices have asymptotically identical statistics on the average.
3. Combining 1. and 2. one obtains universality for the averaged statistics for all Wigner matrices.

The averaging should be removable, but this would require better convergence results to the semicircular law than are currently known (except with additional hypotheses, such as vanishing third moment).  The subexponential decay should also be relaxed to a condition of finiteness for some fixed moment ${\Bbb E} |x|^C$, but we did not pursue this direction in order to keep the paper short.

The AMS has just notified me that the book version of the first year of my blog, now retitled “Structure and Randomness: pages from year one of a mathematical blog“, is now available.  An official web page for this book has also been set up here, though it is fairly empty at present.  A (2MB) high-resolution PDF file of the cover can be found here.

I plan to start on converting this year’s blog posts to book form in January, and hopefully the process should be a little faster this time.  Given that my lecture notes on ergodic theory and on the Poincaré conjecture will form the bulk of that book, I have chosen the working title for that book to be “Poincaré’s legacies: pages from year two of a mathematical blog“.

One of the most important topological concepts in analysis is that of compactness (as discussed for instance in my Companion article on this topic).  There are various flavours of this concept, but let us focus on sequential compactness: a subset E of a topological space X is sequentially compact if every sequence in E has a convergent subsequence whose limit is also in E.  This property allows one to do many things with the set E.  For instance, it allows one to maximise a functional on E:

Proposition 1. (Existence of extremisers)  Let E be a non-empty sequentially compact subset of a topological space X, and let $F: E \to {\Bbb R}$ be a continuous function.  Then the supremum $\sup_{x \in E} F(x)$ is attained at at least one point $x_* \in E$, thus $F(x) \leq F(x_*)$ for all $x \in E$.  (In particular, this supremum is finite.)  Similarly for the infimum.

Proof. Let $-\infty < L \leq +\infty$ be the supremum $L := \sup_{x \in E} F(x)$.  By the definition of supremum (and the axiom of (countable) choice), one can find a sequence $x^{(n)}$ in E such that $F(x^{(n)}) \to L$.  By compactness, we can refine this sequence to a subsequence (which, by abuse of notation, we shall continue to call $x^{(n)}$) such that $x^{(n)}$ converges to a limit x in E.  Since we still have $F(x^{(n)}) \to L$, and F is continuous at x, we conclude that F(x)=L, and the claim for the supremum follows.  The claim for the infimum is similar.  $\Box$

Remark 1. An inspection of the argument shows that one can relax the continuity hypothesis on F somewhat: to attain the supremum, it suffices that F be upper semicontinuous, and to attain the infimum, it suffices that F be lower semicontinuous. $\diamond$

We thus see that sequential compactness is useful, among other things, for ensuring the existence of extremisers.  In finite-dimensional spaces (such as Euclidean spaces), compact sets are plentiful; indeed, the Heine-Borel theorem asserts that every closed and bounded set is compact.  However, once one moves to infinite-dimensional spaces, such as function spaces, then the Heine-Borel theorem fails quite dramatically; most of the closed and bounded sets one encounters in a topological vector space are non-compact, if one insists on using a reasonably “strong” topology.  This causes a difficulty in (among other things) calculus of variations, which is often concerned with finding extremisers to a functional $F: E \to {\Bbb R}$ on a subset E of an infinite-dimensional function space X.

In recent decades, mathematicians have found a number of ways to get around this difficulty.  One of them is to weaken the topology to recover compactness, taking advantage of such results as the Banach-Alaoglu theorem (or its sequential counterpart).  Of course, there is a tradeoff: weakening the topology makes compactness easier to attain, but makes the continuity of F harder to establish.  Nevertheless, if F enjoys enough “smoothing” or “cancellation” properties, one can hope to obtain continuity in the weak topology, allowing one to do things such as locate extremisers.  (The phenomenon that cancellation can lead to continuity in the weak topology is sometimes referred to as compensated compactness.)

Another option is to abandon trying to make all sequences have convergent subsequences, and settle just for extremising sequences to have convergent subsequences, as this would still be enough to retain Proposition 1.  Pursuing this line of thought leads to the Palais-Smale condition, which is a substitute for compactness in some calculus of variations situations.

But in many situations, one cannot weaken the topology to the point where the domain E becomes compact, without destroying the continuity (or semi-continuity) of F, though one can often at least find an intermediate topology (or metric) in which F is continuous, but for which E is still not quite compact.  Thus one can find sequences $x^{(n)}$ in E which do not have any subsequences that converge to a constant element $x \in E$, even in this intermediate metric.  (As we shall see shortly, one major cause of this failure of compactness is the existence of a non-trivial action of a non-compact group G on E; such a group action can cause compensated compactness or the Palais-Smale condition to fail also.)  Because of this, it is a priori conceivable that a continuous function F need not attain its supremum or infimum.

Nevertheless, even though a sequence $x^{(n)}$ does not have any subsequences that converge to a constant x, it may have a subsequence (which we also call $x^{(n)}$) which converges to some non-constant sequence $y^{(n)}$ (in the sense that the distance $d(x^{(n)},y^{(n)})$ between the two sequences converges to zero in this intermediate metric), where the approximating sequence $y^{(n)}$ is of a very structured form (e.g. “concentrating” to a point, or “travelling” off to infinity, or a superposition $y^{(n)} = \sum_j y^{(n)}_j$ of several concentrating or travelling profiles of this form).  This weaker form of compactness, in which superpositions of a certain type of profile completely describe all the failures (or defects) of compactness, is known as concentration compactness, and the decomposition $x^{(n)} \approx \sum_j y^{(n)}_j$ of the subsequence is known as the profile decomposition.  In many applications, it is a sufficiently good substitute for compactness that one can still do things like locate extremisers for functionals F, though one often has to make some additional assumptions on F to compensate for the more complicated nature of the compactness.  This phenomenon was systematically studied by P.L. Lions in the 80s, and found great application in calculus of variations and nonlinear elliptic PDE.  More recently, concentration compactness has been a crucial and powerful tool in the non-perturbative analysis of nonlinear dispersive PDE, in particular being used to locate “minimal energy blowup solutions” or “minimal mass blowup solutions” for such a PDE (analogously to how one can use calculus of variations to find minimal energy solutions to a nonlinear elliptic equation); see for instance this recent survey by Killip and Visan.

In typical applications, the concentration compactness phenomenon is exploited in moderately sophisticated function spaces (such as Sobolev spaces or Strichartz spaces), with the failure of traditional compactness being connected to a moderately complicated group G of symmetries (e.g. the group generated by translations and dilations).  Because of this, concentration compactness can appear to be a rather complicated and technical concept when it is first encountered.  In this note, I would like to illustrate concentration compactness in a simple toy setting, namely in the space $X = l^1({\Bbb Z})$ of absolutely summable sequences, with the uniform ($l^\infty$) metric playing the role of the intermediate metric, and the translation group ${\Bbb Z}$ playing the role of the symmetry group G.  This toy setting is significantly simpler than any model that one would actually use in practice [for instance, in most applications X is a Hilbert space], but hopefully it serves to illuminate this useful concept in a less technical fashion.
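As a closing numerical illustration of this toy setting (a sketch; the helper functions and the particular sequences are my own, with finitely supported sequences stored as dictionaries from indices to values): the sequence ${x^{(n)}}$ below is the superposition of a profile that stays put, a profile travelling off to infinity, and a residual which retains a fixed amount of ${l^1}$ mass but becomes arbitrarily small in the uniform metric, so that ${x^{(n)}}$ is completely described by its two profiles up to an ${l^\infty}$-small error:

```python
def shift(x: dict, h: int) -> dict:
    """Translate a finitely supported sequence by h (the group action)."""
    return {i + h: v for i, v in x.items()}

def sup_dist(x: dict, y: dict) -> float:
    """The l^infinity (uniform) distance between two such sequences."""
    support = set(x) | set(y)
    return max((abs(x.get(i, 0.0) - y.get(i, 0.0)) for i in support), default=0.0)

def l1_norm(x: dict) -> float:
    return sum(abs(v) for v in x.values())

profile_a = {0: 0.5}   # a profile that stays put
profile_b = {0: 0.25}  # a profile that will travel off to infinity

def x_seq(n: int) -> dict:
    """x^(n): the two profiles plus a residual of fixed l^1 mass spread thin."""
    out = dict(profile_a)
    out.update(shift(profile_b, n))
    for i in range(1, n + 1):  # mass 1/4 spread over n sites: sup norm 1/(4n)
        out[-i] = 0.25 / n
    return out

for n in (10, 100, 1000):
    approx = {**profile_a, **shift(profile_b, n)}  # the profile decomposition
    print(n, l1_norm(x_seq(n)), sup_dist(x_seq(n), approx))
```

Note that the ${l^1}$ norm stays near 1 while the ${l^\infty}$ distance to the two-profile superposition decays like ${1/(4n)}$: the residual mass "disperses" and is invisible in the intermediate metric.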