You are currently browsing the category archive for the ‘expository’ category.

This is another sequel to a recent post in which I showed the Riemann zeta function {\zeta} can be locally approximated by a polynomial, in the sense that for randomly chosen {t \in [T,2T]} one has an approximation

\displaystyle  \zeta(\frac{1}{2} + it - \frac{2\pi i z}{\log T}) \approx P_t( e^{2\pi i z/N} ) \ \ \ \ \ (1)

where {N} grows slowly with {T}, and {P_t} is a polynomial of degree {N}. It turns out that in the function field setting there is an exact version of this approximation which captures many of the known features of the Riemann zeta function, namely Dirichlet {L}-functions for a random character of given modulus over a function field. This model was (essentially) studied in a fairly recent paper by Andrade, Miller, Pratt, and Trinh; I am not sure if there is any further literature on this model beyond this paper (though the number field analogue of low-lying zeroes of Dirichlet {L}-functions is certainly well studied). In this model it is possible to set {N} fixed and let {T} go to infinity, thus providing a simple finite-dimensional model problem for problems involving the statistics of zeroes of the zeta function.

In this post I would like to record this analogue precisely. We will need a finite field {{\mathbb F}} of some order {q} and a natural number {N}, and set

\displaystyle  T := q^{N+1}.

We will primarily think of {q} as being large and {N} as being either fixed or growing very slowly with {q}, though it is possible to also consider other asymptotic regimes (such as holding {q} fixed and letting {N} go to infinity). Let {{\mathbb F}[X]} be the ring of polynomials of one variable {X} with coefficients in {{\mathbb F}}, and let {{\mathbb F}[X]'} be the multiplicative semigroup of monic polynomials in {{\mathbb F}[X]}; one should view {{\mathbb F}[X]} and {{\mathbb F}[X]'} as the function field analogue of the integers and natural numbers respectively. We use the valuation {|n| := q^{\mathrm{deg}(n)}} for polynomials {n \in {\mathbb F}[X]} (with {|0|=0}); this is the analogue of the usual absolute value on the integers. We select an irreducible polynomial {Q \in {\mathbb F}[X]} of size {|Q|=T} (i.e., {Q} has degree {N+1}). The multiplicative group {({\mathbb F}[X]/Q{\mathbb F}[X])^\times} can be shown to be cyclic of order {|Q|-1=T-1}. A Dirichlet character of modulus {Q} is a completely multiplicative function {\chi: {\mathbb F}[X] \rightarrow {\bf C}} of modulus {Q}, that is periodic of period {Q} and vanishes on those {n \in {\mathbb F}[X]} not coprime to {Q}. From Fourier analysis we see that there are exactly {\phi(Q) := |Q|-1} Dirichlet characters of modulus {Q}. A Dirichlet character is said to be odd if it is not identically one on the group {{\mathbb F}^\times} of non-zero constants; there are only {\frac{1}{q-1} \phi(Q)} non-odd characters (including the principal character), so in the limit {q \rightarrow \infty} most Dirichlet characters are odd. We will work primarily with odd characters in order to be able to ignore the effect of the place at infinity.

Let {\chi} be an odd Dirichlet character of modulus {Q}. The Dirichlet {L}-function {L(s, \chi)} is then defined (for {s \in {\bf C}} of sufficiently large real part, at least) as

\displaystyle  L(s,\chi) := \sum_{n \in {\mathbb F}[X]'} \frac{\chi(n)}{|n|^s}

\displaystyle  = \sum_{m=0}^\infty q^{-sm} \sum_{n \in {\mathbb F}[X]': |n| = q^m} \chi(n).

Note that for {m \geq N+1}, the set {n \in {\mathbb F}[X]': |n| = q^m} is invariant under shifts {h} whenever {|h| < T}; since this covers a full set of residue classes of {{\mathbb F}[X]/Q{\mathbb F}[X]}, and the odd character {\chi} has mean zero on this set of residue classes, we conclude that the sum {\sum_{n \in {\mathbb F}[X]': |n| = q^m} \chi(n)} vanishes for {m \geq N+1}. In particular, the {L}-function is entire, and for any real number {t} and complex number {z}, we can write the {L}-function as a polynomial

\displaystyle  L(\frac{1}{2} + it - \frac{2\pi i z}{\log T},\chi) = P(Z) = P_{t,\chi}(Z) := \sum_{m=0}^N c^1_m(t,\chi) Z^j

where {Z := e(z/N) = e^{2\pi i z/N}} and the coefficients {c^1_m = c^1_m(t,\chi)} are given by the formula

\displaystyle  c^1_m(t,\chi) := q^{-m/2-imt} \sum_{n \in {\mathbb F}[X]': |n| = q^m} \chi(n).

Note that {t} can easily be normalised to zero by the relation

\displaystyle  P_{t,\chi}(Z) = P_{0,\chi}( q^{-it} Z ). \ \ \ \ \ (2)

In particular, the dependence on {t} is periodic with period {\frac{2\pi}{\log q}} (so by abuse of notation one could also take {t} to be an element of {{\bf R}/\frac{2\pi}{\log q}{\bf Z}}).

Fourier inversion yields a functional equation for the polynomial {P}:

Proposition 1 (Functional equation) Let {\chi} be an odd Dirichlet character of modulus {Q}, and {t \in {\bf R}}. There exists a phase {e(\theta)} (depending on {t,\chi}) such that

\displaystyle  a_{N-m}^1 = e(\theta) \overline{c^1_m}

for all {0 \leq m \leq N}, or equivalently that

\displaystyle  P(1/Z) = e^{i\theta} Z^{-N} \overline{P}(Z)

where {\overline{P}(Z) := \overline{P(\overline{Z})}}.

Proof: We can normalise {t=0}. Let {G} be the finite field {{\mathbb F}[X] / Q {\mathbb F}[X]}. We can write

\displaystyle  a_{N-m} = q^{-(N-m)/2} \sum_{n \in q^{N-m} + H_{N-m}} \chi(n)

where {H_j} denotes the subgroup of {G} consisting of (residue classes of) polynomials of degree less than {j}. Let {e_G: G \rightarrow S^1} be a non-trivial character of {G} whose kernel lies in the space {H_N} (this is easily achieved by pulling back a non-trivial character from the quotient {G/H_N \equiv {\mathbb F}}). We can use the Fourier inversion formula to write

\displaystyle  a_{N-m} = q^{(m-N)/2} \sum_{\xi \in G} \hat \chi(\xi) \sum_{n \in T^{N-m} + H_{N-m}} e_G( n\xi )

where

\displaystyle  \hat \chi(\xi) := q^{-N-1} \sum_{n \in G} \chi(n) e_G(-n\xi).

From change of variables we see that {\hat \chi} is a scalar multiple of {\overline{\chi}}; from Plancherel we conclude that

\displaystyle  \hat \chi = e(\theta_0) q^{-(N+1)/2} \overline{\chi} \ \ \ \ \ (3)

for some phase {e(\theta_0)}. We conclude that

\displaystyle  a_{N-m} = e(\theta_0) q^{-(2N-m+1)/2} \sum_{\xi \in G} \overline{\chi}(\xi) e_G( T^{N-j} \xi) \sum_{n \in H_{N-j}} e_G( n\xi ). \ \ \ \ \ (4)

The inner sum {\sum_{n \in H_{N-m}} e_G( n\xi )} equals {q^{N-m}} if {\xi \in H_{j+1}}, and vanishes otherwise, thus

\displaystyle a_{N-m} = e(\theta_0) q^{-(m+1)/2} \sum_{\xi \in H_{j+1}} \overline{\chi}(\xi) e_G( T^{N-m} \xi).

For {\xi} in {H_j}, {e_G(T^{N-m} \xi)=1} and the contribution of the sum vanishes as {\chi} is odd. Thus we may restrict {\xi} to {H_{m+1} \backslash H_m}, so that

\displaystyle a_{N-m} = e(\theta_0) q^{-(m+1)/2} \sum_{h \in {\mathbb F}^\times} e_G( T^{N} h) \sum_{\xi \in h T^m + H_{m}} \overline{\chi}(\xi).

By the multiplicativity of {\chi}, this factorises as

\displaystyle a_{N-m} = e(\theta_0) q^{-(m+1)/2} (\sum_{h \in {\mathbb F}^\times} \overline{\chi}(h) e_G( T^{N} h)) (\sum_{\xi \in T^m + H_{m}} \overline{\chi}(\xi)).

From the one-dimensional version of (3) (and the fact that {\chi} is odd) we have

\displaystyle  \sum_{h \in {\mathbb F}^\times} \overline{\chi}(h) e_G( T^{N} h) = e(\theta_1) q^{1/2}

for some phase {e(\theta_1)}. The claim follows. \Box

As one corollary of the functional equation, {a_N} is a phase rotation of {\overline{a_1} = 1} and thus is non-zero, so {P} has degree exactly {N}. The functional equation is then equivalent to the {N} zeroes of {P} being symmetric across the unit circle. In fact we have the stronger

Theorem 2 (Riemann hypothesis for Dirichlet {L}-functions over function fields) Let {\chi} be an odd Dirichlet character of modulus {Q}, and {t \in {\bf R}}. Then all the zeroes of {P} lie on the unit circle.

We derive this result from the Riemann hypothesis for curves over function fields below the fold.

In view of this theorem (and the fact that {a_1=1}), we may write

\displaystyle  P(Z) = \mathrm{det}(1 - ZU)

for some unitary {N \times N} matrix {U = U_{t,\chi}}. It is possible to interpret {U} as the action of the geometric Frobenius map on a certain cohomology group, but we will not do so here. The situation here is simpler than in the number field case because the factor {\exp(A)} arising from very small primes is now absent (in the function field setting there are no primes of size between {1} and {q}).

We now let {\chi} vary uniformly at random over all odd characters of modulus {Q}, and {t} uniformly over {{\bf R}/\frac{2\pi}{\log q}{\bf Z}}, independently of {\chi}; we also make the distribution of the random variable {U} conjugation invariant in {U(N)}. We use {{\mathbf E}_Q} to denote the expectation with respect to this randomness. One can then ask what the limiting distribution of {U} is in various regimes; we will focus in this post on the regime where {N} is fixed and {q} is being sent to infinity. In the spirit of the Sato-Tate conjecture, one should expect {U} to converge in distribution to the circular unitary ensemble (CUE), that is to say Haar probability measure on {U(N)}. This may well be provable from Deligne’s “Weil II” machinery (in the spirit of this monograph of Katz and Sarnak), though I do not know how feasible this is or whether it has already been done in the literature; here we shall avoid using this machinery and study what partial results towards this CUE hypothesis one can make without it.

If one lets {\lambda_1,\dots,\lambda_N} be the eigenvalues of {U} (ordered arbitrarily), then we now have

\displaystyle  \sum_{m=0}^N c^1_m Z^m = P(Z) = \prod_{j=1}^N (1 - \lambda_j Z)

and hence the {c^1_m} are essentially elementary symmetric polynomials of the eigenvalues:

\displaystyle  c^1_m = (-1)^j e_m( \lambda_1,\dots,\lambda_N). \ \ \ \ \ (5)

One can take log derivatives to conclude

\displaystyle  \frac{P'(Z)}{P(Z)} = \sum_{j=1}^N \frac{\lambda_j}{1-\lambda_j Z}.

On the other hand, as in the number field case one has the Dirichlet series expansion

\displaystyle  Z \frac{P'(Z)}{P(Z)} = \sum_{n \in {\mathbb F}[X]'} \frac{\Lambda_q(n) \chi(n)}{|n|^s}

where {s = \frac{1}{2} + it - \frac{2\pi i z}{\log T}} has sufficiently large real part, {Z = e(z/N)}, and the von Mangoldt function {\Lambda_q(n)} is defined as {\log_q |p| = \mathrm{deg} p} when {n} is the power of an irreducible {p} and {0} otherwise. We conclude the “explicit formula”

\displaystyle  c^{\Lambda_q}_m = \sum_{j=1}^N \lambda_j^m = \mathrm{tr}(U^m) \ \ \ \ \ (6)

for {m \geq 1}, where

\displaystyle  c^{\Lambda_q}_m := q^{-m/2-imt} \sum_{n \in {\mathbb F}[X]': |n| = q^m} \Lambda_q(n) \chi(n).

Similarly on inverting {P(Z)} we have

\displaystyle  P(Z)^{-1} = \prod_{j=1}^N (1 - \lambda_j Z)^{-1}.

Since we also have

\displaystyle  P(Z)^{-1} = \sum_{n \in {\mathbb F}[X]'} \frac{\mu(n) \chi(n)}{|n|^s}

for {s} sufficiently large real part, where the Möbius function {\mu(n)} is equal to {(-1)^k} when {n} is the product of {k} distinct irreducibles, and {0} otherwise, we conclude that the Möbius coefficients

\displaystyle  c^\mu_m := q^{-m/2-imt} \sum_{n \in {\mathbb F}[X]': |n| = q^m} \mu(n) \chi(n)

are just the complete homogeneous symmetric polynomials of the eigenvalues:

\displaystyle  c^\mu_m = h_m( \lambda_1,\dots,\lambda_N). \ \ \ \ \ (7)

One can then derive various algebraic relationships between the coefficients {c^1_m, c^{\Lambda_q}_m, c^\mu_m} from various identities involving symmetric polynomials, but we will not do so here.

What do we know about the distribution of {U}? By construction, it is conjugation-invariant; from (2) it is also invariant with respect to the rotations {U \rightarrow e^{i\theta} U} for any phase {\theta \in{\bf R}}. We also have the function field analogue of the Rudnick-Sarnak asymptotics:

Proposition 3 (Rudnick-Sarnak asymptotics) Let {a_1,\dots,a_k,b_1,\dots,b_k} be nonnegative integers. If

\displaystyle  \sum_{j=1}^k j a_j \leq N, \ \ \ \ \ (8)

then the moment

\displaystyle  {\bf E}_{Q} \prod_{j=1}^k (\mathrm{tr} U^j)^{a_j} (\overline{\mathrm{tr} U^j})^{b_j} \ \ \ \ \ (9)

is equal to {o(1)} in the limit {q \rightarrow \infty} (holding {N,a_1,\dots,a_k,b_1,\dots,b_k} fixed) unless {a_j=b_j} for all {j}, in which case it is equal to

\displaystyle  \prod_{j=1}^k j^{a_j} a_j! + o(1). \ \ \ \ \ (10)

Comparing this with Proposition 1 from this previous post, we thus see that all the low moments of {U} are consistent with the CUE hypothesis (and also with the ACUE hypothesis, again by the previous post). The case {\sum_{j=1}^k a_j + \sum_{j=1}^k b_j \leq 2} of this proposition was essentially established by Andrade, Miller, Pratt, and Trinh.

Proof: We may assume the homogeneity relationship

\displaystyle  \sum_{j=1}^k j a_j = \sum_{j=1}^k j b_j \ \ \ \ \ (11)

since otherwise the claim follows from the invariance under phase rotation {U \mapsto e^{i\theta} U}. By (6), the expression (9) is equal to

\displaystyle  q^{-D} {\bf E}_Q \sum_{n_1,\dots,n_l,n'_1,\dots,n'_{l'} \in {\mathbb F}[X]': |n_i| = q^{s_i}, |n'_i| = q^{s'_i}} (\prod_{i=1}^l \Lambda_q(n_i) \chi(n_i)) \prod_{i=1}^{l'} \Lambda_q(n'_i) \overline{\chi(n'_i)}

where

\displaystyle  D := \sum_{j=1}^k j a_j = \sum_{j=1}^k j b_j

\displaystyle  l := \sum_{j=1}^k a_j

\displaystyle  l' := \sum_{j=1}^k b_j

and {s_1 \leq \dots \leq s_l} consists of {a_j} copies of {j} for each {j=1,\dots,k}, and similarly {s'_1 \leq \dots \leq s'_{l'}} consists of {b_j} copies of {j} for each {j=1,\dots,k}.

The polynomials {n_1 \dots n_l} and {n'_1 \dots n'_{l'}} are monic of degree {D}, which by hypothesis is less than the degree of {Q}, and thus they can only be scalar multiples of each other in {{\mathbb F}[X] / Q {\mathbb F}[X]} if they are identical (in {{\mathbb F}[X]}). As such, we see that the average

\displaystyle  {\bf E}_Q \chi(n_1) \dots \chi(n_l) \overline{\chi(n'_1)} \dots \overline{\chi(n'_{l'})}

vanishes unless {n_1 \dots n_l = n'_1 \dots n'_{l'}}, in which case this average is equal to {1}. Thus the expression (9) simplifies to

\displaystyle  q^{-D} \sum_{n_1,\dots,n_l,n'_1,\dots,n'_{l'}: |n_i| = q^{s_i}, |n'_i| = q^{s'_i}; n_1 \dots n_l = n'_1 \dots n'_l} (\prod_{i=1}^l \Lambda_q(n_i)) \prod_{i=1}^{l'} \Lambda_q(n'_i).

There are at most {q^D} choices for the product {n_1 \dots n_l}, and each one contributes {O_D(1)} to the above sum. All but {o(q^D)} of these choices are square-free, so by accepting an error of {o(1)}, we may restrict attention to square-free {n_1 \dots n_l}. This forces {n_1,\dots,n_l,n'_1,\dots,n'_{l'}} to all be irreducible (as opposed to powers of irreducibles); as {{\mathbb F}[X]} is a unique factorisation domain, this forces {l=l'} and {n_1,\dots,n_l} to be a permutation of {n'_1,\dots,n'_{l'}}. By the size restrictions, this then forces {a_j = b_j} for all {j} (if the above expression is to be anything other than {o(1)}), and each {n_1,\dots,n_l} is associated to {\prod_{j=1}^k a_j!} possible choices of {n'_1,\dots,n'_{l'}}. Writing {\Lambda_q(n'_i) = s'_i} and then reinstating the non-squarefree possibilities for {n_1 \dots n_l}, we can thus write the above expression as

\displaystyle  q^{-D} \prod_{j=1}^k j a_j! \sum_{n_1,\dots,n_l,n'_1,\dots,n'_{l'}\in {\mathbb F}[X]': |n_i| = q^{s_i}} \prod_{i=1}^l \Lambda_q(n_i) + o(1).

Using the prime number theorem {\sum_{n \in {\mathbb F}[X]': |n| = q^s} \Lambda_q(n) = q^s}, we obtain the claim. \Box

Comparing this with Proposition 1 from this previous post, we thus see that all the low moments of {U} are consistent with the CUE and ACUE hypotheses:

Corollary 4 (CUE statistics at low frequencies) Let {\lambda_1,\dots,\lambda_N} be the eigenvalues of {U}, permuted uniformly at random. Let {R(\lambda)} be a linear combination of monomials {\lambda_1^{a_1} \dots \lambda_N^{a_N}} where {a_1,\dots,a_N} are integers with either {\sum_{j=1}^N a_j \neq 0} or {\sum_{j=1}^N |a_j| \leq 2N}. Then

\displaystyle  {\bf E}_Q R(\lambda) = {\bf E}_{CUE} R(\lambda) + o(1).

The analogue of the GUE hypothesis in this setting would be the CUE hypothesis, which asserts that the threshold {2N} here can be replaced by an arbitrarily large quantity. As far as I know this is not known even for {2N+2} (though, as mentioned previously, in principle one may be able to resolve such cases using Deligne’s proof of the Riemann hypothesis for function fields). Among other things, this would allow one to distinguish CUE from ACUE, since as discussed in the previous post, these two distributions agree when tested against monomials up to threshold {2N}, though not to {2N+2}.

Proof: By permutation symmetry we can take {R} to be symmetric, and by linearity we may then take {R} to be the symmetrisation of a single monomial {\lambda_1^{a_1} \dots \lambda_N^{a_N}}. If {\sum_{j=1}^N a_j \neq 0} then both expectations vanish due to the phase rotation symmetry, so we may assume that {\sum_{j=1}^N a_j \neq 0} and {\sum_{j=1}^N |a_j| \leq 2N}. We can write this symmetric polynomial as a constant multiple of {\mathrm{tr}(U^{a_1}) \dots \mathrm{tr}(U^{a_N})} plus other monomials with a smaller value of {\sum_{j=1}^N |a_j|}. Since {\mathrm{tr}(U^{-a}) = \overline{\mathrm{tr}(U^a)}}, the claim now follows by induction from Proposition 3 and Proposition 1 from the previous post. \Box

Thus, for instance, for {k=1,2}, the {2k^{th}} moment

\displaystyle {\bf E}_Q |\det(1-U)|^{2k} = {\bf E}_Q |P(1)|^{2k} = {\bf E}_Q |L(\frac{1}{2} + it, \chi)|^{2k}

is equal to

\displaystyle  {\bf E}_{CUE} |\det(1-U)|^{2k} + o(1)

because all the monomials in {\prod_{j=1}^N (1-\lambda_j)^k (1-\lambda_j^{-1})^k} are of the required form when {k \leq 2}. The latter expectation can be computed exactly (for any natural number {k}) using a formula

\displaystyle  {\bf E}_{CUE} |\det(1-U)|^{2k} = \prod_{j=1}^N \frac{\Gamma(j) \Gamma(j+2k)}{\Gamma(j+k)^2}

of Baker-Forrester and Keating-Snaith, thus for instance

\displaystyle  {\bf E}_{CUE} |\det(1-U)|^2 = N+1

\displaystyle  {\bf E}_{CUE} |\det(1-U)|^4 = \frac{(N+1)(N+2)^2(N+3)}{12}

and more generally

\displaystyle  {\bf E}_{CUE}|\det(1-U)|^{2k} = \frac{g_k+o(1)}{(k^2)!} N^{k^2}

when {N \rightarrow \infty}, where {g_k} are the integers

\displaystyle  g_1 = 1, g_2 = 2, g_3 = 42, g_4 = 24024, \dots

and more generally

\displaystyle  g_k := \frac{(k^2)!}{\prod_{i=1}^{2k-1} i^{k-|k-i|}}

(OEIS A039622). Thus we have

\displaystyle {\bf E}_Q |\det(1-U)|^{2k} = \frac{g_k+o(1)}{k^2!} N^{k^2}

for {k=1,2} if {Q \rightarrow \infty} and {N} is sufficiently slowly growing depending on {Q}. The CUE hypothesis would imply that that this formula also holds for higher {k}. (The situation here is cleaner than in the number field case, in which the GUE hypothesis only suggests the correct lower bound for the moments rather than an asymptotic, due to the absence of the wildly fluctuating additional factor {\exp(A)} that is present in the Riemann zeta function model.)

Now we can recover the analogue of Montgomery’s work on the pair correlation conjecture. Consider the statistic

\displaystyle  {\bf E}_Q \sum_{1 \leq i,j \leq N} R( \lambda_i / \lambda_j )

where

\displaystyle R(z) = \sum_m \hat R(m) z^m

is some finite linear combination of monomials {z^m} independent of {q}. We can expand the above sum as

\displaystyle  \sum_m \hat R(m) {\bf E}_Q \mathrm{tr}(U^m) \mathrm{tr}(U^{-m}).

Assuming the CUE hypothesis, then by Example 3 of the previous post, we would conclude that

\displaystyle  {\bf E}_Q \sum_{1 \leq i,j \leq N} R( \lambda_i / \lambda_j ) = N^2 \hat R(0) + \sum_m \min(|m|,N) \hat R(m) + o(1). \ \ \ \ \ (12)

This is the analogue of Montgomery’s pair correlation conjecture. Proposition 3 implies that this claim is true whenever {\hat R} is supported on {[-N,N]}. If instead we assume the ACUE hypothesis (or the weaker Alternative Hypothesis that the phase gaps are non-zero multiples of {1/2N}), one should instead have

\displaystyle  {\bf E}_Q \sum_{1 \leq i,j \leq N} R( \lambda_i / \lambda_j ) = \sum_{k \in {\bf Z}} N^2 \hat R(2Nk) + \sum_{1 \leq |m| \leq N} |m| \hat R(m+2Nk) + o(1)

for arbitrary {R}; this is the function field analogue of a recent result of Baluyot. In any event, since {\mathrm{tr}(U^m) \mathrm{tr}(U^{-m})} is non-negative, we unconditionally have the lower bound

\displaystyle  {\bf E}_Q \sum_{1 \leq i,j \leq N} R( \lambda_i / \lambda_j ) \geq N^2 \hat R(0) + \sum_{1 \leq |m| \leq N} |m| \hat R(m) + o(1). \ \ \ \ \ (13)

if {\hat R(m)} is non-negative for {|m| > N}.

By applying (12) for various choices of test functions {R} we can obtain various bounds on the behaviour of eigenvalues. For instance suppose we take the Fejér kernel

\displaystyle  R(z) = |1 + z + \dots + z^N|^2 = \sum_{m=-N}^N (N+1-|m|) z^m.

Then (12) applies unconditionally and we conclude that

\displaystyle  {\bf E}_Q \sum_{1 \leq i,j \leq N} R( \lambda_i / \lambda_j ) = N^2 (N+1) + \sum_{1 \leq |m| \leq N} (N+1-|m|) |m| + o(1).

The right-hand side evaluates to {\frac{2}{3} N(N+1)(2N+1)+o(1)}. On the other hand, {R(\lambda_i/\lambda_j)} is non-negative, and equal to {(N+1)^2} when {\lambda_i = \lambda_j}. Thus

\displaystyle  {\bf E}_Q \sum_{1 \leq i,j \leq N} 1_{\lambda_i = \lambda_j} \leq \frac{2}{3} \frac{N(2N+1)}{N+1} + o(1).

The sum {\sum_{1 \leq j \leq N} 1_{\lambda_i = \lambda_j}} is at least {1}, and is at least {2} if {\lambda_i} is not a simple eigenvalue. Thus

\displaystyle  {\bf E}_Q \sum_{1 \leq i, \leq N} 1_{\lambda_i \hbox{ not simple}} \leq \frac{1}{3} \frac{N(N-1)}{N+1} + o(1),

and thus the expected number of simple eigenvalues is at least {\frac{2N}{3} \frac{N+4}{N+1} + o(1)}; in particular, at least two thirds of the eigenvalues are simple asymptotically on average. If we had (12) without any restriction on the support of {\hat R}, the same arguments allow one to show that the expected proportion of simple eigenvalues is {1-o(1)}.

Suppose that the phase gaps in {U} are all greater than {c/N} almost surely. Let {\hat R} is non-negative and {R(e^{i\theta})} non-positive for {\theta} outside of the arc {[-c/N,c/N]}. Then from (13) one has

\displaystyle  R(0) N \geq N^2 \hat R(0) + \sum_{1 \leq |m| \leq N} |m| \hat R(m) + o(1),

so by taking contrapositives one can force the existence of a gap less than {c/N} asymptotically if one can find {R} with {\hat R} non-negative, {R} non-positive for {\theta} outside of the arc {[-c/N,c/N]}, and for which one has the inequality

\displaystyle  R(0) N < N^2 \hat R(0) + \sum_{1 \leq |m| \leq N} |m| \hat R(m).

By a suitable choice of {R} (based on a minorant of Selberg) one can ensure this for {c \approx 0.6072} for {N} large; see Section 5 of these notes of Goldston. This is not the smallest value of {c} currently obtainable in the literature for the number field case (which is currently {0.50412}, due to Goldston and Turnage-Butterbaugh, by a somewhat different method), but is still significantly less than the trivial value of {1}. On the other hand, due to the compatibility of the ACUE distribution with Proposition 3, it is not possible to lower {c} below {0.5} purely through the use of Proposition 3.

In some cases it is possible to go beyond Proposition 3. Consider the mollified moment

\displaystyle  {\bf E}_Q |M(U) P(1)|^2

where

\displaystyle  M(U) = \sum_{m=0}^d a_m h_m(\lambda_1,\dots,\lambda_N)

for some coefficients {a_0,\dots,a_d}. We can compute this moment in the CUE case:

Proposition 5 We have

\displaystyle  {\bf E}_{CUE} |M(U) P(1)|^2 = |a_0|^2 + N \sum_{m=1}^d |a_m - a_{m-1}|^2.

Proof: From (5) one has

\displaystyle  P(1) = \sum_{i=0}^N (-1)^i e_i(\lambda_1,\dots,\lambda_N)

hence

\displaystyle  M(U) P(1) = \sum_{i=0}^N \sum_{m=0}^d (-1)^i a_m e_i h_m

where we suppress the dependence on the eigenvalues {\lambda}. Now observe the Pieri formula

\displaystyle  e_i h_m = s_{m 1^i} + s_{(m+1) 1^{i-1}}

where {s_{m 1^i}} are the hook Schur polynomials

\displaystyle  s_{m 1^i} = \sum_{a_1 \leq \dots \leq a_m; a_1 < b_1 < \dots < b_i} \lambda_{a_1} \dots \lambda_{a_m} \lambda_{b_1} \dots \lambda_{b_i}

and we adopt the convention that {s_{m 1^i}} vanishes for {i = -1}, or when {m = 0} and {i > 0}. Then {s_{m1^i}} also vanishes for {i\geq N}. We conclude that

\displaystyle  M(U) P(1) = a_0 s_{0 1^0} + \sum_{0 \leq i \leq N-1} \sum_{m \geq 1} (-1)^i (a_m - a_{m-1}) s_{m 1^i}.

As the Schur polynomials are orthonormal on the unitary group, the claim follows. \Box

The CUE hypothesis would then imply the corresponding mollified moment conjecture

\displaystyle  {\bf E}_{Q} |M(U) P(1)|^2 = |a_0|^2 + N \sum_{m=1}^d |a_m - a_{m-1}|^2 + o(1). \ \ \ \ \ (14)

(See this paper of Conrey, and this paper of Radziwill, for some discussion of the analogous conjecture for the zeta function, which is essentially due to Farmer.)

From Proposition 3 one sees that this conjecture holds in the range {d \leq \frac{1}{2} N}. It is likely that the function field analogue of the calculations of Conrey (based ultimately on deep exponential sum estimates of Deshouillers and Iwaniec) can extend this range to {d < \theta N} for any {\theta < \frac{4}{7}}, if {N} is sufficiently large depending on {\theta}; these bounds thus go beyond what is available from Proposition 3. On the other hand, as discussed in Remark 7 of the previous post, ACUE would also predict (14) for {d} as large as {N-2}, so the available mollified moment estimates are not strong enough to rule out ACUE. It would be interesting to see if there is some other estimate in the function field setting that can be used to exclude the ACUE hypothesis (possibly one that exploits the fact that GRH is available in the function field case?).

Read the rest of this entry »

In a recent post I discussed how the Riemann zeta function {\zeta} can be locally approximated by a polynomial, in the sense that for randomly chosen {t \in [T,2T]} one has an approximation

\displaystyle  \zeta(\frac{1}{2} + it - \frac{2\pi i z}{\log T}) \approx P_t( e^{2\pi i z/N} ) \ \ \ \ \ (1)

where {N} grows slowly with {T}, and {P_t} is a polynomial of degree {N}. Assuming the Riemann hypothesis (as we will throughout this post), the zeroes of {P_t} should all lie on the unit circle, and one should then be able to write {P_t} as a scalar multiple of the characteristic polynomial of (the inverse of) a unitary matrix {U = U_t \in U(N)}, which we normalise as

\displaystyle  P_t(Z) = \exp(A_t) \mathrm{det}(1 - ZU). \ \ \ \ \ (2)

Here {A_t} is some quantity depending on {t}. We view {U} as a random element of {U(N)}; in the limit {T \rightarrow \infty}, the GUE hypothesis is equivalent to {U} becoming equidistributed with respect to Haar measure on {U(N)} (also known as the Circular Unitary Ensemble, CUE; it is to the unit circle what the Gaussian Unitary Ensemble (GUE) is on the real line). One can also view {U} as analogous to the “geometric Frobenius” operator in the function field setting, though unfortunately it is difficult at present to make this analogy any more precise (due, among other things, to the lack of a sufficiently satisfactory theory of the “field of one element“).

Taking logarithmic derivatives of (2), we have

\displaystyle  -\frac{P'_t(Z)}{P_t(Z)} = \mathrm{tr}( U (1-ZU)^{-1} ) = \sum_{j=1}^\infty Z^{j-1} \mathrm{tr} U^j \ \ \ \ \ (3)

and hence on taking logarithmic derivatives of (1) in the {z} variable we (heuristically) have

\displaystyle  -\frac{2\pi i}{\log T} \frac{\zeta'}{\zeta}( \frac{1}{2} + it - \frac{2\pi i z}{\log T}) \approx \frac{2\pi i}{N} \sum_{j=1}^\infty e^{2\pi i jz/N} \mathrm{tr} U^j.

Morally speaking, we have

\displaystyle  - \frac{\zeta'}{\zeta}( \frac{1}{2} + it - \frac{2\pi i z}{\log T}) = \sum_{n=1}^\infty \frac{\Lambda(n)}{n^{1/2+it}} e^{2\pi i z (\log n/\log T)}

so on comparing coefficients we expect to interpret the moments {\mathrm{tr} U^j} of {U} as a finite Dirichlet series:

\displaystyle  \mathrm{tr} U^j \approx \frac{N}{\log T} \sum_{T^{(j-1)/N} < n \leq T^{j/N}} \frac{\Lambda(n)}{n^{1/2+it}}. \ \ \ \ \ (4)

To understand the distribution of {U} in the unitary group {U(N)}, it suffices to understand the distribution of the moments

\displaystyle  {\bf E}_t \prod_{j=1}^k (\mathrm{tr} U^j)^{a_j} (\overline{\mathrm{tr} U^j})^{b_j} \ \ \ \ \ (5)

where {{\bf E}_t} denotes averaging over {t \in [T,2T]}, and {k, a_1,\dots,a_k, b_1,\dots,b_k \geq 0}. The GUE hypothesis asserts that in the limit {T \rightarrow \infty}, these moments converge to their CUE counterparts

\displaystyle  {\bf E}_{\mathrm{CUE}} \prod_{j=1}^k (\mathrm{tr} U^j)^{a_j} (\overline{\mathrm{tr} U^j})^{b_j} \ \ \ \ \ (6)

where {U} is now drawn uniformly in {U(n)} with respect to the CUE ensemble, and {{\bf E}_{\mathrm{CUE}}} denotes expectation with respect to that measure.

The moment (6) vanishes unless one has the homogeneity condition

\displaystyle  \sum_{j=1}^k j a_j = \sum_{j=1}^k j b_j. \ \ \ \ \ (7)

This follows from the fact that for any phase {\theta \in {\bf R}}, {e(\theta) U} has the same distribution as {U}, where we use the number theory notation {e(\theta) := e^{2\pi i\theta}}.

In the case when the degree {\sum_{j=1}^k j a_j} is low, we can use representation theory to establish the following simple formula for the moment (6), as evaluated by Diaconis and Shahshahani:

Proposition 1 (Low moments in CUE model) If

\displaystyle  \sum_{j=1}^k j a_j \leq N, \ \ \ \ \ (8)

then the moment (6) vanishes unless {a_j=b_j} for all {j}, in which case it is equal to

\displaystyle  \prod_{j=1}^k j^{a_j} a_j!. \ \ \ \ \ (9)

Another way of viewing this proposition is that for {U} distributed according to CUE, the random variables {\mathrm{tr} U^j} are distributed like independent complex random variables of mean zero and variance {j}, as long as one only considers moments obeying (8). This identity definitely breaks down for larger values of {a_j}, so one only obtains central limit theorems in certain limiting regimes, notably when one only considers a fixed number of {j}‘s and lets {N} go to infinity. (The paper of Diaconis and Shahshahani writes {\sum_{j=1}^k a_j + b_j} in place of {\sum_{j=1}^k j a_j}, but I believe this to be a typo.)

Proof: Let {D} be the left-hand side of (8). We may assume that (7) holds since we are done otherwise, hence

\displaystyle  D = \sum_{j=1}^k j a_j = \sum_{j=1}^k j b_j.

Our starting point is Schur-Weyl duality. Namely, we consider the {n^D}-dimensional complex vector space

\displaystyle  ({\bf C}^n)^{\otimes D} = {\bf C}^n \otimes \dots \otimes {\bf C}^n.

This space has an action of the product group {S_D \times GL_n({\bf C})}: the symmetric group {S_D} acts by permutation on the {D} tensor factors, while the general linear group {GL_n({\bf C})} acts diagonally on the {{\bf C}^n} factors, and the two actions commute with each other. Schur-Weyl duality gives a decomposition

\displaystyle  ({\bf C}^n)^{\otimes D} \equiv \bigoplus_\lambda V^\lambda_{S_D} \otimes V^\lambda_{GL_n({\bf C})} \ \ \ \ \ (10)

where {\lambda} ranges over Young tableaux of size {D} with at most {n} rows, {V^\lambda_{S_D}} is the {S_D}-irreducible unitary representation corresponding to {\lambda} (which can be constructed for instance using Specht modules), and {V^\lambda_{GL_n({\bf C})}} is the {GL_n({\bf C})}-irreducible polynomial representation corresponding with highest weight {\lambda}.

Let {\pi \in S_D} be a permutation consisting of {a_j} cycles of length {j} (this is uniquely determined up to conjugation), and let {g \in GL_n({\bf C})}. The pair {(\pi,g)} then acts on {({\bf C}^n)^{\otimes D}}, with the action on basis elements {e_{i_1} \otimes \dots \otimes e_{i_D}} given by

\displaystyle  g e_{\pi(i_1)} \otimes \dots \otimes g_{\pi(i_D)}.

The trace of this action can then be computed as

\displaystyle  \sum_{i_1,\dots,i_D \in \{1,\dots,n\}} g_{\pi(i_1),i_1} \dots g_{\pi(i_D),i_D}

where {g_{i,j}} is the {ij} matrix coefficient of {g}. Breaking up into cycles and summing, this is just

\displaystyle  \prod_{j=1}^k \mathrm{tr}(g^j)^{a_j}.

But we can also compute this trace using the Schur-Weyl decomposition (10), yielding the identity

\displaystyle  \prod_{j=1}^k \mathrm{tr}(g^j)^{a_j} = \sum_\lambda \chi_\lambda(\pi) s_\lambda(g) \ \ \ \ \ (11)

where {\chi_\lambda: S_D \rightarrow {\bf C}} is the character on {S_D} associated to {V^\lambda_{S_D}}, and {s_\lambda: GL_n({\bf C}) \rightarrow {\bf C}} is the character on {GL_n({\bf C})} associated to {V^\lambda_{GL_n({\bf C})}}. As is well known, {s_\lambda(g)} is just the Schur polynomial of weight {\lambda} applied to the (algebraic, generalised) eigenvalues of {g}. We can specialise to unitary matrices to conclude that

\displaystyle  \prod_{j=1}^k \mathrm{tr}(U^j)^{a_j} = \sum_\lambda \chi_\lambda(\pi) s_\lambda(U)

and similarly

\displaystyle  \prod_{j=1}^k \mathrm{tr}(U^j)^{b_j} = \sum_\lambda \chi_\lambda(\pi') s_\lambda(U)

where {\pi' \in S_D} consists of {b_j} cycles of length {j} for each {j=1,\dots,k}. On the other hand, the characters {s_\lambda} are an orthonormal system on {L^2(U(N))} with the CUE measure. Thus we can write the expectation (6) as

\displaystyle  \sum_\lambda \chi_\lambda(\pi) \overline{\chi_\lambda(\pi')}. \ \ \ \ \ (12)

Now recall that {\lambda} ranges over all the Young tableaux of size {D} with at most {N} rows. But by (8) we have {D \leq N}, and so the condition of having {N} rows is redundant. Hence {\lambda} now ranges over all Young tableaux of size {D}, which as is well known enumerates all the irreducible representations of {S_D}. One can then use the standard orthogonality properties of characters to show that the sum (12) vanishes if {\pi}, {\pi'} are not conjugate, and is equal to {D!} divided by the size of the conjugacy class of {\pi} (or equivalently, by the size of the centraliser of {\pi}) otherwise. But the latter expression is easily computed to be {\prod_{j=1}^k j^{a_j} a_j!}, giving the claim. \Box

Example 2 We illustrate the identity (11) when {D=3}, {n \geq 3}. The Schur polynomials are given as

\displaystyle  s_{3}(g) = \sum_i \lambda_i^3 + \sum_{i<j} \lambda_i^2 \lambda_j + \lambda_i \lambda_j^2 + \sum_{i<j<k} \lambda_i \lambda_j \lambda_k

\displaystyle  s_{2,1}(g) = \sum_{i < j} \lambda_i^2 \lambda_j + \sum_{i < j,k} \lambda_i \lambda_j \lambda_k

\displaystyle  s_{1,1,1}(g) = \sum_{i<j<k} \lambda_i \lambda_j \lambda_k

where {\lambda_1,\dots,\lambda_n} are the (generalised) eigenvalues of {g}, and the formula (11) in this case becomes

\displaystyle  \mathrm{tr}(g^3) = s_{3}(g) - s_{2,1}(g) + s_{1,1,1}(g)

\displaystyle  \mathrm{tr}(g^2) \mathrm{tr}(g) = s_{3}(g) - s_{1,1,1}(g)

\displaystyle  \mathrm{tr}(g)^3 = s_{3}(g) + 2 s_{2,1}(g) + s_{1,1,1}(g).

The functions {s_{1,1,1}, s_{2,1}, s_3} are orthonormal on {U(n)}, so the three functions {\mathrm{tr}(g^3), \mathrm{tr}(g^2) \mathrm{tr}(g), \mathrm{tr}(g)^3} are also, and their {L^2} norms are {\sqrt{3}}, {\sqrt{2}}, and {\sqrt{6}} respectively, reflecting the size in {S_3} of the centralisers of the permutations {(123)}, {(12)}, and {\mathrm{id}} respectively. If {n} is instead set to say {2}, then the {s_{1,1,1}} terms now disappear (the Young tableau here has too many rows), and the three quantities here now have some non-trivial covariance.

Example 3 Consider the moment {{\bf E}_{\mathrm{CUE}} |\mathrm{tr} U^j|^2}. For {j \leq N}, the above proposition shows us that this moment is equal to {D}. What happens for {j>N}? The formula (12) computes this moment as

\displaystyle  \sum_\lambda |\chi_\lambda(\pi)|^2

where {\pi} is a cycle of length {j} in {S_j}, and {\lambda} ranges over all Young tableaux with size {j} and at most {N} rows. The Murnaghan-Nakayama rule tells us that {\chi_\lambda(\pi)} vanishes unless {\lambda} is a hook (all but one of the non-zero rows consisting of just a single box; this also can be interpreted as an exterior power representation on the space {{\bf C}^j_{\sum=0}} of vectors in {{\bf C}^j} whose coordinates sum to zero), in which case it is equal to {\pm 1} (depending on the parity of the number of non-zero rows). As such we see that this moment is equal to {N}. Thus in general we have

\displaystyle  {\bf E}_{\mathrm{CUE}} |\mathrm{tr} U^j|^2 = \min(j,N). \ \ \ \ \ (13)

Now we discuss what is known for the analogous moments (5). Here we shall be rather non-rigorous, in particular ignoring an annoying “Archimedean” issue that the product of the ranges {T^{(j-1)/N} < n \leq T^{j/N}} and {T^{(k-1)/N} < n \leq T^{k/N}} is not quite the range {T^{(j+k-1)/N} < n \leq T^{j+k/N}} but instead leaks into the adjacent range {T^{(j+k-2)/N} < n \leq T^{j+k-1/N}}. This issue can be addressed by working in a “weak" sense in which parameters such as {j,k} are averaged over fairly long scales, or by passing to a function field analogue of these questions, but we shall simply ignore the issue completely and work at a heuristic level only. For similar reasons we will ignore some technical issues arising from the sharp cutoff of {t} to the range {[T,2T]} (it would be slightly better technically to use a smooth cutoff).

One can morally expand out (5) using (4) as

\displaystyle  (\frac{N}{\log T})^{J+K} \sum_{n_1,\dots,n_J,m_1,\dots,m_K} \frac{\Lambda(n_1) \dots \Lambda(n_J) \Lambda(m_1) \dots \Lambda(m_K)}{n_1^{1/2} \dots n_J^{1/2} m_1^{1/2} \dots m_K^{1/2}} \times \ \ \ \ \ (14)

\displaystyle  \times {\bf E}_t (m_1 \dots m_K / n_1 \dots n_J)^{it}

where {J := \sum_{j=1}^k a_j}, {K := \sum_{j=1}^k b_j}, and the integers {n_i,m_i} are in the ranges

\displaystyle  T^{(j-1)/N} < n_{a_1 + \dots + a_{j-1} + i} \leq T^{j/N}

for {j=1,\dots,k} and {1 \leq i \leq a_j}, and

\displaystyle  T^{(j-1)/N} < m_{b_1 + \dots + b_{j-1} + i} \leq T^{j/N}

for {j=1,\dots,k} and {1 \leq i \leq b_j}. Morally, the expectation here is negligible unless

\displaystyle  m_1 \dots m_K = (1 + O(1/T)) n_1 \dots n_J \ \ \ \ \ (15)

in which case the expecation is oscillates with magnitude one. In particular, if (7) fails (with some room to spare) then the moment (5) should be negligible, which is consistent with the analogous behaviour for the moments (6). Now suppose that (8) holds (with some room to spare). Then {n_1 \dots n_J} is significantly less than {T}, so the {O(1/T)} multiplicative error in (15) becomes an additive error of {o(1)}. On the other hand, because of the fundamental integrality gap – that the integers are always separated from each other by a distance of at least {1} – this forces the integers {m_1 \dots m_K}, {n_1 \dots n_J} to in fact be equal:

\displaystyle  m_1 \dots m_K = n_1 \dots n_J. \ \ \ \ \ (16)

The von Mangoldt factors {\Lambda(n_1) \dots \Lambda(n_J) \Lambda(m_1) \dots \Lambda(m_K)} effectively restrict {n_1,\dots,n_J,m_1,\dots,m_K} to be prime (the effect of prime powers is negligible). By the fundamental theorem of arithmetic, the constraint (16) then forces {J=K}, and {n_1,\dots,n_J} to be a permutation of {m_1,\dots,m_K}, which then forces {a_j = b_j} for all {j=1,\dots,k}._ For a given {n_1,\dots,n_J}, the number of possible {m_1 \dots m_K} is then {\prod_{j=1}^k a_j!}, and the expectation in (14) is equal to {1}. Thus this expectation is morally

\displaystyle  (\frac{N}{\log T})^{J+K} \sum_{n_1,\dots,n_J} \frac{\Lambda^2(n_1) \dots \Lambda^2(n_J) }{n_1 \dots n_J} \prod_{j=1}^k a_j!

and using Mertens’ theorem this soon simplifies asymptotically to the same quantity in Proposition 1. Thus we see that (morally at least) the moments (5) associated to the zeta function asymptotically match the moments (6) coming from the CUE model in the low degree case (8), thus lending support to the GUE hypothesis. (These observations are basically due to Rudnick and Sarnak, with the degree {1} case of pair correlations due to Montgomery, and the degree {2} case due to Hejhal.)

With some rare exceptions (such as those estimates coming from “Kloostermania”), the moment estimates of Rudnick and Sarnak basically represent the state of the art for what is known for the moments (5). For instance, Montgomery’s pair correlation conjecture, in our language, is basically the analogue of (13) for {{\mathbf E}_t}, thus

\displaystyle  {\bf E}_{t} |\mathrm{tr} U^j|^2 \approx \min(j,N) \ \ \ \ \ (17)

for all {j \geq 0}. Montgomery showed this for (essentially) the range {j \leq N} (as remarked above, this is a special case of the Rudnick-Sarnak result), but no further cases of this conjecture are known.

These estimates can be used to give some non-trivial information on the largest and smallest spacings between zeroes of the zeta function, which in our notation corresponds to spacing between eigenvalues of {U}. One such method used today for this is due to Montgomery and Odlyzko and was greatly simplified by Conrey, Ghosh, and Gonek. The basic idea, translated to our random matrix notation, is as follows. Suppose {Q_t(Z)} is some random polynomial depending on {t} of degree at most {N}. Let {\lambda_1,\dots,\lambda_n} denote the eigenvalues of {U}, and let {c > 0} be a parameter. Observe from the pigeonhole principle that if the quantity

\displaystyle  \sum_{j=1}^n \int_0^{c/N} |Q_t( e(\theta) \lambda_j )|^2\ d\theta \ \ \ \ \ (18)

exceeds the quantity

\displaystyle  \int_{0}^{2\pi} |Q_t(e(\theta))|^2\ d\theta, \ \ \ \ \ (19)

then the arcs {\{ e(\theta) \lambda_j: 0 \leq \theta \leq c \}} cannot all be disjoint, and hence there exists a pair of eigenvalues making an angle of less than {c/N} ({c} times the mean angle separation). Similarly, if the quantity (18) falls below that of (19), then these arcs cannot cover the unit circle, and hence there exists a pair of eigenvalues making an angle of greater than {c} times the mean angle separation. By judiciously choosing the coefficients of {Q_t} as functions of the moments {\mathrm{tr}(U^j)}, one can ensure that both quantities (18), (19) can be computed by the Rudnick-Sarnak estimates (or estimates of equivalent strength); indeed, from the residue theorem one can write (18) as

\displaystyle  \frac{1}{2\pi i} \int_0^{c/N} (\int_{|z| = 1+\varepsilon} - \int_{|z|=1-\varepsilon}) Q_t( e(\theta) z ) \overline{Q_t}( \frac{1}{e(\theta) z} ) \frac{P'_t(z)}{P_t(z)}\ dz

for sufficiently small {\varepsilon>0}, and this can be computed (in principle, at least) using (3) if the coefficients of {Q_t} are in an appropriate form. Using this sort of technology (translated back to the Riemann zeta function setting), one can show that gaps between consecutive zeroes of zeta are less than {\mu} times the mean spacing and greater than {\lambda} times the mean spacing infinitely often for certain {0 < \mu < 1 < \lambda}; the current records are {\mu = 0.50412} (due to Goldston and Turnage-Butterbaugh) and {\lambda = 3.18} (due to Bui and Milinovich, who input some additional estimates beyond the Rudnick-Sarnak set, namely the twisted fourth moment estimates of Bettin, Bui, Li, and Radziwill, and using a technique based on Hall’s method rather than the Montgomery-Odlyzko method).

It would be of great interest if one could push the upper bound {\mu} for the smallest gap below {1/2}. The reason for this is that this would then exclude the Alternative Hypothesis that the spacing between zeroes are asymptotically always (or almost always) a non-zero half-integer multiple of the mean spacing, or in our language that the gaps between the phases {\theta} of the eigenvalues {e^{2\pi i\theta}} of {U} are nasymptotically always non-zero integer multiples of {1/2N}. The significance of this hypothesis is that it is implied by the existence of a Siegel zero (of conductor a small power of {T}); see this paper of Conrey and Iwaniec. (In our language, what is going on is that if there is a Siegel zero in which {L(1,\chi)} is very close to zero, then {1*\chi} behaves like the Kronecker delta, and hence (by the Riemann-Siegel formula) the combined {L}-function {\zeta(s) L(s,\chi)} will have a polynomial approximation which in our language looks like a scalar multiple of {1 + e(\theta) Z^{2N+M}}, where {q \approx T^{M/N}} and {\theta} is a phase. The zeroes of this approximation lie on a coset of the {(2N+M)^{th}} roots of unity; the polynomial {P} is a factor of this approximation and hence will also lie in this coset, implying in particular that all eigenvalue spacings are multiples of {1/(2N+M)}. Taking {M = o(N)} then gives the claim.)

Unfortunately, the known methods do not seem to break this barrier without some significant new input; already the original paper of Montgomery and Odlyzko observed this limitation for their particular technique (and in fact fall very slightly short, as observed in unpublished work of Goldston and of Milinovich). In this post I would like to record another way to see this, by providing an “alternative” probability distribution to the CUE distribution (which one might dub the Alternative Circular Unitary Ensemble (ACUE) which is indistinguishable in low moments in the sense that the expectation {{\bf E}_{ACUE}} for this model also obeys Proposition 1, but for which the phase spacings are always a multiple of {1/2N}. This shows that if one is to rule out the Alternative Hypothesis (and thus in particular rule out Siegel zeroes), one needs to input some additional moment information beyond Proposition 1. It would be interesting to see if any of the other known moment estimates that go beyond this proposition are consistent with this alternative distribution. (UPDATE: it looks like they are, see Remark 7 below.)

To describe this alternative distribution, let us first recall the Weyl description of the CUE measure on the unitary group {U(n)} in terms of the distribution of the phases {\theta_1,\dots,\theta_N \in {\bf R}/{\bf Z}} of the eigenvalues, randomly permuted in any order. This distribution is given by the probability measure

\displaystyle  \frac{1}{N!} |V(\theta)|^2\ d\theta_1 \dots d\theta_N; \ \ \ \ \ (20)

where

\displaystyle  V(\theta) := \prod_{1 \leq i<j \leq N} (e(\theta_i)-e(\theta_j))

is the Vandermonde determinant; see for instance this previous blog post for the derivation of a very similar formula for the GUE distribution, which can be adapted to CUE without much difficulty. To see that this is a probability measure, first observe the Vandermonde determinant identity

\displaystyle  V(\theta) = \sum_{\pi \in S_N} \mathrm{sgn}(\pi) e(\theta \cdot \pi(\rho))

where {\theta := (\theta_1,\dots,\theta_N)}, {\cdot} denotes the dot product, and {\rho := (1,2,\dots,N)} is the “long word”, which implies that (20) is a trigonometric series with constant term {1}; it is also clearly non-negative, so it is a probability measure. One can thus generate a random CUE matrix by first drawing {(\theta_1,\dots,\theta_n) \in ({\bf R}/{\bf Z})^N} using the probability measure (20), and then generating {U} to be a random unitary matrix with eigenvalues {e(\theta_1),\dots,e(\theta_N)}.

For the alternative distribution, we first draw {(\theta_1,\dots,\theta_N)} on the discrete torus {(\frac{1}{2N}{\bf Z}/{\bf Z})^N} (thus each {\theta_j} is a {2N^{th}} root of unity) with probability density function

\displaystyle  \frac{1}{(2N)^N} \frac{1}{N!} |V(\theta)|^2 \ \ \ \ \ (21)

shift by a phase {\alpha \in {\bf R}/{\bf Z}} drawn uniformly at random, and then select {U} to be a random unitary matrix with eigenvalues {e^{i(\theta_1+\alpha)}, \dots, e^{i(\theta_N+\alpha)}}. Let us first verify that (21) is a probability density function. Clearly it is non-negative. It is the linear combination of exponentials of the form {e(\theta \cdot (\pi(\rho)-\pi'(\rho))} for {\pi,\pi' \in S_N}. The diagonal contribution {\pi=\pi'} gives the constant function {\frac{1}{(2N)^N}}, which has total mass one. All of the other exponentials have a frequency {\pi(\rho)-\pi'(\rho)} that is not a multiple of {2N}, and hence will have mean zero on {(\frac{1}{2N}{\bf Z}/{\bf Z})^N}. The claim follows.

From construction it is clear that the matrix {U} drawn from this alternative distribution will have all eigenvalue phase spacings be a non-zero multiple of {1/2N}. Now we verify that the alternative distribution also obeys Proposition 1. The alternative distribution remains invariant under rotation by phases, so the claim is again clear when (8) fails. Inspecting the proof of that proposition, we see that it suffices to show that the Schur polynomials {s_\lambda} with {\lambda} of size at most {N} and of equal size remain orthonormal with respect to the alternative measure. That is to say,

\displaystyle  \int_{U(N)} s_\lambda(U) \overline{s_{\lambda'}(U)}\ d\mu_{\mathrm{CUE}}(U) = \int_{U(N)} s_\lambda(U) \overline{s_{\lambda'}(U)}\ d\mu_{\mathrm{ACUE}}(U)

when {\lambda,\lambda'} have size equal to each other and at most {N}. In this case the phase {\alpha} in the definition of {U} is irrelevant. In terms of eigenvalue measures, we are then reduced to showing that

\displaystyle  \int_{({\bf R}/{\bf Z})^N} s_\lambda(\theta) \overline{s_{\lambda'}(\theta)} |V(\theta)|^2\ d\theta = \frac{1}{(2N)^N} \sum_{\theta \in (\frac{1}{2N}{\bf Z}/{\bf Z})^N} s_\lambda(\theta) \overline{s_{\lambda'}(\theta)} |V(\theta)|^2.

By Fourier decomposition, it then suffices to show that the trigonometric polynomial {s_\lambda(\theta) \overline{s_{\lambda'}(\theta)} |V(\theta)|^2} does not contain any components of the form {e( \theta \cdot 2N k)} for some non-zero lattice vector {k \in {\bf Z}^N}. But we have already observed that {|V(\theta)|^2} is a linear combination of plane waves of the form {e(\theta \cdot (\pi(\rho)-\pi'(\rho))} for {\pi,\pi' \in S_N}. Also, as is well known, {s_\lambda(\theta)} is a linear combination of plane waves {e( \theta \cdot \kappa )} where {\kappa} is majorised by {\lambda}, and similarly {s_{\lambda'}(\theta)} is a linear combination of plane waves {e( \theta \cdot \kappa' )} where {\kappa'} is majorised by {\lambda'}. So the product {s_\lambda(\theta) \overline{s_{\lambda'}(\theta)} |V(\theta)|^2} is a linear combination of plane waves of the form {e(\theta \cdot (\kappa - \kappa' + \pi(\rho) - \pi'(\rho)))}. But every coefficient of the vector {\kappa - \kappa' + \pi(\rho) - \pi'(\rho)} lies between {1-2N} and {2N-1}, and so cannot be of the form {2Nk} for any non-zero lattice vector {k}, giving the claim.

Example 4 If {N=2}, then the distribution (21) assigns a probability of {\frac{1}{4^2 2!} 2} to any pair {(\theta_1,\theta_2) \in (\frac{1}{4} {\bf Z}/{\bf Z})^2} that is a permuted rotation of {(0,\frac{1}{4})}, and a probability of {\frac{1}{4^2 2!} 4} to any pair that is a permuted rotation of {(0,\frac{1}{2})}. Thus, a matrix {U} drawn from the alternative distribution will be conjugate to a phase rotation of {\mathrm{diag}(1, i)} with probability {1/2}, and to {\mathrm{diag}(1,-1)} with probability {1/2}.

A similar computation when {N=3} gives {U} conjugate to a phase rotation of {\mathrm{diag}(1, e(1/6), e(1/3))} with probability {1/12}, to a phase rotation of {\mathrm{diag}( 1, e(1/6), -1)} or its adjoint with probability of {1/3} each, and a phase rotation of {\mathrm{diag}(1, e(1/3), e(2/3))} with probability {1/4}.

Remark 5 For large {N} it does not seem that this specific alternative distribution is the only distribution consistent with Proposition 1 and which has all phase spacings a non-zero multiple of {1/2N}; in particular, it may not be the only distribution consistent with a Siegel zero. Still, it is a very explicit distribution that might serve as a test case for the limitations of various arguments for controlling quantities such as the largest or smallest spacing between zeroes of zeta. The ACUE is in some sense the distribution that maximally resembles CUE (in the sense that it has the greatest number of Fourier coefficients agreeing) while still also being consistent with the Alternative Hypothesis, and so should be the most difficult enemy to eliminate if one wishes to disprove that hypothesis.

In some cases, even just a tiny improvement in known results would be able to exclude the alternative hypothesis. For instance, if the alternative hypothesis held, then {|\mathrm{tr}(U^j)|} is periodic in {j} with period {2N}, so from Proposition 1 for the alternative distribution one has

\displaystyle  {\bf E}_{\mathrm{ACUE}} |\mathrm{tr} U^j|^2 = \min_{k \in {\bf Z}} |j-2Nk|

which differs from (13) for any {|j| > N}. (This fact was implicitly observed recently by Baluyot, in the original context of the zeta function.) Thus a verification of the pair correlation conjecture (17) for even a single {j} with {|j| > N} would rule out the alternative hypothesis. Unfortunately, such a verification appears to be on comparable difficulty with (an averaged version of) the Hardy-Littlewood conjecture, with power saving error term. (This is consistent with the fact that Siegel zeroes can cause distortions in the Hardy-Littlewood conjecture, as (implicitly) discussed in this previous blog post.)

Remark 6 One can view the CUE as normalised Lebesgue measure on {U(N)} (viewed as a smooth submanifold of {{\bf C}^{N^2}}). One can similarly view ACUE as normalised Lebesgue measure on the (disconnected) smooth submanifold of {U(N)} consisting of those unitary matrices whose phase spacings are non-zero integer multiples of {1/2N}; informally, ACUE is CUE restricted to this lower dimensional submanifold. As is well known, the phases of CUE eigenvalues form a determinantal point process with kernel {K(\theta,\theta') = \frac{1}{N} \sum_{j=0}^{N-1} e(j(\theta - \theta'))} (or one can equivalently take {K(\theta,\theta') = \frac{\sin(\pi N (\theta-\theta'))}{N\sin(\pi(\theta-\theta'))}}; in a similar spirit, the phases of ACUE eigenvalues, once they are rotated to be {2N^{th}} roots of unity, become a discrete determinantal point process on those roots of unity with exactly the same kernel (except for a normalising factor of {\frac{1}{2}}). In particular, the {k}-point correlation functions of ACUE (after this rotation) are precisely the restriction of the {k}-point correlation functions of CUE after normalisation, that is to say they are proportional to {\mathrm{det}( K( \theta_i,\theta_j) )_{1 \leq i,j \leq k}}.

Remark 7 One family of estimates that go beyond the Rudnick-Sarnak family of estimates are twisted moment estimates for the zeta function, such as ones that give asymptotics for

\displaystyle  \int_T^{2T} |\zeta(\frac{1}{2}+it)|^{2k} |Q(\frac{1}{2}+it)|^2\ dt

for some small even exponent {2k} (almost always {2} or {4}) and some short Dirichlet polynomial {Q}; see for instance this paper of Bettin, Bui, Li, and Radziwill for some examples of such estimates. The analogous unitary matrix average would be something like

\displaystyle  {\bf E}_t |P_t(1)|^{2k} |Q_t(1)|^2

where {Q_t} is now some random medium degree polynomial that depends on the unitary matrix {U} associated to {P_t} (and in applications will typically also contain some negative power of {\exp(A_t)} to cancel the corresponding powers of {\exp(A_t)} in {|P_t(1)|^{2k}}). Unfortunately such averages generally are unable to distinguish the CUE from the ACUE. For instance, if all the coefficients of {Q} involve products of traces {\mathrm{tr}(U^k)} of total order less than {N-k}, then in terms of the eigenvalue phases {\theta}, {|Q(1)|^2} is a linear combination of plane waves {e(\theta \cdot \xi)} where the frequencies {\xi} have coefficients of magnitude less than {N-k}. On the other hand, as each coefficient of {P_t} is an elementary symmetric function of the eigenvalues, {P_t(1)} is a linear combination of plane waves {e(\theta \cdot \xi)} where the frequencies {\xi} have coefficients of magnitude at most {1}. Thus {|P_t(1)|^{2k} |Q_t(1)|^2} is a linear combination of plane waves where the frequencies {\xi} have coefficients of magnitude less than {N}, and thus is orthogonal to the difference between the CUE and ACUE measures on the phase torus {({\bf R}/{\bf Z})^n} by the previous arguments. In other words, {|P_t(1)|^{2k} |Q_t(1)|^2} has the same expectation with respect to ACUE as it does with respect to CUE. Thus one can only start distinguishing CUE from ACUE if the mollifier {Q_t} has degree close to or exceeding {N}, which corresponds to Dirichlet polynomials {Q} of length close to or exceeding {T}, which is far beyond current technology for such moment estimates.

Remark 8 The GUE hypothesis for the zeta function asserts that the average

\displaystyle  \lim_{T \rightarrow \infty} \frac{1}{T} \int_T^{2T} \sum_{\gamma_1,\dots,\gamma_n \hbox{ distinct}} \eta( \frac{\log T}{2\pi}(\gamma_1-t),\dots, \frac{\log T}{2\pi}(\gamma_k-t))\ dt \ \ \ \ \ (22)

is equal to

\displaystyle  \int_{{\bf R}^n} \eta(x) \det(K(x_i-x_j))_{1 \leq i,j \leq k}\ dx_1 \dots dx_k \ \ \ \ \ (23)

for any {k \geq 1} and any test function {\eta: {\bf R}^k \rightarrow {\bf C}}, where {K(x) := \frac{\sin \pi x}{\pi x}} is the Dyson sine kernel and {\gamma_i} are the ordinates of zeroes of the zeta function. This corresponds to the CUE distribution for {U}. The ACUE distribution then corresponds to an “alternative gaussian unitary ensemble (AGUE)” hypothesis, in which the average (22) is instead predicted to equal a Riemann sum version of the integral (23):

\displaystyle  \int_0^1 2^{-k} \sum_{x_1,\dots,x_k \in \frac{1}{2} {\bf Z} + \theta} \eta(x) \det(K(x_i-x_j))_{1 \leq i,j \leq k}\ d\theta.

This is a stronger version of the alternative hypothesis that the spacing between adjacent zeroes is almost always approximately a half-integer multiple of the mean spacing. I do not know of any moment estimates for Dirichlet series that are able to eliminate this AGUE hypothesis (even assuming GRH). (UPDATE: These facts have also been independently observed in forthcoming work of Lagarias and Rodgers.)

A useful rule of thumb in complex analysis is that holomorphic functions {f(z)} behave like large degree polynomials {P(z)}. This can be evidenced for instance at a “local” level by the Taylor series expansion for a complex analytic function in the disk, or at a “global” level by factorisation theorems such as the Weierstrass factorisation theorem (or the closely related Hadamard factorisation theorem). One can truncate these theorems in a variety of ways (e.g., Taylor’s theorem with remainder) to be able to approximate a holomorphic function by a polynomial on various domains.

In some cases it can be convenient instead to work with polynomials {P(Z)} of another variable {Z} such as {Z = e^{2\pi i z}} (or more generally {Z=e^{2\pi i z/N}} for a scaling parameter {N}). In the case of the Riemann zeta function, defined by meromorphic continuation of the formula

\displaystyle  \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} \ \ \ \ \ (1)

one ends up having the following heuristic approximation in the neighbourhood of a point {\frac{1}{2}+it} on the critical line:

Heuristic 1 (Polynomial approximation) Let {T \ggg 1} be a height, let {t} be a “typical” element of {[T,2T]}, and let {1 \lll N \ll \log T} be an integer. Let {\phi_t = \phi_{t,T}: {\bf C} \rightarrow {\bf C}} be the linear change of variables

\displaystyle  \phi_t(z) := \frac{1}{2} + it - \frac{2\pi i z}{\log T}.

Then one has an approximation

\displaystyle  \zeta( \phi_t(z) ) \approx P_t( e^{2\pi i z/N} ) \ \ \ \ \ (2)

for {z = o(N)} and some polynomial {P_t = P_{t,T}} of degree {N}.

The requirement {z=o(N)} is necessary since the right-hand side is periodic with period {N} in the {z} variable (or period {\frac{2\pi i N}{\log T}} in the {s = \phi_t(z)} variable), whereas the zeta function is not expected to have any such periodicity, even approximately.

Let us give two non-rigorous justifications of this heuristic. Firstly, it is standard that inside the critical strip (with {\mathrm{Im}(s) = O(T)}) we have an approximate form

\displaystyle  \zeta(s) \approx \sum_{n \leq T} \frac{1}{n^s}

of (1). If we group the integers {n} from {1} to {T} into {N} bins depending on what powers of {T^{1/N}} they lie between, we thus have

\displaystyle  \zeta(s) \approx \sum_{j=0}^N \sum_{T^{j/N} \leq n < T^{(j+1)/N}} \frac{1}{n^s}

For {s = \phi_t(z)} with {z = o(N)} and {T^{j/N} \leq n < T^{(j+1)/N}} we heuristically have

\displaystyle  \frac{1}{n^s} \approx \frac{1}{n^{\frac{1}{2}+it}} e^{2\pi i j z / N}

and so

\displaystyle  \zeta(s) \approx \sum_{j=0}^N a_j(t) (e^{2\pi i z/N})^j

where {a_j(t)} are the partial Dirichlet series

\displaystyle  a_j(t) \approx \sum_{T^{j/N} \leq n < T^{(j+1)/N}} \frac{1}{n^{\frac{1}{2}+it}}. \ \ \ \ \ (3)

This gives the desired polynomial approximation.
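This heuristic can be tested numerically at modest heights. The following sketch (the values of {T}, {t} and {N} are arbitrary demonstration choices, and the zeta values are computed with the mpmath library) assembles the coefficients (3) and compares {P_t( e^{2\pi i z/N} )} with {\zeta(\phi_t(z))} for a few small {z}; since both the truncation of the Dirichlet series and the binning are themselves heuristic, one should only expect rough agreement.

```python
# Rough numerical test of Heuristic 1 via the coefficients (3).
# T, t, N are arbitrary demonstration values; mpmath evaluates zeta on the critical line.
import cmath, math
import mpmath

T, t, N = 3000, 4500.0, 8          # t is a "typical" height in [T, 2T]
logT = math.log(T)

def phi(z):                         # the change of variables phi_t(z)
    return mpmath.mpc(0.5, t) - 2j * mpmath.pi * z / logT

# a_j(t): partial Dirichlet series over the bin T^{j/N} <= n < T^{(j+1)/N}
a = [0j] * (N + 1)
for n in range(1, T + 1):
    j = min(int(N * math.log(n) / logT), N)
    a[j] += n ** (-0.5) * cmath.exp(-1j * t * math.log(n))

def P(Z):
    return sum(a[j] * Z ** j for j in range(N + 1))

for z in (0.0, 0.2, 0.4):
    lhs = complex(mpmath.zeta(phi(z)))
    rhs = P(cmath.exp(2j * math.pi * z / N))
    print(f"z={z}: zeta={lhs:.3f}  P_t={rhs:.3f}")
```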

A second non-rigorous justification is as follows. From factorisation theorems such as the Hadamard factorisation theorem we expect to have

\displaystyle  \zeta(s) \propto \prod_\rho (s-\rho) \times \dots

where {\rho} runs over the non-trivial zeroes of {\zeta}, and there are some additional factors arising from the trivial zeroes and poles of {\zeta} which we will ignore here; we will also completely ignore the issue of how to renormalise the product to make it converge properly. In the region {s = \frac{1}{2} + it + o( N / \log T) = \phi_t( \{ z: z = o(N) \})}, the dominant contribution to this product (besides multiplicative constants) should arise from zeroes {\rho} that are also in this region. The Riemann-von Mangoldt formula suggests that for “typical” {t} one should have about {N} such zeroes. If one lets {\rho_1,\dots,\rho_N} be any enumeration of {N} zeroes closest to {\frac{1}{2}+it}, and then repeats this set of zeroes periodically by period {\frac{2\pi i N}{\log T}}, one then expects to have an approximation of the form

\displaystyle  \zeta(s) \propto \prod_{j=1}^N \prod_{k \in {\bf Z}} (s-(\rho_j+\frac{2\pi i kN}{\log T}) )

again ignoring all issues of convergence. If one writes {s = \phi_t(z)} and {\rho_j = \phi_t(\lambda_j)}, then Euler’s famous product formula for sine basically gives

\displaystyle  \prod_{k \in {\bf Z}} (s-(\rho_j+\frac{2\pi i kN}{\log T}) ) \propto \prod_{k \in {\bf Z}} (z - (\lambda_j+ k N) )

\displaystyle  \propto (e^{2\pi i z/N} - e^{2\pi i \lambda_j/N})

(here we are glossing over some technical issues regarding renormalisation of the infinite products, which can be dealt with by studying the asymptotics as {\mathrm{Im}(z) \rightarrow \infty}) and hence we expect

\displaystyle  \zeta(s) \propto \prod_{j=1}^N (e^{2\pi i z/N} - e^{2\pi i \lambda_j/N}).

This again gives the desired polynomial approximation.

Below the fold we give a rigorous version of the second argument suitable for “microscale” analysis. More precisely, we will show

Theorem 2 Let {N = N(T)} be an integer going sufficiently slowly to infinity. Let {W_0 \ll N} go to infinity sufficiently slowly depending on {N}. Let {t} be drawn uniformly at random from {[T,2T]}. Then with probability {1-o(1)} (in the limit {T \rightarrow \infty}), and possibly after adjusting {N} by {1}, there exists a polynomial {P_t(Z)} of degree {N} and obeying the functional equation (9) below, such that

\displaystyle  \zeta( \phi_t(z) ) = (1+o(1)) P_t( e^{2\pi i z/N} ) \ \ \ \ \ (4)

whenever {|z| \leq W_0}.

It should be possible to refine the arguments to extend this theorem to the mesoscale setting by letting {N} be anything growing like {o(\log T)}, and {W_0} anything growing like {o(N)}; also one should be able to remove the need to adjust {N} by {1}. We have not attempted these optimisations here.

Many conjectures and arguments involving the Riemann zeta function can be heuristically translated into arguments involving the polynomials {P_t(Z)}, which one can view as random degree {N} polynomials if {t} is interpreted as a random variable drawn uniformly at random from {[T,2T]}. These can be viewed as providing a “toy model” for the theory of the Riemann zeta function, in which the complex analysis is simplified to the study of the zeroes and coefficients of this random polynomial (for instance, the role of the gamma function is now played by a monomial in {Z}). This model also makes the zeta function theory more closely resemble the function field analogues of this theory (in which the analogue of the zeta function is also a polynomial (or a rational function) in some variable {Z}, as per the Weil conjectures). The parameter {N} is at our disposal to choose, and reflects the scale {\approx N/\log T} at which one wishes to study the zeta function. For “macroscopic” questions, at which one wishes to understand the zeta function at unit scales, it is natural to take {N \approx \log T} (or very slightly larger), while for “microscopic” questions one would take {N} close to {1} and only growing very slowly with {T}. For the intermediate “mesoscopic” scales one would take {N} somewhere between {1} and {\log T}. Unfortunately, the statistical properties of {P_t} are only understood well at a conjectural level at present; even if one assumes the Riemann hypothesis, our understanding of {P_t} is largely restricted to the computation of low moments (e.g., the second or fourth moments) of various linear statistics of {P_t} and related functions (e.g., {1/P_t}, {P'_t/P_t}, or {\log P_t}).

Let’s now heuristically explore the polynomial analogues of this theory in a bit more detail. The Riemann hypothesis basically corresponds to the assertion that all the {N} zeroes of the polynomial {P_t(Z)} lie on the unit circle {|Z|=1} (which, after the change of variables {Z = e^{2\pi i z/N}}, corresponds to {z} being real); in a similar vein, the GUE hypothesis corresponds to {P_t(Z)} having the asymptotic law of a random scalar {a_N(t)} times the characteristic polynomial of a random unitary {N \times N} matrix. Next, we consider what happens to the functional equation

\displaystyle  \zeta(s) = \chi(s) \zeta(1-s) \ \ \ \ \ (5)

where

\displaystyle  \chi(s) := 2^s \pi^{s-1} \sin(\frac{\pi s}{2}) \Gamma(1-s).

A routine calculation involving Stirling’s formula reveals that

\displaystyle  \chi(\frac{1}{2}+it) = (1+o(1)) e^{-2\pi i L(t)} \ \ \ \ \ (6)

with {L(t) := \frac{t}{2\pi} \log \frac{t}{2\pi} - \frac{t}{2\pi} + \frac{7}{8}}; one also has the closely related approximation

\displaystyle  \frac{\chi'}{\chi}(s) = -\log T + O(1) \ \ \ \ \ (7)

and hence

\displaystyle  \chi(\phi_t(z)) = (1+o(1)) e^{-2\pi i L(t)} e^{2\pi i z} \ \ \ \ \ (8)

when {z = o(\log T)}. Since {\zeta(1-s) = \overline{\zeta(\overline{1-s})}}, applying (5) with {s = \phi_t(z)} and using the approximation (2) suggests a functional equation for {P_t}:

\displaystyle  P_t(e^{2\pi i z/N}) = e^{-2\pi i L(t)} e^{2\pi i z} \overline{P_t(e^{2\pi i \overline{z}/N})}

or in terms of {Z := e^{2\pi i z/N}},

\displaystyle  P_t(Z) = e^{-2\pi i L(t)} Z^N \overline{P_t}(1/Z) \ \ \ \ \ (9)

where {\overline{P_t}(Z) := \overline{P_t(\overline{Z})}} is the polynomial {P_t} with all the coefficients replaced by their complex conjugate. Thus if we write

\displaystyle  P_t(Z) = \sum_{j=0}^N a_j Z^j

then the functional equation can be written as

\displaystyle  a_j(t) = e^{-2\pi i L(t)} \overline{a_{N-j}(t)}.

We remark that if we use the heuristic (3) (interpreting the cutoffs in the {n} summation in a suitably vague fashion) then this equation can be viewed as an instance of the Poisson summation formula.

Another consequence of the functional equation is that the zeroes of {P_t} are symmetric with respect to inversion {Z \mapsto 1/\overline{Z}} across the unit circle. This is of course consistent with the Riemann hypothesis, but does not obviously imply it. The phase {L(t)} is of little consequence in this functional equation; one could easily conceal it by working with the phase rotation {e^{\pi i L(t)} P_t} of {P_t} instead.

One consequence of the functional equation is that {e^{\pi i L(t)} e^{-i N \theta/2} P_t(e^{i\theta})} is real for any {\theta \in {\bf R}}; the same is then true for the derivative {e^{\pi i L(t)} e^{-i N \theta/2} (i e^{i\theta} P'_t(e^{i\theta}) - i \frac{N}{2} P_t(e^{i\theta}))}. Among other things, this implies that {P'_t(e^{i\theta})} cannot vanish unless {P_t(e^{i\theta})} does also; thus the zeroes of {P'_t} will not lie on the unit circle except where {P_t} has repeated zeroes. The analogous statement is true for {\zeta}; the zeroes of {\zeta'} will not lie on the critical line except where {\zeta} has repeated zeroes.
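These symmetries are easy to check numerically. The following sketch (the construction of the random coefficients, the values of {N} and {L}, and the seed are all arbitrary choices) builds a polynomial obeying (9) and confirms that its zeroes pair up under {Z \mapsto 1/\overline{Z}} and that {e^{\pi i L} e^{-iN\theta/2} P_t(e^{i\theta})} is real up to rounding error.

```python
# Sketch: a random polynomial P(Z) = sum_j a_j Z^j obeying the functional equation (9)
# with an arbitrary phase L; we check the zero symmetry Z -> 1/conj(Z) and the
# reality of e^{pi i L} e^{-i N theta / 2} P(e^{i theta}).
import numpy as np

rng = np.random.default_rng(1)
N, L = 7, 0.3
phase = np.exp(-2j * np.pi * L)

a = rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)
for j in range((N + 1) // 2 + 1):          # impose a_j = e^{-2 pi i L} conj(a_{N-j})
    a[j] = phase * np.conj(a[N - j])
if N % 2 == 0:                              # for even N the middle coefficient needs
    a[N // 2] = np.exp(-1j * np.pi * L) * abs(a[N // 2])   # e^{pi i L} a_{N/2} to be real

roots = np.roots(a[::-1])                   # np.roots expects highest-degree coefficient first
for lam in roots:
    print(abs(lam), np.min(np.abs(roots - 1 / np.conj(lam))))   # second column ~ 0

theta = rng.uniform(0.0, 2 * np.pi, size=5)
g = np.exp(1j * np.pi * L) * np.exp(-1j * N * theta / 2) * np.polyval(a[::-1], np.exp(1j * theta))
print(np.max(np.abs(g.imag)))               # ~ 1e-15: g is real up to rounding
```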

Relating to this fact, it is a classical result of Speiser that the Riemann hypothesis is true if and only if all the zeroes of the derivative {\zeta'} of the zeta function in the critical strip lie on or to the right of the critical line. The analogous result for polynomials is

Proposition 3 We have

\displaystyle  \# \{ |Z| = 1: P_t(Z) = 0 \} = N - 2 \# \{ |Z| > 1: P'_t(Z) = 0 \}

(where all zeroes are counted with multiplicity.) In particular, the zeroes of {P_t(Z)} all lie on the unit circle if and only if the zeroes of {P'_t(Z)} lie in the closed unit disk.

Proof: From the functional equation we have

\displaystyle  \# \{ |Z| = 1: P_t(Z) = 0 \} = N - 2 \# \{ |Z| > 1: P_t(Z) = 0 \}.

Thus it will suffice to show that {P_t} and {P'_t} have the same number of zeroes outside the closed unit disk.

Set {f(z) := z \frac{P'_t(z)}{P_t(z)}}; then {f} is a rational function that does not have a zero or pole at infinity. For {e^{i\theta}} not a zero of {P_t}, we have already seen that {e^{\pi i L(t)} e^{-i N \theta/2} P_t(e^{i\theta})} and {e^{\pi i L(t)} e^{-i N \theta/2} (i e^{i\theta} P'_t(e^{i\theta}) - i \frac{N}{2} P_t(e^{i\theta}))} are real, so on dividing we see that {i f(e^{i\theta}) - \frac{iN}{2}} is always real, that is to say

\displaystyle  \mathrm{Re} f(e^{i\theta}) = \frac{N}{2}.

(This can also be seen by writing {f(e^{i\theta}) = \sum_\lambda \frac{1}{1-e^{-i\theta} \lambda}}, where {\lambda} runs over the zeroes of {P_t}, and using the fact that these zeroes are symmetric with respect to reflection across the unit circle.) When {e^{i\theta}} is a zero of {P_t}, {f(z)} has a simple pole at {e^{i\theta}} with residue a positive multiple of {e^{i\theta}}, and so {f(z)} stays in the right half-plane if one traverses a semicircular arc around {e^{i\theta}} outside the unit disk. From this and continuity we see that {f} stays in the right half-plane on a circle slightly larger than the unit circle, and hence by the argument principle it has the same number of zeroes and poles outside of this circle, giving the claim. \Box
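Proposition 3 can also be checked numerically on random polynomials obeying (9); the sketch below uses a crude tolerance (an ad hoc choice) to decide whether a zero lies on the unit circle.

```python
# Numerical check of Proposition 3 on random polynomials obeying (9) (with L = 0).
# The tolerance used to decide whether a zero lies on the unit circle is an ad hoc choice.
import numpy as np

rng = np.random.default_rng(2)
tol = 1e-8

def random_self_inversive(N):
    a = rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)
    for j in range((N + 1) // 2 + 1):       # impose a_j = conj(a_{N-j})
        a[j] = np.conj(a[N - j])
    if N % 2 == 0:
        a[N // 2] = abs(a[N // 2])          # middle coefficient must be real when L = 0
    return a

N = 10
for trial in range(5):
    a = random_self_inversive(N)
    P_roots = np.roots(a[::-1])
    dP_roots = np.polyder(np.poly1d(a[::-1])).roots
    on_circle = int(np.sum(np.abs(np.abs(P_roots) - 1) < tol))
    outside = int(np.sum(np.abs(dP_roots) > 1 + tol))
    print(on_circle, N - 2 * outside)       # the two counts should agree
```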

From the functional equation and the chain rule, {Z} is a zero of {P'_t} if and only if {1/\overline{Z}} is a zero of {N P_t - Z P'_t}. We can thus write the above proposition in the equivalent form

\displaystyle  \# \{ |Z| = 1: P_t(Z) = 0 \} = N - 2 \# \{ |Z| < 1: NP_t(Z) - ZP'_t(Z) = 0 \}.

One can use this identity to get a lower bound on the number of zeroes of {P_t} by the method of mollifiers. Namely, for any other polynomial {M_t}, we clearly have

\displaystyle  \# \{ |Z| = 1: P_t(Z) = 0 \}

\displaystyle \geq N - 2 \# \{ |Z| < 1: M_t(Z)(NP_t(Z) - ZP'_t(Z)) = 0 \}.

By Jensen’s formula, we have for any {r>1} that

\displaystyle  \log |M_t(0)| |N P_t(0)|

\displaystyle \leq -(\log r) \# \{ |Z| < 1: M_t(Z)(NP_t(Z) - ZP'_t(Z)) = 0 \}

\displaystyle + \frac{1}{2\pi} \int_0^{2\pi} \log |M_t(re^{i\theta})(NP_t(re^{i\theta}) - re^{i\theta} P'_t(re^{i\theta}))|\ d\theta.

We therefore have

\displaystyle  \# \{ |Z| = 1: P_t(Z) = 0 \} \geq N + \frac{2}{\log r} \log |M_t(0)| |N P_t(0)|

\displaystyle - \frac{1}{\log r} \frac{1}{2\pi} \int_0^{2\pi} \log |M_t(re^{i\theta})(NP_t(re^{i\theta}) - re^{i\theta} P'_t(re^{i\theta}))|^2\ d\theta.

As the logarithm function is concave, we can apply Jensen’s inequality to conclude

\displaystyle  {\bf E} \# \{ |Z| = 1: P_t(Z) = 0 \} \geq N

\displaystyle + {\bf E} \frac{2}{\log r} \log |M_t(0)| |N P_t(0)|

\displaystyle - \frac{1}{\log r} \log \left( \frac{1}{2\pi} \int_0^{2\pi} {\bf E} |M_t(re^{i\theta})(NP_t(re^{i\theta}) - re^{i\theta} P'_t(re^{i\theta}))|^2\ d\theta\right).

where the expectation is over the {t} parameter. It turns out that by choosing the mollifier {M_t} carefully in order to make {M_t P_t} behave like the function {1} (while keeping the degree of {M_t} small enough that one can compute the second moment here), and then optimising in {r}, one can use this inequality to get a positive fraction of zeroes of {P_t} on the unit circle on average. This is the polynomial analogue of a classical argument of Levinson, who used this to show that at least one third of the zeroes of the Riemann zeta function are on the critical line; all later improvements on this fraction have been based on some version of Levinson’s method, mainly focusing on more advanced choices for the mollifier {M_t} and for the differential operator {N - \partial_z} that implicitly appears in the above approach. (The most recent lower bound I know of is {0.4191637}, due to Pratt and Robles. In principle (as observed by Farmer) this bound can get arbitrarily close to {1} if one is allowed to use arbitrarily long mollifiers, but establishing this seems of comparable difficulty to unsolved problems such as the pair correlation conjecture; see this paper of Radziwill for more discussion.) A variant of these techniques can also establish “zero density estimates” of the following form: for any {W \geq 1}, the number of zeroes of {P_t} that lie further than {\frac{W}{N}} from the unit circle is of order {O( e^{-cW} N )} on average for some absolute constant {c>0}. Thus, roughly speaking, most zeroes of {P_t} lie within {O(1/N)} of the unit circle. (Analogues of these results for the Riemann zeta function were worked out by Selberg, by Jutila, and by Conrey, with increasingly strong values of {c}.)

The zeroes of {P'_t} tend to live somewhat closer to the origin than the zeroes of {P_t}. Suppose for instance that we write

\displaystyle  P_t(Z) = \sum_{j=0}^N a_j(t) Z^j = a_N(t) \prod_{j=1}^N (Z - \lambda_j)

where {\lambda_1,\dots,\lambda_N} are the zeroes of {P_t(Z)}, then by evaluating at zero we see that

\displaystyle  \lambda_1 \dots \lambda_N = (-1)^N a_0(t) / a_N(t)

and the right-hand side is of unit magnitude by the functional equation. However, if we differentiate

\displaystyle  P'_t(Z) = \sum_{j=1}^N a_j(t) j Z^{j-1} = N a_N(t) \prod_{j=1}^{N-1} (Z - \lambda'_j)

where {\lambda'_1,\dots,\lambda'_{N-1}} are the zeroes of {P'_t}, then by evaluating at zero we now see that

\displaystyle  \lambda'_1 \dots \lambda'_{N-1} = (-1)^{N-1} a_1(t) / N a_N(t).

The right-hand side would now be typically expected to be of size {O(1/N) \approx \exp(- \log N)}, and so on average we expect the {\lambda'_j} to have magnitude like {\exp( - \frac{\log N}{N} )}, that is to say pushed inwards from the unit circle by a distance roughly {\frac{\log N}{N}}. The analogous result for the Riemann zeta function is that the zeroes of {\zeta'(s)} at height {\sim T} lie at a distance roughly {\frac{\log\log T}{\log T}} to the right of the critical line on the average; see this paper of Levinson and Montgomery for a precise statement.
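Both the exact identity for the product of the critical points and the {\exp(-\frac{\log N}{N})} heuristic for their typical magnitude are easy to probe numerically; here is a sketch along the same lines as the earlier ones (the degree {N} and the seed are arbitrary choices, and the second comparison is only an order-of-magnitude statement).

```python
# Sketch: for a random polynomial obeying (9) with L = 0, check the identity
# |lambda'_1 ... lambda'_{N-1}| = |a_1| / (N |a_N|) and compare the geometric mean of
# the critical point magnitudes with exp(-log(N)/N).  N and the seed are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
N = 50

a = rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)
for j in range((N + 1) // 2 + 1):           # impose a_j = conj(a_{N-j}), i.e. (9) with L = 0
    a[j] = np.conj(a[N - j])
if N % 2 == 0:
    a[N // 2] = abs(a[N // 2])              # middle coefficient must be real when L = 0

crit = np.polyder(np.poly1d(a[::-1])).roots
print(np.prod(np.abs(crit)), abs(a[1]) / (N * abs(a[N])))             # agree up to rounding
print(np.exp(np.mean(np.log(np.abs(crit)))), np.exp(-np.log(N) / N))  # comparable in size
```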


(This post is mostly intended for my own reference, as I found myself repeatedly looking up several conversions between polynomial bases on various occasions.)

Let {\mathrm{Poly}_{\leq n}} denote the vector space of polynomials {P:{\bf R} \rightarrow {\bf R}} of one variable {x} with real coefficients of degree at most {n}. This is a vector space of dimension {n+1}, and the sequence of these spaces form a filtration:

\displaystyle  \mathrm{Poly}_{\leq 0} \subset \mathrm{Poly}_{\leq 1} \subset \mathrm{Poly}_{\leq 2} \subset \dots

A standard basis for these vector spaces is given by the monomials {x^0, x^1, x^2, \dots}: every polynomial {P(x)} in {\mathrm{Poly}_{\leq n}} can be expressed uniquely as a linear combination of the first {n+1} monomials {x^0, x^1, \dots, x^n}. More generally, if one has any sequence {Q_0(x), Q_1(x), Q_2(x), \dots} of polynomials, with each {Q_n} of degree exactly {n}, then an easy induction shows that {Q_0(x),\dots,Q_n(x)} forms a basis for {\mathrm{Poly}_{\leq n}}.

In particular, if we have two such sequences {Q_0(x), Q_1(x), Q_2(x),\dots} and {R_0(x), R_1(x), R_2(x), \dots} of polynomials, with each {Q_n} of degree {n} and each {R_k} of degree {k}, then {Q_n} must be expressible uniquely as a linear combination of the polynomials {R_0,R_1,\dots,R_n}, thus we have an identity of the form

\displaystyle  Q_n(x) = \sum_{k=0}^n c_{QR}(n,k) R_k(x)

for some change of basis coefficients {c_{QR}(n,k) \in {\bf R}}. These coefficients describe how to convert a polynomial expressed in the {Q_n} basis into a polynomial expressed in the {R_k} basis.

Many standard combinatorial quantities {c(n,k)} involving two natural numbers {0 \leq k \leq n} can be interpreted as such change of basis coefficients. The most familiar example is the binomial coefficients {\binom{n}{k}}, which measure the conversion from the shifted monomial basis {(x+1)^n} to the monomial basis {x^k}, thanks to (a special case of) the binomial formula:

\displaystyle  (x+1)^n = \sum_{k=0}^n \binom{n}{k} x^k,

thus for instance

\displaystyle  (x+1)^3 = \binom{3}{0} x^0 + \binom{3}{1} x^1 + \binom{3}{2} x^2 + \binom{3}{3} x^3

\displaystyle  = 1 + 3x + 3x^2 + x^3.

More generally, for any shift {h}, the conversion from {(x+h)^n} to {x^k} is measured by the coefficients {h^{n-k} \binom{n}{k}}, thanks to the general case of the binomial formula.

But there are other bases of interest too. For instance if one uses the falling factorial basis

\displaystyle  (x)_n := x (x-1) \dots (x-n+1)

then the conversion from falling factorials to monomials is given by the Stirling numbers of the first kind {s(n,k)}:

\displaystyle  (x)_n = \sum_{k=0}^n s(n,k) x^k,

thus for instance

\displaystyle  (x)_3 = s(3,0) x^0 + s(3,1) x^1 + s(3,2) x^2 + s(3,3) x^3

\displaystyle  = 0 + 2 x - 3x^2 + x^3

and the conversion back is given by the Stirling numbers of the second kind {S(n,k)}:

\displaystyle  x^n = \sum_{k=0}^n S(n,k) (x)_k

thus for instance

\displaystyle  x^3 = S(3,0) (x)_0 + S(3,1) (x)_1 + S(3,2) (x)_2 + S(3,3) (x)_3

\displaystyle  = 0 + x + 3 x(x-1) + x(x-1)(x-2).

If one uses the binomial functions {\binom{x}{n} = \frac{1}{n!} (x)_n} as a basis instead of the falling factorials, one of course can rewrite these conversions as

\displaystyle  \binom{x}{n} = \sum_{k=0}^n \frac{1}{n!} s(n,k) x^k

and

\displaystyle  x^n = \sum_{k=0}^n k! S(n,k) \binom{x}{k}

thus for instance

\displaystyle  \binom{x}{3} = 0 + \frac{1}{3} x - \frac{1}{2} x^2 + \frac{1}{6} x^3

and

\displaystyle  x^3 = 0 + \binom{x}{1} + 6 \binom{x}{2} + 6 \binom{x}{3}.
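These conversions are easy to verify by machine. Here is a short sketch (in Python; the Stirling numbers are generated from the standard recurrences recorded later in this post) that checks the two falling-factorial conversion identities at a range of integer points; the binomial-basis variants then follow on dividing by the appropriate factorials.

```python
# Sketch: verify the falling-factorial <-> monomial conversions via the Stirling numbers,
# computing s(n,k) and S(n,k) from their standard recurrences.
from functools import lru_cache

@lru_cache(None)
def s(n, k):                       # signed Stirling numbers of the first kind
    if n == 0:
        return 1 if k == 0 else 0
    if not 0 <= k <= n:
        return 0
    return s(n - 1, k - 1) - (n - 1) * s(n - 1, k)

@lru_cache(None)
def S(n, k):                       # Stirling numbers of the second kind
    if n == 0:
        return 1 if k == 0 else 0
    if not 0 <= k <= n:
        return 0
    return S(n - 1, k - 1) + k * S(n - 1, k)

def falling(x, n):                 # (x)_n = x (x-1) ... (x-n+1)
    out = 1
    for i in range(n):
        out *= x - i
    return out

for n in range(6):
    for x in range(-3, 4):
        assert falling(x, n) == sum(s(n, k) * x**k for k in range(n + 1))
        assert x**n == sum(S(n, k) * falling(x, k) for k in range(n + 1))
print("conversions verified for n <= 5")
```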

As a slight variant, if one instead uses rising factorials

\displaystyle  (x)^n := x (x+1) \dots (x+n-1)

then the conversion to monomials yields the unsigned Stirling numbers {|s(n,k)|} of the first kind:

\displaystyle  (x)^n = \sum_{k=0}^n |s(n,k)| x^k

thus for instance

\displaystyle  (x)^3 = 0 + 2x + 3x^2 + x^3.

One final basis comes from the polylogarithm functions

\displaystyle  \mathrm{Li}_{-n}(x) := \sum_{j=1}^\infty j^n x^j.

For instance one has

\displaystyle  \mathrm{Li}_1(x) = -\log(1-x)

\displaystyle  \mathrm{Li}_0(x) = \frac{x}{1-x}

\displaystyle  \mathrm{Li}_{-1}(x) = \frac{x}{(1-x)^2}

\displaystyle  \mathrm{Li}_{-2}(x) = \frac{x}{(1-x)^3} (1+x)

\displaystyle  \mathrm{Li}_{-3}(x) = \frac{x}{(1-x)^4} (1+4x+x^2)

\displaystyle  \mathrm{Li}_{-4}(x) = \frac{x}{(1-x)^5} (1+11x+11x^2+x^3)

and more generally one has

\displaystyle  \mathrm{Li}_{-n-1}(x) = \frac{x}{(1-x)^{n+2}} E_n(x)

for all natural numbers {n} and some polynomial {E_n} of degree {n} (the Eulerian polynomials), which when converted to the monomial basis yields the (shifted) Eulerian numbers

\displaystyle  E_n(x) = \sum_{k=0}^n A(n+1,k) x^k.

For instance

\displaystyle  E_3(x) = A(4,0) x^0 + A(4,1) x^1 + A(4,2) x^2 + A(4,3) x^3

\displaystyle  = 1 + 11x + 11x^2 + x^3.

These particular coefficients also have useful combinatorial interpretations. For instance:

  • The binomial coefficient {\binom{n}{k}} is of course the number of {k}-element subsets of {\{1,\dots,n\}}.
  • The unsigned Stirling numbers {|s(n,k)|} of the first kind are the number of permutations of {\{1,\dots,n\}} with exactly {k} cycles. The signed Stirling numbers {s(n,k)} are then given by the formula {s(n,k) = (-1)^{n-k} |s(n,k)|}.
  • The Stirling numbers {S(n,k)} of the second kind are the number of ways to partition {\{1,\dots,n\}} into {k} non-empty subsets.
  • The Eulerian numbers {A(n,k)} are the number of permutations of {\{1,\dots,n\}} with exactly {k} ascents.

These coefficients behave similarly to each other in several ways. For instance, the binomial coefficients {\binom{n}{k}} obey the well known Pascal identity

\displaystyle  \binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1}

(with the convention that {\binom{n}{k}} vanishes outside of the range {0 \leq k \leq n}). In a similar spirit, the unsigned Stirling numbers {|s(n,k)|} of the first kind obey the identity

\displaystyle  |s(n+1,k)| = n |s(n,k)| + |s(n,k-1)|

and the signed counterparts {s(n,k)} obey the identity

\displaystyle  s(n+1,k) = -n s(n,k) + s(n,k-1).

The Stirling numbers of the second kind {S(n,k)} obey the identity

\displaystyle  S(n+1,k) = k S(n,k) + S(n,k-1)

and the Eulerian numbers {A(n,k)} obey the identity

\displaystyle  A(n+1,k) = (k+1) A(n,k) + (n-k+1) A(n,k-1).
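All four recurrences can be used to tabulate the corresponding triangles. As a sanity check, the following sketch (the small helper function is my own ad hoc construction, not from any library) generates the first few rows of each triangle and recovers the small cases appearing in the examples above.

```python
# Sketch: tabulate the binomial, Stirling (both kinds; unsigned for the first kind)
# and Eulerian triangles from the recurrences above, and compare against the worked examples.
def build(rule, rows):
    """Row n has entries k = 0..n; rule(n, k, prev) computes entry (n, k) from row n-1."""
    tri = [[1]]
    for n in range(1, rows + 1):
        prev = tri[-1]
        get = lambda k: prev[k] if 0 <= k < len(prev) else 0
        tri.append([rule(n, k, get) for k in range(n + 1)])
    return tri

binomial  = build(lambda n, k, p: p(k) + p(k - 1), 6)
stirling1 = build(lambda n, k, p: (n - 1) * p(k) + p(k - 1), 6)          # |s(n,k)|
stirling2 = build(lambda n, k, p: k * p(k) + p(k - 1), 6)                # S(n,k)
eulerian  = build(lambda n, k, p: (k + 1) * p(k) + (n - k) * p(k - 1), 6)  # A(n,k)

print(binomial[3])    # [1, 3, 3, 1]
print(stirling1[3])   # [0, 2, 3, 1], i.e. |s(3,k)| for k = 0..3
print(stirling2[3])   # [0, 1, 3, 1], i.e. S(3,k) for k = 0..3
print(eulerian[4])    # [1, 11, 11, 1, 0], i.e. A(4,k) (plus a vanishing k = 4 entry)
```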

I was pleased to learn this week that the 2019 Abel Prize was awarded to Karen Uhlenbeck. Uhlenbeck laid much of the foundations of modern geometric PDE. One of the few papers I have in this area is in fact a joint paper with Gang Tian extending a famous singularity removal theorem of Uhlenbeck for four-dimensional Yang-Mills connections to higher dimensions. In both these papers, it is crucial to be able to construct “Coulomb gauges” for various connections, and there is a clever trick of Uhlenbeck for doing so, introduced in another important paper of hers, which is absolutely critical in my own paper with Tian. Nowadays it would be considered a standard technique, but it was definitely not so at the time that Uhlenbeck introduced it.

Suppose one has a smooth connection {A} on a (closed) unit ball {B(0,1)} in {{\bf R}^n} for some {n \geq 1}, taking values in some Lie algebra {{\mathfrak g}} associated to a compact Lie group {G}. This connection then has a curvature {F(A)}, defined in coordinates by the usual formula

\displaystyle F(A)_{\alpha \beta} = \partial_\alpha A_\beta - \partial_\beta A_\alpha + [A_\alpha,A_\beta]. \ \ \ \ \ (1)

It is natural to place the curvature in a scale-invariant space such as {L^{n/2}(B(0,1))}, and then the natural space for the connection would be the Sobolev space {W^{n/2,1}(B(0,1))}. It is easy to see from (1) and Sobolev embedding that if {A} is bounded in {W^{n/2,1}(B(0,1))}, then {F(A)} will be bounded in {L^{n/2}(B(0,1))}. One can then ask the converse question: if {F(A)} is bounded in {L^{n/2}(B(0,1))}, is {A} bounded in {W^{n/2,1}(B(0,1))}? This can be viewed as asking whether the curvature equation (1) enjoys “elliptic regularity”.

There is a basic obstruction provided by gauge invariance. For any smooth gauge {U: B(0,1) \rightarrow G} taking values in the Lie group, one can gauge transform {A} to

\displaystyle A^U_\alpha := U^{-1} \partial_\alpha U + U^{-1} A_\alpha U

and then a brief calculation shows that the curvature is conjugated to

\displaystyle F(A^U)_{\alpha \beta} = U^{-1} F_{\alpha \beta} U.

This gauge symmetry does not affect the {L^{n/2}(B(0,1))} norm of the curvature tensor {F(A)}, but can make the connection {A} extremely large in {W^{n/2,1}(B(0,1))}, since there is no control on how wildly {U} can oscillate in space.

However, one can hope to overcome this problem by gauge fixing: perhaps if {F(A)} is bounded in {L^{n/2}(B(0,1))}, then one can make {A} bounded in {W^{n/2,1}(B(0,1))} after applying a gauge transformation. The basic and useful result of Uhlenbeck is that this can be done if the {L^{n/2}} norm of {F(A)} is sufficiently small (and then the conclusion is that {A} is small in {W^{n/2,1}}). (For large connections there is a serious issue related to the Gribov ambiguity.) In my (much) later paper with Tian, we adapted this argument, replacing Lebesgue spaces by Morrey space counterparts. (This result was also independently obtained at about the same time by Meyer and Rivière.)

To make the problem elliptic, one can try to impose the Coulomb gauge condition

\displaystyle \partial^\alpha A_\alpha = 0 \ \ \ \ \ (2)

(also known as the Lorenz gauge or Hodge gauge in various papers), together with a natural boundary condition on {\partial B(0,1)} that will not be discussed further here. This turns (1), (2) into a divergence-curl system that is elliptic at the linear level at least. Indeed if one takes the divergence of (1) using (2) one sees that

\displaystyle \partial^\alpha F(A)_{\alpha \beta} = \Delta A_\beta + \partial^\alpha [A_\alpha,A_\beta] \ \ \ \ \ (3)

and if one could somehow ignore the nonlinear term {\partial^\alpha [A_\alpha,A_\beta]} then we would get the required regularity on {A} by standard elliptic regularity estimates.

The problem is then how to handle the nonlinear term. If we already knew that {A} was small in the right norm {W^{n/2,1}(B(0,1))} then one can use Sobolev embedding, Hölder’s inequality, and elliptic regularity to show that the second term in (3) is small compared to the first term, and so one could then hope to eliminate it by perturbative analysis. However, proving that {A} is small in this norm is exactly what we are trying to prove! So this approach seems circular.

Uhlenbeck’s clever way out of this circularity is a textbook example of what is now known as a “continuity” argument. Instead of trying to work just with the original connection {A}, one works with the rescaled connections {A^{(t)}_\alpha(x) := t A_\alpha(tx)} for {0 \leq t \leq 1}, with associated rescaled curvatures {F(A^{(t)})_{\alpha \beta}(x) = t^2 F(A)_{\alpha \beta}(tx)}. If the original curvature {F(A)} is small in {L^{n/2}} norm (e.g. bounded by some small {\varepsilon>0}), then so are all the rescaled curvatures {F(A^{(t)})}. We want to obtain a Coulomb gauge at time {t=1}; this is difficult to do directly, but it is trivial to obtain a Coulomb gauge at time {t=0}, because the connection vanishes at this time. On the other hand, once one has successfully obtained a Coulomb gauge at some time {t \in [0,1]} with {A^{(t)}} small in the natural norm {W^{n/2,1}} (say bounded by {C \varepsilon} for some constant {C} which is large in absolute terms, but not so large compared with say {1/\varepsilon}), the perturbative argument mentioned earlier (combined with the qualitative hypothesis that {A} is smooth) actually works to show that a Coulomb gauge can also be constructed and be small for all times {t' \in [0,1]} sufficiently close to {t}; furthermore, the perturbative analysis actually shows that the nearby gauges enjoy a slightly better bound on the {W^{n/2,1}} norm, say {C\varepsilon/2} rather than {C\varepsilon}. As a consequence of this, the set of times {t} for which one has a good Coulomb gauge obeying the claimed estimates is both open and closed in {[0,1]}, and also contains {t=0}. Since the unit interval {[0,1]} is connected, it must then also contain {t=1}. This concludes the proof.

One of the lessons I drew from this example is to not be deterred (especially in PDE) by an argument seeming to be circular; if the argument is still sufficiently “nontrivial” in nature, it can often be modified into a usefully non-circular argument that achieves what one wants (possibly under an additional qualitative hypothesis, such as a continuity or smoothness hypothesis).

The celebrated decomposition theorem of Fefferman and Stein shows that every function {f \in \mathrm{BMO}({\bf R}^n)} of bounded mean oscillation can be decomposed in the form

\displaystyle f = f_0 + \sum_{i=1}^n R_i f_i \ \ \ \ \ (1)

 

modulo constants, for some {f_0,f_1,\dots,f_n \in L^\infty({\bf R}^n)}, where {R_i := |\nabla|^{-1} \partial_i} are the Riesz transforms. A technical note here: a function in BMO is defined only up to constants (as well as up to the usual almost everywhere equivalence); related to this, if {f_i} is an {L^\infty({\bf R}^n)} function, then the Riesz transform {R_i f_i} is well defined as an element of {\mathrm{BMO}({\bf R}^n)}, but is also only defined up to constants and almost everywhere equivalence.

The original proof of Fefferman and Stein was indirect (relying for instance on the Hahn-Banach theorem). A constructive proof was later given by Uchiyama, and was in fact the topic of the second post on this blog. A notable feature of Uchiyama’s argument is that the construction is quite nonlinear; the vector-valued function {(f_0,f_1,\dots,f_n)} is defined to take values on a sphere, and the iterative construction to build these functions from {f} involves repeatedly projecting a potential approximant to this function to the sphere (also, the high-frequency components of this approximant are constructed in a manner that depends nonlinearly on the low-frequency components, which is a type of technique that has become increasingly common in analysis and PDE in recent years).

It is natural to ask whether the Fefferman-Stein decomposition (1) can be made linear in {f}, in the sense that each of the {f_i, i=0,\dots,n} depend linearly on {f}. Strictly speaking this is easily accomplished using the axiom of choice: take a Hamel basis of {\mathrm{BMO}({\bf R}^n)}, choose a decomposition (1) for each element of this basis, and then extend linearly to all finite linear combinations of these basis functions, which then cover {\mathrm{BMO}({\bf R}^n)} by definition of Hamel basis. But these linear operations have no reason to be continuous as a map from {\mathrm{BMO}({\bf R}^n)} to {L^\infty({\bf R}^n)}. So the correct question is whether the decomposition can be made continuously linear (or equivalently, boundedly linear) in {f}, that is to say whether there exist continuous linear transformations {T_i: \mathrm{BMO}({\bf R}^n) \rightarrow L^\infty({\bf R}^n)} such that

\displaystyle f = T_0 f + \sum_{i=1}^n R_i T_i f \ \ \ \ \ (2)

 

modulo constants for all {f \in \mathrm{BMO}({\bf R}^n)}. Note from the open mapping theorem that one can choose the functions {f_0,\dots,f_n} to depend in a bounded fashion on {f} (thus {\|f_i\|_{L^\infty} \leq C \|f\|_{BMO}} for some constant {C}); however, the open mapping theorem does not guarantee linearity. Using a result of Bartle and Graves one can also make the {f_i} depend continuously on {f}, but again the dependence is not guaranteed to be linear.

It is generally accepted folklore that continuous linear dependence is known to be impossible, but I had difficulty recently tracking down an explicit proof of this assertion in the literature (if anyone knows of a reference, I would be glad to know of it). The closest I found was a proof of a similar statement in this paper of Bourgain and Brezis, which I was able to adapt to establish the current claim. The basic idea is to average over the symmetries of the decomposition, which in the case of (1) are translation invariance, rotation invariance, and dilation invariance. This effectively makes the operators {T_0,T_1,\dots,T_n} invariant under all these symmetries, which forces them to themselves be linear combinations of the identity and Riesz transform operators; however, no such non-trivial linear combination maps {\mathrm{BMO}} to {L^\infty}, and the claim follows. Formal details of this argument (which we phrase in a dual form in order to avoid some technicalities) appear below the fold.


Let {k} be a field, and let {E} be a finite extension of that field; in this post we will denote such a relationship by {k \hookrightarrow E}. We say that {E} is a Galois extension of {k} if the cardinality of the automorphism group {\mathrm{Aut}(E/k)} of {E} fixing {k} is as large as it can be, namely the degree {[E:k]} of the extension. In that case, we call {\mathrm{Aut}(E/k)} the Galois group of {E} over {k} and denote it also by {\mathrm{Gal}(E/k)}. The fundamental theorem of Galois theory then gives a one-to-one correspondence (also known as the Galois correspondence) between the intermediate extensions between {E} and {k} and the subgroups of {\mathrm{Gal}(E/k)}:

Theorem 1 (Fundamental theorem of Galois theory) Let {E} be a Galois extension of {k}.

  • (i) If {k \hookrightarrow F \hookrightarrow E} is an intermediate field between {k} and {E}, then {E} is a Galois extension of {F}, and {\mathrm{Gal}(E/F)} is a subgroup of {\mathrm{Gal}(E/k)}.
  • (ii) Conversely, if {H} is a subgroup of {\mathrm{Gal}(E/k)}, then there is a unique intermediate field {k \hookrightarrow F \hookrightarrow E} such that {\mathrm{Gal}(E/F)=H}; namely {F} is the set of elements of {E} that are fixed by {H}.
  • (iii) If {k \hookrightarrow F_1 \hookrightarrow E} and {k \hookrightarrow F_2 \hookrightarrow E}, then {F_1 \hookrightarrow F_2} if and only if {\mathrm{Gal}(E/F_2)} is a subgroup of {\mathrm{Gal}(E/F_1)}.
  • (iv) If {k \hookrightarrow F \hookrightarrow E} is an intermediate field between {k} and {E}, then {F} is a Galois extension of {k} if and only if {\mathrm{Gal}(E/F)} is a normal subgroup of {\mathrm{Gal}(E/k)}. In that case, {\mathrm{Gal}(F/k)} is isomorphic to the quotient group {\mathrm{Gal}(E/k) / \mathrm{Gal}(E/F)}.

Example 2 Let {k= {\bf Q}}, and let {E = {\bf Q}(e^{2\pi i/n})} be the degree {\phi(n)} Galois extension formed by adjoining a primitive {n^{th}} root of unity (that is to say, {E} is the cyclotomic field of order {n}). Then {\mathrm{Gal}(E/k)} is isomorphic to the multiplicative cyclic group {({\bf Z}/n{\bf Z})^\times} (the invertible elements of the ring {{\bf Z}/n{\bf Z}}). Amongst the intermediate fields, one has the cyclotomic fields of the form {F = {\bf Q}(e^{2\pi i/m})} where {m} divides {n}; they are also Galois extensions, with {\mathrm{Gal}(F/k)} isomorphic to {({\bf Z}/m{\bf Z})^\times} and {\mathrm{Gal}(E/F)} isomorphic to the elements {a} of {({\bf Z}/n{\bf Z})^\times} such that {a(n/m) = (n/m)} modulo {n}. (There can also be other intermediate fields, corresponding to other subgroups of {({\bf Z}/n{\bf Z})^\times}.)

Example 3 Let {k = {\bf C}(z)} be the field of rational functions of one indeterminate {z} with complex coefficients, and let {E = {\bf C}(w)} be the field formed by adjoining an {n^{th}} root {w = z^{1/n}} to {k}, thus {k = {\bf C}(w^n)}. Then {E} is a degree {n} Galois extension of {k} with Galois group isomorphic to {{\bf Z}/n{\bf Z}} (with an element {a \in {\bf Z}/n{\bf Z}} corresponding to the field automorphism of {E} that sends {w} to {e^{2\pi i a/n} w}). The intermediate fields are of the form {F = {\bf C}(w^{n/m})} where {m} divides {n}; they are also Galois extensions, with {\mathrm{Gal}(F/k)} isomorphic to {{\bf Z}/m{\bf Z}} and {\mathrm{Gal}(E/F)} isomorphic to the multiples of {m} in {{\bf Z}/n{\bf Z}}.

There is an analogous Galois correspondence in the covering theory of manifolds. For simplicity we restrict attention to finite covers. If {L} is a connected manifold and {\pi_{L \leftarrow M}: M \rightarrow L} is a finite covering map of {L} by another connected manifold {M}, we denote this relationship by {L \leftarrow M}. (Later on we will change our function notations slightly and write {\pi_{L \leftarrow M}: L \leftarrow M} in place of the more traditional {\pi_{L \leftarrow M}: M \rightarrow L}, and similarly for the deck transformations {g: M \leftarrow M} below; more on this below the fold.) If {L \leftarrow M}, we can define {\mathrm{Aut}(M/L)} to be the group of deck transformations: continuous maps {g: M \rightarrow M} which preserve the fibres of {\pi}. We say that this covering map is a Galois cover if the cardinality of the group {\mathrm{Aut}(M/L)} is as large as it can be. In that case we call {\mathrm{Aut}(M/L)} the Galois group of {M} over {L} and denote it by {\mathrm{Gal}(M/L)}.

Suppose {M} is a finite cover of {L}. An intermediate cover {N} between {M} and {L} is a cover of {L} by {N} which is in turn covered by {M}, so that {L \leftarrow N \leftarrow M}, in such a way that the covering maps are compatible, in the sense that {\pi_{L \leftarrow M}} is the composition of {\pi_{L \leftarrow N}} and {\pi_{N \leftarrow M}}. This sort of compatibility condition will be implicitly assumed whenever we chain together multiple instances of the {\leftarrow} notation. Two intermediate covers {N,N'} are equivalent if they cover each other, in a fashion compatible with all the other covering maps, thus {L \leftarrow N \leftarrow N' \leftarrow M} and {L \leftarrow N' \leftarrow N \leftarrow M}. We then have the analogous Galois correspondence:

Theorem 4 (Fundamental theorem of covering spaces) Let {L \leftarrow M} be a Galois covering.

  • (i) If {L \leftarrow N \leftarrow M} is an intermediate cover between {L} and {M}, then {M} is a Galois cover of {N}, and {\mathrm{Gal}(M/N)} is a subgroup of {\mathrm{Gal}(M/L)}.
  • (ii) Conversely, if {H} is a subgroup of {\mathrm{Gal}(M/L)}, then there is an intermediate cover {L \leftarrow N \leftarrow M}, unique up to equivalence, such that {\mathrm{Gal}(M/N)=H}.
  • (iii) If {L \leftarrow N_1 \leftarrow M} and {L \leftarrow N_2 \leftarrow M}, then {L \leftarrow N_1 \leftarrow N_2 \leftarrow M} if and only if {\mathrm{Gal}(M/N_2)} is a subgroup of {\mathrm{Gal}(M/N_1)}.
  • (iv) If {L \leftarrow N \leftarrow M}, then {N} is a Galois cover of {L} if and only if {\mathrm{Gal}(M/N)} is a normal subgroup of {\mathrm{Gal}(M/L)}. In that case, {\mathrm{Gal}(N/L)} is isomorphic to the quotient group {\mathrm{Gal}(M/L) / \mathrm{Gal}(M/N)}.

Example 5 Let {L= {\bf C}^\times := {\bf C} \backslash \{0\}}, and let {M = {\bf C}^\times} be the {n}-fold cover of {L} with covering map {\pi_{L \leftarrow M}(w) := w^n}. Then {M} is a Galois cover of {L}, and {\mathrm{Gal}(M/L)} is isomorphic to the cyclic group {{\bf Z}/n{\bf Z}}. The intermediate covers are (up to equivalence) of the form {N = {\bf C}^\times} with covering map {\pi_{L \leftarrow N}(u) := u^m} where {m} divides {n}; they are also Galois covers, with {\mathrm{Gal}(N/L)} isomorphic to {{\bf Z}/m{\bf Z}} and {\mathrm{Gal}(M/N)} isomorphic to the multiples of {m} in {{\bf Z}/n{\bf Z}}.

Given the strong similarity between the two theorems, it is natural to ask if there is some more concrete connection between Galois theory and the theory of finite covers.

In one direction, if the manifolds {L,M,N} have an algebraic structure (or a complex structure), then one can relate covering spaces to field extensions by considering the field of rational functions (or meromorphic functions) on the space. For instance, if {L = {\bf C}^\times} and {z} is the coordinate on {L}, one can consider the field {{\bf C}(z)} of rational functions on {L}; the {n}-fold cover {M = {\bf C}^\times} with coordinate {w} from Example 5 similarly has a field {{\bf C}(w)} of rational functions. The covering {\pi_{L \leftarrow M}(w) = w^n} relates the two coordinates {z,w} by the relation {z = w^n}, at which point one sees that the field {{\bf C}(w)} of rational functions on {M} is a degree {n} extension of the field {{\bf C}(z)} of rational functions on {L} (formed by adjoining an {n^{th}} root {w} of {z}). In this way we see that Example 5 is in fact closely related to Example 3.

Exercise 6 What happens if one uses meromorphic functions in place of rational functions in the above example? (To answer this question, I found it convenient to use a discrete Fourier transform associated to the multiplicative action of the {n^{th}} roots of unity on {M} to decompose the meromorphic functions on {M} as a linear combination of functions invariant under this action, times a power {w^j} of the coordinate {w} for {j=0,\dots,n-1}.)

I was curious however about the reverse direction. Starting with some field extensions {k \hookrightarrow F \hookrightarrow E}, is it possible to create manifold-like spaces {M_k \leftarrow M_F \leftarrow M_E} associated to these fields in such a fashion that (say) {M_E} behaves like a “covering space” to {M_k} with a group {\mathrm{Aut}(M_E/M_k)} of deck transformations isomorphic to {\mathrm{Aut}(E/k)}, so that the Galois correspondences agree? Also, given how the notion of a path (and associated concepts such as loops, monodromy and the fundamental group) plays a prominent role in the theory of covering spaces, can spaces such as {M_k} or {M_E} also come with a notion of a path that is somehow compatible with the Galois correspondence?

The standard answer from modern algebraic geometry (as articulated for instance in this nice MathOverflow answer by Minhyong Kim) is to set {M_E} equal to the spectrum {\mathrm{Spec}(E)} of the field {E}. As a set, the spectrum {\mathrm{Spec}(R)} of a commutative ring {R} is defined as the set of prime ideals of {R}. Generally speaking, the map {R \mapsto \mathrm{Spec}(R)} that maps a commutative ring to its spectrum tends to act like an inverse of the operation that maps a space {X} to a ring of functions on that space. For instance, if one considers the commutative ring {{\bf C}[z, z^{-1}]} of regular functions on {M = {\bf C}^\times}, then each point {z_0} in {M} gives rise to the prime ideal {\{ f \in {\bf C}[z, z^{-1}]: f(z_0)=0\}}, and one can check that these are the only such prime ideals (other than the zero ideal {(0)}), giving an almost one-to-one correspondence between {\mathrm{Spec}( {\bf C}[z,z^{-1}] )} and {M}. (The zero ideal corresponds instead to the generic point of {M}.)

Of course, the spectrum of a field such as {E} is just a point, as the zero ideal {(0)} is the only prime ideal. Naively, it would then seem that there is not enough space inside such a point to support a rich enough structure of paths to recover the Galois theory of this field. In modern algebraic geometry, one addresses this issue by considering not just the set-theoretic elements of {E}, but more general “base points” {p: \mathrm{Spec}(b) \rightarrow \mathrm{Spec}(E)} that map from some other (affine) scheme {\mathrm{Spec}(b)} to {\mathrm{Spec}(E)} (one could also consider non-affine base points of course). One has to rework many of the fundamentals of the subject to accommodate this “relative point of view“, for instance replacing the usual notion of topology with an étale topology, but once one does so one obtains a very satisfactory theory.

As an exercise, I set myself the task of trying to interpret Galois theory as an analogue of covering space theory in a more classical fashion, without explicit reference to more modern concepts such as schemes, spectra, or étale topology. After some experimentation, I found a reasonably satisfactory way to do so as follows. The space {M_E} that one associates with {E} in this classical perspective is not the single point {\mathrm{Spec}(E)}, but instead the much larger space consisting of ring homomorphisms {p: E \rightarrow b} from {E} to arbitrary integral domains {b}; informally, {M_E} consists of all the “models” or “representations” of {E} (in the spirit of this previous blog post). (There is a technical set-theoretic issue here because the class of integral domains {R} is a proper class, so that {M_E} will also be a proper class; I will completely ignore such technicalities in this post.) We view each such homomorphism {p: E \rightarrow b} as a single point in {M_E}. The analogous notion of a path from one point {p: E \rightarrow b} to another {p': E \rightarrow b'} is then a homomorphism {\gamma: b \rightarrow b'} of integral domains, such that {p'} is the composition of {p} with {\gamma}. Note that every prime ideal {I} in the spectrum {\mathrm{Spec}(R)} of a commutative ring {R} gives rise to a point {p_I} in the space {M_R} defined here, namely the quotient map {p_I: R \rightarrow R/I} to the ring {R/I}, which is an integral domain because {I} is prime. So one can think of {\mathrm{Spec}(R)} as being a distinguished subset of {M_R}; alternatively, one can think of {M_R} as a sort of “penumbra” surrounding {\mathrm{Spec}(R)}. In particular, when {R} is a field, {\mathrm{Spec}(R) = \{(0)\}} defines a special point {p_R} in {M_R}, namely the identity homomorphism {p_R: R \rightarrow R}.

Below the fold I would like to record this interpretation of Galois theory, by first revisiting the theory of covering spaces using paths as the basic building block, and then adapting that theory to the theory of field extensions using the spaces indicated above. This is not too far from the usual scheme-theoretic way of phrasing the connection between the two topics (basically I have replaced étale-type points {p: \mathrm{Spec}(b) \rightarrow \mathrm{Spec}(E)} with more classical points {p: E \rightarrow b}), but I had not seen it explicitly articulated before, so I am recording it here for my own benefit and for any other readers who may be interested.


About six years ago on this blog, I started thinking about trying to make a web-based game based around high-school algebra, and ended up using Scratch to write a short but playable puzzle game in which one solves linear equations for an unknown {x} using a restricted set of moves. (At almost the same time, there were a number of more professionally made games released along similar lines, most notably Dragonbox.)

Since then, I have thought a couple of times about whether there were other parts of mathematics which could be gamified in a similar fashion. Shortly after my first blog posts on this topic, I experimented with a similar gamification of Lewis Carroll’s classic list of logic puzzles, but the results were quite clunky, and I was never satisfied with them.

Over the last few weeks I returned to this topic though, thinking in particular about how to gamify the rules of inference of propositional logic, in a manner that at least vaguely resembles how mathematicians actually go about making logical arguments (e.g., splitting into cases, arguing by contradiction, using previous results as lemmas to help with subsequent ones, and so forth). The rules of inference are a list of a dozen or so deductive rules concerning propositional sentences (things like “({A} AND {B}) OR (NOT {C})”, where {A,B,C} are some formulas). A typical such rule is Modus Ponens: if the sentence {A} is known to be true, and the implication “{A} IMPLIES {B}” is also known to be true, then one can deduce that {B} is also true. Furthermore, in this deductive calculus it is possible to temporarily introduce some unproven statements as assumptions, only to discharge them later. In particular, we have the deduction theorem: if, after making an assumption {A}, one is able to derive the statement {B}, then one can conclude that the implication “{A} IMPLIES {B}” is true without any further assumption.
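As a toy illustration of these mechanics (this is only a Python sketch written for this post, not the actual Javascript code of QED, and the representation of sentences is an ad hoc choice), one can model a proof environment as a set of sentences known to be true and implement Modus Ponens and the deduction theorem as operations on such environments.

```python
# Toy sketch (not the actual QED code): a proof environment is a set of sentences known
# to be true under the current assumptions; Modus Ponens and the deduction theorem act on it.

def implies(a, b):
    return ("IMPLIES", a, b)

def modus_ponens(env, a):
    """From A and (A IMPLIES B) in the environment, deduce B (for every such B)."""
    if a not in env:
        return env
    return env | {s[2] for s in env if isinstance(s, tuple) and s[:2] == ("IMPLIES", a)}

def deduction_theorem(assumption, derive, hypotheses):
    """Open a sub-environment in which 'assumption' holds, run a derivation in it,
    then discharge the assumption: every derived B yields (assumption IMPLIES B)."""
    sub_env = derive(hypotheses | {assumption})
    return {implies(assumption, b) for b in sub_env}

# Example: from the hypotheses "A IMPLIES B" and "B IMPLIES C", derive "A IMPLIES C".
A, B, C = "A", "B", "C"
hypotheses = {implies(A, B), implies(B, C)}

def derivation(env):
    env = modus_ponens(env, A)      # deduce B
    env = modus_ponens(env, B)      # deduce C
    return env

print(implies(A, C) in deduction_theorem(A, derivation, hypotheses))   # True
```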

It took a while for me to come up with a workable game-like graphical interface for all of this, but I finally managed to set one up, now using Javascript instead of Scratch (which would be hopelessly inadequate for this task); indeed, part of the motivation of this project was to finally learn how to program in Javascript, which turned out to be not as formidable as I had feared (certainly having experience with other C-like languages like C++, Java, or lua, as well as some prior knowledge of HTML, was very helpful). The main code for this project is available here. Using this code, I have created an interactive textbook in the style of a computer game, which I have titled “QED”. This text contains thirty-odd exercises arranged in twelve sections that function as game “levels”, in which one has to use a given set of rules of inference, together with a given set of hypotheses, to reach a desired conclusion. The set of available rules increases as one advances through the text; in particular, each new section gives one or more rules, and additionally each exercise one solves automatically becomes a new deduction rule one can exploit in later levels, much as lemmas and propositions are used in actual mathematics to prove more difficult theorems. The text automatically tries to match available deduction rules to the sentences one clicks on or drags, to try to minimise the amount of manual input one needs to actually make a deduction.

Most of one’s proof activity takes place in a “root environment” of statements that are known to be true (under the given hypothesis), but for more advanced exercises one has to also work in sub-environments in which additional assumptions are made. I found the graphical metaphor of nested boxes to be useful to depict this tree of sub-environments, and it seems to combine well with the drag-and-drop interface.

The text also logs one’s moves in a more traditional proof format, which shows how the mechanics of the game correspond to a traditional mathematical argument. My hope is that this will give students a way to understand the underlying concept of forming a proof in a manner that is more difficult to achieve using traditional, non-interactive textbooks.

I have tried to organise the exercises in a game-like progression in which one first works with easy levels that train the player on a small number of moves, and then introduce more advanced moves one at a time. As such, the order in which the rules of inference are introduced is a little idiosyncratic. The most powerful rule (the law of the excluded middle, which is what separates classical logic from intuitionistic logic) is saved for the final section of the text.

Anyway, I am now satisfied enough with the state of the code and the interactive text that I am willing to make both available (and open source; I selected a CC-BY licence for both), and would be happy to receive feedback on any aspect of either. In principle one could extend the game mechanics to other mathematical topics than the propositional calculus – the rules of inference for first-order logic being an obvious next candidate – but it seems to make sense to focus just on propositional logic for now.

Let {(X,T,\mu)} be a measure-preserving system – a probability space {(X,\mu)} equipped with a measure-preserving translation {T} (which for simplicity of discussion we shall assume to be invertible). We will informally think of two points {x,y} in this space as being “close” if {y = T^n x} for some {n} that is not too large; this allows one to distinguish between “local” structure at a point {x} (in which one only looks at nearby points {T^n x} for moderately large {n}) and “global” structure (in which one looks at the entire space {X}). The local/global distinction is also known as the time-averaged/space-averaged distinction in ergodic theory.

A measure-preserving system is said to be ergodic if all the invariant sets are either zero measure or full measure. An equivalent form of this statement is that any measurable function {f: X \rightarrow {\bf R}} which is locally essentially constant in the sense that {f(Tx) = f(x)} for {\mu}-almost every {x}, is necessarily globally essentially constant in the sense that there is a constant {c} such that {f(x) = c} for {\mu}-almost every {x}. A basic consequence of ergodicity is the mean ergodic theorem: if {f \in L^2(X,\mu)}, then the averages {x \mapsto \frac{1}{N} \sum_{n=1}^N f(T^n x)} converge in {L^2} norm to the mean {\int_X f\ d\mu}. (The mean ergodic theorem also applies to other {L^p} spaces with {1 < p < \infty}, though it is usually proven first in the Hilbert space {L^2}.) Informally: in ergodic systems, time averages are asymptotically equal to space averages. Specialising to the case of indicator functions, this implies in particular that {\frac{1}{N} \sum_{n=1}^N \mu( E \cap T^n E)} converges to {\mu(E)^2} for any measurable set {E}.
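As a concrete illustration of this last statement, consider the ergodic rotation {x \mapsto x + \alpha \hbox{ mod } 1} on the circle with {\alpha} irrational and {E} an interval; the overlaps {\mu(E \cap T^n E)} can be computed exactly, and their averages indeed approach {\mu(E)^2}. (A small sketch; the choices of {\alpha} and of {E} are arbitrary.)

```python
# Sketch: for the irrational rotation T x = x + alpha mod 1 and an interval E = [0, mu_E),
# check that (1/N) sum_{n=1}^N mu(E n T^n E) approaches mu(E)^2.
import math

alpha = math.sqrt(2) - 1        # an irrational rotation number (arbitrary choice)
mu_E = 0.3                      # E = [0, 0.3)

def overlap(shift):
    """Lebesgue measure of E intersected with E + shift on the circle R/Z."""
    s = shift % 1.0
    return max(0.0, mu_E - s) + max(0.0, mu_E - (1.0 - s))

for N in (10, 100, 1000, 10000):
    avg = sum(overlap(n * alpha) for n in range(1, N + 1)) / N
    print(N, avg, mu_E ** 2)    # avg should converge to mu_E^2 = 0.09
```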

In this short note I would like to use the mean ergodic theorem to show that ergodic systems also have the property that “somewhat locally constant” functions are necessarily “somewhat globally constant”; this is not a deep observation, and probably already in the literature, but I found it a cute statement that I had not previously seen. More precisely:

Corollary 1 Let {(X,T,\mu)} be an ergodic measure-preserving system, and let {f: X \rightarrow {\bf R}} be measurable. Suppose that

\displaystyle \limsup_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \mu( \{ x \in X: f(T^n x) = f(x) \} ) \geq \delta \ \ \ \ \ (1)

 

for some {0 \leq \delta \leq 1}. Then there exists a constant {c} such that {f(x)=c} for {x} in a set of measure at least {\delta}.

Informally: if {f} is locally constant on pairs {x, T^n x} at least {\delta} of the time, then {f} is globally constant at least {\delta} of the time. Of course the claim fails if the ergodicity hypothesis is dropped, as one can simply take {f} to be an invariant function that is not essentially constant, such as the indicator function of an invariant set of intermediate measure. This corollary can be viewed as a manifestation of the general principle that ergodic systems have the same “global” (or “space-averaged”) behaviour as “local” (or “time-averaged”) behaviour, in contrast to non-ergodic systems in which local properties do not automatically transfer over to their global counterparts.

Proof: By composing {f} with (say) the arctangent function, we may assume without loss of generality that {f} is bounded. Let {k>0}, and partition {X} as {\bigcup_{m \in {\bf Z}} E_{m,k}}, where {E_{m,k}} is the level set

\displaystyle E_{m,k} := \{ x \in X: m 2^{-k} \leq f(x) < (m+1) 2^{-k} \}.

For each {k}, only finitely many of the {E_{m,k}} are non-empty. By (1), one has

\displaystyle \limsup_{N \rightarrow \infty} \sum_m \frac{1}{N} \sum_{n=1}^N \mu( E_{m,k} \cap T^n E_{m,k} ) \geq \delta.

Using the ergodic theorem, we conclude that

\displaystyle \sum_m \mu( E_{m,k} )^2 \geq \delta.

On the other hand, {\sum_m \mu(E_{m,k}) = 1}. Thus there exists {m_k} such that {\mu(E_{m_k,k}) \geq \delta}, thus

\displaystyle \mu( \{ x \in X: m_k 2^{-k} \leq f(x) < (m_k+1) 2^{-k} \} ) \geq \delta.

By the Bolzano-Weierstrass theorem (recalling that {f} is bounded), we may pass to a subsequence along which {m_k 2^{-k}} converges to a limit {c}. Then for any {\varepsilon > 0}, the interval {[m_k 2^{-k}, (m_k+1) 2^{-k})} is contained in {[c-\varepsilon, c+\varepsilon]} for all sufficiently large {k} in this subsequence, and hence

\displaystyle \mu( \{ x \in X: c-\varepsilon \leq f(x) \leq c+\varepsilon \}) \geq \delta

for every {\varepsilon > 0}. Sending {\varepsilon \rightarrow 0} and using downward monotone convergence, we conclude that

\displaystyle \mu( \{ x \in X: f(x) = c \}) \geq \delta.

The claim follows. \Box
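As a numerical sanity check of Corollary 1 (again just an illustrative sketch with arbitrary parameters, using the rotation from the earlier sketch): take {f} to be the indicator of {[0,0.6)}. The left-hand side of (1) is then approximately {0.6^2 + 0.4^2 = 0.52}, and indeed {f} takes the constant value {1} on a set of measure {0.6 \geq 0.52}.

```python
import numpy as np

# Sanity check of Corollary 1 for the ergodic rotation T x = x + alpha (mod 1),
# with f the indicator function of [0, 0.6).
alpha = np.sqrt(2) - 1
rng = np.random.default_rng(1)
xs = rng.random(50_000)                       # Monte Carlo sample from mu

def f(x):
    return (x < 0.6).astype(int)

fx = f(xs)
match_freqs = []
for n in range(1, 3001):
    match_freqs.append((f((xs + n * alpha) % 1.0) == fx).mean())

delta = np.mean(match_freqs)                  # approximates the average in (1)
print("delta ≈", delta)                       # ≈ 0.6^2 + 0.4^2 = 0.52
print("largest level set {f = 1} has measure 0.6 >= delta")
```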

Let {G = (G,+)}, {H = (H,+)} be additive groups (i.e., groups with an abelian addition group law). A map {f: G \rightarrow H} is a homomorphism if one has

\displaystyle  f(x+y) - f(x) - f(y) = 0

for all {x,y \in G}. A map {f: G \rightarrow H} is an affine homomorphism if one has

\displaystyle  f(x_1) - f(x_2) + f(x_3) - f(x_4) = 0 \ \ \ \ \ (1)

for all additive quadruples {(x_1,x_2,x_3,x_4)} in {G}, by which we mean that {x_1,x_2,x_3,x_4 \in G} and {x_1-x_2+x_3-x_4=0}. The two notions are closely related; it is easy to verify that {f} is an affine homomorphism if and only if {f} is the sum of a homomorphism and a constant.
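As a quick finite sanity check of this equivalence (a throwaway example; the group {{\bf Z}/12{\bf Z}} and the coefficients below are arbitrary choices), one can verify exhaustively that an affine map obeys (1) on every additive quadruple, and that subtracting its value at {0} leaves a homomorphism:

```python
from itertools import product

# Exhaustive check over G = H = Z/12Z that the affine map f(x) = 5x + 7
# satisfies f(x1) - f(x2) + f(x3) - f(x4) = 0 on all additive quadruples,
# and that g = f - f(0) is a homomorphism.
n = 12
f = lambda x: (5 * x + 7) % n

for x1, x2, x3 in product(range(n), repeat=3):
    x4 = (x1 - x2 + x3) % n                      # completes an additive quadruple
    assert (f(x1) - f(x2) + f(x3) - f(x4)) % n == 0

g = lambda x: (f(x) - f(0)) % n                  # subtract the constant f(0)
for x, y in product(range(n), repeat=2):
    assert g((x + y) % n) == (g(x) + g(y)) % n   # g is a homomorphism
print("checks passed")
```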

Now suppose that {H} also has a translation-invariant metric {d}. A map {f: G \rightarrow H} is said to be a quasimorphism if one has

\displaystyle  f(x+y) - f(x) - f(y) = O(1) \ \ \ \ \ (2)

for all {x,y \in G}, where {O(1)} denotes a quantity at a bounded distance from the origin. Similarly, {f: G \rightarrow H} is an affine quasimorphism if

\displaystyle  f(x_1) - f(x_2) + f(x_3) - f(x_4) = O(1) \ \ \ \ \ (3)

for all additive quadruples {(x_1,x_2,x_3,x_4)} in {G}. Again, one can check that {f} is an affine quasimorphism if and only if it is the sum of a quasimorphism and a constant (with the implied constant of the quasimorphism controlled by the implied constant of the affine quasimorphism). (Since every constant is itself a quasimorphism, it is in fact the case that affine quasimorphisms are quasimorphisms, but now the implied constant in the latter is not controlled by the implied constant of the former.)

“Trivial” examples of quasimorphisms include the sum of a homomorphism and a bounded function. Are there others? In some cases, the answer is no. For instance, suppose we have a quasimorphism {f: {\bf Z} \rightarrow {\bf R}}. Iterating (2), we see that {f(kx) = kf(x) + O(k)} for any integer {x} and natural number {k}, which we can rewrite as {f(kx)/kx = f(x)/x + O(1/|x|)} for non-zero {x}. (Note also from (2) that {f} is Lipschitz on {{\bf Z}}.) Comparing this identity for two non-zero integers {x, x'} at the common multiple {xx'}, we see that {f(x)/x - f(x')/x' = O(1/|x|) + O(1/|x'|)}, so that {f(x)/x} is a Cauchy sequence as {x \rightarrow \infty} and thus tends to some limit {\alpha}; we have {\alpha = f(x)/x + O(1/x)} for {x \geq 1}, hence {f(x) = \alpha x + O(1)} for positive {x}, and then one can use (2) one last time to obtain {f(x) = \alpha x + O(1)} for all {x}. Thus {f} is the sum of the homomorphism {x \mapsto \alpha x} and a bounded function.
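One can see this convergence numerically in a toy example (my own choice of quasimorphism, purely for illustration): {f(x) = \lfloor \alpha x \rfloor + \sin(x)} is a quasimorphism on {{\bf Z}}, being the sum of the quasimorphism {\lfloor \alpha x \rfloor} and a bounded function, and indeed {f(x)/x} converges to {\alpha} while {f(x) - \alpha x} stays bounded:

```python
import numpy as np

# A concrete quasimorphism f on the integers: f(x) = floor(alpha*x) + sin(x).
# (floor(alpha*x) is a quasimorphism since floor(s+t) - floor(s) - floor(t)
# always lies in {0, 1}, and sin(x) is a bounded perturbation.)
alpha = np.pi / 7
def f(x):
    return np.floor(alpha * x) + np.sin(x)

xs = np.arange(1, 100_001)
print("f(x)/x at x = 10^5:", f(100_000) / 100_000, "  alpha:", alpha)
print("sup |f(x) - alpha*x| for 1 <= x <= 10^5:", np.abs(f(xs) - alpha * xs).max())
```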

In general, one can phrase this problem in the language of group cohomology (discussed in this previous post). Call a map {f: G \rightarrow H} a {0}-cocycle. A {1}-cocycle is a map {\rho: G \times G \rightarrow H} obeying the identity

\displaystyle  \rho(x,y+z) + \rho(y,z) = \rho(x,y) + \rho(x+y,z)

for all {x,y,z \in G}. Given a {0}-cocycle {f: G \rightarrow H}, one can form its derivative {\partial f: G \times G \rightarrow H} by the formula

\displaystyle  \partial f(x,y) := f(x+y)-f(x)-f(y).

Such functions are called {1}-coboundaries. It is easy to see that the abelian group of {1}-coboundaries is a subgroup of the abelian group of {1}-cocycles. The quotient of these two groups is the first group cohomology of {G} with coefficients in {H}, and is denoted {H^1(G; H)}.
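For instance (a small sanity check, with a randomly chosen {f} on the arbitrarily chosen group {{\bf Z}/10{\bf Z}}), one can verify exhaustively that the derivative {\partial f} of any {0}-cocycle obeys the {1}-cocycle identity:

```python
from itertools import product
import random

# For any f: G -> H, the derivative df(x,y) = f(x+y) - f(x) - f(y) obeys the
# 1-cocycle identity.  Exhaustive check for a random f on G = H = Z/10Z.
n = 10
random.seed(0)
f = [random.randrange(n) for _ in range(n)]      # an arbitrary 0-cocycle

def df(x, y):                                    # the derivative (a 1-coboundary) of f
    return (f[(x + y) % n] - f[x] - f[y]) % n

for x, y, z in product(range(n), repeat=3):
    lhs = (df(x, (y + z) % n) + df(y, z)) % n
    rhs = (df(x, y) + df((x + y) % n, z)) % n
    assert lhs == rhs
print("cocycle identity verified")
```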

If a {0}-cocycle is bounded then its derivative is a bounded {1}-coboundary. The quotient of the group of bounded {1}-cocycles by the derivatives of bounded {0}-cocycles is called the bounded first group cohomology of {G} with coefficients in {H}, and is denoted {H^1_b(G; H)}. There is an obvious homomorphism {\phi} from {H^1_b(G; H)} to {H^1(G; H)}, formed by taking a coset of the space of derivatives of bounded {0}-cocycles, and enlarging it to a coset of the space of {1}-coboundaries. By chasing all the definitions, we see that all quasimorphisms from {G} to {H} are the sum of a homomorphism and a bounded function if and only if this homomorphism {\phi} is injective; in fact the quotient of the space of quasimorphisms by the space of sums of homomorphisms and bounded functions is isomorphic to the kernel of {\phi}.

In additive combinatorics, one is often working with functions which only have additive structure a fraction of the time, thus for instance (1) or (3) might only hold “{1\%} of the time”. This makes it somewhat difficult to directly interpret the situation in terms of group cohomology. However, thanks to tools such as the Balog-Szemerédi-Gowers lemma, one can upgrade this sort of {1\%}-structure to {100\%}-structure – at the cost of restricting the domain to a smaller set. Here I record one such instance of this phenomenon, thus giving a tentative link between additive combinatorics and group cohomology. (I thank Yuval Wigderson for suggesting the problem of locating such a link.)

Theorem 1 Let {G = (G,+)}, {H = (H,+)} be additive groups with {|G|=N}, let {S} be a subset of {H}, let {E \subset G}, and let {f: E \rightarrow H} be a function such that

\displaystyle  f(x_1) - f(x_2) + f(x_3) - f(x_4) \in S

for {\geq K^{-1} N^3} additive quadruples {(x_1,x_2,x_3,x_4)} in {E}. Then there exists a subset {A} of {G} containing {0} with {|A| \gg K^{-O(1)} N}, a subset {X} of {H} with {|X| \ll K^{O(1)}}, and a function {g: 4A-4A \rightarrow H} such that

\displaystyle  g(x+y) - g(x)-g(y) \in X + 496S - 496S \ \ \ \ \ (4)

for all {x, y \in 2A-2A} (thus, the derivative {\partial g} takes values in {X + 496 S - 496 S} on {2A - 2A}), and such that for each {h \in A}, one has

\displaystyle  f(x+h) - f(x) - g(h) \in 8S - 8S \ \ \ \ \ (5)

for {\gg K^{-O(1)} N} values of {x \in E}.

Presumably the constants {8} and {496} can be improved further, but we have not attempted to optimise these constants. We chose {2A-2A} as the domain on which one has a bounded derivative, as one can use the Bogolyubov lemma (see e.g. Proposition 4.39 of my book with Van Vu) to find a large Bohr set inside {2A-2A}. In applications, the set {S} need not have bounded size, or even bounded doubling; for instance, in the inverse {U^4} theory over a small finite field {F}, one would be interested in the situation where {H} is the group of {n \times n} matrices with coefficients in {F} (for some large {n}), and {S} is the subset consisting of those matrices of rank bounded by some bound {C = O(1)}.
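To get a feel for the hypothesis of Theorem 1, here is a toy brute-force count (an illustrative sketch only; the group, the set {E}, the map {f}, and the error set {S} below are all arbitrary choices of mine): one takes {f} to agree with an affine map on most of {E} but scrambles it at a few points, and counts the proportion of additive quadruples in {E} whose alternating sum of {f}-values lands in {S}.

```python
from itertools import product

# Toy illustration of the hypothesis of Theorem 1: G = H = Z/NZ,
# f agrees with 3x+5 on most of E but is scrambled at a few points,
# and S is a short interval around 0.  We count additive quadruples
# (x1,x2,x3,x4) in E with f(x1) - f(x2) + f(x3) - f(x4) in S.
N = 40
E = list(range(0, 30))                       # |E| = 30
S = {s % N for s in range(-2, 3)}            # a small "error set"
f = {x: (3 * x + 5) % N for x in E}
for x in (1, 7, 13):                         # scramble f at a few points of E
    f[x] = (f[x] + 17) % N

good = total = 0
E_set = set(E)
for x1, x2, x3 in product(E, repeat=3):
    x4 = (x1 - x2 + x3) % N                  # completes an additive quadruple
    if x4 in E_set:
        total += 1
        if (f[x1] - f[x2] + f[x3] - f[x4]) % N in S:
            good += 1
print(f"{good} of {total} additive quadruples in E land in S; K^-1 ≈ {good / N**3:.3f}")
```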

Proof: By hypothesis, there are {\geq K^{-1} N^3} triples {(h,x,y) \in G^3} such that {x,x+h,y,y+h \in E} and

\displaystyle  f(x+h) - f(x) \in f(y+h)-f(y) + S. \ \ \ \ \ (6)

Thus, there is a set {B \subset G} with {|B| \gg K^{-1} N} such that for all {h \in B}, one has (6) for {\gg K^{-1} N^2} pairs {(x,y) \in G^2} with {x,x+h,y,y+h \in E}; in particular, there exists {y = y(h) \in E \cap (E-h)} such that (6) holds for {\gg K^{-1} N} values of {x \in E \cap (E-h)}. Setting {g_0(h) := f(y(h)+h) - f(y(h))}, we conclude that for each {h \in B}, one has

\displaystyle  f(x+h) - f(x) \in g_0(h) + S \ \ \ \ \ (7)

for {\gg K^{-1} N} values of {x \in E \cap (E-h)}.

Consider the bipartite graph whose vertex sets are two copies of {E}, and {x} and {x+h} connected by a (directed) edge if {h \in B} and (7) holds. Then this graph has {\gg K^{-2} N^2} edges. Applying (a slight modification of) the Balog-Szemerédi-Gowers theorem (for instance by modifying the proof of Corollary 5.19 of my book with Van Vu), we can then find a subset {C} of {E} with {|C| \gg K^{-O(1)} N} with the property that for any {x_1,x_3 \in C}, there exist {\gg K^{-O(1)} N^3} triples {(x_2,y_1,y_2) \in E^3} such that the edges {(x_1,y_1), (x_2,y_1), (x_2,y_2), (x_3,y_2)} all lie in this bipartite graph. This implies that, for all {x_1,x_3 \in C}, there exist {\gg K^{-O(1)} N^7} septuples {(x_2,y_1,y_2,z_{11},z_{21},z_{22},z_{32}) \in G^7} obeying the constraints

\displaystyle  f(y_j) - f(x_i), f(y_j+z_{ij}) - f(x_i+z_{ij}) \in g_0(y_j-x_i) + S

and {y_j, x_i, y_j+z_{ij}, x_i+z_{ij} \in E} for {ij = 11, 21, 22, 32}. These constraints imply in particular that

\displaystyle  f(x_3) - f(x_1) \in f(x_3+z_{32}) - f(y_2+z_{32}) + f(y_2+z_{22}) - f(x_2+z_{22}) + f(x_2+z_{21}) - f(y_1+z_{21}) + f(y_1+z_{11}) - f(x_1+z_{11}) + 4S - 4S.

Also observe that

\displaystyle  x_3 - x_1 = (x_3+z_{32}) - (y_2+z_{32}) + (y_2+z_{22}) - (x_2+z_{22}) + (x_2+z_{21}) - (y_1+z_{21}) + (y_1+z_{11}) - (x_1+z_{11}).

Thus, if {h \in G} and {x_3,x_1 \in C} are such that {x_3-x_1 = h}, we see that

\displaystyle  f(w_1) - f(w_2) + f(w_3) - f(w_4) + f(w_5) - f(w_6) + f(w_7) - f(w_8) \in f(x_3) - f(x_1) + 4S - 4S

for {\gg K^{-O(1)} N^7} octuples {(w_1,w_2,w_3,w_4,w_5,w_6,w_7,w_8) \in E^8} in the hyperplane

\displaystyle  h = w_1 - w_2 + w_3 - w_4 + w_5 - w_6 + w_7 - w_8.

By the pigeonhole principle, this implies that for any fixed {h \in G}, there can be at most {O(K^{O(1)})} sets of the form {f(x_3)-f(x_1) + 4S-4S} with {x_3-x_1=h}, {x_1,x_3 \in C} that are pairwise disjoint. Using a greedy algorithm, we conclude that there is a set {W_h} of cardinality {O(K^{O(1)})}, such that each set {f(x_3) - f(x_1) + 4S-4S} with {x_3-x_1=h}, {x_1,x_3 \in C} intersects {w+4S -4S} for some {w \in W_h}, or in other words that

\displaystyle  f(x_3) - f(x_1) \in W_{x_3-x_1} + 8S-8S \ \ \ \ \ (8)

whenever {x_1,x_3 \in C}. In particular,

\displaystyle  \sum_{h \in G} \sum_{w \in W_h} | \{ (x_1,x_3) \in C^2: x_3-x_1 = h; f(x_3) - f(x_1) \in w + 8S-8S \}| \geq |C|^2 \gg K^{-O(1)} N^2.

This implies that there exists a subset {A} of {G} with {|A| \gg K^{-O(1)} N}, and an element {g_1(h) \in W_h} for each {h \in A}, such that

\displaystyle  | \{ (x_1,x_3) \in C^2: x_3-x_1 = h; f(x_3) - f(x_1) \in g_1(h) + 8S-8S \}| \gg K^{-O(1)} N \ \ \ \ \ (9)

for all {h \in A}. Note we may assume without loss of generality that {0 \in A} and {g_1(0)=0}.

Suppose that {h_1,\dots,h_{16} \in A} are such that

\displaystyle  \sum_{i=1}^{16} (-1)^{i-1} h_i = 0. \ \ \ \ \ (10)

By construction of {A}, and permuting labels, we can find {\gg K^{-O(1)} N^{16}} tuples {(x_1,\dots,x_{16},y_1,\dots,y_{16}) \in C^{32}} such that

\displaystyle  y_i - x_i = (-1)^{i-1} h_i

and

\displaystyle  f(y_i) - f(x_i) \in (-1)^{i-1} g_1(h_i) + 8S - 8S

for {i=1,\dots,16}. We sum this to obtain

\displaystyle  f(y_1) + \sum_{i=1}^{15} (f(y_{i+1})-f(x_i)) - f(x_{16}) \in \sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) + 128 S - 128 S

and hence by (8)

\displaystyle  f(y_1) - f(x_{16}) + \sum_{i=1}^{15} w_i \in \sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) + 248 S - 248 S

for some choice of {w_i \in W_{k_i}}, where {k_i := y_{i+1}-x_i}. Since

\displaystyle  y_1 - x_{16} + \sum_{i=1}^{15} k_i = 0

we see that there are only {N^{16}} possible values of {(y_1,x_{16},k_1,\dots,k_{15})}. By the pigeonhole principle, we conclude that at most {O(K^{O(1)})} of the sets {\sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) + 248 S - 248 S} can be disjoint. Arguing as before, we conclude that there exists a set {X} of cardinality {O(K^{O(1)})} such that

\displaystyle  \sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) \in X + 496 S - 496 S \ \ \ \ \ (11)

whenever (10) holds.

For any {h \in 4A-4A}, write {h} arbitrarily as {h = \sum_{i=1}^8 (-1)^{i-1} h_i} for some {h_1,\dots,h_8 \in A} (with {h_5=\dots=h_8=0} if {h \in 2A-2A}, and {h_2 = \dots = h_8 = 0} if {h \in A}) and then set

\displaystyle  g(h) := \sum_{i=1}^8 (-1)^{i-1} g_1(h_i).

Then from (11) we have (4). For {h \in A} we have {g(h) = g_1(h)}, and (5) then follows from (9). \Box
